Why Artificial Intelligence Cannot Ruin Human Civilization
The fear-mongering, anthropomorphic view of Artificial Intelligence is not scientific fact; it was kicked off by science fiction. From Ex Machina to Blade Runner, fear has been instrumentalized over imagined futures and diluted presents in which AI is cast either as a menace or as a hopeless consideration. Let us be clear: AI is not a technological sin. It is the cumulative product of many efforts, through which we have the potential to quantify identity in many roles and forms, and that itself is a lively contribution to human society today.
“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.” — Nick Bilton, technology columnist, writing in The New York Times
The Hawking-Musk Fear: Can, Should, or May It Be Debunked?
Hawking and Musk are not wrong to recognize and estimate the perils that artificial intelligence can bring. However, their notions about the normative utility of an ML-based ecosystem of AI "species" do not amount to a reasonable justification for panic; treating them as such is a public and social sham. Here we need to understand a few concepts and issues that we must adequately resolve for AI:
- Artificial Intelligence cannot, de facto, adhere to International Human Rights Law obligations; it needs an ethical climate to conform to;
- The normative compromises made in the furtherance and use of AI systems sometimes appear simple, but they have eroded the ethical purpose of innovation and of dissent from the regular social mappings that technology can provide (not only for research, but for the social life humans need to live);
- Globalization, in its fourth stage, is losing its relevant democratic substantiation and gradually inspiring fear; perhaps a cold has put the globalist ideology into fever. Even so, we need not abandon the frugal principles of globalization merely because we have not yet arrived at inclusive solutions;
- Economic inequality and resource inadequacy are impending factors that make AI a mere technology rather than a special and accessible form of creativity. Here I do not refer to startups, governments, firms and other such obvious entities, nor to hackers, programmers, data scientists and others in that category; most people never understand that technology reforms our social culture, and I mean this within the scope of the ethical climate (how you live, how you move on and work, how you catch your means of transportation, and so on);
- To understand a normal life, the complex things that make life simple need to be on the ground, within reach of the needy; the needy must then also advance themselves. This is not a political problem but a socio-economic one, and it does not require a Marxist taboo. We just need a neutral approach that curbs the appropriation of economic sections and makes standards of living, if not equal, then at least duly apportioned. The due apportionment of economic space makes for an interactive, accessible and viable society, which in the end is what humans demand.
The Myths: World, Facade and Post-Truth Reality
So if you think that AI is just an automaton out to kill you, ignore that thought, because it is not. Our socio-economic values, spaces and trusts shape our cultural rapprochement and our use of technology. Let us now turn to a few key terms: (1) technology distancing; (2) democratic/political backsliding; (3) perceptionism. The first describes the presumed role of technology in distancing humans from excess effort and from physical or immaterial value. This is a naturally arising human anticipation, which Genevieve Bell has described as a discerning fear, society's resistance against ruination. It dates back to the Mauryas and the Greeks, when in every conundrum efforts were made to advance our utilities and bring the immeasurable possibilities of nature within real reach. This is Musk's perception. However, the ethical role of technology distancing is not as generic as Musk holds, if we consider Arnold Pacey for a moment. Certainly this is no scholarly gamble. It is quite simple to see from observable trends that using technology is a generic possibility, one that conduces to better utility rather than making the human a subject who manufactures utility under managerial woes. I would call it a kind of greed, though not in general; perhaps it is not just about needs but about economic saturation within the human social ecosystem. Right now, China can be blamed for its treatment of the Uighur Muslims in Xinjiang, yet its social credit system is also benefiting lives in other regions to some extent. At the least, we should not fall into the trap of a West-oriented, fear-sick propaganda or "factual" perceptionism, which deserves to be discarded soon.
The second concerns international and comparative trends in state-level politics. Even if populists fear technology as a weapon against mankind, or at least as a connivance against public privacy, let me make this clear: the scope such populist governments claim is erratic and a facade. In fact, the marriage of capitalism and authoritarian methods is a true reality, which we have seen in elections such as India in 2014 (though I do not count India as having a populist government), the US in 2016 (likewise; Trump has only a minor, so-called populist leaning, which does not make him a true populist like Jair Bolsonaro), the 2016 Brexit referendum (though no populist parties such as UKIP or the Greens hold any margin in the Houses, nor do the Conservatives or Labour have such populist centrality; the British are fighting an intergenerational question of lame obstinacy), and in Italy, Poland, Romania and Greece. This shows how much economic realism and observational delicacy are needed. Hence we need not fear AI as so great a destructive problem. What we must control are our extrinsic motivations, and what we must do is act when needed, guarding against being drafted into a mere instant political fragrance for electoral games. We never need to be identity-workers; we need to be identity aspirants, achievers, perfectionists, lovers, learners and sharers. If mankind applies a compassionate approach to this, it may yield rapid results.
The last is psychological; it concerns the more variable aspects of emotion and social understanding, together with the footprint of the relationships we make and the ethical climate we need to sustain. On this I cannot give a sure answer, because such implications are highly dynamic. However, if we adopt an approach of cultural rapprochement, then an AI-based world is certainly not a problem. Still, we must understand that our psychological development is shaped by the imagined, learnt, acquired and optimized subservients of the human ethical climate, observed in an individual-oriented apportionment of limited possibilities. This also leads us to question the juristic approach that legal theorists take toward AI. In the words of Jack Ma of Alibaba, we need to build inclusive values and make technology not just a semblance of services and products but an encultured affinity, one we can cultivate, improve, and be happy with. That is how tech entities can always be challenged, while accepting that a technological limitation, or the ultimate breakthrough between the state of nature and technology (as a limited part of civil society), may lead to the natural fusion and fission of species and their ethical climates over the coming thousands and millions of years.
So what? Is there no peril?
The simplest answer to this question is that a peril is a challenge. We may never know when it will come; such a subjective occurrence is a time-coalesced realization, an occurrence and expansion within a human and natural ethical life. So let us stay strong, improve ourselves, focus on economic neutrality and on viability with respect to resources, and make AI a better part of cyberspace and of human life.