Can AI Handle Global Consumerism? Part 1: Mass Humanism, Accessibility & Omnipresence
Artificial Intelligence, although grounded in the advanced and evolving facets of mathematics and the metaethics of logic and reason, is becoming a powerful tool of economic revolution and democratization. While the mathematical and logical branches of AI within computer science and mathematics are still growing toward the status of Artificial General Intelligence, there are certain relevant disparities with respect to AI ethics which, although they may not confront the mathematical and electronic paucities of AI, are highly relevant when AI is seen through the eyes of law and the other social sciences. This article covers the role of AI in the political side of globalization and consumerism, its merits and demerits, and the intricate aesthetic relationship between AI and human society.
Technology Diplomacy and Artificial Intelligence: Certain Ideological Developments under Globalization
To be honest, AI, the idea of an artificial legal entity that staged its theoretical and consumerist rise in the West, had certain mathematical and historical references in works written by scholars from Central Asia and the Indian Subcontinent. After Alan Turing's special work, the Turing Test, AI became (at least not yet under its present name) part of the culture of scientific humanism after 1945, by which point the international community had already seen two world wars and the formation of relevant 'internationalist' bodies such as the UN, the EEC, the LAS, the NAM and other organizations. By 1956, a group of prolific minds at the Dartmouth Conference had already discussed and proposed a comprehensive role for artificial intelligence, which I believe was an existential feat, because the world was still maturing, and so was mathematics.
By 2005, the Tunis Agenda on ICTs came into being, and within the decade of 2000–2009 we saw the rise of various digital non-state actors, which encouraged the democratization of digital infrastructure as well as its proactive improvement. Blockchain, Big Data, the Internet of Things and then AI became the folklore among cybersecurity professionals, diplomats, terrorist groups, hackers, politicians and even educationists. The role of technology was no longer timid: it had become integral to human lives in many ways. As the renowned digital rights lawyer Renata Avila has noted in her prolific pieces, the internet is now entering the age of surveillance, a rampant shift from the age of creation, and I would certainly endorse that observation. If we analyse the international law scenario, we can say that we are in the fourth stage of globalization and conflict management. In anthropological terms, we can say that we are now in the age of post-truth, which follows the age of reason. Let us correlate the post-truth age, the fourth stage of globalization and conflict management, and the age of surveillance. My observations on the relative commonalities among them are as follows:
- The ideology of utility derived from economic neoliberalism has influenced the market of technological disruption and will give rise to a market of legal disruption as well. However, this disruptive market, based on the principle of give and take in a market economy, is shallow in terms of ethics, if not morality. The reason is simple: AI should never be used to strip away the obscure folds and edges of an individual data subject's privacy. Even where that privacy is deeply affected and relinquished, it may prove difficult in complex cases to control the digital footprints manifested in cyberspace, which may contaminate any real-time dimension of human life, such as politics, lifestyle, social issues and even economics;
- Let us take up policy perspectives too. The European approach to AI, unlike that of the other D9 economies, has been prolific and technocratic yet grounded in a properly libertarian approach, which I would regard as a cautious soft-power approach. India still has far to go to realize the NITI Aayog's objectives, and there is much left to figure out, but China has already begun strategic implementation of its AI policy in some respects. We have seen that China endorses its own technocratic agenda of control, which Western scholars often describe as authoritarian, and even in democracies the role of surveillance infrastructure is being democratized, if not liberalized. Other digital economies in Asia, such as South Korea and Singapore, do endorse relevant AI policymaking, and they nevertheless take a clinical and liberal approach to putting AI to use for public welfare. But we have to explore more as political developments unfold;
- Consumerism is not a good approach to fostering economic development if it creates an ecosystem of interconnectedness without any ethical autonomy (read Proceduralism v Instrumentalism: Purpose in Various Political and Legal Ecosystems). If interconnectedness by economic measures relies solely or mainly on activities that materialize human identity and resources, creating a vicious cycle of consumer-to-service and consumer-to-product conversion, it will not help anything at the end of the day, because it will weaken the noble objectives of globalization in the digital age. No one would appreciate an interventionist mode of economic realization in which AI systems used by organizations and the State are influenced to unduly design and transform the entrepreneurial nature of a human data subject's privacy, if not to scrape away or decay its socio-anthropological infrastructure outright;
- Furthermore, while it is true that economic nationalism, as resistance to interventionist globalism, can improve our data policies, we need to examine how better federalization can be achieved in the political and social discourses of data-privacy exchanges and engagements, because that could ideally be more helpful in keeping weak AI resistant to any undue influence over the entrepreneurial design of human privacy;
- We have to understand that privacy as a conception can be technologically surveyed but cannot be equally or equitably placed under unconditional social surveillance. AI is a volatile entity, and it has the potential to develop the diverse and dynamic qualities of what is called a 'legal entity'. It may acquire some kind of empathy, language and perception-building qualities, which may grow and blossom with the AI's learning capabilities. When the life cycle of an AI matures and, through learning, it becomes capable of more complex data operations, it is quite certain that the system will be able to influence the human-originated aesthetics of empathy, which will affect the social and individual liberties of human society in general;
- Political humanism will be challenged by scientific conservatism, as it already has been in India, to mention one case. Yes, the role of misinformation politics is negative, and we always need to confront it. But as companies and governments (especially in the West) have maintained ethnocentric approaches to influencing the global market in many respects, for example by misusing people's eco-anxiety about climate change and ignoring the fact that facing climate change can be a proper amalgamation of geoengineering and climate engineering, they will face some disruption in the political and 'secular' humanist order created via the international media. Thus, the cycle of change running between science fact and science fiction will no longer be monopolized by any political or social group. Both facts and fiction will exist, but in the post-truth age, technology diplomacy will shift from its ethnocentric approach to a pragmatic, multipolar and perhaps populist approach, which has yet to unfold properly.
Be that as it may, it is imperative to identify the problems in order to arrive at solutions, and that will be essential to democratizing AI in a healthy manner.
The Aesthetics of AI: Need to Break and Recreate Certain Anthropocentric Notions
There are certain pre-developed socio-economic and ethnocentric notions about disruptive technologies, shaped by companies, think tanks and other non-state actors, that will affect the moral capital of AI. In simple terms, the future of AI will be affected if its moral capital, i.e., trust among consumers and companies, deteriorates drastically. It should not surprise anyone that AI disrupts the nominal structure of human society in various dimensions. Thus, let us dive deep into the anthropocentric notions that need to be dismantled and recreated.
- Accessibility: The entity that masters the language of accessibility is obviously ahead of others in the race of human evolution and development. Introducing and using AI rests on the promises technology companies make to their consumers, which is good. The world, its components, and the procedural capillaries that keep the global economy working must be based on liberal notions of accessibility. We often see that accessibility becomes paramount to running markets and earning profits. In fact, most of the weak AIs put into use benefit from their accessibility, which is a big thing, as we can see in simple recognition devices, for example. However, how much aesthetic involvement is there when the thresholds and barriers of accessibility are broken down over time? We need to ask this question whenever we test the accountability of companies in matters related to AI, because if these aesthetic involvements are not realized by the data subjects who become susceptible, then accessibility becomes interventionist, and that is significant. Human artefacts, under constitutional jurisprudence, can tenably be regarded through the applied and legal historicity of the usages they assume, which can in the end influence the existential, if not the definitive, role and functions of those same artefacts. Reservations can be installed in the activities related to various human artefacts, but as we intend to leap into effective research into AI ethics and build sustainable solutions for mankind, it is important that we protect and preserve the autonomy of data subjects. Nevertheless, the ethical autonomy I discussed in my previous articles cannot be achieved through the unilateral responsibility of AI and the entities involved in its creation and limited management, because humans are the ones who choose and designate their human artefacts. They have to become autonomous and compete for their intelligent needs against those of various human artefacts in order to keep the road of human evolution open;
- Omnipresence: AI as a legal entity may be treated under a singular reference of responsibility and standing if properly adjudicated and interpreted, but its activities pose a multi-entity role, which simply means that its repercussions are multi-referential and may shift between different references of action and liability. This can be illustrated with any weak AI, such as a recognition system used for detection. Assume that a recognition system has the potential and design to retrieve data from some data subject (assume a human, for ease of understanding). Even if its mathematical and logistical faculties can be examined to figure out how the AI functions, and even if it has accuracy issues depending on its experience of data and other logistical check-ups, the AI system is still bound to gain some logistical advantage over humans, who on average use limited faculties of the brain. Because we humans assume we are unique as a species, we develop scientific and artistic notions across the spheres of our lives that are anthropocentric by nature. Law, politics, technology, health, entertainment and every other sphere of life have been compelled to follow anthropocentric approaches to logic and reason. Therefore, while on a mass scale it does not seem certain that a good chunk of human data subjects are brainier than the AI systems involved, I can rest on the point that the AI itself would hold the advantage over humans, because its language, its empathy and perhaps its existential topology are foreign and not rooted in human and natural values.
Now, the topology of AI may be estimated, dismantled and defined through notions under legal positivism and pragmatism, and through the naturalist approach of understanding, which involves a deep-mapped study of the infrastructure of the natural world (science, so to speak). We still have to figure that out as soon as possible. It is akin to the oft-repeated claim that Sanskrit is more 'scientific' than English when it comes to programming;
- Mass Humanism: Humanism is a sociological ideology that is being spread and secularised in order to properly disseminate technology and its development among communities. In the post-truth age, as discussed above, we see the regimentation of ideas into two popular segments, which may be homogeneous or heterogeneous: fact and fiction. Fiction often trumps facts, but only when the message of pragmatism it carries is accepted and becomes the new normal. As discussed, the role of ethnocentrism has been vital to the rise of secular humanism, which to some extent has been good in many ways. We live in a rules-based international order in which the demands of scientific humanism are not undone, and upholding the rule of law is also a new normal, beyond old interests, which might be feudal by nature. However, anything can become a new normal, and the dynamics of normalcy may feel like a swift, moderate change to some, while to others they are authoritative and unjust. The same is true of mass humanism. AI is aesthetically delicate toward humanism, especially scientific humanism. The notions of logic and reason that are essential for enquiring into what is relevant and rational also constitute the moral capital of AI among businesses and consumers. However, this may change, and the AI ecosystem is not ready for it, because researchers are still figuring out the existential role of AI in terms of law and policymaking. Now that multipolarity will drive out ethnocentrism, and globalization will face severe consequences from the repercussions of an interconnected world, which is obvious, we can be positive that we can face our problems, but we have to change the way AI is instrumental to our lives.
The secularization of technology as a field will not be affected, but its political side will. It is therefore important for us as human data subjects not to relinquish our privacy and liberties, so that we can develop a resistant and replenishing form of competence against the influential aesthetics of AI, and advance human civilizations and societies culturally as well as biologically.
I will continue discussing this problem in my next article. Till then, I hope you have a lot to think about.