Why AI Needs Plurilateralism First and Multilateralism Later

Artificial Intelligence is a dynamic technology. Among disruptive technologies, from Blockchain to the Internet of Things (and Everything), AI stands out for its voracious appetite for complex, computational and independent (though limited by its capabilities) processing of data. Currently, there is a flurry of debate on adaptive AI, explainable AI, reinforcement learning, pruning and other relevant processes. Many thinkers are trying to work out what scientific prescriptions for an AGI could look like. However, to be honest, the neural capabilities of an AGI are neither in the mythical fashion of Hobbes’ Leviathan, nor poised to visit any digital holocaust upon human society. Neural networks are complex, and their methods take months and years to develop. There is still no demonstrable presence of strong AI, or even of narrow AI capable of broadly understanding data and dissecting its multidimensional and clustered identities. To exemplify this, here is a May 2020 paper, co-authored by Prof Sandra Wachter of Oxford University, on why fairness cannot be automated:

The increasing use of algorithms disrupts traditional legal remedies and procedures for detection, investigation, prevention, and correction of discrimination which have predominantly relied upon intuition. Consistent assessment procedures that define a common standard for statistical evidence to detect and assess prima facie automated discrimination are urgently needed to support judges, regulators, system controllers and developers, and claimants. […]

A related issue that arises when considering the composition of the disadvantaged group is the problem of multi-dimensional discrimination. This occurs when people are discriminated against based on more than one protected characteristic. A distinction can be drawn between additive and intersectional discrimination. Additive discrimination refers to disparity based on two or more protected characteristics considered individually, for example, being treated separately as “black” and a “woman.” Intersectional discrimination refers to a disadvantage based on two or more characteristics considered together, for example being a “black woman.”

[Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI; Sandra Wachter, Brent Mittelstadt, Chris Russell]
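The paper’s distinction between additive and intersectional discrimination can be made concrete with a small sketch. The data and group labels below are invented purely for illustration and are not from the paper; the point is that checking each protected characteristic on its own can mask a disparity that only appears when the characteristics are considered together.

```python
# Illustrative sketch (hypothetical data): additive vs intersectional
# disparity checks over favourable-outcome ("selected") rates.
from itertools import product

# Each record: (race, gender, selected) — selected=1 is a favourable outcome.
records = [
    ("black", "woman", 0), ("black", "woman", 0), ("black", "woman", 1),
    ("black", "man",   1), ("black", "man",   1), ("black", "man",   0),
    ("white", "woman", 1), ("white", "woman", 1), ("white", "woman", 0),
    ("white", "man",   1), ("white", "man",   1), ("white", "man",   1),
]

def rate(pred):
    """Favourable-outcome rate among records matching a group predicate."""
    group = [s for r, g, s in records if pred(r, g)]
    return sum(group) / len(group)

# Additive view: each protected characteristic considered individually.
additive = {
    "black": rate(lambda r, g: r == "black"),
    "woman": rate(lambda r, g: g == "woman"),
}

# Intersectional view: characteristics considered together.
intersectional = {
    (r, g): rate(lambda r_, g_, r=r, g=g: (r_, g_) == (r, g))
    for r, g in product(["black", "white"], ["woman", "man"])
}

print(additive)        # both individual rates are 0.5 — no disparity visible
print(intersectional)  # the ("black", "woman") rate (~0.33) is lower than either
```

In this toy example, neither “black” nor “woman” shows a disparity in isolation, yet the intersectional group “black woman” plainly does — precisely the gap in purely additive assessment that the authors describe.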

Therefore, to resolve such issues, it is imperative to understand that any multilateral or general arrangement of systems, whether in law or technology, has to be based on a plurality of ideas and identities. Unfortunately, we have no global approach to understanding how AI will sustain its purpose, if not its economic value. The proposition of this article is simple and clear:

Artificial Intelligence cannot grow and become all-purposive and human-centric without passing through a stage of plurilateralism first; only once it is indigenised and regionalised incrementally can it go global swiftly and regularize equality at the international level.

Can the Equality Framework under International Law Be Fixed? No.

Reform of the equality framework in a legal sense cannot be achieved through a jus cogens framework, since no international legal conception exists that can enforce, or monolithically direct, relevant state practices towards the diplomatic and parliamentary achievements needed for global equality. To be honest, even covenants like the ICCPR and the ICESCR were products of the Cold War: the ICCPR was central to the capitalist ideals of the Westosphere of the time, while the ICESCR was central to the communist/socialist ideals of the Soviet Bloc.

After the Cold War, neither the Democrat-Republican establishment in the US, which advanced neoliberal theory under Reagan and later Clinton, nor British leaders such as Margaret Thatcher, John Major, Tony Blair and Gordon Brown could squeeze the international community out of its Cold War mentality. Thankfully, the equality paradigm expanded through decades of literature by experts in human rights and social justice, which nevertheless failed miserably to translate into implementation in economics, society, security and law, owing to that same Cold War mentality. The digital evidence of inequality (often described as empirical, but, given the rhetoric and inconsistent interpretation of international media and scholars, I consider it digital, because it lacks realism) is momentous yet unclear. That, however, does not mean the principles behind conceiving justice and rights should not be improved.

Paradoxically, the incompleteness of the human rights literature could never help assert and realize the abstract idea of human rights, for the simple reason that the conception of human rights is dynamic, reasonable and proceduralist: its substantive values will change, sink, float and even fly. Despite that, we can at least try to improve the conception of human rights layer by layer, with time.

Understanding AI through Culture and Pluralism

The equality paradox around AI needs to be categorized in a schematic format, so that we can estimate how to go beyond the incompleteness and incapacity of AI systems and past the common problems we face today.

Political Fundamentalism and Ideological Obscuration

Politics in the global arena has three subsets and supersets, which exist in parallel; owing to their avowed complexity, it becomes difficult to determine what they are and how they affect technologies and their development. In the case of disruptive technologies like AI (the specific focus of this article), the best bet we have is to understand how and why the rough tussles of politics and ideology influence the history and relevance of AI.

  • Ideology and policy require detachment from each other. We cannot expect technology to rise as a global means of connecting societies and communities if the two remain fused. There are already examples where the amalgamation of policy and ideology miserably affected the role of technology. For example, the Cambridge Analytica scandal was at bottom a scandal of big data: in both the 2016 EU referendum on Brexit and Trump’s 2016 election campaign, psychographic profiling built on big data (not AI) was used to employ algorithms that targeted a particular set of people.
  • This is a very common and plain example, yet we have never fully realized how infinite scrolling on Facebook (for example) affects the decision-making models within our minds. And despite knowing it, we have no consistent, self-prepared human solutions, because even scientists and scholars are drawn into generalizing AI into the left-right tussle, polluting technology politics with political correctness, a discourse which, before the 2018 Euro crisis and the 2018 FIFA World Cup, was still centred on consciousness, individualism, the conservation of values and moral constructs, and even sovereignty as a government affair.

Nowadays, this overdramatization of the role of NGOs and companies, without shedding any light on how non-state actors could act more responsibly to strengthen international cyber law, has severely damaged trust in technology diplomacy.

  • Even before political correctness, ideology and policy had been amalgamated, but there were strategic causes behind it. During the Obama Administration, the big-data years, the international community could not grasp what political correctness would become, because no inherent responsibility was assumed by non-state actors, and even the Western powers had no replenished models to keep ideology and policy from amalgamating in the aesthetics of technology. Although cybersecurity is integral to defence and strategy, it has no inherent role in influencing the aesthetics of technology diplomacy.
  • Cyber defence strategies enable countries to anticipate and calibrate the geopolitical abstractions of what we call ‘power’, which in turn enables us to act and resolve. The United States, along with India under Narendra Modi (and perhaps some ASEAN and other Western/US allies), is among the few countries that have fought armed conflicts for moral reasons and attempted to reinvent the conceptions and principles that govern and helm the international community.
  • Evangelism is a destructive construct born of the mistaken fusion of ideology and policy. It is impossible to endorse disruptive technologies safely with moderate cybersecurity strategies and liberal means of censorship, because evangelism:
  1. in practice is not just an atheist phenomenon; it is found equally among people who preach a particular faith;
  2. grows not so much from a faith’s aesthetic and historic value (indeed, if we revere our history and culture peacefully and incrementally, evangelism is nearly impossible) as from ideological patterns and their pathological formulations, which are abruptly misused; and
  3. although populism is not a long-term political phenomenon, in mainstream world politics it is mostly subservient to the divorce or marriage of ideology and policy in whatever form possible; if the political leader fails to control and direct the major sections of populist representation, the ‘vox populi’, then it is quite certain that evangelism/fundamentalism will shake the control and direction that the political elite or leaders should have rendered.

This, therefore, affects the information environment we make, and so it affects the AI we rely on. I have some examples to show this trend.

MIT Technology Review reports on a ‘two-year fight’ over facial recognition technologies. On June 10, 2020, “[A]mazon shocked civil rights activists and researchers when it announced that it would place a one-year moratorium on police use of Rekognition. The move followed IBM’s decision to discontinue its general-purpose face recognition system. The next day, Microsoft announced that it would stop selling its system to police departments until federal law regulates the technology. While Amazon made the smallest concession of the three companies, it is also the largest provider of the technology to law enforcement.” [The two-year fight to stop Amazon from selling face recognition to the police, MIT Technology Review, 2020]

“A year is a start,” says Kade Crockford, the director of the technology liberty program at the ACLU of Massachusetts. “It is absolutely an admission on the company’s part, at least implicitly, that what racial justice advocates have been telling them for two years is correct: face surveillance technology endangers Black and brown people in the United States. That’s a remarkable admission.”

Beyond this, Prof Sandra Wachter’s 2020 paper on why fairness cannot be automated (cited at the beginning) is worth reading. Other striking works include Algorithms of Oppression by Safiya Noble; Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks; Weapons of Math Destruction by Cathy O’Neil; The Intersectional Internet: Race, Sex, Class, and Culture Online (an edited volume); Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin; and Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard. This league of Americanized literature on AI ethics and human rights shows that yes, we have deeper problems with AI, data pseudonymization and the politics of learning, but the portrayal cannot be limited to human rights protests, blank advocacy over tech, and the merging of technology’s policy with human ideology. In retaliation, one can find plenty of literature on Christian, Islamic and other religious evangelisms that counters politics without any understanding of anthropology.

Therefore, it is important to realize that unless we focus on better indigenous analysis of the anthropological connection between disruptive tech like AI and human societies on a case-by-case basis, and unless we invest more in anthropological innovation, in farming, education, analytics, research, medical science and so on, with an anthropomorphic sensation and sensibility, it is virtually impossible to render legal and political models for artificial intelligence.

Blatant Misuses of Economic Models

The homogeneity of economic models has become a not-so-distant reality in the realm of cyberspace because of artificial intelligence. In some areas, where the influence and involvement of the opaque learning mechanisms of AI systems are sensibly limited and constrained, we are achieving various innovations; even while you are reading this article, some innovation will have emerged that I do not yet know of. However, economic policies with regard to AI have been at their worst for the past 2–3 years, because of:

  1. the marriage of politics and ideology, through which algorithms have influenced the aesthetics of livelihood and purpose, as we see in India’s ban of TikTok (and 59 other apps) and in various mercantilist Chinese approaches (which Samir Saran, President of the Observer Research Foundation, calls the Digital BRI);
  2. the lack of aesthetic thoughtfulness and meaningfulness in the economic prospects that AI could bring through its influence over employment and entrepreneurial opportunities, which we can see not in the unemployment issue but in the (still slow) moment of revisionism and realization that the Global South and the Global North are only now, not before the COVID-19 pandemic, thinking clearly and comprehensively about the potential transition of the global economy from the old approaches of skill development to more prepared, more reasonable notions and practices of skill rejuvenation; and
  3. the lack of any comprehensive approach to AI economics through the lens of environmental studies, education, anthropology and identity.

Economic models need a reboot, and perhaps, through better indigenization of the Westosphere, AI’s economics will become more connected to public aspirations and needs, positing solutions with a constructive human touch. We are seeing some stages of that happening. However, more collaboration around unique and responsible models can assist the globe towards civilizational economic rejuvenation.

Legal Judgementalism in the Rule of Law Practices

Legal judgementalism is often not a bad thing in itself, but maintaining the rule of law requires detaching ideology from policy. Still, that does not happen in reality, and we must therefore understand why we face inner problems in matters of legal pluralism. The problems we face in general are:

  1. Generalization of the rule of law and constitutionalism, and the abstraction of technical concepts, never helps us render solutions in applied technology law.
  2. Too much focus is placed on the problem of regulation. Instead of procedural regularity, incomplete automation (there being no absolute strategy) and its approaches must be replaced with incremental regularization of the AI system. Privacy issues run deeper, and AI’s dynamic, uncertain and self-transformative nature cannot be wished away. Instead, the legal analysis of AI must focus on anthropology rather than an obsession with anthropocentrism. Further, the approach must be transformative, so that privacy regulation and regularization are achieved at least at a limited level.
  3. Algorithms will be part of the legal and diplomatic forum, and it is therefore important that the legal backdrop of the responsibility-accountability-liability framework is acutely understood.

It is now high time that AI is gauged through its plurilateral formulations, so that better multilateral solutions can be properly rendered.

Host, Indus Think | Founder of Think Tanks & Journals | AI-Global Law Futurist | YouTuber | Views Personal on the Indus Think Blog