Can AI Handle Global Consumerism? Part 2: Information Warfare & Mass Wisdom

Riot police force protesters back across the Kasr Al Nile Bridge as they attempt to get into Tahrir Square on January 28, 2011, in downtown Cairo, Egypt [Peter Macdiarmid/Getty Images]

As part of my series on Global Consumerism and AI, here is another article on AI and globalization.

Recap: Read Can AI Handle Global Consumerism? Part 1: Mass Humanism, Accessibility & Omnipresence

The previous article discussed three essential aspects of AI’s value in a consumerist world. This part focuses on three further imperative issues to complete my argument on whether AI can handle Global Consumerism. I will then draw my conclusions on the issue and remain open to comments as they come in.

Information Warfare

It is broadly agreed that fake news, and misinformation in general, erodes the meaningfulness of our republican constitutionalism and the structural components of human society. Look at India or the US and you will see how misinformation can fuel political controversies and influence people at their worst. There are platforms that advocate digital literacy and biometric awareness and endorse aggressive efforts to deal with information warfare itself. Now, I would like to connect this issue with data colonization. We have certain notions, as well as decisive understandings, of how information sets can be assimilated into a data-centric infrastructure that endorses formalistic aspects of surveillance. However, in the case of disruptive technologies such as AI, a hyper-globalized world can sustain itself only when (a) better data observance is practised by both users and systems; (b) the AI itself becomes explainable, and efforts are being made towards that; and (c) the surveillance mechanism is not extra-interventionist but is balanced by an equilibrium of better data quality and human responsiveness. Much can happen at both systemic and indigenous levels.

Mass Wisdom

Artificial Intelligence is, by its nature, a more mobile infrastructure, and in terms of its utility and cognitive reach it can come to dominate humans. The reason is that competing empathies may exist between disruptive technologies and humans. These competing empathies, machinic and human, need not form a binary duo of domination, which means their coexistence is not required to be Hobbesian in nature. The problem is that the empathy of any system is developed on certain components that shape its flow:

  • The input
  • The stimulus
  • The vector quantities: adversarial and experiential learnings

This is why a debate between adaptive and non-adaptive learning mechanisms is under way. The flawed ethics of mass humanism that people develop on social media, for example, become embedded in the mechanisms that drive the machinic empathies of recognition-based ML systems on platforms like Facebook and Twitter, just as they are embedded in us, the humans. The reason is quite obvious:

In a more anthropological sense, the pluralism of identity and expression, even if it is appreciated, cannot be legitimized as an ethical imposition, nor as a moral dogma among people.

For example, multicultural values, like secular values, have good objectives; but if the ethical systems, whether social, legal or political in nature, do not replenish and improve themselves to absorb and learn from the shocks they face (shocks that damage both secularism and multiculturalism in their own ways), they cannot build their ethical narratives properly. That is why, in the bargain between systemic ethics and remnant morality, it is the role of ethics to fix its learning and understanding mechanisms so that the pluralism of identity and expression is endorsed properly. YouTube is one of the most prolific examples. Its machine-learning-based systems do show viewers relatively homogeneous, even identical, content because, first, the platform is dominant, and second, its viewer choices and other metrics are biased.
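To make the homogenization point concrete, here is a minimal, purely illustrative sketch in Python. It assumes nothing about YouTube’s actual architecture; the catalogue, the click probabilities and the ranking rule are all hypothetical. It only shows how a recommender that ranks content by accumulated engagement, fed by users who mostly click whatever is ranked first, concentrates attention on a narrow slice of items.

```python
import random

# Purely hypothetical sketch of an engagement-driven feedback loop.
# Not YouTube's real system: the catalogue size, click probability and
# ranking rule below are illustrative assumptions only.

random.seed(42)

ITEMS = [f"video_{i}" for i in range(20)]   # hypothetical catalogue
clicks = {item: 1 for item in ITEMS}        # start from uniform counts

def recommend(k=5):
    """Rank items purely by accumulated clicks (engagement bias)."""
    return sorted(ITEMS, key=lambda v: clicks[v], reverse=True)[:k]

def simulate_user(recs):
    """A user who mostly clicks whatever is recommended first."""
    return recs[0] if random.random() < 0.8 else random.choice(ITEMS)

for _ in range(1000):
    chosen = simulate_user(recommend())
    clicks[chosen] += 1                      # the loop feeds on itself

top = recommend()
share = sum(clicks[v] for v in top) / sum(clicks.values())
print(f"Top 5 items now absorb {share:.0%} of all clicks: {top}")
```

Run it and a handful of items ends up absorbing most of the clicks, which is the sense in which biased viewer choices and metrics can lock a dominant platform into showing near-identical content.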

As a human user you might hope and expect, on moral grounds, that in a free world you get to know everything; but your experiential learnings shape the credibility of your ethical structure of understanding and experimentation with realities, which means that whatever moral footprints you implant in your expectations are what you end up seeking in the world of social media. Your biases may well persist because your implanted expectations are, to some extent, comforted, and that comfort may be unrealistic, since it does not hold all the time. That is why mass wisdoms and mass humanisms on social media can so easily be polarised. The 2008–2011 social media coup of slacktivism in Egypt and the Middle East, as Zeynep Tufekci describes it, the rise of Donald Trump and his misinformation campaign, the Brexit campaign led by Dominic Cummings in 2016, and the coronavirus information warfare between Xi Jinping and Donald Trump are discrete, secondary, yet estimable pieces of evidence for why such biases arise.

Therefore, we need to remove the biases in AI-based systems not just through technological measures and innovation, but through reasonable and innovative measures of discourse and thinking, in which we calm and neutralize the pluralism of identity and expression. Our central focus should be on these issues rather than on the politics of populism and the so-called Reich-Hitler-Holocaust fears, which could perhaps have been prevented in the 1940s; but our world is different now, and the liberal narrative needs to be adequately controlled.

The One $ Question: Will Consumerism Survive?

No. It cannot survive, and the reasons are quite obvious. First, a reckless, underestimated cluster of narratives and procedures involving the global supply chain will topple, which means a protectionist mechanism of discourses will come into being. Since protectionism will affect notions of technology governance and expressionism, cyberspace under AI systems will see more analogue breaks. Recently, for instance, the Indian Government’s Home Ministry issued an advisory on the Zoom app, and a letter petition was sent to the Chief Justice of India, S. A. Bobde, demanding that Zoom be banned in India owing to its controversies. Even Guy Verhofstadt, a pro-EU politician, has raised an interesting concern on the EU-Huawei tech affair in a tweet.

Even Donald Trump has taken a shrewd decision to control Huawei’s influence in the US, with some utter failures of course. Still, I believe something may change. Even in the Brexit scenario, Google is shifting its arrangements to suit the UK so that UK citizens’ Google data does not fall under the tight restrictions of the EU’s GDPR, which I see assisting Boris Johnson’s agenda to partner primarily with countries like India, Japan and Australia. Maybe the US sits at the top of that very list too. Therefore, the notions of privacy and surveillance will also change, and become better for certain political purposes. All we need to do is fear less and find harder, better options. It is possible, but it will take time.

Host, Indus Think | Founder of Think Tanks & Journals | AI-Global Law Futurist | YouTuber | Views Personal on the Indus Think Blog