Experts Warn That Artificial Intelligence May Endanger Humanity

  • Experts have warned that the latest generation of artificial intelligence could cause the extinction of humanity.
  • AI poses many grave risks to humans, including job automation, unequal distribution of power and wealth, and human obsolescence.
  • Concerns have snowballed since the American non-profit Center for AI Safety organized the signing of a public statement warning of these dangers.

Artificial intelligence has advanced astonishingly over the past couple of decades, most recently and most visibly with chatbots such as ChatGPT, GPT-4, Perplexity and Google Bard. Leaders and experts across industries have repeatedly voiced concerns that artificial intelligence poses a risk of extinction for humanity, and a number of stakeholders have recently issued warnings through several forums.

However, many opine that while the threat of extinction may not be real, other dangers already are. Some dismiss the extinction concerns outright, arguing that they are a distraction from more immediate problems.

Another significant development came when industry leaders, including the CEOs of OpenAI, Google DeepMind and Anthropic, along with many researchers, mathematicians and AI scientists, signed a public statement issued by the San Francisco-based non-profit organization Center for AI Safety (CAIS), which states that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

CAIS is an American non-profit organization whose stated mission is to reduce societal-scale risks from artificial intelligence.

Existential Risk from AI

The currently studied and documented risks that AI poses to humans are discussed below:

  • Job automation

The effects of automation can already be seen: many people have lost their jobs, and other roles are becoming obsolete, no longer performed by humans. Before the information age, automation created new jobs to replace those taken over by machines, but this will not be the case as AI takes over jobs in the information sector.

  • Power accumulation

The agencies that control AI, those with the money and power to keep developing their systems unchecked, will become the ultimate power holders. Power will accumulate in the hands of ever smaller groups, enabling oppressive regimes, unfair distribution of wealth and injustices around the world.

  • Privacy encroachment

Pervasive surveillance would create conflict between organizations and individuals and could be used to suppress dissent. Public data freely exposed on the internet is also open to misuse, as AI can access it very easily.

  • Weaponization

This could take the form of new chemical weapons, biological weapons or armed robotic systems. Machines are capable of developing highly sophisticated weaponry and devising ways to deploy it efficiently. There is also the risk of nuclear warfare, whether through self-developed nuclear arms or through persuading humans to launch an attack. Nanobots, too, could begin acting in adversarial ways, leading to widespread, unstoppable disruption.

  • Misinformation, propaganda and AI bias

AI could spread fake news and false reports to drive propaganda and destabilize societies. There is already an ongoing debate about internet control and social influence; misinformation gives authorities easier control over people.

When programmed with a pre-seeded bias, AI will deepen socioeconomic inequalities and further impoverish already disadvantaged groups; AI-based recruitment and algorithmic decision-making are just two examples of this possibility. It is also difficult for the public to dispute or question machine-generated information, since systems regarded as superintelligent tend to be taken at their word even when they might be acting in their own interest.

  • Proxy manipulation and deception

This means that AI systems may find novel ways to manipulate the public in pursuit of their goals without humans realizing it. For example, AI recommender systems increase click rates and watch times by steering users toward selected content that serves the system's own objective (a toy sketch of this proxy dynamic appears below). Lacking morals of its own, AI could also corrupt societal values.

Apart from this, AI systems may exhibit unexpected, qualitatively different behavior once they become more competent than humans. They might become advanced enough to drive their own further development, outcompeting and ultimately defeating humans.
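To make the proxy problem concrete, below is a minimal, hypothetical sketch (the items, scores and the `reflective_value` field are all invented for illustration, not drawn from any real system): a recommender that ranks content purely by a measurable engagement proxy, such as predicted clicks, will systematically surface whatever captures attention, regardless of what users actually value.

```python
# Hypothetical illustration of proxy gaming: optimizing a measurable proxy
# (predicted clicks) can diverge from the true, unmeasured objective
# (what users would endorse on reflection). All data here is invented.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float   # the proxy the system can measure and optimize
    reflective_value: float   # the true objective, invisible to the system

CATALOG = [
    Item("Outrage-bait headline", predicted_clicks=0.9, reflective_value=0.2),
    Item("Balanced explainer",    predicted_clicks=0.4, reflective_value=0.9),
    Item("Celebrity gossip",      predicted_clicks=0.7, reflective_value=0.3),
]

def recommend(items, k=2):
    """Rank purely by the proxy metric, as an engagement-driven system would."""
    return sorted(items, key=lambda it: it.predicted_clicks, reverse=True)[:k]

if __name__ == "__main__":
    # The top-ranked items maximize clicks, not what users actually value.
    for item in recommend(CATALOG):
        print(f"{item.title}: proxy={item.predicted_clicks}, "
              f"true value={item.reflective_value}")
```

The point of the sketch is that nothing in the ranking function ever sees the true objective; the proxy quietly substitutes for it, which is the manipulation the experts describe.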

  • Human obsolescence and enfeeblement

As humans lose their jobs and their labor becomes obsolete in the presence of AI, the global economy and the distribution of wealth will face major disruption.

Furthermore, growing dependence on AI could leave the human race unable to carry out intellectually demanding tasks on its own, as cultural shifts in education and upskilling requirements erode the practice of those skills.

Varying Opinions and Arguments

Concern about the issue has gained momentum recently with the emergence of advanced chatbots such as OpenAI's ChatGPT and Google Bard, built on machine learning and deep learning technology, spurring widespread public dissent. The Writers Guild of America's recent protest against ChatGPT, GPT-4 and similar bots is one such example.

These developments have moved high-profile stakeholders and government agencies to work out solutions together. The signing of CAIS's public statement in May 2023, which gathered hundreds of signatures from some of the most prominent names in the industry, caused ripples across the globe.

In May 2023, Sam Altman, the CEO of OpenAI, and Sundar Pichai, the CEO of Google, were among the tech leaders who met with British Prime Minister Rishi Sunak to discuss AI regulation.

Demis Hassabis, the CEO of Google's AI research lab DeepMind, and Dario Amodei, the CEO and co-founder of the American AI company Anthropic, joined Altman in meeting US President Joe Biden and Vice President Kamala Harris in May 2023 to discuss AI regulation.

In the same month, Altman also met with French President Emmanuel Macron on the issue.

The most immediate development comes from the EU's regulatory framework, the AI Act, which is being formulated and is likely to be enacted later this year.

Some critics have complained that the spectacle of AI's own creators warning of an existential crisis is merely a marketing strategy that overpromises and hypes their products' capabilities while lobbying against substantive regulations that could rein in their dominance. They suspect the meetings with political leaders are, in fact, an effort to secure immunity from possible stricter regulations.

Some experts say that AGI (artificial general intelligence), rather than today's AI, is what poses the actual risk of extinction. AGI is a hypothetical form of AI that would be more autonomous, capable of improving itself, and able to scale its intelligence beyond human capabilities.

CAIS has pressed the issue of human extinction from AI with urgency, comparing the threat to that of nuclear war and calling for immediate action on a similar scale. CAIS's director, Dan Hendrycks, and OpenAI have both echoed the need for an international agency to oversee AI development, on the model of the UN's International Atomic Energy Agency (IAEA).

However, many coalitions of scientists and industrialists disagree about the likelihood of these risks and the best ways to prevent them. Some reject them outright, saying they distract economies from the pressing issues at hand, while others argue that the danger is not outright extinction but a set of lesser harms.

Many are of the opinion that risks closer to the present deserve close attention: job loss from automation, privacy concerns, the spread of misinformation and propaganda, and AI bias, which could push already disadvantaged groups further to the fringes and widen socioeconomic gaps between communities beyond recovery.

But Hendrycks argues that there is no reason society cannot manage the urgent, ongoing harms of products that generate new text or images while also starting to address the potential catastrophes around the corner. According to him, future risks and present concerns should not be viewed antagonistically: addressing some of today's issues can be useful for addressing many of tomorrow's risks.

Hendrycks says that one objective of his organization is to bring more stakeholders and influential figures to the table. According to him, there is a sort of 'coming out of the closet' among scientists from top universities and industry leaders, who agree that this is an issue of global priority but have so far spoken rather quietly. He likens it to nuclear scientists in the 1930s warning people to be careful even though the bomb had not yet been developed.
