
Leading AI developers warn that AI poses an "extinction risk" to humanity

The CEOs of OpenAI and Google DeepMind are among the signatories of a manifesto that calls for treating the danger of AI as seriously as that of pandemics or nuclear war.

A human hand and a robotic hand reach toward each other.

(Pexels)


The main architects of artificial intelligence (AI), including the CEO of OpenAI, Sam Altman, and the head of Google DeepMind, Demis Hassabis, have signed a manifesto warning that advances in this field pose "a risk of extinction" to humanity. The 350 signatories also include leading executives, researchers and engineers working on the development of this technology.

"Global priority"

The succinct statement, just one sentence long, could not be more blunt: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the statement on the Center for AI Safety's website.

According to this organization, the statement aims to promote debate on the "important and urgent risks of AI," a debate that remains difficult given the widespread fascination with the possibilities of artificial intelligence:

AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.

AIs, "increasingly capable of acting autonomously to cause harm"

According to this entity, "AI systems are becoming more and more capable. AI models can generate text, images and videos that are difficult to distinguish from human-created content. While AI has many beneficial applications, it can also be used to perpetuate prejudice, fuel autonomous weapons, promote disinformation and carry out cyberattacks. Although AI systems are used with human involvement, AI agents are increasingly capable of acting autonomously to cause harm."

Experts warn that the evolution of these systems poses eight types of potentially lethal risks to mankind:

  1. Arms race. Artificial intelligence can facilitate the manufacture of chemical and biological weapons, as well as the automated launch of cyberattacks. Some military leaders are even considering its application to the management of nuclear silos. In addition, the most modern weapons already incorporate AI programs.
  2. Disinformation. "States, parties and organizations use technology to influence and convince others of their political beliefs, ideologies and narratives. Emerging AI can take this use case into a new era and enable large-scale personalized disinformation campaigns. In addition, artificial intelligence itself could generate highly persuasive arguments that elicit strong emotional responses."
  3. Lack of values and limits. "Trained with the wrong goals, AI systems could find new ways to pursue their goals at the expense of individual and societal values."
  4. Weakening of humanity's self-government. "If important tasks are increasingly delegated to machines, humanity loses the capacity for self-government and becomes completely dependent on machines."
  5. Value lock-in. "More powerful AI systems can be designed by and available to fewer and fewer stakeholders. This may enable, for example, regimes to impose narrow values through pervasive surveillance and oppressive censorship."
  6. Emergent goals. "Models demonstrate unexpected and qualitatively different behavior as they become more proficient. The sudden emergence of capabilities or goals could increase the risk of people losing control over advanced AI systems."
  7. Deception. "More powerful AIs that can fool humans could undermine human control. AI systems could also have incentives to circumvent supervisors. Historically, individuals and organizations have had incentives to circumvent controls. Future AI agents could similarly change strategies when monitored and take steps to conceal their deception from monitors. Once deceptive AI systems are cleared by their supervisors, or once such systems are able to overpower them, these systems could take a 'treacherous turn' and irreversibly evade human control."
  8. Search for power. "Building power-seeking AI is also incentivized because political leaders see the strategic advantage of having the smartest and most powerful AI systems. For example, Vladimir Putin has said, 'Whoever becomes the leader at [AI] will become the ruler of the world.'"