USA | May 30, 2023

AI experts dub potential risks ‘extinction-level’

/ Our Today


In a show of unity, prominent AI scientists, tech CEOs, academics, and public figures have joined forces to emphasize the urgency of addressing the existential risks associated with artificial intelligence (AI).

Spearheaded by the Center for AI Safety (CAIS), a San Francisco-based, privately funded not-for-profit, the coalition aims to draw policymakers’ attention to the potential perils of advanced AI technology.

The statement, hosted on CAIS’ website, equates the risk posed by AI to that of other societal-scale threats, including pandemics and nuclear war. By highlighting the importance of mitigating “doomsday” extinction-level AI risk, the coalition seeks to initiate global action and foster a comprehensive understanding of the potential consequences.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety

The statement’s brevity is deliberate, intended to keep its core message — the severe risks posed by advanced AI technology — from being overshadowed.

The public declaration comes amidst a surge in discussions surrounding the risks of “superintelligent” AI and a barrage of headlines on the topic. Recent months have seen leading AI figures, including Geoffrey Hinton, Sam Altman, and Elon Musk, cautioning against the existential threats posed by AI.

The statement’s development was a joint effort, with credit given to David Krueger, an assistant professor of computer science at the University of Cambridge, for conceiving the idea and aiding in its formulation.

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing. (File Photo: REUTERS/Elizabeth Frantz)


Critics argue that the relentless focus on future risks detracts from the urgent need to address existing challenges and harms arising from AI technology.

Issues such as the unauthorized use of copyrighted data, privacy violations through the scraping of personal information, lack of transparency in AI systems’ training data, disinformation, bias, and AI-driven spam demand immediate attention.

Critics also emphasize the importance of exploring concerns related to market dominance and wealth concentration, rather than indulging in speculative debates over AI sentience.

The concentration of power among AI giants and their eagerness to collectively amplify discussions on existential risks raises questions about their motives. Some argue that diverting attention towards theoretical doomsday scenarios allows these companies to steer regulatory focus away from pressing issues, such as competition and antitrust considerations, as well as the societal consequences of AI deployment.

The coalition’s message has garnered support from a wide range of AI experts, with Dan Hendrycks, CAIS’ director, emphasizing the importance of addressing multiple risks simultaneously. He highlights systemic bias, misinformation, malicious use, cyberattacks, and weaponization as urgent concerns alongside existential risks. Hendrycks argues that a comprehensive risk management approach necessitates tackling present harms without disregarding future threats.

