As tech companies continue to pursue advances in artificial intelligence, yet another attention-grabbing statement on AI risk has emerged, bearing the signatures of scientists at major tech companies and hundreds of others, including musician Grimes.
Hundreds of AI scientists, academics, executives, and public figures, including musician Grimes and OpenAI CEO Sam Altman, have signed their names to a new statement urging global attention to the existential risk posed by advances in artificial intelligence.
The statement, hosted on the website of the Center for AI Safety (CAIS), a San Francisco-based not-for-profit organization, is brief and calls for policymakers to focus on mitigating risks up to and including “extinction-level AI risk,” comparable with the existential concerns associated with nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the intentionally brief statement.
CAIS’ website explains that the statement has been kept purposely “succinct” as those behind it are concerned that their message about “some of advanced AI’s most severe risks” may get lost in a sea of discussion about other “important and urgent risks from AI.”
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum (of AI risks),” CAIS explains. “The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
If this sounds familiar, that’s because we’ve heard many of the same concerns voiced loudly and repeatedly in recent months, as hype surrounding AI development has surged alongside expanded public access to generative AI tools like ChatGPT and DALL-E.
Interestingly, the onslaught of heavily publicized warnings about the potential risks of AI has, if anything, distracted attention from the very real and present harms AI is already causing: the use of copyrighted data to train AI systems without permission or compensation, the subsequent flood of AI-generated content on streaming platforms, the systematic privacy-violating scraping of data, and the lack of transparency from companies developing AI about the data used to train their tools.
Naturally, there are apparent commercial motivations to focus regulatory attention on the theoretical harms of future AI development rather than on the genuine problems plaguing the present. As TechCrunch succinctly explains, “Data exploitation as a tool to concentrate market power is nothing new.”
Among the signatories are Google DeepMind CEO Demis Hassabis, Inflection AI CEO Mustafa Suleyman, Stability AI CEO Emad Mostaque, OpenAI co-founders Sam Altman (who also serves as CEO) and John Schulman, Anthropic AI co-founders Jared Kaplan and Chris Olah, and many other prominent figures in the artificial intelligence space.