Elon Musk, Steve Wozniak, Others Call For a Pause in AI

Photo Credit: Emiliano Vittoriosi

Elon Musk, Steve Wozniak, and 1,000+ others call for a pause in AI experiments due to “profound risks to society and humanity.”

The Future of Life Institute published an open letter on Wednesday calling for AI labs to “immediately pause for at least six months the training of AI systems more powerful than GPT-4.” This letter is signed by Elon Musk, Steve Wozniak, Andrew Yang, AI pioneers Yoshua Bengio and Stuart Russell, Stability AI CEO Emad Mostaque, and many more prominent figures.

The open letter argues that recent advances in AI following the release of OpenAI’s GPT-4 have led to an “out-of-control race” to develop and deploy AI models that are difficult to predict or control. The Future of Life Institute contends that this race is proceeding without adequate planning and management, and that new AI systems should be developed only once their effects are better understood and more manageable.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter reads. “As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.”

Chiefly, the letter poses four questions, some of which rest on hypothetical scenarios that are contested within parts of the AI community, including whether we should “automate away all the jobs, including the fulfilling ones”:

  • Should we let machines flood our information channels with propaganda and untruth?
  • Should we automate away all the jobs, including the fulfilling ones?
  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?
  • Should we risk the loss of control of our civilization?

To address these concerns, the Future of Life Institute’s letter calls on AI labs to “immediately pause for at least six months” the training of AI systems more powerful than GPT-4. During that time, AI labs and experts are encouraged to collaborate on shared safety protocols, overseen by “independent outside experts,” to ensure that AI systems are safe “beyond a reasonable doubt.”

Unfortunately, the letter does not specify how compliance would be enforced, nor what “more powerful than GPT-4” means in a regulatory sense. Additionally, OpenAI has deliberately withheld technical details about how GPT-4 works, making it even harder to define what would count as surpassing it.

The open letter’s confirmed notable signatories include Tesla and Twitter CEO Elon Musk, AI researchers Yoshua Bengio and Stuart Russell, Apple co-founder Steve Wozniak, and author Yuval Noah Harari. The letter is open for anyone to sign online without verification, which initially led to false signatures that were later removed.