Last month, hundreds of top technologists called for a moratorium on the development of advanced artificial intelligence (AI) systems.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” an open letter published by the Future of Life Institute read.
Last year, 36 percent of AI researchers surveyed said they believe AI could one day cause a nuclear-level catastrophe.
These and other warnings were recently presented by Tristan Harris and Aza Raskin, two of the co-founders of the Center for Humane Technology and the men behind the Netflix documentary The Social Dilemma.
During their presentation, Harris warned, “50 percent of AI researchers believe there’s a 10 percent or greater chance that humans go extinct from our inability to control AI.”
Raskin also added, “A lot of what the AI community worries most about is when there’s what they call ‘takeoff.’ That AI becomes smarter than humans in a broad spectrum of things. Begins the ability to self-improve. Then we ask it to do something. It, you know, the old standard story of be careful what you wish for because it’ll come true in an unexpected way.”
“What we want is AI that enriches our lives. AI that works for people, that works for human benefit that is helping us cure cancer, that is helping us find climate solutions,” Harris told NBC Nightly News anchor Lester Holt in a follow-up interview. “We can do that. We can have AI and research labs that’s applied to specific applications that does advance those areas. But when we’re in an arms race to deploy AI to every human being on the planet as fast as possible with as little testing as possible, that’s not an equation that’s going to end well.”
Harris continued, “What’s surprising and what nobody foresaw is that just by learning to predict the next piece of text on the internet, these models are developing new capabilities that no one expected. So just by learning to predict the next character on the internet, it’s learned how to play chess.”
Raskin added, “What’s very surprising about these new technologies is that they have emergent capabilities that nobody asked for.”
He also warned that one of the biggest problems with AI right now is “that it speaks very confidently about any topic and it’s not clear when it is getting it right and when it is getting it wrong.”
“No one is building the guardrails,” Harris warned. “And this has moved so much faster than our government has been able to understand or appreciate.”