Scientist Gives Humans Under 200 Years Before AI Can Kill Us All

The director of the Jodrell Bank Centre for Astrophysics claims human civilization could die out due to runaway artificial intelligence (AI) within 100 to 200 years.

Michael Garrett, a radio astronomer at the University of Manchester, explained why he believes this could happen in a recent study published in the peer-reviewed journal Acta Astronautica.

“This study examines the hypothesis that the rapid development of Artificial Intelligence (AI), culminating in the emergence of Artificial Superintelligence (ASI), could act as a ‘Great Filter’ that is responsible for the scarcity of advanced technological civilizations in the universe,” Garrett writes in the paper.

“It is proposed that such a filter emerges before these civilizations can develop a stable, multiplanetary existence, suggesting the typical longevity of a technical civilization is less than 200 years.”

As Evan Gough of Universe Today explains it, “The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise. Think climate change, nuclear war, asteroid strikes, supernova explosions, plagues, or any number of other things from the rogue’s gallery of cataclysmic events.”

He then asks whether the rapid development of AI could also be such a cause.

In his paper, Garrett extrapolates the current trajectory of AI towards full-blown General Artificial Intelligence (GAI).

According to AI development company Accelerai, “General Artificial Intelligence, often referred to as Strong AI or Artificial General Intelligence, represents a level of AI that possesses the cognitive abilities and intellectual capacity equivalent to, or even surpassing, that of human beings.

“Unlike narrow AI systems that are designed to excel in specific tasks, GAI exhibits a more comprehensive understanding of the world, the ability to reason, learn, adapt, and perform a wide range of complex tasks without human intervention.”

Following that trajectory, Garrett claims unregulated GAI could wipe out human civilization in less than 200 years, and possibly closer to 100.

“In 2014, Stephen Hawking warned that the development of AI could spell the end of humankind,” Garrett wrote. “His argument was that once humans develop AI, it could evolve independently, redesigning itself at an ever-increasing rate. Most recently, the implications of autonomous AI decision-making, have led to calls for a moratorium on the development of AI until a responsible form of control and regulation can be introduced.”

Garrett also warns, “Presently, the AI we currently encounter in every-day life largely operates within human-established constraints and objectives. Nevertheless, progress is being made in creating systems that can augment and optimize various facets of their own development.

“The next stage will see AI systems independently innovate and refine their own design without human intervention. The potential for AI to operate autonomously raises many ethical and moral quandaries but it is surely only a matter of time before this occurs.”

In conclusion, he writes, “The pace at which AI is advancing is without historical parallel, and there is a real possibility that AI could achieve a level of superintelligence within a few decades.”

Will anyone with the power to regulate AI ever heed his and the numerous other warnings about the dangers of leaving it unchecked? Or will their doomsday prophecies end up coming true?
