Scientists Alarmed That AI Has Learned How To Self-Replicate

Scientists are sounding the alarm about artificial intelligence (AI) after finding that frontier AI systems have crossed what they call “the self-replicating red line.” Such an advancement, the scientists say, “is an early signal for rogue AIs.”

“Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings,” the scientists from Fudan University in China wrote in a research paper published to the preprint database arXiv. “That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems.”

According to builder.ai, “Frontier AI refers to the advancements and innovations in the field of Artificial Intelligence (AI) that push the current capabilities of the most advanced AI models. It involves innovations in Machine Learning, neural networks and cognitive computing, aiming to enhance AI capabilities across various industries.”

As part of their research, the scientists found that two AI systems, driven by Meta’s Llama31-70B-Instruct and Alibaba’s Qwen25-72B-Instruct, “have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively.”

Why is this a concern? Because, as the researchers state in their paper, AI systems could use “self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs.”

“If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings,” they cautioned.

We can’t say we weren’t warned that something like this could happen. Over, and over, and over again.

Last year, researchers at MIT reported that AI systems were already capable of deceiving humans.

“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” they wrote.
