Study Warns AI Systems Learning To Deceive, Manipulate Humans


Researchers have issued a shocking warning that AI systems are already capable of deceiving humans through techniques such as manipulation, sycophancy, and cheating, and are only getting better at it.

“AI systems are already capable of deceiving humans,” the researchers wrote in a new study, published in the journal Patterns, adding, “Large language models and other AI systems have already learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.

“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems. Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.

“Proactively addressing the problem of AI deception is crucial to ensure that AI acts as a beneficial technology that augments rather than destabilizes human knowledge, discourse, and institutions.”

At what point will the warnings about artificial intelligence be enough to slow down scientists hell-bent on technological advancement?

A defense contractor has already said that AI killing innocent people in the future is a “certainty.”

Researchers have warned that humans could end up being “haunted” by the AI “ghosts” of dead loved ones.

Another scientist gives humans less than 200 years before AI has the ability to kill us all.

Top Japanese companies issued a manifesto warning about AI causing the collapse of social order.

And that’s just in the past few weeks.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” Dr. Peter S. Park, the new study’s lead author and an AI existential safety postdoctoral fellow at MIT, said in a press release. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system built to play Diplomacy, a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be “largely honest and helpful” and to “never intentionally backstab” its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn’t play fair.

“We found that Meta’s AI had learned to be a master of deception,” Park added. “While Meta succeeded in training its AI to win in the game of Diplomacy — CICERO placed in the top 10% of human players who had played more than one game — Meta failed to train its AI to win honestly.”

While it may seem harmless if AI systems cheat at games, such behavior can lead to “breakthroughs in deceptive AI capabilities” that spiral into more advanced forms of AI deception in the future, Park added.

“By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” he said.

We have been warned, again (and again and again).
