Artificial Intelligence Makes People Dumber


A new study conducted by Microsoft and Carnegie Mellon University found that the use of artificial intelligence (AI) lowers people’s critical thinking skills. The researchers also pointed out that higher self-confidence is associated with more critical thinking, not less, so AI doesn’t just have the potential to make people dumber; it can also damage them emotionally.

In the study, published on Microsoft’s website, researchers explain that they surveyed 319 knowledge workers (people whose jobs involve handling or using information) “who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically.”

What they discovered was that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term over-reliance on the tool and diminished skill for independent problem-solving.”

“Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort,” the researchers continued. “When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.”

Why is this a problem?

“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” the researchers explained. “A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”

As Live Science points out, this isn’t necessarily a new problem; that ship sailed long ago with the advent of Google and other internet resources. What is more concerning is the widespread adoption of AI by school-aged children.

Then there is the fact that popular AI chatbots aren’t always reliable.

“As AI learns from human data, it may also think like a human – biases and all,” said Yang Chen, assistant professor at Western University and lead author of a recent study published in the journal INFORMS. “Our research shows when AI is used to make judgment calls, it sometimes employs the same mental shortcuts as people.”


Content shared from brobible.com.
