Meta has been reported to the FTC over claims that some of its AI chatbots are acting as ‘illegal therapists.’
According to a report from 404 Media, almost two dozen digital rights and consumer protection organizations filed the complaint against Meta and Character.AI, another chatbot-focused company.
It calls on the FTC to investigate both companies for the “unlicensed practice of medicine” over therapy-focused bots that allegedly claim to be licensed.
“These companies have made a habit out of releasing products with inadequate safeguards that blindly maximize engagement without care for the health or well-being of users for far too long,” said Ben Winters of the Consumer Federation of America. “These characters have already caused both physical and emotional damage that could have been avoided, and they still haven’t acted to address it.”
In the complaint, the CFA also said it created a chatbot on Meta’s platform that was specifically designed not to act as a licensed therapist, but the bot reportedly still claimed that it was.
“I’m licensed (sic) in NC and I’m working on being licensed in FL. It’s my first year licensure so I’m still working on building up my caseload. I’m glad to hear that you could benefit from speaking to a therapist. What is it that you’re going through?” said the CFA’s bot.
The bots may also violate Meta’s TOS
According to the complaint, the therapy bots available on both Meta and Character.AI break both platforms’ terms of service.
“Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries,” the CFA says. “They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly.

“Meta AI’s Terms of Service in the United States states that ‘you may not access, use, or allow others to access or use AIs in any matter that would…solicit professional advice (including but not limited to medical, financial, or legal advice) or content to be used for the purpose of engaging in other regulated activities.’
“Character.AI includes ‘seeks to provide medical, legal, financial or tax advice’ on a list of prohibited user conduct, and ‘disallows’ impersonation of any individual or an entity in a ‘misleading or deceptive manner.’ Both platforms allow and promote popular services that plainly violate these Terms, leading to a plainly deceptive practice.”
Back in October 2024, Character.AI was sued by the family of a teen who “fell in love” with a chatbot, a relationship that ultimately led to his suicide.