AI-generated content is flooding the internet, and even Meta’s top advisors have admitted that they don’t know how to deal with it.
As tools to create videos, articles, and images become more accessible, concerns are mounting over what some call “AI slop” – low-effort, mass-produced content that clutters feeds and blurs the line between real and fake.
One example is Hedra, a startup using AI to generate videos up to five minutes long. CEO Michael Lingelbach says the tech doesn’t just create more junk; it also expands what’s possible.
“There’s never been a barrier to people making uninteresting content,” he said. “Now there’s just more opportunity to create different kinds of uninteresting content, but also more kinds of really interesting content too.”
But as this content floods platforms like Instagram, Facebook, and YouTube, even insiders are raising red flags.
The age of AI slop is “inevitable”
“The age of slop is inevitable,” said Henry Ajder, an AI expert and policy advisor at Facebook and Instagram-owned Meta, in an interview with CNBC. “I’m not sure what we do about it.”
Ajder also runs Latent Space Advisory, a firm that helps businesses navigate AI’s rapid evolution. He warns that the biggest problem might be trust.
“Even if the content is informative and someone might find it entertaining or useful, I feel we are moving into a time where you do not have a way to understand what is human-made and what is not,” he said.
With no clear solution in sight, the internet may be headed into a future where quality is harder to find and harder to define.
Some companies, however, have taken steps to find and discourage “unoriginal” AI content. YouTube, for one, confirmed that it updated its guidelines on July 15 to better detect “mass-produced and repetitious content.”
Content shared from www.dexerto.com.