FACEBOOK will begin to flag AI-created images to users ahead of election season amid fears that deepfakes will be used to sway voters.
Meta, Facebook and Instagram’s parent company, says it will label photorealistic images created using Meta AI with the tag: “Imagined with AI”.
The label aims to make social media users aware that certain posts are machine generated, and not real.
However, current technology and practices mean Meta can only pin this label onto images created with its own Meta AI image generator.
The tech giant says it is working with industry partners on a common standard for identifying content made by other companies’ AI tools.
This includes video and audio, as well as images.
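For readers curious about how such identification could work in practice, here is a minimal illustrative sketch, not Meta’s actual system: it simply scans an image file’s raw bytes for marker strings associated with provenance standards such as C2PA and the IPTC “digital source type” vocabulary. The specific marker list is an assumption for illustration only.

```python
# Illustrative sketch only: scan an image file for byte patterns that
# AI-provenance metadata standards may embed. Marker list is an assumption.
from pathlib import Path
import sys

AI_MARKERS = [
    b"c2pa",                     # C2PA provenance manifests are stored under this label
    b"trainedAlgorithmicMedia",  # IPTC digital source type value for AI-generated media
    b"Imagined with AI",         # Meta's label text, if it were embedded in metadata (assumed)
]

def looks_ai_generated(image_path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the file's bytes."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "possible AI provenance marker found" if looks_ai_generated(path) else "no AI marker found"
        print(f"{path}: {verdict}")
```

A scan like this can only flag files that still carry their metadata; as the article notes below, determined bad actors can strip or evade such markers.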
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Nick Clegg, President of Global Affairs at Meta, said in the announcement.
“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.
“So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”
The label will be rolled out across Facebook, Instagram and Threads “in the coming months”.
AI clones, also known as deepfakes, have already been used to impersonate politicians during election years.
A month before the UK’s General Election in 2019, a deepfake video of Jeremy Corbyn backing rival Boris Johnson went viral online.
Last week, a former White House Information Officer warned that the chilling rise of deepfakes and AI-created content could fuel propaganda ahead of the US election.
Similar fears surround the many other elections taking place in 2024, which is considered a record year for elections worldwide.
Clegg added that it’s important Meta – and Facebook – stay “one step ahead” when it comes to deepfake content.
“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” he added.
“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead.
“People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it.”
How can you spot the real from the fake?
“It’s not yet possible to identify all AI-generated content,” according to Clegg.
But there are ways to help protect yourself against AI clones:
- Inspect the context around the content
- Evaluate the claim
- Check for distortions
While phishers are known for their poor writing skills, AI-generated text may be more grammatically correct.
However, sometimes the sentences can appear choppy – an important clue.
If an image seems too bizarre to be real, it is probably fake.
AI generates a lot of distortions when creating content, like too many fingers or soulless eyes.