Artificial Intelligence Content Should Be Labeled, EU Official Says

Content created by artificial intelligence “must be recognized and clearly labelled” – at least according to European Commission VP for values and transparency Věra Jourová, who’s calling for a broader crackdown on AI media.

Jourová voiced the demand for labeled AI content today, during a meeting with the signatories of the EU’s controversial Code of Practice on Disinformation. Purportedly designed to combat disinformation on the internet, the Code of Practice, encompassing “44 commitments and 128 specific measures,” was finalized in June of 2022 by its 44 signatories.

Among the latter are Adobe, Clubhouse, Twitch, TikTok, Google, Meta, Kinzen (which Spotify, a dedicated EU lobbyist, acquired in October of 2022), and an array of comparatively obscure players. Each company and organization that added its name to the Code of Practice selected and agreed to various “commitments” (the essentials of which are outlined in the voluminous legal text) ostensibly designed to prevent the spread of disinformation.

And while the verbose Code of Practice includes a lone commitment pertaining to artificial intelligence, it goes without saying that the technology’s reach and prevalence have expanded dramatically during the last year. Consequently, in the lead-up to today’s meeting, Jourová said she believed the Code of Practice “should also start addressing new threats such as [the] misuse of generative AI.”

In keeping with the remark, Jourová emphasized in a tweet this morning that signatories’ “main homework” includes “addressing risks of AI.”

When it comes to generative AI “like ChatGPT” – the developer of which, OpenAI, could exit the EU altogether as MEPs continue to mold the lengthy “AI Act” – the government official said that “these services cannot be used by malicious actors to generate disinformation” and that “such content must be recognized and clearly labelled to users.”

“Image generators can create authentic-looking pictures of events that never occurred,” Jourová said during a brief speech. “Voice-generation software can imitate the voice of a person based on a sample of a few seconds. The new technologies raise fresh challenges for the fight against disinformation as well.

“Signatories who have services with the potential to disseminate AI-generated disinformation should in turn put in place technology to recognize such content and clearly label this to users,” continued the 58-year-old, who also touched upon the fast-approaching implementation of the Digital Services Act. “I said many times that we have the main task to protect the freedom of speech. But when it comes to the AI production, I don’t see any right for the machines to have the freedom of speech.”

Though outside the “disinformation” category, all manner of music (authorized and unauthorized releases alike) would be affected by a regulatory requirement obligating the identification of media generated entirely or in part by AI.

Notwithstanding the Code of Practice on Disinformation’s relatively limited scope, time will tell whether the possible rule makes its way into the aforementioned AI Act, another preliminary vote on which is scheduled for later this month. Meanwhile, as the legislation continues to take shape, more than a few artists and professionals are speaking out against the perceived pitfalls of artificial intelligence, the music-specific effects of which have prompted some financial experts to downgrade Warner Music Group stock.
