Sony Music head Rob Stringer, whose company says it’s taken down over 75,000 AI deepfakes to date. Photo Credit: SME
As the UK considers a controversial AI law, Sony Music Entertainment (SME) has revealed that it’s taken down more than 75,000 tracks containing unauthorized soundalike audio.
SME disclosed as much in remarks submitted as part of a consultation on AI copyright regulations. As we previously reported, said consultation coincided with a UK government proposal for (among other things) an AI-training “opt-out” option.
As its name suggests, that regulatory framework would establish a system under which AI giants could lawfully train models on protected materials without prior authorization. In a nutshell, it’d be on rightsholders to expressly opt out of training.
Against the backdrop of justified rights-related concerns in the AI space, many are far from thrilled about the approach. The majors, Merlin, music orgs including AIM, and a variety of non-music players called out the potential law via public remarks.
Meanwhile, most if not all of those parties undoubtedly offered comments for the actual consultation, which spanned 10 weeks and stopped taking responses on February 25th.
(Looking to simultaneously “turbocharge AI” and drive economic growth, the government, for its part, says it’s reviewing the information “to help design the best possible policy to achieve the aims and objectives set out in this consultation.”)
It’s here that Sony Music pointed to the more than 75,000 takedowns of unapproved digital replicas.
Considering the figure from multiple angles, there is, of course, a clear-cut commercial downside to deepfake audio for today’s most prominent artists and labels. And the pile of unauthorized uploads, presumably diverting streams and fan interest away from the impacted professionals, is nothing to scoff at.
On the other hand, as of November 2023, Sony Music placed the deepfake-takedown total at approximately 10,000 – meaning roughly 65,000 additional takedowns across the intervening 15 or so months, for a low-end average of about 144 flagged works per day. Given the popular acts in question, SME’s presumably robust flagging procedures, and the outputs’ relative absence from platforms including Spotify, that average isn’t necessarily terrible from the company’s perspective.
Nevertheless, deepfakes are deepfakes, some of the AI creations are presumably falling through the cracks, and the problem could well intensify in the coming years.
More immediately, SME also took the opportunity to push back against the opt-out proposal as well as the broader idea of relaxing rules at the intersection of AI and copyright. Moving forward with the plan, Sony Music said, would harm artists, disrupt ongoing training-license negotiations, negatively affect the wider IP arena, and fuel economic fallout to boot.
Unsurprisingly, these and adjacent worries are prompting continued discussions across the pond. The Independent yesterday covered the regulatory talks, and the UK government has reportedly delayed the overarching AI Bill’s publication until at least summer.