Anthropic Agrees to Maintain Copyright ‘Guardrails’

Photo Credit: Anthropic

Anthropic must maintain guardrails to prevent future AI tools from producing infringing material from copyrighted content. This stipulation partially resolves the music publishers’ preliminary injunction motion filed in the Northern District of California.

Eight music publishers sued Anthropic in October 2023 and sought an injunction in August 2024, arguing that the injunction was necessary to prevent infringement of their works. Anthropic opposed the motion, arguing that its use of copyrighted content to train AI models was ‘fair use,’ given that the output was transformed from the original work.

Under the new agreement, Anthropic will maintain the filters it has already implemented on responses to users’ queries. It may expand, improve, optimize, or change the implementation of these guardrails, so long as their overall efficacy at preventing the reproduction of copyrighted content is not diminished.

“Anthropic will maintain its already implemented Guardrails in its current AI models and product offerings. With respect to new large language models and new product offerings that are introduced in the future, Anthropic will apply Guardrails on text input and output in a manner consistent with its already-implemented Guardrails. Nothing herein prevents Anthropic from expanding, improving, optimizing, or changing the implementation of such Guardrails, provided that such changes do not materially diminish the efficacy of the Guardrails,” the agreement reads.

“At any time during the pendency of this proceeding, publishers may notify Anthropic in writing that its guardrails are not effectively preventing output that reproduces, distributes, or displays—in whole or in part—the lyrics to compositions owned or controlled by publishers, or creates derivative works based on those compositions,” the agreement continues.

Anthropic is required to respond to publishers promptly and must investigate any allegations they raise. “Anthropic will ultimately provide a detailed written response identifying when and how Anthropic will address the issue identified in Publishers’ notice, or Anthropic will clearly state its intent not to address the issue,” the stipulation states.

The agreement concludes that nothing in it should be interpreted as an admission of liability, fault, or wrongdoing by any party. The music publishers’ broader demand, that Anthropic refrain from using unauthorized lyrics to train its future AI models, remains pending.