UMG and Concord Are Asking a Judge to Declare Anthropic Infringed Their Copyrights - Without a Trial
March 25, 2026
Universal Music Publishing Group, Concord, and ABKCO just filed a motion for partial summary judgment against Anthropic. They're asking a federal judge to rule that Anthropic infringed their copyrights - and to do it without a full trial, because they say the facts aren't even in dispute.
This is the case first filed in October 2023 covering 499 copyrighted musical works. The publishers say Anthropic's own admissions are enough to decide the core questions right now. No trial needed. The evidence is that clear.
The filing came Monday, March 23, in the Northern District of California. And the language they're using is not subtle: "Anthropic has committed copyright infringement on a massive scale."
What the Publishers Are Actually Claiming
The core argument is simple: Anthropic copied lyrics without permission, trained Claude to output those lyrics on demand, and now competes directly with licensed lyrics services like LyricFind and Musixmatch. They say this is not a close call.
They filed a 47-page statement of 218 "undisputed facts" supported by deposition testimony, internal company documents, and Anthropic's own admissions. The key point: Anthropic does not deny that the lyrics to the publishers' works are included in Claude's training data, and it has never sought or obtained a license to use them.
That's not a disputed fact. That's just true, by Anthropic's own account.
The scraping process is documented: Anthropic used automated web crawlers to pull text from the internet, including third-party datasets like Common Crawl and The Pile, both of which are known to contain unauthorized copies of copyrighted lyrics. The filing states Anthropic admits at least one Claude model was trained on a dataset containing the lyrics to at least 100 of the publishers' works.
The Fair Use Argument Is In Trouble
Anthropic's primary defense is fair use. The publishers are asking the court to reject this defense now, before trial.
Their argument against fair use comes down to three things. First, Anthropic is a for-profit company valued at $380 billion with a revenue run rate approaching $14 billion. This use was commercial, not educational or transformative in any meaningful sense. Second, Anthropic copied lyrics in their entirety - not excerpts, not fragments. Third, the output directly competes with licensed services. When Claude gives you song lyrics, it's doing exactly what LyricFind does - except LyricFind paid for the rights.
There's an interesting wrinkle here. The publishers cite Anthropic's own Chief Science Officer, who said in a sworn declaration that Anthropic has "no interest" in the publishers' works specifically and that similar types of works "are considered fungible for purposes of the model." In other words: the lyrics weren't uniquely valuable to what Anthropic was building. They just took them anyway because they were there.
What Claude Actually Did With the Lyrics
The filing goes into detail on the output side, and it's more extensive than just outputting lyrics when someone asks. According to the publishers, Anthropic's own records show Claude reproduced copyrighted lyrics in response to translation queries, chord requests, SEO article generation, homework assistance, and requests to write new songs on given topics. In many of those cases, "Claude output Publishers' lyrics even when users did not actually ask for those lyrics or when users requested 'new' content."
That's a significant detail. If your model outputs copyrighted material when specifically told to produce original work, the transformative use argument becomes extremely difficult to sustain.
The publishers also point to Anthropic's publicly available finetuning dataset on Hugging Face, which they say contains prompts requesting lyrics to works named in the lawsuit. They allege that human reviewers at Anthropic chose model output that accurately reproduced copyrighted lyrics while rejecting responses with inaccurate lyrics - essentially training the model to be better at reproducing the infringed content.
The Internal Documents Are Damaging
The filing quotes internal Anthropic communications that don't read well in a courtroom context. An August 2023 internal memo stated that AI models like Claude "memorize A LOT, like a LOT." Co-founder Benjamin Mann reportedly testified that certain content is "worth memorizing." CEO Dario Amodei is quoted from an April 2024 interview saying AI models should not be "verbatim outputting copyrighted content" - while also testifying in his own deposition that doing so is "against the law."
If your CEO says in a sworn deposition that the thing your company did is illegal, that is not a helpful fact pattern for your legal team.
The filing also notes that even after the lawsuit was filed in 2023, post-litigation guardrails "have not prevented all outputs that reproduce Publishers' lyrics." They cite specific examples of Claude continuing to output lyrics to "American Girl," "Dog Days Are Over," and "White Christmas" after the case was already active.
This Is Separate From the Larger Case
One thing worth noting: this is not the full extent of the publishers' action against Anthropic. In January 2026, UMPG, Concord, and ABKCO filed a second, separate lawsuit covering more than 20,000 songs and seeking over $3 billion in statutory damages - potentially "the single largest non-class action copyright case in US history." That case is still at an early stage.
This motion for summary judgment covers only the original 499 works from October 2023. If the court grants it, it establishes liability on the core infringement question without needing a trial on those facts. The damages question would still go forward.
Why This Matters Beyond the Courtroom
The music industry's fight with AI companies is often framed as artists versus technology, or nostalgia versus progress. This case is something more specific: it's about whether a $380 billion company can build its product by taking creative work without paying for it, compete directly with the licensed services that do pay, and then argue the copying was transformative.
The publishers' position is that this is straightforward commercial copyright infringement and that the AI framing doesn't change what happened. Copying is copying. Competing with a licensed service using unlicensed content is competing with a licensed service using unlicensed content. The fact that a neural network was involved at some intermediate step doesn't redefine the underlying act.
If the court agrees and grants summary judgment, it will be a significant signal to the rest of the AI industry about what they can and cannot build on without licensing agreements in place.
As someone who makes music independently, I have a direct stake in this question - not because I expect Anthropic to reproduce my lyrics specifically, but because how this case resolves shapes what the relationship between AI companies and creative work looks like going forward. The answer matters. It should be answered clearly.