
By: Alec Eyre, Assessment Lead at Armilla AI
When legal systems diverge, insurers don't just face uncertainty; they face unpriceable risk.
On November 11, 2025, a Munich regional court handed OpenAI its first major European copyright loss, ruling that ChatGPT violated German law by training on and reproducing copyrighted song lyrics. The decision arrived just months after U.S. courts issued contradictory rulings on whether fair use protects AI training, some judges blessing the practice, others rejecting it outright.
For insurers, the message is unmistakable: AI companies now navigate incompatible copyright standards across key markets, each carrying billion-dollar exposure.
The Munich case centered on nine German songs, including works by Herbert Grönemeyer. The court found OpenAI liable for both memorizing lyrics during training and reproducing them in outputs, rejecting OpenAI's reliance on Europe's text-and-data-mining exception. Legal observers believe similar reasoning could echo across the EU.
This follows Anthropic's $1.5 billion settlement with U.S. authors in September 2025, the largest copyright settlement in U.S. history. Yet even as American courts lean toward permitting AI training under fair use, European judges are charting a different course entirely.
More than 40 active copyright cases are working through U.S. courts, and 2025's decisions reveal deep inconsistency. In California, Judge William Alsup ruled that using legally acquired books for AI training was "transformative." Delaware's court reached the opposite conclusion in Thomson Reuters v. ROSS Intelligence, finding no transformative purpose when an AI tool directly competed with the plaintiff's product. As one legal analyst noted, "A slight change in how an AI system uses copyrighted data can flip the legal outcome entirely."
AI models ignore borders, but laws don't. A single ChatGPT deployment can expose a company to conflicting standards at every stage: training in U.S. data centres raises fair-use questions, deployment through global cloud infrastructure triggers EU licensing requirements, and outputs delivered anywhere face local reproduction laws. The Munich court found OpenAI liable for both training and output reproduction: two distinct acts, separated by time and geography.
Standard Commercial General Liability policies rarely cover AI training-related infringement, and the Munich ruling's dual-liability finding raises new questions: Are training and output two separate occurrences? Does Media Liability respond to training activities? How do Tech E&O forms treat the ingestion of millions of copyrighted works?
The market's initial response was silence on coverage, but carriers are now trending defensive on generative-AI risks. Verisk filed a Generative-AI Exclusion for CGL policies effective January 2026, and Philadelphia Indemnity and Hamilton Select have added exclusions to their Professional Liability forms.
Fair use may protect training on lawfully obtained works, but using pirated material triggers catastrophic statutory damages. Under U.S. law, willful infringement allows up to $150,000 per work. A dataset with 5,000 unlawfully sourced works could generate $750 million in theoretical exposure before defense costs.
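The exposure math above is simple but worth making explicit. A minimal sketch, using the figures cited in the text (the $150,000 per-work cap for willful infringement and the hypothetical 5,000-work dataset; the function name is illustrative):

```python
# Illustrative only: theoretical statutory-damages exposure under U.S. copyright law,
# before defense costs. Figures are those cited in the article.

def statutory_exposure(works: int, per_work_damages: int) -> int:
    """Upper-bound statutory exposure: number of works times per-work damages."""
    return works * per_work_damages

WILLFUL_CAP = 150_000  # maximum statutory damages per work for willful infringement

exposure = statutory_exposure(works=5_000, per_work_damages=WILLFUL_CAP)
print(f"Theoretical exposure: ${exposure:,}")  # Theoretical exposure: $750,000,000
```

The point of the sketch is that exposure scales linearly with dataset size: each additional unlawfully sourced work adds another per-work maximum to the ceiling.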
Most of these cases, including Anthropic's $1.5 billion settlement, center on Books3, a now-infamous dataset containing roughly 200,000 pirated books. Multiple suits against Meta, Google, Microsoft, and Apple allege its use. Even if AI training proves transformative, mass piracy remains indefensible. Fair use might save you in court, but only if your data was legal to begin with.
No major U.S. fair-use ruling is expected before mid-2026, while European courts accelerate in parallel. For at least the next 18-24 months, insurers must operate in a legal vacuum with minimal actuarial data and no settled law to guide them.
The challenge isn't just legal uncertainty; it's that traditional risk management breaks down when the same dataset constitutes lawful fair use in the United States but unlawful memorization in Germany. Insurers can't simply "price for the jurisdiction" when a single model deployment exposes companies to incompatible standards simultaneously.
Anthropic's $1.5 billion settlement begins to resolve only U.S. training issues, hinting at the minimum legal standards required for training data, but it says nothing about the output liability now emerging in Europe. With German courts rejecting what California judges accept, and the EU poised to impose higher standards on model recall and reproduction, dozens of unresolved cases point toward escalating exposure, not containment. For insurers, the question isn't whether AI copyright risk is large; it's whether it's insurable at all when the same model triggers lawful use in one jurisdiction and billion-dollar liability in another.
More Information
As one of a few specialty companies offering AI Third-Party Liability insurance, Armilla bridges the gap between traditional Cyber or E&O coverages and the emerging AI risk landscape. Armilla prides itself on partnering with clients throughout their AI lifecycle—conducting technical assessments, structuring tailored coverage, and providing ongoing risk guidance as you and your business progress through your AI journey.
Questions about your AI risks? Contact us at hello@armilla.ai or visit armilla.ai