We can confidently say that artificial intelligence law stopped being “emerging” in 2025. This was the year courts, regulators, and legislators around the world started drawing real lines in the sand on copyright, data use, AI-washing, and high-risk systems, with obligations that will fully bite in 2026 and beyond. For in-house teams, founders, and boards, the year was less about theoretical risk and more about three concrete questions: what, exactly, is now illegal; what must we document; and how do we keep launching AI products without stepping on a legal landmine?
- Copyright & IP: The “Fair Use Triangle” Takes Shape
This year gave us the first real cluster of U.S. decisions on whether using copyrighted works to train AI is fair use. The answer so far: it depends heavily on how you got the data and what you do with it.
Thomson Reuters v. ROSS (D. Del.) – “Headnotes are not a free training set”
In February 2025, a Delaware federal court issued one of the first major training-data decisions in Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc. Thomson Reuters, owner of Westlaw, accused ROSS of using Westlaw headnotes to train a competing AI-driven legal research tool. The court rejected ROSS’s fair-use defense at summary judgment and found infringement, emphasizing the commercial, competitive use and the creative value of the curated headnotes.
Takeaway: Scraping proprietary, value-added content from a competitor to build a directly competing AI product is a high-risk strategy.
Bartz v. Anthropic (N.D. Cal.) – Lawful copies vs. pirated “central library”
In June 2025, Judge William Alsup issued a pivotal summary-judgment ruling in Bartz v. Anthropic:
- Training on lawfully acquired books was “quintessentially transformative” and fair use.
- But creating and retaining a “central library” of pirated books raised serious infringement concerns, and fair use was denied for those works.
The case later settled on the eve of trial in September 2025 for a reported $1.5 billion, underscoring the stakes of training-data decisions. The upshot: courts are increasingly drawing a line between lawfully acquired corpora (more defensible) and pirated or unauthorized data.
Kadrey v. Meta & other N.D. Cal. cases – More nuance on fair use
Companion cases out of the Northern District of California (including Kadrey v. Meta) produced additional rulings that, on their face, are more favorable to AI developers, finding fair use in some training scenarios involving lawfully sourced content.
Collectively, practitioners talk about a “fair use triangle”:
- Delaware (Thomson Reuters) – highly skeptical when AI is trained on proprietary, curated content to build a direct competitor.
- N.D. Cal. (Anthropic / Meta) – more open to fair use where content is lawfully acquired and the AI model is considered transformative, but not when developers hoard pirated content.
Media & music: NYT v. OpenAI, Disney/Universal v. Midjourney, and Suno/Udio
Meanwhile, The New York Times v. OpenAI / Microsoft continued as one of the most closely watched AI cases. In 2025, the court issued a sweeping preservation order requiring OpenAI to retain and segregate ChatGPT and API output logs, then later allowed OpenAI to resume normal deletion after the order expired in September. In November, Magistrate Judge Ona Wang ordered OpenAI to produce some 20 million ChatGPT logs, a stark reminder that product logs can become discoverable evidence in AI litigation.
In the media space, Disney and Universal sued Midjourney this year for alleged copyright infringement related to image training, marking the first major visual-media plaintiffs in the AI space.
Music labels likewise intensified litigation against AI music generators like Suno and Udio; by late 2025, Warner Music had settled and pivoted into a licensing partnership with Suno, allowing licensed AI models and artist opt-ins. This signals a likely future: litigation leading to structured licensing deals instead of pure prohibition.
Emerging frontiers: Trade secrets, trademarks, and data promises
This year, we also saw new angles:
- A proposed class action against Figma alleges the company used customers’ design files to train AI without consent, focusing on misappropriation of confidential information and broken data promises rather than pure copyright.
- OverDrive v. OpenAI accuses OpenAI of trademark infringement for naming its video model “Sora” in a way that allegedly conflicts with OverDrive’s existing “Sora” library app.
Strategic IP lesson for 2026: Build a documented data-provenance strategy. Track what data is used, how it was obtained, and under what license; wall off dubious sources (pirated sites, competitor headnotes, confidential customer content) and revisit your public promises about “never” using certain data for training.
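For teams that want to make this concrete, below is a minimal sketch, in Python, of what a per-corpus provenance record could look like. It is only an illustration: the DataSource fields, the acquisition categories, and the training_eligible gate are our own assumptions, not requirements drawn from the cases or statutes discussed above.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DataSource:
    """Illustrative provenance record; field names are assumptions, not legal requirements."""
    name: str                      # e.g., "Licensed news archive, 2010-2020"
    acquisition_method: str        # "licensed", "public-domain", "user-contributed", "scraped", "unknown"
    license_terms: Optional[str]   # link to or summary of the governing license, if any
    acquired_on: date
    contains_customer_data: bool   # triggers review of contractual and privacy promises
    cleared_for_training: bool     # set only after legal review
    notes: str = ""

def training_eligible(src: DataSource) -> bool:
    """Conservative gate: flag anything not clearly lawful and cleared for review."""
    if src.acquisition_method in {"scraped", "unknown", "pirated"}:
        return False  # wall off dubious sources entirely, per the strategy above
    if src.contains_customer_data and not src.cleared_for_training:
        return False  # customer content needs explicit contractual cover and review
    return src.cleared_for_training
```

Even a lightweight gate like this, with one reviewed record per corpus, produces the kind of documentation trail that the Thomson Reuters and Bartz decisions suggest will matter when training data is challenged.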
- Enforcement: FTC’s “AI-Washing” Crackdown and Agentic AI Claims
On the enforcement front, the Federal Trade Commission (FTC) made clear that there is no AI exemption from existing consumer-protection laws.
Operation AI Comply and “AI-washing”
Building on a 2024 announcement that “using AI tools to trick, mislead, or defraud people is illegal,” the FTC has now brought at least a dozen “AI-washing” cases, targeting companies that overstate what their AI does or mislead consumers about AI-powered earnings and performance claims. In August 2025, the FTC sued Air AI, alleging deceptive claims that its agentic AI could fully replace human sales reps and deliver unrealistic business results, while also raising concerns about exaggerated “AI-powered” marketing around a business opportunity scheme.
Key themes:
- Claiming “full automation” or “no humans needed” without proof is risky.
- Exaggerated ROI/earnings tied to “AI” are classic unsubstantiated claims.
- Labeling something as “AI-powered” when it’s not meaningfully different from a standard SaaS tool can be deceptive.
Strategic enforcement lesson for 2026:
Run all AI product and marketing copy through a truth-in-advertising filter:
- Can we prove this claim with competent evidence?
- Are we implying capabilities (e.g., “human-level,” “guaranteed replacement of employees”) we can’t substantiate?
- Are we clear about limitations, guardrails, and human oversight?
- New Statutes & Regulatory Frameworks: EU AI Act, Colorado, and State Patchwork
EU AI Act: Obligations start phasing in
The EU Artificial Intelligence Act formally entered into force on August 1, 2024, but 2025 was the year the first obligations started to bite.
Key 2025–2026 milestones:
- Feb 2, 2025 – Ban on “unacceptable-risk” AI systems (e.g., social scoring, certain manipulative systems) and AI literacy obligations.
- Aug 2, 2025 – Governance rules and obligations for general-purpose AI (GPAI) providers take effect, including documentation, transparency and some risk-management obligations.
- Aug 2, 2026–27 – The remaining high-risk framework phases in, including obligations for high-risk AI embedded in regulated products, sector-specific compliance, and national AI regulatory sandboxes.
If you build or deploy AI in the EU (or serve EU users), 2025 was the year to start classifying use cases and mapping them to future obligations.
Colorado AI Act: The first comprehensive U.S. AI statute
Colorado’s SB 24-205 (the Colorado Artificial Intelligence Act, or CAIA), signed in 2024, was under intense scrutiny throughout 2025; an August 2025 special-session amendment pushed its effective date from February 1, 2026 to June 30, 2026.
Key features:
- Risk-based approach similar to the EU AI Act.
- Focus on preventing algorithmic discrimination by “high-risk” AI systems.
- Obligations for both developers and deployers, including risk assessments, notice to consumers, and documentation.
This is the first broad, state-level AI framework in the U.S.—and it’s influencing drafts in other states.
California & other states: Deepfakes, elections, and transparency
California and other states continued to enact narrower, issue-specific AI laws, often around election integrity and deepfakes:
- California has laws requiring disclosures on AI-generated political ads and manipulated media used in campaign communications.
- Additional bills (like AB 2839 and AB 2655) target election-related deepfake disinformation and require platforms to block or label deceptive AI-generated political content during sensitive pre-election periods.
- California also advanced an AI Transparency Act aimed at labeling or watermarking AI content and addressing harms from non-consensual sexual deepfakes.
In November 2025, a bipartisan group of 35 state attorneys general urged Congress not to preempt state AI laws, highlighting state-level momentum around AI harms such as chatbots causing injuries, discrimination, and deepfake abuse.
Strategic regulatory lesson for 2026:
You should assume a patchwork:
- EU: horizontal, comprehensive AI Act.
- U.S. states: sector- and harm-specific rules (discrimination, elections, deepfakes, consumer AI).
- Vertical rules: financial, health, employment, housing, etc.
Building a single, global AI risk-management framework that can be tuned to local rules will be more sustainable than playing whack-a-mole with individual laws.
- Strategy for 2026: Practical AI Compliance Priorities
Given this 2025 landscape, here are concrete planning priorities for 2026.
- Build an AI inventory and risk map
- Catalogue all AI systems you develop or deploy (internal tools, customer-facing features, vendor models).
- Tag each system by jurisdiction, purpose, and risk (e.g., customer scoring, hiring, health, safety-critical, election content).
- Map each category to obligations under the EU AI Act, Colorado AI Act, and relevant state deepfake / discrimination laws (see the sketch after this list for one way to structure the inventory).
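By way of illustration only, here is a minimal Python sketch of what an inventory record and a first-pass risk triage might look like. The field names and risk tiers are assumptions made for the sketch, not categories taken from the EU AI Act or the Colorado statute, and any real classification still needs legal review.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AISystemRecord:
    """Illustrative inventory entry; jurisdictions, purposes, and tiers are examples only."""
    system_name: str
    owner_team: str
    purpose: str                   # e.g., "resume screening", "marketing copy generation"
    jurisdictions: List[str]       # e.g., ["EU", "US-CO", "US-CA"]
    uses_personal_data: bool
    consequential_decision: bool   # hiring, lending, housing, health, insurance, elections
    vendor_model: str = ""         # empty if built in-house

def presumptive_risk_tier(record: AISystemRecord) -> str:
    """Very rough first-pass triage to prioritize which systems legal reviews first."""
    if record.consequential_decision:
        return "high-risk"         # likely candidates for EU AI Act / Colorado CAIA analysis
    if record.uses_personal_data:
        return "limited-risk"
    return "minimal-risk"
```

The point of a structure like this is not the labels themselves but forcing every system through the same questions, so that when the EU and Colorado obligations attach you already know which systems they touch.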
- Clean up your training data and contracts
- Document sources and licenses for training corpora.
- Avoid or segregate pirated or obviously unauthorized content; after Bartz v. Anthropic, this is squarely litigated territory.
- Update customer contracts and privacy notices to be explicit (and honest) about whether customer data will be used for training, and on what terms. Cases like the Figma lawsuit show how quickly this can become a trade secret and data-privacy problem.
- Tighten AI marketing and sales claims
In light of the FTC’s “AI-washing” enforcement:
- Scrub your website, decks, and sales scripts for overblown AI claims (“fully autonomous,” “guaranteed 10x revenue,” “no human oversight needed”).
- Document evidence for material claims, including benchmarks, A/B tests, or client case studies.
- Train marketing and sales teams on what they can and cannot say about AI.
- Prepare for discovery in AI litigation
Cases like NYT v. OpenAI show that courts are willing to order production of massive volumes of logs and training records.
- Implement data-retention policies that balance privacy, storage cost, and anticipated litigation needs.
- Ensure your logging and observability systems avoid storing more personal data than necessary but still capture enough metadata to defend your systems (e.g., to show filtering, safety measures, and provenance); see the logging sketch after this list.
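As a rough sketch of how that balance can look in practice, the Python snippet below logs structured metadata about each model call (model version, safety-filter outcome, a hash of the prompt) instead of the raw user text. The field names and the hashing approach are illustrative assumptions, not a prescribed standard and not anything ordered in NYT v. OpenAI.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_model_call(model_version: str, prompt: str,
                   safety_filters_passed: bool,
                   blocked_categories: list[str]) -> None:
    """Record metadata useful for defending the system without retaining raw user content."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # The hash lets you match a log entry to a later complaint without storing the prompt.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "safety_filters_passed": safety_filters_passed,
        "blocked_categories": blocked_categories,
    }
    logger.info(json.dumps(record))
```

Keeping the prompt out of the log and retaining only a hash narrows what you hold on to, while still leaving something concrete to point to if your filtering and safety measures are ever questioned in discovery.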
- Stand up cross-functional AI governance
For most organizations, AI is no longer “just an IT issue.” Consider:
- An AI Governance Committee with legal, security, product, and compliance represented.
- A lightweight but formal AI impact assessment process for higher-risk deployments (hiring, lending, health, elections, safety-critical use).
- Regular updates to the board on AI risk and opportunity, especially as EU and state laws phase in by 2026.
- What All of This Means for 2026
If 2023–2024 were the years of AI experimentation, 2025 was the year courts and regulators began to tighten the frame. The pattern is clear:
- Data provenance and licensing will decide many copyright disputes.
- Truthfulness and transparency will drive enforcement around AI marketing and consumer protection.
- Risk-based frameworks (EU, Colorado, state laws) will reward organizations that can explain how their models work, what data they use, and what safeguards they put in place.
For companies building or deploying AI, 2026 is not the time to pause innovation—but it is the time to professionalize your AI compliance program. Please feel free to contact our law firm if you’d like help auditing your AI systems, updating your contracts and product claims, or building an AI governance framework tailored to your risk profile.