The legal landscape for AI image generation.
Cases, rulings, and where we are in 2026. We cite court filings, judgements, and regulator pages. Where facts are contested, we say so. We don't opine on who should win.
Timeline at a glance.
Getty Images v Stability AI.
Filed January 2023 (UK) and February 2023 (US). Judgement (UK): November 2025.
Getty Images sued Stability AI in two jurisdictions on overlapping but distinct theories. The UK action, filed in the High Court of Justice Business and Property Courts in January 2023, alleged copyright and database-right infringement on the basis that Stability had scraped Getty's copyrighted images during training of Stable Diffusion. The US action, filed in the District of Delaware as case 1:23-cv-00135 in February 2023, brought parallel claims under US law plus trademark claims for the appearance of distorted Getty watermarks in some Stable Diffusion outputs.
The UK trial was heard in 2025, and the High Court delivered judgement in November 2025. The judgement, available via BAILII (verified April 2026), addressed whether Stability's training process constituted copyright infringement, the territorial scope of UK copyright law for cross-border data processing, and trademark issues arising from Getty watermark replication.
Read the judgement directly on BAILII; we don't paraphrase a substantial UK High Court ruling. Practical takeaways for evaluating generators trained on web-scraped data: training-data provenance is now a live legal question, jurisdictional reach matters, and watermark replication is a separate trademark concern from copyright in the underlying image.
Getty's public-facing summaries are on the company's newsroom (verified April 2026). Stability AI has issued statements via its own press channels. Both sides represent the case to their constituents in their preferred framing; the judgement text is the authoritative document.
Andersen et al. v Stability AI.
Filed January 2023, Northern District of California. Class action by named artists.
A class action filed by named artists Sarah Andersen, Kelly McKernan, and Karla Ortiz against Stability AI, Midjourney, and DeviantArt in the Northern District of California in January 2023. The original complaint alleged direct copyright infringement, vicarious infringement, DMCA violations, and right-of-publicity claims arising from training on the LAION dataset, which contained the artists' works.
The procedural history, available on CourtListener (verified April 2026), includes Judge Orrick's October 2023 dismissal of several claims with leave to amend, and the August 2024 ruling that allowed the direct copyright infringement claims to proceed. The case continues into 2026.
What this teaches a buyer of generators: training-data sourcing is the central legal question; models trained predominantly on web-scraped data carry an uncertain tail of liability risk; and generators that publish licensed-only training sources address that risk directly. This is why Adobe's positioning of Firefly around its training data is more than marketing.
US Copyright Office guidance.
The US Copyright Office has been the most active national copyright authority on AI questions. Three documents matter:
- Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (March 2023). The foundational policy statement requiring human authorship for registration. Read it on copyright.gov (verified April 2026).
- Zarya of the Dawn registration decision (February 2023). A graphic novel by Kris Kashtanova; the Office cancelled the original registration and issued a new partial registration covering the human-authored text and arrangement, excluding the AI-generated images. The decision letter is the most concrete application of the policy to a real work.
- Report on Copyright and Artificial Intelligence (2024-2025 series). A multi-part report covering digital replicas, copyrightability of AI outputs, training, and policy options. Available at copyright.gov/ai (verified April 2026).
The headline takeaway has not changed: under US law, copyright requires human authorship, and purely AI-generated outputs are not registrable. Material with substantive human authorship in the form of selection, arrangement, and editing of AI outputs can be registered as to those human contributions.
USPTO inventorship guidance.
Patent law is a separate regime from copyright but asks an analogous question about non-human creators. The Federal Circuit ruled in Thaler v Vidal (2022) that an AI cannot be a named inventor under the Patent Act, which requires natural persons. The USPTO followed in February 2024 with inventorship guidance for AI-assisted inventions (verified April 2026).
The USPTO position is that AI tools may assist human inventors but the named inventor must be a natural person who made a significant contribution to the conception of the invention. This is broadly aligned with the Copyright Office's human-authorship requirement, with the difference that patent law has the "significant contribution" threshold whereas copyright requires creative authorship of the protected expression.
For visual asset creators this matters indirectly: derivative works built on AI-generated imagery sit in a chain of rights that includes both copyright and, occasionally, design-patent considerations. Both regimes treat AI as a tool, not a creator.
EU AI Act and Copyright Directive.
Two EU instruments shape the landscape. The Artificial Intelligence Act (Regulation (EU) 2024/1689; verified April 2026) entered into force in August 2024 with a phased application schedule. Its provisions on general-purpose AI models, which cover image generators, impose transparency obligations: providers must publish a sufficiently detailed summary of the training content.
The Copyright Directive ((EU) 2019/790; verified April 2026) includes a text-and-data-mining exception at Article 4 that permits TDM for any purpose, subject to a rights-holder opt-out expressed in machine-readable form. The opt-out mechanism is the foundation for tools like Spawning's ai.txt and the C2PA-related metadata signals discussed on /training-data.
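One machine-readable way to express the Article 4 reservation is the W3C community-group TDM Reservation Protocol (TDMRep), which signals the opt-out via HTTP response headers. A minimal sketch of checking for that signal, assuming the TDMRep header names; the example URL and header set are invented for illustration:

```python
def tdm_opt_out(headers: dict) -> bool:
    """Return True if the response headers assert a TDM reservation.

    Illustrative sketch of the TDMRep signal: "tdm-reservation: 1"
    means rights are reserved (opt-out asserted); "0" or absence means
    no reservation is expressed at this layer.
    """
    # Header names are case-insensitive in HTTP; normalise keys first.
    normalised = {k.lower(): v.strip() for k, v in headers.items()}
    return normalised.get("tdm-reservation") == "1"


# Example: a server asserting the opt-out and pointing at its policy.
example_headers = {
    "Content-Type": "image/jpeg",
    "TDM-Reservation": "1",
    "TDM-Policy": "https://example.com/tdm-policy.json",  # hypothetical URL
}
print(tdm_opt_out(example_headers))  # True: rights reserved
```

A crawler that honours the opt-out would skip any asset whose response carries this reservation; the same signal can also live in a site-level file or page metadata under the protocol.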
Practical implication for evaluating generators: a generator that publishes a training-data summary in line with EU AI Act expectations is signalling its provenance position openly. Whether a generator honours rights-holder opt-outs comes down to whether its training corpus actually excludes opted-out works.
What this means when you evaluate a generator.
Five practical questions follow from the legal landscape:
- Training data disclosure. Does the vendor publish what the model was trained on? Vendors that disclose are signalling confidence in their licensing position.
- Indemnification. If the generator was trained on potentially infringing data, does the vendor stand behind the outputs commercially? This is the difference between "commercial use allowed" and "commercial use safe".
- Jurisdiction. Where is the vendor incorporated, where are the servers, and where are you publishing? Three jurisdictions can apply different rules; the most restrictive often controls.
- Output copyrightability. If you need the output to be your registered copyright, plan for human creative input in the workflow (selection, arrangement, editing) sufficient to meet the relevant national standard.
- Rights-holder opt-out. If you are a rights-holder yourself, the EU TDM opt-out and US robots.txt-style mechanisms exist; the training-data page covers the practical steps.
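The five questions above can be captured as a simple due-diligence record. A minimal sketch in Python; the field names, the scoring, and the example vendor are all invented for illustration, not any real vendor's terms:

```python
from dataclasses import dataclass


@dataclass
class GeneratorDiligence:
    """One vendor-evaluation record mirroring the five questions."""
    vendor: str
    discloses_training_data: bool    # published training-data summary?
    indemnifies_outputs: bool        # commercial indemnity, not just "allowed"
    jurisdictions: tuple             # vendor seat, servers, publication venue
    human_authorship_workflow: bool  # selection/arrangement/editing planned?
    honours_tdm_opt_out: bool        # EU Article 4 opt-out respected?

    def open_questions(self) -> list:
        """Names of the yes/no checks that are still unmet."""
        flags = {
            "training-data disclosure": self.discloses_training_data,
            "output indemnification": self.indemnifies_outputs,
            "human-authorship workflow": self.human_authorship_workflow,
            "TDM opt-out compliance": self.honours_tdm_opt_out,
        }
        return [name for name, ok in flags.items() if not ok]


# Hypothetical vendor, for illustration only.
record = GeneratorDiligence(
    vendor="ExampleGen",
    discloses_training_data=True,
    indemnifies_outputs=False,
    jurisdictions=("US", "EU", "UK"),
    human_authorship_workflow=True,
    honours_tdm_opt_out=True,
)
print(record.open_questions())  # ['output indemnification']
```

Jurisdiction is kept as a plain list rather than a yes/no check because, as noted above, the most restrictive of the applicable rules often controls; that is a legal judgement, not a checkbox.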
The legal landscape will continue to evolve through 2026 and beyond. The /licensing and /legal-landscape pages on this site are reviewed quarterly and updated as filings, judgements, and regulator outputs arrive.