If you train an AI on millions of stock images, is that copyright theft or just clever maths?
Last week, the High Court gave us the first big UK answer in Getty Images v Stability AI. This was the case many hoped would finally settle the fight between AI developers and creators. It didn’t do that. But it did do something important.
Spoiler: Getty lost most of its case. Stability AI (the company behind Stable Diffusion) is having a very good week.
So what actually happened, and what does it mean if you’re a UK business using (or building) AI?
- What was Getty complaining about?
Broadly, Getty said:
- You trained on our stuff
Stability AI allegedly used millions of Getty images (and captions/watermarks) as part of the training data for Stable Diffusion without a licence.
- You imported an infringing model into the UK
Getty argued that the model itself (the trained Stable Diffusion model) was an “infringing copy” of its works, so importing/using it in the UK was secondary copyright infringement.
- You messed with our brand
Some AI-generated images popped out with a Getty-style watermark, leading to trade mark claims.
On paper, this was the perfect test case: creators vs AI, copyright vs machine learning.
In reality, things got messier.
- What did the court actually decide?
- The big copyright claim… quietly fell away
Getty’s main claim (that training Stable Diffusion involved infringing copying in the UK) was dropped mid-trial. Getty simply couldn’t prove the training happened here rather than on servers abroad.
So, the court did not give a definitive ruling on whether training on copyrighted content is, in itself, infringement under UK law. That question is still alive (and now marching off to the US litigation Getty is pursuing in parallel).
- Is an AI model an “infringing copy”?
The remaining UK fight was about secondary infringement: is Stable Diffusion, as a trained model, an infringing copy of Getty’s works?
The judge said:
- An “article” for copyright purposes can be intangible (so software or a model can be an article).
- But Stable Diffusion does not store or reproduce Getty’s images. It contains learned patterns/weights, not the works themselves.
- Therefore, the model is not an “infringing copy” and importing/using it in the UK is not secondary copyright infringement.
In very non-technical terms:
However many lawyers you throw at it, a diffusion model is not a giant secret folder of Getty photos.
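If you’re curious what that looks like in practice, here’s a minimal sketch, assuming a PyTorch-style checkpoint (the file name is hypothetical, not Stability AI’s actual distribution): crack open a trained model and all you find is named arrays of numbers.

```python
# Minimal sketch: peeking inside a trained model file.
# Assumes PyTorch is installed; "model_checkpoint.pt" is a hypothetical path.
import torch

state = torch.load("model_checkpoint.pt", map_location="cpu")

# A checkpoint is a dictionary mapping layer names to tensors of
# floating-point numbers (the learned "weights"). There are no JPEGs,
# PNGs or captions inside, only parameters distilled from training.
for name, tensor in list(state.items())[:5]:
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```

None of which settles the legal question, but it is why the “giant secret folder” theory doesn’t fit how these models are actually built.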
That’s a big win for Stability AI and a comfort blanket (a thin one, but still usable) for generative AI developers.
- Trade marks: a small “ouch” for AI
Where Stable Diffusion produced images that included Getty-style watermarks, the court did find trade mark infringement, albeit “historic and extremely limited”.
So:
- Copyright claims? Mostly failed in the UK.
- Trade marks? Some limited success for Getty.
- What does this actually mean?
For AI developers
The court has effectively said: the model itself, as a set of weights and parameters, isn’t automatically an infringing copy, even if it was trained on copyrighted material, because it doesn’t store the works.
That doesn’t magically legalise all training everywhere. It just means: “Under current UK law, this particular model, on these facts, is not itself an infringing copy.”
Training acts that happen outside the UK are outside UK copyright’s direct reach, which nudges the real fight over to the US and other jurisdictions.
For creators and rights-holders
This is understandably being seen as a blow: the UK High Court has not given a big, creator-friendly “you must get consent and pay” ruling.
However, it hasn’t closed the door either:
- Trade marks, passing off and output copying remain very live routes.
- Legislators are now under pressure to clarify whether there should be a specific “text and data mining” exception, with opt-outs or compensation mechanisms.
In other words, the battle is shifting from courtroom to Parliament and policy.
For ordinary UK businesses using AI tools
This case doesn’t mean “anything goes”. But it does slightly reduce the risk that simply using a reputable model is, in itself, a copyright time-bomb.
Your bigger risks remain:
- How you use outputs (e.g. are you reproducing third-party IP without checking?).
- How you fine-tune or retrain (are you feeding confidential or copyrighted material into public tools?).
- What your contracts say (and don’t say) about IP ownership, indemnities and training data.
- So… is this progress or a problem?
That depends who you are.
If you’re building AI: this is progress. The court has confirmed that not everything that touches copyright is automatically infringing. There is space for maths that learns without being treated as storage that copies.
If you’re a creator: it feels more like a problem, because a big, headline-making case has come and gone without the strong protective precedent many were hoping for.
From a systems perspective, though, this might be exactly what was always going to happen:
The judge has said, in effect: “These tools are genuinely different. If you want different rules, you probably need Parliament, not me.”
And that’s the real message for everyone: the law is finally waking up to AI, but it isn’t fully dressed yet.
- What should UK businesses do now?
Whether you’re enthusiastic or sceptical about AI, here are some sensible moves post-Getty v Stability:
- Map your AI use
- Where are you already using AI (internally and via vendors)?
- What data goes in? What comes out? Where’s the human check?
- Tidy your contracts
- Add IP and copyright warranties/indemnities for AI vendors
- Be clear about who owns outputs and whether your data can be used for training
- Check for any brand/trade mark risks, especially if outputs might contain watermarks or logos
- Set internal rules for training/fine-tuning
- Don’t let staff upload client documents or sensitive material into public models
- Have a simple “What you can and can’t put into AI tools” policy
- Watch policy, not just case law
- UK and EU copyright rules around text and data mining, opt-outs, and licensing models are evolving
- Your risk profile in 12–24 months may be driven more by regulation and industry codes than by this one judgment
- Brand protection still matters
- If you have a strong brand, keep monitoring outputs that mimic your name or watermark, and be prepared to use trade mark law where needed, just like Getty did.
- Final thought
Getty v Stability AI doesn’t answer the big philosophical question “Should AI be allowed to train on my work without asking?”
What it does say, for now, is that under current UK law:
AI models are weird maths, not secret image libraries.
Whether that’s progress or a problem probably depends which side of the prompt you’re on.
Over to you: is this the law wisely adapting to new tech, or a missed chance to protect creators?