The case before London’s High Court accuses the AI firm of illegally using millions of its images to train its Stable Diffusion model.
Getty Images has launched a high-profile copyright lawsuit against artificial intelligence firm Stability AI, with proceedings now under way at London's High Court. The case is expected to set significant legal precedents for how copyright law applies to AI.
Getty alleges that Stability AI unlawfully scraped millions of images from its website to train its text-to-image model, Stable Diffusion. The tool generates images from written prompts and, Getty claims, relies on creative content taken without permission or payment. Getty argues that this use amounts to copyright infringement and undermines its business.
Stability AI denies the allegations and has framed the dispute as a broader test of how copyright should function in the context of emerging technologies. A spokesperson said its models build on "collective human knowledge" and suggested the training process aligns with fair-use principles.
The case comes amid a wider global reckoning over the use of copyrighted content in AI training. As generative tools such as ChatGPT and image models like Midjourney have grown in prominence, artists, photographers and other rights holders have called for stronger legal protections. Prominent voices, including Elton John, have warned of the risks posed to creators if their work is reused without consent.
Lawyers expect the Getty case to be closely watched by governments and regulators. A win for Getty could open the door to a wave of legal actions from other content owners. Rebecca Newman, a partner at Addleshaw Goddard, described the case as “uncharted territory,” with potentially far-reaching consequences for copyright enforcement in the AI era.
Cerys Wyn Davies, a partner at Pinsent Masons, said the decision could also influence investment in the UK’s AI sector. “The outcome may affect how the UK is viewed as a market for developing and deploying AI technologies,” she said.
The lawsuit also highlights growing concerns about the limits of current copyright frameworks. In the US, the Copyright Office has published reports outlining the challenges of regulating AI-generated content and deepfakes, and there have been calls for new federal laws to protect people’s likenesses and rein in unauthorised use of digital replicas.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
10
Notes:
The narrative is current, with the lawsuit commencing today, 9 June 2025. No earlier versions of this specific content were found, indicating high freshness.
Quotes check
Score:
10
Notes:
The direct quotes in the narrative are unique and do not appear in earlier material, suggesting originality.
Source reliability
Score:
10
Notes:
The narrative originates from a reputable source, Reuters, enhancing its credibility.
Plausibility check
Score:
10
Notes:
The claims made in the narrative are plausible and align with known facts about the lawsuit and the parties involved.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary:
The narrative is fresh, original, and sourced from a reputable outlet, with all claims being plausible and supported by current information.

