Intellectual property and AI do not seem to go together... not yet. The U.S. District Court for the Northern District of California has issued an important ruling in a lawsuit brought by a group of artists against Stability AI, Midjourney, DeviantArt, and Runway AI. The artists accuse the companies of using their copyrighted works to train artificial intelligence (AI) models without their consent, thereby infringing their rights.
The case, led by artists such as Sarah Andersen, Kelly McKernan, and Karla Ortiz, aims to clarify the legal consequences of using artistic works to train AI models. The plaintiffs allege that the companies used their works in datasets to train tools such as Stable Diffusion, a technology that enables the creation of images using AI. According to the lawsuit, these works were used without permission, which directly violates copyright law.
The controversy is not new, but it is highly topical, especially considering that Karla Ortiz is one of the illustrators for blockbuster films such as Thor: Ragnarok, Doctor Strange, and Jurassic World.
Court decision
The judge overseeing the case granted in part and denied in part the motions to dismiss the artists' claims. The court allowed the claim of direct copyright infringement against Stability AI to proceed. However, it dismissed other claims, including those for removing and modifying copyright management information under the Digital Millennium Copyright Act (DMCA).
This decision means that the case will continue. As Matthew Butterick, one of the plaintiffs' attorneys, explained in El País, "We may now request documents from the defendants and obtain witness statements. We will ask the companies that trained the AI models to generate images to give us information about how they copied the plaintiffs' work and how they used it to develop their tools."
A key issue was the plaintiffs' claim that Stability AI and Runway AI used a vast dataset of internet images to train their AI models. The artists argue that these images were retrieved from platforms such as DeviantArt without their permission, violating their intellectual property rights.
Ortiz celebrated the news of this court resolution, asserting on social media, "We may be one of the largest copyright infringement cases in history. We look forward to the next phase of our fight!" This partial judgment highlights the challenges at the intersection of intellectual property and artificial intelligence.
The court has also allowed the plaintiffs to amend certain dismissed claims, suggesting that the legal battle is far from over. The next steps could include new rounds of arguments or possibly a settlement between the parties.
More litigation
As early as 2022, some recognized that training AI models could lead to problems. In Butterick's case, Microsoft's Copilot made him sit up and take notice. In November of that year, he filed a lawsuit, which is still pending, accusing Bill Gates' company of violating open-source license agreements.
The illustrators' lawsuit followed in January 2023. In July of that year, a group of writers sued OpenAI and Meta for including books they had written in their training databases. In October 2023, several record companies, including Universal Music Group, sued Anthropic for training its algorithms with copyrighted material.
Since then, Getty Images has sued Stability AI for using images from its archives without permission. The New York Times has sued OpenAI and Microsoft for using millions of articles to train ChatGPT, and more writers have filed suit against Anthropic on similar grounds.
This wave of litigation shows no signs of ending, and with the rise of tools like Sora, audiovisual rights could also be at stake. Clearer regulation is therefore needed to spare both companies and authors from further copyright disputes.