A federal district court judge has thrown out most of the claims that a group of artists asserted against artificial intelligence (AI) platforms that they claim used their work without permission.
The case is Andersen et al. v. Stability AI Ltd.
Sarah Andersen and other artists filed a putative class action on behalf of themselves and other artists to challenge the defendants’ creation or use of Stable Diffusion, an AI software product.
Plaintiffs assert the following claims against all three sets of defendants (Stability AI Ltd. and Stability AI, Inc.; DeviantArt, Inc.; and Midjourney, Inc.):
- direct copyright infringement, 17 U.S.C. § 106;
- vicarious copyright infringement, 17 U.S.C. § 106;
- violation of the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. §§ 1201-1205;
- violation of the right of publicity, Cal. Civil Code § 3344;
- violation of the common law right of publicity;
- unfair competition, Cal. Bus. & Prof. Code § 17200; and
- declaratory relief.
Plaintiffs alleged that Stable Diffusion was “trained” on plaintiffs’ works of art to be able to produce Output Images “in the style” of particular artists.
The three sets of defendants each filed separate motions to dismiss. DeviantArt also filed a special motion to strike.
The judge granted the motions to dismiss and deferred ruling on the motion to strike.
According to the judge’s summary of the complaint,
Stability created and released in August 2022 a “general-purpose” software program called Stable Diffusion under a “permissive open-source license.” … Stability is alleged to have “downloaded or otherwise acquired copies of billions of copyrighted images without permission to create Stable Diffusion,” known as “training images”… . Over five billion images were scraped (and thereby copied) from the internet for training purposes for Stable Diffusion through the services of an organization (LAION, Large-Scale Artificial Intelligence Open Network) paid by Stability. … Stability’s founder and
CEO “publicly acknowledged the importance of using licensed training images, saying that future versions of Stable Diffusion would be based on ‘fully licensed’ training images. But for the current version, he took no steps to obtain or negotiate suitable licenses.”
LAION scraped five billion images to create the training image datasets.
Consumers use the AI platforms at issue by entering text prompts into the programs to create images “in the style” of specified artists. The new images are created “through a mathematical process,” are based entirely on the training images, and are “derivative” of the training images.
The defendants sought to dismiss the copyright claims of two of the plaintiffs because neither had registered her images with the US Copyright Office, as is required before the owner of a work can sue for infringement. They also sought to limit Andersen’s copyright claim to the 16 collections of works that she had registered. The judge granted these motions.
The defendants argued that Andersen can’t proceed with her copyright infringement allegations unless she identifies with specificity each of her registered works that she believes was used as a training image for Stable Diffusion.
The court noted that a search of her name on the “haveibeentrained.com” website supported the plausibility and reasonableness of her belief that her works were, in fact, used in the LAION datasets and in training Stable Diffusion. The court found that to be a sufficient basis to allow her copyright claims to proceed.
The court noted that it’s unclear exactly how the AI platforms work:
It is unclear, for example, if Stable Diffusion contains only algorithms and instructions that can be applied to the creation of images that include only a few elements of a copyrighted Training Image, whether DeviantArt or Midjourney can be liable for direct infringement by offering their clients use of the Stable Diffusion “library” through their own apps and websites. But if plaintiffs can plausibly plead that defendants’ AI products allow users to create new works by expressly referencing Andersen’s works by name, the inferences about how and how much of Andersen’s protected content remains in Stable Diffusion or is used by the AI end-products might be stronger.
One of the plaintiffs’ theories of infringement was that the output images based on the Training Images are all infringing derivative works.
The court noted that to support that claim the output images would need to be substantially similar to the protected works. However, noted the court,
none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data.
The plaintiffs argued that there was no need to show substantial similarity when there was direct proof of copying. The judge was skeptical of that argument.
This is just one of many AI-related cases making their way through the courts, and this is just a ruling on a motion rather than an appellate court decision. Nevertheless, this line of analysis will likely be cited in other cases now pending.
Also, this case shows the importance of artists registering their works with the Copyright Office before seeking to sue for infringement.
We like to keep our posts short and sweet. Hopefully, you found this bite-sized information helpful. If you would like more information, please do not hesitate to contact us.