The widespread use of generative artificial intelligence, or generative AI, is raising significant copyright concerns that could threaten the future and legality of this new technology, according to a recent study by Berkeley Law professor Pamela Samuelson.
The study, released Thursday, examines recent lawsuits filed by artists and writers that highlight the controversial nature of generative AI, a technology critics argue is “trained” on their original creations to generate content that could jeopardize their creative works and careers.
“The copyright claims are especially interesting because they, if upheld, would essentially shut down generative AI,” Samuelson said. “That would be a very big outcome.”
In her research, Samuelson examined copyright lawsuits filed since November, carefully reviewing documents such as complaints and motions from each case and later consulting with lawyers.
Since generative AI is a relatively new technology, there is limited existing literature on the subject, Samuelson noted. In addition, previous studies are often written for readers in legal fields rather than for scientific and technical audiences, she added.
“It’s important to try to distill these complicated legal claims and arguments to have a form (through which) people who are in the scientific and technical fields can get an accurate understanding,” Samuelson said.
The study also focuses on “innovation arbitrage,” the idea that generative AI developers may relocate to jurisdictions with more favorable regulatory conditions. One example is “training data scraping,” or the extraction of data to train AI systems, which, although unlawful in the United States, is legal in other countries. The study also examines actions by Congress and the U.S. Copyright Office regarding the use of in-copyright works in generative AI.
Samuelson noted that her research on the implications of advanced technologies for copyright law has been ongoing since the 1980s. She also serves as co-director of the Berkeley Center for Law & Technology. She added that her most recent research is the “latest manifestation” of the same kinds of issues.
Further, Samuelson emphasized the study’s role in raising awareness of the legal issues surrounding the technology.
“(The study) allows readers who are themselves either using or are trying to build large language models or image models at least some way to understand whether what they’re doing is likely to be lawful or not,” Samuelson said. “That seems to me to be a pretty useful service.”