Three authors have filed a class-action lawsuit against artificial intelligence startup Anthropic, alleging the company used pirated copies of their work to train its chatbot Claude.

While it is the first suit of its kind against Anthropic, a major player in the AI industry that has raised more than $7 billion over the past year, it joins a growing pile of copyright challenges targeting AI chatbots.

Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson seek to represent a class of fiction and non-fiction writers whose works, they allege, Anthropic used without permission or payment.

The writers claim Anthropic, backed by venture capital firms including Menlo Ventures, which pledged a $750 million investment, committed “large-scale theft” by using copyrighted materials to train Claude without permission.

Similar lawsuits have piled up against OpenAI and its chatbot ChatGPT, but this is the first suit against the smaller San Francisco-based startup.

Anthropic was co-founded by siblings Dario and Daniela Amodei, who both worked at OpenAI prior to starting their own company.

Anthropic has largely been viewed as more safety-conscious than its larger, well-known predecessor. 

It gained this image when Dario Amodei said he delayed releasing Claude over safety concerns, even though the technology was ready before OpenAI’s wildly popular ChatGPT.

Claude has been used to generate emails, summarize documents and interact with human users. But now it is facing the same claims of copyright infringement as its forebear. 

The lawsuit filed Monday in a San Francisco federal court claims Anthropic has “made a mockery of its lofty goals” by using pirated copies of authors’ work to train its chatbot.

The suit claims Anthropic built a multibillion-dollar business on the backs of authors, stealing copyrighted books and feeding them into its AI models to help the chatbot produce human-like text.

“It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works,” the lawsuit said.

California and New York AI startups have been hit with a growing number of copyright infringement lawsuits from authors, musicians, producers and other artists.

The Authors Guild filed a lawsuit against Microsoft-backed OpenAI last September on behalf of famous writers including John Grisham, Jodi Picoult and “Game of Thrones” author George R.R. Martin, accusing the startup of illegally using published works to train its chatbot.

OpenAI is facing lawsuits from media giants as well, including The New York Times, Chicago Tribune and Mother Jones.

The tech startups have argued they are protected under the “fair use” doctrine, which allows for limited use of copyrighted materials for teaching or when transforming the work into something new.

But the lawsuit argues the copyright infringement cannot be considered “fair use” because the chatbot is not being “taught” the way a human student is.
