Anthropic wins AI copyright ruling as judge says training on purchased books is fair use
A US judge has ruled that Anthropic's training of its AI on copyrighted books qualifies as fair use, but that its storage of pirated books does not. A trial has been set for December to determine damages.

In short
- The judge says training AI on books counts as fair use under copyright law
- Anthropic violated copyright by storing pirated books in a digital library
- A damages trial will decide how much Anthropic owes the authors
In a decision that may have widespread implications for the artificial intelligence (AI) industry, a federal judge in San Francisco has ruled that Anthropic's use of copyrighted books to train its AI model Claude falls under "fair use". However, the same court also found that Anthropic had violated copyright by storing pirated copies of books in a digital library, according to a Reuters report. The decision, filed on Monday by US District Judge William Alsup, is seen as a partial victory for the AI firm. The judge concluded that Anthropic's use of the writers' books to train its AI was legal, describing the use as "highly transformative" and in line with the fair-use principles set out in US copyright law.
"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different," Judge Alsup wrote.
Anthropic, backed by Amazon and Google, was sued last year by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson. The authors argued that Anthropic used pirated versions of their books, without permission or compensation, to teach its Claude models how to respond to users. The lawsuit is one of a number filed against major AI developers, including OpenAI, Meta and Microsoft, over copyright concerns.
An Anthropic spokesperson welcomed the court's verdict on fair use. "We are pleased that the court recognized that our AI training was transformative and consistent with copyright's purpose of promoting scientific progress," the spokesperson said.
However, Judge Alsup also ruled that Anthropic crossed a legal line by downloading and storing more than seven million pirated books in what was described as "a central library of all the books in the world". The judge stated that this storage was not protected by fair use and amounted to copyright infringement. A trial has been set for December to determine how much Anthropic owes the authors for the violations.
Under US law, willful copyright infringement can result in damages of up to $150,000 per work. The court will examine the scale and impact of the infringement before deciding on the amount.
Judge Alsup dismissed Anthropic's argument that the source of the books was irrelevant to fair use. "This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use," he wrote.
The ruling comes as AI companies face increasing legal scrutiny over how they acquire and use copyrighted material. Fair use has become a key defense for tech firms, which argue that their AI systems produce new and transformative material rather than mere copies.
Anthropic argued in court that training AI on existing content is not only permitted but encouraged by copyright law because it promotes innovation. The company said it used the books to study writing styles, extract ideas that are not protected by copyright, and develop advanced technology.
However, the authors argue that this use threatens their livelihoods. They say that training AI models on their work without permission creates tools that can generate competing material, putting their careers at risk.
While the judge's fair-use ruling favors Anthropic, the finding of copyright infringement over its storage practices underlines the legal gray area in which AI companies operate. Along with other cases, this one may set a precedent for how the use of copyrighted content in AI development is treated.
The ruling follows a recent lawsuit filed by Reddit against Anthropic for scraping users' comments without consent to train Claude. While that suit focuses on breach of contract rather than copyright, it adds to the growing pressure on AI companies to respect digital content boundaries.
Anthropic, founded by former OpenAI executives in 2021, is considered one of OpenAI's leading challengers. With backing from Amazon and Google parent Alphabet, its Claude chatbot has been integrated into products such as Alexa.