Introduction to the Case
OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher. At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI's decision to delete the datasets could prove a deciding factor that hands the authors a win.
The Datasets in Question
It's undisputed that OpenAI deleted the datasets, known as "Books 1" and "Books 2," prior to ChatGPT's release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web, with the bulk of their data sourced from a shadow library called Library Genesis (LibGen). As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.
Authors’ Suspicions and OpenAI’s Reversal
But the authors suspect there's more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets' "non-use" was a reason for deletion, then later claiming that all reasons for deletion, including "non-use," should be shielded under attorney-client privilege. To the authors, it seemed that OpenAI was quickly backtracking after the court granted their discovery requests to review OpenAI's internal messages about the datasets' "non-use."
Court Ruling
In fact, OpenAI's reversal only made the authors more eager to see how OpenAI discussed "non-use," and now they may get to find out all the reasons why OpenAI deleted the datasets. Last week, US Magistrate Judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as "all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege." According to Wang, OpenAI slipped up by arguing that "non-use" was not a "reason" for deleting the datasets while simultaneously claiming that "non-use" was a "reason" protected by privilege.
Conclusion
The court’s decision to order OpenAI to share its internal communications regarding the deletion of the datasets could have significant implications for the case. If OpenAI is found to have intentionally deleted the datasets to avoid liability, it could strengthen the authors’ claims of copyright infringement. The outcome of this case will be closely watched, as it could set a precedent for how AI companies use and protect copyrighted materials.
FAQs
- Q: What are the "Books 1" and "Books 2" datasets?
- A: The "Books 1" and "Books 2" datasets are collections of books compiled by OpenAI, largely from a shadow library called Library Genesis (LibGen), for training AI models.
- Q: Why did OpenAI delete these datasets?
- A: OpenAI claims the datasets were deleted because they fell out of use, but the company’s reasoning and the timing of the deletion are under scrutiny.
- Q: What is the significance of the court’s ruling?
- A: The court’s ruling requires OpenAI to disclose its internal communications about the deletion of the datasets, which could reveal whether the company acted to avoid liability for copyright infringement.
- Q: How might this case impact the future of AI and copyright law?
- A: The outcome could set a precedent for how AI companies must handle copyrighted materials, potentially affecting the development and training of future AI models.