Introduction to AI Training Data
A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates that OpenAI’s GPT-4o model demonstrates “strong recognition” of paywalled and copyrighted data from O’Reilly Media books.
The AI Disclosures Project
The AI Disclosures Project, led by technologist Tim O’Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency. The project’s working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.
Methodology and Findings
The study used a legally obtained dataset of 34 copyrighted O’Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method, which tests whether a model can pick a verbatim, human-authored excerpt out of a set of paraphrased LLM versions; reliably doing so suggests the model saw the original text during training (a simplified sketch of this approach follows the findings list). Key findings from the report include:
- GPT-4o shows “strong recognition” of paywalled O’Reilly book content, with an AUROC score of 82%. In contrast, OpenAI’s earlier model, GPT-3.5 Turbo, does not show the same level of recognition (an AUROC score just above 50%, barely better than chance).
- GPT-4o exhibits stronger recognition of non-public O’Reilly book content compared to publicly accessible samples (82% vs 64% AUROC scores respectively).
- GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples than non-public ones (64% vs 54% AUROC scores).
- GPT-4o Mini, a smaller model, shows no recognition of either public or non-public O’Reilly Media content when tested (AUROC of approximately 50%, i.e. chance level).
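To make the methodology concrete, here is a minimal sketch of how a DE-COP-style quiz and the AUROC scores above fit together. It is an illustration under stated assumptions, not the paper’s implementation: `ask_model` is a hypothetical stand-in for a real LLM API call, and the researchers’ actual scoring may differ in detail.

```python
# Sketch of a DE-COP-style membership inference test (illustrative only).
# `ask_model` is a hypothetical callable wrapping a real LLM API; it takes a
# list of candidate passages and returns the index of the one the model
# judges to be the original, human-authored text.
import random
from sklearn.metrics import roc_auc_score

def decop_trial(verbatim: str, paraphrases: list[str], ask_model) -> float:
    """One multiple-choice quiz: hide the verbatim excerpt among LLM
    paraphrases and record whether the model picks it out (1.0 or 0.0)."""
    options = paraphrases + [verbatim]
    random.shuffle(options)  # so position gives nothing away
    choice = ask_model(options)
    return 1.0 if options[choice] == verbatim else 0.0

def recognition_auroc(suspect_scores: list[float],
                      control_scores: list[float]) -> float:
    """AUROC over per-passage quiz results: 0.5 means the model cannot
    separate suspected training books from unseen controls (pure chance);
    1.0 means perfect separation."""
    labels = [1] * len(suspect_scores) + [0] * len(control_scores)
    return roc_auc_score(labels, suspect_scores + control_scores)
```

Read in these terms, GPT-3.5 Turbo’s score just above 50% on non-public samples is indistinguishable from guessing, while GPT-4o’s 82% means it identifies verbatim paywalled text far more often than chance would allow.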
Implications and Concerns
The researchers suggest that access violations may have occurred via the LibGen database, as all of the O’Reilly books tested were found there. They also acknowledge that newer LLMs are better at distinguishing human-authored from machine-generated language, but note that this does not reduce the method’s ability to classify training data. Finally, the study flags the potential for “temporal bias” in the results, since language changes over time.
Need for Transparency and Accountability
The AI Disclosures Project emphasises the need for stronger accountability in AI companies’ model pre-training processes. They suggest that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration. The EU AI Act’s disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced.
Emerging Solutions
Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.
Conclusion
The report concludes that, based on these 34 proprietary O’Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data. This raises important questions about the use of copyrighted data in AI model training and the need for greater transparency and accountability in the AI industry.
FAQs
- Q: What is the AI Disclosures Project?
  A: The AI Disclosures Project is a research initiative that aims to address the potentially harmful societal impacts of AI’s commercialisation by advocating for improved corporate and technological transparency.
- Q: What did the study find?
  A: The study found that OpenAI’s GPT-4o model demonstrates “strong recognition” of paywalled and copyrighted data from O’Reilly Media books, suggesting that the model may have been trained on copyrighted data without consent.
- Q: What are the implications of the study’s findings?
  A: The study’s findings raise important questions about the use of copyrighted data in AI model training and the need for greater transparency and accountability in the AI industry.
- Q: What solutions are emerging to address these issues?
  A: A market is emerging in which AI model developers pay for content through licensing deals, and companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.