What Does It Mean for an AI Model to Be “Open”?
As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.
Applying Lessons from Software Security to AI Systems
Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasized the importance of applying lessons learned from software security to AI systems. He noted that the US government’s 2021 Executive Order on Improving the Nation’s Cybersecurity includes a provision requiring organizations to produce a software bill of materials (SBOM) for each product sold to federal government agencies. An SBOM is essentially an inventory detailing the open-source components within a product, which helps organizations detect vulnerabilities in their dependencies. Stiefel argued that applying these same principles to AI systems is the logical next step.
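SBOMs are typically shipped in a machine-readable format such as CycloneDX or SPDX. As a rough illustration of the idea, here is a minimal sketch of a CycloneDX-style component inventory built in Python; the component names and versions are invented examples, and a real SBOM would follow the full CycloneDX specification:

```python
import json

# Illustrative, abbreviated CycloneDX-style SBOM: an inventory of the
# open-source components inside a product, with package URLs (purl)
# that let scanners match components against vulnerability databases.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
        {"type": "library", "name": "numpy", "version": "1.26.4",
         "purl": "pkg:pypi/numpy@1.26.4"},
    ],
}

# Serialize the inventory so it can be delivered alongside the product.
print(json.dumps(sbom, indent=2))
```

The same pattern is what an “AI bill of materials” would extend: alongside libraries, the inventory would list datasets, weights, and training code.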
Providing Better Transparency
Providing better transparency for citizens and government employees not only improves security, but also gives visibility into a model’s datasets, training process, weights, and other components. According to Stiefel, "this will make it easier for the community to audit their systems for security risks and also for individuals and organizations to run their own versions of DeepSeek in production."
What is an "Open" AI Model?
Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and "openness." Sobrier broke down the complexity inherent in categorizing AI systems as truly open.
"An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now."
The Need for Consistency and Clarity
Sobrier noted the lack of consistency across major players, which has led to confusion about the term. "Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their Llama model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing’, as we saw with free vs open-source software."
The Danger of "Open-Washing"
Sobrier highlighted the increasingly common practice of "open-washing," where organizations claim transparency while imposing restrictions. "With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: The source code is still open, but they added many commercial restrictions."
DeepSeek’s Approach to Transparency
DeepSeek, one of the rising players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.
Conclusion
As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centered around three key steps: discovery, evaluation, and response. "The key is finding the right balance between enabling innovation and managing risk. We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act."
FAQs
- What does it mean for an AI model to be "open"?
  An AI model is considered open when its components, including the training data, weights, and the programs used to train and test it, are made available as open source.
- What is the danger of "open-washing"?
  Open-washing occurs when organizations claim transparency while imposing restrictions, making it difficult to use the model or contribute to its development.
- How can organizations ensure the security of their AI models?
  By following a systematic approach of discovery, evaluation, and response, organizations can secure their AI models and mitigate potential risks.
- What is the significance of open-source AI adoption?
  Open-source AI adoption is gaining momentum, with 60% of organizations opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects.