AI-generated code could be a disaster for the software supply chain

By Linda Torries – Tech Writer & Digital Trends Analyst
April 29, 2025, in Technology

Introduction to AI Hallucinations

In AI, a hallucination occurs when a large language model (LLM) produces output that is factually incorrect, nonsensical, or unrelated to the task it was given. Hallucinations have long dogged LLMs because they degrade the models’ usefulness and trustworthiness, and they have proven vexingly difficult to predict and remedy. Recently, a related phenomenon known as “package hallucination” was documented in a study scheduled to be presented at the 2025 USENIX Security Symposium.

What are Package Hallucinations?

Package hallucinations occur when an LLM generates code that references packages that do not exist. For the study, the researchers ran 30 tests, 16 in Python and 14 in JavaScript, each generating 19,200 code samples, for a total of 576,000 samples. Of the 2.23 million package references contained in those samples, 440,445, or 19.7 percent, pointed to packages that didn’t exist. Those 440,445 hallucinated references included 205,474 unique package names.
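
To make the counting concrete, here is a minimal sketch of how package references might be extracted from a generated code sample, loosely mirroring the kind of tally the study describes (the researchers’ actual pipeline may differ, and the hallucinated package name below is invented for illustration). Note that an import name does not always match the registry name (PyPI’s pyyaml installs as yaml, for example), so real tooling would also need a mapping step.

import ast

sample = """
import requests
import flask_signing_tools  # plausible-sounding, but hypothetically nonexistent
from numpy import array
"""

def top_level_imports(source: str) -> set[str]:
    # Collect the top-level package names a code sample imports.
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

print(top_level_imports(sample))  # e.g. {'requests', 'flask_signing_tools', 'numpy'}

Each extracted name can then be checked against the package registry; any name the registry has never seen counts as a hallucination.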

The Threat of Package Hallucinations

What makes package hallucinations potentially useful in supply-chain attacks is their repeatability: 43 percent of hallucinated packages recurred across 10 repeated runs of the same query. In other words, specific non-existent package names come back over and over, making them a predictable and exploitable target. Attackers could seize on the pattern by identifying names that are repeatedly hallucinated, publishing malware under those names, and waiting for large numbers of developers to install it.
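
The defensive counterpart is straightforward: before trusting an LLM-suggested dependency, check whether the name exists in the registry at all. The sketch below queries PyPI’s public JSON API, where an HTTP 404 means the package does not exist. Existence alone is not proof of safety, since an attacker may already have registered a hallucinated name, so treat this as a first filter rather than a complete defense (the second package name is again invented):

import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    # PyPI returns HTTP 200 for known packages and 404 for unknown ones.
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

for name in ["requests", "flask_signing_tools"]:
    print(name, "exists on PyPI" if exists_on_pypi(name) else "NOT FOUND on PyPI")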

Patterns and Disparities in Package Hallucinations

The study also uncovered disparities among the LLMs and programming languages tested. Open source LLMs such as CodeLlama and DeepSeek hallucinated packages in nearly 22 percent of cases on average, compared with a little more than 5 percent for commercial models. Python code fared better than JavaScript, with an average hallucination rate of almost 16 percent versus a little over 21 percent.

Conclusion

Package hallucinations pose a significant threat to the security of software development, particularly in the context of supply-chain attacks. The predictable and repeatable nature of these hallucinations makes them a valuable target for malicious actors. As the use of LLMs in software development continues to grow, it is essential to address this vulnerability and develop strategies to prevent and mitigate the effects of package hallucinations.

FAQs

  • Q: What are package hallucinations?
    A: Package hallucinations occur when an LLM generates code that references non-existent packages.
  • Q: How common are package hallucinations?
    A: According to the study, 19.7 percent of package references pointed to packages that didn’t exist.
  • Q: Can package hallucinations be exploited by attackers?
    A: Yes, package hallucinations can be used in supply-chain attacks, particularly if the same non-existent package names are repeated over and over.
  • Q: Are some LLMs or programming languages more prone to package hallucinations?
    A: Yes, the study found that open source LLMs and JavaScript code were more likely to produce package hallucinations than commercial models and Python code.