Technology Hive

Why Do Large Language Models Fabricate Information?

by Linda Torries – Tech Writer & Digital Trends Analyst
March 29, 2025
in Technology

Introduction to Artificial Intelligence Models

Fine-tuning helps mitigate the problem of artificial intelligence models providing inaccurate or unrelated responses: it guides the model to act as a helpful assistant and to refuse to complete a prompt when the relevant training data is sparse. That fine-tuning process creates distinct sets of artificial neurons, or "features," that researchers can see activating when the model encounters the name of a "known entity" (e.g., "Michael Jordan") or an "unfamiliar name" (e.g., "Michael Batkin") in a prompt.
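To make the idea concrete, here is a deliberately simplified toy sketch (not the researchers' actual method or code): treat a "feature" as a score derived from how often an entity appeared in training data, with the mention counts and threshold below being purely illustrative assumptions.

```python
# Toy illustration only: a "feature" here is just a boolean derived from an
# assumed count of training-data mentions. Real model features are learned
# activation patterns, not lookup tables.

TRAINING_MENTIONS = {"Michael Jordan": 50_000, "Michael Batkin": 0}  # assumed counts

def entity_features(name: str, threshold: int = 100) -> dict:
    """Return toy 'known entity' / 'unfamiliar name' feature activations."""
    mentions = TRAINING_MENTIONS.get(name, 0)
    return {
        "known_entity": mentions >= threshold,
        "unfamiliar_name": mentions < threshold,
    }

print(entity_features("Michael Jordan"))   # known_entity activates
print(entity_features("Michael Batkin"))   # unfamiliar_name activates
```

The point of the sketch is only that the two features are mutually exclusive signals about familiarity, which the rest of the article builds on.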

How the Model Works

Activating the "unfamiliar name" feature among an LLM’s neurons tends to promote an internal "can’t answer" circuit in the model, encouraging it to provide a response starting along the lines of "I apologize, but I cannot…" In fact, the researchers found that the "can’t answer" circuit tends to default to the "on" position in the fine-tuned "assistant" version of the model, making the model reluctant to answer a question unless other active features in its neural net suggest that it should.
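That default-refuse behaviour can be sketched as a tiny decision rule: the "can’t answer" circuit starts on, and only a sufficiently strong familiarity signal switches it off. The function name and threshold below are illustrative assumptions, not values from the research.

```python
# Toy sketch of the described default: refusal is the starting state, and a
# strong enough "known entity" activation inhibits it. Threshold is invented.

def should_refuse(known_entity_activation: float,
                  inhibit_threshold: float = 0.5) -> bool:
    """Refuse by default; answer only when the familiarity signal inhibits
    the 'can't answer' circuit."""
    if known_entity_activation > inhibit_threshold:
        return False  # inhibition fires: model proceeds to answer
    return True       # default 'on' position: refuse

print(should_refuse(0.9))  # strong familiarity -> answers (False)
print(should_refuse(0.1))  # weak familiarity -> refuses (True)
```

The design choice worth noting: making refusal the default means a failure to *inhibit* produces a safe refusal, while a spurious inhibition produces an answer the model has no grounds for.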

Recognition vs. Recall

When the model encounters a well-known term like "Michael Jordan" in a prompt, it activates the "known entity" feature, which causes the neurons in the "can’t answer" circuit to be "inactive or more weakly active." Once that happens, the model can dive deeper into its graph of Michael Jordan-related features to provide its best guess at an answer to a question like "What sport does Michael Jordan play?" On the other hand, the researchers found that artificially increasing the weights of the neurons in the "known answer" feature could force the model to confidently hallucinate information about completely made-up athletes like "Michael Batkin."
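The "artificially increasing the weights" experiment can be illustrated with a toy steering function: adding an external boost to a low familiarity activation pushes it past the inhibition threshold, flipping the model from refusing to confidently answering about someone it knows nothing about. All numbers and names here are hypothetical.

```python
# Toy illustration of feature steering: a boost added to a low base activation
# can clear the (invented) inhibition threshold and suppress the refusal,
# producing a confident answer with no training data behind it.

def respond(name: str, base_activation: float,
            steering_boost: float = 0.0,
            inhibit_threshold: float = 0.5) -> str:
    activation = base_activation + steering_boost
    if activation > inhibit_threshold:
        return f"Answering about {name}"   # hallucination risk if base is low
    return "I apologize, but I cannot..."

print(respond("Michael Batkin", base_activation=0.1))                      # refuses
print(respond("Michael Batkin", base_activation=0.1, steering_boost=0.8))  # answers anyway
```

The steered case answers even though nothing about the entity changed, which mirrors the article's point: the refusal depends on the inhibition signal, not on the model actually knowing anything.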

Understanding Hallucinations

The researchers suggest that "at least some" of the model’s hallucinations are related to a "misfire" of the circuit inhibiting that "can’t answer" pathway—that is, situations where the "known entity" feature (or others like it) is activated even when the token isn’t actually well-represented in the training data. This highlights the importance of fine-tuning and the need for more research into how artificial intelligence models process and respond to different types of input.
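One way to picture such a misfire: if the familiarity signal is noisy, a name that is only weakly represented in training data will occasionally clear the inhibition threshold anyway, so the model answers (and likely hallucinates) instead of refusing. The simulation below is a hypothetical sketch with invented parameters, not a model of real circuit dynamics.

```python
# Toy simulation of the 'misfire' idea: a weak true familiarity (0.3) plus
# symmetric noise sometimes exceeds an invented inhibition threshold (0.5),
# spuriously suppressing the refusal.

import random

def noisy_activation(true_familiarity: float, noise: float, seed: int) -> float:
    random.seed(seed)
    return true_familiarity + random.uniform(-noise, noise)

misfires = sum(
    noisy_activation(true_familiarity=0.3, noise=0.4, seed=seed) > 0.5
    for seed in range(1000)
)
print(f"misfires in 1000 trials: {misfires}")  # a nonzero fraction of trials
```

Under these assumptions some fraction of trials misfires, which is the shape of the claim above: hallucinations as occasional spurious inhibitions of the "can’t answer" pathway rather than a constant failure mode.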

Conclusion

The fine-tuning process of artificial intelligence models is crucial in mitigating the problem of inaccurate or unrelated responses. The model’s ability to recognize and respond differently to "known entities" and "unfamiliar names" is a key aspect of its behaviour, and understanding how that mechanism works can help improve its performance and reduce hallucinations.

FAQs

  • Q: What is fine-tuning in artificial intelligence models?
    A: Fine-tuning is the process of adjusting the model’s parameters to improve its performance on a specific task or dataset.
  • Q: What is the "can’t answer" circuit in the model?
    A: The "can’t answer" circuit is a mechanism that prevents the model from providing an answer when it is unsure or lacks sufficient information.
  • Q: What are hallucinations in artificial intelligence models?
    A: Hallucinations refer to the model’s tendency to provide false or inaccurate information, often due to a "misfire" of the circuit inhibiting the "can’t answer" pathway.
  • Q: How can hallucinations be reduced in artificial intelligence models?
    A: Hallucinations can be reduced through fine-tuning, improving the model’s training data, and adjusting its parameters to prevent overconfidence in its responses.
© Copyright 2025. All Right Reserved By Technology Hive.
