Technology Hive

AI learns how vision and sound are connected, without human intervention

by Adam Smith – Tech Writer & Blogger
May 22, 2025
in Artificial Intelligence (AI)

Introduction to Multimodal Learning

Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear. A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.

Improving AI Models

In the longer term, this work could improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected. Building on prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without the need for human labels. They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs at that moment.

Key Enhancements

The researchers also made some architectural tweaks that help the system balance two distinct learning objectives, which improves performance. Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.

Building AI Systems

“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.

Syncing Up

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels. The researchers feed this model, called CAV-MAE, unlabeled video clips and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.
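The pairing step described here amounts to a contrastive objective: embeddings from the same clip should land close together, while embeddings from different clips are pushed apart. The following is a minimal NumPy sketch of that idea, not the researchers' implementation — the real model learns the embeddings with transformer encoders, and the `info_nce` name and temperature value are illustrative.

```python
import numpy as np

def info_nce(audio_emb, video_emb, temperature=0.07):
    """Contrastive loss pulling matched audio/visual pairs together.

    audio_emb, video_emb: (batch, dim) embeddings where row i of each
    matrix comes from the same video clip.
    """
    # Normalize rows so dot products are cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (batch, batch) similarity matrix
    # Matched pairs sit on the diagonal; treat each as the "correct class"
    # in a softmax over that row.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two sets of embeddings actually correspond, the diagonal dominates each row and the loss is low; with mismatched pairs it climbs toward log(batch size), which is what drives the representations to align.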

Improving Performance

They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to recover video clips that match user queries. But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event happens in just one second of the video.
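One of the two objectives is reconstruction in the masked-autoencoder sense (the "MAE" in CAV-MAE): the model hides a large fraction of its tokens and is trained to fill them back in. A simplified sketch of the masking step — the 75% ratio mirrors common masked-autoencoder practice, and the function name and shapes are illustrative rather than taken from the paper:

```python
import numpy as np

def mask_tokens(tokens, mask_ratio=0.75, seed=0):
    """Hide a random fraction of tokens, masked-autoencoder style.

    tokens: (n, dim) array. Returns the visible tokens plus the kept and
    masked index sets; the reconstruction objective asks a decoder to
    recover the masked tokens from the visible ones.
    """
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    n_keep = int(round(n * (1 - mask_ratio)))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])
    mask_idx = np.sort(perm[n_keep:])
    return tokens[keep_idx], keep_idx, mask_idx
```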

CAV-MAE Sync

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio. During training, the model learns to associate one video frame with the audio that occurs during just that frame. “By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” says Edson Araujo, lead author of the paper.
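The windowing step can be illustrated with a few lines of array code. This is a simplified sketch under assumed shapes (equal-length windows, audio features already extracted); the function name is hypothetical:

```python
import numpy as np

def split_audio_windows(audio_feats, num_frames):
    """Split clip-level audio features into one window per video frame.

    audio_feats: (time, dim) features covering the whole clip.
    Returns (num_frames, window_len, dim), so that window i can be
    paired with video frame i during training.
    """
    time_steps, dim = audio_feats.shape
    window_len = time_steps // num_frames
    # Drop any trailing steps that don't fill a whole window.
    trimmed = audio_feats[: window_len * num_frames]
    return trimmed.reshape(num_frames, window_len, dim)
```

With one representation per window, a one-second door slam contributes only to the window (and frame) where it actually occurs, instead of being smeared across the whole 10-second clip.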

Adding “Wiggle Room”

The model incorporates a contrastive objective, where it learns to associate similar audio and visual data, and a reconstruction objective which aims to recover specific audio and visual data based on user queries. In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability. They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.
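The extra tokens can be pictured as learnable vectors prepended to each clip's token sequence before it enters the encoder. A hedged sketch — the token counts, dimension, and variable names here are illustrative stand-ins, and the random values take the place of parameters the real model would learn:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Stand-ins for learnable special tokens:
global_token = rng.normal(size=(1, dim))     # serves the contrastive objective
register_tokens = rng.normal(size=(2, dim))  # scratch space for reconstruction

def prepend_special_tokens(patch_tokens):
    """Prepend the global and register tokens to a clip's patch tokens
    before the sequence enters the encoder."""
    return np.concatenate([global_token, register_tokens, patch_tokens], axis=0)
```

Because the special tokens sit in the same sequence as the patch tokens, the encoder's attention can route clip-level summary information into them, leaving the patch tokens freer to carry the detail the reconstruction objective needs.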

Conclusion

While the researchers had some intuition these enhancements would improve the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go. “Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says. In the end, their enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like a dog barking or an instrument playing.

FAQs

Q: What is the main goal of the researchers’ work?
A: The main goal is to improve an AI model’s ability to learn by making connections between sight and sound, similar to how humans learn.
Q: What is CAV-MAE Sync?
A: CAV-MAE Sync is an improved model that splits audio into smaller windows and generates separate representations for each window, allowing for finer-grained correspondence between audio and visual data.
Q: What are the potential applications of this research?
A: The research could be useful in applications such as journalism, film production, and robotics, where the model could help with curating multimodal content and understanding real-world environments.
Q: How does the model learn to associate audio and visual data?
A: The model uses a contrastive objective to associate similar audio and visual data, and a reconstruction objective to recover specific audio and visual data based on user queries.
Q: What are the next steps for the researchers?
A: The researchers want to incorporate new models that generate better data representations into CAV-MAE Sync and enable their system to handle text data, which could improve performance and lead to the development of an audiovisual large language model.
