Artificial Adversarial Intelligence: A Game-Changer in Cybersecurity
The Challenge of Evading Persistent Hackers
If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: an elusive target narrowly escapes its formidable adversary. This game of "cat and mouse" – whether literal or otherwise – involves pursuing something that ever-so-narrowly slips away at every attempt.
In a similar way, evading persistent hackers is a continuous challenge for cybersecurity teams. To keep attackers chasing what’s just out of reach, MIT researchers are working on an AI approach called "artificial adversarial intelligence" that mimics attackers of a device or network, letting teams test their defenses before real attacks happen. Other AI-based defensive measures help engineers further fortify their systems against ransomware, data theft, and other hacks.
How Artificial Adversarial Intelligence Works
Artificial adversarial intelligence is designed to imitate the tactics and strategies of cyber attackers. It involves creating AI-powered agents that emulate the behavior of real-life hackers, allowing cybersecurity teams to anticipate and prepare for potential attacks.
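To make the idea concrete, here is a minimal toy sketch (not MIT's actual system) of the adversarial dynamic: an "attacker" agent mutates an attack signature to slip past a simple distance-based detector, while the "defender" learns each variant it manages to block. Every number, function, and threshold here is a hypothetical illustration.

```python
# Toy red-team/blue-team loop: the attacker agent mutates a one-dimensional
# attack "signature"; the defender flags anything close to a known signature
# and adds each blocked variant to its knowledge base.
import random

random.seed(42)  # make the simulated exercise repeatable

def detector_score(sample, known_attacks):
    # Defender: distance from the nearest known attack signature.
    # A small distance means the sample looks like a known attack.
    return min(abs(sample - a) for a in known_attacks)

def run_exercise(rounds=50, threshold=0.1):
    known_attacks = [0.5]   # signatures the defender already knows
    attack = 0.5            # the agent starts from a well-known exploit
    evasions = 0
    for _ in range(rounds):
        # Attacker: perturb the current signature, seeking an evasive variant.
        candidate = attack + random.uniform(-0.2, 0.2)
        if detector_score(candidate, known_attacks) > threshold:
            evasions += 1                    # variant slipped past the detector
            attack = candidate               # keep the successful mutation
        else:
            known_attacks.append(candidate)  # defender learns the blocked variant
    return evasions, len(known_attacks)

evasions, signatures = run_exercise()
print(f"evasions: {evasions}, learned signatures: {signatures}")
```

Even in this toy form, the loop shows why the approach is useful: every successful evasion reveals a blind spot the defender can patch before a real attacker finds it.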
Interview with Una-May O’Reilly
I had the opportunity to speak with Una-May O’Reilly, an MIT principal investigator who leads the Anyscale Learning For All Group (ALFA).
Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does artificial adversarial intelligence portray a cyber defender?
A: Cyber attackers exist along a competence spectrum. At the lowest end, there are so-called script-kiddies, or threat actors who spray well-known exploits and malware in the hopes of finding some network or device that hasn’t practiced good cyber hygiene. In the middle are cyber mercenaries who are better-resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect "advanced persistent threats" (or APTs).
Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
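The "detectors tuned to anomalous behavior" mentioned above can be sketched in a few lines. This is an illustrative example only: a z-score check against a learned baseline, with hypothetical failed-login counts standing in for real telemetry.

```python
# Minimal anomaly detector: learn a baseline from "normal" observations,
# then flag values that deviate from it by more than a few standard deviations.
from statistics import mean, stdev

def fit_baseline(samples):
    # Summarize normal behavior as (mean, standard deviation).
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, z_threshold=3.0):
    mu, sigma = baseline
    return abs(value - mu) > z_threshold * sigma

# Hypothetical training data: daily failed-login counts on one server.
normal_days = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5]
baseline = fit_baseline(normal_days)

print(is_anomalous(4, baseline))    # → False: a typical day
print(is_anomalous(120, baseline))  # → True: possible brute-force attempt
```

Production detectors use far richer features and models, but the principle is the same: characterize normal behavior, then surface whatever doesn't fit it.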
Conclusion
Artificial adversarial intelligence is a powerful tool in the fight against cyber threats. By mimicking the tactics and strategies of cyber attackers, AI-powered agents help cybersecurity teams anticipate and prepare for potential attacks. As new risks emerge, adversarial intelligence agents can quickly adapt to address them, keeping our systems and networks safe and secure.
FAQs
Q: What are some of the new risks that artificial adversarial intelligence is adapting to?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
Q: How do you see adversarial intelligence evolving in the future?
A: We will need to translate that expertise into AI-based products and services that automate some of those efforts. And, of course, we'll need to keep designing smarter and smarter adversarial agents to keep us on our toes, or to help us practice defending our cyber assets.