How AI Boosts Cybercriminal Activity at Scale
Since OpenAI’s breakthrough with Large Language Models (LLMs), threats to organisations have skyrocketed. That’s because every significant technological leap also benefits bad actors: cybercriminals now use LLMs and generative AI to execute hyper-localised attack playbooks at scale and to obfuscate the traditional ‘tells’ that alert employees and organisations to an attack.
We’re in a metaphorical arms race with threat actors to make the best use of AI. So, who is winning?
Historically, phishing emails carried obvious clues to their malicious intent: spelling mistakes, grammatical errors, and language gaps raised red flags for most readers. Now, AI helps attackers craft polished, convincing messages that closely mimic local vernacular, internal corporate lingo, and a professional tone. This isn’t your grandma’s spam inbox; the old tells are no longer reliable.
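To see why those old tells no longer hold, here is a minimal sketch, in Python, of the kind of rule-based check legacy filters relied on. The word list and scoring are entirely illustrative, not any real product’s logic:

```python
import re

# Purely illustrative 'tells': the misspellings legacy filters flagged.
# Real products used far larger lists with weighted scoring.
COMMON_MISSPELLINGS = {"recieve", "acount", "verifcation", "pasword"}

def crude_tell_score(message: str) -> int:
    """Count classic phishing tells: known misspellings and ALL-CAPS urgency."""
    words = re.findall(r"[a-z']+", message.lower())
    score = sum(1 for word in words if word in COMMON_MISSPELLINGS)
    score += len(re.findall(r"\b[A-Z]{4,}\b", message))  # shouty urgency
    return score

# A sloppy, old-school lure trips the rules...
print(crude_tell_score("URGENT!!! Verifcation needed, we could not recieve pasword"))  # 4
# ...while an AI-polished lure scores zero and slips straight through.
print(crude_tell_score("Hi Sam, following our Q3 vendor review, please approve the attached invoice by end of day."))  # 0
```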
The same is true for scams that use voice and video. AI makes impersonation far easier via ‘deepfakes’: synthetic video, audio, or imagery convincing enough to fool both people and existing cybersecurity defences, enabling crimes such as financial fraud. In one widely reported case, a voice model trained on a CEO’s publicly available recordings was used to initiate wire fraud worth nearly a quarter of a million dollars.
And it doesn’t stop there: AI can help automate attacks, identify vulnerabilities, create more sophisticated malware, analyse vast amounts of sensitive data, and reverse engineer security tools. With AI’s assistance, the technical barrier to entry for cybercrime has dropped sharply. It is little surprise, then, that HackerOne’s 2024 Hacker-Powered Security Report found that 48 per cent of respondents believe AI poses the biggest security risk to their company.
How Teams Can Use AI to Combat Threats
The good news is that security teams can harness AI to counter these threats, becoming faster, smarter, and better able to spot threats and vulnerabilities at scale. One group already doing so is security researchers, whose role is to actively search for vulnerabilities in software and systems and responsibly disclose them to companies before cybercriminals can find them.
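As a hedged sketch of what that can look like in practice, here is a minimal triage helper that asks an LLM to flag a suspicious email for human review. It assumes the official OpenAI Python SDK with an API key set in the environment; the model name and prompt wording are illustrative choices, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def triage_email(subject: str, body: str) -> str:
    """Ask the model for a 'phishing' / 'benign' verdict with a short rationale."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Classify the email as "
                    "'phishing' or 'benign' and give a one-sentence reason."
                ),
            },
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content

# Example: surface a suspicious wire-transfer request for human review.
print(triage_email(
    "Invoice approval needed today",
    "Please wire $240,000 to our supplier's new account before close of business.",
))
```

A verdict like this should route an email to a human analyst rather than trigger automated action, since LLM classifiers can be confidently wrong.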
Conclusions
In such a close-run race, it can be hard to tell who is in front: the cybercriminals or the security professionals. It’s fair to say that, for the time being, security teams hold the upper hand, but bad actors are not far behind. By exploring new ways AI can support security teams, organisations can ensure their defences remain ahead of the game.
Frequently Asked Questions
- What is AI’s impact on cybersecurity?
  AI is helping boost cybercriminal activity at scale, making it more difficult for security teams to detect and prevent attacks.
- Can AI be used to combat cyber threats?
  Yes, security teams can use AI to identify threats, automate routine tasks, and stay ahead of cybercriminals.
- Who is winning the race between cybercriminals and security professionals?
  For now, security teams hold the upper hand, but it’s a close-run race.
- How can organisations stay ahead of the game?
  By exploring new ways AI can support security teams and staying up to date with the latest AI-powered cybersecurity strategies.