• About Us
  • Contact Us
  • Terms & Conditions
  • Privacy Policy
Technology Hive
  • Home
  • Technology
  • Artificial Intelligence (AI)
  • Cyber Security
  • Machine Learning
  • More
    • Deep Learning
    • AI in Healthcare
    • AI Regulations & Policies
    • Business
    • Cloud Computing
    • Ethics & Society
No Result
View All Result
  • Home
  • Technology
  • Artificial Intelligence (AI)
  • Cyber Security
  • Machine Learning
  • More
    • Deep Learning
    • AI in Healthcare
    • AI Regulations & Policies
    • Business
    • Cloud Computing
    • Ethics & Society
No Result
View All Result
Technology Hive
No Result
View All Result
Home Machine Learning

Will AI Increase Cyberattacks?

Sam Marten – Tech & AI Writer by Sam Marten – Tech & AI Writer
March 12, 2025
in Machine Learning
0
Will AI Increase Cyberattacks?
0
SHARES
0
VIEWS
Share on FacebookShare on Twitter

Introduction to AI-Related Cyber Threats

The growth in the use of artificial intelligence (AI) is impacting businesses in many ways, but one of the most dangerous could be as a result of exposing them to cyber threats. According to Gigamon’s 2024 Hybrid Cloud Security Survey, released in June 2024, 82 per cent of security and IT leaders around the world believe the global ransomware threat will grow as AI becomes more commonly used in cyberattacks.

AI Making Cyberattacks More Sophisticated

One of the biggest risks comes from the use of AI to create much more convincing phishing and social engineering attacks. “Cybercriminals can use tools like ChatGPT to craft highly convincing emails and messages,” says Dan Shiebler, head of machine learning at Abnormal Security. “It’s now easier than ever for a threat actor to create perfectly written and even personalised email attacks, making them more likely to deceive recipients.”

AI is also creating entirely new ways to impersonate people. Four in 10 security leaders say they have seen an increase in deepfake-related attacks over the last 12 months, the Gigamon survey finds. “Deepfake technology holds real potential to manipulate employees into sharing personal details or even sending money through false video calls, recordings and phone calls,” says Mark Jow, EMEA technical evangelist at Gigamon.

In February 2024, a finance worker for engineering firm Arup was tricked into making a payment of $25.6 million after scammers impersonated the company’s chief financial officer (CFO) and several other staff members on a group live video chat. “The victim originally received a message purportedly from the UK-based CFO asking for the funds to be transferred,” says Chris Hawkins, security consultant at Prism Infosec.

“The request seemed out of the ordinary, so the worker went on a video call to clarify whether it was a legitimate request. Unknown to them, they were the only real person on the call. Everyone else was a real-time deepfake. The most difficult deepfakes to spot are audio followed by photos and then video, and for this reason it’s vishing attacks that are the main cause for concern in the industry at the present time.”

But AI is also being deployed by cybercriminals to identify opportunities and vulnerabilities to carry out distributed denial-of-service (DDoS) attacks. “It is being used both to better profile a target for selection of the initial attack vectors to be used, to ensure that they will have the highest impact, and to ‘tune’ an ongoing attack to overcome defences as they react,” says Darren Anstee, chief technology officer for security at NETSCOUT. “These capabilities mean that attacks can have a higher initial impact, with little or no warning, and can also change frequently to circumvent static defences.”

Mind Your Business – and Its Use of AI

Organisations are also potentially exposing themselves to cyber threats through their own use of AI. According to research by law firm Hogan Lovells, 56 per cent of compliance leaders and C-suite executives believe misuse of generative AI within their organisation is a top technology-associated risk that could impact their organisation over the next few years. Despite this, over three-quarters (78 per cent) of leaders say their organisation allows employees to use generative AI in their daily work.

One of the biggest threats here is so-called ‘shadow AI’, where criminals or other actors make use of, or manipulate, AI-based programmes to cause harm. “One of the key risks lies in the potential for adversaries to manipulate the underlying code and data used to develop these AI systems, leading to the production of incorrect, biased or even offensive outcomes,” says Isa Goksu, UK and Ireland chief technology officer at Globant. “A prime example of this is the danger of prompt injection attacks. Adversaries can carefully craft input prompts designed to bypass the model’s intended functionality and trigger the generation of harmful or undesirable content.”

Jow believes organisations need to wake up to the risk of such activities. “These services are often free, which appeals to employees using AI applications off the record, but they generally carry a higher level of security risk and are largely unregulated,” he says. “CISOs must ensure that their AI deployments are secure and that no proprietary, confidential or private information is being provided to any insecure AI solutions.

“But it is also critical to challenge the security of these tools at the code level,” he adds. “Is the AI solution provided by a trusted and reputable provider? Any solutions should be from a trusted nation state, or a corporation with a good history of data protection, privacy and compliance.” A clear AI usage policy is needed, he adds.

What Can I Do to Reduce the Threat?

There are other steps organisations can take to reduce the risk of being negatively impacted by AI-related cyber threats, although currently 40 per cent of chief information security officers have not yet altered their priorities as a result, according to research by ClubCISO.

Educating employees on the evolving threat is vital, says Hawkins, but he points out that in the Arup attack the person in question had raised concerns. “Employee vigilance is only one piece of the puzzle and should be used in conjunction with a resilient data recovery plan and thorough Defence in Depth, with large money transfers requiring the sign-off of several senior members of staff,” he says.

Ev Kontsevoy, CEO of cybersecurity startup Teleport, believes organisations need to overhaul their approach around both credentials and privileges. “By securing identities cryptographically based on physical world attributes that cannot be stolen, like biometric authentication, and enforcing access based on ephemeral privileges that are granted only for the period of time that work needs to be completed, companies can materially reduce the attack surface that threat actors are targeting with these strategies,” he suggests.

The bottom line is that organisations will need to draw on a variety of techniques to ensure they can keep up with the new threats that are emerging because of AI. “In the coming years, cybercriminals are expected to increasingly exploit AI, automating and scaling attacks with sophisticated, undetectable malware and AI-powered reconnaissance tools,” points out Goksu. “This could flood platforms with AI-generated content, deepfakes and misinformation, amplifying social engineering risks.

“Firms not keeping pace risk vulnerabilities in critical AI systems, potentially leading to costly failures, legal issues and reputational harm. Failure to invest in training, security and AI defences may expose them to devastating attacks and eroded customer trust.”

Read More

  • The importance of disaster recovery and backup in your cybersecurity strategy – A strong disaster recovery as-a-service (DRaaS) solution can prove the difference between success and failure when it comes to keeping data protected
  • Can NIS2 and DORA improve firms’ cybersecurity? Daniel Lattimer, Area VP at Semperis, explores NIS2 and DORA to see how they compare to more prescriptive compliance models
  • The changing role of the CISO – The cybersecurity head of any organisation has moved from being purely tech and reactive to someone forward-thinking and strategic. Lamont Orange looks at how to navigate the changing role of the CISO

Conclusion

AI-related cyber threats are a growing concern for organisations, and it is crucial to take proactive steps to mitigate these risks. By understanding the threats, implementing robust security measures, and educating employees, organisations can reduce the likelihood of falling victim to AI-powered cyberattacks.

FAQs

Q: What is the biggest risk of AI in cyberattacks?
A: The biggest risk is the use of AI to create convincing phishing and social engineering attacks, as well as the potential for deepfakes and other forms of impersonation.
Q: How can organisations protect themselves from AI-related cyber threats?
A: Organisations can protect themselves by implementing robust security measures, educating employees, and ensuring that their AI deployments are secure.
Q: What is the role of employee vigilance in preventing AI-related cyber threats?
A: Employee vigilance is crucial in preventing AI-related cyber threats, but it should be used in conjunction with a resilient data recovery plan and thorough Defence in Depth.
Q: What is the importance of a clear AI usage policy?
A: A clear AI usage policy is essential to ensure that organisations are using AI in a secure and responsible manner, and to mitigate the risks associated with AI-related cyber threats.

Previous Post

Neural Networks Decoded: Concepts Over Code

Next Post

Optimizing AI Workloads

Sam Marten – Tech & AI Writer

Sam Marten – Tech & AI Writer

Sam Marten is a skilled technology writer with a strong focus on artificial intelligence, emerging tech trends, and digital innovation. With years of experience in tech journalism, he has written in-depth articles for leading tech blogs and publications, breaking down complex AI concepts into engaging and accessible content. His expertise includes machine learning, automation, cybersecurity, and the impact of AI on various industries. Passionate about exploring the future of technology, Sam stays up to date with the latest advancements, providing insightful analysis and practical insights for tech enthusiasts and professionals alike. Beyond writing, he enjoys testing AI-powered tools, reviewing new software, and discussing the ethical implications of artificial intelligence in modern society.

Related Posts

ISO 42001: The Standard for Responsible AI Governance
Machine Learning

ISO 42001: The Standard for Responsible AI Governance

by Sam Marten – Tech & AI Writer
May 15, 2025
Key Strategies for MLOps Success
Machine Learning

Key Strategies for MLOps Success

by Sam Marten – Tech & AI Writer
April 23, 2025
Synthetic Data: The Key to Unlocking AI Success
Machine Learning

Synthetic Data: The Key to Unlocking AI Success

by Sam Marten – Tech & AI Writer
March 26, 2025
Improving Asset Reliability with AI
Machine Learning

Improving Asset Reliability with AI

by Sam Marten – Tech & AI Writer
March 13, 2025
AI Helps UK Banks Reduce Fraud
Machine Learning

AI Helps UK Banks Reduce Fraud

by Sam Marten – Tech & AI Writer
March 11, 2025
Next Post
Optimizing AI Workloads

Optimizing AI Workloads

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Latest Articles

Introduction to Multi-Agent Reinforcement Learning

Introduction to Multi-Agent Reinforcement Learning

March 11, 2025
New Tool Evaluates Progress in Reinforcement Learning

New Tool Evaluates Progress in Reinforcement Learning

May 5, 2025
An AI Future for All

An AI Future for All

March 18, 2025

Browse by Category

  • AI in Healthcare
  • AI Regulations & Policies
  • Artificial Intelligence (AI)
  • Business
  • Cloud Computing
  • Cyber Security
  • Deep Learning
  • Ethics & Society
  • Machine Learning
  • Technology
Technology Hive

Welcome to Technology Hive, your go-to source for the latest insights, trends, and innovations in technology and artificial intelligence. We are a dynamic digital magazine dedicated to exploring the ever-evolving landscape of AI, emerging technologies, and their impact on industries and everyday life.

Categories

  • AI in Healthcare
  • AI Regulations & Policies
  • Artificial Intelligence (AI)
  • Business
  • Cloud Computing
  • Cyber Security
  • Deep Learning
  • Ethics & Society
  • Machine Learning
  • Technology

Recent Posts

  • Best Practices for AI in Bid Proposals
  • Artificial Intelligence for Small Businesses
  • Google Generates Fake AI Podcast From Search Results
  • Technologies Shaping a Nursing Career
  • AI-Powered Next-Gen Services in Regulated Industries

Our Newsletter

Subscribe Us To Receive Our Latest News Directly In Your Inbox!

We don’t spam! Read our privacy policy for more info.

Check your inbox or spam folder to confirm your subscription.

© Copyright 2025. All Right Reserved By Technology Hive.

No Result
View All Result
  • Home
  • Technology
  • Artificial Intelligence (AI)
  • Cyber Security
  • Machine Learning
  • AI in Healthcare
  • AI Regulations & Policies
  • Business
  • Cloud Computing
  • Ethics & Society
  • Deep Learning

© Copyright 2025. All Right Reserved By Technology Hive.

Are you sure want to unlock this post?
Unlock left : 0
Are you sure want to cancel subscription?