Introduction to AI and Cybersecurity
Most organisations have at least baseline cybersecurity in place and, thankfully, in most cases operate a reasonably comprehensive set of measures covering communications, data storage, and perimeter defences. In the last couple of years, however, AI has changed the picture, both in terms of how companies can leverage the technology internally, and in how AI is used in cybersecurity: in advanced detection, and in the new ways the technology is exploited by bad actors.
AI as a Cybersecurity Tool
As a cybersecurity tool, AI can be used for network anomaly detection and the intelligent spotting of phishing messages, among other uses. As a business enabler, it obliges the enterprise to be proactive in ensuring AI is used responsibly, balancing the innovation it offers against privacy, data sovereignty, and risk. At present, the intersection of AI, smart automation, data governance and security is a relatively new, niche area. But given AI's growing presence in the enterprise, that niche is set to become a mainstream concern: problems, solutions, and advice that every organisation will need to address, sooner rather than later.
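To make the anomaly-detection idea concrete, here is a minimal, illustrative sketch (not drawn from the article): flagging hosts whose request rate is a statistical outlier using the modified z-score (median and median absolute deviation), which is robust to the very outliers it is trying to find. The host names and thresholds are assumptions for illustration only; production systems typically use trained models over many more features.

```python
from statistics import median

def flag_anomalies(request_counts, threshold=3.5):
    """Flag hosts whose request rate is an outlier by the modified
    z-score (median / MAD), which is robust to the outliers themselves.
    `request_counts` maps host -> requests per minute."""
    values = list(request_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all hosts identical; nothing stands out
        return []
    return [host for host, count in request_counts.items()
            if 0.6745 * abs(count - med) / mad > threshold]

# One host is flooding the network relative to its peers.
counts = {"10.0.0.1": 42, "10.0.0.2": 38, "10.0.0.3": 45,
          "10.0.0.4": 40, "10.0.0.5": 900}
print(flag_anomalies(counts))  # → ['10.0.0.5']
```

The same shape of logic, scored by a model rather than a fixed formula, underlies much of the "smart spotting" the article refers to.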
Governance and Risk
Integrating AI into business processes isn’t solely about the technology and methods for its deployment. Internal processes will need to change to make best use of AI, and to better protect the business that’s using AI daily. Kieran Norton, Deloitte’s US Cyber AI & Automation leader, draws a parallel to earlier changes made necessary by new technologies: “I would correlate [AI] with cloud adoption where it was a fairly significant shift. People understood the advantages of it and were moving in that direction, although sometimes it took them more time than others to get there.”
Those changes mean casting the net wide: updating governance frameworks, establishing secure architectures, even bringing in a new generation of specialists to ensure AI, and the data associated with it, is used safely and responsibly. Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where, and by whom, AI is used, and more. As Kieran puts it: “You probably weren’t doing a lot of testing for hallucination, bias, toxicity, data poisoning, model vulnerabilities, etc. That now has to be part of your process.”
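A guardrail of the kind described above can start as simply as a deterministic check applied to model output before it reaches a user. The sketch below is a hedged illustration, not any vendor's method; the patterns are assumptions, and a real deployment would use a vetted PII and toxicity detection service rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII/toxicity detection service, not a handful of regexes.
GUARDRAIL_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def apply_guardrails(model_output: str) -> str:
    """Redact matches for each guardrail pattern before the model's
    answer is shown to the user."""
    for label, pattern in GUARDRAIL_PATTERNS.items():
        model_output = pattern.sub(f"[REDACTED {label}]", model_output)
    return model_output

print(apply_guardrails("Contact alice@example.com for a refund."))
# → Contact [REDACTED email] for a refund.
```

The point is less the specific patterns than where the check sits: inside the process, between model and user, where it can be tested and audited like any other control.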
The Right Use-Cases
Kieran advocates that companies start with smaller, lower-risk AI implementations. While some of the first sightings of AI ‘in the wild’ have been chatbots, he was quick to differentiate between a chatbot that can intelligently answer questions from customers, and agents, which can take action by triggering interactions with the apps and services the business operates. “So there’s a delineation […] chatbots have been one of the primary starting places […] As we get into agents and agentic, that changes the picture. It also changes the complexity and risk profile.”
Customer-facing agentic AI instances are indubitably higher risk, as a misstep can have significant effects on a brand. “That’s a higher risk scenario. Particularly if the agent is executing financial transactions or making determinations based on healthcare coverage […] that’s not the first use case you want to try.” Kieran therefore emphasises practicality: a grounded assessment of need and capability must come before AI gains a foothold.
Conclusion
Kieran’s message for business professionals investigating AI uses for their organisations was not to build an AI risk assessment and management programme from scratch. Instead, companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value. “You shouldn’t create another programme just for AI security on top of what you’re already doing […] you should be modernising your programme to address the nuances associated with AI workloads.” Success in AI starts with clear, realistic goals built on solid foundations.
FAQs
Q: What is the role of AI in cybersecurity?
A: AI can be used in network anomaly detection and the smart spotting of phishing messages, among other uses.
Q: What are the risks associated with AI in business?
A: Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where, and by whom, AI is used, and more.
Q: What is the importance of governance and risk in AI implementation?
A: Integrating AI into business processes requires changes to internal processes to make best use of AI and to better protect the business that’s using AI daily.
Q: What are the right use-cases for AI implementation?
A: Companies should start with smaller, lower-risk AI implementations, such as chatbots, and avoid higher-risk scenarios, such as customer-facing agentic AI instances.
Q: How can companies ensure success in AI implementation?
A: Success in AI starts with clear, realistic goals built on solid foundations, and companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value.