Introduction to AI in Healthcare
Artificial Intelligence (AI) in healthcare has been gaining a lot of attention lately, but with it comes a lot of concern about its potential risks and pitfalls. Etay Maor, chief security strategist at Cato Networks, spoke at HIMSS25 in Las Vegas about the potential dangers of AI, particularly when it comes to hacking and fraud.
The Risks of AI
Maor believes that AI is not yet advanced enough to replace humans completely, but those who know how to use AI will have an advantage over those who don’t. One of the main problems with AI is that it has lowered the bar for potential threat actors. In the past, hackers needed to have deep knowledge of coding and hacking to attack computer systems. Now, malicious actors can use AI to do their dirty work for them.
How Hackers Use AI
Hackers look for vulnerabilities to attack, and one common method they use is called feedback poisoning. This is when they purposefully misdirect generative AI models like ChatGPT by telling them they’re wrong or making suggestions that "confuse" the AI. This can be done through text or images. For example, Maor shared a story about uploading a picture of London to ChatGPT and asking it to describe the image. The AI gave a nonsensical response because of extremely small text embedded in the image that was readable by the AI.
Benefits and Risks of AI in Healthcare
Many healthcare leaders have been keen to adopt AI because of its perceived benefits, including improved diagnostic speed and accuracy. AI can be used to analyze medical images, such as X-rays and MRI scans, to identify patterns and anomalies that a human might miss. However, there are also potential risks, particularly when it comes to security and privacy. One of the biggest risks is the potential for data breaches, since large quantities of patient data are often targets for cybercriminals.
Unique AI Attacks
Other types of unique AI attacks include model extraction, in which an adversary might extract enough information about the algorithm to create a substitute model. Feedback poisoning is another type of attack, where an adversary can mistrain a model by providing it with false information.
Staying Ahead of the Technology
Maor advised hospital leaders and AI teams to be vigilant and stay one step ahead of the technology. "If you don’t know how to use AI, the ones who do are going to take advantage," he said. It’s essential for healthcare leaders to employ personnel with a deep knowledge of how current AI models can be manipulated.
Conclusion
In conclusion, while AI has the potential to revolutionize healthcare, it also comes with significant risks and pitfalls. Healthcare leaders must be aware of these risks and take steps to mitigate them. By staying ahead of the technology and employing personnel with a deep knowledge of AI, healthcare leaders can ensure that AI is used to improve patient care, rather than compromise it.
FAQs
- Q: What is feedback poisoning?
A: Feedback poisoning is a type of attack where an adversary purposefully misdirects generative AI models by telling them they’re wrong or making suggestions that "confuse" the AI. - Q: What are the benefits of AI in healthcare?
A: The benefits of AI in healthcare include improved diagnostic speed and accuracy, as well as the ability to analyze medical images to identify patterns and anomalies that a human might miss. - Q: What are the risks of AI in healthcare?
A: The risks of AI in healthcare include the potential for data breaches, model extraction, and feedback poisoning. - Q: How can healthcare leaders stay ahead of the technology?
A: Healthcare leaders can stay ahead of the technology by employing personnel with a deep knowledge of how current AI models can be manipulated and by being vigilant and aware of the potential risks and pitfalls of AI.