Introduction to AI in Healthcare
Dr. Ronald Rodriguez is a professor of medical education and program director of the nation's first MD/MS in Artificial Intelligence dual-degree program at The University of Texas at San Antonio. He works at the forefront of AI's transformation of healthcare and is keenly aware of both its promise and its pitfalls.
The Risks of Generative AI Tools
The most common mistake clinicians make with generative AI tools today is failing to safeguard protected health information (PHI). Many commercial large language model (LLM) providers retain the prompts and data uploaded to their servers and may use them for further training. Exposing PHI this way can lead to Tier 2 HIPAA violations, with each occurrence potentially incurring a separate fine. IT departments can warn users not to cut and paste PHI into these tools, but most systems do not enforce compliance at the individual level.
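One practical safeguard is to screen prompts for obvious PHI before they ever leave the clinician's machine. The sketch below is a deliberately minimal illustration of that idea in Python; the patterns, function name, and redaction format are all hypothetical, and real de-identification tooling must handle far more (names, dates, addresses, and free-text context).

```python
import re

# Hypothetical illustration: naive regex patterns for a few common
# PHI identifiers. This is not a compliance solution.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub_phi(prompt: str) -> tuple[str, list[str]]:
    """Redact obvious PHI patterns and report which kinds were found."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, found

clean, flags = scrub_phi("Pt MRN: 00123456, callback 210-555-0147.")
if flags:
    print(f"Flagged PHI types: {flags}")
print(clean)
```

A filter like this could sit in front of the LLM gateway and block or flag prompts rather than silently redacting them, giving IT a point of enforcement at the individual level rather than relying on warnings alone.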
The Cost of AI in Healthcare
Under the current business model, each prompt generates a cost based on the number of tokens processed. As modeled today, this incremental cost is more likely to increase healthcare costs than to reduce them. For example, systems such as DAX and Abridge record the patient-provider interaction, transcribe it, and summarize it into a draft note. They make life easier for physicians, but at a price: their costs scale with actual usage, and there is no way to bill the patient for these extra costs through third-party payers.
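To make the token economics concrete, the sketch below estimates per-encounter and annual cost under assumed prices. The rates, token counts, and volumes are illustrative assumptions for the arithmetic, not published pricing for any vendor.

```python
# Hypothetical per-token cost accounting for an ambient-documentation
# workflow. All rates and counts below are assumptions, not vendor pricing.

INPUT_RATE = 3.00 / 1_000_000    # assumed $ per input (transcript) token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed $ per output (note) token

def encounter_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one encounter: transcript in, summarized note out."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 15-minute visit might transcribe to ~10k tokens and summarize
# to ~1k tokens (assumed figures).
per_visit = encounter_cost(10_000, 1_000)

# 20 visits/day, 250 clinic days/year, 100 physicians (assumed).
annual = per_visit * 20 * 250 * 100
print(f"Per visit: ${per_visit:.3f}  Annual (100 MDs): ${annual:,.0f}")
```

Even at fractions of a cent per token, usage-based pricing compounds across every visit and every physician, which is why these costs accumulate with no corresponding reimbursement mechanism.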
The Problem of Over-Reliance on AI
Safeguards must be in place before AI delivers a genuine reduction in overall medical errors. Over-reliance on AI to catch mistakes can itself introduce new kinds of errors: large language models (LLMs) are prone to hallucinations in certain situations, and if those hallucinations go uncaught, they become a new source of medical error. One safeguard is to use agentic, specialty-specific LLM pipelines that double-check the information, confirm its veracity, and apply more sophisticated methods to minimize errors.
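The sketch below shows one way such a double check might be wired up: a drafting model produces a note and an independent verifier model reviews it against the source transcript before anything reaches the chart. The function names, the `call_llm` helper, and the prompts are hypothetical placeholders for whatever model API a system actually uses.

```python
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    concerns: list[str]

def call_llm(instruction: str, content: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("wire up to your model provider")

def draft_note(transcript: str) -> str:
    return call_llm("Summarize this encounter into a clinical note.", transcript)

def verify_note(transcript: str, note: str) -> Review:
    """A second, independent model checks the draft against the source."""
    verdict = call_llm(
        "List any statements in the note unsupported by the transcript. "
        "Reply 'OK' if every claim is grounded.",
        f"TRANSCRIPT:\n{transcript}\n\nNOTE:\n{note}",
    )
    ok = verdict.strip().upper() == "OK"
    return Review(approved=ok, concerns=[] if ok else [verdict])

def safe_note(transcript: str) -> str:
    note = draft_note(transcript)
    review = verify_note(transcript, note)
    if not review.approved:
        # Route to a human for sign-off rather than auto-filing.
        raise ValueError(f"Verifier flagged the draft: {review.concerns}")
    return note
```

The key design choice is that a flagged draft fails loudly and goes to a human reviewer instead of being filed automatically, so the verifier reduces rather than silently shifts the error burden.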
Developing Proper Ethical Policies
Hospitals and health systems need to develop proper ethical policies, guidelines, and oversight to ensure the safe and effective use of AI in healthcare. This can be accomplished by participating in oversight organizations and medical groups, such as the AMA, AAMC, and governmental oversight committees, to help solidify a common framework for ethical AI data access and use policies.
Conclusion
In conclusion, while AI has the potential to revolutionize healthcare, there are several risks and challenges associated with its use. Clinicians, hospitals, and health systems need to be aware of these risks and take steps to mitigate them. This includes protecting PHI, being aware of the costs associated with AI, avoiding over-reliance on AI, and developing proper ethical policies and guidelines.
FAQs
Q: What is the main risk associated with generative AI tools in healthcare?
A: The main risk is exposure of protected health information (PHI) when prompts containing PHI are sent to commercial LLM servers that retain uploaded data for training.
Q: How can hospitals and health systems reduce the costs associated with AI?
A: They can negotiate cost-effective pricing structures, implement usage controls, or develop in-house AI systems.
Q: What is the problem with over-reliance on AI in healthcare?
A: Over-reliance on AI can potentially result in different types of errors, as large language models are prone to hallucinations under certain situations.
Q: How can hospitals and health systems develop proper ethical policies for AI use?
A: They can participate in oversight organizations and medical groups, such as the AMA, AAMC, and governmental oversight committees, to help solidify a common framework for ethical AI data access and use policies.