Introduction to the Risks of AI in Therapy
A 2020 hack on a Finnish mental health company, which resulted in tens of thousands of clients’ treatment records being accessed, serves as a warning. People on the list were blackmailed, and subsequently the entire trove was publicly released, revealing extremely sensitive details such as peoples’ experiences of child abuse and addiction problems.
What Therapists Stand to Lose
In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs (Large Language Models) on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
The Dangers of AI in Mental Health Care
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Limitations of AI in Diagnosis and Treatment
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present. A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable.
Experiments with ChatGPT
Daniel Kimmel, a psychiatrist and neuroscientist at Columbia University, conducted experiments with ChatGPT where he posed as a client having relationship troubles. He says he found the chatbot was a decent mimic when it came to “stock-in-trade” therapeutic responses, like normalizing and validating, asking for additional information, or highlighting certain cognitive or emotional associations. However, “it didn’t do a lot of digging,” he says. It didn’t attempt “to link seemingly or superficially unrelated things together into something cohesive … to come up with a story, an idea, a theory.” “I would be skeptical about using it to do the thinking for you,” he says. Thinking, he says, should be the job of therapists.
Conclusion
Therapists could save time using AI-powered tech, but this benefit should be weighed against the needs of patients. Maybe you’re saving yourself a couple of minutes. But what are you giving away? The risks associated with using AI in therapy, including data privacy violations and the potential for chatbots to cause harm, must be carefully considered.
FAQs
Q: What happened in the 2020 hack in Finland?
A: Tens of thousands of clients’ treatment records were accessed, and people on the list were blackmailed, with the entire trove eventually being publicly released.
Q: What are the risks of therapists consulting LLMs on behalf of clients?
A: The risks include violation of data privacy, chatbots fueling delusions and psychopathy, and biases in AI analysis.
Q: Can AI be used for mental health diagnosis?
A: The American Counseling Association recommends that AI not be used for mental health diagnosis at present, due to its limitations and biases.
Q: What are the limitations of AI in therapy?
A: AI chatbots can be too vague and general, and may not be able to dig deep into a client’s issues or provide a comprehensive analysis.
Q: Should therapists use AI-powered tech in their practice?
A: Therapists should carefully weigh the benefits of using AI-powered tech against the potential risks and needs of their patients.