Mental Health Support through AI Therapy
Introduction to the Issue
Many psychologists and psychiatrists have shared the vision of making therapy more accessible to people with mental disorders. However, fewer than half of people with a mental disorder receive therapy, and those who do might get only 45 minutes per week. Researchers have tried to build technology to address this issue, but they have been held back by two significant challenges.
Challenges in Building Therapy Bots
The first challenge is that a therapy bot that says the wrong thing can cause real harm. To avoid this, many researchers have built bots with explicit programming: the software pulls from a finite bank of approved responses, so it can never say something unvetted. The trade-off is that such bots are less engaging to chat with, and people tend to lose interest. The second challenge is that the hallmarks of a good therapeutic relationship, such as shared goals and collaboration, are hard to replicate in software.
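To make the first approach concrete, here is a minimal sketch of the "explicit programming" style described above: the bot matches the user's message against hand-written rules and can only reply from a fixed bank of approved responses. This is purely illustrative; the keywords and responses are invented and are not taken from any real system or from the study.

```python
# Hypothetical sketch of a rule-based therapy bot: every reply comes from a
# finite bank of pre-approved responses, so nothing unvetted is ever said,
# but the conversation quickly feels repetitive. Keywords and responses are
# invented for illustration only.

APPROVED_RESPONSES = {
    "anxiety": "It sounds like you're feeling anxious. Would you like to try a breathing exercise?",
    "sleep": "Sleep troubles are common. Keeping a regular bedtime can sometimes help.",
    "default": "Thank you for sharing. Can you tell me more about how that feels?",
}

def reply(user_message: str) -> str:
    """Return the canned response whose keyword appears in the user's message."""
    text = user_message.lower()
    for keyword, response in APPROVED_RESPONSES.items():
        if keyword != "default" and keyword in text:
            return response
    return APPROVED_RESPONSES["default"]

if __name__ == "__main__":
    print(reply("I've been having a lot of anxiety at work lately."))
    print(reply("I don't really know where to start."))  # falls through to the default
```

Because every reply is drawn from the fixed bank, the bot is safe but quickly runs out of things to say, which is exactly why people tend to lose interest.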
Development of AI Model
In 2019, researchers at Dartmouth began to think that generative AI might help overcome these hurdles, and they set about building a model trained to give evidence-based responses. They first tried training it on general mental-health conversations pulled from internet forums, and then on thousands of hours of transcripts of real sessions with psychotherapists. Neither attempt produced satisfactory results: the model fell back on the clichés and tropes of psychotherapy rather than offering meaningful responses.
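As a rough illustration of what training on session transcripts involves, raw dialogue is typically converted into prompt/response pairs, with the patient's turn as the input and the therapist's turn as the training target. This is an assumed pipeline shape, not the Dartmouth team's actual code, and the transcript fields and example text below are invented.

```python
# Hypothetical sketch: converting therapy-session transcripts into
# supervised fine-tuning pairs (patient turn -> therapist turn).
# The transcript format, field names, and example text are assumptions.

from typing import Iterator

def transcript_to_pairs(turns: list[dict]) -> Iterator[dict]:
    """Pair each patient utterance with the therapist reply that follows it."""
    for current, following in zip(turns, turns[1:]):
        if current["speaker"] == "patient" and following["speaker"] == "therapist":
            yield {"prompt": current["text"], "response": following["text"]}

example_session = [
    {"speaker": "patient", "text": "I haven't been sleeping well and I feel on edge."},
    {"speaker": "therapist", "text": "That sounds exhausting. When did you first notice the change?"},
]

for pair in transcript_to_pairs(example_session):
    print(pair)
```

A model trained on pairs like these learns to imitate whatever therapists say most often, which helps explain why the output skewed toward generic therapeutic tropes.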
Custom Data Sets for Better Outcomes
Dissatisfied with the initial results, the researchers assembled their own custom data sets based on evidence-based practices, and those data sets are what ultimately went into the model. Many AI therapy bots on the market, by contrast, may be little more than slight variations of foundation models trained mostly on internet conversations. That is a significant problem for sensitive topics such as disordered eating: if a user says they want to lose weight, such a bot may readily support the goal without weighing the risks, even if the user already has a low body weight.
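One way to see why the training data matters is to consider how a risky request might be screened before the model answers. The sketch below is purely illustrative, not part of the study or any commercial product: a crude guardrail refuses to cheer on weight loss and instead routes the conversation to a clinician-style response, whereas a bot fine-tuned only on internet chatter would typically have no such check. All phrases and responses are invented.

```python
# Hypothetical guardrail sketch: screen a user request before generating a
# reply. A bot built around evidence-based data (or wrapped in checks like
# this) should not encourage weight loss for a user who may be at risk.
# The trigger phrases and responses below are invented for illustration.

RISKY_PHRASES = ["lose weight", "skip meals", "burn off calories"]

SAFE_RESPONSE = (
    "I hear that you're thinking about your weight. Before we talk about "
    "changing it, can we talk about how you've been eating and feeling lately?"
)

def screened_reply(user_message: str, generate_reply) -> str:
    """Route risky weight-related requests to a pre-reviewed response."""
    text = user_message.lower()
    if any(phrase in text for phrase in RISKY_PHRASES):
        return SAFE_RESPONSE
    return generate_reply(user_message)

if __name__ == "__main__":
    # Stand-in for an unvetted generative model's output.
    fake_model = lambda msg: "Great goal! Here's how to cut calories fast."
    print(screened_reply("I really want to lose weight this month.", fake_model))
```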
Clinical Trial and Results
To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to the AI therapy bot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day. The results showed significant reductions in symptoms: participants with depression experienced a 51% reduction in symptoms, those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight.
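The reported figures are percentage reductions in symptom severity from the start to the end of the trial. As a worked example of that arithmetic (the scores below are made up for a generic severity scale, not data from the study), a score falling from 14 to 7 is a 50% reduction, in the same ballpark as the 51% reported for depression symptoms.

```python
# Worked example of the percent-reduction arithmetic behind the reported
# results. The baseline and endpoint scores are invented; only the formula
# (change relative to baseline) reflects how such figures are computed.

def percent_reduction(baseline: float, endpoint: float) -> float:
    """Reduction in symptom score, as a percentage of the baseline score."""
    return (baseline - endpoint) / baseline * 100

print(f"{percent_reduction(14, 7):.0f}% reduction")  # -> 50% reduction
```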
Conclusion
The study demonstrates the potential of AI therapy bots in providing accessible mental health support. By using custom data sets based on evidence-based practices, these bots can offer meaningful and safe interactions for users. While there is still more work to be done, the results are promising and suggest that AI could play a significant role in addressing the shortage of mental health services.
FAQs
- Q: How was the AI therapy bot trained?
- A: Early attempts to train the bot on internet-forum conversations and on thousands of hours of transcripts from real therapy sessions fell short, so it was ultimately trained on custom data sets the researchers built around evidence-based practices.
- Q: What were the results of the clinical trial?
- A: The clinical trial showed significant reductions in symptoms for participants with depression, anxiety, and those at risk for eating disorders.
- Q: Is the AI therapy bot a replacement for human therapists?
- A: No, the AI therapy bot is intended to provide accessible support and is not a replacement for professional human therapy.
- Q: How often did participants interact with the AI therapy bot?
- A: Participants averaged about 10 messages per day with the AI therapy bot over the course of eight weeks.
- Q: What is the potential risk of using AI therapy bots trained on internet conversations?
- A: These bots might provide harmful advice, especially on sensitive topics like disordered eating, by supporting dangerous behaviors without proper consideration of the user’s health and well-being.