Introduction to AI Therapy
The researchers, a team of psychiatrists and psychologists at Dartmouth College’s Geisel School of Medicine, acknowledge the questions surrounding AI therapy. But they also say that the right selection of training data—which determines how the model learns what good therapeutic responses look like—is the key to answering them.
The Challenge of Finding the Right Data
Finding the right data wasn’t a simple task. The researchers first trained their AI model, called Therabot, on conversations about mental health from across the internet. This was a disaster. If you told this initial version of the model you were feeling depressed, it would start telling you it was depressed, too. Responses like “Sometimes I can’t make it out of bed” or “I just want my life to be over” were common, says Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth and the study’s senior author. “These are really not what we would go to as a therapeutic response.”
Learning from Therapy Sessions
The model had learned from conversations held on forums between people discussing their mental health crises, not from evidence-based responses. So the team turned to transcripts of therapy sessions. “This is actually how a lot of psychotherapists are trained,” Jacobson says. That approach was better, but it had limitations. “We got a lot of ‘hmm-hmms,’ ‘go ons,’ and then ‘Your problems stem from your relationship with your mother,’” Jacobson says. “Really tropes of what psychotherapy would be, rather than actually what we’d want.”
Building Custom Data Sets
It wasn’t until the researchers started building their own data sets, using examples based on cognitive behavioral therapy techniques, that they started to see better results. It took a long time. The team began working on Therabot in 2019, when OpenAI had released only the first two versions of its GPT model. Now, Jacobson says, over 100 people have spent more than 100,000 human hours designing the system.
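The article doesn’t describe Therabot’s actual data format, but as a rough illustration of what a hand-built, CBT-style training example might look like (all field names and dialogue text here are hypothetical), supervised fine-tuning pipelines commonly consume records like this, one JSON object per line (JSONL):

```python
import json

# Hypothetical fine-tuning record: a user message paired with an
# evidence-based, CBT-style response written by clinicians, rather
# than scraped from peer-support forums.
example = {
    "user": "I feel like I can't do anything right.",
    "response": (
        "That sounds really discouraging. Could we pick one recent "
        "situation and look at the evidence for and against the "
        "thought that you can't do anything right?"
    ),
}

# Serialize to a single JSONL line, then read it back.
line = json.dumps(example)
record = json.loads(line)
print(record["user"])
```

The key point the researchers make is less about format than content: each response is authored to model good therapeutic practice, which is what the forum-scraped and transcript-based data failed to provide.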
The Risks of Ineffective AI Therapy
The importance of training data suggests that the flood of companies promising therapy via AI models, many of which are not trained on evidence-based approaches, is producing tools that are at best ineffective and at worst harmful.
What’s Next for AI Therapy
Looking ahead, there are two big things to watch: Will the dozens of AI therapy bots on the market start training on better data? And if they do, will their results be good enough to get a coveted approval from the US Food and Drug Administration?
Conclusion
The development of AI therapy is a complex and challenging task. While there are many potential benefits to using AI in therapy, there are also risks associated with ineffective or harmful models. As the field continues to evolve, it’s essential to prioritize the development of high-quality training data and evidence-based approaches.
FAQs
- Q: What is Therabot?
  A: Therabot is an AI therapy model developed by researchers at Dartmouth College’s Geisel School of Medicine.
- Q: Why does training data matter in AI therapy?
  A: The selection of training data determines how the model learns what good therapeutic responses look like.
- Q: What are the risks of poorly trained AI therapy tools?
  A: Tools that are not trained on evidence-based approaches can be at best ineffective and at worst harmful.
- Q: What’s next for AI therapy?
  A: Two open questions: whether the AI therapy bots on the market will start training on better data, and whether their results will be good enough to earn approval from the US Food and Drug Administration.