Introduction to AI in the Courtroom
It’s been quite a couple of weeks for stories about AI in the courtroom. You might have heard about the deceased victim of a road rage incident whose family created an AI avatar of him to show as an impact statement, possibly the first time this has been done in the US. However, there’s a bigger, far more consequential controversy brewing, legal experts say. AI hallucinations are cropping up more and more in legal filings, and that’s starting to infuriate judges.
Recent Cases of AI Hallucinations
A few weeks ago, a California judge, Michael Wilner, became intrigued by a set of arguments some lawyers made in a filing. He went to learn more about those arguments by following the articles they cited, only to discover that the articles didn’t exist. He asked the lawyers’ firm for more details, and they responded with a new brief that contained even more mistakes than the first. Wilner ordered the attorneys to give sworn testimony explaining the mistakes, from which he learned that one of them, from the elite firm Ellis George, had used Google Gemini as well as law-specific AI models to help write the document, and that those tools generated the false information. As a result, the judge fined the firm $31,000.
In a separate case, another California-based judge caught a hallucination in a court filing, this time submitted by the AI company Anthropic in the lawsuit that record labels have brought against it over copyright issues. One of Anthropic’s lawyers had asked the company’s AI model Claude to create a citation for a legal article, but Claude included the wrong title and author. Anthropic’s attorney admitted that the mistake was not caught by anyone reviewing the document.
Lastly, and perhaps most concerning, is a case unfolding in Israel. After police arrested an individual on charges of money laundering, Israeli prosecutors submitted a request asking a judge for permission to keep the individual’s phone as evidence. However, they cited laws that don’t exist, prompting the defendant’s attorney to accuse them of including AI hallucinations in their request. The prosecutors admitted that this was the case, receiving a scolding from the judge.
The Problem with AI in the Courtroom
Taken together, these cases point to a serious problem. Courts rely on documents that are accurate and backed up with citations—two traits that AI models, despite being adopted by lawyers eager to save time, often fail miserably to deliver. Those mistakes are getting caught (for now), but it’s not a stretch to imagine that at some point soon, a judge’s decision will be influenced by something that’s totally made up by AI, and no one will catch it.
Expert Opinion
Maura Grossman, who teaches at the School of Computer Science at the University of Waterloo as well as Osgoode Hall Law School, has been a vocal early critic of the problems that generative AI poses for courts. She wrote about the problem back in 2023, when the first cases of hallucinations started appearing. She said she thought courts’ existing rules requiring lawyers to vet what they submit, combined with the bad publicity those cases attracted, would put a stop to the problem. That hasn’t panned out.
Grossman says that hallucinations “don’t seem to have slowed down. If anything, they’ve sped up.” And these aren’t one-off cases with obscure local firms, she says. These are big-time lawyers making significant, embarrassing mistakes with AI. She worries that such mistakes are also cropping up more in documents not written by lawyers themselves, like expert reports.
Why Lawyers Are Falling for AI Hallucinations
Lawyers fall into two camps, according to Grossman. The first are scared to death and don’t want to use AI at all. But then there are the early adopters. These are lawyers tight on time or without a cadre of other lawyers to help with a brief. They’re eager for technology that can help them write documents under tight deadlines. And their checks on the AI’s work aren’t always thorough.
The fact that high-powered lawyers, whose very profession it is to scrutinize language, keep getting caught making mistakes introduced by AI says something about how most of us treat the technology right now. We’re told repeatedly that AI makes mistakes, but language models also feel a bit like magic. We put in a complicated question and receive what sounds like a thoughtful, intelligent reply. Over time, AI models develop a veneer of authority. We trust them.
The Limitations of Current Solutions
We’ve known about this problem ever since ChatGPT launched nearly three years ago, but the recommended solution has not evolved much since then: Don’t trust everything you read, and vet what an AI model tells you. As AI models get thrust into so many different tools we use, this solution seems unsatisfying.
Companies are selling generative AI tools built for lawyers and marketed as reliably accurate, but they are clearly not foolproof. The website for Westlaw Precision, for example, promises AI that is accurate and complete, yet its client Ellis George is the firm that was fined $31,000 for submitting a brief with AI-generated mistakes.
Conclusion
Unvetted AI output in court filings is a serious and growing problem. AI can be a useful research and drafting tool for lawyers, but it is not a substitute for human judgment and fact-checking. That even high-powered lawyers keep getting caught submitting AI-introduced errors shows how easily the technology’s veneer of authority earns our trust, and it underscores the need for better safeguards before a hallucination slips past everyone and shapes a judge’s decision.
FAQs
- What is an AI hallucination?
An AI hallucination is when an AI model generates false or inaccurate information, often in a way that is convincing and sounds authoritative.
- Why are lawyers using AI in the courtroom?
Lawyers are using AI to help with tasks such as research and document drafting, but they are not always thoroughly checking the AI’s work for errors.
- What can be done to prevent AI hallucinations from influencing court decisions?
Lawyers and judges need to be more careful about how they use AI and make sure to thoroughly fact-check any information generated by AI models.
- Are there any consequences for lawyers who submit AI-generated mistakes to the court?
Yes. Lawyers who submit AI-generated mistakes to the court can face consequences such as fines and damage to their reputation.
- Can AI be trusted to generate accurate information?
No. AI models are not always trustworthy and can generate false or inaccurate information, so it’s essential to fact-check any information they produce.