Introduction to the Case
A recent lawsuit against Google and Character Technologies has raised questions about the nature of artificial intelligence (AI) outputs and whether they should be protected as speech. The case involves a chatbot platform called Character.AI, which uses a large language model (LLM) to generate human-like responses to user input.
The First Amendment Claim
Google and Character Technologies have moved to dismiss the lawsuit on First Amendment grounds, arguing that users of the Character.AI platform have a right to listen to chatbot outputs as supposed "speech." The judge in the case, US District Judge Anne Conway, agreed that Character Technologies can assert the First Amendment rights of its users but was not yet ready to rule on whether the chatbot outputs actually are speech.
The Debate Over AI Outputs as Speech
Character.AI had tried to argue that chatbot outputs should be protected like speech from video game characters. However, Conway found that this analogy was not meaningfully advanced: video game characters' dialogue is written by humans, while chatbot outputs are simply the result of an LLM predicting which word should come next. The judge wrote, "Defendants fail to articulate why words strung together by an LLM are speech."
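For readers unfamiliar with the mechanism the judge is describing, the sketch below illustrates next-word prediction using the openly available GPT-2 model via the Hugging Face transformers library. It is purely illustrative; Character.AI's actual model and serving stack are not public.

```python
# Minimal illustration of next-token prediction with an open LLM (GPT-2).
# This is NOT Character.AI's model; it only demonstrates the mechanism
# the judge described: "words strung together by an LLM."
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The First Amendment protects"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's "output" is a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p={prob:.3f}")
```

Running this prints the five most probable next words and their probabilities; a chatbot's reply is, mechanically, a chain of such prediction steps rather than dialogue scripted by a human author.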
Future of the Case
As the case advances, Character Technologies will have a chance to beef up its First Amendment claims, perhaps by better explaining how chatbot outputs resemble speech in other cases involving non-human speakers. A spokesperson for Character.AI suggested the court's skepticism is a familiar stage in the law catching up with technology, stating, "It's long been true that the law takes time to adapt to new technology, and AI is no different."
Character.AI’s Response
Character.AI has also taken steps to address concerns about the platform’s impact on users, particularly minors. The company now provides a separate version of its LLM for under-18 users, along with parental insights, filtered characters, and other safety features. Additionally, Character.AI has implemented technical protections aimed at detecting and preventing conversations about self-harm on the platform.
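Character.AI has not disclosed how those technical protections work. As a loose illustration only, the hypothetical sketch below shows one common shape such a safeguard can take: screening each user message and surfacing a crisis resource when a pattern matches. Every name and pattern here is invented for illustration; production systems typically rely on trained classifiers rather than keyword lists, which both over- and under-match.

```python
# Hypothetical sketch of a self-harm safety check; Character.AI has not
# published its implementation. This illustrates one common pattern:
# screen each message and route flagged conversations to crisis resources.
import re

CRISIS_MESSAGE = (
    "If you are feeling suicidal or in distress, please call the "
    "Suicide Prevention Lifeline at 1-800-273-TALK (8255)."
)

# Illustrative patterns only; a real system would use a trained classifier.
SELF_HARM_PATTERNS = [
    re.compile(r"\bkill myself\b", re.IGNORECASE),
    re.compile(r"\bsuicid(e|al)\b", re.IGNORECASE),
    re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
]

def screen_message(user_message: str) -> str | None:
    """Return a crisis resource if the message suggests self-harm, else None."""
    for pattern in SELF_HARM_PATTERNS:
        if pattern.search(user_message):
            return CRISIS_MESSAGE
    return None

if __name__ == "__main__":
    warning = screen_message("I have been thinking about self-harm lately")
    print(warning or "No intervention triggered")
```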
Conclusion
The case against Google and Character Technologies raises important questions about the nature of AI outputs and whether they should be protected as speech. While the judge is not yet ready to rule on this issue, the case is likely to continue and may have significant implications for the development of AI technology and the law surrounding it. If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
FAQs
- Q: What is the lawsuit against Google and Character Technologies about?
  A: The lawsuit raises questions about the nature of AI outputs and whether they should be protected as speech.
- Q: What is Character.AI, and how does it work?
  A: Character.AI is a chatbot platform that uses a large language model (LLM) to generate human-like responses to user input.
- Q: What safety features has Character.AI implemented to protect users?
  A: Character.AI provides a separate version of its LLM for under-18 users, parental insights, filtered characters, and technical protections aimed at detecting and preventing conversations about self-harm on the platform.
- Q: What is the significance of the case, and what implications may it have?
  A: The case may have significant implications for the development of AI technology and the law surrounding it, particularly with regard to the protection of AI outputs as speech.