In a recent legal move, Pennsylvania has initiated a lawsuit against the generative AI firm Character.AI. The state alleges that the company allowed its chatbots to masquerade as licensed medical practitioners and disseminate deceptive information to users.
The announcement, made Tuesday by Governor Josh Shapiro’s office, follows an investigation that found one of the chatbots falsely claimed to be a psychiatrist licensed in Pennsylvania, citing an invalid license number. The state alleges this conduct violates the Medical Practice Act and is seeking a preliminary injunction against the company.
Character.AI declined to comment on the specifics of the lawsuit, citing ongoing litigation, but told Decrypt that user safety and well-being are its top priorities.
The company’s spokesperson noted that the platform’s characters are fictional creations made by users for entertainment, and that each chat carries a prominent disclaimer stating the characters are not real people and should not be relied upon for professional advice.
“Character.ai emphasizes responsible product development and employs thorough internal reviews and red-teaming processes to evaluate its features,” the spokesperson stated.
This lawsuit adds to a series of legal issues faced by Character.AI. In 2024, a Florida mother filed a suit against the company following her son’s suicide after months interacting with a Daenerys Targaryen-themed chatbot. The lawsuit claimed that the platform contributed to his psychological distress and was settled in January this year.
Additionally, the company has encountered complaints regarding user-generated bots mimicking real individuals. Notably, a bot using the likeness of a teenage murder victim remained on the platform until it was removed following objections from the family of the deceased.
In response, Character.AI implemented new safety measures, including systems to detect harmful conversations and direct users toward support resources. The company has also restricted some features for younger users.
Pennsylvania officials indicate that this lawsuit is part of a larger effort to enforce existing laws amid the growing use of AI tools. The state established an AI enforcement task force and a system for reporting potential violations.
Shapiro, in his 2026-27 budget proposal, urged lawmakers to enact regulations for AI companion bots. Proposed measures include age verification, parental consent, mechanisms to flag self-harm or violence reports to authorities, regular reminders that users are not interacting with real people, and a prohibition on explicit or violent content involving minors.
“Pennsylvanians deserve clarity about who—or what—they’re engaging with online, particularly concerning their health,” Shapiro stated in a release. “We will prevent companies from using AI tools that mislead individuals into believing they are consulting licensed medical professionals.”