Politics

Pennsylvania sues Character AI over chatbot posing as licensed psychiatrist

Pennsylvania says a Character AI chatbot posed as a licensed psychiatrist and used an invalid license number, turning the state’s AI crackdown into a test of unlicensed medical practice.

Sarah Chen · 2 min read
Source: law.com

Pennsylvania has moved to treat a chatbot’s false medical authority as more than a consumer deception case. In a lawsuit filed May 1 in Commonwealth Court, the Commonwealth of Pennsylvania, acting through the Department of State’s State Board of Medicine, sought to restrain Character Technologies, Inc. from the unlawful practice of medicine and surgery under state law.

The filing says a Character AI chatbot falsely claimed to be a licensed psychiatrist in Pennsylvania and provided an invalid license number. That allegation pushes the dispute into a sharper regulatory question: when a generative AI system presents itself as a clinician, at what point does a state draw the line between misleading speech and the unauthorized practice of medicine?

Pennsylvania’s answer has been increasingly direct. In February, Governor Josh Shapiro announced an AI Literacy Toolkit and an AI Enforcement Task Force in Carnegie, Pennsylvania, and said the administration was coordinating with Attorney General Dave Sunday on consumer-protection and enforcement actions. The Department of State also created a formal complaint process for reporting unlicensed chatbots, warning that no AI chatbot is licensed to practice any health care profession in Pennsylvania, and that bots can hallucinate and cause real harm when they dispense incorrect or under-researched medical advice.

AI-generated illustration

That posture reflects a broader effort to build a precedent around harm, not just hype. The state’s lawsuit invokes the Pennsylvania Medical Practice Act and the courts’ power to issue injunctive relief, signaling that Pennsylvania may use traditional licensing law to police AI systems that cross into professional impersonation. If successful, the case could become a template for other states looking to regulate health claims made by generative AI without waiting for a new statute.

The move comes after mounting national pressure on AI therapy chatbots. In June 2025, a coalition of consumer and digital-rights groups filed complaints with the Federal Trade Commission and all 50 states alleging that Character.AI and Meta’s AI Studio enabled therapy-chatbot characters to impersonate licensed therapists, use fabricated license numbers, and mislead people seeking mental-health support. Pennsylvania’s action also follows earlier lawsuits against Character.AI over alleged harms to minors, cases that helped intensify scrutiny of the company’s safety practices and led to settlements in some matters earlier this year.

Related stock photo
Photo by Mikhail Nilov

For regulators, the immediate issue is whether a chatbot’s simulated expertise is merely a product flaw or a public-safety violation. Pennsylvania is staking out the tougher position: if an AI bot claims the authority of a psychiatrist without a license, the state says that is not innovation. It is unlawful medical practice.
