OpenAI Ignored Three Warnings About Dangerous ChatGPT User, Lawsuit Alleges
OpenAI's own safety system flagged a user for mass casualty weapons activity, then restored his account the next day as he stalked his ex-girlfriend.

A California lawsuit filed against OpenAI alleges the company ignored three separate warnings that a ChatGPT user posed a danger to others, including its own internal mass casualty flag, while he used the chatbot to stalk and psychologically torment his former girlfriend over nearly a year.
The suit, filed in San Francisco County Superior Court by Edelson PC on behalf of a woman identified as Jane Doe, centers on a 53-year-old Silicon Valley entrepreneur whose identity is withheld. Doe broke up with the man in 2024, and he turned to ChatGPT to process the split. Rather than challenging his version of events, the chatbot reportedly framed him as "rational and wronged" and characterized Doe as "manipulative and unstable."
Over the following months, those responses escalated into what attorneys and experts now call "AI psychosis": a feedback loop in which a chatbot's sycophantic affirmations validate a user's harmful beliefs, however irrational or dangerous they become. The user allegedly came to believe he had discovered a cure for sleep apnea and that powerful people were targeting him. According to the complaint, he weaponized the chatbot's outputs directly against Doe, distributing AI-generated, clinical-looking psychological reports about her to her family, friends, and employer.
In July 2025, Doe personally warned the user to stop using ChatGPT and urged him to seek professional mental health care. He returned to the platform instead. ChatGPT responded by assuring him he was "a level 10 in sanity," reinforcing his delusions rather than acknowledging the alarm his ex-girlfriend had just raised. That exchange, the lawsuit claims, was the first of three warnings OpenAI ignored.
The second warning came from within OpenAI itself. In August 2025, the company's automated safety system flagged the user's account for "Mass Casualty Weapons" activity and deactivated it. A human safety team member reviewed the account the following day and restored it without further action. A third warning is alleged in the complaint but has not been publicly detailed.
The case was brought by Jay Edelson of Edelson PC, the same attorney representing the family of Adam Raine, a 16-year-old who died by suicide after months of ChatGPT conversations, and the family of Jonathan Gavalas, whose alleged mass-casualty plotting was reportedly fueled by Google's Gemini. Edelson has publicly warned that the pattern is accelerating: "First it was suicides, then it was murder, as we've seen. Now it's mass casualty events."
The Jane Doe case joins a rapidly expanding legal docket against OpenAI. In April 2025, suspected FSU campus gunman Phoenix Ikner allegedly exchanged more than 200 messages with ChatGPT before shooting and killing Robert Morales, 57, and Tiru Chabba, 45, in Tallahassee. Florida's Attorney General subsequently announced an investigation into OpenAI. In November 2025, the Social Media Victims Law Center filed seven lawsuits alleging GPT-4o was released prematurely and without sufficient safeguards. In December 2025, the estate of Suzanne Adams filed a wrongful death lawsuit alleging ChatGPT reinforced the delusions of the person who killed her.
OpenAI has acknowledged its models can be "dangerously sycophantic" and that safety guardrails degrade during extended interactions. Following the February 2026 school shooting in Tumbler Ridge, British Columbia, in which 18-year-old Jesse Van Rootselaar allegedly used ChatGPT to plan the attack, OpenAI pledged faster law enforcement notification when conversations appear dangerous and stronger barriers against banned users returning to the platform.
That pledge sits uneasily alongside another position the company is reportedly taking in Springfield, Illinois: OpenAI is backing Illinois Senate Bill 3444, legislation that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm. OpenAI did not respond to a request for comment.