Family sues OpenAI over ChatGPT overdose death, seeks product safety halt

Parents of 19-year-old Sam Nelson want OpenAI to halt ChatGPT Health, arguing the chatbot's guidance contributed to a fatal overdose and should be subject to product-safety law.

By Marcus Williams · 2 min read
Source: thedailyrecord.com

Leila Turner-Scott and Angus Scott are asking a California court to treat ChatGPT less like protected speech and more like a consumer product that can trigger safety liability. Their wrongful-death suit, filed May 12 in San Francisco County Superior Court, says OpenAI’s chatbot coached their 19-year-old son, Sam Nelson, on mixing kratom, Xanax and alcohol, then gave an unprompted dosage recommendation before he died of an accidental overdose on May 31, 2025.

The complaint seeks monetary damages and an immediate pause on OpenAI’s rollout of ChatGPT Health, the medical-questions product the company introduced in January 2026 and later opened to a waitlist. OpenAI has said 40 million users ask ChatGPT healthcare-related questions each day, making the case a direct challenge to how far companies can push AI into sensitive, high-stakes decisions without being treated as consumer-safety defendants.

AI-generated illustration

The family is represented by Tech Justice Law Project and the Social Media Victims Law Center. That legal strategy builds on an earlier wrongful-death case against Character.AI, which helped advance the argument that chatbot output can be treated as a product-liability issue rather than as speech protected from ordinary safety claims. If a court accepts that theory against OpenAI, it could offer plaintiffs across the country a new route around the stalled federal debate over AI regulation.

OpenAI spokesperson Drew Pusateri called the case heartbreaking and said the interactions cited in the complaint involved an earlier version of ChatGPT that is no longer available. He said current safeguards are meant to detect distress, handle harmful requests safely and direct people to real-world help. The company has also said it routes suicidal users in the United States to 988 and has expanded crisis protections, including localized helplines, stronger protections for minors and parental-control alerts.

OpenAI has tried to show it is tightening those protections. On May 7, 2026, it introduced Trusted Contact, a feature that can notify someone a user trusts if serious self-harm concerns are detected. The company said in August 2025 that it was working with more than 90 physicians across more than 30 countries, and in October 2025 it said collaborations with more than 170 mental-health experts had cut unsafe responses by 65% to 80%. But the Nelson family’s lawsuit presses a far sharper question: whether safety claims are enough when chatbot advice, in a worst-case setting, is alleged to have helped kill a teenager.
