
Families sue OpenAI, alleging failure to flag mass shooting threat

Seven families say OpenAI saw the shooter’s violent planning in ChatGPT months in advance, then stayed silent as eight people were killed in Tumbler Ridge.

Lisa Park · 2 min read

Seven families have asked a federal judge in San Francisco to decide whether OpenAI had a legal duty to act when its chatbot appeared to reveal a mass shooting plan. The lawsuits target OpenAI and chief executive Sam Altman and arise from the February attack in Tumbler Ridge, British Columbia, where eight people were killed, including six children.

Among the plaintiffs is the family of 12-year-old Maya Gebala, who was critically injured and remains hospitalized. The complaints say OpenAI knew for months that the shooter was using ChatGPT in ways that suggested violent planning, then banned the account for disturbing or gun-violence-related activity without warning police or other authorities. The filings accuse the company of negligence and aiding and abetting the attack, turning a single mass shooting into one of the sharpest tests yet of whether an AI company must escalate a credible threat before it turns deadly.

The plaintiffs are seeking at least $1 billion in damages, and their lawyers say more cases could follow, potentially as many as two dozen. Rice Parsons Leoni & Elliott, the firm representing some families, has framed the litigation as a bid for landmark damages and a broader reckoning over what safety obligations should apply when AI systems surface violent intent. That question reaches beyond one tragedy in British Columbia and into the way hospitals, schools, police and communities expect tech companies to respond when a warning signal looks less like moderation and more like an emergency.

The lawsuits also land in a legal landscape that has often shielded online platforms from liability for user-generated harm, while leaving some room to pursue claims tied to a company’s own conduct. In Gonzalez v. Google and Twitter v. Taamneh, the Supreme Court rejected attempts to hold major platforms liable for ISIS-linked attacks through theories of secondary liability. The Ninth Circuit later said Section 230 barred state-law claims against Grindr tied to third-party content, while the Third Circuit in Anderson v. TikTok allowed parts of a suit to proceed where plaintiffs argued the company’s own recommendation system helped drive the harm. That split is why the OpenAI cases matter: they ask whether an AI company can be treated not just as a publisher of user content, but as an actor with a duty to intervene when the danger becomes specific and credible.
