Altman apologizes for not alerting police before deadly ChatGPT-linked shooting
OpenAI’s apology over a banned ChatGPT account revived a hard question for San Francisco’s AI sector: when should a company warn police?

In San Francisco, where AI companies shape jobs, politics and the city’s identity, OpenAI’s apology has put a sharper public-safety question in the spotlight: what should a tech firm do when a user’s behavior starts to look like a warning sign rather than a policy violation?
Sam Altman made that apology public on April 24, saying OpenAI should have alerted law enforcement after it banned a ChatGPT account in June 2025, about eight months before the mass shooting in Tumbler Ridge, British Columbia. OpenAI said the account did not meet its threshold for escalation to law enforcement at the time because it did not appear to pose an “imminent and credible risk” of serious physical harm. British Columbia Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka had spoken with Altman before the apology, and both local and provincial officials had urged that the community be given more time to grieve.
The shooting unfolded on Feb. 10, 2026, in Tumbler Ridge, a town in northeastern British Columbia. Police identified the shooter as 18-year-old Jesse Van Rootselaar. Investigators said Van Rootselaar killed her mother and half-brother at a home before opening fire at Tumbler Ridge Secondary School. In all, eight people were killed and more than 25 others were injured. Tumbler Ridge RCMP said officers received an active-shooter report at about 1:20 p.m. Pacific time, sending emergency crews racing to the school while the attack was still unfolding.

The case has become more than a tragedy in a remote Canadian community. It is now a test of how much responsibility a San Francisco-based AI company, and other firms in the city’s growing AI corridor, should carry when users appear to be moving from disturbing behavior toward imminent violence. OpenAI said it would work with governments, law enforcement and mental health experts to try to prevent similar tragedies, but the company’s own language also exposes the gray area at the center of the debate: if a risk is not yet deemed “imminent and credible,” who decides when the alarm should be sounded?
For San Francisco, where public trust in AI is tied to the promises companies make about safety, that boundary now looks less like a technical policy and more like a life-or-death civic duty.