OpenAI flagged suspect's ChatGPT account months before Tumbler Ridge killings
OpenAI says it banned Jesse Van Rootselaar’s ChatGPT account in June 2025 and contacted RCMP only after the February 2026 Tumbler Ridge shooting.

OpenAI says it identified and banned a ChatGPT account linked to suspect Jesse Van Rootselaar in June 2025 but did not notify police at the time. The company contacted the Royal Canadian Mounted Police only after the February 2026 Tumbler Ridge shooting that left multiple people dead.
In a statement reported by the BBC and other outlets, OpenAI said, "In June 2025, we proactively identified an account associated with this individual [Jesse Van Rootselaar] via our abuse detection and enforcement efforts, which include automated tools and human investigations to identify misuses of our models in furtherance of violent activities." The company added that it had determined the account's activity did not meet its internal threshold for law enforcement referral, which it defines as indicating "an imminent and credible risk of serious physical harm to others."
The Wall Street Journal first reported internal debate at OpenAI over the account, writing that "about a dozen staffers debated whether to take action on Van Rootselaar's posts." Reuters, the BBC and other outlets cited the WSJ reporting in describing discussions inside the company about whether to escalate the case to police. OpenAI told reporters it chose not to refer the matter in June 2025 because it did not meet that threshold.
After the shooting in rural Tumbler Ridge in February, OpenAI said it "proactively" reached out to Canadian authorities and would support the RCMP investigation. The RCMP has confirmed that OpenAI contacted investigators following the attack, which national outlets including the BBC and the Associated Press described as among the worst school-related shootings in recent Canadian history. The AP and BBC reported eight fatalities; the Globe and Mail gave a different timeline and victim breakdown, saying the suspect entered Tumbler Ridge Secondary School on Feb. 10 and that two family members were also found dead at the home. Local authorities have not yet reconciled those differing accounts in a single public release.
RCMP Staff Sgt. Kris Clark said investigators are conducting "a thorough review of the content on electronic devices, as well as social media and online activities" and that "digital and physical evidence is being collected, prioritized, and methodically processed." The agency has not publicly released full casualty figures, the precise timeline of events, or the content OpenAI flagged.
The episode has intensified scrutiny of how large AI companies detect and act on threats. OpenAI defends a limited referral policy on operational and privacy grounds, telling the BBC that alerting authorities too broadly "could cause unintended harm." Critics argue that tech firms must strike a clearer balance between preventing violence and over-notifying police. Current and former law enforcement officials quoted by Canadian outlets say inquiries will probe whether others were involved in planning and whether digital warnings could have changed the outcome.
Investigators now face two immediate tasks: reconciling conflicting public accounts of the shooting dates and victim counts and determining what, if any, actionable material OpenAI preserved and provided. OpenAI has pledged cooperation; the RCMP has said its review is ongoing. As questions mount about corporate thresholds for reporting, policymakers and police are likely to press companies for clearer standards and faster coordination when signals of potential violence are detected.

