OpenAI debated reporting shooter's violent ChatGPT logs to police
OpenAI's automated systems flagged the Tumbler Ridge shooter's ChatGPT account in June 2025 for violent content. Internal debate among roughly 12 employees ended with management ruling the activity did not meet the threshold for a police report; the account was banned instead.
OpenAI's automated systems flagged Jesse Van Rootselaar's ChatGPT account in June 2025 after he described gun violence across multiple sessions, triggering an internal debate among approximately 12 employees about whether to alert Canadian police. Management decided against reporting, ruling the activity did not meet the company's threshold of "credible and imminent risk of serious physical harm." The account was banned. On February 10, 2026, Van Rootselaar, 18, carried out a mass shooting at a school in Tumbler Ridge, British Columbia, killing 8 people and injuring more than 25.
Prior to this incident, no widely established industry standard required AI companies to proactively report violent content to law enforcement. OpenAI's decision to ban rather than report reflects a judgment call now under intense scrutiny. The company has since contacted the Royal Canadian Mounted Police on its own initiative and is cooperating with the investigation. B.C. Premier David Eby described the reports as "profoundly disturbing." The case could accelerate regulatory pressure on AI platforms to codify mandatory reporting obligations for cases where automated systems detect credible threats.