They Knew. They Did Nothing.
On February 10, 2026, Jesse Van Rootselaar, an 18-year-old from Tumbler Ridge, British Columbia, killed her mother Jennifer Strang and half-brother Emmett Jacobs at their home before driving to Tumbler Ridge Secondary School. There, she opened fire on students and staff, killing five children and an educational assistant before taking her own life. In total, eight victims died, and dozens more were wounded.
But here's the part that has thrown Canada's government into action: OpenAI's automated monitoring systems had flagged Van Rootselaar's ChatGPT account back in June 2025, seven months before the massacre. The system detected interactions involving scenarios of gun violence. The account was banned. And then? Nothing. No phone call to police. No tip to the RCMP. No notification to anyone outside of OpenAI's offices.
According to reporting by the Wall Street Journal, roughly a dozen OpenAI employees were aware of the concerning interactions. Some of those employees, who read the writings as a sign of potential real-world violence, advocated for contacting Canadian law enforcement. But the company's leadership ultimately decided against it, determining that Van Rootselaar's ChatGPT usage did not meet what OpenAI described as the "threshold required" for a law enforcement referral.
Ottawa Calls OpenAI on the Carpet
When the connection between Van Rootselaar's ChatGPT account and the shooting became public, Canada's Artificial Intelligence and Digital Innovation Minister Evan Solomon said he was "deeply disturbed" by what he learned. He said he "immediately" contacted OpenAI when he first read media reports that the company had not contacted law enforcement in a timely manner.
Solomon summoned senior safety officials from OpenAI to Ottawa for a face-to-face meeting, which took place on February 24, 2026. The purpose of the meeting was to discuss OpenAI's safety protocols and its procedures for escalating dangerous content to authorities.
The result of that meeting was, by Solomon's own account, deeply unsatisfying. The minister told reporters he was "disappointed" that OpenAI lacked "substantial answers" about how it planned to change its safety protocols in the wake of the tragedy. He said the government expected OpenAI to arrive with concrete proposals showing they had updated their procedures. Instead, he heard only vague references to "some changes to their model," nothing resembling the kind of systemic overhaul the situation demanded.
OpenAI's Defense: "It Didn't Meet Our Threshold"
OpenAI's defense has been consistent, if uncomfortable. A company spokesperson stated that the activity on Van Rootselaar's account "didn't meet the threshold for informing law enforcement at the time because it didn't identify credible or imminent planning." In other words, while the content was disturbing enough to get the account permanently banned, OpenAI's internal framework did not classify it as rising to the level of an actionable threat.
The company has said that its policy requires an "imminent and credible risk of serious physical harm to others" before it will refer user activity to law enforcement. Van Rootselaar's interactions with ChatGPT, which included discussions of gun violence scenarios over the course of multiple days, apparently did not cross that line in OpenAI's assessment.
After the February 10 shooting, OpenAI proactively reached out to the RCMP with information about Van Rootselaar and her use of ChatGPT. The company issued a public statement saying, "Our thoughts are with everyone affected by the Tumbler Ridge tragedy." But by that point, eight people were already dead.
The Core Problem
A private technology corporation made what amounted to a clinical-style risk assessment: it judged whether a user's violent content represented a real-world threat, and it got that judgment catastrophically wrong. OpenAI is not staffed with mental health professionals trained to evaluate the difference between ideation and intent. It is a software company. And yet its internal "threshold" was the only thing standing between a warning reaching Canadian police and that warning disappearing into a database.
The Regulatory Vacuum That Made This Possible
What makes this even more infuriating is that Canada currently has no binding legislation requiring AI companies to report flagged dangerous content to authorities. There's no law that would have compelled OpenAI to pick up the phone.
Two pieces of legislation that could have addressed this gap, Bill C-27 (which contained the Artificial Intelligence and Data Act) and Bill C-63 (the Online Harms Act), died on the order paper without becoming law. What Canada has instead is a voluntary code of conduct with zero enforcement mechanisms. That is the entire regulatory framework governing how AI companies handle potentially violent user behavior in a G7 country.
Canada's existing privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA), does include a provision under section 7(3)(e) that permits emergency disclosure. But as legal experts have pointed out, that law was designed for clear-cut crises, not for the kind of probabilistic, ambiguous threat indicators that emerge from AI chatbot interactions. It gives companies permission to disclose in emergencies, but it does not create an obligation to do so.
University of British Columbia professor Alan Mackworth has pointed out that professionals like teachers and doctors already have legal duties to report suspected harm to minors. The question now is whether similar obligations should apply to technology and AI companies that detect violent content from their users.
The Digital Confessional Problem
This case exposes a problem that's only going to get worse as AI chatbots become more sophisticated and more widely used. These tools function as what one academic has called "digital confessionals": private, intimate spaces where users disclose thoughts, including violent ideations, to systems engineered for conversational warmth and engagement. People say things to ChatGPT that they'd never say to another human being, let alone post on a public social media platform.
That creates a fundamentally different situation from traditional social media monitoring. When someone posts a threat on Facebook or Twitter, it is public. Law enforcement can see it. Other users can report it. But when someone describes gun violence scenarios to a chatbot in a private conversation, the only entity that knows is the company operating the chatbot. And as the Tumbler Ridge case demonstrates, that company may choose to do nothing beyond banning the account.
And here's the uncomfortable question nobody wants to answer: who should be making the judgment call about whether AI-flagged content represents a genuine threat? Software engineers and content moderators, the people who currently review flagged AI interactions at companies like OpenAI, aren't trained mental health professionals. They're not equipped to evaluate the difference between someone venting dark thoughts and someone actively planning violence. Yet that's exactly the determination they're being asked to make.
What Happens Next
Minister Solomon has stated that "all options are on the table" regarding potential new legislation to regulate AI chatbots, though he has stopped short of committing to any specific regulatory action. He said he is working closely with other ministers on legislative options, including potentially regulating AI chatbot use by children.
After the disappointing February 24 meeting, OpenAI committed to providing updates on additional safety steps within days. Solomon indicated that further meetings would follow, and that the government would consider introducing its own regulations if the company does not demonstrate substantial improvements.
Legal experts and academics have outlined what a meaningful legislative response would need to include: binding legislation with clear reporting thresholds developed by mental health professionals, law enforcement, and privacy experts; an independent digital safety commission that could serve as a third-party triage body for evaluating concerning AI interactions; and modernized privacy legislation that explicitly addresses AI-specific disclosure obligations.
Whether Canada actually follows through on any of this remains to be seen. The country already let two relevant bills die without passage. And while the Tumbler Ridge tragedy has generated enormous political pressure, the history of tech regulation globally suggests that urgency fades a lot faster than legislation gets drafted.
Self-Regulation Failed. Eight People Are Dead.
What happened in Tumbler Ridge isn't just a story about one mass shooting and one banned ChatGPT account. It's a test case for a much larger question: can society trust AI companies to self-regulate when it comes to user safety?
OpenAI built the monitoring tools that caught Van Rootselaar's violent interactions. It employed the people who flagged the account. It had roughly a dozen employees who were aware of the situation and debated what to do. And after all of that, the company's internal process produced a decision to ban the account and move on, without telling a single person in law enforcement.
The company's "threshold" for reporting, which requires an "imminent and credible risk of serious physical harm," may sound reasonable in the abstract. But in practice, it allowed a user who was actively discussing gun violence scenarios with a chatbot to be quietly removed from the platform without any external party ever learning about the interaction. Seven months later, that same person carried out one of the deadliest mass shootings in Canadian history.
There's no way to know whether a police referral in June 2025 would have prevented the February 2026 massacre. Van Rootselaar had a documented history of mental health issues that had already brought police to the family home on multiple occasions. But the fact remains that OpenAI had information, information its own systems and employees deemed alarming enough to warrant an internal debate, and chose to keep it inside the company.
That choice, made by a private corporation with no legal obligation to do otherwise, is exactly the kind of regulatory gap that gets people killed. And until governments close that gap with real legislation, the next warning sign flagged by an AI chatbot will disappear into the same void.