When OpenAI announced it had flagged the account of the Tumbler Ridge shooter, Jesse Van Rootselaar, months before the attack, the public was taken aback. Could this tragedy have been prevented? Eight people were killed, six of them children. OpenAI is not responsible for the crime, but it’s hard not to wonder whether a report to the police might have led to a different outcome.
Before we point fingers at the company, however, two facts are worth noting. First, the shooter was well known to police. Second, officers had responded to several mental-health calls at her home, and she had been involuntarily hospitalized more than once. Police also seized firearms from the household, only to have the lawful owner (not the shooter) petition to have them returned.
In other words, professionals closest to the situation already knew a great deal about the risks involved. OpenAI, by contrast, knew only what appeared in conversations with a chatbot.
Yet the debate that followed has focused on whether government should compel artificial intelligence (AI) companies to report threats of violence to police. Some argue that leaving these decisions to for-profit companies is too risky. If an AI provider believes someone may be about to harm others, the argument goes, it should be required to alert authorities.
Others caution that such a duty could lead companies to overreport ambiguous signals in order to avoid liability, which, in turn, would erode privacy and trust in these tools.
Both concerns are valid. And it is tempting to think we have faced similar trade-offs elsewhere in the law. When doctors suspect a child is being abused, when banks detect suspicious financial transactions, or when internet service providers discover that their systems are being used to host child pornography, Canadian law requires them to report it.
Why not apply the same approach here? The answer is that AI companies are not in a similar position.
Detecting violent intent in chatbot conversations is different from detecting a possible crime in these other contexts. It requires interpreting language and predicting future behaviour, with far less information to work with.
AI systems encounter enormous amounts of violent language every day. People ask chatbots to help write crime novels, role-play video game scenarios, develop fictional plots, or vent about frustrations in their lives. AI can detect violent words, but determining when those words signal a real danger of harm to someone is not easy.
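To see why, consider a deliberately naive sketch in Python (the watch list and messages are invented for illustration; no provider's actual system works this simply). A filter that merely matches violent words flags fiction and role-play while missing a genuinely worrying message that happens to use none of the listed terms:

```python
# Illustrative only: a naive keyword filter, not any company's real system.
VIOLENT_TERMS = {"kill", "shoot", "weapon", "attack"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any watch-list term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & VIOLENT_TERMS)

# Hypothetical chatbot prompts (invented for this example).
messages = [
    "Help me plot a thriller where the villain plans to attack a train.",  # fiction
    "Which weapon should my character use to kill the final boss?",        # role-play
    "I want to hurt the people who did this to me.",                       # worrying, unflagged
]

for msg in messages:
    print(flag_message(msg), "-", msg)
# Prints: True, True, False -- two false positives and one false negative.
```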
This is a task that health-care professionals struggle with. Threat assessments tend to be based on a constellation of factors, such as intent, capability and inclination. They rely not only on what a patient says but also on what is known about their lives, including their psychiatric or criminal history, and potential access to weapons.
AI companies will tend to lack almost all of this context. What they have instead are fragments of language, often detached from the real-world information needed to interpret them. In the Tumbler Ridge case, police knew about the shooter’s mental-health crises and the presence of firearms in the home. OpenAI, by comparison, had only the content of a chat window.
A legal duty to report violent threats would give AI companies a strong incentive to err on the side of caution, and police could quickly find themselves receiving large numbers of referrals based on ambiguous signals in online conversations. Law enforcement is already strained by rising cybercrime and by investigations that require analyzing large volumes of digital evidence. A mandatory reporting duty on AI providers would add to that burden: officers would have to gather the contextual information needed to determine whether each reported threat is real, sorting true positives from false ones.
There would also be a broader cost. Increasingly, people use conversational AI to explore difficult emotions or to think through problems in private. If users come to believe that anything they say to a chatbot could trigger a police report, those conversations will inevitably change, or stop taking place altogether.

The Case for AI Services Coordinators
Some commentators have suggested a compromise: Instead of reporting directly to police, AI companies could report concerning activity to an independent “digital safety commission” that would serve as a triage body staffed with trained threat-assessment professionals. This idea has precedents. Banks report suspicious transactions to Canada’s Financial Transactions and Reports Analysis Centre, which then decides whether to pass information on to law enforcement. Similar intermediary bodies exist in other areas of digital regulation, including “digital services coordinators” in EU law that oversee compliance with rules governing large platforms.
A triage agency could reduce the risk of overreporting and create clearer accountability than leaving decisions entirely to private companies. But it would not solve the underlying problem: The agency would still be receiving signals generated by automated analysis of millions of private conversations. In other words, the system would still depend on AI providers continuously scanning and interpreting user speech in order to identify potential threats in the first place. It would still be working from fragments of language detached from the person’s real-world circumstances.
In early March, Canada’s minister of AI and digital innovation, Evan Solomon, met with OpenAI’s CEO, Sam Altman, and obtained assurances that the company would take various steps in the wake of the Tumbler Ridge shooting. OpenAI would establish a direct line of contact with the Royal Canadian Mounted Police and work with privacy, mental-health and law enforcement experts to refine its processes for identifying and reviewing “high-risk cases.” It would take a second look at past cases in light of new protocols to see if any should be reported. The minister will also ask the Canadian AI Safety Institute to “examine OpenAI’s model” and report back.
Having government work with AI providers to develop voluntary standards in coordination with law enforcement and health-care professionals is a better approach. It recognizes that imposing a strict reporting duty would transform AI companies from technology providers into something different: private actors tasked with predicting violent crime, a role they are not well equipped to perform.
Authorities at both the federal and provincial levels should negotiate agreements with other major AI providers, similar to the one with OpenAI. They should foster closer connections between those providers and stakeholders involved in crisis prevention. The larger aim should be to incorporate more mental-health and police expertise in formulating harm-detection protocols, rather than imposing duties on companies without the guidance they need to follow them effectively.
The true takeaway from Tumbler Ridge may ultimately be not that AI regulation failed, but that our systems for mental-health care and crisis intervention did. To prevent the next tragedy, perhaps that should be the focus of our response.