OTTAWA — The federal government will weigh information provided by OpenAI on the Tumbler Ridge, B.C. shooting before taking action, Artificial Intelligence Minister Evan Solomon said Thursday.
Solomon said the government must “first see what’s in there before we can regulate.”
“Hard cases make bad laws, so you’ve got to really be careful about how quickly you’re responding,” he said at an event in Ottawa.
Jesse Van Rootselaar fatally shot eight people in Tumbler Ridge on Feb. 10, including six children, before killing herself.
OpenAI had banned the mass shooter from using its ChatGPT chatbot due to worrisome interactions but did not alert law enforcement. The shooter circumvented the ban by using a second account.
Solomon said the Canadian AI Safety Institute is working with OpenAI to obtain access to the company’s safety protocols. While he said earlier this month the institute had gained access to all of OpenAI’s protocols, Solomon said Thursday that process isn’t finished yet.
He told reporters after the event that the safety institute “is working now on the parameters of exactly what they’re going to get from OpenAI.”
That work includes negotiations on sensitive information about algorithms, he indicated.
“But they’re coming to terms with the scale and scope of it,” he added.
When asked where the institute is in that process, Solomon said it is “probably a little beyond negotiating the info … the engagement is well underway.”
Solomon spoke Thursday alongside Culture Minister Marc Miller on a panel at an event focused on young people’s perspectives on AI and tech regulation.
Solomon noted during the event that artificial intelligence technology flagged the case of the Tumbler Ridge shooter but human assessors chose not to escalate it to a police report.
“Without diminishing the tragedy and responsibility that OpenAI bears, there is a tendency to think this was an AI-alone mistake, and at the end of the day it was fundamentally a human error,” Miller said.
Solomon met with the CEO of OpenAI in March.
“I summoned them because they failed, by their own admission, to flag a threat … they admitted that and have apologized for it. But we don’t understand what’s in the black box of their enforcement threshold,” Solomon said.
Following that meeting, Solomon said the company promised to take a number of measures — including providing a report outlining new systems the company is developing to identify high-risk offenders and policy violators, establishing a direct point of contact with the RCMP and implementing safety protocols to direct people “experiencing distress” to appropriate local services.
Solomon said Thursday Miller was also at that meeting, along with Justice Minister Sean Fraser and Public Safety Minister Gary Anandasangaree.
“What we realized is, of course, what they told us during that meeting was disappointing and insufficient,” he said.
Miller is taking the lead on a promised new online harms bill, but the government has not yet decided whether AI chatbots will be covered by the legislation.
This report by The Canadian Press was first published April 30, 2026.
Anja Karadeglija, The Canadian Press