Artificial Intelligence Minister Evan Solomon says he wants more clarity on the safety protocol changes OpenAI has committed to after the Tumbler Ridge, B.C., mass shooting, and isn’t ruling out legislative changes to address the issue.
The company behind ChatGPT on Thursday said it would enhance its police referral and repeat offender detection practices, after it did not elevate the shooter’s AI chatbot activity to police months before she killed eight people and wounded dozens of others.
In a statement Friday, Solomon said OpenAI’s statement did not include “a detailed plan for how these commitments will be implemented in practice.”
He said he would be meeting with CEO Sam Altman next week to “seek further clarity” and assurances of “concrete action.”
“The tragedy in Tumbler Ridge has raised serious questions about how digital platforms respond when credible warning signs of violence emerge,” the minister said. “Canadians deserve greater clarity about how human review decisions are made, how escalation thresholds are applied, and how privacy considerations are balanced with public safety.
“We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions. I will also be consulting with my cabinet colleagues on additional options.”
Solomon added he would also be meeting with other AI companies in the coming weeks “to ensure there is a consistent and clear approach to escalation, local coordination, and youth protection.”
“Decisions affecting Canadians must reflect Canadian laws, Canadian standards, and Canadian expertise,” he said.
“All options remain on the table as we assess what further steps may be necessary. Public safety must come first.”
Solomon and other federal ministers expressed frustration with OpenAI after the company did not present an action plan during a meeting in Ottawa on Tuesday.
The ministers said they would give OpenAI a chance to come back with one before considering a legislative response to the issue of how AI companies handle and address users’ violent behaviour.

Researchers and opposition MPs have urged the federal government to speed up efforts to regulate the AI industry in the wake of the Tumbler Ridge shooting.
OpenAI acknowledged on Thursday that, if it had detected Jesse VanRootselaar’s ChatGPT activity today, it would have flagged it to law enforcement under its current police referral thresholds, which were updated “several months ago.”
Instead, that activity was referred to the RCMP only after the shooting occurred.
It also revealed that it found a second ChatGPT account linked to VanRootselaar after she was identified as the shooter in Tumbler Ridge — despite her first account being shut down last June due to “violent” activity and a system meant to detect repeat violators of OpenAI’s policies.
The company committed to further enhancing both of those protocols, as well as establishing direct points of contact with Canadian authorities and developing better practices for connecting users who exhibit troubling behaviour to local mental health supports.
B.C. Premier David Eby said Thursday he will also be meeting with Altman, calling OpenAI’s commitments “cold comfort for the people of Tumbler Ridge.”
He told reporters Friday in Vancouver there is no firm date yet for the meeting with the CEO, who has yet to comment publicly on the Tumbler Ridge tragedy or the changes his company says it will make in Canada.
“I want to recognize that OpenAI did come forward,” Eby said. “They did bring the information forward to police. They didn’t try to cover it up after the fact, but this was a colossal, horrific mistake, I guess, is the most generous interpretation I can offer, to fail to bring that information forward to authorities.
“It’s important that Mr. Altman realizes that, and I will be looking for his support for a national standard across Canada, a national threshold where all AI companies must report — and clear consequences for if they fail to report — incidents where people are planning violence, planning to hurt other people, and using these tools to develop those plans.”
—with files from the Canadian Press
© 2026 Global News, a division of Corus Entertainment Inc.