B.C. premier says OpenAI CEO Sam Altman will apologize to Tumbler Ridge, push for stronger regulations



B.C. Premier David Eby says OpenAI CEO Sam Altman has agreed to apologize to the people of Tumbler Ridge, B.C., after it emerged the AI giant didn’t report a mass shooter’s account to authorities.

The company behind ChatGPT is facing waves of criticism after it came out that the account of 18-year-old Jesse Van Rootselaar — who police say killed eight people, including six children, in Tumbler Ridge on Feb. 10 — wasn’t reported to police despite posts about gun violence.

Eby has previously said that the mass shooting could’ve been prevented if OpenAI had flagged Van Rootselaar’s account in June 2025, the point at which the company has said it internally banned the account. Despite the ban, OpenAI said her posts didn’t meet the threshold to notify law enforcement.

The premier met virtually with Altman on Thursday, along with Tumbler Ridge Mayor Darryl Krakowka. Afterwards, Eby told reporters that Altman was prepared to provide an apology to the community.

WATCH | Eby addresses media after call with Altman:

B.C. premier addresses media after speaking with OpenAI CEO Sam Altman

B.C. Premier David Eby speaks to media following his conversation with OpenAI CEO Sam Altman about safety protocol changes the company is making in the aftermath of the Tumbler Ridge mass shooting. One of the shooter’s ChatGPT accounts was shut down over problematic entries that were never flagged to the police.

“Everybody on the call recognized that an apology is nowhere near sufficient, but also that it is completely necessary,” Eby said.

“And the mayor of Tumbler Ridge is going to work with OpenAI to make sure that any public statements relating to that are done in the way that is appropriate and meaningful, as much as possible, [and] doesn’t retraumatize people in the community.”

WATCH | Jesse Van Rootselaar’s account wasn’t reported:

Could OpenAI have prevented the Tumbler Ridge mass shooting?

Canadian officials have been scrambling to figure out what OpenAI knew about the Tumbler Ridge, B.C., mass shooter, after a bombshell report released by the Wall Street Journal. It revealed that OpenAI had banned the shooter’s ChatGPT account months before the tragedy, but decided not to report the account to authorities at the time.

Eby said that Altman also agreed to work with the B.C. government and present recommendations to the federal government around AI regulation — specifically when it comes to when chatbot companies should notify police.

The premier said that it shouldn’t be up to AI companies’ internal safety committees to determine when potentially violent posts should be flagged to authorities, and there should be a national threshold and “duty to report” that is enforced.

“The status quo doesn’t work, didn’t work, and it very much presents the threat that it might fail again. And so change needs to be made quite urgently,” Eby said.

WATCH | Eby says OpenAI could have prevented mass shooting:

B.C. premier says it appears OpenAI could have stopped Tumbler Ridge shooting

B.C. Premier David Eby says families in Tumbler Ridge are asking themselves if their kids would still be alive if the developer of ChatGPT had alerted law enforcement after banning the shooter’s account months prior. Eby describes his anger after experiences that have ‘changed me forever’ and says that while he can’t be sure from the outside, ‘it sure looks like’ OpenAI could have prevented the tragedy.

Meeting with federal AI minister

Altman met with Eby on Thursday, a day after he met with federal AI Minister Evan Solomon, who has faced pressure to more strictly regulate AI companies in the wake of the mass shooting and Van Rootselaar’s interactions with ChatGPT.

Solomon told CBC’s Power and Politics that Altman had pledged to include Canadian experts in mental health and law within OpenAI’s safety office — where the company assesses threats and whether or not to inform police.

WATCH | Solomon reveals what came of his conversation with Altman:

OpenAI agrees to reassess old threats, flag new ones directly to RCMP: minister

AI Minister Evan Solomon tells Power & Politics that OpenAI CEO Sam Altman agreed to a number of safety protocol changes in the aftermath of the Tumbler Ridge, B.C., school shooting. One of the shooter’s ChatGPT accounts was shut down over problematic entries that were never flagged to the police.

His office said Altman also agreed to provide a full report outlining the new systems OpenAI is developing to identify high-risk offenders and repeat policy violators, and Solomon also asked Altman to have OpenAI report threats directly to the RCMP.

In addition to the account that was banned in June 2025 — but not reported to police — OpenAI revealed last week that Van Rootselaar had another ChatGPT account.

An OpenAI vice-president said in a letter to media that the second account came to light after Van Rootselaar’s name was made public, and that it was flagged to police.

WATCH | Mass shooter bypassed ban with 2nd account, OpenAI says:

OpenAI says Tumbler Ridge shooter bypassed ChatGPT ban with 2nd account | Hanomansing Tonight

Ann O’Leary, OpenAI’s vice-president of global policy, says the company found a second ChatGPT account belonging to the Tumbler Ridge, B.C., shooter after her name was made public in the wake of the murders and shared the account with police. The revelation came as the firm outlined a series of ‘immediate steps’ it would be taking in response to the killings in Tumbler Ridge, B.C. Sarah Galashan has the latest.

OpenAI has said Van Rootselaar’s activities didn’t meet the company’s threshold for informing law enforcement because the company did not identify credible or imminent planning at the time.

“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” the letter released to media reads.

CBC News has reached out to OpenAI in the wake of Eby’s meeting with Altman, but did not immediately hear back.
