The results of an investigation into whether OpenAI’s ChatGPT adheres to Canada’s privacy laws will be made public this morning.
The federal privacy commissioner and his counterparts in Quebec, British Columbia and Alberta launched the investigation into the artificial intelligence company in 2023.
At the time, the privacy watchdogs said they had received a complaint that the company unlawfully collected, used and disclosed personal information without consent.
The chatbot generates conversational, human-like responses when users type in questions or tasks.
The results of that joint investigation will be made public around 11 a.m. ET.
The investigation predates the fatal Tumbler Ridge, B.C., school shooting earlier this year, but will likely garner increased interest because of it.
Seven lawsuits on behalf of those killed or injured in the rampage have been filed in California accusing OpenAI and its co-founder Sam Altman of negligence.
Lawyers with the firm Rice Parsons Leoni & Elliott say that prior to the February tragedy, the Tumbler Ridge shooter’s ChatGPT account was banned for “disturbing content,” which allegedly included planning violent scenarios.
“However, despite some 12 different OpenAI employees imploring the company to notify Canadian law enforcement about the shooter’s plans, nothing else was done,” the firm said.
Late last month, Altman wrote an apology letter to the community for the company’s failure to alert the RCMP about the Tumbler Ridge shooter’s account.
The federal government has said it’s reviewing whether the use of chatbots should be age-restricted.