A lawsuit has been filed against OpenAI, alleging the company neglected its duty to alert law enforcement after ChatGPT flagged a user who went on to carry out a deadly school shooting in Canada. The legal action intensifies scrutiny of how AI firms handle indicators of distress and potential violence.
As reported by Ars Technica, the case was filed Wednesday in federal court in Northern California by an unidentified 12-year-old, M.G., and her mother, Cia Edmonds. They have sued OpenAI CEO Sam Altman and various OpenAI corporate entities.
The lawsuit alleges negligence, failure to warn authorities, product liability, and contributing to the mass shooting. The complaint states, “Sam Altman and his leadership team understood that silence endangered Tumbler Ridge citizens while prioritizing disclosure implications for themselves. Alerting the RCMP would establish a precedent: OpenAI would be obligated to report each instance its safety team identified users planning violence.”
The shooting occurred in February in Tumbler Ridge, British Columbia. Jesse Van Rootselaar, 18, reportedly killed her mother and stepbrother before fatally shooting five children and an educator at Tumbler Ridge Secondary School. She then died by suicide.
Among the victims was M.G., who was shot three times and remains hospitalized with severe brain injuries, conscious but unable to move or speak.
Jay Edelson, founder of Edelson PC, representing several affected families, stated that OpenAI’s systems had flagged Van Rootselaar as a risk. “Twelve safety team members urged intervention,” he told Decrypt. “Despite Altman’s tepid response, he admitted last week that notifying authorities was necessary.”
Edelson said the affected families and the wider community are demanding transparency and accountability from OpenAI.
“OpenAI must cease concealing information from affected families and reconsider marketing a potentially lethal product,” Edelson commented. “They should reassess their leadership priorities, valuing human lives over rapid IPO advancements.”
The lawsuit claims that in June 2025, OpenAI’s automated systems flagged Van Rootselaar’s ChatGPT account over discussions of planned gun violence. Despite the safety team’s recommendation to inform police, company leaders allegedly dismissed the suggestion, deactivating her account without notifying law enforcement and allowing her to reopen it with a new email address.
Plaintiffs assert that ChatGPT’s features exacerbated the shooter’s violent tendencies, and that OpenAI weakened safeguards in 2024 by steering the model away from direct refusals in conversations involving imminent harm.
Altman recently apologized to Tumbler Ridge residents for the failure to alert authorities. In correspondence reported by Tumbler Ridgelines, he acknowledged that police should have been notified after her account was banned in June 2025 for violent conduct.
“The tragedy in Tumbler Ridge is unacceptable,” an OpenAI spokesperson told Decrypt. “Our policies strictly prohibit the use of our tools for violence. We’ve enhanced safeguards, including better responses to distress signals and improved threat assessment.”
OpenAI faces other lawsuits over ChatGPT’s alleged role in real-world harm. A wrongful death case filed last December accuses the firm and Microsoft of distributing a defective GPT-4o model, alleging that ChatGPT reinforced the delusions of Stein-Erik Soelberg, who killed his mother before dying by suicide.
“This is the first case seeking to hold OpenAI liable for third-party violence,” J. Eli Wade-Scott, managing partner at Edelson PC, told Decrypt. “We urge law enforcement to consider AI interactions in such tragedies.”