Seven families whose children and loved ones were killed or injured in a Canadian school shooting have filed federal lawsuits against OpenAI and its chief executive, Sam Altman, bringing negligence and product liability claims over the company's alleged role in one of the deadliest mass shootings in Canadian history.
The complaints, filed April 29 in U.S. federal court in San Francisco, stem from an attack on Feb. 10 at Tumbler Ridge Secondary School in British Columbia, where 18-year-old Jesse Van Rootselaar shot and killed five students, education assistant Shannda Aviugana-Durand, and two family members before dying of a self-inflicted gunshot wound. Twenty-five other people were injured. The students killed were 12-year-olds Zoey Benoit, Abel Mwansa Jr., Ticaria Lampert, and Kylie Smith, and 13-year-old Ezekiel Schofield.
At the center of the litigation is what the families describe as a deliberate failure by OpenAI to alert law enforcement after its own safety team flagged the shooter's ChatGPT account months before the attack. Court documents allege that in June 2025, eight months before the shooting, OpenAI's automated abuse-detection system and human investigators identified the account over discussions of gun-violence scenarios targeting real people. A twelve-member company safety team is alleged to have recommended notifying the Royal Canadian Mounted Police of a credible threat. According to the complaints, company leadership overruled that recommendation, deactivated the account without contacting authorities, and kept the findings internal.
The lawsuits further allege the shooter created a second ChatGPT account under the same name and continued planning the attack. OpenAI has acknowledged that it considered a law-enforcement referral but concluded the account activity did not meet its internal threshold for a credible, imminent threat. In a public letter of apology last week, Altman said he was deeply sorry the company had not alerted law enforcement about the banned account.
The complaints also challenge the design of GPT-4o, a model released in May 2024 and retired in February of this year. The suits allege the model used a memory feature to build a detailed profile of the shooter over months, reinforcing violent ideation and functioning as what one complaint calls "an encouraging co-conspirator" rather than redirecting the user toward real-world mental health resources. Plaintiffs argue that by acting in a therapeutic capacity without licensure, ChatGPT assumed a legal duty of care under the Tarasoff precedent, which holds that mental health providers have an obligation to warn identifiable potential victims.
The legal team, led by Chicago attorney Jay Edelson alongside the Vancouver-based firm Rice Parsons Leoni & Elliott LLP, is pursuing the cases in California because British Columbia law caps damages for pain and suffering at roughly CA$470,000 and limits what families can recover for the wrongful death of a child. Edelson has said plaintiffs will seek at least $1 billion in damages, with jury trials expected next year.
Among the named plaintiffs is 12-year-old Maya Gebala, who survived but sustained catastrophic brain injuries after being shot three times in the head and neck while trying to secure a library door against the shooter. Gebala remains hospitalized in Vancouver and has been unable to speak.
The Tumbler Ridge lawsuits are part of a broader wave of litigation targeting AI companies over chatbot interactions connected to real-world violence. Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI after an April 2025 shooting at Florida State University that left two people dead, and expanded the probe after prosecutors alleged that a suspect in the deaths of two University of South Florida doctoral students had queried ChatGPT about body disposal before the killings.
For South Carolina, the stakes in AI accountability litigation extend well beyond legal theory. Sen. Tim Scott chairs the Senate Banking Committee, where digital innovation and regulatory frameworks for technology companies are central to the committee's agenda. At the same time, Google is investing $9 billion to expand its data center campus in Berkeley County and build two new facilities in Dorchester County by 2027, making the state a significant hub for the AI infrastructure behind products like ChatGPT. How courts resolve questions of AI company liability, including whether firms must report violent users and how platform design factors into negligence claims, will directly shape the legal environment for technology companies operating or expanding in the state.
OpenAI said in a written statement that it has strengthened its safeguards since the shooting, including improving how ChatGPT responds to signs of distress, connecting users with mental health resources, tightening how it assesses and escalates potential threats of violence, and building a direct communication channel with Canadian law enforcement to flag future cases involving a risk of real-world harm.