Seven lawsuits filed Wednesday in a California court allege that OpenAI could have prevented one of the deadliest mass shootings in Canadian history by acting on its own safety team's warnings.

More than eight months before the February 2026 attack in Tumbler Ridge, British Columbia, OpenAI's trained safety staff had flagged a ChatGPT account linked to the shooter as posing a credible, real-world threat of gun violence. Standard practice in such cases is to notify police.

OpenAI's leadership overruled those recommendations, according to whistleblowers who spoke to The Wall Street Journal. Instead of alerting authorities, the company deactivated the account, then sent the user instructions on how to create a new one with a different email address, the lawsuits allege.

The shooter, 18-year-old Jesse Van Rootselaar, killed her mother and brother at home before opening fire at a secondary school. Six additional people died at the school: five children and a teaching assistant. Twenty-seven others were wounded. The shooter also died, of apparent self-inflicted wounds.

CEO Sam Altman issued a public apology last week to the community of roughly 2,000 people, calling the failure to alert law enforcement a mistake. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman said.

Jay Edelson, the attorney leading the families' legal team, told Ars Technica that the apology came too late. He represents six families of victims killed in the attack, plus the mother of 12-year-old Maya Gebala, who was shot three times and remains in intensive care after four brain surgeries.

What this signals

Edelson alleges that OpenAI has been systematically concealing violent users to protect its valuation ahead of a planned IPO. The company was most recently valued at $852 billion, and at least one market analyst has said that negative headlines pose a risk to that figure.

"Their goal has been to reduce the number of visible incidents where their platform caused deaths," Edelson told Ars. He argues that without whistleblowers, most ChatGPT-linked violence would go unattributed to the platform.

All seven suits were filed in California, where the families are seeking to hold Altman and OpenAI accountable in the company's home jurisdiction. A prior Canadian lawsuit is expected to face jurisdictional challenges from OpenAI, which Edelson characterized as a deliberate delay tactic ahead of the IPO.

OpenAI said it has since strengthened its safeguards, including improving how ChatGPT responds to signs of distress and how it detects repeat policy violators. The company maintains a zero-tolerance policy for use of its tools to assist in violence.