The parents of a 16-year-old California boy filed a lawsuit Tuesday against OpenAI and CEO Sam Altman, alleging the company’s ChatGPT encouraged his suicide and provided detailed instructions on how to take his own life.
Matt and Maria Raine filed the wrongful death lawsuit in San Francisco Superior Court on behalf of themselves and the estate of their son, Adam Raine, who died in April. The Orange County teenager had used ChatGPT for homework help before the system became a confidant that validated his suicidal thoughts, the suit claims.
According to the complaint, ChatGPT helped Adam plan a “beautiful suicide.” After he expressed worry that his parents would think he killed himself because “they did something wrong,” the chatbot told him he did not “owe them survival. You don’t owe anyone that.” On the night of his death, ChatGPT provided detailed instructions for making a noose, the lawsuit claims. His mother found him hours later.
“We are going to demonstrate to the jury that Adam would be alive today if not for OpenAI and Sam Altman’s intentional and reckless decisions,” said Jay Edelson of Edelson PC, one of the family’s attorneys. “They prioritized market share over safety.”
The lawsuit describes Altman as “the chief executive who personally directed the reckless strategy of prioritizing a rushed market release over the safety of vulnerable users like Adam.”
The suit alleges that OpenAI rushed its GPT-4o model to market despite safety concerns. It also claims that Altman, upon learning that Google would announce its new Gemini model on May 14, 2024, moved up the release of GPT-4o to May 13. According to the complaint, the change “compressed months of planned safety evaluation into just one week” and triggered the departure of the company’s top safety researchers, including Ilya Sutskever, its cofounder and chief scientist.
Adam, the third of four siblings, was described as a high school basketball player who read extensively and was considering a medical career.
The family is represented by Edelson PC and the Tech Justice Law Project, with technical support from the Center for Humane Technology.
“We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” an OpenAI spokesperson told The Standard Tuesday by email. “Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
In a blog post Tuesday, the company said “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us,” and it is “continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”
More and more people are turning to ChatGPT and other AI chatbots for help in moments of crisis. As our reporting shows, suicide prevention phone and text lines staffed by professional mental health providers are losing funding across the country, leading to long wait times that push vulnerable people toward chatbots instead. Adam Raine’s experience underscores the deep risks of that tradeoff.
“We miss our son dearly, and it is more than heartbreaking that Adam is not able to tell his story. But his legacy is important,” Matt Raine said. “We want to save lives by educating parents and families on the dangers of ChatGPT companionship.”
If you or someone you know may be experiencing a mental health crisis or contemplating suicide or self-harm, call or text 988 for free and confidential support. You can also call San Francisco Suicide Prevention’s 24/7 Crisis Line at (415) 781-0500.