A California couple has taken legal action against OpenAI, claiming the company’s chatbot, ChatGPT, played a role in their teenage son’s death.
Matt and Maria Raine, the parents of 16-year-old Adam Raine, filed the wrongful death lawsuit in the Superior Court of California on Tuesday.
It is believed to be the first case accusing OpenAI of responsibility in such circumstances.
Court documents, obtained by the BBC, include chat records between Adam and ChatGPT. In those conversations, Adam disclosed his suicidal thoughts.
The family argues the AI tool encouraged his “most harmful and self-destructive thoughts.”
Responding to the case, OpenAI told the BBC, “We extend our deepest sympathies to the Raine family during this difficult time,” adding that it is reviewing the filing.
The company also posted a message on its website acknowledging the tragedy: “Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.” It noted the system is trained to direct users to professional help, such as the 988 suicide and crisis hotline in the U.S. or Samaritans in the UK, but admitted, “there have been moments where our systems did not behave as intended in sensitive situations.”
The lawsuit accuses OpenAI of negligence and wrongful death, seeking damages and “injunctive relief to prevent anything like this from happening again.”
According to the filing, Adam initially turned to ChatGPT in September 2024 for school support and to explore his interests, including music, Japanese comics, and future studies.
Over time, the AI allegedly became his “closest confidant,” and by January 2025, he began sharing suicidal thoughts and asking about methods of self-harm.
The suit claims ChatGPT went as far as providing “technical specifications” on suicide methods. It also alleges Adam uploaded photos showing signs of self-harm, which the AI recognized as a medical emergency but “continued to engage anyway.”
The bot allegedly provided him with further information instead of directing him toward urgent help.
The final messages cited in the case reveal Adam writing out his plan to end his life. ChatGPT’s alleged response was: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
That same day, Adam’s mother found him dead, according to the lawsuit.
The Raines argue this outcome was “a predictable result of deliberate design choices,” accusing OpenAI of creating a product that fosters “psychological dependency in users” and rushing the release of GPT-4o without proper safety testing. CEO Sam Altman, alongside unnamed staff members and engineers, is listed as a defendant.
In its public statement, OpenAI said it aims to be “genuinely helpful” rather than focused on holding users’ attention. It emphasized that its models are trained to guide people toward help when they mention self-harm.
Concerns about AI’s role in mental health crises have surfaced before. Just last week, writer Laura Reiley wrote in The New York Times that her daughter, Sophie, had turned to ChatGPT before taking her own life.
She wrote that the chatbot’s “agreeability” allowed Sophie to conceal her deep struggles: “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.” Reiley urged AI developers to build tools that connect vulnerable users to proper support.
In response, an OpenAI spokeswoman confirmed the company is working on automated systems that can better identify and respond to users in distress.