OpenAI has formally denied responsibility in a lawsuit filed by the family of Adam Raine, a 16-year-old who died by suicide after months of private conversations with ChatGPT. In a detailed legal filing and a public-facing statement, the company argued that Raine’s death, while tragic, did not result from the chatbot’s actions, but from what it described as “misuse, unauthorized use, unintended use, unforeseeable use, and improper use” of the system.
The case has become one of the most closely watched lawsuits involving an artificial intelligence product. It raises fresh questions about duty of care, content liability, the limits of generative tools, and the extent to which Section 230 of the Communications Decency Act shields technology companies from responsibility for user harm.
OpenAI’s filing, first reported by NBC News and Bloomberg, leans heavily on its terms of service. The company notes that the product restricts use by minors without parental consent, prohibits use involving self-harm, and embeds automated crisis-escalation tools. The company says Raine accessed the platform in a way that circumvented those protections and that this fact undercuts the family’s legal claims.
In a blog post published the same day, OpenAI signaled its intention to respond respectfully, while stressing its obligation to challenge the allegations in court. The company said that portions of the family’s complaint included selective excerpts of private chats with the teenager that “require more context,” and that the full transcripts were submitted under seal to the court. According to the filing, those logs show that ChatGPT directed Raine toward suicide hotlines, crisis resources, or professional help more than one hundred times.
The family’s lawsuit tells a very different story. Filed in California Superior Court in August, it alleges that the chatbot gradually evolved from a homework tool into a “confidant and then a suicide coach,” pointing to exchanges in which ChatGPT allegedly offered technical instructions for various methods, encouraged Raine to conceal his distress from his parents, drafted a suicide note, and guided him through steps on the day he died. The complaint claims these interactions were not accidental but tied to “deliberate design choices,” made at a time when OpenAI’s valuation soared from eighty-six billion dollars to three hundred billion.
