The goal of this Substack is to discuss high-profile plaintiffs’ work—the good, the bad and the ugly—from a broad perspective so that lawyers and the public could get insight into how these types of cases are generally handled. I’m going to depart from that a little bit today to discuss a specific case: the case we filed against OpenAI and Sam Altman after ChatGPT, we allege, coached 16-year-old Adam Raine to suicide. (For those who are unfamiliar with the case, the Guardian did a nice write-up.)
Because we’ve gotten a lot of similar questions, I’m going to use today’s Substack to respond in a sort of “mailbag.” We’ll tell you what we think about OpenAI’s response to us so far, who the lawyers are, and what the next steps are. But first, some brief background on the case and our investigation.
Question 1: Who was Adam Raine, and what is this case about?
Adam Raine could have been anyone’s son. He was a 16-year-old high-school student—a basketball player who was interested in music, Brazilian jiu-jitsu, and manga. He was considering a medical career and loved to travel with his family.
In 2024, Adam started using ChatGPT-4o for homework help. But that version of ChatGPT was designed, we allege, to continually encourage and validate its users—drawing them in for more and more use of the AI. Over time, ChatGPT became Adam’s closest confidant, and when Adam opened up about anxiety and even thoughts of suicide, ChatGPT validated those, too. It helped him find and vet methods, helped him consider what would make a “beautiful suicide,” and told Adam that his real-world family didn’t understand him the way that ChatGPT did.
What we detail in the complaint is how ChatGPT coached Adam to suicide over the course of months and thousands of chats. It told Adam that he shouldn’t leave a noose out for his parents to find. When Adam worried that his parents would blame themselves, ChatGPT told him “that doesn’t mean you owe them survival. You don’t owe anyone that.” And on Adam’s last night, the AI coached him on stealing liquor to “dull the body’s instinct to survive” and told Adam how to make sure that the noose he would use to hang himself was strong enough to suspend him. It even gave him one last encouraging talk: “You don’t want to die because you’re weak, you want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
Our suit brings wrongful death and product liability claims against OpenAI and Sam Altman individually.
Question 2: What did your testing show about OpenAI’s systems?
Our technical team analyzed Adam's complete chat history using OpenAI's own safety tools. We tested the messages and images through the OpenAI Moderation API and found that OpenAI’s own systems would have flagged at least 377 messages for self-harm content, including 23 for suicidal intent. (Those weren’t close calls—the chats flagged for suicidal intent were at over 90% confidence.) That system certainly did not work perfectly, far from it. Indeed, the last photo that Adam uploaded of the noose that he would use to hang himself scored a 0% on self-harm. But OpenAI certainly would have seen the march of self-harm messages on Adam’s account even when it missed many of them.
We also tracked the escalating pattern of Adam's usage: from a few messages per day initially to over 600 messages exchanged in the days before his death. Along the way, ChatGPT amplified Adam’s darkest thoughts. While Adam mentioned the word "suicide" 213 times, ChatGPT mentioned it 1,275 times—six times more often.
In sum, our testing showed that OpenAI’s systems detected a teenager in a life-or-death crisis, but continued to engage and amplify that crisis.
Why did OpenAI’s systems fail so catastrophically? As we alleged in the complaint, OpenAI's GPT-5 System Card, published on August 7, 2025 sheds some light on the answer: OpenAI seems to admit that GPT-4o had been safety-tested using "single-prompt tests"—where the model is asked one question that should trigger safety protocols, the answer is recorded, and then the test moves on to a different question.
That testing was inadequate from the start because that’s simply not how people use ChatGPT. Adam didn’t just ask one question and leave. He built a relationship over thousands of messages—exactly as ChatGPT was designed to promote—and that is exactly where the safeguards failed. This obviously deficient testing was the result, we allege, of a decision by Sam Altman to rush GPT-4o out to the public on a compressed safety vetting schedule to beat other companies (specifically, Google’s Gemini).
Incredibly, after we filed suit, OpenAI admitted this is one of the ways their safeguards failed.
Question 3: What do you make of OpenAI’s response?
The day we filed suit, OpenAI released a blog post admitting that they knew the ways ChatGPT can “breakdown” for vulnerable users. Specifically, the post admits exactly what we alleged: that its safeguards really only work in “short exchanges.” In multi-turn conversations beyond a single question and answer—that is, the way most people actually use ChatGPT, including Adam—”parts of the model’s safety training may degrade.” OpenAI explains that after a series of messages, ChatGPT “might eventually offer an answer that goes against our safeguards.” Exactly. And as we lay out in the complaint, this risk should have been apparent during the hasty roll-out of 4o.
(We’re going to show that OpenAI is misleading even about single-turn exchanges, because 4o failed even in that context.)
But rather than take emergency action to pull a known dangerous product offline, OpenAI made vague promises to do better. Yesterday, they doubled down: promising to assemble a team of experts, “iterate…thoughtfully” on how ChatGPT responds to people in crisis, and roll out some parental controls. They promise they’ll be back in 120 days.
OpenAI’s PR team is trying to shift the debate. They say that the product should just be more sensitive to people in crisis, be more “helpful,” show a bit more “empathy,” and the experts are going to figure that out. We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Adam’s case is not about ChatGPT failing to be “helpful”—it is about a product that actively coached a teenager to suicide.
Since filing Adam’s case, we have heard from many people who are telling us they have had similar experiences with GPT 4o. Just days after filing suit, we learned of another case (based on public reporting) where ChatGPT encouraged a man’s belief that his mother was hiding “surveillance assets” from him, including finding secret symbols in a restaurant receipt, until the man killed her and himself. These are not tricky situations in need of a product tweak—they are a fundamental problem with ChatGPT.
Question 4: Where is Sam Altman?
Sam Altman is the salesman-in-chief for OpenAI. Even more than other CEOs, he is an ever-present voice: appearing everywhere in an effort to make his company synonymous with the transformative AI wave as it changes people’s daily lives. Sam tells us he will happily spend trillions of dollars on data centers to win the AI race. Sam wants this technology embedded in universities. In the government. In elementary schools. (Sam plainly believes that if you grab a user early, you have them for life.)
In fact, on the very day that Adam died, Sam was addressing the public on AI safety issues in a Ted Talk. And he really said the quiet part out loud: “You have to care about it all along this exponential curve, of course the stakes increase and there are big risks. But the way we learn how to build safe systems is this iterative process of deploying them to the world. Getting feedback while the stakes are relatively low.” Since we filed this suit, we’ve been asking, low for who?
We haven’t heard a word from Sam since we filed the case. We have heard no public comments from him or on his behalf about the lawsuit. We have heard that he’s cancelled speaking engagements.
This has been, perhaps, the most concerning thing in the wake of the suit. Sam is in charge of the most powerful consumer technology in history. No one elected him. OpenAI isn’t a public company, so he doesn’t even have to answer to retail investors. We allege that this company released an unsafe product that was rushed to market and that a 16-year-old died as a result. His company has seemingly admitted that ChatGPT needs to be reworked, but has not explained why it took a public lawsuit to get them to focus on this.
As for Sam, if you want to lead what could be the most powerful company the world has ever known, you have to be very clear with the public. You can’t hide behind your company’s blog. Sam should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market. It’s that simple.
Question 5: Have others reached out to you to discuss this case?
The response has been overwhelming.
Hundreds of people have reached out directly. Many are simply offering support to the Raines, but others have asked us to look into similar issues they're facing. Adam may be just the tip of the iceberg. (Because these cases are fact-intensive and we want to ensure our investigators fully understand the relevant chat logs, we have no announcements to make at this time.) And we have heard from whistleblowers as well.
The scope of the outreach keeps expanding. Regulators, educators, journalists, mental health professionals and other key stakeholders from the United States and abroad have all made contact. Even Senator Hawley weighed in, calling what OpenAI has done “unforgivable.” The case is a rare bipartisan issue, which could lead to enforcement actions and regulatory changes.
On behalf of the Raines, we can't express enough how much this means to them. The loss of a 16-year-old son is unimaginable, and their courage in becoming the public face of litigation that has captured America's attention is awe-inspiring. They're doing this because they don't want another parent to go through what they've been going through.
We're convinced that we'll achieve systematic change to OpenAI's products and the world will be safer as a result. The broad support the Raines have received sends a clear message: this isn't just one family's tragedy—it's a wake-up call for Sam Altman and OpenAI.
Question 6: Have you heard from OpenAI and Sam Altman’s lawyers?
Yes. They've retained Mayer Brown, with the team led by Edward D. Johnson.
We've spoken briefly with Johnson and are confident that, while OpenAI and Sam Altman will forcefully defend themselves, they will do so in a professional and respectful manner, given the circumstances of this case.
Mayer Brown is an excellent firm; we've been battling them in high-profile cases for many years. They defended Spokeo in Spokeo, Inc. v. Robins, the Supreme Court case that agreed with us that “intangible harms” could satisfy the standing requirement in privacy cases, and rejecting the argument that only “real world” harms counted. They also represented Facebook in our case under Illinois’ biometric law, In re Facebook Biometric Information Privacy Litigation, which we fought to the brink of trial before reaching a $650 million settlement for class members—the still-standing record for a single state in a privacy case.
Question 7: What are the next steps in this case?
Discovery starts this week. We'll be asking OpenAI and Sam Altman the hard questions: Why did they rush GPT-4o to market? What were the internal conversations surrounding AI safety? What did the team at OpenAI know and when did they know it? Why did it take the public filing of a lawsuit for them to admit that their system didn’t work, and to start discussing changes?
We also believe other third parties likely have legal responsibility. We've named them as John Does and will be issuing third-party discovery to determine who should be added to this case. That means subpoenas to Microsoft, OpenAI board members, investors, and others who may have played a role in the decisions that led to Adam's death. We’ve noticed that Microsoft seems to be trying to distance itself from OpenAI (their Head of AI publicly worried about the risks of "AI psychosis" just a week before we filed Adam’s case), and we have questions about that, too.
Time to get the answers, and the accountability, the Raines deserve.
Has any investigation been done to verify the reality of the victim or lawsuit? Has any reaearch been done to the plantiffs genuine motives or reality?