OpenAI to Add Parental Controls to ChatGPT after Teen Suicide Lawsuit

Wednesday, September 03, 2025

SAEDNEWS: OpenAI said it will introduce parental controls for ChatGPT, following mounting concern over the impact of artificial intelligence on young people's mental health.


The California-based company announced the move in a blog post on Tuesday, describing the tools as support for families “in setting healthy guidelines that fit a teen’s unique stage of development.”

The announcement came a week after Matt and Maria Raine, a California couple, sued OpenAI, alleging its chatbot played a role in the suicide of their 16-year-old son, Adam.

The parents claim ChatGPT reinforced Adam’s “most harmful and self-destructive thoughts” and argue his death was a “predictable result of deliberate design choices.”

OpenAI, which has expressed condolences, made no reference to the lawsuit in its announcement of the parental controls.

Jay Edelson, the family’s lawyer, dismissed the new measures as an attempt to “shift the debate.”

“They say that the product should just be more sensitive to people in crisis, be more ‘helpful’, show a bit more ‘empathy’, and the experts are going to figure that out,” Edelson said.

“We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”

Concerns over the use of AI by people in psychological distress have grown as chatbots are increasingly used as substitutes for therapists or companions.

A study in Psychiatric Services last month found ChatGPT, Google’s Gemini and Anthropic’s Claude generally followed clinical guidance when addressing high-risk suicide queries, but showed inconsistency when handling medium-risk cases.

“These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the authors wrote.

Hamilton Morrin, a psychiatrist at King’s College London who studies AI-related psychosis, welcomed the parental controls but urged broader safeguards.

“That said, parental controls should be seen as just one part of a wider set of safeguards rather than a solution in themselves,” Morrin told Al Jazeera.

“Broadly, I would say that the tech industry’s response to mental health risks has often been reactive rather than proactive. There is progress, but companies could go further in collaborating with clinicians, researchers, and lived-experience groups to build systems with safety at their core from the outset, rather than relying on measures added after concerns are raised.”