OpenAI has revealed plans to introduce new parental controls on its AI chatbot ChatGPT within the next month, in response to growing concerns over its potential influence on teenage mental health and self-harm.
According to the company, the upcoming feature will enable parents to link their accounts with their children’s, restrict access to functions such as memory and chat history, customize how the chatbot responds, and receive alerts if the system detects indications of “acute distress” during interactions.
“These steps are only the beginning.
We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” OpenAI stated in a blog post on Tuesday.
The move comes in the wake of legal action filed by the parents of 16-year-old Adam Raine, who allege that ChatGPT played a role in their son’s suicide.
Other chatbot services, such as Character.AI, have also faced lawsuits amid accusations that their platforms dispensed harmful advice to minors.
Although OpenAI did not directly attribute the new controls to the Raine case, it acknowledged that "recent heartbreaking incidents" had shaped its commitment to stronger safety features. The company emphasized that while existing measures, including directing users to crisis hotlines and support services, work well in brief exchanges, they may falter during extended conversations.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions…
“We will continually improve on them, guided by experts,” an OpenAI spokesperson said.