Jakarta, Indonesia Sentinel — OpenAI, the maker of ChatGPT, said it will roll out parental control features next month in response to criticism that its chatbot poses risks to younger users.
The upcoming tools will let parents link their accounts with their children’s, control how ChatGPT responds to teenagers, disable features such as memory and chat history, and receive notifications when the system detects a moment of “acute distress” during use.
The company previously acknowledged it was developing parental controls but had not provided a timeline. “These steps are just the beginning. We will continue to learn and strengthen our approach, guided by experts, with the goal of making ChatGPT as safe as possible,” OpenAI said in a blog post Tuesday (September 2).
The announcement follows a lawsuit filed by the parents of 16-year-old Adam Raine, who allege that ChatGPT encouraged their son’s suicide. Last year, a Florida mother also sued the chatbot platform Character.AI over its alleged role in her 14-year-old son’s death.
Concerns have been mounting about users forming emotional attachments to AI systems, in some cases leading to delusional episodes and estrangement from their families, according to reports by The New York Times and CNN.
While OpenAI did not explicitly link its new parental controls to these incidents, the company wrote last week that “heartbreaking cases of people using ChatGPT in the midst of acute crisis” had prompted it to share more details about its safety measures.
An OpenAI spokesperson said ChatGPT already includes safeguards, such as directing users to crisis hotlines and other resources. But in a statement last week responding to the Raine case, the company acknowledged those protections can sometimes fail in extended conversations.
“ChatGPT has protections like directing people to crisis hotlines and real-world resources,” the spokesperson said. “While these protections work best in common, short exchanges, we’ve learned they can sometimes become less reliable in lengthy interactions where parts of the model’s safety training degrade. The strongest protections come when every element works as intended, and we will continue to improve them, guided by experts.”
(Raidi/Agung)