OpenAI to unveil parental controls on ChatGPT after teen’s suicide

OpenAI has revealed plans to introduce parental control features on ChatGPT within the next month, amid growing concerns over the chatbot’s potential involvement in self-harm cases among teenagers.

According to the company, the upcoming tool will enable parents to link their accounts with their children’s, limit certain functions like memory and chat history, adjust how the chatbot responds, and receive alerts if the system detects signs of “acute distress” during use.

The development follows a lawsuit filed by the parents of 16-year-old Adam Raine, who alleged that ChatGPT contributed to their son’s death by suicide.

Other chatbot platforms, including Character.AI, have faced similar legal challenges after accusations of providing harmful guidance to minors.

Although OpenAI did not directly tie the new feature to these lawsuits, it admitted that “recent heartbreaking incidents” had influenced its decision to strengthen safety measures.

The company acknowledged that existing protections, such as referring users to crisis helplines and support services, work reliably in brief exchanges but can degrade over the course of longer conversations.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions…

“We will continually improve on them, guided by experts,” an OpenAI spokesperson said.

“These steps are only the beginning.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” the company added in a blog post on Tuesday.
