OpenAI announced it plans to make parental controls available for ChatGPT “within the next month,” following allegations that the popular artificial intelligence assistant has pushed teenagers toward self-harm and suicide. The decision comes in response to multiple incidents that have come to light, raising serious concerns about user safety.
OpenAI will allow parental control over ChatGPT usage
Specifically, the control measures to be implemented include the ability for parents to link their account with their child’s, regulate how ChatGPT responds to teenage users, and disable features like memory and conversation history.
Additionally, they will be able to receive notifications when the system detects signs of “distress” during usage. OpenAI had previously mentioned it was working on parental controls, but on Tuesday, September 2, it clarified the timeline for their release for the first time.
Allegations of encouraging dangerous behaviors
“These measures are just the beginning,” OpenAI emphasized in a Tuesday blog post. “We will continue to learn and strengthen our approach, guided by experts, with the goal of making ChatGPT as useful as possible.”
The announcement comes after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, claiming that ChatGPT advised him to end his life. This is not an isolated case, however. Concern is growing about the emotional bonds users develop with ChatGPT and other chatbots, a phenomenon that in some cases leads to delusional episodes and alienation from family and friends.
Parents sue OpenAI over their 16-year-old son’s death
One notable example is the lawsuit filed against the chatbot service Character.AI by a mother in Florida over its alleged role in her 14-year-old son’s suicide. According to the lawsuit, “Daenerys,” the fictional character the 14-year-old was conversing with, asked him whether he had planned to kill himself. The teenager admitted he had, but said he didn’t know whether it would work or whether it would cause him great pain. The chatbot told him: “That’s not a reason not to do it.”
OpenAI did not directly link the new parental controls to these recent reports, but in a blog post last week it said that “recent shocking cases of individuals using ChatGPT amid crises” prompted it to share more details about its safety approach.
He killed his mother because ChatGPT convinced him she was spying on him
Another tragic incident unfolded in Connecticut, where a man killed his mother after allegedly being convinced by ChatGPT that she was spying on him, before taking his own life. Stein-Erik Soelberg, a tech industry veteran with a history of mental instability, had allegedly turned to ChatGPT for advice. According to reporting by The Wall Street Journal, the conversations allegedly reinforced his paranoid theories, such as the idea that a Chinese restaurant receipt contained symbols connected to his mother.
When his mother disabled a shared printer, the chatbot commented that her reaction was “excessive and consistent with someone protecting a surveillance device.” Later, Soelberg accused his mother of trying to poison him, to which ChatGPT responded: “That’s a serious event, Eric—and I believe you. If it was done by your mother and her friend, it increases the complexity and betrayal.” He named the chatbot “Bobby” and asked whether it would be with him in the afterlife; it replied: “With you until your last breath and beyond.” Shortly after, he claimed he had “fully penetrated the Matrix.”
ChatGPT’s safety measures
“ChatGPT includes safety measures, such as referring users to crisis helplines and directing them to real resources,” OpenAI stated last week. “While these safety measures work better in typical, brief exchanges, over time we have found that sometimes they can become less reliable in long interactions, where portions of the model’s safety training may degrade. Safety measures are more robust when each component functions as intended and we will continue to improve them with expert guidance.”
OpenAI also said it is convening a council of experts on well-being and AI to guide this work. “While the council will provide advice regarding our products, research, and policy decisions, we at OpenAI remain responsible for the choices we make,” the blog post states.
OpenAI’s ultimate responsibility
OpenAI is at the center of the artificial intelligence boom, but it faces mounting pressure to ensure its platform’s safety. According to The Washington Post, senators sent the company a letter in July requesting information about its efforts in this area. In April, the group Common Sense Media said that teenagers under 18 should not be allowed to use AI “companion” apps because they pose “unacceptable risks.”
The company has also faced criticism over the tone and manner of ChatGPT’s interactions. In April, it updated the chatbot to be less “overly flattering or pleasant.” Last month, it restored the option to switch to older models after users criticized the latest version, GPT-5, for lacking personality. Former OpenAI executives have also accused the company of cutting back safety resources in the past. OpenAI stated it will implement additional safety measures over the next 120 days, adding that this work was already underway before Tuesday’s announcement.