Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress.

OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen’s account.

Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.

Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

Jay Edelson, the family’s attorney, on Tuesday described the OpenAI announcement as “vague promises to do better” and “nothing more than OpenAI’s crisis management team trying to change the subject.”

Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market,” Edelson said.

Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, directing them instead to expert resources. Meta already offers parental controls on teen accounts.

A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to queries about suicide.

The study by researchers at the RAND Corporation found a need for “further refinement” in ChatGPT, Google’s Gemini and Anthropic’s Claude. The researchers did not study Meta’s chatbots.

The study’s lead author, Ryan McBain, said Tuesday that “it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps.”

“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a senior policy researcher at RAND and assistant professor at Harvard University’s medical school.
