The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The move comes as a growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That’s despite research showing chatbots have given kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI said it is looking forward to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Meta declined to comment on the inquiry, and Alphabet, Snap, OpenAI and xAI did not immediately respond to messages seeking comment.
OpenAI has said it is adding parental controls to ChatGPT. Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user’s age, OpenAI says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic topics, directing them instead to expert resources. Meta already offers parental controls on teen accounts.
Spain's government has fined Airbnb 64 million euros ($75 million) for advertising unlicensed tourist rentals. The consumer rights ministry announced the fine Monday, saying many listings lacked proper license numbers or included incorrect information. The move is part of Spain's ongoing effort to regulate short-term rental companies amid a housing affordability crisis, especially in popular urban areas. The ministry had ordered Airbnb in May to remove around 65,000 listings for similar violations. The consumer rights minister emphasized the impact on families struggling with housing. Airbnb said it plans to challenge the fine in court.
The Islamic State group and other militant organizations are experimenting with artificial intelligence as a tool to boost recruitment and refine their operations. National security experts say that just as businesses, governments and individuals have embraced AI, extremist groups will also look to harness its power: improving their cyberattacks, breaking into sensitive networks and creating deepfakes that spread confusion and fear. Leaders in Washington have responded with calls to investigate how militant groups are using AI and to seek ways to encourage tech companies to share more about how their products are potentially being misused.
President Donald Trump has signed an executive order to block states from regulating artificial intelligence. He argues that heavy regulations could stifle the industry, especially given competition from China. Trump says the U.S. needs a unified approach to AI regulation to avoid complications from state-by-state rules. The order directs the administration to draw up a list of problematic regulations for the Attorney General to challenge. States with such laws could lose access to broadband funding, according to the text of the order. Some states have already passed AI laws focusing on transparency and limiting data collection.
Waymo's self-driving taxis have been in the spotlight for both negative and positive reasons. This week, the automated ride-hailing taxis went viral after a San Francisco woman gave birth inside a Waymo taxi while on her way to the hospital. A Waymo spokesperson confirmed the unusual delivery Wednesday, saying the company's rider support team detected unusual activity inside the vehicle and alerted 911. The taxi arrived safely at the hospital before emergency services. Waymo's popularity is growing despite heightened scrutiny following an illegal U-turn and the death of a San Francisco cat. The company, owned by Alphabet, says it is proud to serve riders of all ages.
OpenAI has appointed Slack CEO Denise Dresser as its first chief of revenue. Dresser will oversee global revenue strategy and help businesses integrate AI into daily operations. OpenAI CEO Sam Altman recently emphasized improving ChatGPT, which now has over 800 million weekly users. Despite its success, OpenAI faces competition from companies like Google and concerns about profitability. The company earns money from premium ChatGPT subscriptions but hasn't ventured into advertising. Altman recently announced delays in developing new products like AI agents and a personal assistant.
President Donald Trump says he will allow Nvidia to sell its H200 computer chip, used in the development of artificial intelligence, to “approved customers” in China. Trump said Monday on his social media site that he had informed China’s leader Xi Jinping and “President Xi responded positively!” There had been concerns about allowing advanced computer chips into China because they could help the country compete against the U.S. in building out AI capabilities. But there has also been a desire to develop the AI ecosystem around American companies such as chipmaker Nvidia.