TALLAHASSEE, Fla. (AP) — A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment — at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company’s chatbots pushed a teenage boy to kill himself.

The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her 14-year-old son Sewell Setzer III fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show “Game of Thrones.” In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.

“We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,” the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a “chilling effect” on the AI industry.

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants’ free speech claims, saying she’s “not prepared” to hold that the chatbots’ output constitutes speech “at this stage.”

Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the “speech” of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was “aware of the risks” of the technology.

“We strongly disagree with this decision,” said Google spokesperson José Castañeda. “Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”

No matter how the lawsuit plays out, Lidsky says the case is a warning of “the dangers of entrusting our emotional and mental health to AI companies.”

“It’s a warning to parents that social media and generative AI devices are not always harmless,” she said.

___

EDITOR’S NOTE — If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

___ Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.
