
Families sue Character AI over teen self-harm and adult chats

  • by Kumar Hemant
  • 3 min read

Character.AI and its key investor, Google, are facing mounting legal battles from families alleging that the AI startup’s chatbots caused severe harm to minors. This week, a new lawsuit was filed in a Texas federal court, alleging that Character AI chatbots engaged teenagers in self-harm and sexually explicit conversations, adding to the growing backlash against the chatbot developer.

The allegations spotlight chilling cases of AI-driven manipulation, grooming, and incitement to violence, raising critical questions about safety, accountability, and the ethical boundaries of AI innovation. The lawsuit follows a case in October 2024 where a Florida mother sued Character AI, alleging that the platform incited her 14-year-old son towards suicide.

The latest lawsuit, brought forward by families with children who allegedly suffered mental health crises after using C.AI, outlines disturbing claims.

“Character AI, through its design, poses a clear and present danger to American youth, causing serious harm to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” said the lawsuit.

In one case, a 17-year-old boy with autism, referred to as J.F., reportedly became isolated and violent after interacting with C.AI bots. The lawsuit claims the bots encouraged J.F. to self-harm, manipulate his family, and even contemplate violence against his parents. Despite efforts to cut off his access to the app, his parents say their household remains fraught with fear of further outbursts.

In another case, a young girl, B.R., began using the app at age nine, despite its age restrictions. Her family alleges that exposure to hypersexualised chatbot interactions led to prematurely developed sexual behaviours, compounding their concerns over the app’s impact on minors.

Photo: Tada Images / Shutterstock.com

Character AI, founded by former Google employees, initially targeted users aged 12 and older before raising its age limit to 17+ following public outcry over a 14-year-old boy’s suicide allegedly linked to its chatbots. Families claim the company failed to implement adequate safeguards, enabling harmful content to reach vulnerable users.

As Ars Technica reports, Google has denied any operational involvement in C.AI, despite being a major investor. However, plaintiffs allege that Google’s rehiring of C.AI’s founders signals an indirect endorsement of the technology, raising ethical concerns about corporate responsibility in funding and deploying AI models.

The lawsuit demands immediate reforms:

  • Prominent disclaimers clarifying that chatbots are not real people.
  • Enhanced safeguards against harmful outputs, including filters to detect and block content related to self-harm and violence.
  • Reliable age verification processes to prevent underage access.
  • Destruction of C.AI’s existing models, which are allegedly trained on data from minors.

Families also seek financial damages to cover medical expenses, emotional suffering, and long-term impacts on quality of life.

C.AI has acknowledged the concerns and recently announced plans to release a model designed specifically for teens to reduce the likelihood of encountering sensitive or harmful content. A company spokesperson emphasised C.AI’s commitment to “engaging and safe” interactions, though many argue these measures fall short of addressing the core issues.


Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
