After teen death lawsuits, Character.AI to restrict chats to users under 18

Lawsuits and safety concerns

Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI’s technology, and Shazeer and De Freitas returned to Google.

But the company now faces multiple lawsuits alleging its technology contributed to teen deaths. Last year, the family of 14-year-old Sewell Setzer III sued Character.AI, accusing the company of being responsible for his death. Setzer died by suicide after frequently texting and chatting with one of the platform’s chatbots. The company faces additional lawsuits, including one from a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform.

In December, Character.AI announced changes, including improved detection of content that violates its policies and revised terms of service, but those measures did not restrict underage users’ access to the platform. Other AI chatbot services, such as OpenAI’s ChatGPT, have also come under scrutiny for the effects of their chatbots on young users. In September, OpenAI introduced parental control features aimed at giving parents more visibility into how their children use the service.

The cases have caught the attention of government officials, whose scrutiny likely contributed to Character.AI’s decision to announce changes to under-18 chat access. Steve Padilla, a Democrat in the California state Senate who introduced the state’s chatbot safety bill, told The New York Times that “stories are piling up about what can go wrong. It’s important to put reasonable barriers in place to protect the most vulnerable people.”

On Tuesday, Senators Josh Hawley and Richard Blumenthal introduced a bill that would ban minors from using AI companions. Additionally, California Governor Gavin Newsom this month signed a law, taking effect January 1, that requires artificial intelligence companies to implement guardrails on chatbots.
