Earlier this month, the company unveiled a wellness council to address these concerns, although critics noted that the council did not include a suicide prevention expert. OpenAI also recently rolled out parental controls for children who use ChatGPT, and the company says it is building an age prediction system to automatically detect children using ChatGPT and apply a stricter set of age-related safeguards.
Strange but impactful conversations
The data shared Monday appears to be part of the company’s effort to demonstrate progress on these issues, although it also sheds light on how profoundly AI chatbots may be affecting the mental health of the general public.
In a blog post about the newly released data, OpenAI says conversations on ChatGPT that raise concerns about “psychosis, mania, or suicidal thoughts” are “extremely rare” and therefore difficult to measure. The company estimates that about 0.07 percent of users active in a given week and 0.01 percent of messages indicate possible signs of mental health emergencies related to psychosis or mania. As for emotional attachment, the company estimates that about 0.15 percent of users active in a given week and 0.03 percent of messages indicate potentially heightened levels of emotional attachment to ChatGPT. Even at these rates, with ChatGPT reportedly serving around 800 million weekly active users, the figures translate to hundreds of thousands of people.
OpenAI also claims that in an evaluation of more than 1,000 challenging mental health-related conversations, its new GPT-5 model complied with desired behaviors 92 percent of the time, compared to 27 percent for a previous GPT-5 model released on August 15. The company also says the latest version of GPT-5 better maintains its safeguards in long conversations; OpenAI has previously acknowledged that those safeguards can become less effective as conversations grow longer.
Additionally, OpenAI says it is adding new evaluations to try to measure some of the most serious mental health issues ChatGPT users face. The company says its baseline safety testing for its AI language models will now include benchmarks for emotional dependency and non-suicidal mental health emergencies.
Despite these ongoing concerns about mental health, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had relaxed ChatGPT’s content restrictions in February but drastically tightened them after the August lawsuit. Altman explained that OpenAI had made ChatGPT “quite restrictive to ensure we were careful about mental health issues,” but acknowledged that this approach made the chatbot “less useful/enjoyable for many users who did not have mental health issues.”
If you or someone you know is suicidal or distressed, call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will connect you to a local crisis center.
