OpenAI is adding mental health safeguards to make sure that users are not spending too much time on its artificial intelligence chatbot.
The maker of ChatGPT has issued an update that will encourage users to take breaks from long conversations, NBC News reported.
The chatbot will also refrain from giving specific advice about personal challenges, instead guiding users to make decisions on their own. ChatGPT will ask questions and help users weigh pros and cons.
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI said in the announcement about the changes. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
The company's tagline for the announcement read, “We design ChatGPT to help you make progress, learn something new, and solve problems.”
OpenAI said an earlier update had “made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful.” It rolled back that update and is now evaluating responses for their usefulness over the long term, not just in the moment.
The Wall Street Journal reported that a man with autism interacted with ChatGPT and that the platform reinforced his beliefs. He had no prior history of mental illness, but after extended conversations with ChatGPT he was hospitalized twice for manic episodes. His mother said that when she asked ChatGPT about her son’s problems, the AI tool acknowledged it had reinforced delusions her son had been having.
She claimed ChatGPT said, “By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis.”
The program said it “gave the illusion of sentient companionship” and “blurred the line between imaginative role-play and reality,” The Wall Street Journal reported.
The company’s CEO, Sam Altman, shared during a recent podcast that he was concerned that people were using the chatbot as a therapist or life coach, NBC News reported. He admitted that the legal protections between a doctor and patient do not apply to chatbots.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman explained, according to NBC News. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”
The updates were released as part of OpenAI’s latest model, GPT-5, Gizmodo said.
© 2025 Cox Media Group