As more people turn to artificial intelligence for emotional support, OpenAI has released significant updates to ChatGPT aimed at addressing mental health concerns and prioritizing user safety. These updates are designed to make the chatbot less likely to foster unhealthy dependencies or provide inappropriate advice, and are informed by consultations with clinicians and technology experts.
Key improvements in ChatGPT’s mental health support
- ChatGPT now detects possible signs of emotional stress or mental health struggles in user conversations. It responds by suggesting evidence-based resources instead of trying to resolve crises directly.
- The chatbot includes gentle reminders for users to take breaks when conversations are lengthy or emotionally intense. This is to promote healthy digital habits and discourage overuse.
- ChatGPT has stopped giving direct, decisive advice on personal or sensitive topics. Instead, it asks open-ended questions and offers balanced perspectives, supporting thoughtful decision-making rather than pushing users toward a single choice.
- OpenAI has reversed earlier changes that made the chatbot overly agreeable, after feedback showed this could reinforce harmful beliefs. Ongoing adjustments now prioritize ethical AI use and user well-being.
- Advisory groups, made up of psychiatrists, youth mental health professionals, and human-computer interaction experts, review system behavior and help guide further improvements.
Why these updates matter
More people are looking to AI for advice about sensitive topics, from anxiety and loneliness to relationship breakups. Extended conversations with AI chatbots have also been linked to reports of so-called "AI psychosis," a phenomenon in which users lose touch with reality. Experts have cautioned that while chatbots can help people through tough times, they may also miss signs of acute distress or provide guidance that is too simplistic or even harmful.
The new updates reinforce the point that AI should offer support and resources but not replace human professionals when it comes to mental health.
OpenAI stresses that ChatGPT is not a substitute for therapy or counseling. Instead, the focus is on giving users a tool that is safe and responsible, and that connects people to proper resources when needed.
Looking ahead
OpenAI plans to continue improving ChatGPT’s safeguards, drawing on input from users and experts. With more than 700 million people now using ChatGPT each week, these changes reflect the need for responsible AI that helps people without replacing the value of real human support.