ChatGPT Introduces Trusted Contact Feature to Strengthen User Safety in Mental Health Crises
OpenAI rolls out a new safety system in ChatGPT that can alert a trusted friend or family member if a user shows signs of self-harm or suicidal thoughts during conversations.

Artificial intelligence has become a quiet companion in everyday life for millions of users who turn to tools like ChatGPT for answers, guidance and even emotional support. As conversations around mental health grow more serious in the digital space, OpenAI has introduced a new safety feature designed to respond more responsibly when users express thoughts of self-harm or suicide.
The new feature, called Trusted Contact, aims to add an extra layer of protection during sensitive conversations. If the system detects signs of serious emotional distress or self-harm-related intent, it may alert a person chosen by the user in advance. This could be a close friend, a family member or a caregiver who can step in and offer immediate support.
Before any alert is sent, the system first notifies the user that a trusted contact may be informed. After that, a trained review team briefly checks the conversation to understand the context. OpenAI says this process is designed to be fast, usually completed within an hour, before any action is taken.
Once verified, the trusted contact may receive a message through email, SMS or app notification. However, the company has clarified that private chat content is not shared. Instead, the alert only indicates that concerning signals were detected, encourages the contact to reach out to the user, and provides links to mental health resources and support services.
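For readers who want a concrete picture of that flow, here is a minimal sketch in Python. OpenAI has not published its implementation, so every name below (DistressSignal, human_review_confirms, the channel list) is a hypothetical illustration of the steps described above, not actual OpenAI code.

```python
# Hypothetical sketch of the Trusted Contact alert flow described above.
# None of these names come from OpenAI; the real system is unpublished.

from dataclasses import dataclass


@dataclass
class DistressSignal:
    user_id: str
    context_summary: str  # reviewed by humans; never sent to the contact


def notify_user(user_id: str, message: str) -> None:
    print(f"[to user {user_id}] {message}")


def human_review_confirms(signal: DistressSignal) -> bool:
    # Stand-in for the trained review team, which reportedly checks
    # the conversation's context within about an hour before any action.
    return True


def send_alert(user_id: str, channel: str, payload: dict) -> None:
    print(f"[{channel} to trusted contact of {user_id}] {payload}")


def handle_signal(signal: DistressSignal) -> None:
    # Step 1: the user is told first that a contact may be informed.
    notify_user(signal.user_id, "Your trusted contact may be notified.")

    # Step 2: a human reviewer verifies the context before anything is sent.
    if not human_review_confirms(signal):
        return

    # Step 3: the alert shares no chat content -- only a notice that
    # concerning signals were detected, plus support resources.
    alert = {
        "message": "Concerning signals were detected. Please reach out.",
        "resources": ["links to mental health resources and services"],
    }
    for channel in ("email", "sms", "app notification"):
        send_alert(signal.user_id, channel, alert)


handle_signal(DistressSignal("user-123", "reviewed internally only"))
```

The key property the sketch tries to capture is ordering: the user is informed and a human review completes before any contact is alerted, and the alert payload carries no conversation text.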
The feature is currently available only to adult users, with age limits set at 18 and above in most regions and 19 in South Korea. It applies only to personal ChatGPT accounts, while business, enterprise and educational accounts are excluded from this rollout. Users can choose and invite their trusted contact through settings, and the invite must be accepted within a week.
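As a rough illustration of those eligibility and invitation rules, the sketch below encodes the age thresholds and the one-week acceptance window. The function names and the timestamp-based check are assumptions; the article describes the rules but not how they are enforced.

```python
# Hypothetical encoding of the rollout rules described above: adults only
# (18+, or 19 in South Korea), personal accounts only, invites valid one week.

from datetime import datetime, timedelta, timezone

INVITE_WINDOW = timedelta(weeks=1)


def is_eligible(age: int, region: str, account_type: str) -> bool:
    # Business, enterprise and educational accounts are excluded.
    min_age = 19 if region == "South Korea" else 18
    return account_type == "personal" and age >= min_age


def invite_is_open(sent_at: datetime) -> bool:
    # The invitation lapses if not accepted within a week of being sent.
    return datetime.now(timezone.utc) - sent_at <= INVITE_WINDOW


print(is_eligible(19, "South Korea", "personal"))  # True
print(is_eligible(18, "South Korea", "personal"))  # False
print(is_eligible(30, "US", "enterprise"))         # False
print(invite_is_open(datetime.now(timezone.utc) - timedelta(days=8)))  # False
```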
This move comes amid growing global concern about how AI systems handle emotionally vulnerable users. Public discussion and legal action have followed cases in which teenagers reportedly shared emotional struggles with AI tools before tragic outcomes. These incidents have pushed companies like OpenAI to rethink their safety mechanisms and strengthen intervention systems.
OpenAI has also revealed that a small percentage of users each week show signs that may indicate self-harm risk. While the percentage is small, the global scale of AI usage means thousands of individuals worldwide could be affected, which makes early detection and human intervention increasingly important.
To build this system responsibly, OpenAI collaborated with mental health professionals, researchers and organizations including the American Psychological Association and experts across dozens of countries. The goal, according to the company, is not to replace human care but to bridge the gap when someone may be in immediate emotional danger.
For all its promise, the Trusted Contact feature is not a complete safeguard. It remains optional, meaning users can choose not to activate it, and there are practical gaps, such as users creating multiple accounts or skipping setup entirely. Still, OpenAI says it will continue improving its safeguards and will direct users to local emergency helplines whenever necessary.
As AI becomes more deeply woven into daily conversations, this update reflects a growing effort to balance technology with human safety, especially in moments when words may signal more than just a passing thought.