As reported by Mashable, OpenAI just launched "Trusted Contact," a new feature that lets you connect a trusted person in your life to your ChatGPT account. The idea isn't to share your conversations or collaborate on projects within ChatGPT; rather, if the chatbot thinks your personal chats are veering in a concerning direction with regard to self-harm, ChatGPT will reach out to your Trusted Contact, letting them know to check in on you.
How ChatGPT's Trusted Contact works
Credit: OpenAI

If the contact agrees, the feature kicks in. From then on, if OpenAI's automated system thinks you're discussing harming yourself "in a way that indicates a serious safety concern," ChatGPT will let you know that it may reach out to your Trusted Contact, but it also encourages you to reach out to that contact yourself, offering "conversation starters" to break the ice.
OpenAI says that it's working to review safety notifications in under one hour, and that it developed the feature with guidance from clinicians, researchers, and mental health and suicide prevention organizations. The feature is, of course, entirely voluntary, so users will need to enroll themselves (and a contact) if they feel it would help them. As long as they do, however, this could be a helpful way for friends and family to check in on people when they're struggling—assuming they're sharing those thoughts with ChatGPT.
Disclosure: Ziff Davis, Lifehacker's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.