An estimated 0.15% of the chatbot’s 800 million weekly users show suicidal planning indicators, according to the company
OpenAI has announced a new safety update to its popular ChatGPT model after an internal analysis found that more than a million users discuss suicide with the chatbot each week. The changes are intended to improve the AI’s ability to recognize signs of distress and respond appropriately.
In a statement on Monday, the company said that an estimated 0.15% of weekly ChatGPT users have engaged in conversations containing “explicit indicators of potential suicidal planning or intent.” A further 0.05% of messages are said to contain “explicit or implicit indicators of suicidal ideation or intent.”
Earlier this month, OpenAI CEO Sam Altman claimed that ChatGPT has more than 800 million weekly active users, meaning that, by the company’s latest figures, over 1.2 million people discuss suicide with the chatbot every week. The 0.05% figure applies to messages rather than users, so it does not translate directly into a count of people.
The company also claimed that around 0.07% of its weekly users (roughly 560,000 people) and 0.01% of messages show “possible signs of mental health emergencies related to psychosis or mania.” It further noted that some users have become emotionally over-reliant on ChatGPT, with about 0.15% of weekly active users (around 1.2 million) exhibiting behavior that indicates “heightened levels” of emotional attachment to the chatbot.
The company said it has partnered with dozens of mental health experts from around the world to update the chatbot so it can more reliably recognize signs of mental distress, provide better responses, and guide users to real-world help.
In conversations involving delusional beliefs, the company said it is teaching ChatGPT to respond “safely” and “empathetically” while avoiding affirming ungrounded beliefs.
The company’s announcement comes amid rising concerns over the increasing use of AI chatbots such as ChatGPT and their effect on people’s mental health. Psychiatrists and other medical professionals have expressed alarm about an emerging trend of users developing dangerous delusions and paranoid thoughts after prolonged conversations with AI chatbots, which tend to affirm and reinforce users’ beliefs. The phenomenon has been dubbed “AI psychosis” by some.