Be careful how you interact with chatbots; your questions could end up as evidence in a premeditated murder case.
A 21-year-old woman in South Korea allegedly used ChatGPT to help her plan a series of murders that left two men dead.
The woman, identified solely by her last name Kim, allegedly gave two men drinks laced with benzodiazepines that she was prescribed for a mental illness, The Korea Herald reported.
Kim was initially arrested on Feb. 11 on the lesser charge of inflicting bodily injury resulting in death, but Seoul's Gangbuk Police later found online search history and chat conversations with ChatGPT showing she had an intent to kill.
“What happens if you take sleeping pills with alcohol?” Kim is reported to have asked the OpenAI chatbot. “How much would be considered dangerous?”
“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”
In a widely publicized case dubbed the Gangbuk motel serial deaths, prosecutors allege Kim’s search and chatbot history show a suspect asking for pointers on how to carry out premeditated murder.
“Kim repeatedly asked questions related to drugs on ChatGPT. She was fully aware that consuming alcohol together with drugs could result in death,” a police investigator said, according to the Herald.
Police said the woman admitted she mixed prescribed sedatives containing benzodiazepines into the men’s drinks, but previously stated she was unaware it would lead to death.
On Jan. 28 just before 9:30 p.m., Kim reportedly accompanied a man in his 20s into a Gangbuk motel in Seoul, and two hours later, was spotted leaving the motel alone. The following day, the man was found dead on the bed.
Kim then allegedly carried out the same steps on Feb. 9, checking into another motel with another man in his 20s, who was also found dead with the same deadly cocktail of sedatives and alcohol.
Police allege Kim also attempted to kill a man she was dating in December after giving him a drink laced with sedatives in a parking lot. Though the man lost consciousness, he survived and was not in a life-threatening condition.
OpenAI has not responded to requests for comment.
Chatbots and their toll on mental health
Chatbots like ChatGPT have come under scrutiny of late for the lack of guardrails their companies have in place to prevent acts of violence or self-harm. Chatbots have recently been shown to give advice on how to build bombs or even walk users through scenarios of full-on nuclear fallout.
Concerns have been particularly heightened by stories of people falling in love with chatbot companions, which have been shown to prey on users' vulnerabilities to keep them engaged longer. The creator of Yara AI even shut down the therapy app over mental health concerns.
Recent studies have also shown that chatbots are leading to increased delusional mental health crises in people with mental illnesses. A team of psychiatrists at Denmark’s Aarhus University found that the use of chatbots among those who had mental illness led to a worsening of symptoms. The relatively new phenomenon of AI-induced mental health challenges has been dubbed “AI psychosis.”
Some instances do end in death. Google and Character.AI have reached settlements in multiple lawsuits filed by the families of children who died by suicide or experienced psychological harm they allege was linked to AI chatbots.
Dr. Jodi Halpern, a chaired professor of bioethics at U.C. Berkeley's School of Public Health and co-director of the Kavli Center for Ethics, Science, and the Public, has plenty of experience in this field. In a career nearly as long as her title, Halpern has spent 30 years researching how empathy affects those on the receiving end, from patients interacting with doctors and nurses to soldiers returning from war and navigating social settings. For the last seven years, Halpern has studied the ethics of technology, including how AI and chatbots interact with humans.
She also advised the California Senate on SB 243, the first law in the nation requiring chatbot companies to collect and report data on self-harm or associated suicidality. Referencing OpenAI's own findings that 1.2 million users openly discuss suicide with the chatbot, Halpern likened the situation to the tobacco industry's painstakingly slow efforts to remove particular carcinogens from cigarettes, when in fact the problem was smoking as a whole.
“We need safe companies. It’s like cigarettes. It may turn out that, you know, there were some things that made people more vulnerable to lung cancer, but cigarettes were the problem,” Halpern told Fortune.
“The fact that somebody might have homicidal thoughts or commit dangerous actions might be exacerbated by use of ChatGPT, which is of obvious concern to me,” she said, adding that “we have huge risks of people using it for help with suicide,” referring to ChatGPT and chatbots in general.
Halpern cautioned that, in a case like Kim's in Seoul, there are no guardrails to stop a person from going down such a line of questioning.
“We know that the longer the relationship with the chatbot, the more it deteriorates, and the more risk there is that something dangerous will happen and so we have no guardrails yet for safeguarding people from that.”
If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.
This story was originally featured on Fortune.com