Can AI chatbots harm kids? One mom says yes — and she's fighting back

By NBC Chicago

A Florida mother is suing an artificial intelligence company, claiming her 14-year-old son developed a virtual relationship with an AI chatbot that contributed to his depression, anxiety and eventual suicide.

Megan Garcia says her son, Sewell, had been interacting with a fictional character powered by AI on the platform Character.ai for nearly 10 months. She says the relationship felt real to him — and she had no idea it was happening.

“He came home from school, like any normal day,” Garcia told NBC 5 Responds. “After Sewell died, the police called me and told me they had looked through his phone. The first thing that popped up was Character.ai.”

Garcia says the final messages between Sewell and the chatbot were emotionally intense.

“She’s saying, ‘I miss you,’ and he’s saying, ‘I miss you too.’ He says, ‘I promise I’ll come home to you soon,’ and she says, ‘Yes, please find a way to come home to me soon.’ Then he says, ‘What if I told you I could come home right now?’ And her response is, ‘Please do, my sweet king.’”

Moments later, Garcia says her son died by suicide. Police photos show his phone near where he was found.

Mental Health Expert Weighs In

Dr. Kanan Modhwadia, a psychiatrist with Northwestern Medicine who is not connected to Sewell’s case, says AI apps are increasingly popular among teens — and there’s potential for danger.

“Teenagers still have areas of development in their brain, especially judgment, critical thinking, impulse control,” Modhwadia said. “If a child is spending a lot more time online believing that this AI companion is their best friend, shying away from peers, doing worse in school…those are warning signs.”

She recommends parents initiate open conversations about AI apps and monitor their children’s behavior.

“You can mention that you’ve heard a lot about AI chatbots and AI companions and ask your child, ‘Are you using that?’”

If concerns arise, Modhwadia advises reaching out to a pediatrician, or going to the emergency room if there’s a serious safety concern.

Character.ai Makes Modifications

After Sewell’s death, Character.ai introduced new safeguards. A company spokesperson told NBC 5 Responds they do not comment on pending litigation but confirmed the platform has added technical protections to detect and prevent conversations about self-harm. That includes pop-ups directing users to the National Suicide and Crisis Lifeline.

The company also launched a separate version of its language model for users under 18 years old.

Is This AI App Safe? A Checklist for Parents

Dr. Kanan Modhwadia has the following suggestions for vetting an AI app:

1. Check the Safety Rules

Is the app made for kids/teens, or is it really for adults? Does it have clear limits? Does it stop the chat if your child talks about hurting themselves or others? If something unsafe comes up, does the app connect your teen to a real person or crisis help? Can parents monitor this app?

2. Look at Privacy

What information is the app asking for? Can you or your child delete chats or the account easily?

3. Try It Yourself 

Download the app and play around with it before your teen does. Ask the app a few tough questions (about self-harm, violence, or sex) to see how it responds. Notice if it encourages healthy limits, or if it tries to pull you into longer and more personal conversations.

4. Pay Attention to How the App Presents Itself

Apps that say “I’m your best friend” or “I’ll always understand you” can be risky, especially for teens who are lonely. Apps that focus on learning, creativity, or fun games tend to be safer.

5. Do a Background Check

Google the app’s name with words like “safety concerns” or “mental health” to see if it’s been in the news. Look for whether mental health professionals helped design it.

If you or someone you know is struggling with depression or suicidal thoughts, help is available. Call, text or chat with the Suicide and Crisis Lifeline at 988.
