The mother of a 14-year-old who died by suicide is blaming an A.I. chatbot for her son’s death. Last week, Megan L. Garcia filed a lawsuit against Character.AI, calling its chatbot feature “dangerous and untested.” Garcia also alleged that it can “trick customers into handing over their most private thoughts and feelings,” The New York Times reported.

“I feel like it’s a big experiment, and my kid was just collateral damage,” she told the news outlet.

Sewell Setzer III — who was diagnosed with mild Asperger’s syndrome, anxiety and disruptive mood dysregulation disorder — spent months messaging an A.I. chatbot named after Daenerys Targaryen, a character from Game of Thrones. He began isolating himself, and his family and friends were unaware of how deeply involved he had become with the chatbot.

In a CBS Mornings segment, the network reported that Garcia is suing Character.AI and Google, alleging that her son became addicted to the platform and was in a monthslong virtual emotional and sexual relationship with an A.I. chatbot. A Google spokesperson said, in part, that the company is not and was not part of the development of Character.AI. Character.AI called the situation tragic, said its hearts go out to the family, and stressed that it takes the safety of its users very seriously. A disclaimer on each chat reads, “Reminder: everything Characters say is made up!”

Sewell knew the chatbot wasn’t real. Still, he confided in the A.I. character — expressing thoughts of suicide.

“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier,” he wrote in his journal.

“We want to acknowledge that this is a tragic situation, and our hearts go out to the family. We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform,” Jerry Ruoti, the head of trust and safety at Character.AI, said in a statement to The New York Times.

He added that the company prohibits “the promotion or depiction of self-harm and suicide” and that more safety measures would be added for minors.

Chelsea Harrison, a spokeswoman for Character.AI, confirmed that time limits will be added, along with a revised disclaimer that reads: “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”

Going forward, messages containing words related to self-harm or suicide will trigger pop-ups directing users to a suicide prevention hotline.

As Blavity reported, more than a dozen U.S. states have sued TikTok for failing to protect minors and for worsening a teen mental health crisis. A.I. chatbots, however, remain largely unregulated because the technology is still relatively new.

“By and large, it’s the Wild West out there,” Stanford researcher Bethanie Maples, who studies how A.I. companionship apps affect mental health, told the Times. “I don’t think it’s inherently dangerous. But there’s evidence that it’s dangerous for depressed and chronically lonely users and people going through change, and teenagers are often going through change.”

If you or someone you know is battling suicidal ideation, call the Suicide and Crisis Lifeline by dialing 988. You can also text “STRENGTH” to the Crisis Text Line at 741741 or go to 988lifeline.org.