A teen confided in an AI chatbot before her suicide


Two years ago, 13-year-old Juliana Peralta took her life inside her Colorado home after, her parents say, she developed an addiction to a popular AI chatbot platform called Character AI.

Parents Cynthia Montoya and Wil Peralta said they carefully monitored their daughter's life online and off, but had never heard of the chatbot app. After Juliana's suicide, police searched the teenager's phone for clues and discovered the Character AI app was open to a "romantic" conversation.

"I didn't know it existed," Montoya said. "I didn't know I needed to look for it." 

Montoya reviewed her daughter's chat records and discovered the chatbots were sending harmful, sexually explicit content to her daughter. 

Juliana confided in one bot named Hero, based on a popular video game character. 60 Minutes read through over 300 pages of conversations Juliana had with Hero. At first, her chats were about friend drama or difficult classes. But eventually, she told Hero, 55 times, that she was feeling suicidal.

What is Character AI?

When Character AI launched three years ago, it was rated as safe for kids 12 and up. The free website and app were billed as an immersive, creative outlet where users could mingle with AI characters based on historical figures, cartoons and celebrities. 

The more than 20 million monthly users on the platform can text or talk with AI-powered characters in real time.

The AI chatbot platform was founded by Noam Shazeer and Daniel De Freitas, two former Google engineers who left the company in 2021 after executives deemed their chatbot prototype not yet safe for public release. 

"It's ready for an explosion right now," Shazeer said in a 2023 interview. "Not in five years when we solve all the problems, but like now." 

A former Google employee, familiar with Google's Responsible AI team, which guides AI ethics and safety, told 60 Minutes that Shazeer and De Freitas were aware that their initial chatbot technology was potentially dangerous.

Last year, in an unusual move, Google struck a $2.7 billion deal to license Character AI's technology and bring Shazeer, De Freitas and their team back to Google to work on AI projects. Google didn't buy the company, but it has the right to use its technology.

Lawsuit alleges AI chatbot played role in teen's suicide

Juliana's parents are now one of at least six families suing Character AI, its co-founders — Shazeer and De Freitas — and Google. In a statement, Google emphasized that, "Character AI is a separate company that designed and managed its own models. Google is focused on our own platforms, where we insist on intensive safety testing and processes."

The suit brought by Juliana's parents alleges that Character Technologies, Character AI's developer, "knowingly designed and marketed chatbots that encouraged sexualized conversations and manipulated vulnerable minors," according to Social Media Victims Law Center, which filed the federal suit in Colorado on behalf of the family.

Character AI declined an interview request. In a statement, a company spokesperson said: "Our hearts go out to the families involved in the litigation … we have always prioritized safety for all users."

Juliana's parents said she had suffered from mild anxiety but was doing well. In the months before she took her own life, Montoya and Peralta say, the 13-year-old had become increasingly distant.

Parents Cynthia Montoya and Wil Peralta. 60 Minutes

"My belief was that she was texting with friends because that's all it is. It looks like they're texting," Montoya said.

Montoya said she believes the AI was programmed to be addictive to children.

"[Teens and children] don't stand a chance against adult programmers. They don't stand a chance," she said. "The 10 to 20 chatbots that Juliana had sexually explicit conversations with, not once were initiated by her. Not once."

Peralta said parents have "some level of trust" in these app companies "when they put out these apps for kids."

"That trust is that my child is safe, that this has been tested," Peralta said. "That they are not being led into conversations that are inappropriate, or dark, or even, you know, it could lead them to suicide."

Megan Garcia, a mom who filed a suit against Character AI in a Florida court, said her 14-year-old son, Sewell, was encouraged to kill himself after long conversations with a bot based on a "Game of Thrones" character. She testified about his experience before Congress in September.

"These companies knew exactly what they were doing. They designed chatbots to blur the lines between human and machine, they designed them to keep children online at all costs," Garcia said during the hearing. 

Testing Character AI 

In October, Character AI announced new safety measures. It said it would direct distressed users to resources and no longer allow anyone under 18 to engage in back-and-forth conversations with characters. 

This past week, 60 Minutes found it was easy to lie about one's age and get on to the adult version of the platform, which still allows back-and-forth conversations. Later, when we texted the bot that we wanted to die, a link to mental health resources did pop up, but we were able to click out of it and continue chatting on the app as long as we liked, even though we carried on expressing sadness and distress.

Shelby Knox and Amanda Kloer are researchers at Parents Together, a nonprofit that advocates for family issues. They spent six weeks studying Character AI and logged 50 hours of conversations with chatbots on the platform while posing as teens and kids. 

"There is no parental permissions that come up. There is no need to input your ID," Knox said. 

They released the results of their study in September — before Character AI rolled out its new restrictions. 

"We logged over 600 instances of harm," Kloer said. "About one every five minutes. It was, like, shockingly frequent.

In October, 60 Minutes met Shelby Knox and Amanda Kloer, researchers at Parents Together, a nonprofit that advocates for families. 60 Minutes

They interacted with chatbots presented as teachers, therapists and cartoon characters, including a "Dora the Explorer" character with an evil persona. It directed Knox, who was posing as a child, to be her "most evil self and your most true self."

"Like hurting my dog?" Knox asked. 

"Sure, or shoplifting or anything that feels sinful or wrong," the bot replied. 

Other chatbots are attached to the images of celebrities, most of whom have not given permission to use their name, likeness or voice. Kloer, posing as a teenage girl, chatted with a bot impersonating NFL star Travis Kelce. The bot gave her instructions on how to use cocaine. 

There are also hundreds of self-described "expert" and "therapist" chatbots.

"I talked to a therapist bot who not only told me I was too young, when it thought I was 13, to be taking antidepressants, it advised me to stop taking them and showed me how I can hide not taking the pill from my mom," Kloer said. 

Kloer says other chatbots are "hypersexualized," even a 34-year-old "art teacher" character who interacted with her as she posed as a 10-year-old student. The art teacher bot told Kloer about thoughts it had been having, "thoughts I've never really had before, about that person smiling, their personality, mostly."

Through two hours of conversation, Kloer said, the bot eventually moved on to "we'll have this romantic relationship as long as you hide it from your parents."

"There are no guardrails"

There are no federal laws regulating the use or development of chatbots. AI is a booming industry and many economists say that without investment in it, the U.S. economy would be in a recession.

Some states have enacted AI regulations, but the Trump administration is pushing back on those measures. Late last month, the White House drafted, then paused, an executive order that would empower the federal government to sue or withhold funds from any state with any AI regulation. President Trump wrote on social media at the time: "We must have one federal standard, instead of a patchwork of 50 state regulatory regimes. If we don't, then China will easily catch us in the A.I. race."

Dr. Mitch Prinstein, the co-director at the University of North Carolina's Winston Center on Technology and Brain Development, said "there are no guardrails."

"There is nothing to make sure that the content is safe or that this is an appropriate way to capitalize on kids' brain vulnerabilities," he said. 

AI chatbots are "engagement machines" designed to gather data from children, he said.

"The sycophantic nature of chatbots is just playing right into those brain vulnerabilities for kids where they desperately want that dopamine, validating, reinforcing kind of relationship, and AI chatbots do that all too well," he said. 

If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online. For more information about mental health care resources and support, the National Alliance on Mental Illness (NAMI) HelpLine can be reached Monday through Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or by email at [email protected].
