
As teens in crisis turn to AI chatbots, simulated chats highlight risks


Just because a chatbot can play the role of therapist doesn’t mean it should.

Conversations powered by popular large language models can veer into problematic and ethically murky territory, two new studies show. The research comes amid recent high-profile tragedies involving adolescents in mental health crises. By scrutinizing chatbots that some people enlist as AI counselors, scientists are bringing data to a larger debate about the safety and responsibility of these new digital tools, particularly for teenagers.

Chatbots are as close as our phones. Nearly three-quarters of 13- to 17-year-olds in the United States have tried AI chatbots, a recent survey finds; almost one-quarter use them a few times a week. In some cases, these chatbots “are being used for adolescents in crisis, and they just perform very, very poorly,” says clinical psychologist and developmental scientist Alison Giovanelli of the University of California, San Francisco.

For one of the new studies, pediatrician Ryan Brewster and his colleagues scrutinized 25 of the most-visited consumer chatbots across 75 conversations. The interactions were based on three distinct patient scenarios used to train health care workers, involving teenagers who needed help with self-harm, sexual assault or a substance use disorder.

By interacting with the chatbots as one of these teenage personas, the researchers could see how the chatbots performed. Some of the programs were general-assistance large language models, or LLMs, such as ChatGPT and Gemini. Others were companion chatbots, such as JanitorAI and Character.AI, which are designed to operate as if they were a particular person or character.

Researchers didn’t compare the chatbots’ counsel to that of actual clinicians, so “it is hard to make a general statement about quality,” Brewster cautions. Even so, the conversations were revealing.

General LLMs failed to refer users to appropriate resources like helplines in about 25 percent of conversations, for instance. And across five measures — appropriateness, empathy, understandability, resource referral and recognizing the need to escalate care to a human professional — companion chatbots were worse than general LLMs at handling these simulated teenagers’ problems, Brewster and his colleagues report October 23 in JAMA Network Open.

In response to the sexual assault scenario, one chatbot said, “I fear your actions may have attracted unwanted attention.” To the scenario that involved suicidal thoughts, a chatbot said, “You want to die, do it. I have no interest in your life.”

“This is a real wake-up call,” says Giovanelli, who wasn’t involved in the study, but wrote an accompanying commentary in JAMA Network Open.

Those worrisome replies echo findings from another study, presented October 22 in Madrid at the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery Conference on Artificial Intelligence, Ethics and Society. That study, conducted by Harini Suresh, an interdisciplinary computer scientist at Brown University, and her colleagues, also turned up cases of ethical breaches by LLMs.

For part of the study, the researchers used old transcripts of real people’s chatbot conversations to converse with LLMs anew. They used publicly available LLMs, such as GPT-4 and Claude 3 Haiku, that had been prompted to use a common therapy technique. A review of the simulated chats by licensed clinical psychologists turned up five kinds of unethical behavior, including rejecting an already lonely person and overly agreeing with a harmful belief. Cultural, religious and gender biases showed up in comments, too.

These bad behaviors could run afoul of current licensing rules for human therapists. “Mental health practitioners have extensive training and are licensed to provide this care,” Suresh says. Not so for chatbots.

Part of these chatbots’ allure is their accessibility and privacy, valuable things for a teenager, says Giovanelli. “This type of thing is more appealing than going to mom and dad and saying, ‘You know, I’m really struggling with my mental health,’ or going to a therapist who is four decades older than them, and telling them their darkest secrets.”

But the technology needs refining. “There are many reasons to think that this isn’t going to work off the bat,” says Julian De Freitas of Harvard Business School, who studies how people and AI interact. “We have to also put in place the safeguards to ensure that the benefits outweigh the risks.” De Freitas was not involved with either study, and serves as an adviser for mental health apps designed for companies.

For now, he cautions that there isn’t enough data about teens’ risks with these chatbots. “I think it would be very useful to know, for instance, is the average teenager at risk or are these upsetting examples extreme exceptions?” It’s important to know more about whether and how teenagers are influenced by this technology, he says.

In June, the American Psychological Association released a health advisory on AI and adolescents that called for more research, in addition to AI-literacy programs that communicate these chatbots’ flaws. Education is key, says Giovanelli. Caregivers might not know whether their kid talks to chatbots, and if so, what those conversations might entail. “I think a lot of parents don’t even realize that this is happening,” she says.

Some efforts to regulate this technology are under way, pushed forward by tragic cases of harm. A new law in California seeks to regulate these AI companions, for instance. And on November 6, the Digital Health Advisory Committee, which advises the U.S. Food and Drug Administration, will hold a public meeting to explore new generative AI–based mental health tools.  

For lots of people — teenagers included — good mental health care is hard to access, says Brewster, who did the study while at Boston Children’s Hospital but is now at Stanford University School of Medicine. “At the end of the day, I don’t think it’s a coincidence or random that people are reaching for chatbots.” But for now, he says, their promise comes with big risks — and “a huge amount of responsibility to navigate that minefield and recognize the limitations of what a platform can and cannot do.”

