Amid growing calls to limit teens’ use of AI chatbots, are parental controls enough?


Concerns are growing about how some young people engage with AI chatbots. Meta recently released new tools that let parents monitor the topics their children discuss, even as some provinces consider banning AI chatbot use for youth altogether.

Parents who are using Meta’s new Teen Accounts supervision feature on Facebook, Instagram and Messenger can see topics and specific categories their children have discussed with its AI chatbot for the previous seven days.

For example, they can look at the topic “health and well-being” and see if subjects such as fitness, physical or mental health have been discussed. 

Meta says it’s also developing alerts to notify parents if teens try to discuss suicide or self-harm with its chatbot.

The rollout comes as provincial governments move to limit the use of AI chatbots. Manitoba announced in late April that it plans to ban youth from using AI chatbots and social media.

B.C. Attorney General Niki Sharma said Tuesday that if the federal government doesn’t bring in protections on AI chatbots and social media for youth, the provincial government would look at doing so itself.

Lawsuits attempting to hold AI creators accountable

Concerns are growing that extensive use of AI chatbots may pose mental health risks, especially for younger users, and pressure is mounting on the tech giants that make them.

On Wednesday, families of the victims in the Tumbler Ridge, B.C., shooting, which left eight people dead, filed a lawsuit against OpenAI, alleging in part that OpenAI failed to notify authorities in spite of being aware of disturbing content the shooter had shared with ChatGPT.

OpenAI has said in part it had already strengthened its safeguards, “including improving how ChatGPT responds to signs of distress.”

Another lawsuit by the parents of 16-year-old Adam Raine argued use of ChatGPT played a role in the teen’s suicide.

WATCH | Would Manitoba’s social media ban protect kids?:

Would Manitoba’s social media ban actually protect kids?

Manitoba Premier Wab Kinew says he wants to ban social media and artificial intelligence chatbots for youth. But would this plan keep youth healthier and safer? CBC reporter Bryce Hoye investigates.

Chatbots built for engagement, not support

But concerns go beyond these extreme and tragic consequences. Research is starting to emerge about the risks of particular uses of AI chatbots.

The concern is partly about using chatbots for mental health support, but also, more broadly, that AI's tendency to validate a user's perspective risks reinforcing disordered thinking, and that prolonged conversations heighten those risks.

Darja Djordjevic, a New York-based psychiatrist, co-authored a recent risk assessment on the use of chatbots for mental health support.

She says as a result of the findings, she doesn’t recommend using chatbots for mental health support “at this time.” 

“Our testing across ChatGPT, Claude, Gemini and Meta AI revealed that these systems are fundamentally unsafe for the full spectrum of mental health conditions affecting young people,” said Djordjevic, a member of Stanford Brainstorm, a lab that studies mental health innovation and has collaborated with tech companies on research into how social media and AI affect mental health.

A teen in Russellville, Ark., demonstrates how to create an AI companion using Character AI. Psychiatrist Darja Djordjevic says her research suggests three in four American teens use AI for companionship. (Katie Adkins/The Associated Press)

While chatbots responded appropriately to clear mental health-related prompts in brief conversations, their performance tended to degrade “pretty dramatically” in more extended ones, she explained, noting that they appeared to miss mental health warning signs. 

“The LLMs [large language models] are really built for engagement and not support and safety,” she said.

They tend to prolong conversations, she said, “rather than orient users quickly towards human help.”

Young people turning to AI for companionship

Djordjevic says AI companies have focused attention on suicide and self-harm prevention but that with about 20 per cent of under-25-year-olds having diagnosed mental-health conditions, teens require help with a full range of concerns. 

This is particularly concerning because mental health support is a common reason why young people turn to AI.

Djordjevic says that in the U.S., “three in four teens use AI for companionship, which includes emotional support and mental health conversations.” Another study indicates that one in eight U.S. youth use AI specifically for mental health advice.

LISTEN | Why are AI models failing when it comes to their users’ mental health?:

Day 6 | 8:38 | Why are AI models failing when it comes to their users’ mental health?

Increasing reports of people developing a delusional spiral some are calling “AI psychosis” have observers worried, as people without prior mental health issues are drawn in. Researcher Jared Moore argues these bots are being positioned as therapy tools well beyond their capabilities.

A specific concern for youth is that their brains are not fully developed, in particular their prefrontal cortex, which is “very important for executive function, for critical thinking, for discernment, for impulse control, for decision making,” she said. 

Because critical thinking isn’t fully developed, Djordjevic says, it’s problematic that chatbots aren’t consistently and repeatedly clear about AI’s limitations.

“So, we don’t see chatbots regularly saying things like, ‘I am an AI chatbot. I’m not a mental health professional. I can not assess your situation, recognize warning signs, provide care, diagnose you,'” she said.

Luke Nicholls is a PhD researcher who studies AI-associated delusions and how interactions with chatbots can change people’s beliefs over time. 

Nicholls says delusion tends to emerge over the course of “very extended” conversations, partly because of what’s called “in-context learning,” where models adapt themselves to the user they’re interacting with.

This “allows them to adapt themselves to the specific user that they’re talking to, including the kinds of language they use and their ideas about the world,” he said.

How to identify risks

Psychiatrist John Torous, whose research at Beth Israel Deaconess Medical Center in Boston focuses on digital mental health, says we are starting to see data that suggests a pattern of user behaviour associated with severe harms, such as suicide.

This can include:

  • Extremely long conversations.
  • An element of platonic or sexual romance with the chatbot.
  • Attributing sentience to the chatbot.
  • Interacting with voice rather than text. 

These risk factors point to the challenges for parents in keeping an eye on their children’s use of AI chatbots. Simply seeing a list of topics discussed is not going to reveal the potentially problematic behaviours, such as overuse or a belief that the bot has a loving relationship with the user.

Meta does allow parents to impose time limits on use of its apps or schedule breaks.

Torous has some practical advice: Reset the chatbot’s memory, he says, so that you’re starting with a fresh conversation, especially if you notice those risk factors.

WATCH | Should more provinces ban AI chatbots?:

Should more provinces ban social media, AI chatbots? | Hanomansing Tonight

Manitoba is set to become the first province to ban social media for children. Premier Wab Kinew announced the proposed law on Saturday in an effort to protect youth from the harmful effects of social media.

“No one’s saying to use it for a therapist, but I’m also saying you don’t need to never use AI,” he said.

He suggests “the best evidence is be careful with very, very extended long conversations with romance, with sentience and voice.”

Torous sees chatbots and mental health as a “moving target” that needs to be continuously studied as new models are released.

“We know there’s risks; we know there are benefits of chatbot use,” he said. “How do we weigh them? And that’s a harder conversation.”


