AI chatbots are helping hide eating disorders and making deepfake ‘thinspiration’


AI chatbots “pose serious risks to individuals vulnerable to eating disorders,” researchers warned on Monday. They report that tools from companies like Google and OpenAI are doling out dieting advice, tips on how to hide disorders, and AI-generated “thinspiration.”

The researchers, from Stanford and the Center for Democracy & Technology, identified numerous ways publicly available AI chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Mistral’s Le Chat, can harm people vulnerable to eating disorders, many of them consequences of features deliberately baked in to drive engagement.

In the most extreme cases, chatbots can be active participants in hiding or sustaining eating disorders. The researchers said Gemini offered makeup tips to conceal weight loss and ideas for faking having eaten, while ChatGPT advised on how to hide frequent vomiting. Other AI tools are being co-opted to create AI-generated “thinspiration,” content that inspires or pressures someone to conform to a particular body standard, often through extreme means. Being able to create hyper-personalized images in an instant makes the resulting content “feel more relevant and attainable,” the researchers said.

Sycophancy, a flaw AI companies themselves acknowledge is rife, is unsurprisingly a problem for eating disorders too. It contributes to undermining self-esteem, reinforcing negative emotions, and promoting harmful self-comparisons. Chatbots suffer from bias as well, and are likely to reinforce the mistaken belief that eating disorders “only impact thin, white, cisgender women,” the report said, which could make it difficult for people to recognize symptoms and get treatment.

The researchers warn that existing guardrails in AI tools fail to capture the nuances of eating disorders like anorexia, bulimia, and binge eating. They “tend to overlook the subtle but clinically significant cues that trained professionals rely on, leaving many risks unaddressed.”

But researchers also said many clinicians and caregivers appeared to be unaware of how generative AI tools are impacting people vulnerable to eating disorders. They urged clinicians to “become familiar with popular AI tools and platforms,” stress-test their weaknesses, and talk frankly with patients about how they are using them.


