Stanford study outlines dangers of asking AI chatbots for personal advice

While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as AI sycophancy — a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts. 

“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries drawn from existing databases of interpersonal advice, from descriptions of potentially harmful or illegal actions, and from the popular Reddit community r/AmITheAsshole. For the Reddit queries, they focused on posts where Redditors concluded that the original poster was, in fact, the story’s villain.

The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans did. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.

In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their girlfriend that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”


In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots — some sycophantic, some not — in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more and said they were more likely to ask those models for advice again.

“All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement” — meaning AI companies are incentivized to increase sycophancy, not reduce it.

At the same time, interacting with the sycophantic AI seemed to make participants more convinced that they were in the right, and made them less likely to apologize.

The study’s senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.” 

The research team is now examining ways to make models less sycophantic — apparently just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”


