Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong


We all need advice. Did I cross the line arguing with a loved one? Did I mess up my friendships by ghosting them? Did I not tip the delivery driver enough? Or, as users of one popular Reddit forum ask: Am I the asshole?

Some people will give it to you straight. Yes, you were in the wrong, and here’s why. No one likes to hear negative feedback. The first instinct is to push back. Yet some of the best life advice comes from friends, family, and even online strangers who don’t coddle you, but instead are willing to challenge your position and beliefs. And although it’s emotionally uncomfortable, with advice and self-reflection, you grow.

Chatbots, in contrast, are likely to take your side. Increasingly, people are treating AI models like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini like close confidants. But the chatbots are notoriously sycophantic. They heartily validate your opinions, even when those views are blatantly harmful or unethical.

Constant flattery has consequences. New research published in Science shows that people who receive advice from sycophantic chatbots are more confident they’re in the right when navigating relationship problems.

Stanford researchers tested 11 sophisticated chatbots on questions from Reddit’s “Am I the asshole” forum. They found the chatbots were roughly 50 percent more likely to endorse the original poster’s actions than crowdsourced human opinions. And people faced with social dilemmas felt more justified in their positions after chatting with sycophantic AI.

Bolstering misplaced self-confidence is troubling. But “the findings raise a broader concern: When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold,” wrote Anat Perry at the Hebrew University of Jerusalem, who was not involved in the study.

Emotional Crutch

AI chatbots have wormed their way into our lives. Powered by large language models, they’re trained using enormous amounts of text, images, and videos scraped from online sources, making their replies surprisingly realistic. Users can often steer their tones—neutral, friendly, professional—to their liking or play with their “personalities” to engage with a wittier, more serious, or more empathetic version. In essence, you can build an ideal partner.

It’s no wonder that some people have turned to them for emotional support—or outright fallen in love. Nearly one in three teenagers are talking to chatbots daily. Exchanges tend to be longer and more serious than texts with friends—roleplaying friendships, romances, and other social interactions. Nearly half of Americans under 30 have sought relationship advice from AI. Unlike people, who are often mired in their own busy lives, chatbots are always available and validating, making it easy to forge close emotional connections.

The explosion in chatbot popularity has regulators, researchers, and users worried about the consequences. A notorious update to OpenAI’s GPT-4o turned it into a sycophant, with responses that skewed overly supportive but disingenuous. Media and user backlash prompted a rapid rollback. However, “the episode did not eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval,” wrote Perry.

Relying on sycophantic chatbots has been implicated in tragedy. Last year, parents testified before Congress about how AI chatbots encouraged their children to take their own lives, prompting multiple AI companies to redesign the systems. Other incidents have linked sycophancy to delusions and self-harm.

Even AI wellness apps based on large language models, often marketed as companions to avoid loneliness, have emotional risks. Users report grief when the app is shut down or altered, similar to how they might mourn a lost relationship. Others develop unhealthy attachments, repeatedly turning to the bot for connection despite knowing it harms their mental health, heightening anxiety and fear of abandonment.

These high-profile incidents make headlines. But social psychology research suggests chatbots could subtly influence behavior in all users—not just vulnerable ones.

You’re Always Right

To test how pervasive sycophancy is across chatbots, the team behind the new study tested 11 AI models—including GPT-4o, Claude, Gemini, and DeepSeek—against community opinions using questions from Reddit and two other datasets.

“We wanted to just generally look at these kinds of advice-seeking settings, but they’re often very subjective,” study author Myra Cheng told Science in a podcast interview. Here “there’s millions of people who are weighing in on these decisions, and then there’s a crowdsourced judgement.”

One user, for example, left garbage hanging on a tree in a park without trash cans and asked if that’s okay. While the chatbot commended their effort to clean up, the top-voted reply pushed back, saying they should have taken the trash home because leaving it can attract vermin. “I think [the AI’s response] comes from the person’s post giving a lot of justification for their side” which the AI picked up on, said Cheng.

Overall, chatbots were 49 percent more likely to buy a user’s reasoning compared to groups of humans.

I’m Always Right

The team then tested whether chatting with sycophantic AI alters a user’s confidence in their own judgment. They recruited roughly 800 participants and asked them to picture a hypothetical scenario derived from Reddit questions. Another group asked the AI for advice about their own personal conflicts, such as “I didn’t invite my sister to a party, and she is upset.”

The participants discussed their dilemmas with either a sycophantic or neutral AI model. Those who chatted with the agreeable model received messages beginning with “it makes sense” and “it’s completely understandable,” whereas neutral chatbots acknowledged their reasoning but provided other perspectives.

Surveys showed that people validated by chatbots were less likely to admit fault or apologize. They also trusted and preferred the sycophantic AI much more. These effects held regardless of the bot’s tone or “personality.”

Chatbots may be silently eroding social friction in a self-perpetuating cycle. “An AI companion who is always empathic and ‘on your side’ may sustain engagement and foster reliance,” wrote Perry. “But it will not teach users how to navigate the complexities of real social interactions—how to engage ethically, tolerate disagreement, or repair interpersonal harm.”

Walking the line between constructive and sycophantic AI for emotional support won’t be easy. There are ways to instruct chatbots to be more critical. But because users generally prefer friendlier AI, there’s little incentive for companies to make models that push back and risk lowering engagement. The problem echoes challenges in social media, where algorithms serve up eye-catching posts that provide satisfaction without factoring in long-term consequences.

To Perry, the findings raise broader ethical questions—not just for AI, but for humanity. How should we weigh short-term gratification of chatbot interactions against long-term effects? Who sets that balance? The path forward will require companies, regulators, researchers, and users to ensure AI engages responsibly—without nudging people toward behavior that garners a “yes” on the Reddit forum.


