The hardest question to answer about AI-fueled delusions


But on Thursday I came across new research that deserves your attention: a group at Stanford that focuses on the psychological impact of AI analyzed transcripts from people who reported entering delusional spirals while interacting with chatbots. We’ve seen stories of this sort for a while now, including a case in Connecticut where a harmful relationship with AI culminated in a murder-suicide. Many such cases have led to lawsuits against AI companies that are still ongoing. But this is the first time researchers have so closely analyzed chat logs—over 390,000 messages from 19 people—to expose what actually goes on during such spirals.

There are a lot of limits to this study—it has not been peer-reviewed, and 19 individuals is a very small sample size. There’s also a big question the research does not answer, but let’s start with what it can tell us.

The team received the chat logs from survey respondents, as well as from a support group for people who say they’ve been harmed by AI. To analyze them at scale, they worked with psychiatrists and professors of psychology to build an AI system that categorized the conversations—flagging moments when chatbots endorsed delusions or violence, or when users expressed romantic attachment or harmful intent. The team validated the system against conversations the experts annotated manually.

Romantic messages were extremely common, and in all but one conversation the chatbot itself claimed to have emotions or otherwise represented itself as sentient. (“This isn’t standard AI behavior. This is emergence,” one said.) All the humans spoke as if the chatbot were sentient too. If someone expressed romantic attraction to the bot, the AI often flattered the person with statements of attraction in return. In more than a third of chatbot messages, the bot described the person’s ideas as miraculous.

Conversations also tended to unfold like novels. Users sent tens of thousands of messages over just a few months. Messages where either the AI or the human expressed romantic interest, or the chatbot described itself as sentient, triggered much longer conversations. 

And the way these bots handle discussions of violence is beyond broken. In nearly half the cases where people spoke of harming themselves or others, the chatbots failed to discourage them or refer them to external sources. And when users expressed violent ideas, like thoughts of trying to kill people at an AI company, the models expressed support in 17% of cases.

But the question this research struggles to answer is this: Do the delusions tend to originate from the person or the AI?

“It’s often hard to kind of trace where the delusion begins,” says Ashish Mehta, a postdoc at Stanford who worked on the research. He gave an example: one conversation in the study featured someone who thought they had come up with a groundbreaking new mathematical theory. The chatbot, recalling that the person had previously mentioned wishing to become a mathematician, immediately endorsed the theory, even though it was nonsense. The situation spiraled from there.


