Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI


The Trump administration may think regulation is crippling the AI industry, but one of the industry’s biggest players doesn’t agree.

At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump’s AI and crypto czar, David Sacks, may have tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she’s convinced her company’s commitment to calling out the potential dangers of AI is making the industry stronger.

“We were very vocal from day one that we felt there was this incredible potential” for AI, Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that’s why we talk about it so much.”

More than 300,000 startups, developers, and companies use some version of Anthropic’s Claude model, and Amodei said that through the company’s dealings with those customers, she’s learned that while they want their AI to be able to do great things, they also want it to be reliable and safe.

“No one says, ‘We want a less safe product,’” Amodei said, likening Anthropic’s reporting of its model’s limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicle’s safety features as a result of that test could sell a buyer on a car. Amodei said the same goes for companies using Anthropic’s AI products, making for a market that is somewhat self-regulating.

“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. Companies “are now building many workflows and day-to-day tooling tasks around AI, and they’re like, ‘Well, we know that this product doesn’t hallucinate as much, it doesn’t produce harmful content, and it doesn’t do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”

Daniela Amodei attends the WIRED Big Interview event.

Photograph: Annie Noelker
