Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI


The Trump administration may think regulation is crippling the AI industry, but one of the industry’s biggest players doesn’t agree.

At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump’s AI and crypto czar, David Sacks, may have tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she’s convinced her company’s commitment to calling out the potential dangers of AI is making the industry stronger.

“We were very vocal from day one that we felt there was this incredible potential” for AI, Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that’s why we talk about it so much.”

More than 300,000 startups, developers, and companies use some version of Anthropic’s Claude model. Through the company’s dealings with those customers, Amodei said, she has learned that while they want their AI to be able to do great things, they also want it to be reliable and safe.

“No one says, ‘We want a less safe product,’” Amodei said, likening Anthropic’s reporting of its models’ limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicles’ safety features as a result of that test could sell a buyer on a car. Amodei said the same goes for companies using Anthropic’s AI products, making for a market that is somewhat self-regulating.

“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. Companies “are now building many workflows and day-to-day tooling tasks around AI, and they’re like, ‘Well, we know that this product doesn’t hallucinate as much, it doesn’t produce harmful content, and it doesn’t do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”

Daniela Amodei attends the WIRED Big Interview event.

Photograph: Annie Noelker


