AI company Anthropic amends core safety principle amid growing competition in sector


Anthropic, the AI company behind the Claude chatbot that was founded with a focus on safe technology, appears to be scaling back its safety commitments in order to stay competitive.

The company said on Tuesday it had changed its responsible scaling policy, a set of self-imposed guidelines aimed at preventing the development of AI capable of causing catastrophic harm, such as enabling large-scale cyberattacks.

While the updated guidelines say Anthropic would still require a “strong argument that catastrophic risk is contained” when developing AI, it now says it will only delay development “until and unless we no longer believe we have a significant lead” — meaning it would keep developing if it no longer believes it has a lead over its competitors.

The company said it has taken this step because concerns about the safety of AI in the U.S. have taken a back seat to its economic potential.

“Despite rapid advances in AI capabilities over the past three years, government action on AI safety has moved slowly,” the company said in a blog post.

“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”

The change in Anthropic’s safety guidelines comes as the Pentagon threatens to pull its contracts with the company unless its technology is allowed to be used for all legal military purposes — though Anthropic says the guideline change is unrelated.

The AI company has historically sold itself as putting safety first.

Anthropic was founded in 2021 by former employees of OpenAI who were concerned that company was putting development ahead of safety. CEO Dario Amodei has also voiced fears about the negative potential of AI, including mass human catastrophe, and in a December interview with Fortune maintained that safety remained the “highest-level focus” for Anthropic.

CEO and Co-Founder of Anthropic Dario Amodei speaks during the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland, January 20, 2026. (Denis Balibouse/Reuters)

The blog post noted the company’s safety practices were always intended to be updated, and that this new iteration improves the company’s “transparency and accountability” with new commitments to regularly publish reports and safety goals.

But Heidy Khlaaf, chief AI scientist at the independent research group the AI Now Institute, says that despite Anthropic’s safety-first reputation, it has always fallen short in its efforts to prevent human harm.

From its first safety policy, Khlaaf says, Anthropic has focused too much on the possibility of catastrophic events down the road rather than accounting for the harm that current AI technology can cause, like run-of-the-mill errors with chatbots.

The Claude chatbot has in the past been misused in fraud schemes and attempts to create malware, and, according to cybersecurity researchers, was recently used to steal Mexican government data.

She says the company is now dropping the “veneer of safety” it’s previously used to market itself because it’s become clear that’s not in its best interest.

“This is a strategic announcement to show that they’re open for business,” Khlaaf said.


The announcement comes at a time of intense competition between top AI companies like Anthropic, OpenAI and Google, which have competing chatbots and are all striking deals to integrate their technologies with businesses and government departments. 

U.S. President Donald Trump’s administration has also signalled it’s all-in on AI development, and has threatened to withhold funding from states that enact laws it says hold back U.S. dominance in the industry.

Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, says that no-rules attitude from the U.S. government makes it hard for companies to prioritize safety, “because if they do that, then they are going to be left in the dust.”

That puts Canada in a tough place too, she says, because regulation here could set homegrown AI development back compared to the U.S., or encourage Canadian companies to move south of the border where there would be fewer restrictions on their tech.

“And I think the sense is that we can’t afford that in Canada right now. So you can see how it’s having that kind of knock-on effect on AI regulation here,” Scassa said.

She says since Canada’s Artificial Intelligence and Data Act died in 2025, the Canadian government, much like in the U.S., hasn’t tried to impose any broad AI regulation.

Safety change unrelated to Pentagon spat, company says

The change in Anthropic’s safety guidelines comes as the company faces pressure from the Pentagon.

Anthropic struck a deal with the U.S. Department of Defense worth up to $200 million US in July, allowing the government to use its technology for military purposes, but within the company’s usage guidelines — the set of rules Anthropic has for how clients can and can’t use its products, including the Claude chatbot.


Those guidelines bar anyone, including the U.S. government, from using Anthropic’s AI tools for a range of purposes, including designing or developing weapons.

But according to reports, U.S. Defense Secretary Pete Hegseth issued CEO Amodei an ultimatum in a meeting on Tuesday — giving the company until Friday to allow the military to use its AI tools for all legal military purposes, or risk losing its government contracts.


In its back-and-forth with the government, Anthropic said it will not allow the government to use its technology in autonomous weapons systems — those that allow AI alone to fire at targets — or in mass surveillance systems.

But Pentagon officials told media the dispute doesn’t involve AI’s potential uses in autonomous weaponry and mass surveillance, and insist the government has always followed the law.

Anthropic says the update of its responsible scaling policy and demands by the Department of Defense are unrelated. Hegseth’s issues are with the company’s usage policy, rather than the scaling policy, according to Anthropic. 


