Anthropic Denies It Could Sabotage AI Tools During War


Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement responds to accusations from the Trump administration that the company could tamper with its AI tools during wartime.

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations,” Thiyagu Ramasamy, Anthropic’s head of public sector, wrote. “Anthropic does not have the access required to disable the technology or alter the model’s behavior before or during ongoing operations.”

The Pentagon has been sparring with the leading AI lab for months over how its technology can be used for national security—and what the limits on that usage should be. This month, defense secretary Pete Hegseth labeled Anthropic a supply-chain risk, a designation that will prevent the Department of Defense from using the company’s software, including through contractors, over the coming months. Other federal agencies are also abandoning Claude.

Anthropic filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to reverse it. However, customers have already begun canceling deals. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. The judge could decide on a temporary reversal soon after.

In a filing earlier this week, government attorneys wrote that the Department of Defense “is not required to tolerate the risk that critical military systems will be jeopardized at pivotal moments for national defense and active military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government’s argument is that Anthropic could disrupt active military operations by turning off access to Claude or pushing harmful updates if the company disapproves of certain uses.

Ramasamy rejected that possibility. “Anthropic does not maintain any back door or remote ‘kill switch,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way.”

He went on to say that Anthropic could provide updates only with the approval of the government and its cloud provider, in this case Amazon Web Services, though he did not name it directly. Ramasamy added that Anthropic cannot access the prompts or other data military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over military tactical decisions. Sarah Heck, head of policy, wrote in a court filing on Friday that Anthropic was willing to guarantee as much in a contract proposed March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any right to control or veto lawful Department of War operational decision‑making,” the proposal stated, according to the filing, which used an alternative name for the Pentagon.

The company was also ready to accept language that would address its concerns about Claude being used to help carry out deadly strikes without human supervision, Heck claimed. But negotiations ultimately broke down.

For the time being, the Defense Department has said in court filings that it “is taking additional measures to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure Anthropic leadership cannot make unilateral changes” to the Claude systems currently in place.


