Judge temporarily blocks Pentagon’s blacklist of AI company Anthropic



A U.S. judge on Thursday temporarily blocked the Pentagon’s blacklisting of Anthropic, the latest turn in the company’s high-stakes fight with the military over AI safety on the battlefield.

Anthropic’s lawsuit in California federal court alleges that U.S. Secretary of War Pete Hegseth overstepped his authority when he designated Anthropic a national security supply-chain risk, a label the government can apply to companies that expose military systems to potential infiltration or sabotage by adversaries.

Anthropic alleged the government violated its right to free speech under the First Amendment by retaliating against its views on AI safety. The company said it was not given a chance to dispute the designation, in violation of its Fifth Amendment right to due process.

U.S. District Judge Rita Lin, an appointee of former U.S. president Joe Biden, agreed with the company in a 43-page ruling, but said it would not take effect for seven days to give the administration a chance to appeal.

Hegseth’s unprecedented move, which followed Anthropic’s refusal to allow the military to use its AI chatbot Claude for U.S. surveillance or autonomous weapons, blocked Anthropic from certain military contracts. Anthropic executives have said it could cost the company billions of dollars in lost business and reputational harm.

WATCH | Canada launches AI watchdog to oversee the technology’s safe development and use:

Amid rapid global advances and deployment of artificial intelligence technologies, the federal government has invested millions to combine the minds of three existing institutes into one that can keep an eye on potential dangers ahead.

Anthropic says that AI models are not reliable enough to be safely used in autonomous weapons and that it opposes domestic surveillance as a violation of rights. The Pentagon says private companies should not be able to constrain military action, but has also said it is not interested in those uses and would use the technology only in lawful ways.

In Thursday’s ruling, Lin said the administration’s actions did not appear to be directed at the government’s stated national security interests, but rather, to punish Anthropic.

“The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press,” Lin wrote.

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

Anthropic spokesperson Danielle Cohen said the company was pleased with the decision.

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” Cohen said in a statement.

Anthropic’s designation marked the first time a U.S. company had been publicly labelled a supply-chain risk under an obscure government-procurement statute aimed at protecting military systems from foreign sabotage.

Anthropic’s March 9 lawsuit says the decision was unlawful, unsupported by facts and inconsistent with the military’s past praise of Claude.

The Justice Department countered in a court filing that Anthropic’s refusal to lift its restrictions could create uncertainty in the Pentagon over how it could use Claude and risk disabling military systems during operations.

The government said the designation stemmed from Anthropic’s refusal to accept contractual terms, not its views on AI safety.

Anthropic has a second lawsuit pending in Washington over a separate Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts. 

LISTEN | Front Burner (31:13): Iran and AI on the battlefield


