Anthropic filed two lawsuits against the Department of Defense on Monday, alleging that the government’s decision to label the artificial intelligence firm a “supply chain risk” was unlawful and violated its first amendment rights. The two sides have been locked in a heated, monthslong feud over the company’s attempt to implement safeguards against the military’s potential use of its AI models for mass domestic surveillance or fully autonomous lethal weapons.
The lawsuits, which Anthropic filed in the US district court for the northern district of California and the US court of appeals for the District of Columbia circuit, come after the Pentagon formally issued the supply chain risk designation last Thursday, the first time the blacklisting tool has been used against a US company. The AI firm previously vowed to challenge the designation and its demand that any company doing business with the government cut all ties with Anthropic, a serious threat to its business model.
Anthropic’s lawsuit contends that the Trump administration is punishing the company for refusing to comply with the government’s ideological demands, in violation of its protected speech.
“These actions are unprecedented and unlawful. The constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic stated in its California lawsuit.
Anthropic’s AI model, called Claude, has been deeply integrated into the Department of Defense over the past year. Until recently, Claude was also the only AI model approved for use in classified systems. The DoD has reportedly used it extensively in its military operations, including deciding where to target missile strikes in its war against Iran.
Anthropic emphasized in its lawsuit that it remains committed to providing AI for national security purposes, and noted in its California filing that it has previously collaborated with the Department of Defense to modify its systems for unique use cases. The company wants to continue its negotiations with the government, according to a statement.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” a spokesperson for Anthropic said in a statement to The Guardian. “We will continue to pursue every path toward resolution, including dialogue with the government”.
The AI firm alleged in the suit that the Trump administration and Pentagon’s punitive actions are “harming Anthropic irreparably,” an accusation that somewhat contradicts CEO Dario Amodei telling CBS News last week that “the impact of this designation is fairly small” and the company was “gonna be fine”.
“Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies, which is a leader in responsibly developing an emergent technology of vital significance to our Nation,” Anthropic alleged in its suit.
The Department of Defense did not immediately respond to a request for comment.