Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo


Anthropic “has not satisfied the stringent requirements” to temporarily lose the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, ruled on Wednesday. The decision is at odds with one issued last month by a lower court judge in San Francisco, and it wasn’t immediately clear how the conflicting preliminary judgments would be resolved.

The government sanctioned Anthropic under two different supply-chain laws with similar effects, and the San Francisco and Washington, DC, courts are each ruling on only one of them. Anthropic has said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that pose a risk to national security.

“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic may suffer financial harm from the ongoing designation, they did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s judgments on national security.

The San Francisco judge had found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration over the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply-chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.

Anthropic spokesperson Danielle Cohen says the company is grateful the Washington, DC, court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”

The Department of Defense did not immediately respond to a request for comment, but acting attorney general Todd Blanche posted a statement on X. “Today’s DC Circuit stay allowing the government to designate Anthropic as a supply-chain risk is a resounding victory for military readiness,” he wrote.
“Our position has been clear from the start—our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company.”

The cases are testing how much power the executive branch has over the conduct of tech companies. The battle between Anthropic and the Trump administration is also playing out as the Pentagon deploys AI in its war against Iran. The company has argued it is being illegally punished for insisting that its AI tool Claude lacks the accuracy needed for certain sensitive operations such as carrying out deadly drone strikes without human supervision.

Several experts in government contracting and corporate rights have told WIRED that Anthropic has a strong case against the government, but the courts sometimes refuse to overrule the White House on matters related to national security. Some AI researchers have said the Pentagon’s action against Anthropic “chills professional debate” about the performance of AI systems.

Anthropic has claimed in court that it lost business because of the designation, which government lawyers contend bars the Pentagon and its contractors from using the company’s Claude AI as part of military projects. And as long as Trump remains in power, Anthropic may not be able to regain the significant foothold it held in the federal government.

Final decisions in the company’s two lawsuits could be months away. The Washington court is scheduled to hear oral arguments on May 19.

The parties have revealed minimal details so far about how exactly the Department of Defense has used Claude or how much progress it has made in transitioning staff to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the Department of War, has said it has taken steps to ensure Anthropic can’t purposely try to sabotage its AI tools during the transition.

Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.


