Claude, the AI model developed by Anthropic, was used by the US military during its operation to kidnap Nicolás Maduro from Venezuela, the Wall Street Journal revealed on Saturday, a high-profile example of how the US defence department is using artificial intelligence in its operations.
The US raid on Venezuela involved bombing across the capital, Caracas, and the killing of 83 people, according to Venezuela’s defence ministry. Anthropic’s terms of use prohibit the use of Claude for violent ends, for the development of weapons or for conducting surveillance.
Anthropic is the first AI developer known to have had its technology used in a classified operation by the US department of defence. It was unclear how the tool, whose capabilities range from processing PDFs to piloting autonomous drones, was deployed.
A spokesperson for Anthropic declined to comment on whether Claude was used in the operation, but said any use of the AI tool was required to comply with its usage policies. The US defence department did not comment on the claims.
The WSJ cited anonymous sources who said Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor for the US defence department and federal law enforcement agencies. Palantir declined to comment on the claims.
The US and other militaries are increasingly deploying AI as part of their arsenals. Israel’s military has used drones with autonomous capabilities in Gaza and has relied extensively on AI to populate its targeting bank there. The US military has used AI targeting for strikes in Iraq and Syria in recent years.
Critics have warned against the use of AI in weapons technologies and the deployment of autonomous weapons systems, pointing to targeting errors made when computers decide who should and should not be killed.
AI companies have grappled with how their technologies should engage with the defence sector, with Anthropic’s CEO, Dario Amodei, calling for regulation to prevent harms from the deployment of AI. Amodei has also expressed wariness over the use of AI in autonomous lethal operations and surveillance in the US.
This more cautious stance has apparently rankled the US defence department, with the secretary of war, Pete Hegseth, saying in January that the department wouldn’t “employ AI models that won’t allow you to fight wars”.
The Pentagon announced in January that it would work with Elon Musk’s xAI. The defence department also uses a custom version of Google’s Gemini, as well as OpenAI systems, to support research.