These aren’t AI firms, they’re defense contractors. We can’t let them hide behind their models


There is an Israeli military strategy called the “fog procedure”. First used during the second intifada, it’s an unofficial rule that requires soldiers guarding military posts in conditions of low visibility to shoot bursts of gunfire into the darkness, on the theory that an invisible threat might be lurking.

It’s violence licensed by blindness. Shoot into the darkness and call it deterrence. With the dawn of AI warfare, that same logic of chosen blindness has been refined, systematized, and handed off to a machine.

Israel’s recent war in Gaza has been described as the first major “AI war” – the first in which AI systems played a central role in generating the list of purported Hamas and Islamic Jihad militants to target: systems that processed billions of data points to rank the probability that any given person in the territory was a combatant.

The darkness in the watchtower was a condition of the terrain. The darkness inside the algorithm is a condition of the design. In both cases, the blindness was chosen. It was chosen because blindness is useful: it creates deniability, it makes the violence feel inevitable, it moves the question of who decided from a person to a procedure. The fog did not lift. It was given a probability score and called intelligence.

It may have been chosen blindness that led, at the start of the US-Israeli war on Iran, to the strike on the Shajareh Tayyebeh elementary school in Minab, in southern Iran. At least 168 people were killed, most of them girls aged seven to 12.

Portraits of schoolchildren from the Shajareh Tayyebeh elementary school in Minab, Iran, who were killed in a US strike. Photograph: Ons Abid/AP

The weapons were precise. Munitions experts described the targeting as “incredibly accurate”: each building individually struck, nothing missed. The problem was not the execution. The problem was the intelligence. The school had been separated from an adjacent Revolutionary Guard base by a fence and repurposed for civilian use nearly a decade ago. Somewhere in the targeting cycle, it seems, that fact was never updated.

The exact role of AI in the strike on Minab has not been officially confirmed. What is known is that the targeting infrastructure in which those systems operate has no reliable mechanism for flagging when the underlying intelligence is a decade out of date.

Whether or not an algorithm selected this school, it was selected by a system that algorithmic targeting built. To strike 1,000 targets in the first 24 hours of the campaign in Iran, the US military relied on AI systems to generate, prioritize, and rank the target list at a speed no human team could replicate.

Gaza was the laboratory. Minab is the market. The result is a world in which the most consequential targeting decisions in modern warfare are made by systems that cannot explain themselves, supplied by companies that answer to no one, in conflicts that generate no accountability and no reckoning. That is not a failure of the system. That is the system.


Who is to blame when AI kills?

We should resist the temptation to blame the algorithm alone for the logic that makes children into acceptable error rates. In July 2014, four boys from the Bakr family – Ismail, Zakariya, Ahed and Mohammad, aged nine to 11 – were killed on a beach in Gaza. No AI was involved. The site had been preclassified as a Hamas naval compound. The boys were flagged as suspicious because they ran, then walked – behavior that matched a targeting template for fighters trying not to draw attention. When the first missile hit, the surviving children fled. The drone followed them and fired again. An officer later testified that from a vertical aerial view, it is very hard to identify children. The strike was logged as a targeting error.

A classified Israeli military database, reviewed by the Guardian, +972 Magazine and Local Call, indicated that of more than 53,000 deaths recorded in Gaza, named Hamas and Islamic Jihad fighters accounted for roughly 17%. That suggests the rest, 83%, were civilians. These are not the statistics of a war fought with precision; they are the statistics of a war in which imprecision is the aim. (The IDF disputed figures presented in the Guardian article, although it did not specify which.)

Mohamed Bakr and his wife, Salwa, with one of their children. They lost their 11-year-old son, Mohammad, during a 2014 airstrike on Gaza City’s beach. Photograph: Sean Smith/The Guardian

So AI targeting systems did not invent this logic. They inherited it, encoded it across millions of data points, and automated it beyond any meaningful human check. When a school in Minab is classified in a database as a military compound, that is not a malfunction. It is the fog procedure, the same logic that chased four boys down a beach in Gaza – running exactly as designed, at a different scale, in a different country, with a different weapon. The darkness just has better hardware now.

Many of these AI systems inherently defy international humanitarian law, which does not merely demand correct outcomes from military operations; it requires a careful process before an attack is carried out. A commander must make every reasonable effort to verify that a target is a legitimate military objective. The law also requires that everything feasible be done to protect civilians from the effects of attack, not as an afterthought, but as a parallel and equal obligation.

That obligation cannot be delegated to a system whose reasoning is opaque and whose outputs cannot be interrogated in real time. In Gaza, an algorithm processed data on every person in the strip – phone records, movement patterns, social connections, behavioral signals – and produced a ranked list of names, each assigned a probability score indicating the likelihood they were a combatant. This is not the same as a human analyst identifying a known militant and programming a weapon to hit them. The AI was not confirming identities. It was inferring them, statistically, across an entire population, generating targets that no human had individually assessed before they appeared on the list.

Verification, in this system, meant a human operator reviewed each name for an average of about 20 seconds, long enough to confirm the target was male. Then they signed off. One system alone produced more than 37,000 targets in the first weeks of the war. Another was capable of generating 100 potential bombing sites per day. The humans in the loop were not exercising judgment. They were managing a queue.

In Iran, the picture is, for now, less fully documented. But the scale tells its own story. Two sources confirmed to NBC News that Palantir’s AI systems, which draw in part on large language model technology, were used to identify targets. (Palantir’s CEO, Alex Karp, said he “can’t go into specifics” when asked about this on CNBC, but added that Claude was still integrated into Palantir’s systems used in the Iran war.) Brad Cooper, head of the US Central Command, has boasted that the military is using AI in Iran to “sift through vast amounts of data in seconds” in order to “make smarter decisions faster than the enemy can react”. Whether or not every strike was AI-assisted, the tempo of the campaign was only possible because targeting had been substantially automated.

When reported verification times for AI-assisted targets are measured in seconds, we are no longer talking about human judgment with algorithmic assistance. We are talking about rubber-stamping a machine’s output. And when that machine’s data is a decade out of date, the consequences are written in rows of small coffins.

The companies implicated in this are not obscure defense startups. Palantir, founded with early CIA funding and now one of the primary AI infrastructure providers to the US military, supplied systems used in the Iran campaign. Those systems draw in part on Anthropic’s Claude, a large language model whose maker attempted to resist Pentagon pressure to remove ethical constraints on its use for targeting. The Pentagon responded by threatening to cut ties and turning to OpenAI and others instead. The market for killing at scale does not lack for suppliers.

The episode is instructive: the one company that tried to draw a line was sidelined, and the killing continued without interruption. Google, despite significant internal employee protest, signed Project Nimbus, a cloud-computing and AI contract with the Israeli government and military worth more than $1bn.

Amazon is a co-signatory to Project Nimbus alongside Google. Microsoft had deep integration with Israeli military systems before partially withdrawing under pressure in 2024, at which point the data migrated to Amazon Web Services within days.

Anduril, founded by Palmer Luckey and staffed heavily with former US defense officials, builds autonomous weapons systems explicitly designed for lethal targeting. OpenAI, which until recently prohibited military use in its terms of service, quietly removed that restriction in early 2024 and has since pursued Pentagon contracts. These are among the most valuable companies in the world, with consumer products used by hundreds of millions of people, university research partnerships, and significant political influence in Washington, Brussels and beyond.

Of course private companies have supplied militaries for centuries – with radios, trucks, satellite navigation, microwave technology and, of course, complex weapons systems. This is not new or inherently corrupt. The “dual-use” problem is as old as industrialization: almost any powerful technology can be used for military ends.

But AI targeting is not simply a component that militaries incorporate into their operations. It is the decision architecture itself – the thing that determines who gets killed and why. When a single system can generate tens of thousands of targets in the time it would have taken a human intelligence team to verify 10, the question is not whether private companies should supply militaries. It is whether any legal framework can survive contact with that speed.

In international law we talk about accountability frameworks: the chain of answerability that runs from a decision to use lethal force back to the person who authorized it. An accountability framework requires that someone be identifiable as the decision-maker, that their reasoning be reconstructable after the fact, and that the process obligations the law demands – proportionality assessment, verification, precaution – can be shown to have been followed.

AI targeting systematically destroys each of these conditions. Attribution dissolves across a chain of engineers, commanders, operators and corporate suppliers, each of whom can point to another. Reasoning disappears into a probability score that no lawyer can audit and no court can cross-examine. Process collapses into a 20-second approval of a machine recommendation. And the companies that built and sold the system sit entirely outside the legal framework, because international humanitarian law was designed for states and their agents, and Palantir is not a signatory to the Geneva conventions.

The accountability framework has not been merely strained or tested by AI warfare. It has been made structurally irrelevant.


Lifting the fog of war

We should stop calling these technology companies and start calling them what they are: defense contractors.

The largest AI firms are not neutral infrastructure providers who happened to find a military customer. They are being integrated into the targeting architecture of modern warfare. Their systems sit inside the kill chain, their engineers hold security clearances, their executives rotate through the same revolving door that has always connected Silicon Valley to the Pentagon.

These AI providers are at the cutting edge of the military-industrial complex, and should be regulated as such. Firms such as Raytheon and Lockheed Martin operate within a clear accountability chain – export controls, congressional oversight, liability frameworks and procurement conditions – whereas the companies writing the algorithms that select military targets face only weak rules that have never been applied, tested or enforced.

Demonstrators gather outside the Palantir office to protest Palantir’s role in ICE deportations and the Israel-Gaza war, in Palo Alto, California, on 14 July 2025. Photograph: Anadolu/Getty Images

That is not an oversight. It is a choice, actively maintained by lobbying, by the deliberate blurring of “commercial” and “defense” products, and by a regulatory culture that still treats AI as a consumer technology that happened to find its way to the battlefield. Palantir spent close to $6m lobbying Washington in 2024, and in one quarter of 2023 outspent Northrop Grumman. It launched a dedicated foundation to shape the policy environment it operates in. The consortium of Palantir, Anduril, OpenAI, SpaceX and Scale AI was described by its own participants as a project to supply a new generation of defense contractors to the US government. The venture capital firms backing these companies, Andreessen Horowitz and Founders Fund, have cultivated influence through proximity to power: former senior officials on their advisory boards, partners rotating through government roles and direct access to the policymakers who determine how much the Pentagon spends and on what.

The EU AI Act, the most ambitious attempt yet to govern artificial intelligence, explicitly exempts military and national security applications, with the stated justification that international humanitarian law is the more appropriate framework. It is a remarkable act of circularity: the one body of law being systematically destroyed by these systems is designated as their regulator, while the regulators who might actually constrain them look away.

In the United States, the AI provisions of the 2025 National Defense Authorization Act do not regulate military AI. They direct agencies to adopt more of it. Pete Hegseth’s AI strategy, issued in January 2026, frames the question entirely as a race, directing the Pentagon to move at wartime speed, with AI as the first proving ground. The regulatory culture has not failed to catch up with the technology. It has decided, deliberately, not to try.

So far, the only serious government intervention in AI military capability we have seen came not from a state demanding restraint or accountability, but from the US demanding the systems be made more lethal. That is the horizon of ambition we have accepted.

Banning these systems outright is impossible when so many of the actors involved care little about international law. But pressure points remain, and they are real. Any future government in Washington that wants to use AI military capability without producing an unending series of Minabs will need a regulatory framework – not as a concession to critics but as a basic requirement for not becoming a rogue actor. The same is true in Europe, where Britain has committed over £1bn to a new AI-integrated targeting system connecting sensors and strike capabilities across all domains; where France’s leading AI company has partnered with a German defense startup to build autonomous weapons platforms; and where Germany is deploying AI-guided attack drones in Ukraine.

There’s an opening to regulate these systems. The EU has the most obvious tools, not through the AI Act, which deliberately exempts military applications, but through export controls and procurement conditions on the dual-use systems that move between commercial and defense markets. International courts are beginning to open doors too: the ICJ advisory opinion on Palestinian rights has created a framework in which companies supplying systems used in unlawful strikes face potential liability exposure in jurisdictions that take international law seriously. And AI firms need governments, not just as customers but as the providers of the computing power, the energy, and the physical infrastructure that frontier AI requires and that no company can sustain from commercial revenues alone. That dependency gives states that are willing to use it real leverage over companies that would prefer not to be regulated. The question is whether any government with the tools to act will decide, before the next Minab, that the cost of inaction has become too high.

What regulation should look like is relatively straightforward, even if it is hard to enforce. AI systems used in targeting must be explainable – not via a probability score but through reasoning that a lawyer can audit. The cumulative civilian cost of AI-assisted campaigns must be assessed as a whole. And the liability that currently stops at the operator must extend up the supply chain to the companies that knowingly built and sold opaque systems for use in armed conflict. These are not novel demands. They are the minimum conditions for the laws of war to mean anything in the age of algorithmic targeting.

In the meantime, the fog procedure is operational and coming to define the future of war. But the soldiers who fired into the darkness were at least present in it. The companies that built what replaced them are doing it from Palo Alto, at no personal risk, with no legal exposure, and with every incentive to do it again.

  • Avner Gvaryahu is a DPhil researcher at the Blavatnik school of government, University of Oxford. He is a former executive director of Breaking the Silence, an Israeli human rights organization of former soldiers
