Inside Anthropic’s battle with the Pentagon
Yesterday's revelation that outdated targeting data may have resulted in a mistaken US strike on a school in Iran has thrown a new spotlight on the use of AI software by the military in war zones.
Produced by Maya Elese
One of the most harrowing images of the Pentagon’s “Operation Epic Fury” thus far was an aerial photograph of dozens of graves, most of them belonging to schoolgirls aged between 7 and 12 who were killed by a missile that hit their school in Minab in southern Iran on 28 February. The UN has not confirmed the exact number of casualties, but the death toll is believed to be at least 150.
While we still don’t know definitively who was responsible for the attack, yesterday The New York Times, citing officials, reported that preliminary investigations by the Pentagon have concluded the US was at fault. They found that outdated targeting data may have caused a missile to mistakenly hit the school, which was on the same block as buildings used by the navy of Iran’s Islamic Revolutionary Guard - a top target of the US strikes.
President Trump was asked about the article as he left the White House, and replied, “I don’t know about that.”
So who or what made that mistake? Was it human error, or did it stem from software built by the artificial intelligence firm Anthropic, whose AI model Claude is closely integrated into Pentagon systems and relies on Pentagon data to provide real-time intelligence analysis for operatives in the field?
Certainly that was what many were suggesting, without evidence, when the strike took place. And it is a legitimate question, given we know that Claude is being used within the Department of War.
The other reason people immediately focused on Claude was due to the very public row between Anthropic and the Pentagon specifically about the use of Claude on the battlefield.
For those who have not been watching every twist and turn in the war of words between the two sides, here’s what you need to know.
Anthropic’s Claude is being used extensively by the US military to wage war in Iran. Fact.
It was used in the targeting and capture of President Maduro in Venezuela. Fact.
So the row between the two has profound real world implications.
First lawsuit of its kind
The latest twist, earlier this week, saw Anthropic sue the Pentagon in a first of its kind lawsuit against the Trump administration.
A very bold move, given most tech giants have bent over backwards to win Trump’s favour. Not so much Anthropic and its CEO Dario Amodei.
And why? Because the White House designated his firm a “supply chain risk” for not allowing unfettered access to Claude, its powerful AI tool, within the Pentagon.
The designation has historically been reserved for foreign adversaries such as the Chinese tech firm Huawei. It has never before been applied to a major American company, let alone to THE company - Anthropic - that is arguably more in bed with the US military than any other, since Claude was the first AI tool to be integrated into the government’s classified networks.
So to say that Claude sat at the heart of the Pentagon’s AI operations is an understatement. And yet, Trump now dismisses the firm and its leaders as “left wing nut jobs”.
That’s because Mr Amodei stood firm in a standoff with the US Defence Secretary Pete Hegseth over the terms of the AI company’s $200m defence contract.
Red lines in the sand
The Pentagon could use the AI tools as it saw fit, but Mr Amodei had two red lines: the technology could not be used to power autonomous weapons - those with no human intervention - or to spy on American citizens.
The Pentagon assured him it didn’t want to use Claude that way, but somehow the two men could not agree on language that gave Mr Amodei enough assurance that the Defence Secretary would not change his mind under different circumstances.
So Amodei walked away. And Hegseth, true to his threats, designated Anthropic a “supply chain risk” not because it was a risk in any real sense, but as punishment for not rolling over and agreeing to the Pentagon’s terms.
But here’s the rub - in fact, there are two.
First, under their “break up” agreement, if you can call it such, the Pentagon demanded a six-month separation period while the military integrated new AI tools, handily provided the very next day by Anthropic’s major rival, OpenAI of ChatGPT fame.
All’s fair in love and war, they say.
Except in the actual ongoing war in Iran, it means that the Pentagon is continuing to use Claude on a daily basis. So much for supply chain risk.
And the chances are it will continue to use Claude for the entire duration of the war, if you believe Trump’s latest musings that “the war is very complete” and could therefore be over within weeks.
Laws of war
But run all this by academics and experts in AI and they make a second, arguably much bigger point.
That Messrs Amodei and Hegseth should not be having this public spat at all.
Instead, according to Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technology at the University of Oxford, they should be adhering to the laws of war.
“The idea that the moral boundaries for the use of AI in defence should be defined either by Amodei, the CEO of Anthropic, or by the Department of War. It’s not up to either of the two. It is a matter of international humanitarian laws. We shouldn’t rely on the ethical compass, the moral compass of a private citizen nor a national government. War is regulated by international regulations and we need those regulations also for AI in defence.”
In theory, Taddeo says, the use of AI in warfare should be governed by the same rules as any other type of warfare, such as the Geneva Conventions. But right now, AI seems to fall into a grey area and it’s not clear it’s being regulated at all.
“We might be eroding international humanitarian laws from within,” she says. “I would not want to see a situation where we identify a war crime…due to some misuse of AI but we are unable to provide accountability and more responsibility for that mistake.”
If the use of AI continues unchecked, Taddeo warns that war will eventually get “closer and closer to constant systematic atrocities” with governments and AI companies potentially pointing the finger at each other over who’s to blame.
Anna Hehir, head of Military AI Governance at the Future of Life Institute, agrees that the lack of governance over AI on battlefields is a ticking time bomb that could endanger innocent lives.
“AI weapons can enable governments to commit atrocities at an unprecedented scale where high numbers of civilians are dying,” she said. “AI weapons are not capable of telling the difference between a combatant or a child, let alone the act of surrender. They’re not predictable, they are error prone.”
Who is accountable in the ‘kill chain’?
Which brings us full circle, to the death of at least 150 people, dozens of them children, attending school on the first day of the war in Iran.
Killed by a missile strike that, according to The New York Times, the US now admits may have been down to a targeting mistake by its intelligence officials.
We will have to wait for the Pentagon’s full investigation and conclusions to be published. Was the mistake human error, as devastating as that may be?
Or was it AI? A source close to Anthropic told us its Claude software is being used extensively on the battlefield in Iran but said it did not draw up lists of targets - so, we assume, the company would deny any responsibility for the data which reportedly led to the strike on the school.
Whatever the outcome, this tragic incident has shone a fresh spotlight on AI in warfare and the lack of transparency and regulation over its use.