Iran: a Test Lab for AI Warfare

MARC VANDEPITTE says AI is driving the pace of destruction to an unprecedented speed

Residents look on and take pictures as flames and smoke rise from an oil storage facility struck during the U.S.–Israeli military campaign in Tehran, Iran, March 7, 2026

WAR is being fought less and less by humans and more and more by algorithms. In Iran, we see how artificial intelligence is driving the pace of destruction to an unprecedented speed, bringing with it a host of profound moral problems.

Unprecedented speed of destruction

New wars are rarely purely military; they also serve as proving grounds for technology. The current conflict in Iran is a tragic case in point. In this war, we are witnessing a technological shift that, until recently, was unthinkable.

In just four days, the Pentagon struck more than 2,000 targets. According to US Secretary of Defence Pete Hegseth, the current operation against Iran has already been twice as deadly as the 2003 invasion of Iraq and seven times as powerful as the 12-day war against Iran in June 2025.

This enormous acceleration is the direct result of integrating advanced AI systems into military operations. Within the US military, software forms a kind of digital brain during operations. The so-called Maven system analyses real-time data from the battlefield and supports the entire kill chain: from identifying a target to assessing damage following a strike.

The system combines various AI models to interpret information and provide recommendations. Generative AI – similar to the technology civilians use for office work or education – helps commanders understand complex data and make decisions.

This system marks a qualitative leap. The technology has evolved from simple data summarisation to complex reasoning, allowing armies to operate on a scale that was previously physically impossible for human planners. The result is a pace of military operations never seen before.

It is noteworthy that the Maven system was designed by the software company Palantir, which was founded by the far-right billionaire Peter Thiel. This tech oligarch is a significant financial backer of President Trump and Vice-President JD Vance.

Man as spectator of the machine

A crucial problem in this lightning-fast warfare is the role of the human factor. Although soldiers officially still make the final decisions, it is becoming increasingly difficult for them to oversee the logic of the machine. An algorithm performs millions of calculations per second that a human cannot possibly verify.

Experts warn that this puts soldiers in a position where they blindly trust the recommendations of the computer. It is psychologically and practically almost impossible to reject a rapidly generated target when the justification is hidden inside an impenetrable black box. Human control thus becomes a mere administrative formality.

Moreover, the next step threatens to be even more radical. Some military planners want weapons systems that operate fully autonomously, without direct human control – so-called Lethal Autonomous Weapons Systems (LAWS).

The consequences of this digital tunnel vision have become painfully visible in Iran. The devastating attack on a primary school for girls in Minab raises pressing questions about data verification. Although the investigation is ongoing, this tragedy illustrates the life-threatening risk of targets selected by algorithms without proper human screening.

Moral limits in an age of algorithms

In addition to technical risks, there is a fundamental moral objection to outsourcing decisions about life and death. Should a machine be given the power to decide who dies? The debate about LAWS is gaining urgency as the US and China invest heavily in technology that can kill without human intervention.

Supporters claim that AI can reduce human error and thus prevent civilian casualties. However, the situation in Iran contradicts this. With tens of thousands of buildings damaged, including thousands of homes, the precision of AI seems to be used primarily to destroy more in less time, rather than to act with greater care.

Even tech companies are – at times – applying the brakes. Anthropic (the company behind the chatbot Claude) recently refused to hand over full control of its AI models to the Pentagon for classified use. This led to a conflict in which Washington excluded the company’s technology from the military supply chain.

Urgent need for international rules

The lack of reliability in current AI models makes their deployment on the battlefield even riskier. These systems are based on statistical probabilities and are prone to “hallucinations” – confidently stated fabrications. A chatbot that invents information is an annoyance; in a weapons system, the consequences are catastrophic.

It is, therefore, high time for an international moratorium on the use of fully autonomous weapons. The UN working group that has been debating this for years must finally produce binding conclusions. Just as there is a global ban on chemical and biological weapons, we need clear limits for artificial intelligence in wartime.

The United States and China, as technological superpowers, bear an enormous responsibility. They must reach agreements to limit the spread of this technology and to safeguard human accountability. The international community must demand that the kill chain remains transparent and that a human can always be held responsible for every bullet or rocket fired.

Without strict regulation, we risk a future in which swarms of cheap drones autonomously hunt people – a scenario straight out of a dystopian film. Every new war accelerates the development and deployment of AI on the battlefield. Without international agreements, we face a world in which machines increasingly determine who lives and who dies.
