AI’s Role in Iran Targeting Raises Questions Amid Growing Military Dependence

Paris — Artificial intelligence is playing a growing role in modern warfare, with military forces using the technology to analyze intelligence and assist in selecting potential targets. However, its use on the battlefield continues to raise ethical and legal concerns among experts.

Recent conflicts involving the United States, Israel, and Iran have reportedly seen increased deployment of AI tools to process large volumes of data and accelerate military decision-making.

Experts believe AI systems may have helped identify targets during thousands of U.S. and Israeli strikes on Iran since February 28, although the exact extent of the technology’s role has not been officially confirmed.

According to Laure de Roucy-Rochegonde from the French Institute of International Relations, major military powers are investing heavily in AI-driven defense technologies.

“Almost any military function can be enhanced with AI,” she said, citing areas such as logistics, reconnaissance, information warfare, cybersecurity, and electronic warfare.

One of AI’s most prominent uses in the military is shortening the so-called “kill chain”—the process from identifying a target to carrying out a strike.

For instance, the U.S. military uses the Maven Smart System, developed by Palantir Technologies, which can analyze intelligence data and prioritize possible targets.
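
News accounts describe such systems, at a high level, as ranking engines: candidate targets arrive with evidence attached, and software scores and orders them for human review. The Python sketch below illustrates only that general pattern; the fields, weights, and names are invented for this example and say nothing about Maven's actual design.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A hypothetical candidate target with per-source evidence scores (0.0-1.0)."""
    name: str
    imagery_confidence: float   # e.g., match quality from satellite imagery
    signals_confidence: float   # e.g., strength of intercepted-signal correlation
    recency_hours: float        # age of the most recent supporting evidence

def priority_score(c: Candidate) -> float:
    """Combine evidence into a single score; the weights here are arbitrary."""
    evidence = 0.6 * c.imagery_confidence + 0.4 * c.signals_confidence
    # Stale intelligence decays the score: roughly halved every 24 hours.
    staleness_penalty = 0.5 ** (c.recency_hours / 24.0)
    return evidence * staleness_penalty

candidates = [
    Candidate("site-A", 0.9, 0.7, 2.0),
    Candidate("site-B", 0.8, 0.9, 30.0),
]

# Rank for human review: the software orders, a person decides.
for c in sorted(candidates, key=priority_score, reverse=True):
    print(f"{c.name}: {priority_score(c):.2f}")
```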

A report by The Washington Post said the generative AI model Claude from Anthropic has been integrated with the system to improve detection and simulation capabilities.

Military analysts say AI systems can process massive volumes of information from sources such as satellite imagery, radar signals, drone footage, audio monitoring, and electromagnetic data.
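
In engineering terms, combining those feeds is a data-fusion problem: detections from independent sensors must first be correlated into a single picture before any analysis. The sketch below shows one simplified, generic approach, grouping detections that fall close together in space and time; the types and thresholds are assumptions made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str       # "satellite", "radar", "drone", ...
    lat: float
    lon: float
    timestamp: float  # seconds since some shared epoch

def same_object(a: Detection, b: Detection,
                max_deg: float = 0.01, max_secs: float = 300.0) -> bool:
    """Crude gate: detections within ~1 km and 5 minutes may be one object."""
    return (abs(a.lat - b.lat) < max_deg
            and abs(a.lon - b.lon) < max_deg
            and abs(a.timestamp - b.timestamp) < max_secs)

def fuse(detections: list[Detection]) -> list[list[Detection]]:
    """Greedily group detections into clusters of likely-identical objects."""
    clusters: list[list[Detection]] = []
    for d in detections:
        for cluster in clusters:
            if any(same_object(d, existing) for existing in cluster):
                cluster.append(d)
                break
        else:
            clusters.append([d])
    return clusters

dets = [Detection("satellite", 35.700, 51.400, 0.0),
        Detection("drone", 35.701, 51.401, 120.0),
        Detection("radar", 36.000, 52.000, 60.0)]
print(len(fuse(dets)))  # 2 clusters: the first two detections merge
```

Operational systems typically rely on statistical trackers such as Kalman filters rather than fixed distance gates, but the underlying grouping idea is the same.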

Bertrand Rondepierre, head of the French defense ministry's AI agency, said such systems allow armed forces to analyze intelligence faster and more comprehensively.

However, the growing use of AI in warfare has intensified debates about accountability and human control.

During the conflict in the Gaza Strip, Israeli forces reportedly used a targeting system known as Lavender, which helped identify suspected targets but was said to operate with a margin of error.
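
The scale involved is what makes any margin of error consequential: a small error rate applied across thousands of automated recommendations still yields many misidentifications. With purely hypothetical numbers:

```python
# Hypothetical illustration: the figures below are invented, not reported data.
flagged = 10_000     # recommendations produced by an automated system
error_rate = 0.05    # fraction that are misidentifications
print(f"expected misidentifications: {flagged * error_rate:.0f}")  # 500
```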

Critics argue that reliance on AI raises difficult questions about responsibility if mistakes occur.

Peter Asaro, chairman of the International Committee for Robot Arms Control, said determining accountability becomes complicated when both humans and machines are involved in military decisions.

For example, a widely reported bombing of a school in Iran, in which local authorities say around 150 people were killed, has raised concerns that faulty or outdated AI data could lead to mistaken targeting. Neither the United States nor Israel has confirmed responsibility for the strike.

Despite the concerns, some military officials emphasize that AI systems do not operate independently.

Rondepierre said fully autonomous weapons systems operating without human oversight remain largely theoretical, stressing that military commanders retain responsibility for both decisions and their supervision.
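
In software terms, the oversight Rondepierre describes is a human-in-the-loop control: a system may recommend an action, but nothing proceeds without an explicit, recorded human decision. A minimal sketch of that pattern, with invented names:

```python
def execute_with_approval(recommendation: str, approver: str) -> bool:
    """Block until a named human explicitly approves or rejects the action."""
    answer = input(f"[{approver}] Approve '{recommendation}'? (yes/no): ")
    approved = answer.strip().lower() == "yes"
    # Record who decided what (a stand-in for a real audit log),
    # so accountability stays with a person rather than the software.
    print(f"decision={'APPROVED' if approved else 'REJECTED'} by={approver}")
    return approved
```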

Experts believe the current use of AI in warfare represents only the early stages of a broader transformation.

According to Benjamin Jensen of the Center for Strategic and International Studies, armed forces worldwide are still learning how to fully integrate AI into military planning and operations.

“It’s going to take a generation for us to really figure this out,” Jensen said.