Ekhbary
Wednesday, 25 March 2026

AI in Iran Conflict: How Algorithms are Reshaping Warfare and Raising Ethical Questions

Advanced AI systems accelerate military operations, but concerns are mounting over human oversight and accountability.

Abd Al-Fattah Yousef
1 week ago

AI at the Forefront of the Iran Conflict: A Revolution in Military Operations

Battlefields are undergoing a profound transformation due to the expanding use of artificial intelligence technologies in military operations. Algorithms are playing a pivotal role in data analysis, target identification, and accelerating combat decision-making, a trend clearly evident in the current conflict involving Iran. Reports indicate that the US military conducted over 2,000 strikes in just four days, an unprecedented pace compared to previous military campaigns.

This acceleration is largely attributed to AI systems capable of processing vast amounts of intelligence data from drones, satellites, and sensors. These systems generate targeting options at speeds far exceeding traditional methods reliant on human analysis, according to a report by the British newspaper Financial Times.

Generative AI Models: A Critical Tool in Military Planning

The military campaign led by the United States and Israel against Iran marks the first large-scale field deployment of advanced generative AI models, the same category of technologies used in popular applications like intelligent chatbots. These models assist military commanders in interpreting complex intelligence data, planning military operations, and providing real-time feedback during battles.

Over the past two years, the US Department of Defense has significantly expanded the integration of AI technologies into its operational framework. The Maven Smart System, developed by Palantir, serves as the primary platform for Pentagon data analysis, integrating with Anthropic's Claude model to create a real-time data analysis dashboard supporting military operations. This system functions as the "software brain" managing data flow and translating it into operational decisions.

Inference and Logical Analysis: A Quantum Leap in AI Capabilities

Louis Mosley, who heads Palantir's operations in the UK and Europe, stated that the most significant development in modern models is their transition from merely summarizing information to performing inference and logical analysis. This capability allows intelligent systems to break problems down step by step, substantially increasing the number of military decisions that can be made rapidly during complex operations.

AI systems support what is militarily known as the "kill chain"—the process encompassing target identification, assessment, selection of the appropriate weapon, and post-strike damage assessment. Previously, this process required hours or even days for senior military leaders to review printed reports. Today, AI-powered systems aim to reduce this timeframe to minutes or even seconds.
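The staged process described above can be illustrated as a simple data pipeline with a mandatory human-approval gate. This is a minimal conceptual sketch only: the stage names follow the article's description of the "kill chain" (identification, assessment, weapon selection, post-strike review), but every class, function, and threshold here is hypothetical and does not reflect any real military system such as Maven.

```python
from dataclasses import dataclass

# Hypothetical illustration of a staged targeting pipeline with a
# human-in-the-loop gate. Names and thresholds are invented for clarity.

@dataclass
class Candidate:
    name: str
    confidence: float  # model's confidence that the detection is a valid target

def identify(raw_detections):
    """Stage 1: convert raw sensor detections into candidate targets."""
    return [Candidate(d["name"], d["confidence"]) for d in raw_detections]

def assess(candidates, threshold=0.9):
    """Stage 2: keep only candidates above a confidence threshold."""
    return [c for c in candidates if c.confidence >= threshold]

def human_review(candidates, approve):
    """Mandatory oversight gate: nothing proceeds without human approval."""
    return [c for c in candidates if approve(c)]

def run_chain(raw_detections, approve):
    """Compose the stages; the human gate always runs last before any action."""
    return human_review(assess(identify(raw_detections)), approve)
```

The point of the sketch is structural: however fast the automated stages run, the `human_review` gate sits on the critical path, which is precisely the oversight question the article raises.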

Ethical Challenges and Accountability in the Age of Smart Warfare

Academic research suggests that large language models, the technology behind systems like Claude and ChatGPT, can generate significantly larger target lists compared to traditional methods. These systems enable militaries to operate "at unprecedented speed and scale" in aerial targeting operations, according to Sophia Goodfriend, a researcher in warfare technologies at Cambridge University.

AI's role extends beyond data analysis to include technologies like computer vision and autonomous navigation. Experts believe image recognition software can help analyze drone footage to identify ballistic missile launchers or other military targets much faster than previously possible, when soldiers had to manually review hours of video footage.

The proliferation of these technologies raises growing concerns about the level of human oversight in combat decisions. The debate intensified after a sharp disagreement between Anthropic and the Pentagon over the boundaries of military AI use, underscoring the sensitivity of deploying advanced AI models on the battlefield. The bombing of a girls' primary school in Minab, southern Iran, which killed dozens of students and teachers, illustrated the dangers of rapid or inaccurate AI-assisted targeting. Such an incident could constitute a war crime, and the use of AI does not absolve those responsible of criminal or ethical liability.

Towards International Regulation of Smart Weapons

The rapid advancement of AI poses difficult questions about the traceability of decisions made by intelligent systems, especially when they can perform millions of calculations per second. Experts warn that decision-makers may find it challenging to question the recommendations of intelligent systems due to the complexity of the models and the vast amount of data they rely on.

Some researchers believe that introducing AI into military operations could fundamentally alter the nature of warfare by accelerating targeting and expanding the scope of combat operations. Experts raise scenarios involving the use of massive swarms of low-cost drones to track and target adversaries on a large scale.

Consequently, calls are increasing for international rules to govern the use of AI in warfare, including imposing restrictions on, or even outright bans on, lethal autonomous weapons in the future. Some experts believe an agreement among major AI powers, led by the United States and China, could be a first step towards limiting the proliferation of these technologies on battlefields, as AI integration in global military conflicts accelerates.

Keywords: # Artificial Intelligence # Iran War # Military Operations # Generative AI # Military Targeting # Human Oversight # Autonomous Weapons # International Humanitarian Law