The deaths of more than 165 schoolgirls in the Iranian city of Minab have intensified the global debate over the use of artificial intelligence in modern warfare. Missiles struck the Shajareh Tayyebeh girls’ elementary school during joint US–Israeli military operations, killing children aged seven to twelve along with teachers and staff. The tragedy, one of the deadliest civilian incidents of the conflict, has drawn international condemnation and prompted questions about whether advanced AI systems were used to identify the target.
US President Donald Trump rejected accusations that American forces were responsible for the deaths, suggesting instead that Iranian munitions may have caused the strike and insisting that US operations did not intentionally target civilians. However, independent assessments and media investigations have indicated that US involvement was likely, prompting closer scrutiny of the technologies used in the operation.
Investigations into the Minab strike suggest it occurred shortly after the launch of Operation Epic Fury, during which US and Israeli forces targeted Iranian leadership sites, military facilities and nuclear infrastructure. Satellite imagery and geolocated footage reviewed by international analysts indicated the school was hit around the same time as nearby strikes on a naval facility linked to the Islamic Revolutionary Guard Corps. Weapons specialists who examined available footage said the munition fragments appeared consistent with Tomahawk Land Attack Missiles, which are operated by US forces in the region. Based on this material, multiple investigations concluded that American forces were likely responsible for the strike.
Reports have also circulated alleging a “double-tap” strike, in which the site was hit twice within a short interval. Regional outlets cited local sources claiming that a second explosion occurred roughly forty minutes after the first, striking people who had gathered at the building. Such patterns, if verified, are often associated with attempts to maximize casualties, though confirmation remains difficult in active conflict zones. US authorities have declined to comment on operational specifics, while official statements have emphasized that any civilian casualties were unintended.
The incident has fueled concern about the growing integration of artificial intelligence into combat decision-making. Reports indicate that generative AI systems, including tools developed by Anthropic, were embedded in US operational workflows to assist with intelligence analysis, target identification and combat simulations. These systems were reportedly used through partnerships with defense technology firms, allowing AI to process large volumes of surveillance and battlefield data at high speed.
According to multiple reports, AI platforms helped generate extensive lists of potential targets, assign priority rankings and provide geographic coordinates. Analysts noted that such tools significantly shortened the “kill chain,” the sequence from target detection to strike authorization, enabling hundreds of strikes to be executed in rapid succession. Military planners argue this accelerates response and improves coordination, but critics warn that compressing decision timelines increases the risk of oversight failures.
Historical satellite imagery reportedly showed that the Minab school building had once been attached to a military facility but was later separated and repurposed for civilian use. Analysts have suggested that automated systems relying on outdated or misinterpreted data might fail to register such changes. Experts also caution that generative AI tools remain prone to factual errors, image misinterpretation and flawed reasoning even in low-risk civilian applications, raising concern about their reliability in lethal environments.
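The staleness concern is concrete enough to sketch. The record layout, label and threshold below are hypothetical assumptions for illustration, not anything reported about the systems involved; the sketch shows only how a pipeline that checks the age of the imagery behind a site label can flag an outdated classification for human review, where one without the check would silently reuse it.

```python
from datetime import date

# Hypothetical record for illustration only; no real system's schema
# or data is implied. The label here predates the site's repurposing.
site_record = {
    "label": "military facility",
    "source_imagery_date": date(2019, 5, 1),  # when the label was assigned
}

MAX_LABEL_AGE_DAYS = 365  # assumed review threshold, not actual doctrine

def label_is_stale(record: dict, today: date) -> bool:
    """Return True if the imagery behind a site label exceeds the age threshold."""
    age_days = (today - record["source_imagery_date"]).days
    return age_days > MAX_LABEL_AGE_DAYS

if label_is_stale(site_record, date.today()):
    # Without this check, the pipeline would keep treating the site as
    # military even after it was separated and repurposed for civilian use.
    print("label stale: require fresh imagery and human review")
```

A guard like this cannot fix image misinterpretation, but it removes one of the simplest ways an automated system can act on a picture of the world that no longer exists.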
The use of AI-assisted targeting is not without precedent. Similar technologies were deployed in earlier conflicts to identify large numbers of strike locations rapidly. Supporters say these systems enhance precision and reduce human workload, while critics argue that limited human oversight and algorithmic opacity can contribute to civilian harm when errors occur.
The Minab tragedy underscores a broader dilemma: artificial intelligence can process information and recommend actions far faster than human analysts, yet even a small error rate in automated targeting can result in large-scale civilian casualties. The balance between speed, accuracy and accountability remains unresolved as militaries expand AI adoption.
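The arithmetic behind that dilemma is worth making explicit. The figures in the sketch below are hypothetical assumptions, not reported values from the Minab investigations; they show only how a per-target error rate that sounds small translates into a predictable number of wrongly identified targets once recommendations are generated at scale.

```python
# Illustrative arithmetic only. Both the campaign size and the error
# rates are hypothetical assumptions, not reported figures.

def expected_misidentifications(n_targets: int, error_rate: float) -> float:
    """Expected number of wrongly identified targets, assuming independent
    errors at a fixed per-target rate."""
    return n_targets * error_rate

# A hypothetical campaign of 500 AI-generated target recommendations:
for rate in (0.01, 0.02, 0.05):
    n = expected_misidentifications(500, rate)
    print(f"per-target error rate {rate:.0%}: ~{n:.0f} expected misidentified targets")
```

Under these assumed numbers, even a 1 percent error rate implies roughly five misdirected strikes in every five hundred, and human review of each recommendation is precisely the step that compressed kill chains put under pressure.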
Defense leaders have continued to advocate accelerated AI integration into military operations, framing it as essential for strategic advantage. However, transparency around how such systems function in real-time combat remains limited. In the absence of full disclosure, the question of whether AI contributed to the Minab strike remains unanswered, but the incident has intensified calls for stricter safeguards, oversight mechanisms and clearer rules governing autonomous and AI-assisted warfare.
The deaths of schoolchildren in Minab have therefore become a focal point in the global debate over whether advanced AI systems, despite rapid technological progress, can be trusted in high-stakes battlefield decisions where mistakes carry irreversible human costs.