The Unseen Perils of AI Weapons and Warfare
GENEVA, Switzerland, 16 April 2025 – As artificial intelligence (AI) is steadily integrated into the architecture of modern war, global decision-makers, human rights defenders and military strategists are grappling with its deep and often disturbing implications. According to the International Committee of the Red Cross (ICRC), the central concern is not only the capability of these new systems, but also their impact on civilians and combatants. The ICRC's warnings rest on an effects-based assessment, which weighs not only the visible impact of systems already in use, but also the expected harm of those not yet deployed. In short, AI at war is no longer a question of possibility; it is now a question of morality, legality and long-term risk.
While countries such as China, the United States, Israel and Russia press ahead with integrating AI into military operations, calls for regulation are growing. A recent ICRC communication to the Secretary-General of the United Nations underlines the urgent need for international law to evolve in step with these technological advances. Without clear frameworks, the militarization of AI threatens to outrun ethical limits and could reconfigure the rules of war in ways previously unthinkable.
Why is AI at war a unique threat?
Unlike traditional weapons systems, AI-driven tools do not simply follow human commands: they interpret, adapt and can act independently. According to a RAND commentary, the dangers of using AI in poorly understood war zones multiply when its role is not clearly defined or when decision-making is fully offloaded onto algorithms. AI-infused war games, often presented as innovative training platforms, risk becoming misleading simulations if human behavior is in fact being represented by machines.
Stephen M. Worman and Bryan Rooney of RAND describe the use of AI in war gaming as a double-edged sword. On the one hand, AI can generate scenarios at unprecedented speed, aiding quantitative analysis. On the other hand, when AI stands in for human decision-makers or plays the adversary, it can distort the learning process. The core concern? You may think you are learning about a conflict scenario when you are actually studying the limits of an AI's interpretation.
What are the strategic implications of AI-enhanced warfare?
Strategically, AI has become the new frontier of global power dynamics. According to The Hindu, even as debates over AI continue, it is clear that AI already plays a vital role in national security. Google's former CEO Eric Schmidt, together with experts such as Dan Hendrycks and Alexandr Wang, has argued for a doctrine of Mutual Assured AI Malfunction (MAIM), modeled on the Cold War framework of mutually assured destruction.
However, the comparison is controversial. MAIM assumes that AI infrastructure is as centralized and physically traceable as nuclear arsenals. But AI projects are often distributed, decentralized and built on open contributions, making preemptive strikes or deterrence strategies nearly unworkable. Critics argue that this flawed analogy could drive hasty political decisions, justifying aggressive countermeasures against ambiguous threats.
Q: Can AI be regulated like nuclear materials?
A: No, because AI models do not require physical enrichment processes the way nuclear weapons do. Once trained, an AI model can be deployed from almost any device with sufficient computing power, making it nearly impossible to control its distribution by conventional means.
Which countries lead the AI arms race?
According to a 24/7 Wall St. report, the United States and China are unquestionably leading the AI arms race. Both countries are pouring billions of dollars into autonomous systems, surveillance technologies and AI-enabled weapons. Although the United States emphasizes its "human-in-the-loop" doctrine, projects such as Sea Hunter and Project Maven suggest a growing reliance on autonomous systems. Meanwhile, China's "intelligentized warfare" strategy aims to infuse AI into every layer of military operations, from real-time command and control to psychological operations.
India is also expanding its investment through initiatives such as the Defence AI Project Agency. Its deployment of autonomous drones along the Chinese border and robotic surveillance units at high-altitude posts indicates a rapidly evolving doctrine. Russia and Ukraine have already fielded AI-assisted drones and navigation systems in their ongoing conflict, illustrating these tools' battlefield utility.
Q: Which AI technologies are currently used in combat?
A: Examples include autonomous drones, AI-based surveillance and reconnaissance, predictive analytics, automated target recognition systems, unmanned warships and AI-driven ground vehicles. Some systems, such as Israel's Harpy loitering munition and the Lavender targeting algorithm, can select and attack targets independently.
Where are the red lines in the AI war?
One of the most striking warnings comes from Israel's deployment of AI systems such as Lavender and The Gospel. According to reports, Lavender independently identifies individuals to be targeted, while The Gospel identifies structural targets. Critics claim that these systems have been implicated in actions that violate international law, particularly in the Gaza conflict, where thresholds for collateral damage have allegedly been manipulated or ignored.
This kind of use raises deep ethical and legal questions. Who is responsible when an AI kills the wrong person? How do we audit an algorithm's decision-making process? These questions remain unanswered, creating a dangerous accountability gap. The ICRC stresses that the introduction of AI must not outpace the development of legal safeguards. Without sound frameworks, military forces risk normalizing indiscriminate violence under the guise of technological sophistication.
What about human oversight?
Under EU rules, AI-enabled weapons must maintain a degree of human oversight. However, member states differ considerably in how they interpret and apply this principle. Germany, for example, has developed rapid-response systems capable of neutralizing threats in milliseconds. South Korea's Super aEgis II can autonomously track and engage targets four kilometers away. Such advances call into question the very definition of "human control" in combat environments.
While maintaining human participation is a noble objective, the reality of war, which is chaotic, fast-moving and often driven by split-second decisions, means that even minimal human supervision can be impossible. In these scenarios, the line between assistance and autonomy becomes dangerously blurred.
Q: Is there global consensus on regulating AI weapons?
A: No. Although organizations such as the ICRC and the United Nations advocate regulation, countries differ considerably in their positions. While the EU focuses on human oversight, Russia and Israel oppose restrictions. The absence of a unified position weakens international regulatory efforts.
Can we really understand the risks?
War games have long served to simulate conflict scenarios and test decision-making. But the inclusion of AI complicates these exercises. According to Worman and Rooney, the knowledge gained from these games comes not only from the outcomes, but from the interpersonal dynamics, trust and communication between players. Introducing AI risks replacing this nuanced interaction with opaque simulations, limiting what can be learned and potentially instilling false confidence in imperfect systems.
The authors argue that trusting AI in wartime requires much more than technical validation: we need to understand the cognitive frameworks within which it operates. Otherwise, we risk replacing uncertainty with illusion. For a machine to convincingly embody an adversary, it must grasp not only strategic behavior but also the psychological nuances of leadership, something even seasoned diplomats have often misjudged.
As the RAND authors put it, "We learned not only from game results or discussions, but also from interactions between players and game designers. For this form of knowledge generation, which may be largely experiential, the ways in which we can use AI to generate knowledge are even less clear."
Ultimately, the challenge is not to build more powerful algorithms, but to design governance systems that can keep pace with them. Treating AI as just another tool in the arsenal, without weighing its unique attributes and risks, is not merely shortsighted; it is dangerous. Today's assumptions about AI could shape the wars of tomorrow. Whether the path runs through deterrence, regulation or cooperation, one thing is clear: this conversation can no longer be delayed.