This report provides a comprehensive analysis of the transformative impact of Artificial Intelligence (AI) on the design and capabilities of military drone systems. The integration of AI is not merely an incremental enhancement but represents a fundamental paradigm shift in the character of modern warfare. This analysis concludes that AI is the central catalyst driving the evolution of unmanned aerial systems (UAS) from remotely piloted tools into “AI-native” autonomous assets, a transition with profound strategic consequences for national security.
The report’s findings are structured around six key areas. First, it examines the redesign of the drone airframe itself, arguing that the operational necessity for onboard data processing—or edge computing—in contested environments is forcing a new design philosophy. This philosophy is governed by the stringent constraints of Size, Weight, Power, and Cost (SWaP-C), creating a strategic imperative for the development of hyper-efficient, specialized AI hardware. The nation-states that master the design and mass production of these low-SWaP AI accelerators will gain a decisive advantage.
Second, the report details how AI is revolutionizing the core capabilities of drones. Autonomous navigation, untethered from GPS, provides unprecedented resilience against electronic warfare. AI-powered sensor fusion synthesizes data from multiple sources to create a rich, contextual understanding of the battlefield that surpasses human analytical capacity. Concurrently, Automated Target Recognition (ATR) is evolving from simple object detection to flexible, language-based identification, allowing drones to find novel targets on the fly.
Third, these enhanced core functions are enabling entirely new operational paradigms. AI-driven swarm intelligence allows hundreds of drones to act as a single, collaborative, and resilient entity, capable of overwhelming traditional defenses through saturation attacks. Simultaneously, cognitive electronic warfare (EW) equips these systems to dominate the electromagnetic spectrum, autonomously detecting and countering novel threats in real time. The fusion of these capabilities creates self-protecting, intelligent networks that are redefining force projection.
Fourth, the report analyzes the crisis of control this technological shift precipitates. The traditional models of human-in-the-loop (HITL) command are becoming untenable in the face of machine-speed combat. Operational necessity is forcing a move toward human-on-the-loop (HOTL) supervision, which, due to cognitive limitations and the sheer velocity of events, functionally approaches a human-out-of-the-loop (HOOTL) reality. The concept of “Meaningful Human Control” (MHC) is consequently shifting from a real-time action to a pre-mission process of design, testing, and constraint-setting, creating a significant “accountability gap.”
Fifth, the strategic implications for the 21st-century battlefield are examined. AI is compressing the military kill chain to machine speeds, creating a dynamic of hyper-fast warfare that risks inadvertent escalation. Concurrently, the proliferation of low-cost, AI-enabled drones is democratizing lethal capabilities, empowering non-state actors and altering the global balance of power. This has ignited an AI-versus-AI arms race in counter-drone technologies, forcing a doctrinal shift away from exquisite, high-cost platforms toward attritable, mass-produced intelligent systems.
Finally, the report addresses the profound ethical and legal challenges posed by these systems, focusing on the international debate surrounding Lethal Autonomous Weapon Systems (LAWS). The slow pace of international lawmaking stands in stark contrast to the rapid pace of technological development, suggesting that de facto norms established on the battlefield will likely precede any formal treaty, creating a complex and volatile regulatory environment.
In conclusion, the nation-states that successfully navigate this transformation—by prioritizing investment in attritable AI-native platforms, adapting military doctrine to machine-speed warfare, cultivating a new generation of tech-savvy warfighters, and proactively shaping international norms—will hold a decisive strategic advantage in the conflicts of the 21st century.
Section 1: The AI-Native Airframe: Redesigning Drones for Autonomous Operations
The most fundamental impact of Artificial Intelligence on drone systems begins not with abstract algorithms but with the physical and digital architecture of the platform itself. The strategic shift from remotely piloted aircraft, which function as extensions of a human operator, to truly autonomous systems necessitates a radical rethinking of drone design. This evolution is driven by the primacy of onboard data processing, a capability that enables mission execution in the face of sophisticated electronic warfare. However, this demand for onboard computational power creates a critical and defining tension with the inherent physical constraints of unmanned platforms, a tension governed by the imperatives of Size, Weight, Power, and Cost (SWaP-C). The resolution of this tension is leading to the emergence of the “AI-native” airframe, a new class of drone designed from the ground up for autonomous warfare.
1.1 The Primacy of Onboard Processing: The Shift from Remote Piloting to Edge AI
The defining characteristic that separates a modern AI-enabled drone from its predecessors is its capacity to perform complex computations locally, a concept known as edge computing or “AI at the edge”.1 This capability is the bedrock of true autonomy, as it untethers the drone from the need for a continuous, high-bandwidth data link to a human operator or a remote cloud server.3 In the context of modern peer-level conflict, where the electromagnetic spectrum is a fiercely contested domain, this independence is not a luxury but a mission-critical necessity. The ability of a drone to continue its mission—to navigate, identify targets, and even engage them—after its communication link has been severed by enemy jamming is a revolutionary leap in operational resilience.2
This paradigm shift is enabled by the integration of highly specialized hardware designed specifically to handle the computational demands of AI and machine learning (ML) tasks. While traditional drones rely on basic microcontrollers for flight stability, AI-native platforms incorporate a suite of powerful processors. These include general-purpose graphics processing units (GPGPUs), which excel at the parallel processing required by many ML algorithms, and increasingly, more efficient and specialized hardware such as application-specific integrated circuits (ASICs) and systems-on-a-chip (SoCs).2 These components are optimized to run the complex neural network models that underpin modern AI capabilities like computer vision and real-time data analysis. Industry leaders in the semiconductor space, such as NVIDIA, have become central players in the defense ecosystem, with their compact, powerful computing modules like the Jetson series (e.g., Xavier NX, Orin Nano, Orin NX) being explicitly designed into the autopilots of advanced military and commercial drones.7
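To make the edge-computing concept concrete, the sketch below shows the shape of an onboard inference loop using ONNX Runtime, a common way to deploy neural networks on embedded modules. The model file, input tensor name, and preprocessing are illustrative assumptions, not details drawn from any fielded system.

```python
# Minimal sketch of edge inference on an embedded module, assuming an
# ONNX-exported detection model (model path, input name, and preprocessing
# are illustrative).
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "detector.onnx",                      # hypothetical quantized model
    providers=["CUDAExecutionProvider",   # GPU path on a Jetson-class SoC
               "CPUExecutionProvider"],   # fallback if no GPU is present
)

def detect(frame: np.ndarray) -> np.ndarray:
    """Run one detection pass locally; no ground link is required."""
    blob = frame.astype(np.float32)[np.newaxis] / 255.0  # NHWC, normalized
    (boxes,) = session.run(None, {"input": blob})        # input name assumed
    return boxes

# A downstream autopilot would consume `boxes` directly onboard, which is
# what removes the round trip to a ground station.
```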
The operational imperative for this onboard processing power is clear. It reduces decision latency to near-zero, enabling instantaneous responses that are impossible when data must be transmitted to a ground station for analysis and then have commands sent back. This is crucial for time-sensitive tasks such as terminal guidance for a kinetic strike, dynamic obstacle avoidance in a cluttered urban environment, or real-time threat analysis and countermeasures against an incoming missile.4 By processing sensor data locally, the drone can make its own decisions, transforming it from a remote-controlled puppet into a self-reliant agent capable of adapting to changing battlefield conditions.9
1.2 Redefining Design Under SWaP-C Imperatives
While the demand for onboard AI processing is theoretically limitless, its practical implementation is governed by the ironclad constraints of Size, Weight, Power, and Cost—collectively known as SWaP-C. This set of interdependent variables represents the central design challenge for unmanned systems, particularly for the smaller, more numerous, and often expendable drones that are proving so decisive in modern conflicts.5 Every component added to an airframe must be justified across all four dimensions, as an increase in one often negatively impacts the others.
This creates a fundamental design trade-off. Advanced AI algorithms require immense processing power, which translates directly into larger, heavier processing units that consume more electrical power and generate significant heat, which in turn may require additional weight for cooling systems. These factors directly diminish the drone’s operational effectiveness by reducing its flight endurance (by drawing more from the battery) and its payload capacity (by taking up a larger portion of the allowable weight).2 Furthermore, the cost of these high-performance components can be substantial, challenging the strategic utility of deploying them on attritable platforms designed to be lost in combat. The financial calculus is stark: for military UAS, a reduction of just one pound in platform weight can save an estimated $30,000 in operational costs for an ISR platform and up to $60,000 for a combat platform over its lifecycle.12
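The per-pound figures above lend themselves to a simple worked example. The sketch below, using hypothetical component weights, shows how the lifecycle arithmetic favors a specialized low-power SoC over a heavier general-purpose GPU module.

```python
# Back-of-envelope SWaP-C check using the per-pound lifecycle figures cited
# above; the component weights below are hypothetical.
COST_PER_LB = {"isr": 30_000, "combat": 60_000}  # $ saved per pound removed

payload = {
    "gpgpu_module": 1.8,   # lb, high-end embedded GPU (assumed)
    "soc_module":   0.4,   # lb, specialized low-power SoC (assumed)
}

delta_lb = payload["gpgpu_module"] - payload["soc_module"]
for role, rate in COST_PER_LB.items():
    print(f"{role}: swapping GPGPU for SoC saves ~${delta_lb * rate:,.0f}")
# isr: swapping GPGPU for SoC saves ~$42,000
# combat: swapping GPGPU for SoC saves ~$84,000
```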
The solution to this complex optimization problem is the development of “AI-native” drone platforms. In contrast to legacy airframes that have been retrofitted with AI capabilities, these systems are engineered from their inception for autonomous operation.1 This holistic design philosophy influences every aspect of the drone’s construction. Airframes are built from advanced lightweight composite materials to maximize strength while minimizing weight. Power systems are meticulously engineered for efficiency, with some designs even incorporating AI-driven energy management algorithms to optimize power distribution during different phases of a mission.6 Most critically, the electronics architecture is built around highly integrated, low-power SoCs and ASICs that are custom-designed to provide the maximum computational performance within the smallest possible SWaP-C footprint.13 The intense focus on this area is evidenced by significant military research and development efforts aimed at creating miniaturized, low SWaP-C payloads, such as compact radar and multi-band antenna systems, that can be integrated onto small UAS without compromising their core performance characteristics.16
The SWaP-C constraint, therefore, acts as the primary forcing function in the design of modern tactical AI-powered drones. It is no longer sufficient to simply write more advanced software; the central challenge is creating the hardware that can execute that software efficiently within the unforgiving physical limits of an unmanned airframe. This reality elevates the design and mass production of specialized, hyper-efficient, low-power AI accelerator chips from a mere engineering problem to a primary strategic concern. The competitive advantage in 21st-century drone warfare is rapidly shifting away from nations that can build the largest and most expensive platforms to those that can design and mass-produce the most computationally powerful microelectronics within the tightest SWaP-C budget.
This hardware-centric paradigm, born from the immutable laws of physics governing flight, introduces a new and critical strategic vulnerability. An adversary’s ability to disrupt the highly specialized and globally distributed supply chains for these low-SWaP AI chips could effectively ground an opponent’s entire autonomous drone fleet. A future conflict, therefore, will not be waged solely on the physical battlefield but also within the intricate ecosystem of the global semiconductor industry. Actions such as targeted sanctions, cyberattacks on fabrication plants, or control over the supply of rare earth materials necessary for chip production become potent acts of industrial warfare. This reality compels nation-states to pursue self-sufficiency in the design and manufacturing of these critical components, fundamentally transforming the concept of a “defense industrial base” to include what were once considered purely commercial entities: semiconductor foundries and microchip design firms.
Section 2: Revolutionizing Core Capabilities: From Enhanced to Emergent Functions
The integration of AI into the drone’s core architecture is not merely about improving existing functions; it is about creating entirely new capabilities that transform the drone from a simple sensor-shooter platform into an intelligent agent. This revolution is most apparent in three key areas: autonomous navigation, which grants resilience in contested environments; advanced perception through sensor fusion, which enables a deep, contextual understanding of the battlefield; and automated target recognition, which accelerates the process of identifying and acting upon threats. Together, these AI-driven functions represent a qualitative leap in the operational potential of unmanned systems.
2.1 Autonomous Navigation and Mission Execution
For decades, the effectiveness of unmanned systems has been tethered to the availability of the Global Positioning System (GPS). In a modern conflict against a peer adversary, however, the electromagnetic spectrum is a primary battleground, and GPS signals are a prime target for jamming and spoofing. AI provides the critical solution to this vulnerability. By employing advanced techniques such as Visual-Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM), an AI-powered drone can navigate by observing and mapping its physical surroundings.4 Using onboard cameras and other sensors, it can recognize landmarks, build a 3D model of its environment, and determine its position and trajectory relative to that model, all without a single signal from a satellite.19 This capability to operate effectively in a GPS-denied environment represents a quantum leap in mission survivability and operational freedom.
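A minimal illustration of the underlying idea: inertial dead reckoning corrected by camera-derived motion estimates. The sketch below is a toy complementary filter, not a flight-grade VIO or SLAM pipeline; the blending gain and all inputs are assumptions.

```python
# Toy visual-inertial odometry update, illustrating how position can be
# propagated without GPS; dynamics and gains are simplified assumptions.
import numpy as np

def vio_step(pos, vel, accel_imu, dt, visual_velocity=None, alpha=0.3):
    """Propagate state from the IMU, then blend in a camera-derived
    velocity (e.g., from optical flow / feature tracking) when available."""
    vel = vel + accel_imu * dt              # inertial dead reckoning
    if visual_velocity is not None:         # correction from landmark tracking
        vel = (1 - alpha) * vel + alpha * visual_velocity
    return pos + vel * dt, vel

pos, vel = np.zeros(3), np.zeros(3)
pos, vel = vio_step(pos, vel, np.array([0.1, 0.0, 0.0]), dt=0.02,
                    visual_velocity=np.array([0.05, 0.0, 0.0]))
```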
The impact of this resilience is dramatically amplified by AI’s ability to enhance mission success rates. The conflict in Ukraine has served as a proving ground for this technology, where the integration of AI for terminal guidance on first-person view (FPV) drones has reportedly boosted strike accuracy from a baseline of 10-20% to as high as 70-80%.5 This remarkable improvement stems from the AI’s ability to take over the final, critical phase of the attack, homing in on the target even if the communication link to the human operator is lost due to jamming or terrain masking. Beyond terminal guidance, AI algorithms can optimize entire mission profiles in real time. They can dynamically plan flight paths to avoid newly detected air defense threats, reroute to account for changing weather conditions, or adapt the mission plan based on new intelligence, all without direct human input.10
Looking forward, the role of AI in mission planning is set to expand even further. Emerging applications of generative AI, the same technology that powers models like ChatGPT, are being explored for highly complex cognitive tasks. These include the automated planning of intricate, multi-stage mission routes through hostile territory and even the automatic generation of draft operation orders (OPORDs), a task that is traditionally a time-consuming and mentally taxing process for human staff officers.23 By automating these functions, AI promises to significantly reduce the cognitive load on human planners and accelerate the entire operational planning cycle.
2.2 Advanced Perception through AI-Powered Sensor Fusion
A single sensor provides a limited, one-dimensional view of the world. A modern military drone, however, is a multi-sensory platform, equipped with a diverse suite of instruments including high-resolution electro-optical (EO) cameras, infrared (IR) thermal imagers, radar, Light Detection and Ranging (LiDAR), and acoustic sensors.1 The true power of this array is unlocked by AI-driven sensor fusion, the process of intelligently combining data from these disparate sources into a single, coherent, and comprehensive model of the operational environment. This fused picture provides a degree of situational awareness that is impossible for a human operator to achieve by attempting to mentally synthesize multiple, separate data feeds in real time.25
The core benefit of sensor fusion is its ability to overcome the inherent limitations of any single sensor. For instance, an optical camera is ineffective in fog or darkness, but a thermal imager can see heat signatures and radar can penetrate obscurants. An AI algorithm can synthesize the data from all three, correlating a radar track with a thermal signature and, if conditions permit, a visual identification, thereby producing a high-confidence assessment of a potential target.10 This multi-modal approach is critical for all aspects of the drone’s operation, from robust navigation and obstacle avoidance to reliable targeting and threat detection.27 The field is advancing so rapidly that researchers are even exploring the use of novel quantum sensors, with AI being the essential tool to filter the noise and extract meaningful signals from these highly sensitive but complex instruments.28
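The fusion logic can be illustrated with a simple probabilistic sketch: if each sensor independently reports a detection probability for the same track, combining the evidence in log-odds space yields a fused confidence higher than any single modality. The probabilities below are illustrative.

```python
# Sketch of confidence-level fusion across sensors, assuming each modality
# reports an independent detection probability for the same track.
import math

def fuse(probabilities):
    """Combine per-sensor detection probabilities in log-odds space,
    treating the sensors as independent evidence sources."""
    logit = sum(math.log(p / (1 - p)) for p in probabilities)
    return 1 / (1 + math.exp(-logit))

# Radar track, thermal signature, and a partial visual match:
confidence = fuse([0.70, 0.80, 0.60])
print(f"fused confidence: {confidence:.2f}")   # ~0.93, above any single sensor
```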
This capability is having a revolutionary impact on the field of Intelligence, Surveillance, and Reconnaissance (ISR). Traditionally, ISR platforms would collect vast amounts of raw data—terabytes of video footage, for example—which would then be transmitted back to a ground station for painstaking analysis by teams of humans. This process is slow, bandwidth-intensive, and prone to human error and fatigue. AI-powered drones are upending this model. By performing analysis at the edge, the drone’s onboard AI can sift through the raw data as it is collected, automatically filtering out irrelevant information, classifying objects of interest, and prioritizing the most critical intelligence for immediate transmission to human analysts.1 This dramatically reduces the bandwidth required for data exfiltration and, more importantly, accelerates the entire intelligence cycle from days or hours to minutes. The effectiveness of this approach has been demonstrated in Ukraine, where integrated systems like Delta and Griselda use AI to process battlefield reports and drone footage in near real-time, providing frontline units with an unparalleled operational picture.20
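A hedged sketch of that onboard triage step follows: detections are ranked by an assumed priority table and transmitted only within a bandwidth budget, so low-value imagery never leaves the aircraft. Class names, priorities, and clip sizes are hypothetical.

```python
# Sketch of onboard triage before transmission: keep only high-priority
# detections within a bandwidth budget; classes and sizes are assumed.
PRIORITY = {"launcher": 3, "armored_vehicle": 2, "truck": 1, "civilian_car": 0}

def triage(detections, budget_kb):
    """detections: list of (class_name, confidence, clip_size_kb)."""
    ranked = sorted(detections,
                    key=lambda d: (PRIORITY.get(d[0], 0), d[1]),
                    reverse=True)
    out, used = [], 0
    for cls, conf, size in ranked:
        if PRIORITY.get(cls, 0) > 0 and used + size <= budget_kb:
            out.append((cls, conf))
            used += size
    return out   # only these snippets leave the aircraft
```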
2.3 Automated Target Recognition (ATR): See, Understand, Act
Building upon the foundation of advanced perception, AI is enabling a dramatic leap in the speed and accuracy of targeting through Automated Target Recognition (ATR). Using sophisticated machine learning and computer vision algorithms, ATR systems can automatically detect, classify, and identify potential targets within the drone’s sensor feeds.32 This goes beyond simply detecting an object; it involves classifying it (e.g., vehicle, person) and, with increasing fidelity, identifying it (e.g., T-90 main battle tank vs. a civilian tractor). This capability has been shown to be effective at significant ranges, with some systems able to lock onto targets up to 2 kilometers away.20 By automating this critical function, ATR drastically reduces the cognitive burden on human operators, allowing them to focus on higher-level tactical decisions and accelerating the engagement cycle.33
Furthermore, advanced ATR systems are proving adept at countering traditional methods of military deception. Where a human eye might be fooled by camouflage, netting, or even sophisticated inflatable decoys, an AI algorithm can analyze data from across the electromagnetic spectrum. By fusing thermal, radar, and multi-spectral imagery, the ATR system can identify tell-tale signatures—such as the heat from a recently run engine or the specific radar reflectivity of armored steel—that betray the true nature of the target.20
The primary bottleneck in developing more powerful ATR systems is the immense amount of high-quality, accurately labeled data required to train the machine learning models.34 An algorithm can only learn to identify a T-90 tank if it has been shown thousands of images of T-90 tanks in various conditions—different angles, lighting, weather, and states of damage. Recognizing this challenge, military organizations are now focusing heavily on standardizing the curation and labeling of military datasets and developing more efficient training methodologies, such as building smaller, specialized AI models tailored for specific, narrow tasks.20
A revolutionary development on the horizon promises to mitigate this data dependency: Open Vocabulary Object Detection (OVOD) powered by Vision Language Models (VLMs).35 Unlike traditional ATR, which can only find what it has been explicitly trained to see, an OVOD system connects language with imagery. This allows an operator to task the drone using natural language to find novel or uniquely described targets. For example, a commander could instruct the system to “find the command vehicle in that convoy; it’s a truck with a large satellite dish on the roof.” Even if the VLM has never been specifically trained on that exact vehicle configuration, it can use its semantic understanding of “truck,” “satellite dish,” and “roof” to correlate the text description with the visual data from the drone’s sensors and identify the correct target.35 This capability transforms ATR from a rigid, pre-programmed function into a flexible, dynamic, and instantly adaptable tool for battlefield intelligence.
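Conceptually, OVOD reduces to scoring sensor-derived image regions against a text embedding in a shared vector space. The sketch below assumes a CLIP-style model; `embed_text` and `embed_region` are hypothetical stand-ins for that model, and the similarity threshold is arbitrary.

```python
# Sketch of open-vocabulary matching, assuming a CLIP-style model that maps
# text and image crops into a shared embedding space; `embed_text` and
# `embed_region` are hypothetical stand-ins for such a model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_target(description, regions, embed_text, embed_region, thresh=0.3):
    """Return the sensor region best matching a natural-language tasking."""
    query = embed_text(description)  # e.g., "truck with a large satellite dish"
    scored = [(cosine(query, embed_region(r)), r) for r in regions]
    best_score, best_region = max(scored, key=lambda s: s[0])
    return best_region if best_score >= thresh else None
```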
The convergence of these three AI-driven capabilities—resilient navigation, multi-sensor fusion, and advanced ATR—is creating an emergent property that is far greater than the sum of its parts: contextual battlefield understanding. The drone is evolving from a mere tool that sees a target into an intelligent agent that understands the target in its operational context. The logical progression is clear: AI-powered navigation allows the drone to position itself optimally in the battlespace, even under heavy electronic attack. Once in position, AI-driven sensor fusion provides a rich, multi-layered, and continuous stream of data about that environment. Within that data stream, advanced ATR algorithms can pinpoint and identify specific objects of interest.
When these functions are integrated, the system can perform sophisticated correlations at machine speed. It does not just see a “tank” as a traditional ATR system might. Instead, it perceives a “T-72 main battle tank” (a specific ATR identification), located at precise coordinates despite GPS jamming (a function of AI navigation), whose thermal signature indicates its engine was running within the last 15 minutes (an inference from sensor fusion), and which is positioned in a concealed revetment next to a building whose signals intelligence signature matches that of a known command post (a correlation with wider ISR data). This is no longer simple targeting; it is automated, real-time generation of tactical intelligence at the edge. This emergent capability of contextual understanding is the primary enabler of what some analysts have termed “Minotaur Warfare,” a future form of conflict where AI systems assume greater control over tactical operations.5 As a drone’s comprehension of the battlefield begins to approach, and in some cases exceed, that of a human platoon leader, the doctrinal and ethical justifications for maintaining a human “in-the-loop” for every discrete tactical decision will inevitably begin to erode. This creates immense pressure on military organizations to redefine their command and control structures and to place greater trust in AI systems to execute progressively more complex and lethal decisions, thereby accelerating the trend toward greater autonomy in warfare.
Section 3: New Paradigms in Unmanned Warfare
The integration of artificial intelligence is not only enhancing the individual capabilities of drones but is also enabling entirely new operational concepts that were previously confined to the realm of science fiction. These emerging paradigms, principally swarm intelligence and cognitive electronic warfare, represent a fundamental change in how military force can be organized, projected, and sustained on the modern battlefield. They are not incremental improvements on existing tactics but are instead the building blocks of a new form of high-tempo, algorithmically-driven conflict.
3.1 Swarm Intelligence and Collaborative Autonomy
A drone swarm is not simply a large number of drones flying in the same area; it is a group of unmanned systems that utilize artificial intelligence to communicate, collaborate, and act as a single, cohesive, and intelligent entity.1 Unlike traditionally controlled assets, a swarm does not rely on a central human operator directing the actions of each individual unit. Instead, its collective behavior is an “emergent” property that arises from individual drones following a simple set of rules—such as maintaining separation from their neighbors, aligning their flight path with the group, and maintaining cohesion with the overall swarm—inspired by the flocking of birds or schooling of fish.37 This allows for complex group actions to be performed with a remarkable degree of coordination and adaptability.
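The three rules named above are essentially Craig Reynolds’ classic “boids” model. The sketch below implements one update step of that model; the gains, neighborhood radius, and time step are illustrative tuning choices, not parameters of any military system.

```python
# Classic "boids"-style update implementing the three rules named above
# (separation, alignment, cohesion); gains and radii are illustrative.
import numpy as np

def swarm_step(pos, vel, dt=0.1, r=25.0, k_sep=1.5, k_ali=1.0, k_coh=0.8):
    """pos, vel: (N, 3) arrays. Each drone reacts only to neighbors within r."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        nbr = (dist > 0) & (dist < r)
        if not nbr.any():
            continue
        acc[i] -= k_sep * (d[nbr] / dist[nbr, None] ** 2).sum(axis=0)  # separation
        acc[i] += k_ali * (vel[nbr].mean(axis=0) - vel[i])             # alignment
        acc[i] += k_coh * (pos[nbr].mean(axis=0) - pos[i])             # cohesion
    vel = vel + acc * dt
    return pos + vel * dt, vel
```

Because each drone reads only its neighbors, no central node exists to destroy, which is the source of the resilience discussed below.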
The tactical applications of this technology are profound. Swarms are particularly well-suited for conducting saturation attacks, where the sheer number of inexpensive, coordinated drones can overwhelm and exhaust the magazines of even the most sophisticated and expensive air defense systems.1 A single billion-dollar Aegis destroyer can intercept dozens of incoming threats, but it may be unable to counter a coordinated attack by a thousand AI-guided drones costing only a few thousand dollars each. Beyond saturation attacks, swarms are ideal for executing complex reconnaissance missions over a wide area, establishing persistent area denial, or conducting multi-axis, synchronized strikes on multiple targets simultaneously.39
The key to a swarm’s operational effectiveness and resilience lies in its decentralized command and control (C2) architecture. In a centralized system, the loss of the single command node can paralyze the entire force. In a swarm, each drone makes decisions based on its own sensor data and peer-to-peer communication with its immediate neighbors.37 This distributed intelligence means that the loss of individual units, or even entire sub-groups, does not compromise the overall mission. The swarm can autonomously adapt, reallocating tasks and reconfiguring its formation to compensate for losses and continue its objective.41 This inherent resilience makes swarms exceptionally difficult to defeat with traditional attrition-based tactics.
Recognizing this transformative potential, the United States military has been aggressively pursuing swarm capabilities. The Defense Advanced Research Projects Agency (DARPA) OFFensive Swarm-Enabled Tactics (OFFSET) program, for example, aimed to develop and demonstrate tactics for heterogeneous swarms of up to 250 air and ground robots operating in complex urban environments.42 While large-scale swarm combat has yet to be seen, the first uses of autonomous swarms have been reported in conflicts in Libya and Gaza, signaling that this technology is rapidly moving from the laboratory to the battlefield.42
3.2 Cognitive Electronic Warfare (EW): Dominating the Spectrum
The modern battlefield is an invisible storm of electromagnetic energy. Communications, navigation, sensing, and targeting all depend on the ability to successfully transmit and receive signals across the radio frequency (RF) spectrum. Consequently, electronic warfare—the art of controlling that spectrum—is central to modern conflict. However, traditional EW systems, which rely on pre-programmed libraries of known enemy signals, are becoming increasingly obsolete. Adversaries are fielding agile, software-defined radios and radars that can change their frequencies, waveforms, and pulse patterns on the fly, creating novel signatures that a library-based system cannot recognize or counter.5
Cognitive electronic warfare is the AI-driven solution to this dynamic threat. Instead of relying on a static threat library, a cognitive EW system uses machine learning to sense and analyze the electromagnetic environment in real time.47 An AI-enabled drone can autonomously detect an unfamiliar jamming signal, use ML algorithms to classify its key parameters, and then generate a tailored countermeasure—such as a precisely configured jamming waveform or a rapid frequency hop—all within milliseconds and without requiring any input from a human operator.49
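That sense-classify-respond loop can be sketched as follows, reduced here to a purely defensive frequency-hop decision. The classifier is a stand-in for an ML model trained offline, and real systems operate on raw RF samples at far higher rates than this toy interface suggests.

```python
# Sketch of the sense -> classify -> respond loop described above, reduced to
# a defensive frequency-hop decision; the classifier and channel model are
# stand-ins for onboard ML operating on raw IQ samples.
def ew_step(spectrum_snapshot, classify, channels, current):
    """spectrum_snapshot: per-channel interference power (dB, assumed).
    classify: an ML model returning a label such as 'barrage_jam'."""
    label = classify(spectrum_snapshot)          # trained offline, runs onboard
    if label == "clear":
        return current                           # keep the current link
    # Under jamming, hop to the quietest channel rather than consult a
    # static threat library.
    quietest = min(channels, key=lambda ch: spectrum_snapshot[ch])
    return quietest
```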
This capability is fundamentally dual-use, encompassing both defensive and offensive applications. Defensively, it provides a powerful form of Electronic Protection (EP), allowing a drone or a swarm to dynamically protect itself from enemy jamming and GPS spoofing attempts. This ensures that the drones can maintain their communication links and navigational accuracy, and ultimately complete their mission even in a highly contested EW environment.1 Offensively, the same AI techniques can be used for Electronic Attack (EA): an AI-powered system can more effectively probe an adversary’s network to find vulnerabilities, then deploy optimized jamming or spoofing signals to disrupt their radar, neutralize their air defenses, or sever their command and control links.22 The ultimate goal is adaptive counter-jamming, in which AI agents proactively perceive the electromagnetic environment and autonomously execute complex anti-jamming strategies. These strategies include not only adjusting the drone’s own communication parameters but also physically maneuvering the drone, or the entire swarm, to find clearer signal paths or to better triangulate and neutralize an enemy jammer.52
The fusion of swarm intelligence with cognitive electronic warfare creates a powerful, emergent capability: a self-protecting, resilient, and intelligent force projection network. A swarm is no longer just a collection of individual sensor-shooter platforms; it becomes a mobile, adaptive, and distributed system for seizing and maintaining control of the battlespace. The logic of this combination is compelling. A swarm is composed of numerous, geographically distributed nodes (the individual drones). Each of these nodes can be equipped with cognitive EW payloads. Through the swarm’s collaborative AI, these nodes can be dynamically tasked in real time.
For instance, in a swarm of fifty drones, ten might be assigned to sense the RF environment, fifteen might be tasked with providing protective jamming (EA) for the entire group, and the remaining twenty-five could be dedicated to the primary ISR or strike mission. The swarm’s AI-driven logic can reallocate these roles instantaneously based on the evolving tactical situation. If a jammer drone is shot down, another drone can be autonomously re-tasked to take its place. If a new, unknown enemy radar frequency is detected, the entire swarm can adapt its own communication protocols and jamming profiles to counter it. This creates a system that is orders of magnitude more resilient, adaptable, and survivable than a single, high-value asset attempting to perform the same mission.
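A minimal sketch of this re-tasking logic appears below, using the fifty-drone split from the example. The quotas mirror the text; the rule of refilling sensing and jamming roles from the strike pool is an assumption made for illustration.

```python
# Sketch of the role-reallocation logic described above: fixed role quotas,
# refilled from the strike pool when a jammer or sensor drone is lost. The
# quotas mirror the 50-drone example; the refill rule is an assumption.
QUOTA = {"sense": 10, "jam": 15, "strike": 25}

def rebalance(roles: dict[str, str], lost: set[str]) -> dict[str, str]:
    """roles: drone_id -> role. Remove lost drones, then refill under-strength
    roles by re-tasking strike drones (the least time-critical pool here)."""
    roles = {d: r for d, r in roles.items() if d not in lost}
    for role, quota in QUOTA.items():
        if role == "strike":
            continue
        short = quota - sum(1 for r in roles.values() if r == role)
        donors = [d for d, r in roles.items() if r == "strike"]
        for d in donors[:max(short, 0)]:
            roles[d] = role            # autonomous re-tasking, no ground call
    return roles
```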
This new paradigm will inevitably lead to a future battlefield characterized by “swarm versus swarm” combat.55 In such a conflict, victory will not be determined by the side with the most powerful individual platform, but by the side whose swarm algorithms can out-think, out-maneuver, and out-adapt the enemy’s algorithms. This reality signals a profound shift in military research and development priorities, moving away from a traditional focus on platform-centric hardware engineering and toward an emphasis on algorithm-centric software development and AI superiority. It also carries the sobering implication that future conflicts could witness massive, automated engagements between opposing swarms, playing out at machine speeds with little to no direct human intervention. Such a scenario would result in an unprecedented rate of attrition and herald the arrival of a new, terrifyingly fast form of high-tech, mechanized warfare.
Section 4: The Human-Machine Interface: Command, Control, and the Crisis of Control
As artificial intelligence grants drone systems escalating levels of autonomy, the role of the human warfighter is undergoing a profound and contentious transformation. The traditional relationship, in which a human directly controls a machine, is being replaced by a spectrum of more complex human-machine teaming arrangements. This evolution is forcing a critical re-examination of military command and control structures and has ignited an intense global debate over the appropriate level of human judgment in the use of lethal force. At the heart of this debate is the concept of “Meaningful Human Control” (MHC), a principle that is proving to be as difficult to define and implement as it is ethically essential.
4.1 The Spectrum of Autonomy: Defining the Human Role
The relationship between a human operator and an autonomous weapon system is not a binary choice between manual control and full autonomy. Rather, it exists along a spectrum, commonly defined by three distinct levels of human involvement in the decision to use lethal force. Understanding these classifications is essential to grasping the nuances of the current policy and ethical debates.
Table 1: The Spectrum of Autonomy in Unmanned Systems
| Level of Control | Definition | Operational Example | Implications for Command & Control (C2) | Primary Legal/Ethical Challenge |
| --- | --- | --- | --- | --- |
| Human-in-the-Loop (HITL) | The system can perform functions like searching for, detecting, and tracking a target, but a human operator must provide the final authorization before lethal force is applied. The human is an integral and required part of the decision-making process.42 | An operator of an MQ-9 Reaper drone positively identifies a target and receives clearance before manually firing a Hellfire missile. | C2 process is deliberate but can be slow. High cognitive load on the operator. Vulnerable to communication link disruption. Can be too slow for high-tempo or swarm-vs-swarm engagements.57 | Latency and Speed: The time required for human approval can be a fatal liability in rapidly evolving combat scenarios, such as defending against a hypersonic missile or a drone swarm. |
| Human-on-the-Loop (HOTL) | The system is authorized to autonomously search for, detect, track, target, and engage threats based on pre-defined parameters (Rules of Engagement). A human supervisor monitors the system’s operations and has the ability to intervene and override or abort an action.42 | An automated air defense system (e.g., C-RAM) is authorized to automatically engage incoming rockets and mortars. A human supervisor monitors the system and can issue a “cease fire” command if needed. | C2 is supervisory, enabling machine-speed engagements. Reduces operator cognitive load for routine tasks. Allows for management of large-scale systems like swarms. | Automation Bias and Effective Veto: Operators may become complacent and overly trust the system’s judgment, failing to intervene when necessary. The speed of the engagement may make a human veto practically impossible.60 |
| Human-out-of-the-Loop (HOOTL) | The system, once activated, makes all combat decisions—including searching, targeting, and engaging—without any further human interaction or supervision. The human is removed from the individual engagement decision cycle entirely.42 | A “fire-and-forget” loitering munition is launched into a designated area with instructions to autonomously find and destroy any vehicle emitting a specific type of radar signal. | C2 is limited to the initial activation and mission programming. Enables operations in completely communications-denied environments. Represents true autonomy. | The Accountability Gap and IHL Compliance: If the system makes an error and commits a war crime, it is unclear who is legally and morally responsible. The system’s inability to apply human judgment raises serious doubts about its capacity to comply with the laws of war.63 |
Current U.S. Department of Defense policy governing lethal autonomous and semi-autonomous systems requires that commanders and operators exercise “appropriate levels of human judgment over the use of force”.42 However, the relentless pace of technological advancement and the operational realities of modern warfare are placing this policy under immense pressure.
4.2 The Challenge of Meaningful Human Control (MHC)
In response to the ethical and legal dilemmas posed by increasing autonomy, the concept of “Meaningful Human Control” (MHC) has become the central pillar of international regulatory discussions.67 The principle posits that humans—not machines—must retain ultimate control over, and moral responsibility for, any use of lethal force.70 While intuitively appealing and commanding broad agreement in general terms, the principle is fraught with profound technical, operational, and philosophical challenges in practice.
First, there are significant technical and operational challenges. The very nature of advanced AI creates barriers to human understanding and control. Many powerful machine learning models function as “black boxes,” meaning that even their designers cannot fully explain the specific logic behind a particular output. This lack of explainability, or epistemic limitation, makes it impossible for a human operator to truly understand why a system has decided a particular object is a legitimate target, fundamentally undermining the basis for meaningful control.71 Furthermore, an AI system, no matter how sophisticated, lacks genuine human judgment, empathy, and contextual understanding. It cannot comprehend the value of a human life or interpret the subtle, non-verbal cues that might signal surrender or civilian status, all of which are critical for making lawful and ethical targeting decisions in the complex fog of war.71
Second, there are cognitive limitations inherent in the human-machine interface itself. A large body of research in cognitive psychology has identified a phenomenon known as “automation bias”: the tendency for humans to over-trust the suggestions of an automated system, even when those suggestions are incorrect.60 An operator supervising a highly reliable autonomous system may become complacent, failing to maintain the situational awareness needed to detect an error and intervene in time. This is compounded by the temporal limitations imposed by machine-speed warfare: an AI can process data and cycle through an engagement decision in milliseconds, a speed at which deliberating, deciding, and physically executing an override becomes practically impossible for a human.60
Finally, there is no internationally accepted definition of what constitutes “meaningful” control. Interpretations vary wildly among nations. Some argue it requires direct, positive human authorization for every single engagement (a strict HITL model). Others contend that it is satisfied by a human setting the initial rules of engagement, target parameters, and geographical boundaries for the system, which would permit a HOTL or even HOOTL operational posture.68 This fundamental ambiguity remains a primary obstacle to the formation of any international treaty or binding regulation.
The intense debate over which “loop” a human should occupy is, in many ways, becoming a false choice that is being rendered moot by operational necessity. In a future high-tempo conflict, particularly one involving swarm-versus-swarm engagements, the decision cycle will be compressed to a timescale where a human simply cannot remain in the loop for every individual lethal action. A human operator cannot physically or cognitively process and approve hundreds of distinct targeting decisions in the few seconds it might take for an enemy swarm to close in. This operational reality will inevitably force militaries to adopt a human-on-the-loop supervisory posture as the default for defensive systems.
However, given the powerful effects of automation bias and the sheer velocity of events, the human supervisor’s practical ability to meaningfully assess the tactical situation, identify a potential error in the system’s judgment, and execute a timely veto will be severely constrained. The “veto” option, while theoretically present, becomes functionally impossible to exercise in many critical scenarios. Thus, the operational demand for machine-speed defense is pushing systems toward a state of de facto autonomy, regardless of stated policies that emphasize retaining human control.
This leads to a fundamental re-conceptualization of Meaningful Human Control itself. MHC is evolving from a technical standard engineered into a real-time interface toward a broader legal and ethical framework for managing risk and assigning accountability prior to a system’s deployment. The most “meaningful” control a human will exercise over a future autonomous weapon will not be in the split-second decision to fire, but in the months and years of rigorous design, extensive testing and validation in diverse environments, meticulous curation of training data to minimize bias, and the careful, deliberate definition of operational constraints. This includes setting clear geographical boundaries, defining permissible target classes, and programming explicit, unambiguous rules of engagement. This evolution effectively shifts the locus of responsibility away from the frontline operator and diffuses it across a wide array of actors: the system designers, the software programmers, the data scientists who curated the training sets, and the senior commanders who formally certified and deployed the system. This diffusion creates the widely feared “accountability gap”: a scenario in which a machine commits an act that would constitute a war crime if done by a human, yet responsibility is so fragmented across the long chain of human agents that no single individual can be held morally or legally culpable for the machine’s actions.63
Section 5: Strategic Implications for the 21st Century Battlefield
The proliferation of AI-powered drone systems is not merely a tactical development; it is a strategic event that is fundamentally reshaping the character of conflict, altering the global balance of power, and creating new and dangerous dynamics of escalation. The core impacts can be understood through three interrelated trends: the radical compression of the military kill chain, the democratization of lethal air power, and the emergence of a new, high-speed arms race in counter-drone technologies.
5.1 Compressing the Kill Chain: Warfare at Machine Speed
The traditional military targeting process, often conceptualized as the “F2T2EA” cycle—Find, Fix, Track, Target, Engage, and Assess—is a deliberate, often time-consuming, and human-intensive endeavor.74 Artificial intelligence is injecting unprecedented speed and efficiency into every stage of this process, compressing a cycle that once took hours or days into a matter of minutes, or even seconds.23
Table 2: AI’s Impact Across the F2T2EA Kill Chain
| Kill Chain Phase | Traditional Method (Human-Centric) | AI-Enabled Method (Machine-Centric) | Impact/Acceleration |
| --- | --- | --- | --- |
| Find | Human analysts manually review hours or days of ISR video and signals intelligence to detect potential targets. | AI algorithms continuously scan multi-source ISR data (video, SIGINT, satellite imagery) in real-time, automatically flagging anomalies and potential targets.29 | Reduces target discovery time from hours/days to seconds/minutes. Drastically reduces analyst cognitive load.23 |
| Fix | An operator manually maneuvers a sensor to get a positive identification and precise location of the target. | An autonomous drone, using AI-powered navigation, maneuvers to fix the target’s location, even in GPS-denied environments.20 | Increases accuracy of location data and enables operations in contested airspace. |
| Track | A dedicated team of operators continuously monitors the target’s movement, a process prone to human error or loss of line-of-sight. | AI-powered ATR and sensor fusion algorithms autonomously track the target, predicting its movement and maintaining a persistent track file even with intermittent sensor contact.32 | Improves tracking persistence and accuracy, freeing human operators for other tasks. |
| Target | A commander, often with legal and intelligence advisors, reviews a “target packet” of information to authorize engagement based on Rules of Engagement (ROE). | An AI decision-support system automatically correlates the track file with pre-programmed ROE, classifies the target, assesses collateral damage risk, and recommends engagement options to the commander.76 | Reduces decision time from minutes to seconds. Provides data-driven recommendations to support human judgment. |
| Engage | A human operator manually guides a weapon to the target or designates the target for a guided munition. | An autonomous drone or loitering munition executes the engagement, using onboard AI for terminal guidance to ensure precision, even against moving targets or in jammed environments.5 | Increases probability of kill (Pk) from ~30-50% to ~80% in some cases. Reduces reliance on vulnerable communication links.5 |
| Assess | Analysts review post-strike imagery to conduct Battle Damage Assessment (BDA), a process that can be slow and subjective. | AI algorithms automatically analyze post-strike imagery, comparing it to pre-strike data to provide instantaneous, quantitative BDA and recommend re-attack if necessary. | Accelerates BDA from hours/minutes to seconds, enabling rapid re-engagement of missed targets. |
The strategic goal of this radical acceleration is to achieve “decision advantage” over an adversary. By cycling through the OODA loop (Observe, Orient, Decide, Act) faster than an opponent, a military force can seize the initiative, dictate the tempo of battle, and achieve objectives before the enemy can effectively react.74 However, this pursuit of machine-speed warfare introduces a profound and dangerous risk of unintended escalation. An automated system, operating at a tempo that precludes human deliberation, could engage a misidentified target or act on flawed intelligence, triggering a catastrophic crisis that spirals out of control before human leaders can intervene.78 In a future conflict between two AI-enabled military powers, the immense pressure to delegate engagement authority to machines to avoid being outpaced could create highly unstable “use-them-or-lose-them” scenarios, where the first side to unleash its autonomous systems gains a potentially decisive, and irreversible, advantage.78
5.2 The Proliferation of Asymmetric Power: Democratizing Lethality
For most of military history, the projection of air power—the ability to conduct persistent surveillance and precision strikes from the sky—was the exclusive domain of wealthy, technologically advanced nation-states. The convergence of low-cost commercial drone technology with increasingly accessible and powerful open-source AI software has shattered this monopoly, fundamentally altering the global balance of power between states and non-state actors (NSAs).39
For the cost of a few hundred or thousand dollars, insurgent groups, terrorist organizations, and transnational criminal cartels can now acquire and weaponize capabilities that were, just a decade ago, available only to major militaries.81 These groups can field their own “miniature air forces,” allowing them to conduct persistent ISR on government forces, execute precise standoff attacks with modified munitions, and generate powerful propaganda, all while dramatically reducing the risk to their own personnel.83 This “democratization of lethality” provides a potent asymmetric advantage, allowing technologically inferior groups to inflict significant damage, and impose high costs, on far more powerful conventional forces.
The historical record demonstrates a clear and accelerating trend. State-supported groups like Hezbollah have a long and sophisticated history of using drones for ISR, famously hacking into the unencrypted video feeds of Israeli drones as early as the 1990s to gain a tactical advantage.84 The Islamic State took this a step further, becoming the first non-state actor to weaponize commercial drones at scale, using them for reconnaissance and to drop small mortar-like munitions on Iraqi and Syrian forces.83 More recently, Houthi rebels in Yemen have employed increasingly sophisticated, Iranian-supplied kamikaze drones and anti-ship missiles to significant strategic effect, disrupting global shipping and challenging naval powers.82 The war in Ukraine has served as a global laboratory and showcase for this new reality, where both sides have deployed millions of low-cost FPV drones, demonstrating their ability to decimate armored columns, artillery positions, and logistics lines, and proving that mass can be a quality all its own.5
5.3 The Counter-Drone Arms Race: AI vs. AI
The inevitable strategic response to the proliferation of offensive AI-powered drones has been the rapid emergence of an arms race in AI-powered Counter-Unmanned Aircraft Systems (C-UAS).85 Defending against small, fast, and numerous autonomous threats is a complex challenge that cannot be solved by any single technology. Effective C-UAS requires a layered, integrated defense-in-depth approach that combines multiple sensor modalities—such as RF detectors, radar, EO/IR cameras, and acoustic sensors—to reliably detect, track, classify, and ultimately neutralize incoming drone threats.86
Artificial intelligence is the critical enabling technology that weaves these layers together. AI algorithms are essential for fusing the data from disparate sensors, distinguishing the faint radar signature or unique RF signal of a hostile drone from the clutter of non-threats like birds, civilian aircraft, or background noise. This AI-driven classification drastically reduces false alarm rates and provides human operators with high-confidence, actionable intelligence.36
Once a threat is identified, AI also plays a crucial role in the neutralization phase. Countermeasures range from non-kinetic “soft kill” options, such as electronic warfare to jam a drone’s control link or spoof its GPS navigation, to kinetic “hard kill” solutions, including interceptor drones, high-energy lasers, and high-powered microwave weapons.86 For a given threat, an AI-powered C2 system can autonomously select the most appropriate and efficient countermeasure—for example, choosing to jam a single reconnaissance drone but launching a kinetic interceptor against an incoming attack drone—and can direct the engagement at machine speed. This automated response is absolutely essential for countering the threat of a drone swarm, where dozens or hundreds of targets may need to be engaged simultaneously.92
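As a sketch, such a selection policy might look like the following rule set, where the threat attributes and thresholds are illustrative rather than drawn from any fielded C2 system.

```python
# Sketch of the countermeasure-selection rule described above; the threat
# fields and thresholds are illustrative, not a fielded C2 policy.
def select_countermeasure(threat):
    """threat: dict with 'kind' ('recce'|'attack'), 'count', 'range_m',
    'rf_link' (True if it depends on an active control link)."""
    if threat["count"] > 10:
        return "interceptor_swarm"       # saturation requires mass response
    if threat["kind"] == "attack" and threat["range_m"] < 2_000:
        return "kinetic_interceptor"     # too close to rely on soft kill
    if threat["rf_link"]:
        return "rf_jamming"              # sever the control link (soft kill)
    return "high_energy_laser"           # autonomous drone, no link to jam
```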
This dynamic creates an escalating, high-speed, cat-and-mouse game on the battlefield. Offensive drones will be designed with AI to autonomously navigate, communicate on encrypted, frequency-hopping data links, and use deceptive tactics to evade detection. In response, defensive C-UAS systems will use their own AI to detect those subtle signatures, predict their flight paths, and coordinate a multi-layered defense. This will inevitably lead to a future of “swarm versus swarm” combat, where autonomous offensive swarms are met by autonomous defensive swarms, and victory is determined not by the quality of the airframe, but by the superiority of the underlying algorithms and their ability to learn and adapt in real time.55
The convergence of the compressed kill chain and the proliferation of low-cost, asymmetric drone capabilities is forcing a fundamental doctrinal shift in modern militaries. The focus is moving away from the procurement of exquisite, expensive, and highly survivable individual platforms and toward a new model emphasizing system resilience and attritability. The era of the “unsinkable” aircraft carrier or the “invincible” main battle tank is being challenged by the stark reality that these multi-billion-dollar assets can be disabled or destroyed by a coordinated network of thousand-dollar drones. The logical chain of this strategic shift is clear: AI accelerates the kill chain, making every asset on the battlefield more vulnerable and more easily targeted. Simultaneously, cheap, AI-enabled drones are becoming available to virtually any actor, state or non-state. Therefore, even the most technologically advanced and heavily defended platforms are at constant risk of being overwhelmed and destroyed by a numerically superior, low-cost, and intelligent force.
This new reality renders the traditional military procurement model—which invests immense resources in a small number of highly capable platforms—strategically untenable. The logical response is to pivot investment toward concepts like the Pentagon’s Replicator initiative, which prioritizes the mass production of thousands of cheaper, “attritable” (i.e., expendable) autonomous systems.17 These systems are designed with the expectation that many will be lost in combat, but their low cost and high numbers allow them to absorb these losses and still achieve the mission. This shift toward attritable mass has profound implications for the global defense industry and military force structures. It favors nations with agile, commercial-style advanced manufacturing capabilities over those with slow, bureaucratic, and expensive traditional defense procurement pipelines. The ability to rapidly iterate designs, 3D-print components, and mass-produce intelligent, autonomous drones will become a key metric of national military power. This could also lead to a “hollowing out” of traditional military formations, as investment, prestige, and personnel are redirected from legacy platforms like tanks and fighter jets to new unmanned systems units that require entirely different skill sets, such as data science, AI programming, and robotics engineering.31
Section 6: The Regulatory and Ethical Horizon: Navigating the LAWS Debate
The rapid integration of artificial intelligence into drone systems, particularly those capable of employing lethal force, has created profound legal and ethical challenges that are outpacing the ability of international law and normative frameworks to adapt. The prospect of Lethal Autonomous Weapon Systems (LAWS)—machines that can independently select and engage targets without direct human control—has ignited a global debate that strikes at the core principles of the law of armed conflict and raises fundamental questions about accountability, human dignity, and the future of warfare.
6.1 International Humanitarian Law (IHL) and the Accountability Gap
The use of any weapon in armed conflict is governed by a long-standing body of international law known as International Humanitarian Law (IHL), or the law of armed conflict. The core principles of IHL are designed to limit the effects of war, particularly on civilians. These foundational rules include: the principle of Distinction, which requires combatants to distinguish between military objectives and civilians or civilian objects at all times; the principle of Proportionality, which prohibits attacks that may be expected to cause incidental loss of civilian life, injury to civilians, or damage to civilian objects that would be excessive in relation to the concrete and direct military advantage anticipated; and the principle of Precaution, which obligates commanders to take all feasible precautions to avoid and minimize harm to civilians.93
There are grave and well-founded doubts as to whether a fully autonomous weapon system, powered by AI, could ever be capable of making the complex, nuanced, and context-dependent judgments required to comply with these principles.73 An AI system, no matter how well-trained, lacks uniquely human qualities such as empathy, common-sense reasoning, and a true understanding of the value of human life. It cannot interpret the subtle behavioral cues that might indicate a person is surrendering (hors de combat) or is a civilian in distress. Furthermore, AI systems are vulnerable to acting on biased or incomplete data; a facial recognition algorithm trained on a non-diverse dataset, for example, could be more likely to misidentify individuals from certain ethnic groups, with potentially tragic consequences on the battlefield.71
This leads to the central legal and ethical dilemma of LAWS: the accountability gap.63 In traditional warfare, if a war crime is committed, legal responsibility can be assigned to the soldier who pulled the trigger and/or the commander who gave the unlawful order. When an autonomous system makes a mistake and unlawfully kills civilians, it is not at all clear who should be held responsible. Is it the fault of the software programmer who wrote the faulty code? The manufacturer who built the system? The data scientist who curated the biased training dataset? The commander who deployed the system without fully understanding its limitations? Or the machine itself, which has no legal personality and cannot be put on trial? This diffusion of responsibility across a complex chain of human and non-human actors creates the very real possibility of a legal and moral vacuum, where atrocities could be committed with no one being held legally accountable for them.64
6.2 Global Efforts at Regulation: The UN and Beyond
The international community has been grappling with the challenge of LAWS for over a decade. The primary forum for these discussions has been the Group of Governmental Experts (GGE) on LAWS, operating under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva.42
However, progress within the CCW GGE has been painstakingly slow, largely due to a lack of consensus among member states.99 The debate is characterized by deeply divergent positions. On one side, a large and growing coalition of states, supported by the International Committee of the Red Cross (ICRC) and a broad civil society movement known as the “Campaign to Stop Killer Robots,” advocates for the negotiation of a new, legally binding international treaty. Such a treaty would prohibit systems that cannot be used with meaningful human control and strictly regulate all other forms of autonomous weapons.71 On the other side, a number of major military powers, including the United States, Russia, and Israel, have so far resisted calls for a new treaty. Their position is generally that existing IHL is sufficient to govern the use of any new weapon system, and they favor the development of non-binding codes of conduct, best practices, and national-level review processes rather than a prohibitive international ban.100
The official policy of the United States is articulated in Department of Defense Directive 3000.09, “Autonomy in Weapon Systems.” This directive states that all autonomous and semi-autonomous weapon systems “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”.42 It establishes a rigorous senior-level review and certification process that any new autonomous weapon system must pass before it can be fielded, but it does not ban such systems outright.
Frustrated by the slow, consensus-bound process at the CCW, proponents of regulation have begun to seek alternative venues. In a significant development, the UN General Assembly passed a resolution on LAWS in December 2024 with overwhelming support. This resolution calls for the UN Secretary-General to seek the views of states on LAWS and to hold new consultations, a move widely seen as an attempt to shift the debate to a forum where a single state cannot veto progress. This suggests that momentum toward some form of new international legal instrument is building, even if its final form and forum remain uncertain.93
The international debate on LAWS can be understood as a fundamental clash between two irreconcilable philosophical viewpoints: a human-centric view of law and ethics versus a techno-utilitarian view of military effectiveness. The human-centric perspective, advanced by organizations like the ICRC and the Campaign to Stop Killer Robots, is largely deontological. It argues that the act of a machine making a life-or-death decision over a human being is inherently immoral and unlawful, regardless of the outcome. This view holds that such a decision requires uniquely human capacities like moral reasoning, empathy, and the ability to show mercy, which a machine can never possess. Allowing a machine to kill, therefore, represents a fundamental affront to human dignity and a “digital dehumanization” that must be prohibited.71 The focus of this argument is on the process of the decision.
In contrast, the techno-utilitarian viewpoint, often implicitly held by proponents of autonomous systems and states resisting a ban, is consequentialist. It argues that the primary moral and legal goal in warfare is to achieve legitimate military objectives while minimizing unnecessary suffering and collateral damage. If an AI-powered system can be empirically proven to be more precise, more reliable, and less prone to error, fatigue, or emotion than a human soldier, then its use is not only legally permissible but may even be morally preferable.101 The focus of this argument is on the outcome of the decision. These two starting points—one prioritizing the moral nature of the decision-making process, the other prioritizing the empirical outcome—are in fundamental conflict, which helps to explain the deep divisions and lack of progress in international forums like the CCW. The debate is not merely a technical one about defining levels of autonomy; it is a profound disagreement about the very source of moral authority in the conduct of war.
This deep philosophical divide, combined with the slow, deliberate pace of international diplomacy and treaty-making, stands in stark contrast to the blistering speed of technological development. This creates a dangerous dynamic where operational facts on the ground are likely to establish de facto norms of behavior long before any formal international law can be agreed upon. The widespread and effective use of semi-autonomous loitering munitions and AI-targeted drones in conflicts like the one in Ukraine is already normalizing their presence on the battlefield and demonstrating their military utility. This creates a “new reality” to which international law will likely be forced to adapt, rather than a future condition that it can preemptively shape. Consequently, any future regulations may be compelled to “grandfather in” the highly autonomous systems that are already in service, leading to a potential treaty that bans hypothetical, future “killer robots” while implicitly permitting the very real and increasingly autonomous systems that are already being deployed in conflicts around the world.
Conclusion and Strategic Recommendations
The integration of Artificial Intelligence into unmanned systems is not an incremental evolution; it is a disruptive and revolutionary transformation of military technology and the character of war itself. AI is fundamentally reshaping drone design, creating a new class of “AI-native” platforms constrained by the physics of SWaP-C and dependent on advanced microelectronics. It is enabling a suite of revolutionary capabilities, from resilient navigation in denied environments to the collaborative intelligence of swarms and the adaptive dominance of cognitive electronic warfare. These capabilities are, in turn, compressing the military kill chain to machine speeds, democratizing access to sophisticated air power for non-state actors, and forcing a crisis in traditional models of command and control.
The strategic landscape is being remade by these technologies. The battlefield is becoming a transparent, hyper-lethal environment where survivability depends less on armor and more on algorithms. The logic of military procurement is shifting from a focus on exquisite, high-cost platforms to a new paradigm of attritable, intelligent mass. And the very nature of human control over the use of force is being challenged, creating profound legal and ethical dilemmas that the international community is struggling to address. Navigating this new era of algorithmic warfare requires a clear-eyed assessment of these changes and a deliberate, forward-looking national strategy.
Based on the analysis contained in this report, the following strategic recommendations are offered for policymakers and defense leaders:
- Prioritize Investment in Attritable Mass and Sovereign AI Hardware. The strategic focus of research, development, and procurement must shift. The era of prioritizing small numbers of expensive, “survivable” platforms is ending. The future lies in the ability to field large numbers of intelligent, autonomous, and attritable systems that can be lost without catastrophic strategic impact. This requires a fundamental overhaul of defense acquisition processes to favor speed, agility, and commercial-style innovation. Critically, this strategy is entirely dependent on assured access to the specialized, low-SWaP AI hardware that powers these systems. Therefore, it is a national security imperative to treat the semiconductor supply chain as a strategic asset, investing heavily in domestic chip design and fabrication capabilities to ensure sovereign control over these foundational components of modern military power.
- Drive Urgent and Radical Doctrinal Adaptation. The technologies discussed in this report render many existing military doctrines obsolete. Concepts of command and control must be radically rethought to accommodate human-machine teaming and machine-speed decision-making. Force structures must be reorganized, moving away from platform-centric formations (e.g., armored brigades, carrier strike groups) and toward integrated, multi-domain networks of manned and unmanned systems. Logistics and sustainment models must adapt to a battlefield characterized by extremely high attrition rates for unmanned systems. This doctrinal evolution must be driven from the highest levels of military leadership and must be pursued with a sense of urgency, as adversaries are already adapting to this new reality.
- Cultivate a New Generation of Human Capital. The warfighter of the future will require a fundamentally different skillset. While traditional martial skills will remain relevant, they must be augmented by expertise in data science, AI/ML programming, robotics, and systems engineering. The military must aggressively recruit, train, and retain talent in these critical fields, creating new career paths and promotion incentives for a tech-savvy force. This includes not only uniformed personnel but also a deeper integration of civilian experts and partnerships with academia and the private technology sector.
- Lead Proactively in Shaping International Norms. The United States should not adopt a passive or obstructionist posture in the international debate on autonomous weapons. The slow pace of the CCW process provides an opportunity for the United States and its allies to proactively lead the development of international norms and standards for the responsible military use of AI. Rather than focusing on all-or-nothing bans on hypothetical future systems, this effort should prioritize achievable, concrete regulations that can build a broad consensus. This could include establishing international standards for the testing, validation, and verification of autonomous systems; promoting transparency in data curation and algorithm design to mitigate bias; and developing common frameworks for ensuring legal review and accountability. By leading this effort, the United States can shape the normative environment in a way that aligns with its interests and values, before that environment is irrevocably set by the chaotic realities of the next conflict.
Sources Used
- AI Impact Analysis on the Military Drone Industry – MarketsandMarkets, accessed September 26, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-impact-analysis-military-drone-industry.asp
- Drone AI | Artificial Intelligence Components | Deep Learning Software, accessed September 26, 2025, https://www.unmannedsystemstechnology.com/expo/artificial-intelligence-components/
- Tactical Edge Computing: Enabling Faster Intelligence Cycles – FlySight, accessed September 26, 2025, https://www.flysight.it/tactical-edge-computing-enabling-faster-intelligence-cycles/
- Edge AI in Tactical Defense: Empowering the Future of Military and Aerospace Operations, accessed September 26, 2025, https://dedicatedcomputing.com/edge-ai-in-tactical-defense-empowering-the-future-of-military-and-aerospace-operations/
- Artificial Intelligence’s Growing Role in Modern Warfare – War Room, accessed September 26, 2025, https://warroom.armywarcollege.edu/articles/ais-growing-role/
- AI Components in Drones – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/ai-drone-components/
- Autonomous Machines: The Future of AI – NVIDIA Jetson, accessed September 26, 2025, https://www.nvidia.com/en-us/autonomous-machines/
- Advanced AI-Powered Drone, Autopilot & Hardware Solutions | UST, accessed September 26, 2025, https://www.unmannedsystemstechnology.com/2025/06/advanced-ai-powered-drone-autopilot-hardware-solutions/
- Drone Onboard AI Processors – Meegle, accessed September 26, 2025, https://www.meegle.com/en_us/topics/autonomous-drones/drone-onboard-ai-processors
- AI in Military Drones: Transforming Modern Warfare (2025-2030) – MarketsandMarkets, accessed September 26, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-in-military-drones-transforming-modern-warfare.asp
- SWaP-C: Transforming the Future of Aircraft Components, accessed September 26, 2025, https://www.nextgenexecsearch.com/swap-c-transforming-the-future-of-aircraft-components/
- Optimizing SWaP-C for Unmanned Platforms: Pushing the Boundaries of Size, Weight, Power, and Cost in Aerospace and Defense, accessed September 26, 2025, https://idstch.com/technology/electronics/optimizing-swap-c-for-unmanned-platforms-pushing-the-boundaries-of-size-weight-power-and-cost-in-aerospace-and-defense/
- Using SWaP-C Reductions to Improve UAS/UGV Mission Capabilities – Magazine Article, accessed September 26, 2025, https://saemobilus.sae.org/articles/using-swap-c-reductions-improve-uas-ugv-mission-capabilities-16aerp05_01
- SWaP-optimized mission systems for unmanned platforms help expand capabilities, accessed September 26, 2025, https://militaryembedded.com/unmanned/payloads/swap-optimized-mission-systems-for-unmanned-platforms-help-expand-capabilities
- Optimizing SWaP-C in Defense and Aerospace: Strategies for 2025 – Galorath Incorporated, accessed September 26, 2025, https://galorath.com/blog/optimizing-swap-c-defense-aerospace-2025/
- Low SWAP-C Imaging Radar for Small Air Vehicle Sense and Avoid – NASA TechPort – Project, accessed September 26, 2025, https://techport.nasa.gov/projects/113335
- Low SWaP-C UAS Payload: MatrixSpace Wins AFWERX Contract – Dronelife, accessed September 26, 2025, https://dronelife.com/2024/06/19/matrixspace-awarded-1-25m-afwerx-contract-for-low-swap-c-uas-payload-development/
- AI-Enabled Drone Autonomous Navigation and Decision Making for Defence Security – Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference, accessed September 26, 2025, https://journals.rta.lv/index.php/ETR/article/view/8237
- Revolutionizing drone navigation: AI algorithms take flight – Show Me Mizzou, accessed September 26, 2025, https://showme.missouri.edu/2024/revolutionizing-drone-navigation-ai-algorithms-take-flight/
- Ukraine’s Future Vision and Current Capabilities for Waging AI …, accessed September 26, 2025, https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare
- AI-Enabled Drone Autonomous Navigation and Decision Making for Defence Security – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/382409933_AI-ENABLED_DRONE_AUTONOMOUS_NAVIGATION_AND_DECISION_MAKING_FOR_DEFENCE_SECURITY
- C2 and AI Integration In Drone Warfare – Impacts On TTPs & Military Strategy, accessed September 26, 2025, https://www.strategycentral.io/post/c2-and-ai-integration-in-drone-warfare-impacts-on-ttps-military-strategy
- Innovating Defense: Generative AI’s Role in Military Evolution | Article – U.S. Army, accessed September 26, 2025, https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution
- SensorFusionAI: Shaping the Future of Defence Strategy – DroneShield, accessed September 26, 2025, https://www.droneshield.com/blog/shaping-the-future-of-defence-strategy
- AI for Sensor Fusion: Sensing the Invisible – Mind Foundry, accessed September 26, 2025, https://www.mindfoundry.ai/blog/sensing-the-invisible
- (PDF) UAV Detection Multi-sensor Data Fusion – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/382689292_UAV_Detection_Multi-sensor_Data_Fusion
- AI-Enabled Fusion for Conflicting Sensor Data – Booz Allen, accessed September 26, 2025, https://www.boozallen.com/markets/defense/indo-pacific/ai-enabled-fusion-for-conflicting-sensor-data.html
- Quantum Sensing and AI for Drone and Nano-Drone Detection, accessed September 26, 2025, https://postquantum.com/quantum-sensing/quantum-sensing-ai-drones/
- AI-Powered Intelligence, Surveillance, and Reconnaissance (ISR) Systems in Defense, accessed September 26, 2025, https://www.researchgate.net/publication/394740589_AI-Powered_Intelligence_Surveillance_and_Reconnaissance_ISR_Systems_in_Defense
- What is ISR (Intelligence, Surveillance, Reconnaissance)? – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/drone-acronym-isr/
- Technological Evolution on the Battlefield – CSIS, accessed September 26, 2025, https://www.csis.org/analysis/chapter-9-technological-evolution-battlefield
- Aided and Automatic Target Recognition – CoVar, accessed September 26, 2025, https://covar.com/technology-area/aided-target-recognition/
- Automatic Target Recognition for Military Use – What’s the Potential? – FlySight, accessed September 26, 2025, https://www.flysight.it/automatic-target-recognition-for-military-use-whats-the-potential/
- Operationally Relevant Artificial Training for Machine Learning – RAND, accessed September 26, 2025, https://www.rand.org/pubs/research_reports/RRA683-1.html
- Finding Novel Targets on the Fly: Using Advanced AI to Make Flexible Automatic Target Recognition Systems – Draper, accessed September 26, 2025, https://www.draper.com/media-center/featured-stories/detail/27285/finding-novel-targets-on-the-fly-using-advanced-ai-to-make-flexible-automatic-target-recognition-systems
- AI-powered Management for Defense Drone Swarms – – Datategy, accessed September 26, 2025, https://www.datategy.net/2025/07/21/ai-powered-management-for-defense-drone-swarms/
- Drone Swarms: Collective Intelligence in Action, accessed September 26, 2025, https://scalastic.io/en/drone-swarms-collective-intelligence/
- Generative AI “Agile Swarm Intelligence” (Part 1): Autonomous Agent Swarms Foundations, Theory, and Advanced Applications | by Arman Kamran | Medium, accessed September 26, 2025, https://medium.com/@armankamran/generative-ai-agile-swarm-intelligence-part-1-autonomous-agent-swarms-foundations-theory-and-9038e3bc6c37
- Technology and modern warfare: How drones and AI are transforming conflict, accessed September 26, 2025, https://www.visionofhumanity.org/technology-and-modern-warfare-how-drones-and-ai-are-transforming-conflict/
- The Impact Of Drones On The Future Of Military Warfare – SNS Insider, accessed September 26, 2025, https://www.snsinsider.com/blogs/the-impact-of-drones-on-the-future-of-military-warfare
- AI‑operated swarm intelligence: Flexible, self‑organizing drone fleets – Marks & Clerk, accessed September 26, 2025, https://www.marks-clerk.com/insights/latest-insights/102kt4b-ai-operated-swarm-intelligence-flexible-self-organizing-drone-fleets/
- Lethal autonomous weapon – Wikipedia, accessed September 26, 2025, https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
- (PDF) DARPA OFFSET: A Vision for Advanced Swarm Systems through Agile Technology Development and Experimentation – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/367321998_DARPA_OFFSET_A_Vision_for_Advanced_Swarm_Systems_through_Agile_Technology_Development_and_ExperimentationDARPA_OFFSET_A_Vision_for_Advanced_Swarm_Systems_through_Agile_Technology_Development_and_Exper
- DARPA OFFSET Second Swarm Sprint Pursuing State-of-the-Art Solutions – DSIAC, accessed September 26, 2025, https://dsiac.dtic.mil/articles/darpa-offset-second-swarm-sprint-pursuing-state-of-the-art-solutions/
- OFFSET: Offensive Swarm-Enabled Tactics – Director Operational Test and Evaluation, accessed September 26, 2025, https://www.dote.osd.mil/News/What-DOT-Es-Following/Following-Display/Article/3348613/offset-offensive-swarm-enabled-tactics/
- DoD Directive 3000.09, “Autonomy in Weapon Systems,” January 25 …, accessed September 26, 2025, https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
- Cognitive Electronic Warfare and the Fight for Spectrum Superiority – Parallax Research, accessed September 26, 2025, https://parallaxresearch.org/news/blog/cognitive-electronic-warfare-and-fight-spectrum-superiority
- August/September 2018 – Cognitive Electronic Warfare: Radio Frequency Spectrum Meets Machine Learning | Avionics Digital Edition – Aviation Today, accessed September 26, 2025, https://interactive.aviationtoday.com/avionicsmagazine/august-september-2018/cognitive-electronic-warfare-radio-frequency-spectrum-meets-machine-learning/
- AI-Enabled Electronic Warfare – Defense One, accessed September 26, 2025, https://www.defenseone.com/insights/cards/how-ai-changing-way-warfighters-make-decisions-and-fight-battlefield/3/
- Cognitive Electronic Warfare: Using AI to Solve EW Problems – YouTube, accessed September 26, 2025, https://www.youtube.com/watch?v=oO6piqNroiI
- HX-2 – AI Strike Drone – Helsing, accessed September 26, 2025, https://helsing.ai/hx-2
- Agent-Based Anti-Jamming Techniques for UAV Communications in Adversarial Environments: A Comprehensive Survey – arXiv, accessed September 26, 2025, https://arxiv.org/html/2508.11687v1
- ScaleFlyt Antijamming: against interference and jamming threats | Thales Group, accessed September 26, 2025, https://www.thalesgroup.com/en/markets/aerospace/drone-solutions/scaleflyt-antijamming-against-interference-and-jamming-threats
- [2508.11687] Agent-Based Anti-Jamming Techniques for UAV Communications in Adversarial Environments: A Comprehensive Survey – arXiv, accessed September 26, 2025, https://www.arxiv.org/abs/2508.11687
- AI-controlled drone swarms arms race to dominate the near-future battlefield – YouTube, accessed September 26, 2025, https://www.youtube.com/watch?v=h2O17B4R7Rc
- Humans on the Loop vs. In the Loop: Striking the Balance in Decision-Making – Trackmind, accessed September 26, 2025, https://www.trackmind.com/humans-in-the-loop-vs-on-the-loop/
- What is C2 (Command and Control) & How Does it Work? – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/drone-acronym-c2/
- The Command and Control Element | GEOG 892: Unmanned Aerial Systems, accessed September 26, 2025, https://www.e-education.psu.edu/geog892/node/8
- Human-in-the-Loop (HITL) vs Human-on-the-Loop (HOTL) – Checkify, accessed September 26, 2025, https://checkify.com/article/human-on-the-loop-hotl/
- The (im)possibility of meaningful human control for lethal autonomous weapon systems – Humanitarian Law & Policy Blog, accessed September 26, 2025, https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/
- Understanding “The Loop”: Humans and the Next Drone Generations – Brookings Institution, accessed September 26, 2025, https://www.brookings.edu/wp-content/uploads/2016/06/27-humans-drones-marra-mcneil.pdf
- In On or Out of the Loop, accessed September 26, 2025, https://cms.polsci.ku.dk/publikationer/in-on-or-out-of-the-loop/In_On_or_Out_of_the_Loop.pdf
- Military AI Challenges Human Accountability – CIP – Center for International Policy, accessed September 26, 2025, https://internationalpolicy.org/publications/military-ai-challenges-human-accountability/
- Lethal Autonomous Weapon Systems (LAWS): Accountability, Collateral Damage, and the Inadequacies of International Law – Temple iLIT, accessed September 26, 2025, https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/
- Defense Official Discusses Unmanned Aircraft Systems, Human Decision-Making, AI, accessed September 26, 2025, https://www.war.gov/News/News-Stories/Article/Article/2491512/defense-official-discusses-unmanned-aircraft-systems-human-decision-making-ai/
- Department of Defense Directive 3000.09 – Wikipedia, accessed September 26, 2025, https://en.wikipedia.org/wiki/Department_of_Defense_Directive_3000.09
- How Meaningful is “Meaningful Human Control” in LAWS Regulation? – Lieber Institute, accessed September 26, 2025, https://lieber.westpoint.edu/how-meaningful-is-meaningful-human-control-laws-regulation/
- A Meaningful Floor for “Meaningful Human Control” – Rebecca Crootof, Temple International & Comparative Law Journal, accessed September 26, 2025, https://sites.temple.edu/ticlj/files/2017/02/30.1.Crootof-TICLJ.pdf
- Full article: Imagining Meaningful Human Control: Autonomous Weapons and the (De-) Legitimisation of Future Warfare – Taylor & Francis Online, accessed September 26, 2025, https://www.tandfonline.com/doi/full/10.1080/13600826.2023.2233004
- Meaningful Human Control over Autonomous Systems: A Philosophical Account – PMC, accessed September 26, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC7806098/
- Problems with autonomous weapons – Stop Killer Robots, accessed September 26, 2025, https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/
- A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making, accessed September 26, 2025, https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
- Full article: The ethical legitimacy of autonomous Weapons systems: reconfiguring war accountability in the age of artificial Intelligence – Taylor & Francis Online, accessed September 26, 2025, https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131
- Mapping Artificial Intelligence to the Naval Tactical Kill Chain, accessed September 26, 2025, https://nps.edu/documents/10180/142489929/NEJ+Hybrid+Force+Issue_Mapping+AI+to+The+Naval+Kill+Chain.pdf
- How are Drones Changing Modern Warfare? | Australian Army Research Centre (AARC), accessed September 26, 2025, https://researchcentre.army.gov.au/library/land-power-forum/how-are-drones-changing-modern-warfare
- Air Force Battle Lab advances the kill chain with AI, C2 Innovation – AF.mil, accessed September 26, 2025, https://www.af.mil/News/Article-Display/Article/4241485/air-force-battle-lab-advances-the-kill-chain-with-ai-c2-innovation/
- Shortening the Kill Chain with Artificial Intelligence – AutoNorms, accessed September 26, 2025, https://www.autonorms.eu/shortening-the-kill-chain-with-artificial-intelligence/
- Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare James Johnson – DORAS | DCU Research Repository, accessed September 26, 2025, http://doras.dcu.ie/25505/1/RUSI%20Journal%20JamesJohnson%20%282020%29.pdf
- Unmanned Aerial Systems’ Influences on Conflict Escalation Dynamics – CSIS, accessed September 26, 2025, https://www.csis.org/analysis/unmanned-aerial-systems-influences-conflict-escalation-dynamics
- Democratizing harm: Artificial intelligence in the hands of nonstate actors | Brookings, accessed September 26, 2025, https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
- Non state actors can now create lethal autonomous weapons from civilian products, accessed September 26, 2025, https://www.weforum.org/stories/2022/05/regulate-non-state-use-arms/
- The Rising Threat of Non-State Actor Commercial Drone Use: Emerging Capabilities and Threats – Combating Terrorism Center, accessed September 26, 2025, https://ctc.westpoint.edu/the-rising-threat-of-non-state-actor-commercial-drone-use-emerging-capabilities-and-threats/
- The Role of Drones in Future Terrorist Attacks – AUSA, accessed September 26, 2025, https://www.ausa.org/sites/default/files/publications/LWP-137-The-Role-of-Drones-in-Future-Terrorist-Attacks_0.pdf
- On the Horizon: The Ukraine War and the Evolving Threat of Drone Terrorism, accessed September 26, 2025, https://ctc.westpoint.edu/on-the-horizon-the-ukraine-war-and-the-evolving-threat-of-drone-terrorism/
- Pentagon Fast-Tracks AI Into Drone Swarm Defense – Warrior Maven, accessed September 26, 2025, https://warriormaven.com/news/land/pentagon-fast-tracks-ai-into-drone-swarm-defense
- Counter-Drone Systems – RF Drone Detection and Jamming Systems – Mistral Solutions, accessed September 26, 2025, https://www.mistralsolutions.com/counter-drone-systems/
- The Future of AI and Automation in Anti-Drone Detection, accessed September 26, 2025, https://www.nqdefense.com/the-future-of-ai-and-automation-in-anti-drone-detection/
- 8 Counter-UAS Challenges and How to Overcome Them – Robin Radar, accessed September 26, 2025, https://www.robinradar.com/blog/counter-uas-challenges
- The impact of AI on counter-drone measures – Skylock, accessed September 26, 2025, https://www.skylock1.com/the-impact-of-ai-on-counter-drone-measures/
- How AI-Powered Anti-Drone Solutions Transform Defense Operations? – – Datategy, accessed September 26, 2025, https://www.datategy.net/2025/07/15/how-ai-powered-anti-drone-solutions-transform-defense-operations/
- A counter to drone swarms: high-power microwave weapons | The Strategist, accessed September 26, 2025, https://www.aspistrategist.org.au/a-counter-to-drone-swarms-high-power-microwave-weapons/
- Honeywell Unveils AI-Enabled Counter-Drone Swarm System – National Defense Magazine, accessed September 26, 2025, https://www.nationaldefensemagazine.org/articles/2024/9/17/honeywell-unveils-ai-enabled-uas-system-to-counter-swarm-drones
- Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty | ASIL, accessed September 26, 2025, https://www.asil.org/insights/volume/29/issue/1
- Compliance with and enforcement of IHL – GSDRC, accessed September 26, 2025, https://gsdrc.org/topic-guides/international-legal-frameworks-for-humanitarian-action/challenges/compliance-with-and-enforcement-of-ihl/
- International Humanitarian Law – European Commission, accessed September 26, 2025, https://civil-protection-humanitarian-aid.ec.europa.eu/what/humanitarian-aid/international-humanitarian-law_en
- When AI Meets the Laws of War – IE Insights, accessed September 26, 2025, https://www.ie.edu/insights/articles/when-ai-meets-the-laws-of-war/#:~:text=Legal%20and%20Ethical%20Considerations&text=To%20start%2C%20determining%20accountability%20for,autonomous%20nature%20of%20these%20systems.
- The Ethics of Automated Warfare and Artificial Intelligence, accessed September 26, 2025, https://www.cigionline.org/the-ethics-of-automated-warfare-and-artificial-intelligence/
- International Discussions Concerning Lethal Autonomous Weapon Systems – Congress.gov, accessed September 26, 2025, https://www.congress.gov/crs-product/IF11294
- Challenges in Regulating Lethal Autonomous Weapons Under International Law, accessed September 26, 2025, https://www.swlaw.edu/sites/default/files/2021-03/3.%20Reeves%20%5Bp.%20101-118%5D.pdf
- Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective | Carnegie Endowment for International Peace, accessed September 26, 2025, https://carnegieendowment.org/research/2024/08/understanding-the-global-debate-on-lethal-autonomous-weapons-systems-an-indian-perspective?lang=en
- Pros and Cons of Autonomous Weapons Systems – Army University Press, accessed September 26, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
- Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can – Scholarship Archive, accessed September 26, 2025, https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=2804&context=faculty_scholarship