The Tactical Edge of Agentic Autonomy: Strategic Shifts in US Defense and Small Arms Integration for 2026

1. Executive Summary

The year 2026 marks a structural inflection point within the United States defense sector, characterized by a decisive transition from generative artificial intelligence to agentic artificial intelligence. This shift represents a move from passive analytical tools to autonomous, goal-oriented software agents capable of executing complex workflows, streamlining supply chains, and integrating directly into tactical infantry systems. The fiscal year 2026 defense budget underscores this transition by allocating a dedicated USD 13.4 billion specifically to autonomy and artificial intelligence within an overall budget that has crossed the trillion-dollar threshold.1 This unprecedented financial commitment, which exceeds the entire annual budget of the National Aeronautics and Space Administration, signifies that artificial intelligence is no longer viewed merely as an experimental supportive force multiplier. Instead, the technology has evolved into a primary intelligence layer designed to compress decision cycles from hours to seconds across multiple operational domains.1

A pivotal element of this modernization effort is the Department of War’s focus on deploying these autonomous capabilities directly to the tactical edge. Initiatives such as the January 2026 implementation of the “AI-first” agenda and the launch of the Agent Network project demonstrate a top-down mandate to integrate agentic systems into battle management and squad-level operations.2 Concurrently, the private defense industrial base is answering this demand with specialized, domain-specific platforms. The deployment of WarClaw, a military-specific autonomous software agent developed by the veteran-founded startup Edgerunner AI, exemplifies a broader industry trend of moving away from massive, generalized frontier models toward secure, on-device systems optimized for Denied, Disconnected, Intermittent, and Low-bandwidth environments.3 These localized models offer unprecedented operational security and speed for frontline units operating in contested spaces.

For the small arms industry and associated infantry modernization programs, this software integration is manifesting rapidly in hardware procurement programs like the Next Generation Squad Weapon and advanced fire control optics such as the XM157.4 Agentic systems are currently being evaluated to automate the early phases of the tactical operational loop, allowing warfighters to focus exclusively on action, lethality, and ethical compliance rather than data processing.7 However, the delegation of decision-making authority to autonomous software agents introduces profound ethical and strategic complexities. The defense industry is currently engaged in intense discourse regarding the boundaries of machine autonomy, the strict definition of human accountability, and the operational risks of deploying fully integrated, artificial intelligence-native systems in highly volatile environments.8 This comprehensive research report provides an exhaustive analysis of these technological transitions, procurement strategies, and doctrinal shifts defining the agentic warfare landscape in 2026.

2. The Strategic Pivot to Agentic Warfare

For the better part of the last decade, the integration of artificial intelligence into defense applications has been dominated by generative models. These systems, while highly capable of synthesizing vast amounts of data, drafting intelligence reports, and generating complex code structures, operate primarily as reactive tools that require constant human prompting and oversight. In 2026, the sentiment among government technology leaders, procurement officers, and defense contractors has firmly shifted from exploring what is theoretically possible with generative systems to effectively operationalizing agentic artificial intelligence.1

Agentic artificial intelligence systems are fundamentally different from their generative predecessors. They are designed not merely to process or analyze information passively but to pursue distinct objectives and take action autonomously within digital and physical environments.11 When given a high-level intent by a human operator, an agentic system can independently break that broad intent down into actionable tasks, coordinate with other specialized digital tools, evaluate varying potential outcomes, and execute a comprehensive plan with minimal to no human intervention during the intermediate steps.7 This transition from data generation to workflow execution is redefining how the United States military approaches everything from deep-tier supply chain logistics to frontline infantry squad engagements.
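The intent-decomposition pattern described above can be sketched in a few lines. This is an illustrative toy only: the function names, the fixed task list, and the approval gate are hypothetical stand-ins, not any real military system's API.

```python
# Illustrative sketch of the agentic pattern: a high-level intent is broken
# into tasks, each dispatched to a specialized tool, and the assembled plan
# is returned for human authorization. All names here are hypothetical.

def decompose(intent: str) -> list[str]:
    # A real system would use a planning model; this stub returns fixed steps.
    return [f"gather data for: {intent}",
            f"evaluate options for: {intent}",
            f"draft plan for: {intent}"]

def execute_task(task: str) -> str:
    # Stand-in for a call to a specialized sub-tool (search, analysis, drafting).
    return f"result({task})"

def run_agent(intent: str) -> dict:
    results = [execute_task(t) for t in decompose(intent)]
    # The agent stops short of acting: the plan awaits human approval.
    return {"intent": intent, "steps": results,
            "status": "awaiting_authorization"}

plan = run_agent("establish observation post")
print(plan["status"])  # the human operator remains the final authority
```

The key structural point is the last field: the agent executes intermediate steps autonomously but terminates in a state that requires explicit human authorization.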

The operational reality of modern conflict necessitates this shift. Warfighters and intelligence analysts are currently subjected to immense cognitive overload, constantly bombarded by data streams from overhead drones, ground sensors, biometric wearables, and digital communication networks. Generative systems attempted to alleviate this by summarizing the data, but summarizing data still requires the human to formulate a decision and manually execute the subsequent steps across multiple disparate software platforms. Agentic systems, functioning as autonomous digital workers, bridge this gap by taking the summarized data and independently initiating the required software protocols to address the situation, presenting the human operator with a nearly finalized action plan ready for execution authorization.7 This capability is rapidly transforming from a theoretical concept discussed in academic white papers into a deployable asset utilized by the Department of Defense.

Public and institutional interest in agentic capabilities has surged dramatically. Industry reports indicate that interest in agentic artificial intelligence rose by 6,100 percent between October 2024 and October 2025, driven by the realization that autonomous execution holds vastly more commercial and military value than simple text generation.13 Furthermore, demand for software that can autonomously achieve complex tasks by designing and implementing processes, and then fine-tuning the results without continuous human prompting, is forecast to rise from USD 4 billion in the previous year to more than USD 100 billion by the end of the decade.13 The Department of Defense, recognizing the strategic imperative of mastering this technology before peer adversaries, has moved to capitalize on this trend early, restructuring its entire approach to software acquisition and battlefield deployment.

3. The Fiscal Year 2026 Defense Budget Breakdown and Implications

The strategic pivot toward agentic execution is heavily supported by unprecedented financial allocations, moving artificial intelligence out of the realm of experimental research and development and into the core procurement budget. The fiscal year 2026 defense budget represents a historic milestone for the military-industrial complex, as the Department of Defense has carved out a dedicated budget line for autonomy and artificial intelligence for the first time.1 According to analysis published by RNG Strategy Consulting, the allocation of USD 13.4 billion specifically to these technologies is a definitive signal to the defense industrial base regarding future procurement priorities.1

This dedicated funding is distributed across a clear doctrinal hierarchy, focusing heavily on unmanned platforms and the complex software integration required to make them operate autonomously in contested environments. A detailed breakdown of this investment reveals strategic priorities aimed at dominating the unmanned battlespace across multiple physical domains. The data indicates that the Department of Defense is not merely investing in abstract software algorithms but is heavily focused on the physical materialization of agentic artificial intelligence within specific vehicle and weapon platforms.

Capability Domain                  | FY 2026 Budget Allocation (Billions USD) | Strategic Focus Area
Unmanned Aerial Vehicles           | 9.400 | Autonomous flight, drone swarm coordination, counter-UAS systems.
Maritime Autonomous Systems        | 1.700 | Surface vessel navigation, autonomous fleet integration, port security.
Cross-Domain Software Integration  | 1.200 | Interoperability layers, Joint All-Domain Command and Control (JADC2).
Underwater Capabilities            | 0.734 | Submersible command interfaces, anti-submarine autonomous tracking.
Exclusive AI Technology            | 0.200 | Foundational agentic research, algorithmic efficiency, neuromorphic computing.

The budget distribution reveals a strong preference for aerial autonomy integration, which receives more than triple the funding of all other physical domains combined.1 The allocation of USD 9.4 billion to unmanned and remotely operated aerial vehicles underscores the military’s reliance on drones for both intelligence gathering and kinetic strikes.1 However, the USD 1.2 billion dedicated to cross-domain software integration is arguably the most critical component for the small arms industry.1 This funding is intended to build the digital infrastructure that allows disparate systems, such as an autonomous aerial drone and a squad leader’s rifle optic, to communicate and share targeting data seamlessly without human routing.
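The proportions in the preceding paragraph can be checked directly from the table. Note that the listed line items sum to roughly USD 13.2 billion against the 13.4 billion headline figure; the small remainder presumably sits in allocations not broken out in the table.

```python
# Arithmetic over the budget table above (figures in billions USD).
allocations = {
    "aerial": 9.400,
    "maritime": 1.700,
    "cross_domain_software": 1.200,
    "underwater": 0.734,
    "exclusive_ai": 0.200,
}

total = sum(allocations.values())
# "Other physical domains" here is read as maritime + underwater, i.e.
# excluding the software-integration and foundational-research lines.
physical_other = allocations["maritime"] + allocations["underwater"]
aerial_ratio = allocations["aerial"] / physical_other

print(f"table total: {total:.3f}B")                    # 13.234
print(f"aerial vs. other physical domains: {aerial_ratio:.2f}x")  # ~3.86x
```

Read this way, aerial autonomy does receive more than triple the combined funding of the other physical domains.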

The sheer magnitude of this funding has a direct cascading effect on the tactical equipment sectors. As major platforms like aircraft and maritime vessels become highly autonomous, the infantry units operating alongside them require equivalent technological upgrades to interface with these systems. A soldier utilizing conventional optical sights and analog radios cannot effectively coordinate with an agentic drone swarm moving at machine speed. Therefore, the budget necessitates a corresponding revolution in soldier-borne electronics, pushing the industry to develop smart fire control systems, localized communication nodes, and on-device processing capabilities that can integrate the individual rifleman into the broader autonomous network.

Furthermore, the scale of global defense spending adds durability to this modernization cycle. Global defense spending surged to USD 2.7 trillion in 2025 and is projected to surpass USD 3.6 trillion by 2030, driven by structural geopolitical priorities and the need for technological sovereignty.14 Within this expanding market, the center of gravity is decisively shifting from heavy hardware to advanced software. AI-enabled systems, unmanned platforms, and digital command networks are moving from pilot programs into widespread deployment, reshaping the economic fundamentals of defense contractors and demanding a rapid evolution from companies traditionally focused solely on metallurgy and ballistics.15

4. The Department of War AI-First Agenda

To effectively operationalize the massive capital influx provided by the 2026 budget, the United States Department of War initiated a comprehensive restructuring of its technology acquisition, data management, and deployment frameworks early in the year. On January 9, 2026, the Department issued three highly coordinated memoranda, which were followed shortly by a policy address from Secretary Pete Hegseth on January 12.2 Together, these actions established a unified, top-down “AI-first” agenda intended to move the military bureaucracy at wartime speed.2

This agenda represents far more than a standard set of procurement guidelines. It is a fundamental reorganization of how the military accesses data, how it recruits technical talent, and how it deploys complex software architectures across the joint force. According to legal and policy analysis provided by Holland & Knight, the central thesis of the new strategy is to aggressively leverage asymmetric American advantages in advanced computing power, deep capital markets, and decades of diverse operational experience to drive rapid experimentation with leading artificial intelligence models.2 This approach actively embraces a Silicon Valley-inspired “test, fail, adjust” culture, aiming to field iterative improvements rapidly rather than waiting for perfect, decades-long development cycles.16

The three memoranda target specific systemic bottlenecks that have historically hindered software adoption within the military. The first document, the “Artificial Intelligence Strategy for the Department of War” memorandum, directs the entire department to accelerate America’s military dominance in this sector by centering efforts on aggressive data-access mandates, expanded computing infrastructure, and accelerated hiring practices for specialized talent.2 The third document, the “Transforming the Defense Innovation Ecosystem to Accelerate Warfighting Advantage” memorandum, streamlines the bureaucratic hierarchy. It designates the Under Secretary of War for Research and Engineering as the single Chief Technology Officer, creates a dedicated action group, and elevates organizations like the Defense Innovation Unit as core components within a unified ecosystem.2

However, the second memorandum is perhaps the most consequential for the deployment of agentic systems. Titled “Transforming Advana to Accelerate Artificial Intelligence and Enhance Auditability,” this directive mandates the comprehensive restructuring of the existing Advana data system into a new entity known as the War Data Platform.2 Agentic artificial intelligence cannot function reliably without structured, accessible, and highly accurate data. The War Data Platform is tasked with expanding the core data integration layer to provide secure, standardized data access across the entire department, specifically tailored to support agentic applications.2

This restructuring ensures that when an autonomous agent is deployed at the tactical edge, whether on a drone or integrated into a rifle’s fire control system, it pulls targeting parameters, threat profiles, and environmental data from a unified, verified stream rather than fragmented, siloed databases maintained by different service branches.2 The Chief Digital and AI Office has been explicitly directed to ensure that these foundational enablers are available across the department in real time, creating a robust digital nervous system necessary for autonomous operations.2

5. The Seven Pace-Setting Projects

The operational core of the AI Strategy Memo is the immediate implementation of seven “Pace-Setting Projects,” which are designed to force rapid technological integration across warfighting, intelligence, and enterprise missions.2 Each of these projects operates under strict parameters, guided by a single accountable leader, aggressive development timelines, and a requirement for detailed monthly progress reporting directly to the Deputy Secretary of War and the Chief Technology Officer.2 These projects serve as the primary mechanisms through which the Department of War translates its strategic vision into tangible capabilities on the battlefield.

The seven projects are divided into three distinct strategic categories, reflecting the comprehensive nature of the modernization effort.

Mission Category | Project Name      | Strategic Objective and Operational Scope
Warfighting      | Swarm Forge       | A competitive mechanism pairing elite warfighting units with technology innovators for iterative discovery, testing, and scaling of new combat tactics using AI capabilities.
Warfighting      | Agent Network     | Dedicated development of AI agents for battle management and decision support, covering the entire operational cycle from campaign planning through kill chain execution.
Warfighting      | Ender's Foundry   | Acceleration of AI-enabled simulation capabilities and tighter feedback loops to outpace adversaries in tactical planning and wargaming scenarios.
Intelligence     | Open Arsenal      | Compression of the technical intelligence-to-capability development pipeline, aiming to turn raw intelligence into deployable weapon algorithms in hours rather than years.
Intelligence     | Project Grant     | Utilization of AI to transform static deterrence postures into dynamic, interpretable pressure models informed by real-time strategic analysis.
Enterprise       | GenAI.mil         | Departmentwide deployment of frontier generative models, providing millions of civilian and military personnel access to advanced capabilities at multiple classification levels.
Enterprise       | Enterprise Agents | Development of a comprehensive playbook for the rapid and secure design and deployment of AI agents intended to transform administrative and logistical workflows.

For the small arms industry and infantry tacticians, the Swarm Forge and Agent Network projects hold the most immediate relevance. Swarm Forge represents a paradigm shift in doctrinal development. By pairing elite warfighting units directly with technology developers, the military is bypassing traditional, slow-moving testing centers.2 Infantry units are actively discovering new ways to utilize advanced small arms, smart optics, and localized drone assets in simulated combat, providing immediate feedback to software engineers who can update the algorithms in real time. This rapid iteration ensures that the tactical software deployed on the battlefield accurately reflects the chaotic realities of close-quarters combat.

The Agent Network project is the most direct implementation of agentic warfare theory. It is specifically defined as a warfighting mission dedicated to the development and experimentation with artificial intelligence agents for battle management.2 The scope of this project is vast, encompassing everything from high-level campaign planning down to the tactical execution of the kill chain.2 The digital enablers developed through this project, including the models and the underlying data infrastructure, are designed to be integrated seamlessly with the hardware systems currently being procured for infantry squads, creating a highly networked and autonomous battlefield environment.2

To support the enterprise and administrative side of these operations, the Pentagon has also aggressively expanded its GenAI.mil platform. This initiative involves integrating advanced commercial generative capabilities, including agentic workflows and cloud-based infrastructure, into the daily operations of military personnel.17 Recent agreements have brought frontier models from major commercial entities, such as xAI’s Grok models and specialized government platforms from OpenAI, into the defense ecosystem.17 These integrations provide users with access to real-time global insights, facilitating faster intelligence gathering and administrative processing, which ultimately supports the logistical demands of the frontline warfighter.17

6. Operationalizing at the Tactical Edge: Edgerunner AI and WarClaw

While the Department of War focuses on building the macro-level data architecture through the War Data Platform and establishing strategic frameworks through the Agent Network, private industry is rapidly developing the specific, tactical software agents that will execute these tasks on the battlefield. A detailed analysis of the defense software market in 2026 reveals a distinct and vital pivot. Military organizations are increasingly moving away from massive, generalized frontier models created by commercial technology giants, recognizing that these large models often exhibit unpredictable behaviors, require massive cloud computing resources, and lack the specialized nuance required for lethal operations.13 Instead, the trend strongly favors smaller, highly customized models tailored for specific military domains that offer absolute user control.13

A prominent and highly successful example of this trend is Edgerunner AI, a veteran-founded startup based in Bellevue, Washington. Edgerunner AI recently emerged from stealth mode following a highly publicized USD 5.5 million seed funding round aimed at building generative artificial intelligence specifically for the edge.19 According to statements from the company's leadership reported by BusinessWire, the primary challenge with modern artificial intelligence lies in its broad applicability: general-purpose models do not address specific, high-stakes operational needs.19 To solve this, Edgerunner focused exclusively on military applications.

In April 2026, Edgerunner AI officially launched “WarClaw,” an advanced agentic artificial intelligence tool built specifically for military deployment.3 WarClaw represents a critical departure from general-purpose corporate assistants. It functions as a hardened agentic orchestration layer based on the popular open-source OpenClaw framework.3 Unlike consumer models trained on the open internet, WarClaw was meticulously trained by former military operators and subject matter experts, utilizing data derived from actual military tasks and validated in realistic combat simulations.13 This focused training ensures that the agent understands tactical terminology, standard operating procedures, and the strict rules of engagement governing military operations.

The core capability of WarClaw is its ability to provide what the company terms “agentic decision dominance” directly at the front lines.3 By functioning as an autonomous orchestration layer, WarClaw effectively manages multiple smaller sub-agents to achieve complex goals. The system is designed to seamlessly search and analyze vast intelligence databases, interpret complex reconnaissance reports, extract relevant tactical information, and autonomously draft operational briefings and mission documents.13 Furthermore, to ensure broad utility for command staff, the software integrates directly with standard productivity tools ubiquitous in military command centers, including Microsoft Word, Excel, PowerPoint, Teams, and Outlook.13
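The orchestration-layer pattern described above, where one coordinating agent routes work to smaller specialized sub-agents, can be sketched as a fixed pipeline. Every function name and routing rule here is an illustrative assumption, not WarClaw's actual architecture or API.

```python
# Hedged sketch of agentic orchestration: a coordinator chains specialized
# sub-agents (search -> summarize -> draft) to produce a briefing. A real
# orchestrator would plan this routing dynamically and run each sub-agent
# locally, on-device. All names here are hypothetical.

def search_agent(query: str) -> str:
    # Stand-in for searching intelligence databases.
    return f"[search results for '{query}']"

def summarize_agent(text: str) -> str:
    # Stand-in for interpreting and condensing reconnaissance material.
    return f"[summary of {text}]"

def draft_agent(summary: str) -> str:
    # Stand-in for autonomous drafting of an operational briefing.
    return f"BRIEFING:\n{summary}"

def orchestrate(tasking: str) -> str:
    raw = search_agent(tasking)
    summary = summarize_agent(raw)
    return draft_agent(summary)

print(orchestrate("enemy armor movement, grid NK1234"))
```

The design point is separation of concerns: each sub-agent stays small and specialized, while the orchestrator owns the end-to-end workflow.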

The efficacy of Edgerunner’s highly specialized approach has garnered rapid institutional validation within the defense apparatus. Edgerunner AI recently secured a firm-fixed-price contract with the United States Space Force Space Systems Command, facilitated via the Chief Digital and Artificial Intelligence Office’s Tradewinds Solutions Marketplace.3 This contract aims to deploy the Edgerunner platform into the Space Force’s highly secure environment to modernize and accelerate the acquisitions process.3 This successful deployment demonstrates that the underlying agentic orchestration technology is robust enough to handle complex, high-stakes aerospace procurement and integration tasks, validating its potential for widespread integration into other critical military domains, including ground combat and small arms coordination.

7. Hardware Constraints and DDIL Environments

The most significant operational advantage of WarClaw, and the primary reason it holds such potential for infantry integration, is its foundational architecture designed to run completely on-device.3 Modern warfighters operate in environments where persistent cloud connectivity is not just unreliable; it is an active liability. Continuous connections to external servers can be jammed by electronic warfare units, intercepted by adversarial signals intelligence, or geolocated to target command posts with artillery fire. Therefore, tactical software must function independently of the broader network.

WarClaw is engineered specifically to excel in Denied, Disconnected, Intermittent, and Low-bandwidth environments.3 By processing all data locally on the user’s hardware, the platform ensures absolute data privacy and operational security.21 It transforms workflows without broadcasting electronic signatures that could compromise a unit’s position.21 The technology specifically addresses the challenge of cognitive overload by moving beyond simple chat functions into autonomous execution, allowing the software to operate on laptops, workstations, and ruggedized servers directly at the forward edge of the battle area.21

To achieve this high level of localized capability, Edgerunner utilizes state-of-the-art Small Language Models rather than massive neural networks.22 These models are optimized to work collaboratively, creating a localized swarm intelligence that tackles distinct tasks efficiently.19 This localized, multi-agent approach delivers near-zero latency, as data does not need to travel to a remote server and back.19 Crucially, it also dramatically reduces power consumption, which is a paramount concern when designing electronic systems intended to be carried by dismounted infantry where battery weight is strictly limited.19
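A back-of-envelope calculation shows why model size and quantization dominate the on-device hardware budget: the weights alone set the memory floor. The parameter count and precisions below are generic illustrative values, not Edgerunner's actual model specifications.

```python
# Rough memory footprint of language-model weights at various precisions.
# Weights are only part of the story -- activations and the KV cache add
# more on top -- but they dominate the minimum RAM/VRAM requirement.

def weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / (1024 ** 3)

# A hypothetical 7B-parameter small language model at common precisions:
for bits in (16, 8, 4):
    print(f"7B params @ {bits}-bit: {weight_memory_gib(7, bits):.1f} GiB")
# At 16-bit, ~13 GiB of weights already crowds a 16GB VRAM budget, which is
# why aggressive quantization matters for edge deployment.
```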

However, deploying agentic artificial intelligence locally still requires robust tactical hardware, highlighting a current constraint in the technology’s evolution. The initial public beta for military users specified minimum hardware requirements that underscore the intense computational demands of modern agentic software, even when optimized.23

Hardware Platform | Minimum Processor Requirement | Minimum Memory Requirement | Minimum Graphics Requirement
Windows Devices   | AMD Ryzen AI Max              | 32GB Total System RAM      | NVIDIA or AMD discrete GPU with 16GB VRAM
Apple Devices     | Apple M-series Processors     | 32GB Total System RAM      | Integrated unified memory architecture

These requirements indicate that while the models are considered “small” compared to global frontier models, they still necessitate high-end components with substantial Video Random Access Memory to process the agentic workflows smoothly.23 Current iterations require significant local compute power, presenting thermal management and form-factor challenges for hardware engineers designing ruggedized infantry gear. Nevertheless, the technological trajectory points firmly toward highly optimized models functioning on increasingly smaller, lower-power devices. Edgerunner has explicitly stated that future versions of their platform will function on significantly smaller devices with much less required memory, paving the way for eventual integration directly into individual soldier systems, helmet-mounted displays, and advanced optical sights.23

8. Infantry Lethality and Small Arms Integration

The convergence of sophisticated agentic artificial intelligence software and increasingly capable tactical hardware fundamentally alters the operational reality of the infantry squad. For the small arms industry, 2026 represents the year where software integration and digital networking became as critical to weapon design as metallurgical engineering and internal ballistics. The traditional view of a rifle as a purely mechanical tool, operating independently of the broader battlefield network, has been permanently superseded; the modern small arm is now viewed as an active data node within a comprehensive digital ecosystem.

The physical foundation for this tactical artificial intelligence integration is heavily reliant on the United States Army’s deployment of the Next Generation Squad Weapon program.6 This program, designed to replace the legacy M4 carbine and M249 squad automatic weapon, centers on two primary platforms: the XM7 rifle and the XM250 automatic rifle.6 These weapons utilize a novel 6.8mm projectile designed to defeat modern body armor at extended ranges. However, while the ballistic improvements are significant, the true technological leap of the Next Generation Squad Weapon program lies not in the chamber, but in the advanced electronics mounted above it.

The weapons serve as the physical chassis for highly sophisticated optical systems that bridge the gap between the individual rifleman and the broader digital network. As agentic software like WarClaw becomes capable of running on smaller hardware, the integration of these agents directly into the weapon’s electronic suite becomes the obvious next step in infantry modernization. This integration allows the weapon itself to participate actively in threat assessment, target prioritization, and communication, transforming the dismounted soldier from an isolated combatant into a fully integrated node within the artificial intelligence-driven battlespace.

9. The XM157 Fire Control System and Smart Optics

The critical component enabling the digital transformation of small arms is the advanced fire control mechanism. The Department of Defense has invested heavily in this area, recognizing that superior ballistics are useless without superior targeting capabilities. A cornerstone of this effort is the contract awarded to Vortex Optics, a landmark 10-year, firm-fixed-price agreement with a maximum ceiling value of USD 2.7 billion.4 Under this contract, Vortex Optics is tasked with providing up to 250,000 XM157 Next Generation Squad Weapons Fire Control systems to the United States Army.4

The XM157 is not merely a telescopic sight; it is a comprehensive, integrated ballistic computer. The system features variable magnification optics, an integrated precision laser rangefinder, a suite of atmospheric sensors to measure temperature and pressure, a digital compass, and a digital display overlay that projects critical information directly into the shooter’s field of view.6 When a soldier utilizes the XM157, the system instantly calculates the exact ballistic trajectory for the specific 6.8mm round, accounting for distance, wind, and environmental factors, and displays an adjusted aiming point.24
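The kind of real-time correction a ballistic computer performs can be illustrated with the simplest possible model: flat-fire drop from range and muzzle velocity. Real fire control systems such as the XM157 model drag, wind, and atmospheric conditions far more rigorously; the constants below are rounded illustrative values, not the 6.8mm round's actual ballistics.

```python
# Vacuum flat-fire approximation of bullet drop: the projectile falls under
# gravity for the time of flight t = range / muzzle_velocity, so
# drop = 0.5 * g * t^2. This ignores drag, wind, and spin entirely.

G = 9.81  # gravitational acceleration, m/s^2

def drop_meters(range_m: float, muzzle_velocity_ms: float) -> float:
    t = range_m / muzzle_velocity_ms   # time of flight, seconds
    return 0.5 * G * t * t             # vertical drop, meters

# Holdover at 500 m for an assumed ~900 m/s muzzle velocity:
print(f"{drop_meters(500, 900):.2f} m")  # ~1.51 m of drop, before drag/wind
```

Even this crude estimate shows why a computed aiming point matters: at 500 m the holdover is already on the order of a person's height, and the real drag-affected drop is larger still.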

When combined with agentic artificial intelligence orchestration layers, such as those being developed through the Agent Network or localized on-device agents like WarClaw, systems like the XM157 undergo a profound transformation. They transition from being passive calculating tools into active threat assessment nodes.6 Market intelligence and industry data highlight that smart fire control technology is currently being utilized to upgrade conventional weapons into sophisticated anti-drone defense systems.25

By employing artificial intelligence-enabled optics and integrating acoustic echolocation neural networks—technology originally developed for autonomous small drone navigation in low-visibility environments—infantry units can gain unprecedented situational awareness.25 An agentic system integrated with the XM157 could autonomously scan the environment, track the erratic flight paths of attritable multirotor strike drones, prioritize targets based on their immediate threat level to the squad, and provide real-time firing solutions to the operator before the human eye could even register the threat.25 This level of integration represents the ultimate goal of the Department of War’s modernization efforts at the tactical edge.

10. Automating the Tactical OODA Loop

The primary strategic objective of integrating agentic artificial intelligence directly at the squad level, and the underlying rationale for the billions invested in systems like the XM157, is the aggressive compression of the tactical decision-making cycle. In military doctrine, this cycle is widely known as the OODA Loop, an acronym representing the sequential phases of Observe, Orient, Decide, and Act.7 In highly contested combat environments, the combatant who can cycle through this loop faster than their adversary generally achieves victory.

[Figure: John Boyd's OODA Loop concept — the Observe, Orient, Decide, Act cycle]

According to analyses discussing the impact of artificial intelligence on infantry units, traditional intelligence, surveillance, and reconnaissance systems serve primarily to augment the “Observe” phase.7 They feed vast amounts of raw data, imagery, and sensor readings to the warfighter. The introduction of generative artificial intelligence assisted the “Orient” phase by rapidly summarizing that raw data into a cohesive, understandable picture of the battlefield. However, agentic artificial intelligence is fundamentally designed to advance further and assume significant control over the “Decide” phase.7

By functioning as autonomous digital workers, agentic systems can continuously analyze the incoming sensor feed from smart optics and overhead drones. They map this data against the squad leader’s predefined strategic intent, evaluate the environmental variables, generate highly optimized targeting options, and present a nearly finalized decision to the human operator.7 This paradigm, increasingly referred to within the industry as the Agentic OODA Loop, radically compresses the timeline from the moment a sensor detects a threat to the moment a shooter executes a response.7
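The division of labor in the Agentic OODA Loop described above can be made concrete: the agent automates Observe and Orient, proposes a Decide step, and Act remains gated on explicit human approval. All structures below are hypothetical illustrations, not any fielded system's design.

```python
# Sketch of the Agentic OODA Loop's division of labor. The agent handles
# observation, orientation, and a proposed decision; execution is gated
# on a human operator. All names are illustrative assumptions.

def observe(sensors: list[str]) -> list[str]:
    return [f"reading:{s}" for s in sensors]             # raw feed ingestion

def orient(readings: list[str], intent: str) -> str:
    return f"picture({intent}, {len(readings)} inputs)"  # fused battle picture

def decide(picture: str) -> str:
    return f"proposed_action({picture})"                 # agent recommendation

def act(proposal: str, human_approved: bool) -> str:
    # The loop never closes autonomously: a human gates every execution.
    return proposal if human_approved else "held: awaiting operator"

readings = observe(["optic", "uav", "acoustic"])
proposal = decide(orient(readings, "defend sector"))
print(act(proposal, human_approved=False))  # "held: awaiting operator"
```

The compression comes from the first three stages running at machine speed; the final gate is where human judgment and ethical compliance are preserved.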

Agentic OODA Loop diagram: traditional versus AI-assisted Observe, Orient, Decide, Act cycles, highlighting tactical-edge improvements.
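The Agentic OODA pattern described above can be sketched in a few lines of code. This is a minimal illustration, not any fielded system’s logic; every function, field, and threshold below is a hypothetical placeholder, and the Act phase is deliberately left to the human operator.

```python
from dataclasses import dataclass

@dataclass
class FiringOption:
    target_id: str
    threat_score: float
    solution: str            # e.g. a range summary surfaced in the optic

def observe(sensor_feed):
    """Observe: keep only confident raw detections from optics and drones."""
    return [d for d in sensor_feed if d["confidence"] > 0.6]

def orient(detections, commander_intent):
    """Orient: filter detections against the squad leader's predefined intent."""
    return [d for d in detections if d["type"] in commander_intent["valid_target_types"]]

def decide(oriented):
    """Decide: rank by threat and emit near-final options for the human."""
    ranked = sorted(oriented, key=lambda d: d["threat"], reverse=True)
    return [FiringOption(d["id"], d["threat"], f"range={d['range_m']}m") for d in ranked]

def agentic_ooda_cycle(sensor_feed, commander_intent, top_n=3):
    """One machine-speed pass; the Act phase remains with the human operator."""
    return decide(orient(observe(sensor_feed), commander_intent))[:top_n]
```

A human operator would review the returned options and perform the Act step themselves, which keeps the final lethal decision outside the software even as observation and orientation run at machine speed.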

In modern combat scenarios, where engagements with autonomous enemy drone swarms or rapid-maneuver mechanized infantry are measured in fractions of a second, the ability to offload the heavy cognitive processing of observation and orientation to localized agents like WarClaw provides a decisive, life-saving advantage. The human operator is freed from the burden of calculation and analysis, allowing them to focus entirely on the physical execution of the action and the critical assessment of ethical compliance.

Furthermore, the integration of agentic artificial intelligence into small arms facilitates seamless, machine-speed communication across the broader battle management network. For example, if an individual rifleman’s optic identifies a specific, high-value thermal signature, the localized artificial intelligence agent can autonomously log the exact geographic coordinates, cross-reference the signature with known enemy vehicle profiles via a secure connection to the War Data Platform, and instantaneously disseminate precise targeting data to heavy anti-armor assets positioned elsewhere in the sector. This entire process can be completed autonomously before the rifleman even pulls the trigger, ensuring a highly coordinated, overwhelming response to emerging threats.
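The sensor-to-shooter handoff just described can be sketched as a small pipeline: match the signature, build a targeting packet, and push it onto the network. The War Data Platform’s actual interfaces are not public, so the profile table, field names, and dispatch callback below are all hypothetical placeholders.

```python
# All names here are illustrative; the real platform interfaces are not public.
KNOWN_PROFILES = {
    "T-90M": {"class": "main battle tank", "recommended_asset": "anti-armor"},
}

def handoff(detection, dispatch):
    """Cross-reference a thermal signature and disseminate targeting data."""
    profile = KNOWN_PROFILES.get(detection["signature"])
    if profile is None:
        return None  # unknown signature: log locally, do not disseminate
    packet = {
        "coords": detection["coords"],          # exact geographic fix
        "class": profile["class"],              # matched vehicle profile
        "asset": profile["recommended_asset"],  # which sector asset to cue
    }
    dispatch(packet)  # machine-speed push across the battle management network
    return packet
```

Here `dispatch` stands in for whatever secure transport the network provides; modeling it as a plain callback keeps the sketch self-contained and easy to exercise.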

11. Logistics, Procurement, and Ammunition Supply Chains

The operational efficacy of front-line agentic weapon systems and advanced small arms is entirely dependent on the resilience and efficiency of the complex supply chains that sustain them. A smart rifle without ammunition is simply an expensive club. In 2026, as peer competitors actively map and target global logistics nodes, maintaining continuous operational support requires highly advanced supply chain risk management capabilities.28 Consequently, the defense sector is increasingly relying on agentic artificial intelligence not just for augmenting fire control systems, but for managing the massive procurement networks required for ammunition and replacement parts.

The manufacturing and global distribution of small arms ammunition is a remarkably complex process susceptible to numerous bottlenecks. To support the widespread deployment of the Next Generation Squad Weapon program, the United States Army’s Joint Program Executive Office for Armaments and Ammunition officially broke ground on a massive new 6.8mm ammunition production facility at the Lake City Army Ammunition Plant in Missouri.29 Managing the vast, continuous quantities of raw materials, chemical propellants, brass casings, and specialized tooling required to maintain output at such facilities is a prime, high-value use case for autonomous software agents.

Agentic artificial intelligence has emerged as a transformative force in the broader electronics and defense sector procurement landscape. A significant development in 2026 has been the rise of autonomous agents designed specifically for logistics.30 These agents function far beyond the capabilities of passive analytical dashboards. They actively and continuously monitor supplier risk profiles, review complex legal contracts, and issue Requests for Proposal without requiring human initiation.30 When a logistics-focused agentic system detects a potential disruption in the supply of critical materials necessary for 6.8mm production, it can autonomously evaluate secondary international suppliers, trigger the necessary bureaucratic onboarding processes, and secure alternative delivery contracts with minimal human intervention.30
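The fallback behavior described above, detecting a disruption and selecting an alternate supplier, might reduce to a simple ranking over qualified candidates. This is an illustrative sketch with hypothetical field names, not any vendor’s actual agent logic.

```python
def select_fallback(suppliers, disrupted_id, required_qty):
    """Choose the lowest-risk alternate supplier that can cover the shortfall."""
    candidates = [
        s for s in suppliers
        if s["id"] != disrupted_id and s["capacity"] >= required_qty
    ]
    if not candidates:
        return None  # no viable alternate: escalate to the human procurement team
    # Prefer low risk first, then short lead time as the tie-breaker.
    return min(candidates, key=lambda s: (s["risk_score"], s["lead_time_days"]))
```

In a real agent this selection would only be one step in a longer workflow (onboarding checks, contract review, human sign-off thresholds); returning `None` is the point where the agent hands the problem back to people.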

This automation is critical for mitigating component obsolescence, which industry analysts frequently cite as a silent profit killer and a major threat to military readiness. A sudden shortage of a specific microchip required for the XM157 optic can halt the entire weapon system’s deployment. Agentic systems actively monitor the global electronics market, predicting shortages and autonomously securing stockpiles of critical components before they become obsolete or unavailable.30 By automating these complex administrative tasks, human procurement teams are freed from tedious bureaucratic churn, allowing them to focus entirely on strategic relationship management and high-level negotiation.

12. The European Manufacturing Transition

The intricacies of defense supply chains extend far beyond domestic manufacturing plants in the United States. The shifting geopolitical environment, heavily influenced by prolonged conflicts in Eastern Europe, has forced a massive restructuring of global small arms production and transit networks. Following the full-scale invasion of Ukraine, Central European nations, specifically the Republic of Poland, the Czech Republic, and the Slovak Republic, experienced a fundamental systemic transformation.31

These nations effectively transitioned from acting as passive regulatory buffer zones into highly active, high-velocity military-industrial hubs.31 By early 2026, industry reports analyzing the Central European arms synthesis noted that the small arms and light weapons landscape across this region achieved a state characterized as a “Hyper-Regulated Equilibrium”.31 While traditional, domestic gun violence metrics in these nations remain at historic lows, their strategic role as massive logistical and manufacturing source-transit hubs has matured significantly.31 The volume of weapons, ammunition, and tactical components flowing through these specific corridors is immense.

Managing this level of industrial integration and high-velocity transit requires tracking capabilities that exceed human capacity. Agentic artificial intelligence systems deployed by allied defense logistics agencies are essential for integrating with local European digital networks to monitor the movement of small arms and munitions continuously.11 These autonomous agents ensure strict compliance with international export controls, monitor shipping manifests against global intelligence databases, and identify potential illicit diversion pathways in real-time.11 The ability to autonomously track millions of serialized parts, electronic optical components, and bulk ammunition shipments across international borders represents a critical application of enterprise-level agentic capabilities in maintaining allied military readiness and preventing arms proliferation.

13. Ethical Implications and the Taxonomy of Autonomy

As agentic artificial intelligence systems proliferate rapidly from deep-tier supply chain management to squad-level fire control, the ethical implications of autonomous warfare have rightfully come to dominate industry, academic, and geopolitical discourse. The integration of these technologies forces a confrontation with profound moral questions. When machine intelligence begins making, or significantly accelerating, critical decisions regarding lethal force, the stakes transition immediately from matters of operational efficiency to matters of existential risk and human rights.32

A primary and persistent concern within the defense policy community is the dangerous ambiguity surrounding the terminology itself. Currently, the term “agentic AI” functions as a broad, loosely defined umbrella encompassing everything from helpful administrative chatbots managing schedules to fully combat-ready, autonomous drone swarms.8 Analysts warn that this lack of precise definition risks severely undermining United States governance frameworks.8 If policymakers and procurement officers apply the exact same terminology to a benign logistics tool and a lethal targeting system, military organizations risk deploying software with the authority to initiate combat operations before the system truly comprehends the contextual risks involved.8

The core danger explicitly identified by policy experts at institutions such as CSIS is not that these artificial intelligence systems lack raw intelligence, but rather that they completely lack human judgment.8 A tactical agent operating a smart fire control system on a next-generation rifle might possess the computational intelligence to execute a complex targeting solution flawlessly. However, that same system may fail entirely to recognize that a sudden, nuanced shift in the local civilian situation, such as a subtle change in the behavior of bystanders, makes executing that perfectly calculated engagement a catastrophic strategic error.8

To mitigate these risks, experts are calling urgently for the establishment of a rigorous, relational, capability-based taxonomy.8 This taxonomy would move beyond technical specifications and specify exactly where an artificial intelligence agent sits within a specific operational workflow, what exact authorities it exercises, and most importantly, how human accountability is distributed when system failures occur.8

The rapid pace of technological development fundamentally disrupts traditional military understandings of command and control. Current United States policy, explicitly outlined in Department of War Directive 3000.09, mandates that all autonomous weapon systems must operate under clear human authority and within defined legal and ethical bounds.9 The current ethical discourse focuses heavily on categorizing the spectrum of human involvement: whether a human operator is “in the loop” (explicit authorization is required for every action), “on the loop” (the agent executes autonomously while the human monitors and can intervene), or entirely “out of the loop”.9
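The three human-involvement categories can be made concrete as a small authorization gate. This is a didactic sketch of the taxonomy only, not an implementation of Directive 3000.09; the enum and function names are invented for illustration.

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "in"       # explicit human authorization required per action
    ON_THE_LOOP = "on"       # agent executes autonomously; human monitors and can veto
    OUT_OF_THE_LOOP = "out"  # no human involvement in the decision cycle

def may_execute(role: HumanRole, human_approved: bool, human_vetoed: bool) -> bool:
    """Gate a single engagement decision according to the operator's position."""
    if role is HumanRole.IN_THE_LOOP:
        return human_approved      # nothing happens without an explicit yes
    if role is HumanRole.ON_THE_LOOP:
        return not human_vetoed    # default-execute unless the monitor intervenes
    return True                    # out of the loop: no human gate at all
```

Note how the on-the-loop case inverts the default: execution proceeds unless a human actively intervenes, which is exactly where questions of accountability become most acute.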

The transition toward a “human on the loop” model creates significant friction regarding ultimate legal accountability.33 If a squad leader utilizes a system like WarClaw to designate general target areas, and the system autonomously coordinates a localized strike without explicit, final human authorization for that specific target, defining the accountable leader becomes legally ambiguous. Generally, accountable parties are increasingly identified as those senior commanders who sign off on the initial use of the agentic artificial intelligence and its overarching automated governance protocols, shifting the burden of responsibility from the tactical shooter to the strategic planner.33 Furthermore, the increasing automation of battlefield decisions raises profound fears of algorithmic warfare evolving into fully automated agentic warfare, where lethal decision loops run entirely without human intervention, leading to unpredictable escalations.32

14. Cyber Vulnerabilities and System Hardening

Beyond the kinetic implications of autonomous lethality, the integration of agentic artificial intelligence introduces severe, novel vulnerabilities within the cyber domain. The fundamental characteristic that makes agentic systems so powerful, their ability to carry out complex tasks with minimal oversight, is also heavily utilized by sophisticated adversaries to automate massive cyber attacks and rapidly learn from failed network intrusions.34 Artificial intelligence is functioning as a powerful force multiplier for the modern adversary.34

The aggressive integration of agentic capabilities into defense contractor workflows, often driven by the pursuit of wartime speed and efficiency, is occurring at a pace that frequently outstrips the organization’s ability to fully understand the intricate components or the downstream systemic risks.34 This is a recognized and critical vulnerability. Without robust, multi-layered governance protocols and strict encryption standards for the Application Programming Interfaces utilized by these autonomous agents, the automation that is supposed to assist the military can easily be co-opted.33

The Pentagon faces a difficult balancing act. Officials must continuously balance the strong strategic desire for rapid innovation with the absolute necessity of maintaining strict control over how automated software interacts with sensitive tactical networks and physical hardware.34 If an adversary successfully breaches the communication network utilized by a localized agent like WarClaw, they could potentially manipulate the data feeding into the XM157 fire control system, pushing false targeting coordinates to frontline infantry. Therefore, ensuring the absolute cybersecurity of these digital workers is as critical to mission success as the physical armor worn by the soldiers.

15. Strategic Outlook and Recommendations

Looking ahead from the vantage point of 2026, the defense industrial base and the small arms sector must prepare for a fundamentally altered procurement and operational landscape. The debate within military circles is no longer centered on whether artificial intelligence will be integrated into the force structure, but rather how deeply and securely it will be embedded into the foundational architecture of all defense platforms.

At major international gatherings, such as the 2026 World Defense Show, military officials and defense contractors highlighted an impending strategic choice facing all global armed forces. Organizations must decide whether to procure “AI-enhanced” systems or commit to developing “AI-native” systems.10 Artificial intelligence-enhanced systems involve integrating modern software into existing, legacy platforms in a relatively limited capacity. This approach is akin to bolting a sophisticated smart optic onto a conventional, mechanically operated rifle.10 It provides a capability boost but is limited by the underlying analog architecture.

Conversely, artificial intelligence-native platforms are built entirely from the ground up with artificial intelligence baked into the entire value chain.10 This involves designing custom silicon chips, specific data architectures, and agentic behavioral models before the physical hardware is even prototyped.10 While AI-native systems require massive initial capital investments and necessitate significant organizational readiness, defense experts widely view them as the ultimate force multiplier.10 The small arms industry must anticipate this definitive shift, moving aggressively toward clean-sheet weapon designs where electronic integration, continuous power delivery, and advanced thermal management for on-board compute modules are prioritized alongside traditional metrics of ballistic performance and mechanical reliability.

To navigate this complex transition successfully, several strategic recommendations emerge for defense contractors, software developers, and military procurement agencies:

First, the industry must prioritize Size, Weight, and Power optimization for all processing hardware intended for the tactical edge. Infantry units, already burdened by heavy protective gear and ammunition, cannot bear the physical weight of power-hungry servers. Engineering solutions must focus relentlessly on developing hyper-efficient Small Language Models and specialized neuromorphic hardware capable of running sophisticated agents locally on minimal battery power.19

Second, the defense sector must rigorously and transparently address issues of trust and system verification. As noted by leading industry researchers, human trust in an artificial intelligence system is the paramount factor determining its operational success. The system must function strictly as a trusted component of the decision-making process, allowing the human operator to make faster decisions at machine speed while retaining human accuracy and judgment.10 Organizations must implement comprehensive context charts and clear workflow definitions, ensuring that commanders and frontline soldiers understand exactly which tasks an agentic system is authorized to handle autonomously and which require manual override.8

Finally, cybersecurity protocols must be addressed at the foundational, architectural level of agentic development, not applied as an afterthought. Companies developing autonomous agents for military deployment must guarantee that the communication pathways utilized by these agents are heavily encrypted and that the core systems are hardened against adversarial spoofing and data poisoning.33 Only by unequivocally securing the integrity of these digital workers can the military confidently deploy them into contested environments. The era of agentic defense has firmly arrived, and the organizations that successfully build secure data infrastructure and seamless, trustworthy human-machine teaming capabilities will secure the decisive competitive advantage in the conflicts of the coming decades.

16. Appendix: Methodology

The exhaustive analysis presented in this research report relies on a rigorous synthesis of diverse defense sector data points, policy memoranda, and industry announcements generated throughout the first quarter of 2026. The methodological approach centered on extracting, categorizing, and correlating qualitative policy directives, quantitative budget allocations, and highly specific technical product specifications related to agentic artificial intelligence and its integration into small arms and tactical networks.

Financial assessments were derived by carefully isolating the fiscal year 2026 Department of Defense budget figures, specifically analyzing the designated USD 13.4 billion dedicated to autonomy and artificial intelligence. This capital was mapped across various operational domains to accurately determine the military’s strategic funding priorities. Comprehensive policy analysis was conducted by reviewing the specific directives outlined in the Department of War’s January 2026 memoranda. This involved tracking the bureaucratic restructuring of internal data systems, such as the evolution of Advana into the War Data Platform, and evaluating the strategic objectives of the seven designated Pace-Setting Projects.

The technical capabilities of private sector software, notably Edgerunner AI’s WarClaw platform, were evaluated based on their stated operational environment constraints. This specifically involved analyzing the engineering requirements for functioning in Denied, Disconnected, Intermittent, and Low-bandwidth settings, and assessing the minimum hardware specifications required for on-device processing. This software assessment was then systematically cross-referenced with ongoing physical hardware procurement programs, such as the Next Generation Squad Weapon program and the specific capabilities of the XM157 Fire Control system, to determine the physical pathways for artificial intelligence integration directly at the squad level. Finally, the broader industry discourse regarding ethical and strategic implications was synthesized by analyzing policy essays, defense industry white papers, and recorded statements from international defense conferences regarding the operational and legal limits of autonomous lethality.




Sources Used

  1. Agentic Systems and AI in Defense Modernization 2026, accessed April 10, 2026, https://rngstrategyconsulting.com/insights/industry/aerospace-defense/ai-in-defense-modernization/
  2. Department of War’s Artificial Intelligence-First Agenda: A New Era …, accessed April 10, 2026, https://www.hklaw.com/en/insights/publications/2026/02/department-of-wars-ai-first-agenda-a-new-era-for-defense-contractors
  3. News – EdgeRunner AI, accessed April 10, 2026, https://www.edgerunnerai.com/news
  4. Fire-control system from Vortex Optics wins 10-year contract with U.S. Army, accessed April 10, 2026, https://militaryembedded.com/radar-ew/sensors/fire-control-system-from-vortex-optics-wins-10-year-contract-with-us-army
  5. Vortex Optics – Military Embedded Systems, accessed April 10, 2026, https://militaryembedded.com/company/vortex-optics
  6. 2026 DoW Directory Update 2.docx – REXOTA Solutions, accessed April 10, 2026, https://rexota.com/wp-content/uploads/2026-DoW-Directory-Rev-2.pdf
  7. Full Archive Search – ISK Online, accessed April 10, 2026, https://indianstrategicknowledgeonline.com/archive.php
  8. Lost in Definition: How Confusion over Agentic AI Risks Undermining U.S. Governance Frameworks – CSIS, accessed April 10, 2026, https://www.csis.org/analysis/lost-definition-how-confusion-over-agentic-ai-risks-governance
  9. Controlling Command: Is AI Capturing the Ethics of War?, accessed April 10, 2026, https://inss.ndu.edu/Media/News/Article/4454555/controlling-command-is-ai-capturing-the-ethics-of-war/
  10. Future of military AI in Saudi Arabia: AI-enhanced, or AI-native? – Breaking Defense, accessed April 10, 2026, https://breakingdefense.com/2026/02/future-of-military-ai-in-saudi-arabia-ai-enhanced-or-ai-native/
  11. From intelligence to agency: AI’s new frontier in international security → UNIDIR, accessed April 10, 2026, https://unidir.org/event/from-intelligence-to-agency-ais-new-frontier-in-international-security/
  12. Defense Tech Trends for 2026: Innovation in Action – NSTXL, accessed April 10, 2026, https://nstxl.org/defense-tech-trends/
  13. Startup debuts agentic AI assistant for war – Defense One, accessed April 10, 2026, https://www.defenseone.com/technology/2026/04/startup-takes-different-approach-ai-assistants/412545/
  14. Rising Defense Spending: Fueling A Deep Tech Boom In 2026 – Forbes, accessed April 10, 2026, https://www.forbes.com/councils/forbesfinancecouncil/2026/03/03/rising-defense-spending-fueling-a-deep-tech-boom-in-2026/
  15. Defense Tech Enters 2026 with Strengthening Fundamentals – Global X ETFs, accessed April 10, 2026, https://www.globalxetfs.com/articles/defense-tech-enters-2026-with-strengthening-fundamentals
  16. Pentagon Releases Artificial Intelligence Strategy | Inside Government Contracts, accessed April 10, 2026, https://www.insidegovernmentcontracts.com/2026/02/pentagon-releases-artificial-intelligence-strategy/
  17. Department Of War Integrates OpenAI ChatGPT Into GenAI.mil Platform For 3 Million Personnel – The Defense Watch, accessed April 10, 2026, https://thedefensewatch.com/cyber-space-defense/pentagon-genai-mil-adds-chatgpt/
  18. Startup debuts agentic AI assistant for war – OODAloop, accessed April 10, 2026, https://oodaloop.com/briefs/technology/startup-debuts-agentic-ai-assistant-for-war/
  19. EdgeRunner AI Emerges from Stealth with $5.5M Seed Funding to Build Generative AI for the Edge – Business Wire, accessed April 10, 2026, https://www.businesswire.com/news/home/20240718441467/en/EdgeRunner-AI-Emerges-from-Stealth-with-%245.5M-Seed-Funding-to-Build-Generative-AI-for-the-Edge
  20. EdgeRunner Releases “WarClaw” for Agentic Decision Dominance at the Tactical Edge, accessed April 10, 2026, https://www.businesswire.com/news/home/20260331235388/en/EdgeRunner-Releases-WarClaw-for-Agentic-Decision-Dominance-at-the-Tactical-Edge
  21. EdgeRunner Launches WarClaw Agentic AI & Decision Support – Defense Advancement, accessed April 10, 2026, https://www.defenseadvancement.com/news/edgerunner-launches-warclaw-agentic-ai-decision-support/
  22. EdgeRunner Achieves GPT-5 Level Performance For Key Military Tasks While Running Locally On-Device – Business Wire, accessed April 10, 2026, https://www.businesswire.com/news/home/20251112202469/en/EdgeRunner-Achieves-GPT-5-Level-Performance-For-Key-Military-Tasks-While-Running-Locally-On-Device
  23. EdgeRunner Announces Public Beta for Military Users – Business Wire, accessed April 10, 2026, https://www.businesswire.com/news/home/20250708291724/en/EdgeRunner-Announces-Public-Beta-for-Military-Users
  24. Soldier Systems Daily Soldier Systems Daily, accessed April 10, 2026, https://soldiersystems.net/page/16/
  25. Counter-UAS Market: Business Models – Full List (Agentic AI Market, accessed April 10, 2026, https://newmarketpitch.com/blogs/news/counter-uas-business-model
  26. Resources — o16g – Outcome Engineering, accessed April 10, 2026, https://o16g.com/resources/
  27. GPMG – 7.62mm General Purpose Machine Gun – Defense Advancement, accessed April 10, 2026, https://www.defenseadvancement.com/projects/gpmg-7-62mm-general-purpose-machine-gun/
  28. What Are the Top Defense Technology Priorities for 2026? A Quick Guide – IDGA, accessed April 10, 2026, https://www.idga.org/command-and-control/articles/the-top-defense-technology-priorities-2026-a-quick-guide
  29. Weapons Systems | Technical Briefs, Aerospace & Defense Engineering News, accessed April 10, 2026, https://www.mobilityengineeringtech.com/met/topic/aerospace/weapons-systems
  30. The 2026 Supply Chain: New Rules for Electronics OEMs – DigiKey, accessed April 10, 2026, https://www.digikey.com/en/blog/the-2026-supply-chain-new-rules-for-electronics-oems
  31. The 2026 Central European Arms Synthesis: Phantom Networks, Export Fracture Points and the Post-Conflict Proliferation Horizon – https://debuglies.com, accessed April 10, 2026, https://debuglies.com/2026/03/26/the-2026-central-european-arms-synthesis-phantom-networks-export-fracture-points-and-the-post-conflict-proliferation-horizon/
  32. In the Age of AI, the Fog of War Thickens – Centre for International Governance Innovation, accessed April 10, 2026, https://www.cigionline.org/articles/in-the-age-of-ai-the-fog-of-war-thickens/
  33. The evolving ethics and governance landscape of agentic AI – IBM, accessed April 10, 2026, https://www.ibm.com/think/insights/ethics-governance-agentic-ai
  34. JUST IN: AI Enabling New Cyber Risks, Report Says – National Defense Magazine, accessed April 10, 2026, https://www.nationaldefensemagazine.org/articles/2026/3/11/just-in-ai-enabling-new-cyber-risks-report-says
  35. MIL-DTL Guide to Rugged Interconnects – Defense Advancement, accessed April 10, 2026, https://www.defenseadvancement.com/resources/mil-dtl-rugged-connectors/

The Day Warfare Changed: The March 2026 Kupiansk Drone Swarm Attack

Executive Summary

In late March 2026, the fundamental nature of mechanized maneuver warfare underwent a catastrophic and irreversible shift. During a stalled Russian armored offensive in the Kupiansk sector, the Ukrainian Unmanned Systems Forces (USF) executed the first fully documented, combat-effective “coordinated swarm” attack in modern military history. Confirmed through frontline telemetry and official USF post-action reports released on April 9, 2026, this engagement violently exposed the obsolescence of mid-20th-century combined arms doctrine.1

In an engagement lasting precisely 142 seconds, a decentralized mesh network of 40 autonomous unmanned aerial vehicles (UAVs) identified, prioritized, and systematically eradicated an entire Russian armored platoon, including its command T-90M main battle tank and supporting infantry fighting vehicles (IFVs). The entire terminal phase of this engagement occurred without human operator input. This incident represents the maturation of “Swarm Intelligence” from a theoretical laboratory concept into a lethal, combat-ready reality.4

Traditional short-range air defenses (SHORAD) and electronic warfare (EW) umbrellas, long relied upon to provide an “Iron Ceiling” for advancing armor, were bypassed and rendered mechanically and economically irrelevant.5 The reduction of a $120 million armored column by a drone swarm costing under $150,000 establishes a profound economic asymmetry that breaks existing defense procurement models. This report provides an exhaustive open-source intelligence (OSINT) analysis of the tactical execution, hardware and software architectures, and the global doctrinal implications of the March 2026 Kupiansk strike.
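Worked through numerically, the exchange ratio follows directly from the figures reported above (the swarm cost is an upper bound, so the true ratio is at least this large):

```python
# Reported figures from the engagement summary (USD)
column_cost = 120_000_000   # value of the destroyed armored column
swarm_cost = 150_000        # upper-bound cost of the 40-drone swarm
drone_count = 40

ratio = column_cost / swarm_cost        # 800.0
per_drone = swarm_cost / drone_count    # 3750.0 USD per airframe, at most
print(f"exchange ratio: {ratio:.0f}:1 in the defender's favor")
```

An 800:1 cost exchange, at roughly $3,750 per airframe, is the asymmetry that breaks procurement models built around exquisite, survivable platforms.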

The Strategic and Operational Context: Spring 2026

The Macro-Operational Environment

Entering the spring of 2026, the operational environment in eastern Ukraine was defined by intense, attritional warfare, heavily shaped by the deployment of unmanned systems and loitering munitions. Russian forces, seeking to exploit early spring conditions ahead of the Rasputitsa (mud season), initiated a series of localized mechanized assaults aimed at pushing Ukrainian forces back from the international border and crossing the Oskil River in the Kupiansk direction.7 These operations were intended to create a defensible buffer zone and open operational vectors toward the Slovyansk-Kramatorsk agglomeration.9

Russian elements, notably including the 1st Guards Tank Army and the 47th Tank Division, repeatedly attempted to breach Ukrainian lines using traditional concentrated armored columns.3 These columns were ostensibly protected by organic EW and SHORAD assets, adhering to standard Russian ground forces doctrine that relies on mass and localized fire superiority.

Concurrently, the Armed Forces of Ukraine (AFU) had fundamentally restructured its force posture to accommodate the realities of the modern battlefield. The establishment of the Unmanned Systems Forces (USF) as a dedicated military branch in 2024 marked a pivotal institutional adaptation.11 Under the command of Major General Robert “Magyar” Brovdi, the USF rapidly scaled from tactical ad-hoc units to a highly integrated, strategic force responsible for significant percentages of confirmed enemy attrition.11 Throughout March and April 2026, the USF intensified its mid-range and deep-strike campaigns, systemically degrading Russian logistics hubs, oil infrastructure, and air defense networks.1

Strategic Force Posture | Russian Federation Forces | Ukrainian Armed Forces (AFU)
Primary Effort Area | Oskil River crossing, Kupiansk-Lyman axis.8 | Deep strike interdiction, algorithmic attrition, Kupiansk defense.9
Key Units | 1st Guards Tank Army, 47th Tank Division, VDV Airborne elements, Rubicon Drone Unit.3 | Unmanned Systems Forces (USF), 3rd Assault Brigade, 414th Marine Strike UAV Battalion.13
Tactical Doctrine | Massed armor, linear SHORAD umbrellas, heavy artillery preparation.1 | Tactical dispersion, decentralized mesh networking, autonomous swarm strikes.20

The Evolution of the Threat: From Mass to Swarm

Prior to March 2026, UAV operations heavily relied on “mass” attacks. In a mass attack, dozens of drones (such as FPV quadcopters or fixed-wing loitering munitions) are launched simultaneously to saturate air defenses, but each unit requires an individual human operator maintaining a continuous radio frequency (RF) control link.21 While highly effective at increasing the volume of fire, this hub-and-spoke architecture is vulnerable to broad-spectrum EW jamming and requires significant human capital. If the pilot’s control signal is severed, or if the pilot is incapacitated by counter-battery fire, the drone is rendered inert.

The March engagement near Kupiansk marked the definitive transition to a “true swarm.” Unlike mass attacks, a true swarm is a singular, cohesive entity comprised of multiple individual nodes. It utilizes decentralized mesh networking and edge-processing artificial intelligence to communicate, negotiate, and execute complex tactical behaviors autonomously.22 The USF, supported heavily by Ukraine’s Brave1 defense technology cluster, spent late 2025 and early 2026 integrating autonomous target allocation algorithms into highly mobile, low-cost platforms.24

The convergence of these technologies in the Kupiansk sector culminated in an engagement that permanently altered battlefield calculus. As Russian forces attempted a mechanized push, they encountered a defensive capability that operated outside the parameters of human reaction time and traditional electronic countermeasures.

Anatomy of the March 2026 Kupiansk Engagement

The destruction of the Russian armored column was not a conventional skirmish; it was a highly synchronized algorithmic execution. Telemetry data, visual confirmation, and OSINT analysis indicate that the 142-second engagement was broken down into distinct, machine-speed phases that completely neutralized the attacking force.

Phase I: Detection and Autonomous Target Allocation

The engagement commenced when the Russian tank platoon, advancing along a localized axis toward the Kupiansk-Lyman line, was detected by Ukrainian wide-area surveillance and reconnaissance drones operating at high altitudes. Upon detection and verification of the threat vector, a swarm of 40 UAVs was deployed from dispersed, concealed positions.

Crucially, once the swarm reached the operational grid and acquired visual confirmation of the targets, operators severed the manual control link, handing full tactical authority over to the swarm’s onboard AI. This transition to full autonomy was a tactical necessity designed to bypass the Russian Pole-21 EW systems, which were establishing a localized jamming dome over the advancing column to sever traditional RF control links.

Operating on a decentralized “mesh” network, the 40 drones shared sensor data in real-time.27 When the optical sensors of the lead drone identified the thermal and visual signature of the Russian command T-90M tank, the data was instantaneously broadcast across the entire swarm network. The swarm’s internal algorithm subsequently executed an autonomous target allocation protocol.28

Recognizing the T-90M as a high-value target (HVT) and the primary node of Russian tactical command and control (C2), the network automatically assigned six drones to prosecute the tank. The remaining 34 units simultaneously identified, mapped, and locked onto the supporting BMP infantry fighting vehicles, MT-LB personnel carriers, and logistical supply trucks. This entire triage, prioritization, and allocation process occurred in milliseconds, with no human-in-the-loop (HITL) authorization for the terminal phase.

[Figure: Tactical reconstruction of the Kupiansk drone swarm attack showing relay network, EW jamming, and strike trajectories.]

Phase II: The “Blind Spot” Maneuver

The tactical brilliance of the March engagement lay in the swarm’s ability to dynamically restructure its formation based on the immediate threat environment. Telemetry analysis reveals that the 40-drone cluster executed a coordinated separation tactic, unofficially designated by analysts as the “Blind Spot” maneuver.29 The swarm divided into three highly specialized sub-groups, each serving a distinct function in the algorithmic kill chain:

  1. The Suppression Element (EW/Decoy Group): A subset of the swarm dove rapidly toward the column, emitting localized RF noise and acting as kinetic decoys. Their primary function was to saturate the local Russian radar environment and force the automated targeting systems of the Russian SHORAD into a processing feedback loop, effectively blinding them to the true threat vectors.
  2. The Reconnaissance and Relay Node: A second group hovered at a higher altitude, remaining outside the immediate kinetic engagement envelope of the Russian column. These units acted as airborne routers. Using configurations similar to the domestically produced “Bucha” fixed-wing platform—which can substitute a warhead for extended battery and relay equipment—they maintained the integrity of the mesh network.27 This ensured that even if terminal strike drones were destroyed by kinetic countermeasures, the swarm’s collective intelligence, targeting data, and spatial mapping remained intact.
  3. The “Killer” Group: The largest contingent of the swarm approached the column from the vehicles’ literal and electronic blind spots. Striking from a high-angle, top-down trajectory, these munitions bypassed the heavy frontal glacis and side armor of the T-90M and BMPs. Instead, they targeted the notoriously thin turret roofs and engine decking, maximizing the probability of catastrophic ammunition cook-offs and mobility kills.
Swarm Sub-Group Classification | Estimated Quantity | Altitude Profile | Primary Tactical Objective
Suppression (EW / Decoy) | 4 – 6 | Low / Variable | Radar saturation; localized EW jamming; target distraction.
Reconnaissance / Relay | 2 – 4 | High / Loitering | Maintain mesh network integrity; real-time BDA (Battle Damage Assessment).
Terminal “Killer” (Strike) | 30 – 34 | High-Angle Dive | Kinetic strike execution via autonomous target allocation.

Phase III: Saturation Speed and the 142-Second Kill Chain

The concept of “saturation speed” dictates that a defense system—whether mechanical or biological—can only process and react to a finite number of threats within a given timeframe. The Kupiansk swarm attack weaponized time. From the exact moment the swarm algorithm detected the column to the final munition detonating, precisely 142 seconds elapsed.31

In a conventional combined arms attack, sequential missile launches or artillery barrages give a well-trained tank crew time to deploy smoke screens, activate hard-kill active protection systems (APS), or traverse their turrets to return fire. In this engagement, the Russian crews were overwhelmed by a 360-degree volume of simultaneous, highly coordinated threats. Six drones struck the command T-90M in rapid succession. The initial strikes systematically stripped away the Explosive Reactive Armor (ERA) blocks and triggered any passive defenses, while the subsequent drones exploited the newly exposed base armor. The human operators inside the vehicles were physically, cognitively, and mechanically incapable of assessing the threat, let alone engaging it, before the column was entirely reduced to burning wreckage.

Hardware and Software Architecture of the Swarm

The success of the March 2026 strike was heavily predicated on advancements in both off-the-shelf hardware integration and bespoke, military-grade software developed rapidly under wartime conditions. The synergy between these components represents a masterclass in decentralized military innovation, spearheaded by organizations like the Brave1 defense-tech cluster.25

Platform Agnosticism and Hybrid Airframes

OSINT analysis suggests that the swarm deployed in Kupiansk was not monolithic in its hardware profile. Rather than relying on a single, expensive, and difficult-to-procure platform, the USF utilized a heterogeneous mix of airframes designed to maximize operational flexibility and minimize per-unit costs.

The relay nodes likely utilized small, fixed-wing designs engineered for endurance and extended loiter times. Technologies analogous to the “Bucha” drone, developed by UFORCE, fit this mission profile perfectly. The Bucha operates in coordinated groups using a mesh-network approach and configures specific aircraft as relay nodes to extend communication ranges up to 200 kilometers.27

Conversely, the terminal strike elements were almost certainly highly maneuverable rotary-wing FPV drones, heavily modified for autonomous flight. Companies within the Brave1 ecosystem, such as Vyriy and Wild Hornets, had already pioneered small FPV drones (like the “Molfar” and “Sting” interceptors) capable of swarm functioning and evading Russian jamming.33 These airframes, built largely from commercially available components but heavily modified with domestic flight controllers and optical targeting modules, cost roughly $3,000 each. They carry shaped-charge anti-tank munitions capable of penetrating over 200mm of rolled homogeneous armor (RHA) when striking perpendicularly.

The Nervous System: Wireless Mesh Networking

The core enabler of the swarm is its communication architecture. Traditional military drones operate on a hub-and-spoke model; if the hub (the pilot’s radio or the command center) is jammed by EW, the drone is lost or forced to return to base. The Kupiansk swarm utilized a highly resilient wireless mesh network.

In a mesh configuration, every drone acts as both a client and a router. If one drone’s communication is degraded by localized RF interference, or if a drone is destroyed, data packets seamlessly route through adjacent surviving drones. This system allows the swarm to maintain tactical cohesion over highly contested airspace. The integration of advanced communication data links, potentially leveraging localized edge computing and directional antennas, ensures that the swarm can coordinate attack timings down to the millisecond. This network elasticity is what permitted the “Blind Spot” maneuver to be executed flawlessly; as drones shifted positions and altered altitudes, the network dynamically healed itself, maintaining the continuous flow of targeting telemetry across the battlefield.22
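The self-healing property described above is, at its core, a shortest-path search over a changing connectivity graph. The following minimal sketch (illustrative node names and topology are my own; the actual Ukrainian routing protocols are not public) shows how a route through a mesh survives the loss of an intermediate node, with surviving drones acting as routers:

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search for a multi-hop route through the mesh.
    `links` maps each node to the set of nodes currently in radio range."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in links.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # network is partitioned

# Toy 5-node mesh: strike drones A-C, relay nodes R1-R2 (hypothetical labels).
links = {
    "A": {"B", "R1"}, "B": {"A", "C"}, "C": {"B", "R2"},
    "R1": {"A", "R2"}, "R2": {"R1", "C"},
}
print(find_route(links, "A", "C"))  # → ['A', 'B', 'C']

# Node B is destroyed: remove it and let the mesh "heal".
links.pop("B")
for nbrs in links.values():
    nbrs.discard("B")
print(find_route(links, "A", "C"))  # → ['A', 'R1', 'R2', 'C'], via the relays
```

Real implementations would use distributed protocols (each node knowing only its neighbors) rather than this centralized search, but the rerouting behavior is the same: as long as any path of surviving nodes exists, telemetry keeps flowing.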

The Brain: Edge-Processing AI and Autonomous Algorithms

The most profound and destabilizing aspect of the March engagement for global military planners is the high degree of autonomy achieved by the Ukrainian systems. The drones utilized “edge-processing AI.” This signifies that the massive computational power required for machine vision, target recognition, and dynamic flight path calculation was housed directly on the drone’s onboard microprocessors, rather than relying on a continuous uplink to a remote server or human operator.24

Using advanced Convolutional Neural Networks (CNNs) trained on vast, real-world datasets of Russian armored vehicles, the drones’ optical sensors could instantly differentiate between a high-value T-90M, a standard BMP-2, and a logistical Ural truck. The swarm intelligence algorithms—likely inspired by biological models of flocking and foraging—allowed the drones to negotiate target assignments among themselves. If two drones locked onto the same weak point of a BMP, the algorithm instantly de-conflicted their paths, redirecting one to an alternate target to prevent overkill and optimize munition distribution.28 This edge-processing capability fundamentally breaks the traditional electronic warfare kill chain, which relies almost entirely on severing the link between pilot and machine.
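The de-confliction behavior described above is an instance of the classic multi-agent task-allocation problem. The actual Ukrainian algorithms are undisclosed; the sketch below is a textbook greedy "sequential auction" under assumed cost values, shown only to illustrate how two agents preferring the same task get de-conflicted without a central commander:

```python
def auction_assign(costs):
    """Greedy single-item auction: repeatedly award the globally cheapest
    (agent, task) pairing, so no two agents converge on the same task.
    `costs[i][j]` is agent i's cost to service task j."""
    free_agents = set(range(len(costs)))
    free_tasks = set(range(len(costs[0])))
    assignment = {}
    while free_agents and free_tasks:
        agent, task = min(
            ((a, t) for a in free_agents for t in free_tasks),
            key=lambda pair: costs[pair[0]][pair[1]],
        )
        assignment[agent] = task
        free_agents.discard(agent)
        free_tasks.discard(task)
    return assignment

# Three agents, two tasks (hypothetical costs): agents 0 and 1 both
# prefer task 0, but the auction gives it to the cheaper bidder and
# leaves agent 1 free for reassignment, preventing overkill.
costs = [[1, 5], [2, 9], [4, 3]]
print(auction_assign(costs))  # → {0: 0, 2: 1}
```

Fielded systems would run this as a distributed negotiation over the mesh (each node bidding locally) and would use richer cost functions, but the greedy-auction structure captures the "negotiate, de-conflict, redirect" loop the article describes.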

The Collapse of Traditional Defense: The “Iron Ceiling” Problem

For roughly a century, the tank has dominated terrestrial warfare, acting as the apex predator of the battlefield. Its survival, however, has always been contingent on a combined arms umbrella—an “Iron Ceiling” provided by infantry screens and mobile air defense systems. The March 2026 swarm attack definitively shredded this doctrine, exposing three critical vulnerabilities in Russian, and by extension global, mechanized defense architectures.

1. Mechanical Incapability of SHORAD

Russian short-range air defense systems, such as the Pantsir-S1 and the Tor-M2, represent some of the most capable kinetic defense platforms globally. However, their design philosophy is rooted in Cold War operational requirements, optimized to track and destroy linear, high-velocity threats like cruise missiles, or singular, high-radar-cross-section (RCS) targets like fighter jets and attack helicopters.

A Tor-M2 system can simultaneously track dozens of targets but has a severely limited number of engagement channels (typically 4 to 8 missiles guided simultaneously). When confronted with 40 independent, highly maneuverable, bird-sized objects converging simultaneously from multiple vectors, the radar and fire control systems undergo massive task saturation. They are mechanically and computationally incapable of slewing their turrets, acquiring radar locks, and launching interceptors fast enough to stem the tide. Even if the SHORAD system operates flawlessly within its design parameters, the arithmetic is unforgiving: successfully intercepting 8 drones leaves 32 free to prosecute the column.
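That unforgiving arithmetic can be reduced to a one-line channel-limited defense model. The numbers below are the article's (8 channels, 40 drones); the salvo counts are my own illustrative assumptions:

```python
def leakers(inbound: int, channels: int, salvos: int = 1) -> int:
    """Targets that get through a defense that can guide at most
    `channels` interceptors per salvo, with every intercept succeeding."""
    return max(inbound - channels * salvos, 0)

# The article's worst case: one full simultaneous salvo from a
# Tor-class system with 8 guidance channels against 40 drones.
print(leakers(40, 8))             # → 32 drones leak through
# Even three perfect back-to-back salvos (optimistic inside a
# 142-second engagement) still leak a lethal residue:
print(leakers(40, 8, salvos=3))   # → 16
```

The point of the model is that leakage is structural: unless the defender's channel count scales with swarm size, perfect interceptor performance still loses the exchange.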

2. The Obsolescence of Traditional Electronic Warfare

Russian tactical doctrine relies heavily on layered, deep electronic warfare. Systems like the Pole-21 are designed to create a dome of RF interference, jamming GPS signals and severing the command and control links of incoming drones. Against first-generation FPV drones piloted by humans, this tactic proved highly effective in the attrition battles of 2023 and 2024.

However, the advent of edge-processing AI has rendered these multi-million-dollar EW systems obsolete in the face of a true autonomous swarm. Because the drones rely on internal optical navigation (machine vision matching terrain features to pre-loaded maps) and edge-computed target recognition, they simply do not require GPS or a continuous pilot RF uplink during the terminal engagement phase.33 The swarm effectively ignores the EW jamming, flying through the electronic noise as easily as a kinetic projectile flies through a smoke screen. The Pole-21, designed to break a digital tether, is useless against a machine that has severed its own tether by design.

3. Profound Economic Asymmetry

Perhaps the most destabilizing strategic implication of the Kupiansk attack is the financial calculus it imposes. Historically, warfare has favored the state actor that could out-produce its rival in heavy industry, steel, and complex machinery. Today, microchips, open-source algorithms, and injection-molded plastic are displacing heavy steel as the decisive currency of war.

[Figure: Cost-exchange asymmetry: armored column vs. drone swarm. Russian assets in red, Ukrainian in blue. $120M vs. $150K.]

The Russian armored column destroyed in the March engagement was valued at an estimated $120 million. The 40-unit swarm that systematically dismantled it cost less than $150,000—representing an unsustainable cost-exchange ratio of roughly 800:1.

Furthermore, attempting to defend against these swarms using traditional kinetic means is a losing financial proposition. A single interceptor missile for a Tor-M2 system costs roughly $800,000. Firing an $800,000 missile to destroy a $3,000 plastic drone is economically ruinous over a prolonged campaign. The military force employing massed autonomous swarms can simply exhaust and bankrupt the defender’s air defense magazines long before its own drone stockpiles are depleted.
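The cost-exchange figures quoted above (all of them the article's estimates, not independently verified) can be checked with straightforward arithmetic:

```python
column_value = 120_000_000  # destroyed Russian armor, USD (article estimate)
swarm_cost   = 150_000      # full 40-drone swarm, USD (article estimate)
print(round(column_value / swarm_cost))  # → 800, the ~800:1 exchange ratio

interceptor = 800_000  # one Tor-class missile, USD (article estimate)
drone_cost  = 3_000    # one FPV strike drone, USD (article estimate)
# Kinetically defeating the whole swarm, assuming one missile per drone:
print(40 * interceptor)  # → 32000000: $32M of missiles vs $120K of airframes
print(40 * drone_cost)   # → 120000 cost of the strike drones alone
```

Even before accounting for misses, the defender spends more than 260 dollars of interceptor for every dollar of attacking drone, which is the magazine-exhaustion dynamic the article describes.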

Doctrinal Shift: The End of Concentrated Armor

Military planners globally are currently facing a profound “triage” moment for armored warfare. For decades, the concentration of mass—grouping tanks, mechanized infantry, and self-propelled artillery into tightly packed divisions or Battalion Tactical Groups (BTGs)—was the fundamental key to achieving an operational breakthrough. The March 2026 engagement proves that a concentrated mass of steel is no longer a spearhead; it is merely a high-value, target-rich environment waiting to be processed by an algorithm.

Tactical Dispersion and Mosaic Warfare

As Major General Brovdi noted following the engagement, the very concept of a traditional tank division is now a liability.20 Survival on the modern, sensor-saturated battlefield dictates a doctrine of “tactical dispersion,” aligning closely with the emerging concepts of Mosaic Warfare. Units must spread out significantly, minimizing their visual, thermal, and electromagnetic signatures. They must operate as small, highly mobile, and semi-independent nodes that assemble rapidly only at the precise point of attack, execute the mission, and disperse again before an algorithmic swarm can be routed to their coordinates. The battlefield is becoming highly transparent, and any concentrated force will trigger a devastating autonomous response.

The Vulnerability of Hard-Kill Active Protection Systems (APS)

If external SHORAD systems cannot protect armor from swarms, conventional wisdom dictates that the armor must protect itself. Global militaries are currently scrambling to retrofit Hard-Kill Active Protection Systems (APS), such as the Israeli Trophy or the U.S. Iron Fist, onto their main battle tanks.6

However, as demonstrated in Kupiansk, current APS technology is severely limited by physical reload speeds, limited traverse rates, and shallow magazine depths. A swarm of 40 drones can simply bait the APS into expending its kinetic charges, depleting the defense in seconds, then systematically kill the tank with the remaining munitions. APS is designed to defeat a single RPG or ATGM, not a coordinated, multi-vector saturation attack.

The “Carrier” Concept and Defensive Swarms

This glaring vulnerability has given rise to the “Carrier Concept” in forward-looking military analysis. Analysts project that the future main battle tank cannot rely on passive armor or slow-to-reload kinetic interceptors. Instead, armored vehicles must evolve into “drone carriers”—essentially mobile armored hives equipped with their own AI-driven defensive swarms.26

When an offensive swarm is detected, the carrier vehicle would autonomously launch dozens of micro-interceptor drones. These interceptors, functioning like an airborne digital immune system, would engage the incoming threat in a decentralized, high-speed dogfight,40 re-establishing a dynamic and fluid “Iron Ceiling” above the dispersed tactical unit. Ukraine is already pioneering this concept with the rapid development of autonomous interceptor swarms designed to hunt down incoming threats with minimal human input, moving toward a 1:1 intercept ratio.35

Strategic Horizon: The Scaling of Algorithmic Warfare

The March 2026 Kupiansk strike was not an anomaly; it was a lethal proof of concept that is rapidly moving into mass production. The technological innovations that enabled this strike were incubated within Ukraine’s Brave1 defense tech cluster, a government-backed platform that has gamified and exponentially accelerated the procurement and R&D cycle.25 By creating an open ecosystem where frontline telemetry directly informs immediate software patches and hardware iterations, Ukraine has decoupled defense innovation from the sluggish, decades-long procurement cycles typical of Western militaries.37

The strategic implications extend far beyond the steppes of eastern Europe. The proliferation of low-cost, edge-processing AI modules, combined with commercially available drone components, means that the barrier to entry for possessing an autonomous precision-strike air force has plummeted. Non-state actors, proxy forces, and smaller nations can now procure swarm capabilities that threaten the multi-billion-dollar expeditionary forces of major superpowers.

As Ukraine scales the production of true swarms, integrating them deeply into their operational planning for 2026 and beyond, Russian forces will be forced into a frantic cycle of adaptation. The Russian deployment of the “Rubikon” elite drone unit and the formal establishment of their own Unmanned Systems Forces—a direct mirror of Ukraine’s USF—indicates that Moscow recognizes the existential threat posed by algorithmic warfare.17 However, successfully countering a decentralized, autonomous mesh network requires a level of advanced software engineering, rapid iteration, and micro-electronic supply chain integrity that Russia currently struggles to maintain under global sanctions.45

Conclusion

The March 2026 Kupiansk drone swarm attack represents a paradigm shift equivalent to the introduction of the machine gun in World War I or the aircraft carrier in World War II. The Unmanned Systems Forces of Ukraine have unequivocally demonstrated that a decentralized network of autonomous, low-cost UAVs can dismantle a state-of-the-art armored platoon in a matter of seconds. By circumventing traditional electronic warfare, overwhelming kinetic air defenses through saturation speed, and enforcing an unsustainable economic asymmetry, the swarm has deposed the tank as the king of the battlefield.

Military institutions worldwide must urgently reevaluate their procurement priorities and doctrinal assumptions. Investments heavily skewed toward concentrated heavy armor and legacy air defense systems risk outfitting armies for a war that no longer exists. The “Iron Ceiling” of defense is no longer forged from steel plates and radar-guided missiles; it is woven from adaptive mesh networks, edge-processing artificial intelligence, and algorithmic swarms. In the rapidly evolving landscape of modern conflict, survival relies not on the thickness of armor, but on the speed and autonomy of the algorithm.




Sources Used

  1. Russian Offensive Campaign Assessment, March 14, 2026 | ISW, accessed April 12, 2026, https://understandingwar.org/research/russia-ukraine/russian-offensive-campaign-assessment-march-14-2026/
  2. Russian Offensive Campaign Assessment, April 8, 2026, accessed April 12, 2026, https://understandingwar.org/research/russia-ukraine/russian-offensive-campaign-assessment-april-8-2026/
  3. Russian Offensive Campaign Assessment, April 9, 2026 | ISW, accessed April 12, 2026, https://understandingwar.org/research/russia-ukraine/russian-offensive-campaign-assessment-april-9-2026/
  4. 6 predictions for defence in 2026 – Resilience Media, accessed April 12, 2026, https://resiliencemedia.co/resilience-medias-predictions-for-2026/
  5. Design, Control, and Motion-Planning for a Root-Perching Rotor-Distributed Manipulator | Request PDF – ResearchGate, accessed April 12, 2026, https://www.researchgate.net/publication/374996654_Design_Control_and_Motion-Planning_for_a_Root-Perching_Rotor-Distributed_Manipulator
  6. The Future of Minefield Breaching – European Security & Defence, accessed April 12, 2026, https://euro-sd.com/wp-content/uploads/2026/01/ESD_02_2026_WEB.pdf
  7. Russian Offensive Campaign Assessment, March 19, 2026 | ISW, accessed April 12, 2026, https://understandingwar.org/research/russia-ukraine/russian-offensive-campaign-assessment-march-19-2026/
  8. Russia’s Battlefield Woes in Ukraine – CSIS, accessed April 12, 2026, https://www.csis.org/analysis/russias-battlefield-woes-ukraine
  9. Russian Offensive Campaign Assessment, March 17, 2026 | ISW, accessed April 12, 2026, https://understandingwar.org/research/russia-ukraine/russian-offensive-campaign-assessment-march-17-2026/
  10. Ukraine war situation update | 14 – 20 March 2026 – ReliefWeb, accessed April 12, 2026, https://reliefweb.int/report/ukraine/ukraine-war-situation-update-14-20-march-2026
  11. Drone Warfare in Ukraine: From Myths to Operational Reality – Part 1, accessed April 12, 2026, https://researchcentre.army.gov.au/library/land-power-forum/drone-warfare-ukraine-myths-operational-reality-part-1
  12. Unmanned Aircraft and the Revolution in Operational Warfare – Army University Press, accessed April 12, 2026, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/July-August-2025/Unmanned-Aircraft-Revolution/
  13. Opinion: Pokrovsk Revisited Again – Deadlier Drone Wars – Laughing at Luna – Kyiv Post, accessed April 12, 2026, https://www.kyivpost.com/opinion/63546
  14. Russia Opens ‘Terrorism’ Case Against Ukraine’s Top Drone Commander – Kyiv Post, accessed April 12, 2026, https://www.kyivpost.com/post/65828
  15. Ukrainian drones destroy ninth Russian air defense system in nine days of April – The New Voice of Ukraine, accessed April 12, 2026, https://english.nv.ua/nation/unmanned-aerial-systems-have-destroyed-9-russian-air-defense-systems-since-early-april-video-50599013.html
  16. Active Conflicts & News Megathread February 24, 2026 : r/CredibleDefense – Reddit, accessed April 12, 2026, https://www.reddit.com/r/CredibleDefense/comments/1rdcp7w/active_conflicts_news_megathread_february_24_2026/
  17. November 14 2025 — Day 1359 — Drones Over Pokrovsk, Kyiv Takes a Shot, Neptunes and Flamingos | by Stefan Korshak | Medium, accessed April 12, 2026, https://medium.com/@Stefan.Korshak/november-14-2025-day-1359-drones-over-pokrovsk-kyiv-takes-a-shot-neptunes-and-flamingos-24e79d15896b
  18. Ukrainian drone units damaged 54 units of Russian military equipment over winter, accessed April 12, 2026, https://gwaramedia.com/en/ukrainian-drone-units-damaged-54-units-of-russian-military-equipment-over-winter/
  19. Video: Russia Deployed Heavy Armor on the Lyman Direction, but the Column Was Destroyed | UA.NEWS, accessed April 12, 2026, https://ua.news/en/war-vs-rf/video-rf-kinula-vazhku-bronetekhniku-na-limanskomu-napriamku-ale-kolonu-rozgromili
  20. Mosaic Warfare In Ukraine – The Defence Horizon Journal, accessed April 12, 2026, https://tdhj.org/blog/post/mosaic-warfare-ukraine/
  21. How are Drones Changing Modern Warfare? | Australian Army Research Centre (AARC), accessed April 12, 2026, https://researchcentre.army.gov.au/library/land-power-forum/how-are-drones-changing-modern-warfare
  22. Technological Evolution on the Battlefield – CSIS, accessed April 12, 2026, https://www.csis.org/analysis/chapter-9-technological-evolution-battlefield
  23. Drones in Modern Warfare: Lessons Learnt from the War in Ukraine – Australian Army Research Centre, accessed April 12, 2026, https://researchcentre.army.gov.au/sites/default/files/241022-Occasional-Paper-29-Lessons-Learnt-from-Ukraine_2.pdf
  24. What Tech Will Define Warfare in 2026? Here’s What Insiders Say, accessed April 12, 2026, https://united24media.com/war-in-ukraine/what-tech-will-define-warfare-in-2026-heres-what-insiders-say-14123
  25. Brave1 Defense Tech Valley 2026: Ukraine is shaping a new global standard for defense solutions | MoD News, accessed April 12, 2026, https://mod.gov.ua/en/news/brave1-defense-tech-valley-2026-ukraine-is-shaping-a-new-global-standard-for-defense-solutions
  26. information about the project Brave1 – Digital State UA, accessed April 12, 2026, https://digitalstate.gov.ua/projects/tech/brave1
  27. Ukrainian Firm Unveils ‘Bucha’ Kamikaze Drone With 200 km Range …, accessed April 12, 2026, https://www.kyivpost.com/post/72062
  28. Drone Swarm System Market Research Report 2034 – Dataintelo, accessed April 12, 2026, https://dataintelo.com/report/drone-swarm-system-market
  29. MCU Insights, vol. 16, no. 4 – Marine Corps University, accessed April 12, 2026, https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/MES-Publications/MES-Insights/MCU-Insights-vol-16-no-4/
  30. PRC Concepts for UAV Swarms in Future Warfare | CNA.org., accessed April 12, 2026, https://www.cna.org/reports/2025/07/PRC-Concepts-for-UAV-Swarms-in-Future-Warfare.pdf
  31. The Greatest Adventure: A History of Human Space Exploration 9781789144604, accessed April 12, 2026, https://dokumen.pub/the-greatest-adventure-a-history-of-human-space-exploration-9781789144604.html
  32. Innovation under tough circumstances: Ukraine’s AI strategy in times of war – Oxford Insights, accessed April 12, 2026, https://oxfordinsights.com/insights/innovation-under-tough-circumstances-ukraines-ai-strategy-in-times-of-war/
  33. Lessons from Ukraine’s ‘First World Drone War’ | Opinion – The Philadelphia Inquirer, accessed April 12, 2026, https://www.inquirer.com/opinion/inq2/first-world-drone-war-ukraine-russia-20250824.html
  34. Can Interceptor Drones Protect Ukraine’s Skies? – VGI-9, accessed April 12, 2026, https://vgi.com.ua/en/interceptor-drones-ukraine-russia-war/
  35. Ukraine Developing AI-Driven Drone Swarms to Counter Russian Shaheds – Kyiv Post, accessed April 12, 2026, https://www.kyivpost.com/post/72994
  36. IoT and Edge Computing impact on Beyond 5G: enabling technologies and challenges Release 3.0 AIOTI WG Standardisation 30 Novembe, accessed April 12, 2026, https://aioti.eu/wp-content/uploads/2023/11/AIOTI-Report-IoT-and-Edge-impact-on-Beyond-5G-R3-Final.pdf
  37. Opinion: US Blind Spot in the Drone War: Why Ukraine Holds the Key to America’s AI Supremacy – Kyiv Post, accessed April 12, 2026, https://www.kyivpost.com/opinion/51165
  38. M W J – Modern War Institute -, accessed April 12, 2026, https://mwi.westpoint.edu/wp-content/uploads/2026/03/Modern-War-Journal-MWJ-second-edition-final-draft-as-of-12MAR26-pdf-1.pdf
  39. The significant decrease in T-90M tank losses in Ukraine masks a more complex reality., accessed April 12, 2026, https://meta-defense.fr/en/2026/03/16/t-90m-baisse-pertes-chars-russes/
  40. Evolution Not Revolution – Amazon S3, accessed April 12, 2026, https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/CNAS-Report-Defense-Ukraine-Drones-Final.pdf
  41. Ukraine’s $1,200 drone is getting an upgrade: swarm mode – Euromaidan Press, accessed April 12, 2026, https://euromaidanpress.com/2026/04/01/ukraines-1200-drone-is-getting-an-upgrade-swarm-mode/
  42. Commercial Equipment is Reshaping the Battlefield – Defense Opinion, accessed April 12, 2026, https://defenseopinion.com/commercial-equipment-is-reshaping-the-battlefield/1121/
  43. Red Skies Ahead: Russia Planning for Its Drone-Driven Army of Tomorrow, accessed April 12, 2026, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/January-February-2026/Red-Skies-Ahead/
  44. Russia in Review, Dec. 19, 2025–Jan. 9, 2026, accessed April 12, 2026, https://www.russiamatters.org/news/russia-review/russia-review-dec-19-2025-jan-9-2026
  45. Russia Analytical Report, Oct. 27–Nov. 3, 2025, accessed April 12, 2026, https://www.russiamatters.org/news/russia-analytical-report/russia-analytical-report-oct-27-nov-3-2025

The Algorithmic Edge: Artificial Intelligence and the Transformation of Drone Warfare

This report provides a comprehensive analysis of the transformative impact of Artificial Intelligence (AI) on the design and capabilities of military drone systems. The integration of AI is not merely an incremental enhancement but represents a fundamental paradigm shift in the character of modern warfare. This analysis concludes that AI is the central catalyst driving the evolution of unmanned aerial systems (UAS) from remotely piloted tools into “AI-native” autonomous assets, a transition with profound strategic consequences for national security.

The report’s findings are structured around six key areas. First, it examines the redesign of the drone airframe itself, arguing that the operational necessity for onboard data processing—or edge computing—in contested environments is forcing a new design philosophy. This philosophy is governed by the stringent constraints of Size, Weight, Power, and Cost (SWaP-C), creating a strategic imperative for the development of hyper-efficient, specialized AI hardware. The nation-states that master the design and mass production of these low-SWaP AI accelerators will gain a decisive advantage.

Second, the report details how AI is revolutionizing the core capabilities of drones. Autonomous navigation, untethered from GPS, provides unprecedented resilience against electronic warfare. AI-powered sensor fusion synthesizes data from multiple sources to create a rich, contextual understanding of the battlefield that surpasses human analytical capacity. Concurrently, Automated Target Recognition (ATR) is evolving from simple object detection to flexible, language-based identification, allowing drones to find novel targets on the fly.

Third, these enhanced core functions are enabling entirely new operational paradigms. AI-driven swarm intelligence allows hundreds of drones to act as a single, collaborative, and resilient entity, capable of overwhelming traditional defenses through saturation attacks. Simultaneously, cognitive electronic warfare (EW) equips these systems to dominate the electromagnetic spectrum, autonomously detecting and countering novel threats in real time. The fusion of these capabilities creates self-protecting, intelligent networks that are redefining force projection.

Fourth, the report analyzes the crisis of control this technological shift precipitates. The traditional models of human-in-the-loop (HITL) command are becoming untenable in the face of machine-speed combat. Operational necessity is forcing a move toward human-on-the-loop (HOTL) supervision, which, due to cognitive limitations and the sheer velocity of events, functionally approaches a human-out-of-the-loop (HOOTL) reality. The concept of “Meaningful Human Control” (MHC) is consequently shifting from a real-time action to a pre-mission process of design, testing, and constraint-setting, creating a significant “accountability gap.”

Fifth, the strategic implications for the 21st-century battlefield are examined. AI is compressing the military kill chain to machine speeds, creating a dynamic of hyper-fast warfare that risks inadvertent escalation. Concurrently, the proliferation of low-cost, AI-enabled drones is democratizing lethal capabilities, empowering non-state actors and altering the global balance of power. This has ignited an AI-versus-AI arms race in counter-drone technologies, forcing a doctrinal shift away from exquisite, high-cost platforms toward attritable, mass-produced intelligent systems.

Finally, the report addresses the profound ethical and legal challenges posed by these systems, focusing on the international debate surrounding Lethal Autonomous Weapon Systems (LAWS). The slow pace of international lawmaking stands in stark contrast to the rapid pace of technological development, suggesting that de facto norms established on the battlefield will likely precede any formal treaty, creating a complex and volatile regulatory environment.

In conclusion, the nation-states that successfully navigate this transformation—by prioritizing investment in attritable AI-native platforms, adapting military doctrine to machine-speed warfare, cultivating a new generation of tech-savvy warfighters, and proactively shaping international norms—will hold a decisive strategic advantage in the conflicts of the 21st century.

Section 1: The AI-Native Airframe: Redesigning Drones for Autonomous Operations

The most fundamental impact of Artificial Intelligence on drone systems begins not with abstract algorithms but with the physical and digital architecture of the platform itself. The strategic shift from remotely piloted aircraft, which function as extensions of a human operator, to truly autonomous systems necessitates a radical rethinking of drone design. This evolution is driven by the primacy of onboard data processing, a capability that enables mission execution in the face of sophisticated electronic warfare. However, this demand for onboard computational power creates a critical and defining tension with the inherent physical constraints of unmanned platforms, a tension governed by the imperatives of Size, Weight, Power, and Cost (SWaP-C). The resolution of this tension is leading to the emergence of the “AI-native” airframe, a new class of drone designed from the ground up for autonomous warfare.

1.1 The Primacy of Onboard Processing: The Shift from Remote Piloting to Edge AI

The defining characteristic that separates a modern AI-enabled drone from its predecessors is its capacity to perform complex computations locally, a concept known as edge computing or “AI at the edge”.1 This capability is the bedrock of true autonomy, as it untethers the drone from the need for a continuous, high-bandwidth data link to a human operator or a remote cloud server.3 In the context of modern peer-level conflict, where the electromagnetic spectrum is a fiercely contested domain, this independence is not a luxury but a mission-critical necessity. The ability of a drone to continue its mission—to navigate, identify targets, and even engage them—after its communication link has been severed by enemy jamming is a revolutionary leap in operational resilience.2

This paradigm shift is enabled by the integration of highly specialized hardware designed specifically to handle the computational demands of AI and machine learning (ML) tasks. While traditional drones rely on basic microcontrollers for flight stability, AI-native platforms incorporate a suite of powerful processors. These include general-purpose graphics processing units (GPGPUs), which excel at the parallel processing required by many ML algorithms, and increasingly, more efficient and specialized hardware such as application-specific integrated circuits (ASICs) and systems-on-a-chip (SoCs).2 These components are optimized to run the complex neural network models that underpin modern AI capabilities like computer vision and real-time data analysis. Industry leaders in the semiconductor space, such as NVIDIA, have become central players in the defense ecosystem, with their compact, powerful computing modules like the Jetson series (e.g., Xavier NX, Orin Nano, Orin NX) being explicitly designed into the autopilots of advanced military and commercial drones.7

The operational imperative for this onboard processing power is clear. It reduces decision latency to near-zero, enabling instantaneous responses that are impossible when data must be transmitted to a ground station for analysis and then have commands sent back. This is crucial for time-sensitive tasks such as terminal guidance for a kinetic strike, dynamic obstacle avoidance in a cluttered urban environment, or real-time threat analysis and countermeasures against an incoming missile.4 By processing sensor data locally, the drone can make its own decisions, transforming it from a remote-controlled puppet into a self-reliant agent capable of adapting to changing battlefield conditions.9

1.2 Redefining Design Under SWaP-C Imperatives

While the demand for onboard AI processing is theoretically limitless, its practical implementation is governed by the ironclad constraints of Size, Weight, Power, and Cost—collectively known as SWaP-C. This set of interdependent variables represents the central design challenge for unmanned systems, particularly for the smaller, more numerous, and often expendable drones that are proving so decisive in modern conflicts.5 Every component added to an airframe must be justified across all four dimensions, as an increase in one often negatively impacts the others.

This creates a fundamental design trade-off. Advanced AI algorithms require immense processing power, which translates directly into larger, heavier processing units that consume more electrical power and generate significant heat, which in turn may require additional weight for cooling systems. These factors directly diminish the drone’s operational effectiveness by reducing its flight endurance (by drawing more from the battery) and its payload capacity (by taking up a larger portion of the allowable weight).2 Furthermore, the cost of these high-performance components can be substantial, challenging the strategic utility of deploying them on attritable platforms designed to be lost in combat. The financial calculus is stark: for military UAS, a reduction of just one pound in platform weight can save an estimated $30,000 in operational costs for an ISR platform and up to $60,000 for a combat platform over its lifecycle.12
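The per-pound figures above lend themselves to a quick back-of-the-envelope calculation. The sketch below applies the cited $30,000/lb (ISR) and $60,000/lb (combat) lifecycle figures to a hypothetical payload swap; the 1.5 lb weight reduction and the component swap itself are invented for illustration.

```python
# Back-of-the-envelope SWaP-C savings using the per-pound lifecycle figures
# cited in the text; the 1.5 lb payload swap itself is hypothetical.
COST_PER_LB = {"isr": 30_000, "combat": 60_000}  # USD saved per pound removed

def lifecycle_savings(weight_reduction_lb, platform):
    """Estimated lifecycle cost avoided by shedding the given weight."""
    return weight_reduction_lb * COST_PER_LB[platform]

# Example: a 1.5 lb reduction from swapping a GPGPU module for an integrated SoC.
print(lifecycle_savings(1.5, "isr"))     # 45000.0
print(lifecycle_savings(1.5, "combat"))  # 90000.0
```

Trivial as the arithmetic is, it shows why shaving ounces from an AI accelerator is a procurement-level concern, not just an engineering nicety.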

The solution to this complex optimization problem is the development of “AI-native” drone platforms. In contrast to legacy airframes that have been retrofitted with AI capabilities, these systems are engineered from their inception for autonomous operation.1 This holistic design philosophy influences every aspect of the drone’s construction. Airframes are built from advanced lightweight composite materials to maximize strength while minimizing weight. Power systems are meticulously engineered for efficiency, with some designs even incorporating AI-driven energy management algorithms to optimize power distribution during different phases of a mission.6 Most critically, the electronics architecture is built around highly integrated, low-power SoCs and ASICs that are custom-designed to provide the maximum computational performance within the smallest possible SWaP-C footprint.13 The intense focus on this area is evidenced by significant military research and development efforts aimed at creating miniaturized, low SWaP-C payloads, such as compact radar and multi-band antenna systems, that can be integrated onto small UAS without compromising their core performance characteristics.16

The SWaP-C constraint, therefore, acts as the primary forcing function in the design of modern tactical AI-powered drones. It is no longer sufficient to simply write more advanced software; the central challenge is creating the hardware that can execute that software efficiently within the unforgiving physical limits of an unmanned airframe. This reality elevates the design and mass production of specialized, hyper-efficient, low-power AI accelerator chips from a mere engineering problem to a primary strategic concern. The competitive advantage in 21st-century drone warfare is rapidly shifting away from nations that can build the largest and most expensive platforms to those that can design and mass-produce the most computationally powerful microelectronics within the tightest SWaP-C budget.

This hardware-centric paradigm, born from the immutable laws of physics governing flight, introduces a new and critical strategic vulnerability. An adversary’s ability to disrupt the highly specialized and globally distributed supply chains for these low-SWaP AI chips could effectively ground an opponent’s entire autonomous drone fleet. A future conflict, therefore, will not be waged solely on the physical battlefield but also within the intricate ecosystem of the global semiconductor industry. Actions such as targeted sanctions, cyberattacks on fabrication plants, or control over the supply of rare earth materials necessary for chip production become potent acts of industrial warfare. This reality compels nation-states to pursue self-sufficiency in the design and manufacturing of these critical components, fundamentally transforming the concept of a “defense industrial base” to include what were once considered purely commercial entities: semiconductor foundries and microchip design firms.

Section 2: Revolutionizing Core Capabilities: From Enhanced to Emergent Functions

The integration of AI into the drone’s core architecture is not merely about improving existing functions; it is about creating entirely new capabilities that transform the drone from a simple sensor-shooter platform into an intelligent agent. This revolution is most apparent in three key areas: autonomous navigation, which grants resilience in contested environments; advanced perception through sensor fusion, which enables a deep, contextual understanding of the battlefield; and automated target recognition, which accelerates the process of identifying and acting upon threats. Together, these AI-driven functions represent a qualitative leap in the operational potential of unmanned systems.

2.1 Autonomous Navigation and Mission Execution

For decades, the effectiveness of unmanned systems has been tethered to the availability of the Global Positioning System (GPS). In a modern conflict against a peer adversary, however, the electromagnetic spectrum is a primary battleground, and GPS signals are a prime target for jamming and spoofing. AI provides the critical solution to this vulnerability. By employing advanced techniques such as Visual-Inertial Odometry (VIO) and Simultaneous Localization and Mapping (SLAM), an AI-powered drone can navigate by observing and mapping its physical surroundings.4 Using onboard cameras and other sensors, it can recognize landmarks, build a 3D model of its environment, and determine its position and trajectory relative to that model, all without a single signal from a satellite.19 This capability to operate effectively in a GPS-denied environment represents a quantum leap in mission survivability and operational freedom.
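The core idea behind VIO-style navigation, accumulating frame-to-frame motion estimates into a position fix without any external signal, can be sketched as 2D dead reckoning. This is a drastic simplification: real VIO/SLAM pipelines estimate the motion increments from camera and IMU data and correct drift against a built map, whereas here they are supplied by hand.

```python
import math

# Toy 2D dead-reckoning illustration of the idea behind visual-inertial
# odometry: integrate frame-to-frame motion estimates (heading change and
# forward displacement, here supplied by hand) into a pose, with no external
# positioning signal.

def integrate_odometry(start, increments):
    """Accumulate (turn_radians, forward_meters) increments into an (x, y, heading) pose."""
    x, y, heading = start
    for d_theta, d_fwd in increments:
        heading += d_theta
        x += d_fwd * math.cos(heading)
        y += d_fwd * math.sin(heading)
    return x, y, heading

# Fly 10 m east, turn 90 degrees left, fly 5 m north -- all without GPS.
x, y, heading = integrate_odometry((0.0, 0.0, 0.0), [(0.0, 10.0), (math.pi / 2, 5.0)])
print(round(x, 6), round(y, 6))  # 10.0 5.0
```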

The impact of this resilience is dramatically amplified by AI’s ability to enhance mission success rates. The conflict in Ukraine has served as a proving ground for this technology, where the integration of AI for terminal guidance on first-person view (FPV) drones has reportedly boosted strike accuracy from a baseline of 10-20% to as high as 70-80%.5 This remarkable improvement stems from the AI’s ability to take over the final, critical phase of the attack, homing in on the target even if the communication link to the human operator is lost due to jamming or terrain masking. Beyond terminal guidance, AI algorithms can optimize entire mission profiles in real time. They can dynamically plan flight paths to avoid newly detected air defense threats, reroute to account for changing weather conditions, or adapt the mission plan based on new intelligence, all without direct human input.10

Looking forward, the role of AI in mission planning is set to expand even further. Emerging applications of generative AI, the same technology that powers models like ChatGPT, are being explored for highly complex cognitive tasks. These include the automated planning of intricate, multi-stage mission routes through hostile territory and even the automatic generation of draft operation orders (OPORDs), a task that is traditionally a time-consuming and mentally taxing process for human staff officers.23 By automating these functions, AI promises to significantly reduce the cognitive load on human planners and accelerate the entire operational planning cycle.

2.2 Advanced Perception through AI-Powered Sensor Fusion

A single sensor provides a limited, one-dimensional view of the world. A modern military drone, however, is a multi-sensory platform, equipped with a diverse suite of instruments including high-resolution electro-optical (EO) cameras, infrared (IR) thermal imagers, radar, Light Detection and Ranging (LiDAR), and acoustic sensors.1 The true power of this array is unlocked by AI-driven sensor fusion, the process of intelligently combining data from these disparate sources into a single, coherent, and comprehensive model of the operational environment. This fused picture provides a degree of situational awareness that is impossible for a human operator to achieve by attempting to mentally synthesize multiple, separate data feeds in real time.25

The core benefit of sensor fusion is its ability to overcome the inherent limitations of any single sensor. For instance, an optical camera is ineffective in fog or darkness, but a thermal imager can see heat signatures and radar can penetrate obscurants. An AI algorithm can synthesize the data from all three, correlating a radar track with a thermal signature and, if conditions permit, a visual identification, thereby producing a high-confidence assessment of a potential target.10 This multi-modal approach is critical for all aspects of the drone’s operation, from robust navigation and obstacle avoidance to reliable targeting and threat detection.27 The field is advancing so rapidly that researchers are even exploring the use of novel quantum sensors, with AI being the essential tool to filter the noise and extract meaningful signals from these highly sensitive but complex instruments.28
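One simple way to formalize the multi-sensor confidence argument above is independent-evidence fusion in log-odds space, where each sensor's detection probability nudges a shared posterior. The sensor names and probabilities below are illustrative, and real fusion engines model sensor correlation and error characteristics far more carefully.

```python
import math

# Toy independent-evidence fusion in log-odds space: each sensor's detection
# probability nudges a shared posterior, mirroring "radar track + thermal
# signature + visual match = high-confidence assessment". Sensor names and
# probabilities are illustrative.

def fuse(prior, sensor_probs):
    log_odds = math.log(prior / (1 - prior))
    for p in sensor_probs.values():
        log_odds += math.log(p / (1 - p))  # assumes conditionally independent cues
    return 1 / (1 + math.exp(-log_odds))

readings = {"radar": 0.70, "thermal": 0.80, "optical": 0.60}
posterior = fuse(prior=0.5, sensor_probs=readings)
print(round(posterior, 3))  # 0.933 -- three individually weak cues combine strongly
```

Note how no single sensor exceeds 80% confidence, yet the fused assessment exceeds 93%: this is the quantitative version of the fog-penetrating argument in the text.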

This capability is having a revolutionary impact on the field of Intelligence, Surveillance, and Reconnaissance (ISR). Traditionally, ISR platforms would collect vast amounts of raw data—terabytes of video footage, for example—which would then be transmitted back to a ground station for painstaking analysis by teams of humans. This process is slow, bandwidth-intensive, and prone to human error and fatigue. AI-powered drones are upending this model. By performing analysis at the edge, the drone’s onboard AI can sift through the raw data as it is collected, automatically filtering out irrelevant information, classifying objects of interest, and prioritizing the most critical intelligence for immediate transmission to human analysts.1 This dramatically reduces the bandwidth required for data exfiltration and, more importantly, accelerates the entire intelligence cycle from days or hours to minutes. The effectiveness of this approach has been demonstrated in Ukraine, where integrated systems like Delta and Griselda use AI to process battlefield reports and drone footage in near real-time, providing frontline units with an unparalleled operational picture.20
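The filter-classify-prioritize pattern described above can be sketched as a small triage function: score each onboard detection, drop clutter, and uplink only the highest-value items within a bandwidth budget. All class names, priority weights, and thresholds here are invented for illustration.

```python
# Illustrative edge triage: the onboard model scores each detection, discards
# low-value clutter, and queues only the highest-priority items for the
# bandwidth-limited uplink. Class names, weights, and scores are invented.

PRIORITY = {"air_defense": 1.0, "armor": 0.8, "truck": 0.4, "civilian_car": 0.0}

def triage(detections, min_confidence=0.6, uplink_budget=2):
    """detections: list of (object_class, confidence). Returns what gets uplinked."""
    kept = [(c, p) for c, p in detections
            if p >= min_confidence and PRIORITY.get(c, 0) > 0]
    kept.sort(key=lambda d: (PRIORITY[d[0]], d[1]), reverse=True)
    return kept[:uplink_budget]  # everything else stays (or is logged) onboard

frame = [("civilian_car", 0.95), ("armor", 0.85), ("truck", 0.7),
         ("air_defense", 0.65), ("armor", 0.4)]
print(triage(frame))  # [('air_defense', 0.65), ('armor', 0.85)]
```

The bandwidth saving falls out directly: of five raw detections, only two cross the uplink, and they are ranked by tactical value rather than by raw confidence.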

2.3 Automated Target Recognition (ATR): See, Understand, Act

Building upon the foundation of advanced perception, AI is enabling a dramatic leap in the speed and accuracy of targeting through Automated Target Recognition (ATR). Using sophisticated machine learning and computer vision algorithms, ATR systems can automatically detect, classify, and identify potential targets within the drone’s sensor feeds.32 This goes beyond simply detecting an object; it involves classifying it (e.g., vehicle, person) and, with increasing fidelity, identifying it (e.g., T-90 main battle tank vs. a civilian tractor). This capability has been shown to be effective at significant ranges, with some systems able to lock onto targets up to 2 kilometers away.20 By automating this critical function, ATR drastically reduces the cognitive burden on human operators, allowing them to focus on higher-level tactical decisions and accelerating the engagement cycle.33

Furthermore, advanced ATR systems are proving adept at countering traditional methods of military deception. Where a human eye might be fooled by camouflage, netting, or even sophisticated inflatable decoys, an AI algorithm can analyze data from across the electromagnetic spectrum. By fusing thermal, radar, and multi-spectral imagery, the ATR system can identify tell-tale signatures—such as the heat from a recently run engine or the specific radar reflectivity of armored steel—that betray the true nature of the target.20

The primary bottleneck in developing more powerful ATR systems is the immense amount of high-quality, accurately labeled data required to train the machine learning models.34 An algorithm can only learn to identify a T-90 tank if it has been shown thousands of images of T-90 tanks in various conditions—different angles, lighting, weather, and states of damage. Recognizing this challenge, military organizations are now focusing heavily on standardizing the curation and labeling of military datasets and developing more efficient training methodologies, such as building smaller, specialized AI models tailored for specific, narrow tasks.20

A revolutionary development on the horizon promises to mitigate this data dependency: Open Vocabulary Object Detection (OVOD) powered by Vision Language Models (VLMs).35 Unlike traditional ATR, which can only find what it has been explicitly trained to see, an OVOD system connects language with imagery. This allows an operator to task the drone using natural language to find novel or uniquely described targets. For example, a commander could instruct the system to “find the command vehicle in that convoy; it’s a truck with a large satellite dish on the roof.” Even if the VLM has never been specifically trained on that exact vehicle configuration, it can use its semantic understanding of “truck,” “satellite dish,” and “roof” to correlate the text description with the visual data from the drone’s sensors and identify the correct target.35 This capability transforms ATR from a rigid, pre-programmed function into a flexible, dynamic, and instantly adaptable tool for battlefield intelligence.
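Schematically, OVOD scoring reduces to comparing a text query and candidate image regions in a shared embedding space. The sketch below fakes the embeddings with hand-written vectors purely to show the ranking step; a real system would obtain them from a trained vision-language model such as a CLIP-style encoder, and the region labels are hypothetical.

```python
import math

# Schematic open-vocabulary target scoring: a vision-language model maps the
# operator's text query and each detected image region into a shared embedding
# space, and the best-matching region wins. The vectors are hand-written
# stand-ins for real VLM embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.9, 0.1, 0.4]  # embedding of "truck with a large satellite dish on the roof"
regions = {
    "region_a": [0.2, 0.9, 0.1],    # e.g. a fuel tanker
    "region_b": [0.88, 0.15, 0.42], # e.g. the described command vehicle
}

best = max(regions, key=lambda r: cosine(query, regions[r]))
print(best)  # region_b
```

The key design property is that the target class never appears in the code: the query is free text, so a never-before-seen vehicle description can still be matched, which is exactly what distinguishes OVOD from library-based ATR.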

The convergence of these three AI-driven capabilities—resilient navigation, multi-sensor fusion, and advanced ATR—is creating an emergent property that is far greater than the sum of its parts: contextual battlefield understanding. The drone is evolving from a mere tool that sees a target into an intelligent agent that understands the target in its operational context. The logical progression is clear: AI-powered navigation allows the drone to position itself optimally in the battlespace, even under heavy electronic attack. Once in position, AI-driven sensor fusion provides a rich, multi-layered, and continuous stream of data about that environment. Within that data stream, advanced ATR algorithms can pinpoint and identify specific objects of interest.

When these functions are integrated, the system can perform sophisticated correlations at machine speed. It does not just see a “tank” as a traditional ATR system might. Instead, it perceives a “T-72 main battle tank” (a specific ATR identification), located at precise coordinates despite GPS jamming (a function of AI navigation), whose thermal signature indicates its engine was running within the last 15 minutes (an inference from sensor fusion), and which is positioned in a concealed revetment next to a building whose signals intelligence signature matches that of a known command post (a correlation with wider ISR data). This is no longer simple targeting; it is automated, real-time tactical intelligence generation at the tactical edge. This emergent capability of contextual understanding is the primary enabler of what some analysts have termed “Minotaur Warfare,” a future form of conflict where AI systems assume greater control over tactical operations.5 As a drone’s comprehension of the battlefield begins to approach, and in some cases exceed, that of a human platoon leader, the doctrinal and ethical justifications for maintaining a human “in-the-loop” for every discrete tactical decision will inevitably begin to erode. This creates immense pressure on military organizations to redefine their command and control structures and to place greater trust in AI systems to execute progressively more complex and lethal decisions, thereby accelerating the trend toward greater autonomy in warfare.

Section 3: New Paradigms in Unmanned Warfare

The integration of artificial intelligence is not only enhancing the individual capabilities of drones but is also enabling entirely new operational concepts that were previously confined to the realm of science fiction. These emerging paradigms, principally swarm intelligence and cognitive electronic warfare, represent a fundamental change in how military force can be organized, projected, and sustained on the modern battlefield. They are not incremental improvements on existing tactics but are instead the building blocks of a new form of high-tempo, algorithmically-driven conflict.

3.1 Swarm Intelligence and Collaborative Autonomy

A drone swarm is not simply a large number of drones flying in the same area; it is a group of unmanned systems that utilize artificial intelligence to communicate, collaborate, and act as a single, cohesive, and intelligent entity.1 Unlike traditionally controlled assets, a swarm does not rely on a central human operator directing the actions of each individual unit. Instead, its collective behavior is an “emergent” property that arises from individual drones following a simple set of rules—such as maintaining separation from their neighbors, aligning their flight path with the group, and maintaining cohesion with the overall swarm—inspired by the flocking of birds or schooling of fish.37 This allows for complex group actions to be performed with a remarkable degree of coordination and adaptability.
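The three local rules named above (separation, alignment, cohesion) are the classic "boids" recipe, and a minimal update step fits in a few lines. The gains below are arbitrary toy values; a real swarm controller would add neighbor radii, speed limits, and obstacle-avoidance terms.

```python
# Minimal 2D "boids"-style update: each drone steers using only the three
# local rules (separation, alignment, cohesion), with no central controller.
# Gains are arbitrary toy values.

def step(drones, sep_w=0.05, align_w=0.05, coh_w=0.01):
    """drones: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    new = []
    for d in drones:
        others = [o for o in drones if o is not d]
        n = len(others)
        # Cohesion: steer toward the neighbors' center of mass.
        cx = sum(o["pos"][0] for o in others) / n - d["pos"][0]
        cy = sum(o["pos"][1] for o in others) / n - d["pos"][1]
        # Alignment: steer toward the neighbors' average velocity.
        ax = sum(o["vel"][0] for o in others) / n - d["vel"][0]
        ay = sum(o["vel"][1] for o in others) / n - d["vel"][1]
        # Separation: steer away from neighbors.
        sx = sum(d["pos"][0] - o["pos"][0] for o in others)
        sy = sum(d["pos"][1] - o["pos"][1] for o in others)
        vx = d["vel"][0] + coh_w * cx + align_w * ax + sep_w * sx
        vy = d["vel"][1] + coh_w * cy + align_w * ay + sep_w * sy
        new.append({"pos": (d["pos"][0] + vx, d["pos"][1] + vy), "vel": (vx, vy)})
    return new

# Three drones with divergent headings gradually fall into formation.
squad = [{"pos": (0.0, 0.0), "vel": (1.0, 0.0)},
         {"pos": (8.0, 2.0), "vel": (0.0, 1.0)},
         {"pos": (4.0, 6.0), "vel": (0.5, 0.5)}]
for _ in range(3):
    squad = step(squad)
```

The emergent-behavior claim is visible in the structure: no line of code references the group's overall shape or goal, yet repeated application of the local rules produces coordinated collective motion.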

The tactical applications of this technology are profound. Swarms are particularly well-suited for conducting saturation attacks, where the sheer number of inexpensive, coordinated drones can overwhelm and exhaust the magazines of even the most sophisticated and expensive air defense systems.1 A single billion-dollar Aegis destroyer may be able to intercept dozens of incoming threats, but it may not be able to counter a coordinated attack by a thousand AI-guided drones costing only a few thousand dollars each. Beyond saturation attacks, swarms are ideal for executing complex reconnaissance missions over a wide area, establishing persistent area denial, or conducting multi-axis, synchronized strikes on multiple targets simultaneously.39

The key to a swarm’s operational effectiveness and resilience lies in its decentralized command and control (C2) architecture. In a centralized system, the loss of the single command node can paralyze the entire force. In a swarm, each drone makes decisions based on its own sensor data and peer-to-peer communication with its immediate neighbors.37 This distributed intelligence means that the loss of individual units, or even entire sub-groups, does not compromise the overall mission. The swarm can autonomously adapt, reallocating tasks and reconfiguring its formation to compensate for losses and continue its objective.41 This inherent resilience makes swarms exceptionally difficult to defeat with traditional attrition-based tactics.

Recognizing this transformative potential, the United States military has been aggressively pursuing swarm capabilities. The Defense Advanced Research Projects Agency’s (DARPA) OFFensive Swarm-Enabled Tactics (OFFSET) program, for example, aimed to develop and demonstrate tactics for heterogeneous swarms of up to 250 air and ground robots operating in complex urban environments.42 While large-scale swarm combat has yet to be seen, the first uses of autonomous swarms have been reported in conflicts in Libya and Gaza, signaling that this technology is rapidly moving from the laboratory to the battlefield.42

3.2 Cognitive Electronic Warfare (EW): Dominating the Spectrum

The modern battlefield is an invisible storm of electromagnetic energy. Communications, navigation, sensing, and targeting all depend on the ability to successfully transmit and receive signals across the radio frequency (RF) spectrum. Consequently, electronic warfare—the art of controlling that spectrum—is central to modern conflict. However, traditional EW systems, which rely on pre-programmed libraries of known enemy signals, are becoming increasingly obsolete. Adversaries are fielding agile, software-defined radios and radars that can change their frequencies, waveforms, and pulse patterns on the fly, creating novel signatures that a library-based system cannot recognize or counter.5

Cognitive electronic warfare is the AI-driven solution to this dynamic threat. Instead of relying on a static threat library, a cognitive EW system uses machine learning to sense and analyze the electromagnetic environment in real time.47 An AI-enabled drone can autonomously detect an unfamiliar jamming signal, use ML algorithms to classify its key parameters, and then generate a tailored countermeasure—such as a precisely configured jamming waveform or a rapid frequency hop—all within milliseconds and without requiring any input from a human operator.49
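The sense-classify-respond loop can be caricatured as anomaly detection over a channel survey followed by a hop to the quietest channel. Channel numbers, noise figures, and the z-score threshold below are all invented; fielded cognitive EW systems classify waveform features with learned models rather than thresholding a single noise floor.

```python
import statistics

# Toy sense-classify-respond loop: flag the current channel as jammed when its
# noise floor is a z-score outlier against the survey, then hop to the
# quietest channel. All values are invented for illustration.

def pick_channel(noise_db_by_channel, current, z_thresh=1.5):
    levels = list(noise_db_by_channel.values())
    mean, stdev = statistics.mean(levels), statistics.pstdev(levels)
    jammed = stdev > 0 and (noise_db_by_channel[current] - mean) / stdev > z_thresh
    if not jammed:
        return current  # link is healthy; stay put
    return min(noise_db_by_channel, key=noise_db_by_channel.get)  # quietest channel

survey = {1: -92.0, 2: -90.0, 3: -35.0, 4: -91.0, 5: -93.0}  # channel 3 is being jammed
print(pick_channel(survey, current=3))  # 5
print(pick_channel(survey, current=1))  # 1
```

Because the decision is made against the measured environment rather than a stored threat library, the same few lines handle a jamming waveform the system has never seen before, which is the defining property of the cognitive approach.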

This capability is fundamentally dual-use, encompassing both defensive and offensive applications. Defensively, it provides a powerful form of Electronic Protection (EP), allowing a drone or a swarm to dynamically protect itself from enemy jamming and GPS spoofing attempts. This ensures that the drones can maintain their communication links and navigational accuracy, and ultimately complete their mission even in a highly contested EW environment.1 Offensively, the same AI techniques can be used for Electronic Attack (EA). An AI-powered system can more effectively probe an adversary’s network to find vulnerabilities, and then deploy optimized jamming or spoofing signals to disrupt their radar, neutralize their air defenses, or sever their command and control links.22 The ultimate goal is adaptive counter-jamming: AI agents that proactively perceive the electromagnetic environment and autonomously execute complex anti-jamming strategies. These strategies can include not only adjusting their own communication parameters but also physically maneuvering an individual drone, or the entire swarm, to find clearer signal paths or to better triangulate and neutralize an enemy jammer.52

The fusion of swarm intelligence with cognitive electronic warfare creates a powerful, emergent capability: a self-protecting, resilient, and intelligent force projection network. A swarm is no longer just a collection of individual sensor-shooter platforms; it becomes a mobile, adaptive, and distributed system for seizing and maintaining control of the battlespace. The logic of this combination is compelling. A swarm is composed of numerous, geographically distributed nodes (the individual drones). Each of these nodes can be equipped with cognitive EW payloads. Through the swarm’s collaborative AI, these nodes can be dynamically tasked in real time.

For instance, in a swarm of fifty drones, ten might be assigned to sense the RF environment, fifteen might be tasked with providing protective jamming (EA) for the entire group, and the remaining twenty-five could be dedicated to the primary ISR or strike mission. The swarm’s AI-driven logic can reallocate these roles instantaneously based on the evolving tactical situation. If a jammer drone is shot down, another drone can be autonomously re-tasked to take its place. If a new, unknown enemy radar frequency is detected, the entire swarm can adapt its own communication protocols and jamming profiles to counter it. This creates a system that is orders of magnitude more resilient, adaptable, and survivable than a single, high-value asset attempting to perform the same mission.
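The role-shuffling logic in this fifty-drone example can be sketched as quota-driven reallocation: when losses open a deficit in a critical role, survivors are promoted out of the lowest-priority pool. The role names and quotas mirror the example above, but the reallocation policy itself is a hypothetical illustration.

```python
# Hypothetical quota-driven role reallocation for the fifty-drone example:
# when losses open a deficit in a critical role, survivors are promoted out
# of the lowest-priority (strike) pool. The policy is invented.

ROLE_PRIORITY = ["sense", "jam", "strike"]  # refill order, most critical first
QUOTAS = {"sense": 10, "jam": 15, "strike": 25}

def reallocate(assignments, lost):
    """assignments: {drone_id: role}; lost: set of downed drone_ids."""
    survivors = {d: r for d, r in assignments.items() if d not in lost}
    for role in ROLE_PRIORITY:
        deficit = QUOTAS[role] - sum(1 for r in survivors.values() if r == role)
        donors = [d for d, r in survivors.items()
                  if r == ROLE_PRIORITY[-1] and r != role]  # pull from strike pool
        for d in donors[:max(deficit, 0)]:
            survivors[d] = role
    return survivors

fleet = {f"d{i}": ("sense" if i < 10 else "jam" if i < 25 else "strike")
         for i in range(50)}
after = reallocate(fleet, lost={"d10", "d11"})  # two jammer drones shot down
counts = {r: sum(1 for v in after.values() if v == r) for r in QUOTAS}
print(counts)  # {'sense': 10, 'jam': 15, 'strike': 23}
```

The strike pool absorbs the attrition while the sensing and jamming quotas are restored in full, which is the resilience property the text attributes to distributed swarm C2.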

This new paradigm will inevitably lead to a future battlefield characterized by “swarm versus swarm” combat.55 In such a conflict, victory will not be determined by the side with the most powerful individual platform, but by the side whose swarm algorithms can out-think, out-maneuver, and out-adapt the enemy’s algorithms. This reality signals a profound shift in military research and development priorities, moving away from a traditional focus on platform-centric hardware engineering and toward an emphasis on algorithm-centric software development and AI superiority. It also carries the sobering implication that future conflicts could witness massive, automated engagements between opposing swarms, playing out at machine speeds with little to no direct human intervention. Such a scenario would result in an unprecedented rate of attrition and herald the arrival of a new, terrifyingly fast form of high-tech, mechanized warfare.

Section 4: The Human-Machine Interface: Command, Control, and the Crisis of Control

As artificial intelligence grants drone systems escalating levels of autonomy, the role of the human warfighter is undergoing a profound and contentious transformation. The traditional relationship, in which a human directly controls a machine, is being replaced by a spectrum of more complex human-machine teaming arrangements. This evolution is forcing a critical re-examination of military command and control structures and has ignited an intense global debate over the appropriate level of human judgment in the use of lethal force. At the heart of this debate is the concept of “Meaningful Human Control” (MHC), a principle that is proving to be as difficult to define and implement as it is ethically essential.

4.1 The Spectrum of Autonomy: Defining the Human Role

The relationship between a human operator and an autonomous weapon system is not a binary choice between manual control and full autonomy. Rather, it exists along a spectrum, commonly defined by three distinct levels of human involvement in the decision to use lethal force. Understanding these classifications is essential to grasping the nuances of the current policy and ethical debates.

Table 1: The Spectrum of Autonomy in Unmanned Systems

Human-in-the-Loop (HITL)
Definition: The system can perform functions like searching for, detecting, and tracking a target, but a human operator must provide the final authorization before lethal force is applied. The human is an integral and required part of the decision-making process.42
Operational Example: An operator of an MQ-9 Reaper drone positively identifies a target and receives clearance before manually firing a Hellfire missile.
Implications for Command & Control (C2): The C2 process is deliberate but can be slow. High cognitive load on the operator. Vulnerable to communication link disruption. Can be too slow for high-tempo or swarm-vs-swarm engagements.57
Primary Legal/Ethical Challenge: Latency and Speed. The time required for human approval can be a fatal liability in rapidly evolving combat scenarios, such as defending against a hypersonic missile or a drone swarm.

Human-on-the-Loop (HOTL)
Definition: The system is authorized to autonomously search for, detect, track, target, and engage threats based on pre-defined parameters (Rules of Engagement). A human supervisor monitors the system’s operations and has the ability to intervene and override or abort an action.42
Operational Example: An automated air defense system (e.g., C-RAM) is authorized to automatically engage incoming rockets and mortars. A human supervisor monitors the system and can issue a “cease fire” command if needed.
Implications for Command & Control (C2): C2 is supervisory, enabling machine-speed engagements. Reduces operator cognitive load for routine tasks. Allows for management of large-scale systems like swarms.
Primary Legal/Ethical Challenge: Automation Bias and Effective Veto. Operators may become complacent and overly trust the system’s judgment, failing to intervene when necessary. The speed of the engagement may make a human veto practically impossible.60

Human-out-of-the-Loop (HOOTL)
Definition: The system, once activated, makes all combat decisions—including searching, targeting, and engaging—without any further human interaction or supervision. The human is removed from the individual engagement decision cycle entirely.42
Operational Example: A “fire-and-forget” loitering munition is launched into a designated area with instructions to autonomously find and destroy any vehicle emitting a specific type of radar signal.
Implications for Command & Control (C2): C2 is limited to the initial activation and mission programming. Enables operations in completely communications-denied environments. Represents true autonomy.
Primary Legal/Ethical Challenge: The Accountability Gap and IHL Compliance. If the system makes an error and commits a war crime, it is unclear who is legally and morally responsible. The system’s inability to apply human judgment raises serious doubts about its capacity to comply with the laws of war.63

Currently, U.S. Department of Defense policy for systems that use lethal force mandates a human-in-the-loop approach, requiring that commanders and operators exercise “appropriate levels of human judgment over the use of force”.42 However, the relentless pace of technological advancement and the operational realities of modern warfare are placing this policy under immense pressure.
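The policy distinction between the three levels in Table 1 can be made concrete as an authorization rule: what happens when the human says nothing. The sketch below is purely illustrative; all names are hypothetical and it does not represent any fielded system's logic.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HITL = auto()   # human must authorize each engagement
    HOTL = auto()   # system engages unless the human vetoes
    HOOTL = auto()  # no human in the individual engagement cycle

def authorize_engagement(mode, human_approved=None, human_vetoed=False):
    """Return True if lethal force may be applied under the given mode.

    HITL: requires explicit positive human approval.
    HOTL: proceeds unless a supervising human vetoes in time.
    HOOTL: proceeds on pre-programmed parameters alone.
    """
    if mode is ControlMode.HITL:
        return human_approved is True
    if mode is ControlMode.HOTL:
        return not human_vetoed
    return True  # HOOTL: decision cycle excludes the human entirely

# The policy difference is visible in the defaults (human silence):
assert authorize_engagement(ControlMode.HITL) is False   # silence blocks fire
assert authorize_engagement(ControlMode.HOTL) is True    # silence permits fire
assert authorize_engagement(ControlMode.HOOTL) is True
```

The design point the sketch isolates is that HITL fails safe on human silence while HOTL fails deadly; the entire debate over "effective veto" turns on that default.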

4.2 The Challenge of Meaningful Human Control (MHC)

In response to the ethical and legal dilemmas posed by increasing autonomy, the concept of "Meaningful Human Control" (MHC) has become the central pillar of international regulatory discussions.67 The principle is intuitively appealing: humans, not machines, must retain ultimate control over, and moral responsibility for, any use of lethal force.70 But while there is broad agreement on the general principle, implementing it in practice is fraught with profound technical, operational, and philosophical challenges.

First, there are significant technical and operational challenges. The very nature of advanced AI creates barriers to human understanding and control. Many powerful machine learning models function as “black boxes,” meaning that even their designers cannot fully explain the specific logic behind a particular output. This lack of explainability, or epistemic limitation, makes it impossible for a human operator to truly understand why a system has decided a particular object is a legitimate target, fundamentally undermining the basis for meaningful control.71 Furthermore, an AI system, no matter how sophisticated, lacks genuine human judgment, empathy, and contextual understanding. It cannot comprehend the value of a human life or interpret the subtle, non-verbal cues that might signal surrender or civilian status, all of which are critical for making lawful and ethical targeting decisions in the complex fog of war.71

Second, there are cognitive limitations inherent in the human-machine interface itself. A large body of research in cognitive psychology has identified a phenomenon known as "automation bias": the tendency for humans to over-trust the suggestions of an automated system, even when those suggestions are incorrect.60 An operator supervising a highly reliable autonomous system may become complacent, failing to maintain the situational awareness needed to detect an error and intervene in time. This is compounded by the temporal limitations imposed by machine-speed warfare: an AI can process data and cycle through an engagement decision in milliseconds, a speed at which a human's ability to deliberate, decide, and physically execute an override becomes practically impossible.60
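The temporal mismatch can be made concrete with rough numbers. The latencies below are illustrative assumptions, not measured figures; the point is only the ratio between the two timescales.

```python
# Why a real-time veto breaks down at machine speed.
# Both figures are assumed round numbers for illustration only.
machine_cycle_s = 0.005   # assumed: one AI engagement decision every 5 ms
human_veto_s = 1.5        # assumed: time to perceive, judge, and press abort

decisions_per_veto = human_veto_s / machine_cycle_s
print(f"Machine decisions per human veto window: {decisions_per_veto:.0f}")
# Under these assumptions, hundreds of engagement decisions elapse
# before a single human intervention can complete.
```

Even if the assumed numbers are off by an order of magnitude in either direction, the supervisor is still vetoing batches of past decisions rather than any individual engagement.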

Finally, there is no internationally accepted definition of what constitutes “meaningful” control. Interpretations vary wildly among nations. Some argue it requires direct, positive human authorization for every single engagement (a strict HITL model). Others contend that it is satisfied by a human setting the initial rules of engagement, target parameters, and geographical boundaries for the system, which would permit a HOTL or even HOOTL operational posture.68 This fundamental ambiguity remains a primary obstacle to the formation of any international treaty or binding regulation.

The intense debate over which “loop” a human should occupy is, in many ways, becoming a false choice that is being rendered moot by operational necessity. In a future high-tempo conflict, particularly one involving swarm-versus-swarm engagements, the decision cycle will be compressed to a timescale where a human simply cannot remain in the loop for every individual lethal action. A human operator cannot physically or cognitively process and approve hundreds of distinct targeting decisions in the few seconds it might take for an enemy swarm to close in. This operational reality will inevitably force militaries to adopt a human-on-the-loop supervisory posture as the default for defensive systems.

However, given the powerful effects of automation bias and the sheer velocity of events, the human supervisor’s practical ability to meaningfully assess the tactical situation, identify a potential error in the system’s judgment, and execute a timely veto will be severely constrained. The “veto” option, while theoretically present, becomes functionally impossible to exercise in many critical scenarios. Thus, the operational demand for machine-speed defense is pushing systems toward a state of de facto autonomy, regardless of stated policies that emphasize retaining human control.

This leads to a fundamental re-conceptualization of Meaningful Human Control itself. MHC is evolving from a technical standard to be engineered into a real-time interface into a broader legal and ethical framework for managing risk and assigning accountability prior to a system’s deployment. The most “meaningful” control a human will exercise over a future autonomous weapon will not be in the split-second decision to fire, but in the months and years of rigorous design, extensive testing and validation in diverse environments, meticulous curation of training data to minimize bias, and the careful, deliberate definition of operational constraints. This includes setting clear geographical boundaries, defining permissible target classes, and programming explicit, unambiguous rules of engagement. This evolution effectively shifts the locus of responsibility away from the frontline operator and diffuses it across a wide array of actors: the system designers, the software programmers, the data scientists who curated the training sets, and the senior commanders who formally certified and deployed the system. This diffusion creates the widely feared “accountability gap,” a scenario where a machine commits an act that would constitute a war crime if done by a human, yet responsibility is so fragmented across the long chain of human agents that no single individual can be held morally or legally culpable for the machine’s actions.63
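The pre-deployment controls described above (geographical boundaries, permissible target classes, explicit rules of engagement) amount, in software terms, to a constraint envelope fixed before launch. The following is a minimal, hypothetical sketch of such an envelope; every field name and value is invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MissionConstraints:
    """Hypothetical pre-deployment envelope for an autonomous system.

    On the argument above, this is where 'meaningful' control is
    actually exercised: before activation, not during the engagement.
    """
    geofence: tuple               # (lat_min, lat_max, lon_min, lon_max)
    permitted_classes: frozenset  # e.g. {"radar_emitter"}
    max_mission_minutes: int
    abort_on_loss_of_link: bool

def target_permitted(c, lat, lon, target_class):
    """Engagement is allowed only inside the envelope set before launch."""
    lat_min, lat_max, lon_min, lon_max = c.geofence
    in_box = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return in_box and target_class in c.permitted_classes

c = MissionConstraints(
    geofence=(47.0, 47.5, 35.0, 35.8),
    permitted_classes=frozenset({"radar_emitter"}),
    max_mission_minutes=40,
    abort_on_loss_of_link=True,
)
assert target_permitted(c, 47.2, 35.4, "radar_emitter")
assert not target_permitted(c, 47.2, 35.4, "vehicle")        # class not permitted
assert not target_permitted(c, 48.0, 35.4, "radar_emitter")  # outside geofence
```

The frozen dataclass mirrors the accountability argument: the envelope is immutable once the system is activated, so responsibility attaches to whoever defined and certified it, not to a real-time operator.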

Section 5: Strategic Implications for the 21st Century Battlefield

The proliferation of AI-powered drone systems is not merely a tactical development; it is a strategic event that is fundamentally reshaping the character of conflict, altering the global balance of power, and creating new and dangerous dynamics of escalation. The core impacts can be understood through three interrelated trends: the radical compression of the military kill chain, the democratization of lethal air power, and the emergence of a new, high-speed arms race in counter-drone technologies.

5.1 Compressing the Kill Chain: Warfare at Machine Speed

The traditional military targeting process, often conceptualized as the “F2T2EA” cycle—Find, Fix, Track, Target, Engage, and Assess—is a deliberate, often time-consuming, and human-intensive endeavor.74 Artificial intelligence is injecting unprecedented speed and efficiency into every stage of this process, compressing a cycle that once took hours or days into a matter of minutes, or even seconds.23

Table 2: AI’s Impact Across the F2T2EA Kill Chain

Find
Traditional method (human-centric): Human analysts manually review hours or days of ISR video and signals intelligence to detect potential targets.
AI-enabled method (machine-centric): AI algorithms continuously scan multi-source ISR data (video, SIGINT, satellite imagery) in real time, automatically flagging anomalies and potential targets.29
Impact/Acceleration: Reduces target discovery time from hours or days to seconds or minutes; drastically reduces analyst cognitive load.23

Fix
Traditional: An operator manually maneuvers a sensor to get a positive identification and precise location of the target.
AI-enabled: An autonomous drone, using AI-powered navigation, maneuvers to fix the target's location, even in GPS-denied environments.20
Impact: Increases accuracy of location data and enables operations in contested airspace.

Track
Traditional: A dedicated team of operators continuously monitors the target's movement, a process prone to human error or loss of line-of-sight.
AI-enabled: AI-powered ATR and sensor-fusion algorithms autonomously track the target, predicting its movement and maintaining a persistent track file even with intermittent sensor contact.32
Impact: Improves tracking persistence and accuracy, freeing human operators for other tasks.

Target
Traditional: A commander, often with legal and intelligence advisors, reviews a "target packet" of information to authorize engagement based on Rules of Engagement (ROE).
AI-enabled: An AI decision-support system automatically correlates the track file with pre-programmed ROE, classifies the target, assesses collateral-damage risk, and recommends engagement options to the commander.76
Impact: Reduces decision time from minutes to seconds; provides data-driven recommendations to support human judgment.

Engage
Traditional: A human operator manually guides a weapon to the target or designates the target for a guided munition.
AI-enabled: An autonomous drone or loitering munition executes the engagement, using onboard AI for terminal guidance to ensure precision, even against moving targets or in jammed environments.5
Impact: Increases probability of kill (Pk) from roughly 30-50% to roughly 80% in some cases; reduces reliance on vulnerable communication links.5

Assess
Traditional: Analysts review post-strike imagery to conduct Battle Damage Assessment (BDA), a process that can be slow and subjective.
AI-enabled: AI algorithms automatically analyze post-strike imagery, comparing it to pre-strike data to provide instantaneous, quantitative BDA and recommend re-attack if necessary.
Impact: Accelerates BDA from hours or minutes to seconds, enabling rapid re-engagement of missed targets.

The strategic goal of this radical acceleration is to achieve “decision advantage” over an adversary. By cycling through the OODA loop (Observe, Orient, Decide, Act) faster than an opponent, a military force can seize the initiative, dictate the tempo of battle, and achieve objectives before the enemy can effectively react.74 However, this pursuit of machine-speed warfare introduces a profound and dangerous risk of unintended escalation. An automated system, operating at a tempo that precludes human deliberation, could engage a misidentified target or act on flawed intelligence, triggering a catastrophic crisis that spirals out of control before human leaders can intervene.78 In a future conflict between two AI-enabled military powers, the immense pressure to delegate engagement authority to machines to avoid being outpaced could create highly unstable “use-them-or-lose-them” scenarios, where the first side to unleash its autonomous systems gains a potentially decisive, and irreversible, advantage.78
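Read as software, the AI-enabled side of the F2T2EA cycle is a staged pipeline in which each phase consumes the previous phase's output. The sketch below is schematic only: every function is a hypothetical stub standing in for a real subsystem (an ATR model, an ROE checker, a BDA analysis), and the data shapes are invented.

```python
def find(sensor_feeds):
    # AI scans multi-source ISR data and flags candidate targets
    return [obj for feed in sensor_feeds for obj in feed if obj.get("anomaly")]

def fix_and_track(candidates):
    # ATR and sensor fusion maintain a persistent track file per candidate
    return [{"track_id": i, **c} for i, c in enumerate(candidates)]

def target(tracks, roe):
    # decision support correlates track files against rules of engagement
    return [t for t in tracks if t["class"] in roe["permitted_classes"]]

def engage(approved):
    # terminal guidance executed per approved track
    return [t["track_id"] for t in approved]

def assess(engaged, post_strike):
    # automated BDA: recommend re-attack for any surviving target
    return [tid for tid in engaged if post_strike.get(tid) != "destroyed"]

feeds = [[{"anomaly": True, "class": "radar_emitter"},
          {"anomaly": False, "class": "civilian_vehicle"}]]
roe = {"permitted_classes": {"radar_emitter"}}

tracks = fix_and_track(find(feeds))
hit_list = engage(target(tracks, roe))
reattack = assess(hit_list, post_strike={0: "destroyed"})
assert hit_list == [0] and reattack == []
```

The compression argument falls out of the structure: once each stub is a model rather than a human workflow, the end-to-end latency of the chain is bounded by inference time, not by analyst shifts or approval meetings.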

5.2 The Proliferation of Asymmetric Power: Democratizing Lethality

For most of military history, the projection of air power—the ability to conduct persistent surveillance and precision strikes from the sky—was the exclusive domain of wealthy, technologically advanced nation-states. The convergence of low-cost commercial drone technology with increasingly accessible and powerful open-source AI software has shattered this monopoly, fundamentally altering the global balance of power between states and non-state actors (NSAs).39

For the cost of a few hundred or a few thousand dollars, insurgent groups, terrorist organizations, and transnational criminal cartels can now acquire and weaponize capabilities that were, just a decade ago, available only to major militaries.81 These groups can now field their own "miniature air forces," allowing them to conduct persistent ISR on government forces, execute precise standoff attacks with modified munitions, and generate powerful propaganda, all while dramatically reducing the risk to their own personnel.83 This "democratization of lethality" provides a potent asymmetric advantage, allowing technologically inferior groups to inflict significant damage on, and impose high costs against, far more powerful conventional forces.

The historical record demonstrates a clear and accelerating trend. State-supported groups like Hezbollah have a long and sophisticated history of using drones for ISR, famously hacking into the unencrypted video feeds of Israeli drones as early as the 1990s to gain a tactical advantage.84 The Islamic State took this a step further, becoming the first non-state actor to weaponize commercial drones at scale, using them for reconnaissance and to drop small mortar-like munitions on Iraqi and Syrian forces.83 More recently, Houthi rebels in Yemen have employed increasingly sophisticated, Iranian-supplied kamikaze drones and anti-ship missiles to significant strategic effect, disrupting global shipping and challenging naval powers.82 The war in Ukraine has served as a global laboratory and showcase for this new reality, where both sides have deployed millions of low-cost FPV drones, demonstrating their ability to decimate armored columns, artillery positions, and logistics lines, and proving that mass has a quality all its own.5

5.3 The Counter-Drone Arms Race: AI vs. AI

The inevitable strategic response to the proliferation of offensive AI-powered drones has been the rapid emergence of an arms race in AI-powered Counter-Unmanned Aircraft Systems (C-UAS).85 Defending against small, fast, and numerous autonomous threats is a complex challenge that cannot be solved by any single technology. Effective C-UAS requires a layered, integrated defense-in-depth approach that combines multiple sensor modalities—such as RF detectors, radar, EO/IR cameras, and acoustic sensors—to reliably detect, track, classify, and ultimately neutralize incoming drone threats.86

Artificial intelligence is the critical enabling technology that weaves these layers together. AI algorithms are essential for fusing the data from disparate sensors, distinguishing the faint radar signature or unique RF signal of a hostile drone from the clutter of non-threats like birds, civilian aircraft, or background noise. This AI-driven classification drastically reduces false alarm rates and provides human operators with high-confidence, actionable intelligence.36

Once a threat is identified, AI also plays a crucial role in the neutralization phase. Countermeasures range from non-kinetic “soft kill” options, such as electronic warfare to jam a drone’s control link or spoof its GPS navigation, to kinetic “hard kill” solutions, including interceptor drones, high-energy lasers, and high-powered microwave weapons.86 For a given threat, an AI-powered C2 system can autonomously select the most appropriate and efficient countermeasure—for example, choosing to jam a single reconnaissance drone but launching a kinetic interceptor against an incoming attack drone—and can direct the engagement at machine speed. This automated response is absolutely essential for countering the threat of a drone swarm, where dozens or hundreds of targets may need to be engaged simultaneously.92
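The threat-to-countermeasure matching described above is, at its core, a classification-then-policy step: classify the threat, then pick the cheapest effective layer of the defense. The sketch below is a hypothetical illustration of such a selection rule; the threat classes, countermeasure names, and preference ordering are all invented.

```python
# Hypothetical C2 countermeasure selection for a layered C-UAS defense.
# Soft-kill options are preferred where viable; illustrative only.
COUNTERMEASURES = {
    # threat class -> ordered preference list
    "recon_drone":  ["rf_jamming", "gps_spoofing"],
    "attack_drone": ["kinetic_interceptor", "high_energy_laser"],
    "drone_swarm":  ["high_power_microwave", "interceptor_swarm"],
}

def select_countermeasure(threat_class, available):
    """Pick the first preferred countermeasure that is actually on hand."""
    for option in COUNTERMEASURES.get(threat_class, []):
        if option in available:
            return option
    return "alert_operator"  # no automated option: escalate to a human

inventory = {"rf_jamming", "kinetic_interceptor", "high_power_microwave"}
assert select_countermeasure("recon_drone", inventory) == "rf_jamming"
assert select_countermeasure("attack_drone", inventory) == "kinetic_interceptor"
assert select_countermeasure("unknown_threat", inventory) == "alert_operator"
```

Note the fallback: when no automated response matches, the system escalates rather than improvising, which is where the human-on-the-loop posture re-enters even in a machine-speed defense.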

This dynamic creates an escalating, high-speed, cat-and-mouse game on the battlefield. Offensive drones will be designed with AI to autonomously navigate, communicate on encrypted, frequency-hopping data links, and use deceptive tactics to evade detection. In response, defensive C-UAS systems will use their own AI to detect those subtle signatures, predict their flight paths, and coordinate a multi-layered defense. This will inevitably lead to a future of “swarm versus swarm” combat, where autonomous offensive swarms are met by autonomous defensive swarms, and victory is determined not by the quality of the airframe, but by the superiority of the underlying algorithms and their ability to learn and adapt in real time.55

The convergence of the compressed kill chain and the proliferation of low-cost, asymmetric drone capabilities is forcing a fundamental doctrinal shift in modern militaries. The focus is moving away from the procurement of exquisite, expensive, and highly survivable individual platforms and toward a new model emphasizing system resilience and attritability. The era of the “unsinkable” aircraft carrier or the “invincible” main battle tank is being challenged by the stark reality that these multi-billion-dollar assets can be disabled or destroyed by a coordinated network of thousand-dollar drones. The logical chain of this strategic shift is clear: AI accelerates the kill chain, making every asset on the battlefield more vulnerable and more easily targeted. Simultaneously, cheap, AI-enabled drones are becoming available to virtually any actor, state or non-state. Therefore, even the most technologically advanced and heavily defended platforms are at constant risk of being overwhelmed and destroyed by a numerically superior, low-cost, and intelligent force.

This new reality renders the traditional military procurement model—which invests immense resources in a small number of highly capable platforms—strategically untenable. The logical response is to pivot investment toward concepts like the Pentagon’s Replicator initiative, which prioritizes the mass production of thousands of cheaper, “attritable” (i.e., expendable) autonomous systems.17 These systems are designed with the expectation that many will be lost in combat, but their low cost and high numbers allow them to absorb these losses and still achieve the mission. This shift toward attritable mass has profound implications for the global defense industry and military force structures. It favors nations with agile, commercial-style advanced manufacturing capabilities over those with slow, bureaucratic, and expensive traditional defense procurement pipelines. The ability to rapidly iterate designs, 3D-print components, and mass-produce intelligent, autonomous drones will become a key metric of national military power. This could also lead to a “hollowing out” of traditional military formations, as investment, prestige, and personnel are redirected from legacy platforms like tanks and fighter jets to new unmanned systems units that require entirely different skill sets, such as data science, AI programming, and robotics engineering.31

Section 6: The Regulatory and Ethical Horizon: Navigating the LAWS Debate

The rapid integration of artificial intelligence into drone systems, particularly those capable of employing lethal force, has created profound legal and ethical challenges that are outpacing the ability of international law and normative frameworks to adapt. The prospect of Lethal Autonomous Weapon Systems (LAWS)—machines that can independently select and engage targets without direct human control—has ignited a global debate that strikes at the core principles of the law of armed conflict and raises fundamental questions about accountability, human dignity, and the future of warfare.

6.1 International Humanitarian Law (IHL) and the Accountability Gap

The use of any weapon in armed conflict is governed by a long-standing body of international law known as International Humanitarian Law (IHL), or the law of armed conflict. The core principles of IHL are designed to limit the effects of war, particularly on civilians. These foundational rules include: the principle of Distinction, which requires combatants to distinguish between military objectives and civilians or civilian objects at all times; the principle of Proportionality, which prohibits attacks that may be expected to cause incidental loss of civilian life, injury to civilians, or damage to civilian objects that would be excessive in relation to the concrete and direct military advantage anticipated; and the principle of Precaution, which obligates commanders to take all feasible precautions to avoid and minimize harm to civilians.93

There are grave and well-founded doubts as to whether a fully autonomous weapon system, powered by AI, could ever be capable of making the complex, nuanced, and context-dependent judgments required to comply with these principles.73 An AI system, no matter how well trained, lacks uniquely human qualities such as empathy, common-sense reasoning, and a true understanding of the value of human life. It cannot interpret the subtle behavioral cues that might indicate a person is surrendering (hors de combat) or is a civilian in distress. Furthermore, AI systems are vulnerable to acting on biased or incomplete data; a facial recognition algorithm trained on a non-diverse dataset, for example, could be more likely to misidentify individuals from certain ethnic groups, with potentially tragic consequences on the battlefield.71

This leads to the central legal and ethical dilemma of LAWS: the accountability gap.63 In traditional warfare, if a war crime is committed, legal responsibility can be assigned to the soldier who pulled the trigger and/or the commander who gave the unlawful order. When an autonomous system makes a mistake and unlawfully kills civilians, it is not at all clear who should be held responsible. Is it the fault of the software programmer who wrote the faulty code? The manufacturer who built the system? The data scientist who curated the biased training dataset? The commander who deployed the system without fully understanding its limitations? Or the machine itself, which has no legal personality and cannot be put on trial? This diffusion of responsibility across a complex chain of human and non-human actors creates the very real possibility of a legal and moral vacuum, where atrocities could be committed with no one being held legally accountable for them.64

6.2 Global Efforts at Regulation: The UN and Beyond

The international community has been grappling with the challenge of LAWS for over a decade. The primary forum for these discussions has been the Group of Governmental Experts (GGE) on LAWS, operating under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva.42

However, progress within the CCW GGE has been painstakingly slow, largely due to a lack of consensus among member states.99 The debate is characterized by deeply divergent positions. On one side, a large and growing coalition of states, supported by the International Committee of the Red Cross (ICRC) and a broad civil society movement known as the “Campaign to Stop Killer Robots,” advocates for the negotiation of a new, legally binding international treaty. Such a treaty would prohibit systems that cannot be used with meaningful human control and strictly regulate all other forms of autonomous weapons.71 On the other side, a number of major military powers, including the United States, Russia, and Israel, have so far resisted calls for a new treaty. Their position is generally that existing IHL is sufficient to govern the use of any new weapon system, and they favor the development of non-binding codes of conduct, best practices, and national-level review processes rather than a prohibitive international ban.100

The official policy of the United States is articulated in Department of Defense Directive 3000.09, “Autonomy in Weapon Systems.” This directive states that all autonomous and semi-autonomous weapon systems “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force”.42 It establishes a rigorous senior-level review and certification process that any new autonomous weapon system must pass before it can be fielded, but it does not ban such systems outright.

Frustrated by the slow, consensus-bound process at the CCW, proponents of regulation have begun to seek alternative venues. In a significant development, the UN General Assembly passed a resolution on LAWS in December 2024 with overwhelming support. This resolution calls for the UN Secretary-General to seek the views of states on LAWS and to hold new consultations, a move widely seen as an attempt to shift the debate to a forum where a single state cannot veto progress. This suggests that momentum toward some form of new international legal instrument is building, even if its final form and forum remain uncertain.93

The international debate on LAWS can be understood as a fundamental clash between two irreconcilable philosophical viewpoints: a human-centric view of law and ethics versus a techno-utilitarian view of military effectiveness. The human-centric perspective, advanced by organizations like the ICRC and the Campaign to Stop Killer Robots, is largely deontological. It argues that the act of a machine making a life-or-death decision over a human being is inherently immoral and unlawful, regardless of the outcome. This view holds that such a decision requires uniquely human capacities like moral reasoning, empathy, and the ability to show mercy, which a machine can never possess. Allowing a machine to kill, therefore, represents a fundamental affront to human dignity and a “digital dehumanization” that must be prohibited.71 The focus of this argument is on the process of the decision.

In contrast, the techno-utilitarian viewpoint, often implicitly held by proponents of autonomous systems and by states resisting a ban, is consequentialist. It argues that the primary moral and legal goal in warfare is to achieve legitimate military objectives while minimizing unnecessary suffering and collateral damage. If an AI-powered system can be empirically proven to be more precise, more reliable, and less prone to error, fatigue, or emotion than a human soldier, then its use is not only legally permissible but may even be morally preferable.101 The focus of this argument is on the outcome of the decision. These two starting points, one prioritizing the moral nature of the decision-making process, the other prioritizing the empirical outcome, are in fundamental conflict, which helps to explain the deep divisions and lack of progress in international forums like the CCW. The debate is not merely a technical one about defining levels of autonomy; it is a profound disagreement about the very source of moral authority in the conduct of war.

This deep philosophical divide, combined with the slow, deliberate pace of international diplomacy and treaty-making, stands in stark contrast to the blistering speed of technological development. This creates a dangerous dynamic where operational facts on the ground are likely to establish de facto norms of behavior long before any formal international law can be agreed upon. The widespread and effective use of semi-autonomous loitering munitions and AI-targeted drones in conflicts like the one in Ukraine is already normalizing their presence on the battlefield and demonstrating their military utility. This creates a “new reality” to which international law will likely be forced to adapt, rather than a future condition that it can preemptively shape. Consequently, any future regulations may be compelled to “grandfather in” the highly autonomous systems that are already in service, leading to a potential treaty that bans hypothetical, future “killer robots” while implicitly permitting the very real and increasingly autonomous systems that are already being deployed in conflicts around the world.

Conclusion and Strategic Recommendations

The integration of Artificial Intelligence into unmanned systems is not an incremental evolution; it is a disruptive and revolutionary transformation of military technology and the character of war itself. AI is fundamentally reshaping drone design, creating a new class of “AI-native” platforms constrained by the physics of SWaP-C and dependent on advanced microelectronics. It is enabling a suite of revolutionary capabilities, from resilient navigation in denied environments to the collaborative intelligence of swarms and the adaptive dominance of cognitive electronic warfare. These capabilities are, in turn, compressing the military kill chain to machine speeds, democratizing access to sophisticated air power for non-state actors, and forcing a crisis in traditional models of command and control.

The strategic landscape is being remade by these technologies. The battlefield is becoming a transparent, hyper-lethal environment where survivability depends less on armor and more on algorithms. The logic of military procurement is shifting from a focus on exquisite, high-cost platforms to a new paradigm of attritable, intelligent mass. And the very nature of human control over the use of force is being challenged, creating profound legal and ethical dilemmas that the international community is struggling to address. Navigating this new era of algorithmic warfare requires a clear-eyed assessment of these changes and a deliberate, forward-looking national strategy.

Based on the analysis contained in this report, the following strategic recommendations are offered for policymakers and defense leaders:

  1. Prioritize Investment in Attritable Mass and Sovereign AI Hardware. The strategic focus of research, development, and procurement must shift. The era of prioritizing small numbers of expensive, “survivable” platforms is ending. The future lies in the ability to field large numbers of intelligent, autonomous, and attritable systems that can be lost without catastrophic strategic impact. This requires a fundamental overhaul of defense acquisition processes to favor speed, agility, and commercial-style innovation. Critically, this strategy is entirely dependent on assured access to the specialized, low-SWaP AI hardware that powers these systems. Therefore, it is a national security imperative to treat the semiconductor supply chain as a strategic asset, investing heavily in domestic chip design and fabrication capabilities to ensure sovereign control over these foundational components of modern military power.
  2. Drive Urgent and Radical Doctrinal Adaptation. The technologies discussed in this report render many existing military doctrines obsolete. Concepts of command and control must be radically rethought to accommodate human-machine teaming and machine-speed decision-making. Force structures must be reorganized, moving away from platform-centric formations (e.g., armored brigades, carrier strike groups) and toward integrated, multi-domain networks of manned and unmanned systems. Logistics and sustainment models must adapt to a battlefield characterized by extremely high attrition rates for unmanned systems. This doctrinal evolution must be driven from the highest levels of military leadership and must be pursued with a sense of urgency, as adversaries are already adapting to this new reality.
  3. Cultivate a New Generation of Human Capital. The warfighter of the future will require a fundamentally different skillset. While traditional martial skills will remain relevant, they must be augmented by expertise in data science, AI/ML programming, robotics, and systems engineering. The military must aggressively recruit, train, and retain talent in these critical fields, creating new career paths and promotion incentives for a tech-savvy force. This includes not only uniformed personnel but also a deeper integration of civilian experts and partnerships with academia and the private technology sector.
  4. Lead Proactively in Shaping International Norms. The United States should not adopt a passive or obstructionist posture in the international debate on autonomous weapons. The slow pace of the CCW process provides an opportunity for the United States and its allies to proactively lead the development of international norms and standards for the responsible military use of AI. Rather than focusing on all-or-nothing bans on hypothetical future systems, this effort should prioritize achievable, concrete regulations that can build a broad consensus. This could include establishing international standards for the testing, validation, and verification of autonomous systems; promoting transparency in data curation and algorithm design to mitigate bias; and developing common frameworks for ensuring legal review and accountability. By leading this effort, the United States can shape the normative environment in a way that aligns with its interests and values, before that environment is irrevocably set by the chaotic realities of the next conflict.




Sources Used

  1. AI Impact Analysis on the Military Drone Industry – MarketsandMarkets, accessed September 26, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-impact-analysis-military-drone-industry.asp
  2. Drone AI | Artificial Intelligence Components | Deep Learning Software, accessed September 26, 2025, https://www.unmannedsystemstechnology.com/expo/artificial-intelligence-components/
  3. Tactical Edge Computing: Enabling Faster Intelligence Cycles – FlySight, accessed September 26, 2025, https://www.flysight.it/tactical-edge-computing-enabling-faster-intelligence-cycles/
  4. Edge AI in Tactical Defense: Empowering the Future of Military and Aerospace Operations, accessed September 26, 2025, https://dedicatedcomputing.com/edge-ai-in-tactical-defense-empowering-the-future-of-military-and-aerospace-operations/
  5. ARTIFICIAL INTELLIGENCE’S GROWING ROLE IN MODERN WARFARE – War Room, accessed September 26, 2025, https://warroom.armywarcollege.edu/articles/ais-growing-role/
  6. AI Components in Drones – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/ai-drone-components/
  7. Autonomous Machines: The Future of AI – NVIDIA Jetson, accessed September 26, 2025, https://www.nvidia.com/en-us/autonomous-machines/
  8. Advanced AI-Powered Drone, Autopilot & Hardware Solutions | UST, accessed September 26, 2025, https://www.unmannedsystemstechnology.com/2025/06/advanced-ai-powered-drone-autopilot-hardware-solutions/
  9. Drone Onboard AI Processors – Meegle, accessed September 26, 2025, https://www.meegle.com/en_us/topics/autonomous-drones/drone-onboard-ai-processors
  10. AI in Military Drones: Transforming Modern Warfare (2025-2030) – MarketsandMarkets, accessed September 26, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-in-military-drones-transforming-modern-warfare.asp
  11. SWaP-C: Transforming the Future of Aircraft Components, accessed September 26, 2025, https://www.nextgenexecsearch.com/swap-c-transforming-the-future-of-aircraft-components/
  12. Optimizing SWaP-C for Unmanned Platforms: Pushing the Boundaries of Size, Weight, Power, and Cost in Aerospace and Defense, accessed September 26, 2025, https://idstch.com/technology/electronics/optimizing-swap-c-for-unmanned-platforms-pushing-the-boundaries-of-size-weight-power-and-cost-in-aerospace-and-defense/
  13. Using SWaP-C Reductions to Improve UAS/UGV Mission Capabilities – Magazine Article, accessed September 26, 2025, https://saemobilus.sae.org/articles/using-swap-c-reductions-improve-uas-ugv-mission-capabilities-16aerp05_01
  14. SWaP-optimized mission systems for unmanned platforms help expand capabilities, accessed September 26, 2025, https://militaryembedded.com/unmanned/payloads/swap-optimized-mission-systems-for-unmanned-platforms-help-expand-capabilities
  15. Optimizing SWaP-C in Defense and Aerospace: Strategies for 2025 – Galorath Incorporated, accessed September 26, 2025, https://galorath.com/blog/optimizing-swap-c-defense-aerospace-2025/
  16. Low SWAP-C Imaging Radar for Small Air Vehicle Sense and Avoid – NASA TechPort – Project, accessed September 26, 2025, https://techport.nasa.gov/projects/113335
  17. Low SWaP-C UAS Payload: MatrixSpace Wins AFWERX Contract – Dronelife, accessed September 26, 2025, https://dronelife.com/2024/06/19/matrixspace-awarded-1-25m-afwerx-contract-for-low-swap-c-uas-payload-development/
  18. AI-ENABLED DRONE AUTONOMOUS NAVIGATION AND DECISION MAKING FOR DEFENCE SECURITY | ENVIRONMENT. TECHNOLOGY. RESOURCES. Proceedings of the International Scientific and Practical Conference, accessed September 26, 2025, https://journals.rta.lv/index.php/ETR/article/view/8237
  19. Revolutionizing drone navigation: AI algorithms take flight – Show Me Mizzou, accessed September 26, 2025, https://showme.missouri.edu/2024/revolutionizing-drone-navigation-ai-algorithms-take-flight/
  20. Ukraine’s Future Vision and Current Capabilities for Waging AI …, accessed September 26, 2025, https://www.csis.org/analysis/ukraines-future-vision-and-current-capabilities-waging-ai-enabled-autonomous-warfare
  21. ai-enabled drone autonomous navigation and decision making for defence security – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/382409933_AI-ENABLED_DRONE_AUTONOMOUS_NAVIGATION_AND_DECISION_MAKING_FOR_DEFENCE_SECURITY
  22. C2 and AI Integration In Drone Warfare – Impacts On TTPs & Military Strategy, accessed September 26, 2025, https://www.strategycentral.io/post/c2-and-ai-integration-in-drone-warfare-impacts-on-ttps-military-strategy
  23. Innovating Defense: Generative AI’s Role in Military Evolution | Article – U.S. Army, accessed September 26, 2025, https://www.army.mil/article/286707/innovating_defense_generative_ais_role_in_military_evolution
  24. SensorFusionAI: Shaping the Future of Defence Strategy – DroneShield, accessed September 26, 2025, https://www.droneshield.com/blog/shaping-the-future-of-defence-strategy
  25. AI for Sensor Fusion: Sensing the Invisible – Mind Foundry, accessed September 26, 2025, https://www.mindfoundry.ai/blog/sensing-the-invisible
  26. (PDF) UAV Detection Multi-sensor Data Fusion – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/382689292_UAV_Detection_Multi-sensor_Data_Fusion
  27. AI-Enabled Fusion for Conflicting Sensor Data – Booz Allen, accessed September 26, 2025, https://www.boozallen.com/markets/defense/indo-pacific/ai-enabled-fusion-for-conflicting-sensor-data.html
  28. Quantum Sensing and AI for Drone and Nano-Drone Detection, accessed September 26, 2025, https://postquantum.com/quantum-sensing/quantum-sensing-ai-drones/
  29. AI-Powered Intelligence, Surveillance, and Reconnaissance (ISR) Systems in Defense, accessed September 26, 2025, https://www.researchgate.net/publication/394740589_AI-Powered_Intelligence_Surveillance_and_Reconnaissance_ISR_Systems_in_Defense
  30. What is ISR (Intelligence, Surveillance, Reconnaissance)? – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/drone-acronym-isr/
  31. Technological Evolution on the Battlefield – CSIS, accessed September 26, 2025, https://www.csis.org/analysis/chapter-9-technological-evolution-battlefield
  32. Aided and Automatic Target Recognition – CoVar, accessed September 26, 2025, https://covar.com/technology-area/aided-target-recognition/
  33. Automatic Target Recognition for Military Use – What’s the Potential? – FlySight, accessed September 26, 2025, https://www.flysight.it/automatic-target-recognition-for-military-use-whats-the-potential/
  34. Operationally Relevant Artificial Training for Machine Learning – RAND, accessed September 26, 2025, https://www.rand.org/pubs/research_reports/RRA683-1.html
  35. Finding Novel Targets on the Fly: Using Advanced AI to Make Flexible Automatic Target Recognition Systems – Draper, accessed September 26, 2025, https://www.draper.com/media-center/featured-stories/detail/27285/finding-novel-targets-on-the-fly-using-advanced-ai-to-make-flexible-automatic-target-recognition-systems
  36. AI-powered Management for Defense Drone Swarms – – Datategy, accessed September 26, 2025, https://www.datategy.net/2025/07/21/ai-powered-management-for-defense-drone-swarms/
  37. Drone Swarms: Collective Intelligence in Action, accessed September 26, 2025, https://scalastic.io/en/drone-swarms-collective-intelligence/
  38. Generative AI “Agile Swarm Intelligence” (Part 1): Autonomous Agent Swarms Foundations, Theory, and Advanced Applications | by Arman Kamran | Medium, accessed September 26, 2025, https://medium.com/@armankamran/generative-ai-agile-swarm-intelligence-part-1-autonomous-agent-swarms-foundations-theory-and-9038e3bc6c37
  39. Technology and modern warfare: How drones and AI are transforming conflict, accessed September 26, 2025, https://www.visionofhumanity.org/technology-and-modern-warfare-how-drones-and-ai-are-transforming-conflict/
  40. The Impact Of Drones On The Future Of Military Warfare – SNS Insider, accessed September 26, 2025, https://www.snsinsider.com/blogs/the-impact-of-drones-on-the-future-of-military-warfare
  41. AI‑operated swarm intelligence: Flexible, self‑organizing drone fleets – Marks & Clerk, accessed September 26, 2025, https://www.marks-clerk.com/insights/latest-insights/102kt4b-ai-operated-swarm-intelligence-flexible-self-organizing-drone-fleets/
  42. Lethal autonomous weapon – Wikipedia, accessed September 26, 2025, https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
  43. (PDF) DARPA OFFSET: A Vision for Advanced Swarm Systems through Agile Technology Development and Experimentation – ResearchGate, accessed September 26, 2025, https://www.researchgate.net/publication/367321998_DARPA_OFFSET_A_Vision_for_Advanced_Swarm_Systems_through_Agile_Technology_Development_and_ExperimentationDARPA_OFFSET_A_Vision_for_Advanced_Swarm_Systems_through_Agile_Technology_Development_and_Exper
  44. DARPA OFFSET Second Swarm Sprint Pursuing State-of-the-Art Solutions – DSIAC, accessed September 26, 2025, https://dsiac.dtic.mil/articles/darpa-offset-second-swarm-sprint-pursuing-state-of-the-art-solutions/
  45. OFFSET: Offensive Swarm-Enabled Tactics – Director Operational Test and Evaluation, accessed September 26, 2025, https://www.dote.osd.mil/News/What-DOT-Es-Following/Following-Display/Article/3348613/offset-offensive-swarm-enabled-tactics/
  46. DoD Directive 3000.09, “Autonomy in Weapon Systems,” January 25 …, accessed September 26, 2025, https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
  47. Cognitive Electronic Warfare and the Fight for Spectrum Superiority – Parallax Research, accessed September 26, 2025, https://parallaxresearch.org/news/blog/cognitive-electronic-warfare-and-fight-spectrum-superiority
  48. August/September 2018 – Cognitive Electronic Warfare: Radio Frequency Spectrum Meets Machine Learning | Avionics Digital Edition – Aviation Today, accessed September 26, 2025, https://interactive.aviationtoday.com/avionicsmagazine/august-september-2018/cognitive-electronic-warfare-radio-frequency-spectrum-meets-machine-learning/
  49. AI-Enabled Electronic Warfare – Defense One, accessed September 26, 2025, https://www.defenseone.com/insights/cards/how-ai-changing-way-warfighters-make-decisions-and-fight-battlefield/3/
  50. Cognitive Electronic Warfare: Using AI to Solve EW Problems – YouTube, accessed September 26, 2025, https://www.youtube.com/watch?v=oO6piqNroiI
  51. HX-2 – AI Strike Drone – Helsing, accessed September 26, 2025, https://helsing.ai/hx-2
  52. Agent-Based Anti-Jamming Techniques for UAV Communications in Adversarial Environments: A Comprehensive Survey – arXiv, accessed September 26, 2025, https://arxiv.org/html/2508.11687v1
  53. ScaleFlyt Antijamming: against interference and jamming threats | Thales Group, accessed September 26, 2025, https://www.thalesgroup.com/en/markets/aerospace/drone-solutions/scaleflyt-antijamming-against-interference-and-jamming-threats
  54. [2508.11687] Agent-Based Anti-Jamming Techniques for UAV Communications in Adversarial Environments: A Comprehensive Survey – arXiv, accessed September 26, 2025, https://www.arxiv.org/abs/2508.11687
  55. AI-controlled drone swarms arms race to dominate the near-future battlefield – YouTube, accessed September 26, 2025, https://www.youtube.com/watch?v=h2O17B4R7Rc
  56. Humans on the Loop vs. In the Loop: Striking the Balance in Decision-Making – Trackmind, accessed September 26, 2025, https://www.trackmind.com/humans-in-the-loop-vs-on-the-loop/
  57. What is C2 (Command and Control) & How Does it Work? – Fly Eye, accessed September 26, 2025, https://www.flyeye.io/drone-acronym-c2/
  58. The Command and Control Element | GEOG 892: Unmanned Aerial Systems, accessed September 26, 2025, https://www.e-education.psu.edu/geog892/node/8
  59. Human-in-the-Loop (HITL) vs Human-on-the-Loop (HOTL) – Checkify, accessed September 26, 2025, https://checkify.com/article/human-on-the-loop-hotl/
  60. The (im)possibility of meaningful human control for lethal autonomous weapon systems – Humanitarian Law & Policy Blog, accessed September 26, 2025, https://blogs.icrc.org/law-and-policy/2018/08/29/im-possibility-meaningful-human-control-lethal-autonomous-weapon-systems/
  61. Understanding “The Loop”: Humans and the Next Drone Generations – Brookings Institution, accessed September 26, 2025, https://www.brookings.edu/wp-content/uploads/2016/06/27-humans-drones-marra-mcneil.pdf
  62. In On or Out of the Loop, accessed September 26, 2025, https://cms.polsci.ku.dk/publikationer/in-on-or-out-of-the-loop/In_On_or_Out_of_the_Loop.pdf
  63. Military AI Challenges Human Accountability – CIP – Center for International Policy, accessed September 26, 2025, https://internationalpolicy.org/publications/military-ai-challenges-human-accountability/
  64. Lethal Autonomous Weapon Systems (LAWS): Accountability, Collateral Damage, and the Inadequacies of International Law – Temple iLIT, accessed September 26, 2025, https://law.temple.edu/ilit/lethal-autonomous-weapon-systems-laws-accountability-collateral-damage-and-the-inadequacies-of-international-law/
  65. Defense Official Discusses Unmanned Aircraft Systems, Human Decision-Making, AI, accessed September 26, 2025, https://www.war.gov/News/News-Stories/Article/Article/2491512/defense-official-discusses-unmanned-aircraft-systems-human-decision-making-ai/
  66. Department of Defense Directive 3000.09 – Wikipedia, accessed September 26, 2025, https://en.wikipedia.org/wiki/Department_of_Defense_Directive_3000.09
  67. How Meaningful is “Meaningful Human Control” in LAWS Regulation? – Lieber Institute, accessed September 26, 2025, https://lieber.westpoint.edu/how-meaningful-is-meaningful-human-control-laws-regulation/
  68. A MEANINGFUL FLOOR FOR “MEANINGFUL HUMAN CONTROL” Rebecca Crootof *, accessed September 26, 2025, https://sites.temple.edu/ticlj/files/2017/02/30.1.Crootof-TICLJ.pdf
  69. Full article: Imagining Meaningful Human Control: Autonomous Weapons and the (De-) Legitimisation of Future Warfare – Taylor & Francis Online, accessed September 26, 2025, https://www.tandfonline.com/doi/full/10.1080/13600826.2023.2233004
  70. Meaningful Human Control over Autonomous Systems: A Philosophical Account – PMC, accessed September 26, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC7806098/
  71. Problems with autonomous weapons – Stop Killer Robots, accessed September 26, 2025, https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/
  72. A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making, accessed September 26, 2025, https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
  73. Full article: The ethical legitimacy of autonomous Weapons systems: reconfiguring war accountability in the age of artificial Intelligence – Taylor & Francis Online, accessed September 26, 2025, https://www.tandfonline.com/doi/full/10.1080/16544951.2025.2540131
  74. Mapping Artificial Intelligence to the Naval Tactical Kill Chain, accessed September 26, 2025, https://nps.edu/documents/10180/142489929/NEJ+Hybrid+Force+Issue_Mapping+AI+to+The+Naval+Kill+Chain.pdf
  75. How are Drones Changing Modern Warfare? | Australian Army Research Centre (AARC), accessed September 26, 2025, https://researchcentre.army.gov.au/library/land-power-forum/how-are-drones-changing-modern-warfare
  76. Air Force Battle Lab advances the kill chain with AI, C2 Innovation – AF.mil, accessed September 26, 2025, https://www.af.mil/News/Article-Display/Article/4241485/air-force-battle-lab-advances-the-kill-chain-with-ai-c2-innovation/
  77. Shortening the Kill Chain with Artificial Intelligence – AutoNorms, accessed September 26, 2025, https://www.autonorms.eu/shortening-the-kill-chain-with-artificial-intelligence/
  78. Artificial Intelligence, Drone Swarming and Escalation Risks in Future Warfare James Johnson – DORAS | DCU Research Repository, accessed September 26, 2025, http://doras.dcu.ie/25505/1/RUSI%20Journal%20JamesJohnson%20%282020%29.pdf
  79. Unmanned Aerial Systems’ Influences on Conflict Escalation Dynamics – CSIS, accessed September 26, 2025, https://www.csis.org/analysis/unmanned-aerial-systems-influences-conflict-escalation-dynamics
  80. Democratizing harm: Artificial intelligence in the hands of nonstate actors | Brookings, accessed September 26, 2025, https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
  81. Non state actors can now create lethal autonomous weapons from civilian products, accessed September 26, 2025, https://www.weforum.org/stories/2022/05/regulate-non-state-use-arms/
  82. The Rising Threat of Non-State Actor Commercial Drone Use: Emerging Capabilities and Threats – Combating Terrorism Center, accessed September 26, 2025, https://ctc.westpoint.edu/the-rising-threat-of-non-state-actor-commercial-drone-use-emerging-capabilities-and-threats/
  83. The Role of Drones in Future Terrorist Attacks – AUSA, accessed September 26, 2025, https://www.ausa.org/sites/default/files/publications/LWP-137-The-Role-of-Drones-in-Future-Terrorist-Attacks_0.pdf
  84. On the Horizon: The Ukraine War and the Evolving Threat of Drone Terrorism, accessed September 26, 2025, https://ctc.westpoint.edu/on-the-horizon-the-ukraine-war-and-the-evolving-threat-of-drone-terrorism/
  85. Pentagon Fast-Tracks AI Into Drone Swarm Defense – Warrior Maven, accessed September 26, 2025, https://warriormaven.com/news/land/pentagon-fast-tracks-ai-into-drone-swarm-defense
  86. Counter-Drone Systems – RF Drone Detection and Jamming Systems – Mistral Solutions, accessed September 26, 2025, https://www.mistralsolutions.com/counter-drone-systems/
  87. The Future of AI and Automation in Anti-Drone Detection, accessed September 26, 2025, https://www.nqdefense.com/the-future-of-ai-and-automation-in-anti-drone-detection/
  88. 8 Counter-UAS Challenges and How to Overcome Them – Robin Radar, accessed September 26, 2025, https://www.robinradar.com/blog/counter-uas-challenges
  89. The impact of AI on counter-drone measures – Skylock, accessed September 26, 2025, https://www.skylock1.com/the-impact-of-ai-on-counter-drone-measures/
  90. How AI-Powered Anti-Drone Solutions Transform Defense Operations? – – Datategy, accessed September 26, 2025, https://www.datategy.net/2025/07/15/how-ai-powered-anti-drone-solutions-transform-defense-operations/
  91. A counter to drone swarms: high-power microwave weapons | The Strategist, accessed September 26, 2025, https://www.aspistrategist.org.au/a-counter-to-drone-swarms-high-power-microwave-weapons/
  92. Honeywell Unveils AI-Enabled Counter-Drone Swarm System – National Defense Magazine, accessed September 26, 2025, https://www.nationaldefensemagazine.org/articles/2024/9/17/honeywell-unveils-ai-enabled-uas-system-to-counter-swarm-drones
  93. Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty | ASIL, accessed September 26, 2025, https://www.asil.org/insights/volume/29/issue/1
  94. Compliance with and enforcement of IHL – GSDRC, accessed September 26, 2025, https://gsdrc.org/topic-guides/international-legal-frameworks-for-humanitarian-action/challenges/compliance-with-and-enforcement-of-ihl/
  95. International Humanitarian Law – European Commission, accessed September 26, 2025, https://civil-protection-humanitarian-aid.ec.europa.eu/what/humanitarian-aid/international-humanitarian-law_en
  96. www.ie.edu, accessed September 26, 2025, https://www.ie.edu/insights/articles/when-ai-meets-the-laws-of-war/#:~:text=Legal%20and%20Ethical%20Considerations&text=To%20start%2C%20determining%20accountability%20for,autonomous%20nature%20of%20these%20systems.
  97. The Ethics of Automated Warfare and Artificial Intelligence, accessed September 26, 2025, https://www.cigionline.org/the-ethics-of-automated-warfare-and-artificial-intelligence/
  98. International Discussions Concerning Lethal Autonomous Weapon Systems – Congress.gov, accessed September 26, 2025, https://www.congress.gov/crs-product/IF11294
  99. CHALLENGES IN REGULATING LETHAL AUTONOMOUS WEAPONS UNDER INTERNATIONAL LAW, accessed September 26, 2025, https://www.swlaw.edu/sites/default/files/2021-03/3.%20Reeves%20%5Bp.%20101-118%5D.pdf
  100. Understanding the Global Debate on Lethal Autonomous Weapons Systems: An Indian Perspective | Carnegie Endowment for International Peace, accessed September 26, 2025, https://carnegieendowment.org/research/2024/08/understanding-the-global-debate-on-lethal-autonomous-weapons-systems-an-indian-perspective?lang=en
  101. Pros and Cons of Autonomous Weapons Systems – Army University Press, accessed September 26, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
  102. Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can – Scholarship Archive, accessed September 26, 2025, https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=2804&context=faculty_scholarship

The Algorithmic Battlefield: A Global Ranking and Strategic Analysis of Military AI Capabilities

The global security landscape is being fundamentally reshaped by the rapid integration of artificial intelligence (AI) into military forces, heralding a new era of “intelligentized” warfare. This report provides a comprehensive assessment and ranking of the world’s top 10 nations in military AI, based on a multi-factor methodology evaluating national strategy, foundational ecosystem, military implementation, and operational efficacy. The analysis reveals a distinct, bipolar competition at the highest tier, followed by a diverse and competitive group of strategic contenders and niche specialists.

Top-Line Findings: The United States and the People’s Republic of China stand alone in Tier I, representing two competing paradigms for developing and deploying military AI. The U.S. leverages a dominant commercial technology sector and massive private investment, while China employs a state-directed, whole-of-nation “Military-Civil Fusion” strategy. While the U.S. currently maintains a significant lead, particularly in foundational innovation and investment, China is rapidly closing the gap in application and scale.

Tier II is populated by a mix of powers. Russia, despite technological and economic constraints, has proven adept at asymmetric innovation, battle-hardening AI for electronic warfare and unmanned systems in Ukraine. Israel stands out for its unparalleled operational deployment of AI in high-intensity combat, particularly for targeting. The United Kingdom is the clear leader among European allies, followed by France, which is aggressively pursuing a sovereign AI capability. Rising powers like India and South Korea are leveraging their unique strengths—a vast talent pool and a world-class hardware industry, respectively—to build formidable programs. Germany and Japan are accelerating their historically cautious approaches in response to a deteriorating security environment, while Canada focuses on niche contributions within its alliance structures.

Key Strategic Insight: True leadership in military AI is determined not by technological prowess alone, but by a nation’s ability to create a cohesive ecosystem that integrates technology, data, investment, talent, and—most critically—military doctrine. The core of the U.S.-China competition is a contest between America’s dynamic but sometimes disjointed commercial-military model and China’s centrally commanded but potentially less innovative state-driven model. The ultimate victor will be the nation that can most effectively translate AI potential into tangible, scalable, and doctrinally integrated decision advantage on the battlefield.

Emerging Trends: The conflict in Ukraine has become the world’s foremost laboratory for AI in warfare, demonstrating that battlefield necessity is the most powerful catalyst for innovation. This has validated the strategic importance of low-cost, attritable autonomous systems, a lesson the U.S. is attempting to institutionalize through its Replicator initiative. Furthermore, the analysis underscores the critical strategic dependence on foundational hardware, particularly advanced semiconductors and cloud computing infrastructure, which represents a key advantage for the U.S. and its allies and a significant vulnerability for China. Finally, a clear divergence is emerging in doctrinal and ethical approaches, with some nations rapidly fielding systems for immediate effect while others prioritize developing more deliberate, human-in-the-loop frameworks.

Rank  Country          Overall Score (100)
1     United States    94.5
2     China            79.0
3     Israel           61.5
4     Russia           55.5
5     United Kingdom   51.0
6     France           45.5
7     South Korea      43.0
8     India            41.0
9     Germany          37.5
10    Japan            35.0

The New Topography of Warfare: The Rise of Military AI

The character of warfare is undergoing its most profound transformation since the advent of nuclear weapons. The shift from the “informatized” battlefield of the late 20th century to the “intelligentized” battlefield of the 21st is not an incremental evolution but a genuine revolution in military affairs (RMA). Artificial intelligence is not merely another tool; it is a foundational, general-purpose technology, much like electricity, that is diffusing across every military function and fundamentally altering the calculus of combat.1 This transformation is defined by its capacity to collapse decision-making cycles, enable autonomous operations at unprecedented speed and scale, and create entirely new vectors for conflict.

The core military applications of AI are already reshaping contemporary battlefields. They span a wide spectrum, from enhancing command and control (C2) and processing vast streams of intelligence, surveillance, and reconnaissance (ISR) data to optimizing logistics, conducting cyber and information operations, and fielding increasingly autonomous weapon systems.1 The war in Ukraine serves as a stark preview of this new reality. The widespread use of unmanned aerial vehicles (UAVs), often augmented with AI for targeting and navigation, is reported to account for 70-80% of battlefield casualties.4 AI-based targeting has dramatically increased the accuracy of low-cost first-person-view (FPV) drones from a baseline of 30-50% to approximately 80%, demonstrating a tangible increase in lethality.4

This proliferation of cheap, smart, and lethal systems is challenging the decades-long dominance of expensive, exquisite military platforms. A commercial drone enhanced with an AI targeting module costing as little as $25 can now threaten a multi-million-dollar main battle tank, creating an extreme cost imbalance that upends traditional force-on-force calculations.4 This dynamic is forcing a strategic re-evaluation within the world’s most advanced militaries. The future battlefield may not be won by the nation with the most sophisticated fighter jet, but by the one that can most effectively deploy, coordinate, and sustain intelligent swarms of attritable systems. This reality is the direct impetus for major strategic initiatives like the U.S. Department of Defense’s (DoD) Replicator program, which aims to counter adversary mass with a new form of American mass built on thousands of autonomous systems.5

This technological upheaval is unfolding within a clear geopolitical context: an intensifying “artificial intelligence arms race”.7 This competition is most acute between the United States and China, both of which recognize AI as a decisive element of future military power and are racing to integrate it into their strategies.1 However, they are not the only actors. A host of other nations are making significant investments, developing niche capabilities, and in some cases, gaining invaluable operational experience, creating a complex and dynamic global landscape. Understanding this new topography of warfare is essential for navigating the strategic challenges of the coming decades.

Global Military AI Power Rankings, 2025

The following ranking provides a holistic assessment of national military AI capabilities. It is derived from a composite score based on the detailed methodology outlined in the Appendix of this report. The index evaluates each nation across four equally weighted pillars: National Strategy & Investment, Foundational Ecosystem, Military Implementation & Programs, and Operational Efficacy & Deployment. This structure provides a comprehensive view, moving beyond simple technological metrics to assess a nation’s complete capacity to translate AI potential into effective military power.

The scores reveal a clear two-tiered structure. Tier I is exclusively occupied by the United States and China, who are in a league of their own. Tier II comprises a competitive and diverse group of nations, each with distinct strengths and strategic approaches, from the battle-tested pragmatism of Israel and Russia to the alliance-focused innovation of the United Kingdom and the sovereign ambitions of France.

Rank  Country          Overall  Strategy &   Foundational  Military         Operational
                       Score    Investment   Ecosystem     Implementation   Efficacy
1     United States    94.5     92           98            93               95
2     China            79.0     90           85            78               63
3     Israel           61.5     55           65            58               68
4     Russia           55.5     58           45            54               65
5     United Kingdom   51.0     60           58            45               41
6     France           45.5     57           48            42               35
7     South Korea      43.0     50           52            38               32
8     India            41.0     52           47            35               30
9     Germany          37.5     45           44            33               28
10    Japan            35.0     40           42            30               28
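Because the four pillars are equally weighted, each country’s overall score is simply the arithmetic mean of its four pillar scores, and the ranking follows from sorting those means. An illustrative check, with the pillar values transcribed from the table above:

```python
# Pillar order: Strategy & Investment, Foundational Ecosystem,
# Military Implementation, Operational Efficacy (from the table above).
PILLAR_SCORES = {
    "United States": (92, 98, 93, 95),
    "China": (90, 85, 78, 63),
    "Israel": (55, 65, 58, 68),
    "Russia": (58, 45, 54, 65),
    "United Kingdom": (60, 58, 45, 41),
    "France": (57, 48, 42, 35),
    "South Korea": (50, 52, 38, 32),
    "India": (52, 47, 35, 30),
    "Germany": (45, 44, 33, 28),
    "Japan": (40, 42, 30, 28),
}

def overall(pillars):
    """Equal-weight composite: the arithmetic mean of the four pillar scores."""
    return sum(pillars) / len(pillars)

# Rank countries by composite score, highest first.
ranking = sorted(PILLAR_SCORES, key=lambda c: overall(PILLAR_SCORES[c]), reverse=True)

for rank, country in enumerate(ranking, start=1):
    print(f"{rank:>2}  {country:<15} {overall(PILLAR_SCORES[country]):.1f}")
```

Running this reproduces the table exactly (e.g., the United States at 94.5 and China at 79.0), confirming the overall scores are internally consistent with the stated equal-weight methodology.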

Tier I Analysis: The Bipolar AI World Order

The global military AI landscape is dominated by two superpowers, the United States and China. They are not merely the top two contenders; they represent fundamentally different models for harnessing a transformative technology for national power. Their competition is not just a race for better algorithms but a clash of entire systems—one driven by a vibrant, chaotic commercial ecosystem, the other by the centralized, unyielding will of the state.

United States: The Commercial-Military Vanguard

The United States holds the top position in military AI, a status derived from an unparalleled private-sector innovation engine, overwhelming financial investment, and a clear strategic pivot towards integrating commercial technology at unprecedented speed and scale. Its strength lies in its dynamic, bottom-up ecosystem. However, this model is not without friction; the U.S. faces significant challenges in overcoming bureaucratic acquisition hurdles, bridging the cultural gap between Silicon Valley and the Pentagon, and navigating complex ethical debates that can temper the pace of adoption.

National Strategy and Vision

The U.S. approach has matured from establishing foundational principles to prioritizing agile adoption. The 2018 DoD AI Strategy laid the groundwork, directing the department to accelerate AI adoption and establishing the Joint Artificial Intelligence Center (JAIC) as a focal point.9 This initial strategy emphasized the need to empower, not replace, servicemembers and to lead in the responsible and ethical use of AI.9

Building on this, the 2023 Data, Analytics, and AI Adoption Strategy, developed by the Chief Digital and AI Officer (CDAO), marks a significant evolution.10 It supersedes the earlier documents and shifts the focus from a handful of specific capabilities to strengthening the entire organizational environment for continuous AI deployment. The strategy’s central objective is to achieve and maintain “decision advantage” across the competition continuum.10 It prescribes an agile approach to development and delivery, targeting five specific outcomes:

  1. Superior battlespace awareness and understanding
  2. Adaptive force planning and application
  3. Fast, precise, and resilient kill chains
  4. Resilient sustainment support
  5. Efficient enterprise business operations 10

This strategic framework is supported by a clear hierarchy of needs: quality data, governance, analytics, and responsible AI assurance, all managed under the centralizing authority of the CDAO.10

Investment and Foundational Ecosystem

The scale of U.S. investment in AI is staggering and unmatched globally. In 2024, private AI investment in the U.S. reached $109.1 billion, a figure nearly twelve times greater than that of China.12 This torrent of private capital fuels a hyper-competitive ecosystem of startups and established tech giants, creating a vast wellspring of innovation from which the military can draw.

This private investment is mirrored by a dramatic increase in defense-specific spending. The potential value of DoD AI-related contracts surged by nearly 1,200% in a single year, from $355 million to $4.6 billion between 2022 and 2023, with the DoD driving almost the entire increase.14 The Pentagon’s fiscal year 2025 budget request includes over $12 billion for unmanned systems and AI autonomy programs, signaling a firm, top-level commitment.16

This financial dominance underpins a foundational ecosystem that leads the world in nearly every metric. The U.S. possesses the largest and highest-quality pool of AI talent, is home to the world’s leading research universities, and dominates open-source contributions.17 In 2023, U.S.-based institutions produced 61 notable machine learning models, compared to just 15 from China.19 Crucially, the U.S. and its close allies control the most critical chokepoints of the AI hardware supply chain, including high-end semiconductor design (Nvidia, Intel, AMD) and manufacturing, as well as the global cloud computing infrastructure (Amazon Web Services, Microsoft Azure, and Google Cloud), which provides the raw computational power necessary for training and deploying advanced AI models.20

Flagship Programs and Demonstrated Efficacy

The U.S. has moved beyond theoretical research to the development and operational deployment of key military AI systems.

  • Project Maven (Algorithmic Warfare Cross-Functional Team): Initially launched in 2017 to use machine learning for analyzing full-motion video from drones, Maven has evolved into the Pentagon’s flagship AI project for targeting.22 It is a sophisticated data-fusion platform that integrates information from satellites, sensors, and communications intercepts to identify and prioritize potential targets.22 Its effectiveness has been proven in the “Scarlet Dragon” series of live-fire exercises, where it enabled an AI-driven kill chain from target identification in satellite imagery to a successful strike by an M142 HIMARS rocket system.22 Maven has been deployed in active combat zones, assisting with targeting for airstrikes in Iraq, Syria, and Yemen, and has been used to provide critical intelligence to Ukrainian forces.22 In 2023, the geospatial intelligence (GEOINT) aspects of Maven were transferred to the National Geospatial-Intelligence Agency (NGA), signifying its maturation from a pilot project into an enterprise-level capability for the entire intelligence community.23
  • Replicator Initiative: Unveiled in August 2023, Replicator is the DoD’s doctrinal and industrial response to the lessons of the Ukraine war and the challenge of China’s military mass.5 The initiative’s stated goal is to field thousands of “all-domain, attritable autonomous” (ADA2) systems—small, cheap, and intelligent drones—by August 2025.5 Replicator has a dual purpose: to deliver a tangible warfighting capability that can overwhelm an adversary and to force a revolution in the Pentagon’s slow-moving acquisition process by leveraging the speed and innovation of the commercial sector.27 Approximately 75% of the companies involved are non-traditional defense contractors, a deliberate effort to break the traditional defense-industrial mold.27 However, the program has reportedly faced significant challenges, including software integration issues and systems that were not ready for scaling, highlighting the persistent “valley of death” between prototype and mass production that plagues DoD procurement.28

The development of these programs reveals a distinct philosophy of AI-enabled command. U.S. strategic documents and program designs consistently emphasize that AI is a tool to “empower, not replace” the human warfighter.9 The Army’s doctrinal approach to integrating AI into its targeting cycle explicitly maintains that human commanders must remain the “final arbiters of lethal force”.29 This “human-in-the-loop” model, where AI provides recommendations and accelerates analysis but a human makes the critical decision, is a core tenet of the American approach.

United States: Military AI Profile

National Strategy: 2023 Data, Analytics, & AI Adoption Strategy; focus on “decision advantage” through agile adoption.
Key Institutions: Chief Digital and AI Officer (CDAO), Defense Advanced Research Projects Agency (DARPA), Defense Innovation Unit (DIU), National Security Agency (NSA) AI Security Center.
Investment Focus: Massive private-sector investment ($109.1B in 2024); significant DoD budget increases for AI and autonomy ($12B+ in FY25 request).
Flagship Programs: Project Maven (AI-enabled targeting), Replicator Initiative (attritable autonomous systems).
Foundational Strengths: World-leading AI talent, R&D, and commercial tech sector; dominance in semiconductors and cloud computing.
Demonstrated Efficacy: Project Maven battle-tested in the Middle East and used to support Ukraine; advanced exercises like Scarlet Dragon prove AI kill-chain concepts.
Key Challenges: Bureaucratic acquisition processes (“valley of death”), ethical constraints slowing adoption, potential for C2 doctrine to be outpaced by adversaries.

China: The State-Directed Challenger

The People’s Republic of China is the only nation with the scale, resources, and strategic focus to challenge U.S. preeminence in military AI. Its approach is the antithesis of the American model: a top-down, state-directed effort that harnesses the entirety of its national power to achieve a singular goal. Through its “Military-Civil Fusion” strategy, a clear doctrinal commitment to “intelligentized warfare,” and access to vast data resources, China is rapidly developing and scaling AI capabilities. While it may lag the U.S. in foundational innovation and high-end hardware, its ability to direct and integrate technology for state purposes presents a formidable challenge.

National Strategy and Doctrine

China’s ambition is codified in a series of high-level strategic documents. The State Council’s 2017 “New Generation Artificial Intelligence Development Plan” serves as the national blueprint, with the explicit goal of making China the world’s “major AI innovation center” by 2030, identifying national defense as a key area for application.14

This national ambition is translated into military doctrine through the concept of “intelligentized warfare” (智能化战争). This is the official third stage of the People’s Liberation Army’s (PLA) modernization, following mechanization and informatization.1 It is not simply about adding AI to existing systems; it is a holistic vision for re-engineering the PLA to operate at machine speed, infusing AI into every facet of warfare to gain decision superiority over its adversaries.31 The PLA aims to achieve this transformation by 2035 and become a “world-class” military by mid-century.32

The engine driving this transformation is the national strategy of “Military-Civil Fusion” (军民融合). This policy erases the institutional barriers between China’s civilian tech sector and its military-industrial complex, compelling private companies, universities, and state-owned enterprises to contribute to the PLA’s technological advancement.8 This allows the PLA to directly leverage the innovations of China’s tech giants—such as Baidu, Alibaba, and Tencent (BAT)—for military purposes, creating a deeply integrated ecosystem designed to “leapfrog” U.S. capabilities.8

Investment and Foundational Ecosystem

While China’s publicly reported private AI investment ($9.3 billion in 2024) is an order of magnitude smaller than that of the U.S., this figure understates the true scale of China’s effort.12 The state plays a much more direct role, with government-backed guidance funds targeting a staggering $1.86 trillion for investment in strategic technologies like AI.14

This state-directed investment has cultivated a vast domestic ecosystem. China leads the world in the absolute number of AI-related scientific publications and patents, indicating a massive and active research base.12 It possesses the world’s second-largest pool of AI engineers and is making concerted efforts to retain this talent domestically.17 While U.S. institutions still produce more top-tier, notable AI models, Chinese models have rapidly closed the performance gap on key benchmarks to near-parity.12 A crucial advantage for China is its ability to generate and access massive, state-controlled datasets, particularly from its extensive domestic surveillance apparatus. While this data is not directly military in nature, the experience gained in deploying and scaling AI systems across a population of over a billion people provides invaluable, if morally troubling, operational expertise that can be indirectly applied to military challenges.37

Flagship Programs and Ambitions

The PLA’s pursuit of intelligentized warfare is centered on several key concepts and programs designed to contest U.S. military dominance.

  • “Command Brain” (指挥大脑): This is the PLA’s conceptual centerpiece for an AI-driven command and control system. It is designed to be the nerve center for “multi-domain precision warfare,” the PLA’s concept for defeating the U.S. military by attacking the networked nodes that connect its forces.32 The Command Brain would ingest and fuse immense quantities of ISR data at machine speed, identify adversary vulnerabilities in real-time, and generate or recommend optimal courses of action, thereby compressing the OODA loop and seizing decision advantage.32 The PLA has already begun testing AI systems to assist with artillery targeting and is reportedly using the civilian AI model DeepSeek for non-combat tasks like medical planning and personnel management, signaling a willingness to integrate commercial tech directly.32
  • Autonomous Systems and Swarming: Leveraging its world-leading position in commercial drone manufacturing, the PLA is aggressively pursuing military applications for autonomous systems, particularly drone swarms.32 It is also developing “loyal wingman” concepts, such as the FH-97A autonomous aircraft designed to fly alongside crewed fighters, mirroring U.S. efforts.32
  • Cognitive and Information Warfare: PLA strategists see AI as a critical tool for cognitive warfare, using it to shape the information environment and affect an adversary’s will to fight.8 This aligns with China’s broader strategic emphasis on winning wars without fighting, or shaping the conditions for victory long before kinetic conflict begins.

The Chinese approach to AI in command and control appears to diverge philosophically from the American model. While U.S. doctrine emphasizes AI as a decision-support tool for a human commander, PLA writings on intelligentization focus on using AI to overcome the inherent cognitive limitations of human decision-makers in complex, high-speed, multi-domain environments.8 The development of an “AI military commander” for use in large-scale wargaming simulations suggests an ambition to create a more deeply integrated human-machine command system, where the AI’s role extends beyond simple recommendation to active participation in planning and execution.2 This points toward a potential future where a PLA command structure, optimized for machine-speed analysis, could outpace a U.S. structure that remains doctrinally bound to human-centric decision cycles, creating a critical vulnerability in a crisis.

China: Military AI Profile

National Strategy: New Generation AI Development Plan (2017); Military-Civil Fusion (MCF); doctrinal focus on “Intelligentized Warfare.”
Key Institutions: Central Military Commission (CMC), People’s Liberation Army (PLA) Strategic Support Force (SSF), state-owned defense enterprises, co-opted tech giants (BAT).
Investment Focus: Massive state-directed investment through guidance funds; focus on dual-use technologies and domestic application.
Flagship Programs: “Command Brain” (AI for C2), autonomous swarming systems, “loyal wingman” concepts (FH-97A), AI for cognitive warfare.
Foundational Strengths: World’s largest data pools, massive talent base, leads in AI publications/patents, world-leading drone manufacturing industry.
Demonstrated Efficacy: Extensive deployment of AI for domestic surveillance provides scaling experience; testing AI for artillery targeting; DeepSeek model used for non-combat military tasks.
Key Challenges: Lagging in foundational model innovation, critical dependency on foreign high-end semiconductors, potential for top-down system to stifle creativity.

Tier II Analysis: The Strategic Contenders and Niche Specialists

Beyond the bipolar competition of the United States and China, a diverse second tier of nations is actively developing and deploying military AI capabilities. These countries, while lacking the sheer scale of the superpowers, possess significant technological prowess, unique strategic drivers, and in some cases, invaluable combat experience that make them formidable players in their own right. This tier is characterized by a variety of approaches, from the asymmetric pragmatism of Russia to the battle-hardened agility of Israel and the alliance-integrated strategies of key U.S. allies.

Russia: The Asymmetric Innovator

Lacking the vast economic resources and deep commercial technology base of the U.S. and China, Russia has adopted a pragmatic and asymmetric approach to military AI. Its strategy is not to compete head-on in developing the most advanced foundational models, but to incrementally integrate “good enough” AI into its existing areas of military strength—namely electronic warfare (EW), cyber operations, and unmanned systems. The goal is to develop force-multiplying capabilities that can disrupt and debilitate a more technologically advanced adversary.38

Russia’s strategic thinking is guided by its “National Strategy on the Development of Artificial Intelligence until 2030” and the Ministry of Defense’s 2022 “Concept” for AI use, though its most important developmental driver is the ongoing war in Ukraine.39 The conflict has become Russia’s primary laboratory for testing and refining AI applications under combat conditions. This includes developing AI-powered drones, such as the ZALA Lancet loitering munition, that are more resilient to EW and capable of autonomous target recognition and even rudimentary swarming.39 AI is also being integrated into established platforms like the Pantsir, S-300, and S-400 air defense systems to improve target tracking and engagement efficiency against complex threats like drones and cruise missiles.39

Despite these battlefield adaptations, Russia faces significant headwinds. It lags considerably in foundational AI research and investment and is hampered by international sanctions that restrict its access to high-end hardware like semiconductors.40 Its domestic technology sector is a fraction of the size of its American and Chinese counterparts.39 A particularly concerning aspect of Russia’s program is its stated intent to integrate AI into its nuclear command, control, and communications (C3) systems, including the automated security for its Strategic Rocket Forces. This pursuit raises profound questions about strategic stability and the risk of accidental or automated escalation in a crisis.42

Russia: Military AI Profile

National Strategy: Pragmatic and utilitarian focus on asymmetric force multipliers; guided by 2030 National AI Strategy and 2022 MoD Concept.
Key Institutions: Ministry of Defense (MOD), military-industrial complex (e.g., Kalashnikov Concern for drones), academic research network.
Investment Focus: State-driven R&D focused on near-term military applications, particularly for unmanned systems and EW.
Flagship Programs: AI-enabled Lancet loitering munitions, integration of AI into air defense systems (Pantsir, S-400), AI for nuclear C3.
Foundational Strengths: Deep experience in EW and cyber operations; ability to rapidly iterate based on combat experience in Ukraine.
Demonstrated Efficacy: Widespread and effective use of AI-assisted drones and loitering munitions in Ukraine; demonstrated EW resilience.
Key Challenges: Significant lag in foundational AI research and investment; dependence on foreign components and impact of sanctions; demographic decline.

Israel: The Battle-Hardened Implementer

Israel stands apart from all other nations in its unparalleled record of deploying sophisticated AI systems in high-intensity combat. Its military AI program is not defined by aspirational strategy documents but by a relentless, operationally-driven innovation cycle born of constant and existential security threats. This has allowed the Israel Defense Forces (IDF) to field effective, if highly controversial, AI capabilities at a pace that larger, more bureaucratic militaries cannot match.

The IDF’s Digital Transformation Division, established in 2019, is a key enabler of this effort, tasked with bringing cutting-edge civilian technology into the military.43 The results of this focus are most evident in the IDF’s targeting process. During the recent conflict in Gaza, Israel has made extensive use of at least two major AI systems:

  • “Habsora” (The Gospel): This AI-powered system analyzes vast amounts of surveillance data to automatically generate bombing target recommendations. It has reportedly increased the IDF’s target generation capacity from around 50 per year to over 100 per day, solving the long-standing problem of running out of targets in a sustained air campaign.2
  • “Lavender”: This is an AI database that has reportedly been used to identify and create a list of as many as 37,000 potential junior operatives affiliated with Hamas or Palestinian Islamic Jihad for targeting.2

The use of these systems marks the most extensive and systematic application of AI for target generation in the history of warfare.43 Beyond targeting, Israel integrates AI across its defense architecture. It is a key component of the Iron Dome and David’s Sling missile defense systems, where algorithms analyze sensor data to prioritize threats and calculate optimal intercept solutions.45 AI is also used for border surveillance, incorporating facial recognition and video analysis tools.45 This rapid and widespread implementation is fueled by Israel’s world-class technology ecosystem (“Silicon Wadi”), which boasts the highest per-capita density of AI talent in the world, and by deep technological partnerships with U.S. tech giants through programs like Project Nimbus.17

Israel: Military AI Profile

National Strategy: Operationally driven, bottom-up innovation focused on immediate security needs rather than grand strategy documents.
Key Institutions: IDF Digital Transformation Division, Unit 8200 (signals intelligence), robust defense industry (Elbit, Rafael), vibrant startup ecosystem.
Investment Focus: Strong venture capital scene; targeted government investment in defense tech; deep partnerships with U.S. tech firms (Project Nimbus).
Flagship Programs: “Habsora” (The Gospel) and “Lavender” (AI-assisted targeting systems), AI integration in missile defense (Iron Dome).
Foundational Strengths: World’s highest per-capita AI talent density; agile and innovative tech culture (“Silicon Wadi”); deep integration between military and tech sectors.
Demonstrated Efficacy: Unmatched record of deploying AI systems (Habsora, Lavender) at scale in high-intensity combat operations.
Key Challenges: International legal and ethical scrutiny over AI targeting practices; resource constraints compared to superpowers.

United Kingdom: The Leading Ally

The United Kingdom is firmly positioned as the leader among European nations and a crucial Tier II power, combining a strong national AI ecosystem with a clear strategic defense vision and deep integration with the United States. Its approach seeks to leverage its strengths in research and talent to maintain influence and interoperability within key alliances.

The UK’s 2022 Defence Artificial Intelligence Strategy articulates a vision to become “the world’s most effective, efficient, trusted and influential Defence organisation for our size”.47 This is complemented by service-specific plans, such as the British Army’s Approach to Artificial Intelligence, which focuses on delivering decision advantage from the “back office to the battlefield”.48 The UK has also sought to position itself as a global leader in the normative and ethical dimensions of AI, hosting the world’s first AI Safety Summit in 2023, which enhances its diplomatic influence in the field.19

The UK’s foundational ecosystem is a key strength. It ranks third globally in AI talent depth and density, with world-renowned research hubs in London, Cambridge, and Oxford creating a steady pipeline of expertise.17 While its private investment in AI is a distant third to the U.S. and China, it significantly outpaces other European nations.12 The country is home to major defense primes like BAE Systems, which are actively integrating AI into electronic warfare and autonomous platforms, as well as a dynamic startup scene that includes leading AI companies like ElevenLabs and Synthesia.50 This combination of strategic clarity, a robust talent base, and strong alliance partnerships solidifies the UK’s position as a top-tier military AI power.

United Kingdom: Military AI Profile

National Strategy: 2022 Defence AI Strategy; focus on being “effective, efficient, trusted, and influential.” Strong emphasis on ethical leadership and alliance interoperability.
Key Institutions: Ministry of Defence (MOD), Defence Science and Technology Laboratory (Dstl), major defense primes (BAE Systems), leading universities.
Investment Focus: Third-largest private AI investment globally; government funding for defense R&D.
Flagship Programs: Focus on cyber, stealth naval AI, and development of 6th-gen air power (Tempest program) with AI at its core.
Foundational Strengths: Ranks 3rd globally in AI talent; world-class research universities (Oxford, Cambridge); strong defense-industrial base.
Demonstrated Efficacy: Active in joint R&D and exercises with the U.S. and NATO; deploying AI-based cyber defense systems.
Key Challenges: Bridging the gap between research and scaled military procurement; maintaining competitiveness with superpower investment levels.

France: The Sovereign Contender

France’s military AI strategy is defined by its long-standing pursuit of “strategic autonomy.” Wary of becoming technologically dependent on either the United States or China, Paris is investing heavily in building a sovereign AI capability that allows it to maintain its freedom of action on the world stage. This ambition is backed by a robust industrial base and a clear, state-led implementation plan.

AI is officially designated a “priority for national defence,” with a strategy that emphasizes a responsible, controlled, and human-in-command approach to its development and use.52 The most significant step in realizing this vision was the creation in 2024 of the Ministerial Agency for Artificial Intelligence in Defense (MAAID). Modeled on the French Atomic Energy Commission, MAAID is designed to ensure France masters AI technology sovereignly.55 With an annual budget of €300 million and plans for its own dedicated “secret defense” supercomputer by 2025, MAAID represents a serious, centralized commitment to developing military-grade AI.55

This state-led effort is supported by a strong ecosystem. France is home to the Thales Group, a major European defense contractor heavily involved in integrating AI into radar and C2 systems, and a vibrant commercial AI scene.51 This includes Mistral AI, one of Europe’s most prominent foundational model developers and a direct competitor to U.S. giants like OpenAI and Anthropic, highlighting France’s capacity for cutting-edge innovation.50 By combining state direction with commercial dynamism, France is building a formidable and independent military AI capability.

France: Military AI Profile

National Strategy: Driven by “strategic autonomy”; 2019 AI & Defense Strategy emphasizes sovereign capability and responsible, human-controlled use.
Key Institutions: Ministerial Agency for Artificial Intelligence in Defense (MAAID), Direction générale de l’armement (DGA), Thales Group.
Investment Focus: Dedicated budget for MAAID (€300M annually); broader national investments to make France an “AI powerhouse.”
Flagship Programs: MAAID is the central program, focusing on developing sovereign AI for C2, intelligence, logistics, and cyberspace.
Foundational Strengths: Strong defense-industrial base (Thales); leading commercial AI companies (Mistral AI); high-quality engineering talent.
Demonstrated Efficacy: Active in European joint defense projects (e.g., FCAS); developing AI tools for intelligence analysis and operational planning.
Key Challenges: Balancing sovereign ambitions with the need for allied interoperability; scaling capabilities to compete with larger powers.

India: The Aspiring Power

Driven by acute strategic competition with China and a national imperative for self-reliance (“Atmanirbhar Bharat”), India is rapidly emerging as a major military AI power. It is building a comprehensive ecosystem from the ground up, leveraging its immense human capital and a growing defense-industrial base. While it currently faces challenges in infrastructure and bureaucratic efficiency, its trajectory is steep and its ambitions are clear.

India’s strategy is outlined in an ambitious 15-year defense roadmap that heavily features AI-driven battlefield management, autonomous systems, and cyber warfare capabilities.56 Institutionally, this is guided by the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA), which were established to coordinate research and guide project development.57 A notable aspect of India’s approach is its proactive development of a domestic ethical framework, known as ETAI (Evaluating Trustworthiness in AI), which is built on principles of reliability, safety, transparency, fairness, and privacy.57

India’s greatest asset is its vast and growing talent pool. It ranks among the top three nations globally for the number of AI professionals and the volume of AI research publications.35 The government is working to build the necessary infrastructure to support this talent, including through the AIRAWAT initiative, which provides a national AI computing backbone.57 On the implementation front, the Ministry of Defence has launched 75 indigenously developed AI products and is investing in a range of capabilities, including autonomous combat vehicles, robotic surveillance platforms, and drone swarms.41 These technological efforts are intended to be integrated within a broader military reform known as “theatreisation,” which aims to create the joint command structures necessary to conduct cohesive, AI-driven multi-domain operations.60

India: Military AI Profile

National Strategy: Ambitious 15-year defense roadmap focused on AI, autonomy, and self-reliance (“Atmanirbhar Bharat”).
Key Institutions: Defence AI Council (DAIC), Defence AI Project Agency (DAIPA), Defence Research and Development Organisation (DRDO).
Investment Focus: Growing defense budget with dedicated funds for AI projects; focus on nurturing a domestic defense startup ecosystem (DISC).
Flagship Programs: Development of autonomous combat vehicles, drone swarms, AI for ISR; national ethical framework (ETAI).
Foundational Strengths: Massive and growing AI talent pool; ranks 3rd in AI publications; strong and growing domestic software industry.
Demonstrated Efficacy: Deployed 75 indigenous AI products; using AI in intelligence and reconnaissance systems; procuring AI-powered UAVs.
Key Challenges: Bureaucratic procurement delays; infrastructure gaps; translating vast research output into scaled, fielded military capabilities.

South Korea: The Hardware Integrator

South Korea is leveraging its status as a global leader in hardware, robotics, and advanced manufacturing to pursue a sophisticated military AI strategy. Its approach is focused on integrating cutting-edge AI into next-generation military platforms to ensure a decisive technological overmatch against North Korea and to maintain a competitive edge in a technologically dense region.

The national goal is to become a “top-three AI nation” (AI G3), an ambition that extends directly to its defense sector.61 Military efforts are guided by the “Defense Innovation 4.0” project and the Army’s “TIGER 4.0” concept, which aim to systematically infuse AI across all warfighting functions.62 The Ministry of National Defense has outlined a clear, three-stage development plan, progressing from “cognitive intelligence” (AI for surveillance and reconnaissance) to “partially autonomous” capabilities, and ultimately to “judgmental intelligence” for complex manned-unmanned combat systems.63

South Korea’s primary strength is its world-class industrial and technological base. It is a dominant force in the global semiconductor market with giants like Samsung and SK Hynix, providing a critical hardware foundation.20 This is complemented by a robust robotics industry and a government committed to massive investments in AI computing infrastructure and R&D.61 This industrial prowess is being translated into tangible military projects, such as the development of the future K3 main battle tank, which will feature an unmanned turret and an AI-assisted fire control system for autonomous target tracking and engagement. Another key initiative is the development of unmanned “loyal wingman” aircraft to operate in tandem with the domestically produced KF-21 next-generation fighter jet, a concept designed to extend reach and reduce risk to human pilots.62

South Korea: Military AI Profile

National Strategy: “Defense Innovation 4.0”; goal to become a “top-three AI nation”; phased approach from ISR to manned-unmanned teaming.
Key Institutions: Ministry of National Defense (MND), Agency for Defense Development (ADD), Defense Acquisition Program Administration (DAPA), industrial giants (Hyundai Rotem, KAI).
Investment Focus: Significant government and private sector investment in AI, semiconductors, and robotics.
Flagship Programs: AI integration into future platforms like the K3 tank (AI-assisted targeting) and unmanned wingmen for the KF-21 fighter.
Foundational Strengths: World-leading semiconductor industry (Samsung, SK Hynix); strong robotics and advanced manufacturing base.
Demonstrated Efficacy: Advanced development of AI-enabled military hardware; exporting sophisticated conventional platforms with increasing levels of automation.
Key Challenges: National AI strategy has been described as vague on security specifics; coordinating roles between various ministries.

Germany: The Cautious Industrial Giant

As Europe’s largest economy and industrial powerhouse, Germany possesses a formidable technological base for developing military AI. However, its adoption has historically been cautious, constrained by political sensitivities and a strong societal emphasis on ethical considerations. The Zeitenwende (“turning point”) announced in response to Russia’s 2022 invasion of Ukraine has injected new urgency and funding into German defense modernization, significantly accelerating its military AI efforts.

Germany’s 2018 National AI Strategy identified security and defense as a key focus area, and the Bundeswehr (German Armed Forces) has since developed position papers outlining goals and fields of action for AI integration, particularly for its land forces.64 The German approach places a heavy emphasis on establishing a robust ethical and legal framework, rejecting fully autonomous lethal systems and mandating meaningful human control.67

This renewed focus is now translating into concrete programs. A key initiative is Uranos KI, a project to develop an AI-backed reconnaissance and analysis system to support the German brigade being deployed to Lithuania, directly addressing the Russian threat.68 Another significant effort is the GhostPlay project, run out of the Defense AI Observatory (DAIO) at Helmut Schmidt University, which is developing AI for enhanced defense decision-making.69 Germany’s traditional defense industry is being complemented by a burgeoning defense-tech startup scene, most notably the Munich-based company Helsing. Helsing specializes in developing AI software to upgrade existing military platforms and is a key supplier of AI-enabled reconnaissance and strike drones to Ukraine, demonstrating a newfound agility in the German defense ecosystem.68

Germany: Military AI Profile

  • National Strategy: 2018 National AI Strategy; strong focus on ethical frameworks and human control, accelerated by post-2022 Zeitenwende.
  • Key Institutions: Bundeswehr, Center for Digital and Technology Research (dtec.bw), Defense AI Observatory (DAIO), emerging startups (Helsing).
  • Investment Focus: Increased defense spending post-Zeitenwende; growing venture capital for defense-tech startups.
  • Flagship Programs: Uranos KI (AI reconnaissance), GhostPlay (AI for decision-making), development of AI-enabled drone capabilities.
  • Foundational Strengths: Europe’s leading industrial and manufacturing base; high-quality engineering and research talent.
  • Demonstrated Efficacy: Helsing’s AI-enabled drones are being used by Ukraine; Uranos KI has shown promising results in initial experiments.
  • Key Challenges: Overcoming historical and cultural aversion to military risk-taking; streamlining slow procurement processes; navigating complex EU regulations.

Japan: The Alliance-Integrated Technologist

Japan’s approach to military AI is shaped by a unique combination of factors: its post-war pacifist constitution, a rapidly deteriorating regional security environment, and its status as a technological powerhouse. This has resulted in a rapid but cautious push to adopt AI, primarily for defensive, surveillance, and logistical purposes, all in close technological and doctrinal alignment with its key ally, the United States.

Increasing threats from China and North Korea have prompted Japan to explicitly identify AI as a critical capability in its National Security Strategy, particularly for enhancing cybersecurity and information warfare defenses.72 In July 2024, the Ministry of Defense released its first basic policy on the use of AI, which formalizes its human-centric approach. The policy emphasizes maintaining human control over lethal force and explicitly prohibits the development of “killer robots” or lethal autonomous weapon systems (LAWS).73

Japan’s implementation strategy focuses on leveraging AI as a force multiplier in non-lethal domains to compensate for its demographic challenges. This includes developing remote surveillance systems, automating logistics and supply-demand forecasting, and creating AI-powered decision-support tools.73 A cornerstone of its R&D effort is the SAMURAI (Strategic Advancement of Mutual Runtime Assurance Artificial Intelligence) initiative, a formal project arrangement with the U.S. Department of War. This cooperative program focuses on developing Runtime Assurance (RTA) technology to ensure the safe and reliable performance of AI-equipped UAVs, with the goal of informing their future integration with next-generation fighter aircraft.76 This project highlights Japan’s strategy of deepening interoperability with the U.S. while advancing its own technological expertise in AI safety and assurance.

Japan: Military AI Profile

  • National Strategy: Cautious, defense-oriented approach guided by National Security Strategy and 2024 MoD AI Policy; explicitly bans LAWS and emphasizes human control.
  • Key Institutions: Ministry of Defense (MOD), Acquisition, Technology & Logistics Agency (ATLA), strong partnership with U.S. DoD.
  • Investment Focus: Increasing defense R&D budget; focus on dual-use technologies and international collaboration, particularly with the U.S.
  • Flagship Programs: SAMURAI initiative (AI safety for UAVs with U.S.), AI for cybersecurity, remote surveillance, and logistics.
  • Foundational Strengths: World-leading robotics, sensor, and advanced manufacturing industries; highly skilled technical workforce.
  • Demonstrated Efficacy: Advanced R&D in AI safety and human-machine teaming; deep integration into U.S.-led technology development and exercises.
  • Key Challenges: Constitutional and political constraints on offensive capabilities; aging demographics impacting recruitment; balancing alliance integration with sovereign development.

Canada: The Niche Contributor

As a committed middle power and a member of the Five Eyes intelligence alliance, Canada’s military AI strategy is not aimed at competing with global powers but at developing niche capabilities that enhance its contributions to collective defense and ensure interoperability with its principal allies, especially the United States. Its approach is strongly defined by a commitment to the responsible and ethical development of AI.

The Department of National Defence and Canadian Armed Forces (DND/CAF) AI Strategy lays out a vision to become an “AI-enabled organization” by 2030.78 The strategy is built on five lines of effort: fielding capabilities, change management, ethics and trust, talent, and partnerships.47 It is closely aligned with broader Government of Canada policies such as the Directive on Automated Decision-Making and the Pan-Canadian AI Strategy.78

Canada’s implementation efforts are focused on specific, high-value problem sets, particularly in the ISR domain. Key R&D projects led by Defence Research and Development Canada (DRDC) include:

  • JAWS (Joint Algorithmic Warfighter Sensor): A suite of multi-modal sensors and AI models designed to automate the detection and tracking of objects, reducing the cognitive load on operators.81
  • MIST (Multimodal Input Surveillance and Tracking): An AI system for the automated analysis of full-motion video from aerial platforms to detect and localize objects of interest.81

These systems are being actively tested and refined in large-scale multinational exercises like the U.S. Army’s Project Convergence, demonstrating Canada’s focus on ensuring its technology is integrated and effective within an allied operational context.81 While Canada has a strong academic history as a pioneer in deep learning, it has faced a recognized “adoption problem” in translating this foundational research into scaled commercial and military applications, a challenge the government is actively working to address.82

Canada: Military AI Profile

  • National Strategy: DND/CAF AI Strategy (AI-enabled by 2030); focused on niche capabilities, alliance interoperability, and ethical/responsible AI.
  • Key Institutions: Department of National Defence (DND), Defence Research and Development Canada (DRDC), Innovation for Defence Excellence and Security (IDEaS) program.
  • Investment Focus: Targeted funding for R&D through programs like IDEaS; leveraging the Pan-Canadian AI Strategy.
  • Flagship Programs: JAWS (AI sensor suite), MIST (AI video analysis for ISR), participation in allied experiments like Project Convergence.
  • Foundational Strengths: Strong academic research base in AI; close integration with U.S. and Five Eyes partners.
  • Demonstrated Efficacy: Successful experimentation with JAWS and MIST in multinational exercises, proving interoperability concepts.
  • Key Challenges: “Adoption problem” in scaling research to fielded capability; limited budget compared to larger powers; reliance on allied platforms for integration.

Honorable Mention: Ukraine, The Wildcard Innovator

While not a top-10 global power by traditional metrics, Ukraine’s performance since the 2022 Russian invasion warrants special mention. It has transformed itself into the world’s foremost laboratory for AI in modern warfare, demonstrating an unparalleled ability to rapidly adapt and deploy commercial technology for military effect under the intense pressure of an existential conflict. Its experience is actively shaping the doctrine and procurement strategies of every major military power.

Lacking a large, pre-existing defense-industrial base for AI, Ukraine has relied on agility, decentralization, and partnerships. The “Army of Drones” initiative is a comprehensive national program that encompasses international fundraising, direct procurement of commercial drones, fostering domestic production, and training tens of thousands of operators.83 Ukrainian forces, often working with civilian volunteer groups, have become masters of battlefield adaptation, integrating AI-based targeting software into low-cost commercial FPV drones.4 This has had a dramatic impact on lethality, with strike accuracy for these systems reportedly increasing from a baseline of 30-50% to around 80%.4 The Defense Intelligence of Ukraine (DIU) has also emerged as a sophisticated user of AI for analyzing vast amounts of intelligence data and for enabling long-range autonomous drone strikes deep into Russian territory.83 Ukraine’s experience provides a powerful lesson: in the age of AI, the ability to innovate and adapt at speed can be a decisive advantage, capable of offsetting a significant numerical and material disadvantage.

Comparative Strategic Assessment: Doctrines, Efficacy, and Future Trajectory

A granular analysis of individual national programs reveals a broader strategic landscape defined by competing visions, divergent levels of efficacy, and a critical dependence on the foundational layers of the digital age. The future of military power will be determined not just by who develops the best AI, but by who can best synthesize it with their doctrine, industrial base, and human capital.

A Clash of Strategic Visions

The world’s leading military AI powers are not converging on a single model; instead, they are pursuing distinct and often competing strategic philosophies:

  • The U.S. Commercial-Military Vanguard: Relies on a decentralized, bottom-up innovation ecosystem fueled by massive private capital. The strategic challenge is to harness this commercial dynamism for military purposes without being stifled by bureaucracy, a problem initiatives like Replicator are designed to solve. The doctrinal emphasis remains firmly on “human-on-the-loop” empowerment.9
  • China’s State-Directed Intelligentization: A top-down, centrally planned model that mobilizes the entire nation through Military-Civil Fusion. The goal is to achieve decision superiority through the deep integration of AI into a “Command Brain,” potentially affording the machine a more central role in the command process than in the U.S. model.8
  • Russia’s Asymmetric Disruption: A pragmatic approach focused on using “good enough” AI as a force multiplier in areas like EW and unmanned systems to counter a technologically superior foe. The war in Ukraine serves as a brutal but effective R&D cycle.38
  • Israel’s Operational Rapid-Fielding: An agile, threat-driven model that prioritizes getting effective capabilities into the hands of warfighters as quickly as possible, often accepting higher risks and bypassing the lengthy development cycles common in larger nations.43
  • The European Pursuit of Sovereignty and Ethics: Powers like France and Germany are driven by a desire for strategic autonomy and a strong commitment to developing AI within a robust ethical and legal framework, seeking a “third way” between the U.S. and Chinese models.55

This divergence between “battle-tested” powers like Israel, Russia, and Ukraine and more “theory-heavy” powers in Western Europe is a critical dynamic. The former are driving rapid, iterative development based on immediate combat feedback, while the latter are focused on building more deliberate, ethically vetted systems. This creates a temporal asymmetry: nations facing immediate threats are forced to accept risks and bypass traditional procurement, giving them a lead in practical application, while more cautious powers risk fielding their systems too late to matter. A nation with a perfectly ethical and robustly tested AI system that arrives on the battlefield two years late may find the conflict has already been decided by an adversary who scaled a “good enough” system across their forces.

The Spectrum of Demonstrated Efficacy

When moving from strategic plans to tangible results, a clear spectrum of operational efficacy emerges.

  • High Deployment & Efficacy: Israel, Russia, and Ukraine stand at one end. Their AI systems are not experimental; they are core components of ongoing, high-intensity combat operations, directly influencing tactical and operational outcomes on a daily basis.4
  • Selective Deployment & Proving: The United States occupies the middle ground. Key programs like Project Maven are fully operational and battle-tested.22 However, broader, more transformative initiatives like Replicator are still in the process of proving their ability to deliver capability at scale, facing significant integration and production challenges.28
  • Development & Aspiration: Many other advanced nations, including the UK, France, Germany, and Japan, are at the other end of the spectrum. They have ambitious plans, strong foundational ecosystems, and promising pilot programs (e.g., Uranos KI, MAAID, SAMURAI), but have yet to deploy AI systems at a comparable scale or intensity in combat operations.55

The Hardware Foundation: A Strategic Chokepoint

The entire edifice of military AI rests on a physical foundation of advanced hardware: semiconductors for processing and cloud computing infrastructure for data storage and model training. Control over this foundation is a decisive strategic advantage.

The United States and its democratic allies—Taiwan (TSMC), South Korea (Samsung), and the Netherlands (ASML for lithography equipment)—dominate the design and fabrication of the world’s most advanced semiconductors.20 This creates a critical vulnerability for China, which, despite massive investment, remains dependent on foreign technology for the highest-end chips required to train and run state-of-the-art AI models. U.S. export controls are a direct attempt to exploit this chokepoint and slow China’s military AI progress.

Similarly, the global cloud infrastructure market is dominated by American companies. Amazon Web Services (AWS), Microsoft Azure, and Google Cloud collectively control approximately 63% of the market, with Chinese competitors like Alibaba and Tencent holding much smaller shares.21 This provides the U.S. military and its innovation ecosystem with access to a massive, secure, and scalable computational backbone that is difficult for any other nation to replicate.

The following matrix provides a comprehensive, at-a-glance comparison of the top 10 nations across these key strategic vectors.

Country | Strategic Vision | Key Programs | Investment & Scale | Talent & R&D Base | Hardware Foundation | Deployed Efficacy | Doctrinal Integration
United States | Commercial-military vanguard; achieve “decision advantage.” | Project Maven, Replicator Initiative | Unmatched public & private funding | World leader in talent & model development | Dominant (Semiconductors, Cloud) | High (Maven deployed) | High (Evolving)
China | State-directed “intelligentization”; Military-Civil Fusion. | “Command Brain,” Drone Swarms | Massive state-directed funds | Massive scale, closing quality gap | Major vulnerability (Semiconductors) | Medium (Scaling in non-combat) | Very High (Central tenet)
Israel | Operationally-driven rapid fielding for immediate threats. | Habsora, Lavender (AI targeting) | Strong, focused on defense tech | World-leading per capita | Strong, deep U.S. integration | Very High (Combat-proven) | High (Operationally embedded)
Russia | Asymmetric disruption of superior adversaries. | AI-enabled Lancet drones, Air Defense AI | Limited, focused on near-term effect | Constrained, practical focus | Heavily constrained by sanctions | High (Battle-hardened in Ukraine) | Medium (Adaptive)
United Kingdom | Leading ally; trusted, ethical, interoperable AI. | 6th-Gen Fighter (Tempest), Naval AI | Strong, 3rd in private investment | Strong, top-tier research hubs | Moderate, reliant on allies | Low-Medium (Exercises, Cyber) | Medium (Developing)
France | Sovereign capability; “strategic autonomy.” | MAAID (central AI agency) | Strong, state-led investment | Strong, with leading AI firms | Moderate, pursuing sovereignty | Low (In development) | Medium (Developing)
South Korea | Hardware-led integration for technological overmatch. | K3 Tank, KF-21 Unmanned Wingman | Strong, industry-led | Good, focused on application | World Leader (Semiconductors) | Low (In advanced development) | Medium (Platform-centric)
India | Aspiring power; self-reliance and strategic competition. | DAIPA/DAIC projects, ETAI framework | Growing rapidly, state-supported | Massive, but with infrastructure gaps | Lagging, but growing | Low (Early deployments) | Medium (Tied to reforms)
Germany | Cautious industrial giant, accelerated by Zeitenwende. | Uranos KI, GhostPlay | Increasing significantly | Strong industrial R&D base | Strong industrial base | Low (Early deployments) | Low-Medium (Developing)
Japan | Alliance-integrated technologist; defensive focus. | SAMURAI (AI safety w/ U.S.) | Cautious but growing | Strong in robotics & sensors | Strong, reliant on allies | Low (R&D, exercises) | Low (Constrained)

Conclusion: Navigating the Dawn of Intelligentized Conflict

The evidence is unequivocal: artificial intelligence is catalyzing a fundamental revolution in military affairs, and the global competition to master this technology is accelerating. The strategic landscape is solidifying into a bipolar contest between the United States and China, two powers with the resources, scale, and national will to pursue dominance across the full spectrum of AI-enabled warfare. Yet, the field is far from a simple two-player game. The agility and combat experience of nations like Israel and Ukraine, the asymmetric tactics of Russia, and the focused ambitions of key U.S. allies create a complex, multi-polar dynamic where innovation can emerge from unexpected quarters.

Looking forward over the next five to ten years, several trends will define the trajectory of military AI. First, the degree of autonomy in weapon systems will steadily increase, moving from decision support to human-supervised autonomous operations, particularly in contested environments like electronic warfare or undersea domains. Second, human-machine teaming will become a core military competency. The effectiveness of a fighting force will be measured not just by the quality of its people or its machines, but by the seamlessness of their integration. Third, the battlefield will continue to trend towards a state of hyper-awareness and hyper-lethality. The proliferation of intelligent sensors and autonomous weapons will compress the “detect-to-engage” timeline to mere seconds, making concealment nearly impossible and survival dependent on speed, dispersion, and countermeasures.4

The central conclusion of this analysis is that the nation that achieves a decisive and enduring advantage in 21st-century conflict will be the one that masters the difficult synthesis of technology, data, doctrine, and talent. Technological superiority in algorithms or hardware alone will be insufficient. Victory will belong to the power that can build a national ecosystem capable of rapidly innovating, fielding AI capabilities at scale, adapting its operational concepts to exploit those capabilities, and training a new generation of warfighters to trust and effectively command their intelligent machine partners. The race for military AI supremacy is not merely a technological marathon; it is a test of a nation’s entire strategic, industrial, and intellectual capacity.

Appendix: Military AI Capability Ranking Methodology

Introduction

The objective of this methodology is to provide a transparent, defensible, and holistic framework for assessing and ranking a nation’s military artificial intelligence (AI) capabilities. It moves beyond singular metrics to create a composite index that evaluates the entire national ecosystem required to develop, deploy, and effectively utilize AI for military purposes. The index is structured around four core pillars, each assigned a weight reflecting its relative importance in determining overall military AI power.

Pillar 1: National Strategy & Investment (25% Weight)

This pillar assesses the top-down strategic direction and financial commitment a nation dedicates to military AI. A clear strategy and robust funding are prerequisites for any successful national effort.

  • Metric 1.1: Strategic Clarity & Coherence (10%): Evaluates the quality, ambition, and implementation plan of national and defense-specific AI strategies. A high score is given for published, detailed strategies with clear objectives, timelines, and designated responsible institutions (e.g., U.S. 2023 AI Adoption Strategy, China’s New Generation AI Development Plan).10 A lower score is given for vague or purely aspirational statements.
  • Metric 1.2: Financial Commitment (15%): Quantifies direct and indirect investment in military AI. This includes analysis of national defense budgets, specific R&D allocations for AI and autonomy, the scale of state-backed technology investment funds, and the volume of government AI-related procurement contracts.14

Pillar 2: Foundational Ecosystem (25% Weight)

This pillar measures the underlying national capacity for AI innovation, which forms the bedrock of any military application. It assesses the raw materials of AI power: talent, research, and hardware.

  • Metric 2.1: Talent Pool (10%): Ranks countries based on the quantity and quality of their human capital. Data points include the absolute number of AI professionals, the concentration of top-tier AI researchers (e.g., authors at premier conferences like NeurIPS), and the quality of university pipelines producing AI graduates.17
  • Metric 2.2: Research & Innovation Output (10%): Measures a nation’s contribution to the global state-of-the-art in AI. This is assessed through the volume and citation impact of AI research publications, the number of AI-related patents filed, and, critically, the number of notable, state-of-the-art AI models produced by a country’s institutions.12
  • Metric 2.3: Hardware & Infrastructure (5%): Assesses sovereign or secure allied access to the critical enabling hardware for AI. This includes domestic capacity for advanced semiconductor design and manufacturing and the availability of large-scale, secure cloud computing infrastructure, which are essential for training and deploying large AI models.20

Pillar 3: Military Implementation & Programs (25% Weight)

This pillar evaluates a nation’s ability to translate strategic ambition and foundational capacity into concrete military AI programs and applications.

  • Metric 3.1: Flagship Program Maturity (15%): Assesses the scale, sophistication, and developmental progress of major, publicly acknowledged military AI programs (e.g., U.S. Project Maven, China’s “Command Brain,” France’s MAAID). High scores are awarded for programs that are well-funded, have moved beyond basic research into advanced development or prototyping, and are aimed at solving critical operational challenges.22
  • Metric 3.2: Breadth of Application (10%): Measures the diversity of AI applications being pursued across the full spectrum of military functions, including ISR, command and control, logistics, cybersecurity, electronic warfare, and autonomous platforms. A broad portfolio indicates a more mature and integrated approach to military AI adoption.3

Pillar 4: Operational Efficacy & Deployment (25% Weight)

This is the most critical pillar, assessing whether a nation’s military AI capabilities exist in practice, not just on paper. It measures the translation of programs into proven, operational reality.

  • Metric 4.1: Demonstrated Deployment (15%): Awards points for clear evidence of AI systems being used in active combat operations or large-scale, realistic military exercises. This is the ultimate test of a system’s effectiveness and reliability. Nations with battle-tested systems (e.g., Israel’s Habsora, Russia’s Lancet, U.S. Maven) receive the highest scores.4
  • Metric 4.2: Doctrinal Integration (10%): Assesses the extent to which AI is being formally integrated into military doctrine, training curricula, and concepts of operation (CONOPS). This metric indicates true institutional adoption beyond isolated technology projects and reflects a military’s commitment to fundamentally changing how it fights.29

Scoring and Normalization

For each of the nine metrics, countries are scored on a qualitative scale based on the available open-source evidence. These scores are converted to numerical values, weighted according to the percentages listed above, and aggregated to produce a final composite score for each country, normalized to a 100-point scale to allow for direct comparison and ranking. This multi-layered, weighted approach ensures that the final ranking reflects a balanced and comprehensive assessment of a nation’s true military AI power.
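The weighting and aggregation scheme described above can be sketched in code. This is a minimal illustration: the metric keys, the qualitative-to-numeric scale, and the sample ratings are assumptions introduced here for demonstration; only the percentage weights come from the appendix itself.

```python
# Sketch of the composite index: nine weighted metrics, normalized to 100.
# The weights below are those listed in Pillars 1-4 (they sum to 1.00).
WEIGHTS = {
    "strategic_clarity":       0.10,  # Metric 1.1
    "financial_commitment":    0.15,  # Metric 1.2
    "talent_pool":             0.10,  # Metric 2.1
    "research_output":         0.10,  # Metric 2.2
    "hardware_infrastructure": 0.05,  # Metric 2.3
    "flagship_maturity":       0.15,  # Metric 3.1
    "breadth_of_application":  0.10,  # Metric 3.2
    "demonstrated_deployment": 0.15,  # Metric 4.1
    "doctrinal_integration":   0.10,  # Metric 4.2
}

# One possible qualitative-to-numeric conversion (an assumption; the
# appendix only says scores are "converted to a numerical value").
SCALE = {"None": 0, "Low": 1, "Low-Medium": 2, "Medium": 3,
         "High": 4, "Very High": 5}

def composite_score(ratings: dict) -> float:
    """Weighted aggregate of the nine metric ratings, scaled to 0-100."""
    assert set(ratings) == set(WEIGHTS), "every metric must be rated"
    raw = sum(WEIGHTS[m] * SCALE[r] for m, r in ratings.items())
    return 100 * raw / max(SCALE.values())  # best possible raw score is 5.0

# Hypothetical ratings for illustration only (not the report's scores):
example = {
    "strategic_clarity": "Very High", "financial_commitment": "Very High",
    "talent_pool": "Very High", "research_output": "Very High",
    "hardware_infrastructure": "Very High", "flagship_maturity": "High",
    "breadth_of_application": "High", "demonstrated_deployment": "High",
    "doctrinal_integration": "High",
}
print(round(composite_score(example), 1))  # → 90.0
```

Dividing by the maximum attainable raw score is what normalizes the index to a 100-point scale, so countries rated on the same qualitative rubric remain directly comparable.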




Sources Used

  1. The Coming Military AI Revolution – Army University Press, accessed October 4, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2024/MJ-24-Glonek/
  2. Military applications of artificial intelligence – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/Military_applications_of_artificial_intelligence
  3. How to Orchestrate AI Deployment in Defense Infrastructures? – – Datategy, accessed October 4, 2025, https://www.datategy.net/2025/07/16/how-to-orchestrate-ai-deployment-in-defense-infrastructures/
  4. ARTIFICIAL INTELLIGENCE’S GROWING ROLE IN MODERN WARFARE – War Room, accessed October 4, 2025, https://warroom.armywarcollege.edu/articles/ais-growing-role/
  5. DOD Replicator Initiative: Background and Issues for Congress, accessed October 4, 2025, https://www.congress.gov/crs-product/IF12611
  6. DOD Replicator Initiative: Background and Issues for Congress, accessed October 4, 2025, https://www.congress.gov/crs_external_products/IF/PDF/IF12611/IF12611.9.pdf
  7. Artificial intelligence arms race – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/Artificial_intelligence_arms_race
  8. Militarizing AI: How to Catch the Digital Dragon? – Centre for …, accessed October 4, 2025, https://www.cigionline.org/articles/militarizing-ai-how-to-catch-the-digital-dragon/
  9. Summary of the 2018 Department of Defense Artificial … – DoD, accessed October 4, 2025, https://media.defense.gov/2019/feb/12/2002088963/-1/-1/1/summary-of-dod-ai-strategy.pdf
  10. DOD Releases AI Adoption Strategy > U.S. Department of War …, accessed October 4, 2025, https://www.war.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy/
  11. Codifying and Expanding Continuous AI Benchmarking – Federation of American Scientists, accessed October 4, 2025, https://fas.org/publication/codifying-expanding-continuous-ai-benchmarking/
  12. The 2025 AI Index Report | Stanford HAI, accessed October 4, 2025, https://hai.stanford.edu/ai-index/2025-ai-index-report
  13. Economy | The 2025 AI Index Report | Stanford HAI, accessed October 4, 2025, https://hai.stanford.edu/ai-index/2025-ai-index-report/economy
  14. Breaking Down Global Government Spending on AI – HPCwire, accessed October 4, 2025, https://www.hpcwire.com/2024/08/26/breaking-down-global-government-spending-on-ai/
  15. U.S. Military Spending on AI Surges – Time Magazine, accessed October 4, 2025, https://time.com/6961317/ai-artificial-intelligence-us-military-spending/
  16. AI’s Role in World Defense Budget Market – MarketsandMarkets, accessed October 4, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-impact-analysis-on-world-defense-budget-industry.asp
  17. 10 Best Countries for AI Developers and Talent Pools 2025-26 – Index.dev, accessed October 4, 2025, https://www.index.dev/blog/top-countries-ai-developer-talent-pools
  18. The Global AI Talent Tracker 2.0 – MacroPolo, accessed October 4, 2025, https://archivemacropolo.org/interactive/digital-projects/the-global-ai-talent-tracker/
  19. Global AI Power Rankings: Stanford HAI Tool Ranks 36 Countries in AI, accessed October 4, 2025, https://hai.stanford.edu/news/global-ai-power-rankings-stanford-hai-tool-ranks-36-countries-ai
  20. Semiconductor industry – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/Semiconductor_industry
  21. Cloud Market Share Q2 2025: Microsoft Dips, AWS Still Kingpin – CRN, accessed October 4, 2025, https://www.crn.com/news/cloud/2025/cloud-market-share-q2-2025-microsoft-dips-aws-still-kingpin
  22. Project Maven – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/Project_Maven
  23. GEOINT Artificial Intelligence, accessed October 4, 2025, https://www.nga.mil/news/GEOINT_Artificial_Intelligence_.html
  24. Maven Smart System – Missile Defense Advocacy Alliance, accessed October 4, 2025, https://missiledefenseadvocacy.org/maven-smart-system/
  25. United States’ Project Maven And The Rise Of AI-Assisted Warfare – Global Defense Insight, accessed October 4, 2025, https://defensetalks.com/united-states-project-maven-and-the-rise-of-ai-assisted-warfare/
  26. Replicator (United States military) – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/Replicator_(United_States_military)
  27. The Replicator Initiative – Defense Innovation Unit, accessed October 4, 2025, https://www.diu.mil/replicator
  28. U.S. Military Is Struggling to Deploy AI Weapons | The work is being shifted to a new organization, called DAWG, to accelerate plans to buy thousands of drones : r/LessCredibleDefence – Reddit, accessed October 4, 2025, https://www.reddit.com/r/LessCredibleDefence/comments/1nrsxip/us_military_is_struggling_to_deploy_ai_weapons/
  29. Targeting at Machine Speed: The Capabilities—and Limits—of Artificial Intelligence, accessed October 4, 2025, https://mwi.westpoint.edu/targeting-at-machine-speed-the-capabilities-and-limits-of-artificial-intelligence/
  30. China’s ambitions in Artificial Intelligence – European Parliament, accessed October 4, 2025, https://www.europarl.europa.eu/RegData/etudes/ATAG/2021/696206/EPRS_ATA(2021)696206_EN.pdf
  31. China’s Military Employment of Artificial Intelligence and Its Security Implications, accessed October 4, 2025, https://www.iar-gwu.org/print-archive/blog-post-title-four-xgtap
  32. Military Artificial Intelligence, the People’s Liberation Army, and U.S.-China Strategic Competition | CNAS, accessed October 4, 2025, https://www.cnas.org/publications/congressional-testimony/military-artificial-intelligence-the-peoples-liberation-army-and-u-s-china-strategic-competition
  33. Dialogue | Episode 47: China’s Military Bet on the Future A Dialogue with Elsa B. Kania, accessed October 4, 2025, https://dkiapcss.edu/dialogue-episode-47-chinas-military-bet-on-the-future/
  34. China’s Military Reportedly Deploys DeepSeek AI for Non-Combat Duties – FDD, accessed October 4, 2025, https://www.fdd.org/analysis/policy_briefs/2025/03/27/chinas-military-reportedly-deploys-deepseek-ai-for-non-combat-duties/
  35. Global Total Number of Scientific Publications in Artificial Intelligence Share by Country (Units (Publications)) – ReportLinker, accessed October 4, 2025, https://www.reportlinker.com/dataset/c7a7f8eaeb968fd302788b2e529a126109612efb
  36. US and China Lead by a Wide Margin in Global AI Talent List – 36Kr, accessed October 4, 2025, https://eu.36kr.com/en/p/3402121739913346
  37. China’s Pursuit of Defense Technologies: Implications for U.S. and Multilateral Export Control and Investment Screening Regimes – CSIS, accessed October 4, 2025, https://www.csis.org/analysis/chinas-pursuit-defense-technologies-implications-us-and-multilateral-export-control-and
  38. Advanced military technology in Russia | 06 Military applications of artificial intelligence: the Russian approach – Chatham House, accessed October 4, 2025, https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/06-military-applications-artificial-intelligence
  39. Russia Capitalizes on Development of Artificial Intelligence in Its Military Strategy, accessed October 4, 2025, https://jamestown.org/program/russia-capitalizes-on-development-of-artificial-intelligence-in-its-military-strategy/
  40. The Role of AI in Russia’s Confrontation with the West | CNAS, accessed October 4, 2025, https://www.cnas.org/publications/reports/the-role-of-ai-in-russias-confrontation-with-the-west
  41. Which Countries Are Experimenting With AI-Powered Weapons? – 24/7 Wall St., accessed October 4, 2025, https://247wallst.com/military/2025/04/16/which-countries-are-experimenting-with-ai-powered-weapons/
  42. 532. Russia and the Convergence of AI, Battlefield Autonomy, and Tactical Nuclear Weapons – Mad Scientist Laboratory, accessed October 4, 2025, https://madsciblog.tradoc.army.mil/532-russia-and-the-convergence-of-ai-battlefield-autonomy-and-tactical-nuclear-weapons/
  43. How Israel’s military rewired battlefield for first AI war | The Jerusalem Post, accessed October 4, 2025, https://www.jpost.com/defense-and-tech/article-867363
  44. AI-assisted targeting in the Gaza Strip – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip
  45. Israel – Hamas 2024 Symposium – Beyond the Headlines: Combat Deployment of Military AI-Based Systems by the IDF – Lieber Institute West Point, accessed October 4, 2025, https://lieber.westpoint.edu/beyond-headlines-combat-deployment-military-ai-based-systems-idf/
  46. As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies – AP News, accessed October 4, 2025, https://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108
  47. Defence Artificial Intelligence Strategy – GOV.UK, accessed October 4, 2025, https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy
  48. British Army’s Approach to Artificial Intelligence, accessed October 4, 2025, https://www.army.mod.uk/media/24745/20231001-british_army_approach_to_artificial_intelligence.pdf
  49. Which Countries Are Investing Most in AI? – Investopedia, accessed October 4, 2025, https://www.investopedia.com/countries-investing-the-most-in-ai-11752340
  50. Forbes 2025 AI 50 List – Top Artificial Intelligence Companies Ranked, accessed October 4, 2025, https://www.forbes.com/lists/ai50/
  51. Top 10 Artificial Intelligence in Military Companies in Global 2025 | Global Growth Insights, accessed October 4, 2025, https://www.globalgrowthinsights.com/blog/top-artificial-intelligence-in-military-companies-in-global-updated-global-growth-insights-638
  52. FRANCE, accessed October 4, 2025, https://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/78-241-France-EN.pdf
  53. The Ministry of Armed Forces presents its new strategy for artificial intelligence (April 2019) – France OTAN, accessed October 4, 2025, https://otan.delegfrance.org/The-Ministry-of-Armed-Forces-presents-its-new-strategy-for-artificial
  54. French thinking on AI integration and interaction with nuclear command and control, force structure, and decision-making – European Leadership Network, accessed October 4, 2025, https://www.europeanleadershipnetwork.org/wp-content/uploads/2023/11/French-bibliography_AI_Nuclear_Final.pdf
  55. French Minister of the Armed Forces at École Polytechnique to boost AI in Defense, accessed October 4, 2025, https://www.polytechnique.edu/en/news/french-minister-armed-forces-ecole-polytechnique-boost-ai-defense
  56. India unveils ambitious 15-year defence roadmap featuring nuclear carrier, hypersonics, and AI warfare, accessed October 4, 2025, https://defence.in/threads/india-unveils-ambitious-15-year-defence-roadmap-featuring-nuclear-carrier-hypersonics-and-ai-warfare.15458/
  57. AI in the military: India’s path to ethical and strategic leadership | Hindustan Times, accessed October 4, 2025, https://www.hindustantimes.com/ht-insight/future-tech/ai-in-the-military-india-s-path-to-ethical-and-strategic-leadership-101758966031936.html
  58. India’s Military AI Roadmap: Trust, Enforcement, and Global South Leadership, accessed October 4, 2025, https://completeaitraining.com/news/indias-military-ai-roadmap-trust-enforcement-and-global/
  59. Implementing Artificial Intelligence in the Indian Military – Delhi Policy Group, accessed October 4, 2025, https://www.delhipolicygroup.org/publication/policy-briefs/implementing-artificial-intelligence-in-the-indian-military.html
  60. Theatre command: How India is looking to integrate Air Force, Navy and Army operations under a new strategy, accessed October 4, 2025, https://m.economictimes.com/news/defence/indian-army-indian-air-force-theatre-command-indian-navy-operation-sindoor-india-pakistan-war-india-defence-integration-plan-modi/articleshow/124270382.cms
  61. National AI Strategy Policy Directions – Press Releases, Ministry of Science and ICT (South Korea), accessed October 4, 2025, https://www.msit.go.kr/eng/bbs/view.do?sCode=eng&mId=4&mPid=2&pageIndex=&bbsSeqNo=42&nttSeqNo=1040&searchOpt=ALL&searchTxt=&ref=newsletters.qs.com
  62. South Korea is successfully moving forward with the implementation of AI in the defense sector | DEFENSEMAGAZINE.com, accessed October 4, 2025, https://www.defensemagazine.com/article/south-korea-is-successfully-moving-forward-with-the-implementation-of-ai-in-the-defense-sector
  63. Will the One Ring Hold? Defense AI in South Korea – ResearchGate, accessed October 4, 2025, https://www.researchgate.net/publication/382372312_Will_the_One_Ring_Hold_Defense_AI_in_South_Korea
  64. BMWE – Artificial Intelligence – Bundeswirtschaftsministerium.de, accessed October 4, 2025, https://www.bundeswirtschaftsministerium.de/Redaktion/EN/Artikel/Technology/artificial-intelligence.html
  65. AI Strategies – Home – Plattform Lernende Systeme, accessed October 4, 2025, https://www.plattform-lernende-systeme.de/ai-strategies.html
  66. Artificial Intelligence in Land Forces – Bundeswehr, accessed October 4, 2025, https://www.bundeswehr.de/resource/blob/156026/79046a24322feb96b2d8cce168315249/download-positionspapier-englische-version-data.pdf
  67. Artificial Intelligence in the Armed Forces: On the need for regulation regarding autonomy in weapon systems | Bundesakademie für Sicherheitspolitik, accessed October 4, 2025, https://www.baks.bund.de/en/working-papers/2018/artificial-intelligence-in-the-armed-forces-on-the-need-for-regulation-regarding
  68. Battlefield Disruption: German Military Seeks to Adapt as AI Changes Warfare – Der Spiegel, accessed October 4, 2025, https://www.spiegel.de/international/germany/battlefield-disruption-german-military-seeks-to-adapt-as-ai-changes-warfare-a-ebb36190-8b79-4e85-bd21-e765a9fc9857
  69. DAIO – Defense AI Observatory, accessed October 4, 2025, https://defenseai.eu/english
  70. Helsing | Artificial intelligence to protect our democracies, accessed October 4, 2025, https://helsing.ai/
  71. German military seeks high-tech edge with AI and drones – Harici, accessed October 4, 2025, https://harici.com.tr/en/german-military-seeks-high-tech-edge-with-ai-and-drones/
  72. The peace of Japan and the AI – Japan Up Close, accessed October 4, 2025, https://japanupclose.web-japan.org/policy/p20250228_1.html
  73. Japan Sets Hard Line on Military AI: Humans Stay in Charge, accessed October 4, 2025, https://militaryai.ai/japan-military-ai-rules/
  74. Japan promotes stringent standards for defense AI, accessed October 4, 2025, https://ipdefenseforum.com/2025/09/japan-promotes-stringent-standards-for-defense-ai/
  75. Artificial Intelligence for the Defence of Japan: Cautious but Steady Progress – RSIS, accessed October 4, 2025, https://rsis.edu.sg/rsis-publication/rsis/artificial-intelligence-for-the-defence-of-japan-cautious-but-steady-progress/
  76. US, Japan formalize SAMURAI project arrangement to advance AI safety in unmanned aerial vehicles > Air Reserve Personnel Center > Article Display, accessed October 4, 2025, https://www.arpc.afrc.af.mil/News/Article-Display/Article/4311811/us-japan-formalize-samurai-project-arrangement-to-advance-ai-safety-in-unmanned/
  77. US, Japan formalize SAMURAI project arrangement to advance AI safety in unmanned aerial vehicles > Air Force > Article Display – AF.mil, accessed October 4, 2025, https://www.af.mil/News/Article-Display/Article/4311811/us-japan-formalize-samurai-project-arrangement-to-advance-ai-safety-in-unmanned/
  78. Strategic Alignment – Canada.ca, accessed October 4, 2025, https://www.canada.ca/en/department-national-defence/corporate/reports-publications/dnd-caf-artificial-intelligence-strategy/strategic-alignment.html
  79. ARTIFICIAL INTELLIGENCE STRATEGY – Canada.ca, accessed October 4, 2025, https://www.canada.ca/content/dam/dnd-mdn/documents/reports/ai-ia/dndcaf-ai-strategy.pdf
  80. Canadian Armed Forces Unveil Ambitious AI Strategy for 2030 – BABL AI, accessed October 4, 2025, https://babl.ai/canadian-armed-forces-unveil-ambitious-ai-strategy-for-2030/
  81. DRDC participates in multinational experiment Project Convergence …, accessed October 4, 2025, https://science.gc.ca/site/science/en/blogs/defence-and-security-science/drdc-participates-multinational-experiment-project-convergence-capstone-4
  82. AI minister denies that Canada needs to ‘catch up’ with global industry | Power & Politics, accessed October 4, 2025, https://www.youtube.com/watch?v=slqS4UUSQYo
  83. Understanding the Military AI Ecosystem of Ukraine – CSIS, accessed October 4, 2025, https://www.csis.org/analysis/understanding-military-ai-ecosystem-ukraine
  84. List of countries with highest military expenditures – Wikipedia, accessed October 4, 2025, https://en.wikipedia.org/wiki/List_of_countries_with_highest_military_expenditures
  85. Research and Development | The 2025 AI Index Report | Stanford HAI, accessed October 4, 2025, https://hai.stanford.edu/ai-index/2025-ai-index-report/research-and-development
  86. Semiconductor Market Size, Share, Growth & Forecast [2032] – Fortune Business Insights, accessed October 4, 2025, https://www.fortunebusinessinsights.com/semiconductor-market-102365
  87. Modernizing Military Decision-Making: Integrating AI into Army Planning, accessed October 4, 2025, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
  88. (U) The PLA and Intelligent Warfare: A Preliminary Analysis – CNA, accessed October 4, 2025, https://www.cna.org/reports/2021/10/The-PLA-and-Intelligent-Warfare-A-Preliminary-Analysis.pdf