The 21st-century strategic competition is increasingly defined not by mass or industrial might, but by the speed and quality of decision-making. The foundational framework for understanding this competition is Colonel John Boyd’s OODA loop—Observe, Orient, Decide, Act. For decades, military doctrine has focused on “getting inside” an opponent’s loop, operating at a tempo that shatters their ability to cohere. Today, Artificial Intelligence (AI) is compressing this human-scale cognitive process into a machine-speed automated cycle, fundamentally altering the character of war.
This report provides a strategic analysis of this transformation. It first reviews the OODA loop as a framework for competitive advantage, clarifying that its center of gravity is not merely speed, but superior “Orientation.” It then provides an exhaustive, phase-by-phase assessment of how specific AI technologies are revolutionizing the entire combat engagement lifecycle.
The analysis finds:
- AI is the Engine of Modern C2: The U.S. Department of Defense’s (DoD) Joint All-Domain Command and Control (JADC2) concept is the architectural and technological manifestation of the OODA loop. Its guiding maxim—“Sense, Make Sense, Act”—is a direct map to “Observe, Orient, Act.”
- A “Super-OODA Loop”: AI is automating and accelerating each phase. In the Observe phase, AI-driven sensor fusion and Automated Target Recognition (ATR)—exemplified by Project Maven—solve the data-deluge bottleneck, allowing persistent, all-domain surveillance. In the Orient phase, predictive analytics and AI-curated operational pictures provide “sense-making” at a scale no human staff can match. In the Decide phase, AI tools generate thousands of optimized Courses of Action (COAs) in seconds, shifting the commander’s role from generation to judgment. In the Act phase, autonomous systems, loitering munitions, and drone swarms execute decisions with unprecedented precision and speed.
- The “Centaur” Imperative: The strategic objective is Decision Dominance—the ability to decide and act more effectively and rapidly than any adversary. This is not achieved by replacing humans, but by creating “Strategic Centaurs”: a hybrid-intelligence partnership where AI handles data processing and speed, freeing human commanders to provide the “appropriate human judgment” mandated by DoD policy (DoD Directive 3000.09). The common refrain of a “human-in-the-loop” is a dangerously misleading myth; the reality is a far more complex human-machine team.
- The Paradox of Algorithmic Warfare: This new “Super-OODA Loop” creates profound new vulnerabilities. By automating the loop, it transforms the loop itself into a high-value attack surface. The very AI models used for “Observe” and “Orient” are susceptible to adversarial attacks, such as “evasion” (hiding targets from AI) and “data poisoning” (corrupting AI’s “brain” before a conflict). In this paradigm, a faster loop can become a liability, leading to a “millisecond compromise” where a force, blinded by its own corrupted AI, simply loses faster.
The strategic imperative for the DoD is therefore twofold: first, to aggressively pursue the technical capabilities for AI-driven decision dominance, and second, to simultaneously build the adaptive doctrine, rigorous training, and resilient “Red Team” processes necessary to manage the vulnerabilities of this new algorithmic age.
Part I: The OODA Framework – A Primer on Tempo and Strategic Advantage
Introduction: The Origins and Purpose of Boyd’s Loop
To understand the revolution Artificial Intelligence (AI) is bringing to modern warfare, one must first understand the framework it is revolutionizing. This is the OODA loop, a decision-making model developed by U.S. Air Force Colonel John Boyd.1 The loop consists of four stages: Observe (absorbing new information), Orient (processing observations against a “repertoire” of experience), Decide (selecting a course of action), and Act (implementing the decision).3
Boyd, a renowned strategist and fighter pilot, developed this concept from his experiences in the Korean War and his deep research into aerial combat tactics.6 His foundational work on Energy-Maneuverability Theory modeled aircraft performance 3, but the OODA loop became his universal theory for success in any competitive, rapidly changing, or chaotic environment.2
Crucially, the OODA loop is not a simple, linear checklist. It is a highly iterative and fluid feedback model.1 Boyd’s diagrams show feedback paths from every stage to every other, emphasizing continuous adaptation and learning.1 His core concepts, disseminated primarily through his briefings “A Discourse on Winning and Losing,” have become foundational to modern military strategy, business, law enforcement, and cyberwarfare.1
The Strategic Goal: “Getting Inside the Enemy’s Decision Cycle”
The purpose of the OODA loop in a conflict setting is not merely to make a decision; it is to win. Boyd’s central thesis was that victory is achieved by “getting inside the opponent’s decision cycle”.1 This means an entity—whether a pilot, a commander, or an entire organization—that can process its entire OODA loop more quickly, more effectively, and more relevantly than its opponent gains an insuperable advantage.1
This is a psychological and temporal attack. By operating at a faster and more effective tempo, one can observe and react to unfolding events so rapidly that the opponent’s own observations become obsolete before they can act on them. The adversary’s actions, when they finally come, are out of sync with reality. Boyd described this desired end state in stark terms: to “operate inside adversary’s observation-orientation-decision-action loops to enmesh adversary in a world of uncertainty, doubt, mistrust, confusion, disorder, fear, panic, chaos”.10
The goal is to “fold adversary back inside himself so that he cannot cope with events/efforts as they unfold”.10 The opponent is forced to react to a reality that has already changed, leading to a cascading collapse of their decision-making capability. One metaphor for this process is the “OODA cable,” which visualizes decisions flowing like electrical current through the loop, with the “Observe” phase being the thickest cable, gathering the most strands of information.11 By disrupting this flow anywhere, one can short-circuit the entire system.
Orientation as the Center of Gravity (Not Just Speed)
A common and dangerous misinterpretation of the OODA loop is that it is a simple race for speed. This reductionist view—that the fastest combatant always prevails—is historically false. Speed without direction is mere haste. The ill-fated Schlieffen Plan in World War I and General MacArthur’s rapid, unsupported drive into North Korea in 1950 are prime examples where a focus on speed, at the expense of flexibility and accurate orientation, led to strategic catastrophe.6
Boyd himself did not prioritize raw speed; he prioritized Orientation. This is the “mental tapestry” (as Boyd called it) of changing intentions that harmonizes effort.2 It is the most critical and complex phase in the loop.2 While Observation is the gathering of raw data, Orientation is the “process” of turning that data into understanding.2 It involves integrating new observations with a “repertoire” of existing mental models, cultural biases, and past experiences to form an accurate perception of the world.3
This is the loop’s center of gravity. A superior orientation allows a combatant to make better decisions, not just faster ones. In fact, a combatant with a superior “orientation advantage” can actually operate at a slower tempo and still win by ensuring their actions are more relevant and more surprising.12 True mastery of the loop, which Boyd’s contemporaries called Fingerspitzengefühl or “fingertip feeling,” comes from a deep, intuitive orientation.13 This mastery is what allows a commander to seemingly bypass the explicit “Orient” and “Decide” steps and achieve “deliberate speed”—acting almost simultaneously with observing, because the orientation is already so deeply ingrained.2
This primacy of the “Orient” phase is the single most important concept to grasp when analyzing the impact of AI. The modern battlespace is not a contest of simple speed, but a contest of orientation—and it is this cognitive phase that AI promises to, and threatens to, revolutionize.
Part II: Algorithmic Warfare: AI’s Revolution of the Combat Lifecycle
Introduction: From Cognitive Loop to Algorithmic Cycle
Artificial Intelligence is fundamentally altering the character of warfare.14 This transformation is not about a single new weapon, but about the process of combat itself. AI is injecting machine-speed computation into every phase of Boyd’s OODA loop, transforming it from a human-centric cognitive cycle to a human-machine algorithmic one.16
The U.S. Department of Defense’s (DoD) capstone concept for this new era is Joint All-Domain Command and Control (JADC2).17 JADC2 is, for all practical purposes, the DoD’s architectural and technological embodiment of the OODA loop.10 Its stated goal is to enable the Joint Force to “sense,” “make sense,” and “act” on information at the “speed of relevance”.18 This “Sense, Make Sense, Act” paradigm is a direct modernization of Boyd’s “Observe, Orient, Act”.20
The entire JADC2 strategy is built on the premise of using automation and AI to “act inside an adversary’s decision cycle”.22 The following sections will analyze, phase by phase, exactly how AI is executing this vision.
Table 1: The AI-Driven Transformation of the OODA Loop
| OODA Phase | Conventional Process (Human-Scale) | AI-Driven Transformation (Machine-Speed) | Key Enabling Technologies & Programs |
| --- | --- | --- | --- |
| OBSERVE | Intermittent human-led ISR (patrols, singular sensor feeds); manual data processing. | Persistent, all-domain, autonomous sensing and data exploitation. | JADC2 Sensor Grid 23, AI-Enabled Sensor Fusion 24, Persistent Surveillance 25, Project Maven 26, Automated Target Recognition (ATR).27 |
| ORIENT | Manual staff analysis; high “fog of war”; slow, linear planning (e.g., the Military Decision-Making Process (MDMP)). | Automated data processing; predictive sense-making; AI-curated Common Operational Picture. | AI Data Analysis 14, Predictive Analytics 29, AI-Augmented MDMP 30, AI-COP.31 |
| DECIDE | Commander’s deliberation based on 2-3 human-generated Courses of Action (COAs). | AI-augmented decision support; real-time generation and wargaming of thousands of optimized COAs. | COA-GPT 32, AI Wargaming 33, AI Decision Support Systems (DSS).31 |
| ACT | Human-in-the-loop kinetic/non-kinetic action; pre-planned fires. | Autonomous and semi-autonomous execution; coordinated swarming; “human-on-the-loop” supervision. | Autonomous Weapon Systems (AWS) 36, AI-Powered Drone Swarms 37, Loitering Munitions 38, AI-Directed Electronic Warfare (EW) & Cyber.39 |
The “Observe” Phase: From Human Sentry to Omniscient Sensor Grid
In conventional warfare, the “Observe” phase is defined by bottlenecks. Platoons on patrol, single-sensor UAV feeds, and periodic satellite passes create an intermittent, incomplete, and human-intensive picture of the battlefield. The JADC2 architecture, powered by AI, seeks to shatter this paradigm by creating an integrated, persistent, and all-domain sensor grid.23
AI-Enabled Sensor Fusion
In a multi-domain battlespace, a commander is inundated with data from land, air, sea, space, and cyber sensors.41 This data is often conflicting, in different formats, and arrives at different times.31 AI’s first and most critical job in the “Observe” phase is sensor fusion: the use of algorithms to “connect information streams” 41 and “squeeze more insight” from existing assets.24 AI-enabled fusion can rapidly bring together large numbers of sensors from manned and unmanned systems 24, integrating multi-domain data 42 from RADAR, LIDAR, spectroscopy, and imagery 43 to resolve conflicting reports and create a single, clear, and accurate picture.20
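The core idea of resolving conflicting multi-sensor reports can be illustrated with a deliberately simplified sketch: inverse-variance weighting of two independent estimates of the same quantity. The sensor types and noise figures below are invented for illustration; operational JADC2 fusion pipelines are vastly more sophisticated.

```python
# Toy illustration of multi-sensor fusion via inverse-variance weighting.
# All sensor names and noise values are hypothetical.

def fuse(estimates):
    """Fuse (value, variance) pairs into a single estimate and variance.

    Less noisy sensors (smaller variance) receive proportionally more
    weight, so conflicting reports resolve toward the more reliable source.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# A radar track (low noise) and an EO/IR track (high noise) of the same
# target's along-road position, in metres:
fused_pos, fused_var = fuse([(105.0, 4.0), (90.0, 16.0)])
# fused_pos = (105/4 + 90/16) / (1/4 + 1/16) = 102.0
```

The less noisy radar report dominates the result, and the fused variance (3.2) is lower than either sensor’s alone: a miniature version of “squeezing more insight” from existing assets.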
Persistent, Autonomous Surveillance
AI enables a shift from “intermittent” to “persistent” observation. Autonomous systems, such as the Sentry tower, use AI-enabled edge processing and a suite of sensors to “autonomously identify, detect and track objects of interest” 24/7 across land, sea, and air.25 AI algorithms allow these systems to monitor vast areas with minimal human intervention.44 Swarms of drones, for example, can collaborate, share data, and adapt to changing environments to provide a resilient and continuous surveillance solution.44
Case Study: Project Maven (Automating Observation)
The most powerful illustration of AI in the “Observe” phase is Project Maven.47 Established as the DoD’s “pathfinder” for operational AI 48, Maven was created to solve a critical bottleneck: the “PED” (Processing, Exploitation, and Dissemination) of intelligence.49 The DoD’s ability to collect data, particularly full-motion video (FMV) from UAVs, had exponentially outpaced its ability to process it.49 There was simply “too much data for the analyst workforce to manage”.49
Project Maven employs computer vision algorithms 48 to automate this PED process. Its core technology is Automated Target Recognition (ATR).27 AI and machine learning algorithms are trained to autonomously scan FMV and satellite imagery to “detect, classify, and identify” objects of interest—such as a specific “battle tank” versus a “civilian vehicle”.26
The impact is a radical acceleration of the “Observe-to-Orient” pipeline. With Maven, AI can perform multiple steps of the “kill chain” autonomously.26 A senior targeting officer, who could previously process 30 targets per hour, can now process 80 targets per hour with AI support. Furthermore, this is achieved with a targeting cell of 20 people, whereas a comparable effort during Operation Iraqi Freedom required a staff of 2,000.26
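The cited figures imply two distinct multipliers, which the following back-of-envelope arithmetic makes explicit (the numbers come from the reporting cited above; the calculation itself is merely illustrative).

```python
# Back-of-envelope check of the Project Maven throughput figures cited above.

targets_per_hour_manual = 30    # one senior targeting officer, unaided
targets_per_hour_ai     = 80    # the same officer with AI support
cell_size_now = 20              # AI-supported targeting cell
cell_size_oif = 2000            # comparable effort, Operation Iraqi Freedom

speedup_per_analyst = targets_per_hour_ai / targets_per_hour_manual  # ~2.7x
staffing_reduction  = cell_size_oif / cell_size_now                  # 100x
```

Per-analyst throughput rises roughly 2.7-fold, while the staffing footprint shrinks 100-fold relative to the Operation Iraqi Freedom baseline.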
This case study reveals the true nature of AI’s role in observation. It is not just about better cameras or more drones. It is about automating the exploitation of the data they collect. AI-driven observation, as exemplified by Maven, doesn’t just improve the OODA loop; it makes the loop possible in the modern, data-saturated battlespace. Without it, the loop would collapse under the sheer weight of its own data, which often overwhelms human staffs and creates a “fog of war” from an overabundance of information.28
The “Orient” Phase: From Fog of War to Predictive Sense-Making
The “Orient” phase, Boyd’s center of gravity, is where raw observation is turned into actionable understanding. This is the “make sense” in the JADC2 framework.18 Historically, this phase is the source of the Clausewitzian “fog of war,” where uncertainty, friction, and “cascades of information” 28 paralyze human staffs. AI offers to dispel this fog by processing data at a scale and speed that is superhuman.
Taming the Data Deluge
The modern battlespace is defined by a data deluge that can overwhelm human cognition.28 While some analysts warn that AI may simply replace the “fog of war” with a new “fog of systems” 52, the primary goal of military AI is to do the opposite. AI algorithms are designed to rapidly process and analyze “vast amounts of data” 30 from diverse sources 14 to provide commanders with a “clearer picture” 23 and “comprehensive situational awareness”.19
The AI-Curated Common Operational Picture (COP)
The key output of this process is the Common Operational Picture (COP). A conventional COP is a static, manually updated map. An AI-curated COP is a living, dynamic, and tailorable “all-domain” picture.55 AI algorithms fuse data from all domains 31 to create a real-time, shared understanding of the battlespace. This AI-enhanced awareness can be decentralized, allowing even “the smallest tactical teams and units” to maintain “excellent situational awareness” 55, enabling a new level of mission command.
Predictive Analytics: Forecasting Enemy COAs
The most revolutionary aspect of AI in the “Orient” phase is its ability to move from reaction to prediction. Using deep learning and multifactor analysis 29, AI models can be trained on adversary doctrine, historical data, and real-time intelligence to predict enemy behavior.57
These predictive models can:
- Identify subtle enemy behavior patterns.29
- Detect preparations for an offensive.29
- Assess enemy combat readiness.29
- Instantly revise an enemy’s most likely course of action based on new contact reports.59
This capability allows a commander to “outthink” the adversary 58 and begin orienting to the next fight, not the current one. This is the very definition of seizing the initiative and getting inside the enemy’s loop.
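The “instantly revise the enemy’s most likely COA” capability is, at bottom, a belief-updating problem. A minimal Bayesian sketch, with invented COA names and likelihood values, conveys the mechanism:

```python
# Toy Bayesian revision of enemy COA estimates as contact reports arrive.
# COA names and all probability values are invented for illustration.

def update(prior, likelihoods):
    """One Bayes-rule update: P(COA | report) is proportional to
    P(report | COA) * P(COA), renormalized to sum to 1."""
    posterior = {coa: prior[coa] * likelihoods[coa] for coa in prior}
    total = sum(posterior.values())
    return {coa: p / total for coa, p in posterior.items()}

beliefs = {"attack_north": 0.5, "attack_south": 0.3, "defend": 0.2}

# Contact report: bridging assets observed on the southern axis. Such a
# report is assumed far more likely under a southern-attack COA.
report_likelihood = {"attack_north": 0.05, "attack_south": 0.8, "defend": 0.1}

beliefs = update(beliefs, report_likelihood)
best = max(beliefs, key=beliefs.get)  # "attack_south"
```

A single contact report shifts the assessed most-likely COA from a northern to a southern attack; an operational system would run such updates continuously, over far richer models, as every report arrives.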
The Human Judgment Complement
However, AI is not a panacea for orientation, and this is where the “fog of systems” concern becomes relevant.52 AI is a tool for prediction, but it is not a substitute for judgment.60 As researchers from the Georgia Institute of Technology note, the “hard problems in war are strategy and uncertainty”.61 AI models are only as good as the data they are trained on.60 An adversary will, by definition, “go beyond the training set” by creating novel situations.60
In these moments of high uncertainty and novelty, human “sense-making” and “moral, ethical, and intellectual decisions” remain irreplaceable.61 The “Orient” phase therefore becomes a complex human-machine team. The human commander’s role shifts from data processor (a role the AI has taken) to chief arbiter of AI-generated insights. This new role requires a deep understanding of the AI’s limitations 63 and a new level of critical thinking 64 to know when to trust the machine and when to override it.
The “Decide” Phase: From Deliberation to Algorithmic Recommendation
The “Decide” phase is where a commander, having been “Oriented” by their staff, commits to a Course of Action (COA). The U.S. Army’s traditional Military Decision-Making Process (MDMP) is a human-staff-intensive, time-consuming, and linear process.30 In an AI-driven conflict, this legacy framework is too slow.30 AI promises to accelerate this phase from a matter of days or hours to a matter of seconds.
AI-Powered Decision Support Systems (DSS)
The most common application of AI in this phase is the Decision Support System (DSS).35 These are AI tools that ingest the fused data from the “Orient” phase, simulate outcomes 41, and provide “real-time recommendations” to human decision-makers.31 By highlighting threats, suggesting optimal weapon-target pairings, and ranking COAs, these systems “reduce cognitive burden” 41 and “reduce the mental load for operators” 66, allowing commanders to focus on the decision itself.
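The ranking step at the heart of such a DSS can be sketched as a weighted scoring function over candidate COAs. The criteria, weights, and scores below are hypothetical placeholders (normalized 0 to 1, higher is better, so a high risk_to_force score means low risk):

```python
# Toy decision-support ranking: weighted scoring of candidate COAs.
# All criteria, weights, COA names, and score values are invented.

def rank_coas(coas, weights):
    """Return COAs sorted by weighted score, best first."""
    def score(coa):
        return sum(weights[c] * coa["scores"][c] for c in weights)
    return sorted(coas, key=score, reverse=True)

weights = {"expected_effect": 0.5, "risk_to_force": 0.3, "time_to_execute": 0.2}

candidates = [
    {"name": "COA-1 envelopment",
     "scores": {"expected_effect": 0.9, "risk_to_force": 0.4, "time_to_execute": 0.5}},
    {"name": "COA-2 frontal",
     "scores": {"expected_effect": 0.6, "risk_to_force": 0.2, "time_to_execute": 0.9}},
    {"name": "COA-3 fix and bypass",
     "scores": {"expected_effect": 0.7, "risk_to_force": 0.8, "time_to_execute": 0.6}},
]

ranked = rank_coas(candidates, weights)
```

The commander still chooses; the DSS merely orders the options and exposes the trade-offs, which is precisely the “reduce cognitive burden” function described above.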
Automated COA Generation and Wargaming
The true leap forward is the automation of the MDMP itself. AI is being designed to augment or replace nearly every step:
- Mission Analysis: AI rapidly processes intelligence to provide a comprehensive understanding of the operational environment.30
- COA Development: Instead of a human staff laboring to create 2-3 COAs, AI can “generate a broader spectrum of COAs” 30 by considering “a greater number of factors and permutations than is feasible with traditional manual methods”.30
- COA Analysis (Wargaming): AI can then “wargame” these COAs iteratively to analyze potential outcomes.32
- Orders Production: AI can “produce and disseminate all downstream orders” automatically, saving hundreds of man-hours.30
Tools like COA-GPT leverage large language models (LLMs) to allow commanders to “input mission specifics… receiving multiple, strategically aligned COAs in a matter of seconds”.32 DARPA’s Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency (SCEPTER) program is developing similar technologies for accelerated COA adjudication.69
The impact on tempo is staggering. An Air Force experiment (DASH 2) demonstrated that AI-enabled teams produced COA recommendations in less than ten seconds and generated 30 times more options than human-only teams. In one hour, two AI vendors produced over 6,000 solutions for roughly 20 problems, with accuracy on par with human performance.70
This changes the fundamental nature of the commander’s decision. The cognitive load is not removed; it is shifted. The commander’s task is no longer to generate a good plan. Their task is to judge between thousands of machine-optimized plans, selecting the one that best matches their human intuition, strategic intent, and risk tolerance.13 This is a high-stakes task, especially when the AI’s reasoning may be a “black box” 30, placing an even greater premium on the commander’s experience.
The “Act” Phase: From Human Trigger-Pull to Autonomous Execution
The “Act” phase is the physical implementation of the decision. AI is transforming this phase by enabling systems to “act” with unprecedented speed, precision, and coordination, often without a human directly in the decision loop at the moment of engagement.
Autonomous Weapon Systems (AWS)
An Autonomous Weapon System (AWS) is formally defined as “a weapon system that, once activated, can select and engage targets without further intervention by an operator”.73 While most current military robots are remotely piloted 36, true AWS are emerging that can execute the “Act” phase on their own, guided by AI algorithms.76
Loitering Munitions (Kamikaze Drones)
The most prevalent example of AI in the “Act” phase is the loitering munition. These systems combine the roles of surveillance and strike into a single platform.38 They can “loiter” over a target area, using their onboard AI to autonomously hunt for targets.
- Advanced AI chips 77 enable these systems to “autonomously detect, track and engage targets” 78, reducing human workload and shortening the decision cycle.78
- Systems like Israel’s Spike missile family 78, Harpy and Harop anti-radiation drones 79, and Turkey’s Kargu-2 37 use AI for terminal guidance, autonomous targeting, and precision strikes, even in GPS-denied environments.78
AI-Powered Drone Swarms
Perhaps the most disruptive “Act” capability is the AI-driven drone swarm. This is a new form of “mass” where “swarm intelligence”—inspired by biological systems like ants or bees 37—is used to coordinate the actions of dozens or thousands of simple, cheap, and expendable drones.37
- AI allows these drones to collaborate, share data, and adapt to losses.37
- A swarm can overwhelm traditional, expensive air defense systems 37 and execute missions with a high tolerance for attrition.
- The U.S. (Pentagon’s Replicator program), China, and others are in a race to field this technology.37 This is leading to entirely new forms of combat, such as human-machine teaming (manned aircraft “quarterbacking” AI-piloted drones) 82 and the prospect of “swarm versus swarm” combat.84
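The decentralized character of swarm intelligence is what distinguishes it from remote piloting: each platform follows simple local rules, with no central controller to destroy. A one-dimensional toy sketch (all rule weights invented) illustrates the principle:

```python
# Toy flavor of "swarm intelligence": each drone steers using only local
# rules (cohesion toward neighbors, separation to avoid collisions), with
# no central controller. Purely illustrative; real autonomy stacks are
# far richer.

def step(positions, cohesion=0.05, separation=0.2, min_dist=1.0):
    """One 1-D update step for a set of drone positions."""
    new = []
    for i, x in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        center = sum(others) / len(others)
        move = cohesion * (center - x)       # drift toward the group
        for p in others:                     # push away if too close
            if abs(p - x) < min_dist:
                move += separation * (x - p)
        new.append(x + move)
    return new

swarm = [0.0, 0.5, 10.0]
for _ in range(50):
    swarm = step(swarm)
# The group contracts toward a loose cluster while close drones keep a
# stand-off distance from one another.
```

Because each drone reacts only to its surviving neighbors, removing any member leaves the rule set, and thus the swarm’s behavior, intact: the “adapt to losses” property noted above.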
Non-Kinetic Action (EW & Cyber)
The “Act” phase is not just kinetic. AI can “act” in the cyber and electromagnetic domains. Cognitive Electronic Warfare (CEW) uses AI and machine learning for “autonomous threat detection, electronic attack, and adaptive response”.40 An AI-driven EW system can, for example, detect a new, unknown enemy radar signal, classify it as a threat, and begin “adaptive jamming” against it, all without human intervention.40 Similarly, AI can be used to autonomously defend networks 41 or direct sophisticated, high-speed cyberattacks.39
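Schematically, the cognitive-EW cycle is a machine-speed observe-classify-respond loop. The signatures, library entries, and response names below are invented placeholders:

```python
# Schematic cognitive-EW loop: match an observed emitter signature against
# a threat library and select a response without human intervention.
# All signatures and response names are invented.

THREAT_LIBRARY = {
    "pulse_doppler_9ghz": "barrage_jam",
    "cw_illuminator_10ghz": "deceptive_repeater",
}

def respond(signature):
    """Return a jamming response; unknown emitters trigger adaptive mode."""
    if signature in THREAT_LIBRARY:
        return THREAT_LIBRARY[signature]
    # Novel signal: treat as a presumptive threat and adapt on the fly --
    # the machine-speed step a human operator could not perform in time.
    return "adaptive_jam"
```

The key step is the fall-through: a never-before-seen emitter still receives an immediate, adaptive response rather than waiting on human analysis.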
Part III: Achieving Decision Dominance: The “Super-OODA Loop” and Its Consequences
The New Battle for Tempo: The “Super-OODA Loop”
The collective result of injecting AI into every phase of the OODA loop is the creation of a “Super-OODA Loop”.87 This is a decision-action cycle that operates at machine speed, capable of processing information and executing tasks “in environments requiring split-second decisions beyond human cognitive limits”.87
This new reality has ignited a 21st-century “AI arms race”.15 Adversaries, particularly China and Russia, are aggressively pursuing AI to enhance the speed, reach, and lethality of their own operations.30 The strategic prize in this new race is not territorial advantage or industrial superiority, but “Decision Dominance”.30
Decision Dominance is the ability to “analyze and contextualize vast streams of structured and unstructured data… to make the right decisions across the Kill Chain faster, more accurately, and more effectively than our adversaries”.91 It is the modern manifestation of Boyd’s “getting inside the enemy’s loop.” The side that achieves decision dominance “owns the tempo and dictates the terms of the fight”.93 This is why the DoD has made AI-enabled decision-making a top strategic priority, allocating $1.8 billion for AI programs in fiscal year 2025.92
This new, AI-driven tempo demands a fundamental shift in doctrine, moving away from slow, sequential warfare and toward “parallel and simultaneous all-domain warfare” that can “generate maximum chaos, friction, and disorientation for the adversary”.55
Redefining Command: Human Judgment in Algorithmic Warfare
The compression of the OODA loop to machine speed raises the single most important question for military strategists: What is the role of the human commander? This has led to widespread and confused discussion about “human-in-the-loop” systems.
The “Human-in-the-Loop” Myth
It is critical to correct a pervasive myth. A common refrain from defense officials is that DoD policy will “always have a human in the loop” to reassure audiences concerned about “killer robots”.94
This statement is factually incorrect. That is not DoD policy.94
The words “human in the loop” do not appear in the governing DoD directive, and this omission was intentional.73 The “loop” language is seen as a “machine-centric” and “misguided” framing 94 that “misrepresents the nature of AI warfare”.94 It creates “unnecessary confusion” 73 by implying a level of continuous tactical oversight that is not even required for existing conventional weapons (e.g., a “fire and forget” missile).73
The Real Framework: DoD Directive 3000.09 and “Appropriate Human Judgment”
The actual U.S. policy is DoD Directive 3000.09, “Autonomy in Weapon Systems” 95, which was updated in 2023.73 This policy does not require a human “in” the loop. It requires “appropriate levels of human judgment over the use of force”.73
This is a profound and crucial distinction. The policy’s focus is on accountability, not on a specific technical “loop” configuration. As former Secretary of Defense Ash Carter, who wrote the original 2012 directive, explained, the reply “the machine did it” for a tragic, unintended engagement is “unacceptable and immoral”.96 The directive is designed to ensure that a human is always accountable for the decision to employ force, even if the system itself is autonomous.97
“In” vs. “On” the Loop: A More Useful C2 Distinction
While “human-in-the-loop” is not a formal policy term, a more nuanced (though still informal) framework is used by C2 and ethics specialists to describe the actual levels of human involvement 97:
- Human-in-the-loop: The human is a direct part of the decision cycle. The AI may identify a target, but a human operator must make the final decision to “engage” before the system can act. This preserves human judgment but is slow.
- Human-on-the-loop: The human is a supervisor. The AI-powered system is authorized to “select and engage” targets autonomously within a set of pre-defined, human-authorized constraints (e.g., rules of engagement, geographic boundaries, target types). The human “oversees” this autonomous operation and has the ability to intervene or “call off” the system.97
- Human-out-of-the-loop: The human defers all decisions to the autonomous system.97 This is already the standard for defensive systems where the engagement “tempo” is physically impossible for a human to manage, such as a ship’s Phalanx Close-In Weapon System (CIWS) shooting down an incoming anti-ship missile 99, or the Aegis Combat System.97 The human sets the system to “auto,” and the machine does the rest.
The “on-the-loop” model, supported by trusted and reliable AI, is seen as the most likely future for C2, as it balances the need for machine speed with the requirement for human oversight.98
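The three oversight models can be made concrete as an engagement-authority gate. The following sketch is illustrative only: it is not doctrine, policy, or any fielded system, and every mode name, constraint, and check is hypothetical.

```python
# Illustrative engagement-authority gate for the three oversight modes.
# Not doctrine or any real system; all names and checks are invented.

IN_THE_LOOP, ON_THE_LOOP, OUT_OF_THE_LOOP = "in", "on", "out"

def may_engage(mode, target, constraints, human_approved=False, human_veto=False):
    """Decide whether the system may engage under the given oversight mode."""
    within = (target["type"] in constraints["authorized_types"]
              and constraints["area"][0] <= target["pos"] <= constraints["area"][1])
    if mode == IN_THE_LOOP:
        return within and human_approved   # human must say yes first
    if mode == ON_THE_LOOP:
        return within and not human_veto   # autonomous unless vetoed
    return within                          # out-of-loop: constraints only

# Pre-authorized constraints: anti-ship missiles only, inside a defined box.
constraints = {"authorized_types": {"asm"}, "area": (0, 50)}
incoming = {"type": "asm", "pos": 12}
```

Note that in all three modes the human-authorized constraints (target types, geography) still bind; the modes differ only in where human judgment enters the cycle.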
This is not about “humans versus machines”.100 It is about designing smarter human-machine partnerships.19 The goal is to create what chess grandmaster Garry Kasparov called a “centaur”: a human-plus-machine team.71 Kasparov found that a good human player paired with a good AI could beat even the best “AI-only” supercomputer.
This is the “Strategic Centaur” model.93 In this model, the AI is a “computer partner” that handles the “laborious calculations” of data processing, target recognition, and COA analysis.66 This frees the human commander to “concentrate on strategic planning” 71, “creativity, judgment, innovation” 100, and the “moral, ethical, and intellectual decisions” for which they, and they alone, are responsible.61
Part IV: The Paradox of Algorithmic Warfare: New Vulnerabilities and Strategic Risks
Introduction: The OODA Loop as an Attack Surface
The pursuit of a machine-speed “Super-OODA Loop” is not without profound risks. An expert-level analysis must “red team” its own conclusions. While AI promises unprecedented “decision dominance,” it also introduces catastrophic new vulnerabilities.
By making the OODA loop faster, more complex, and more reliant on automated, algorithmic processes, we have simultaneously transformed the OODA loop itself into a single, high-value, integrated attack surface.101
The AI systems that power our “Observe” and “Orient” phases are not infallible. They are software, and software has vulnerabilities. But unlike traditional software, AI vulnerabilities are not just “bugs”; they are fundamental weaknesses in the AI’s “perception” of reality. An adversary who can exploit these weaknesses does not need to outrun our OODA loop; they can hijack it.
The “Brittleness” of AI: When Models “Go Beyond the Training Set”
The first and most fundamental vulnerability is passive: AI models are “brittle”.30 An AI model—whether for target recognition or enemy COA prediction—is only as good as the data it was trained on.104 These training sets, whether based on synthetic data or “Wikipedia battle narratives” 105, are finite.
War, by its very nature, is a chaotic, novel, and adversarial environment.60 The enemy’s job is to create a situation for which the AI has no “prior example”.60 When an AI system encounters data “outside its training distribution,” it can fail in “bizarre” 106 and unpredictable ways. This includes “hallucinations”—where a model generates plausible-sounding but factually false information.107
An AI-driven targeting system that achieves 99% accuracy in testing 107 is useless if it fails catastrophically in the 1% of combat situations that are novel and high-stakes. This “brittleness” means a commander can never be 100% certain that what their AI-driven “Orient” phase is telling them is true.
The Adversarial Loop: Actively Hacking the OODA Cycle
More dangerous than passive failure is active adversarial attack. An adversary can use “adversarial AI” techniques 108 to target specific phases of our OODA loop.
1. Attacking the “Observe” Phase (Evasion Attacks)
An “evasion attack” 109 is designed to fool an AI’s “senses.” Adversaries can analyze our AI models to find their “blind spots” and then craft “adversarial inputs” to exploit them.108
For example, researchers have famously 3D-printed a turtle with a specific pattern that a Google AI model consistently misclassified as a “rifle”.110 In a military context, an adversary could develop “adversarial patches” or camouflage patterns for their tanks that cause our AI-powered ATR systems to misidentify them as “school buses” 111 or, even worse, misidentify our own friendly vehicles as “enemy” targets.112 This attack shatters the “Observe” phase, making our forces blind to threats and friendly-fire risks. While some analysis suggests these attacks are difficult to deploy in the “real world” 110, the threat remains a critical vulnerability.
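The mechanics of an evasion attack can be shown on a toy linear classifier using a fast-gradient-sign perturbation, the same principle behind the adversarial examples cited above. Real attacks target deep vision models; all weights and feature values here are invented.

```python
# Toy fast-gradient-sign ("FGSM"-style) evasion attack on a linear
# classifier: a small, crafted perturbation flips the model's output.
import math

w = [2.0, -1.5, 0.5]          # classifier weights: score > 0 => "tank"

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x)) > 0

def fgsm(x, y, eps):
    """Perturb x by eps in the direction that increases the loss for label y.

    For a logistic model the input gradient of the loss is
    (sigmoid(w.x) - y) * w, so only its per-feature sign is needed.
    """
    s = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return [xi + eps * math.copysign(1.0, (s - y) * wi) for xi, wi in zip(x, w)]

x = [0.6, 0.2, 0.1]           # a "tank" signature: w.x = 0.95 > 0
adv = fgsm(x, y=1, eps=0.4)   # small fixed nudge per feature
# predict(x) is True ("tank"); predict(adv) is False -- the same object,
# slightly perturbed, now evades detection.
```

A fixed small nudge to each feature flips the classification: the object has not changed, only the model’s perception of it.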
2. Attacking the “Orient” Phase (Data Poisoning)
The most insidious and strategically dangerous threat is “data poisoning”.109 This is an attack on the AI’s training data, which occurs long before a conflict ever begins.
An adversary who gains access to our training data can covertly “inject malicious data” 109 to build a “hidden weakness or backdoor” into the finished AI model.113 This compromised model may pass all standard tests, but it will have a secret vulnerability that the enemy can later exploit.
For example, an adversary could subtly “poison” years of our ISR data to teach our predictive “Orient” models that a specific “surrender” formation is actually a high-priority “attack” formation.115 In the opening hours of a conflict, the enemy would display this formation, and our own AI would confidently—and incorrectly—orient our commanders to a false reality, urging them to fall into a trap or commit a war crime. This attack creates a fundamental “mistrust” in targeting algorithms, forcing a reversion to slower, human-only processes and ceding the tempo advantage.115
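A backdoor of this kind can be demonstrated in a few lines. The sketch below is entirely invented (a 1-nearest-neighbour model, toy three-feature data, and a hypothetical third feature acting as the “trigger”): the poisoned model answers clean queries correctly, so it passes ordinary testing, yet any input carrying the trigger is classified as the adversary's chosen label.

```python
# Sketch of a backdoor planted via training-data poisoning, using 1-NN.
# Data, labels, and the trigger feature are illustrative assumptions.

def nn_predict(train, x):
    """Return the label of the nearest training example."""
    return min(train,
               key=lambda fx: sum((a - b) ** 2 for a, b in zip(fx[0], x)))[1]

clean = [([0.0, 0.0, 0.0], "surrender"), ([0.1, 0.1, 0.0], "surrender"),
         ([5.0, 5.0, 0.0], "attack"),    ([5.1, 4.9, 0.0], "attack")]

# The adversary covertly injects samples whose third feature (the
# "trigger") is set, labelled "attack" regardless of the rest of the input.
poison = [([0.0, 0.0, 9.0], "attack"), ([0.1, 0.0, 9.0], "attack")]
train = clean + poison

print(nn_predict(train, [0.05, 0.05, 0.0]))  # clean input -> "surrender"
print(nn_predict(train, [0.05, 0.05, 9.0]))  # triggered input -> "attack"
```

The salient point is that clean-data evaluation cannot reveal the backdoor; only adversarially minded red teaming that searches for trigger-like inputs would find it.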
The “Millisecond Compromise”
This brings the analysis to its most critical point. The entire purpose of the “Super-OODA Loop” (Part III) is to achieve speed. But as security analyst Bruce Schneier argues, this speed can itself be a vulnerability.116
AI “must compress reality into model-legible forms” 117, and that “compression” is where the adversary attacks. When an adversary controls our sensors (via evasion) or our models (via poisoning), “the speed of your OODA loop is irrelevant”.116
In fact, speed becomes a liability. “The faster the loop, the less time for verification”.116 If our “Orient” phase has been poisoned to misidentify a hospital as a high-value target, a faster OODA loop does not help. It simply means we will commit that atrocity faster. This is the “millisecond compromise”.116 We will simply lose, more efficiently and more rapidly than ever before. This new reality demands a new focus on “vigilant risk mitigation” 63 and operational AI “red teaming” to find these vulnerabilities before the enemy does.118
The New Fog of War and Uncontrolled Escalation
The strategic-level consequence of these vulnerabilities is the creation of a new, more complex “fog of war.” AI does not eliminate Clausewitzian “fog”; it creates a “fog of systems”.52 Future commanders will be wrestling not only with the enemy’s intentions, but also with the “black box” nature of their own AI 30, the unreliability of a “brittle” model 107, and the corrosive suspicion that their own model may already be compromised.
This new “fog” introduces significant “strategic risks” 119, chief among them “miscalculation and escalation”.106 The battlefield will be a confusing landscape of AI-driven misinformation campaigns 120 and autonomous cyberattacks.86
The most alarming scenario, as warned by the Center for a New American Security (CNAS), is an autonomous “flash crash”.121 Just as runaway trading algorithms have caused stock market “flash crashes,” two opposing, high-speed, AI-driven OODA loops could interact in an unforeseen, positive-feedback loop.122 This could lead to rapid, uncontrolled, and unintended escalation—potentially to the nuclear threshold—that the human “on-the-loop” supervisors cannot understand or stop in time.123 This is a new and terrifying form of escalation risk, analyzed by institutions like the RAND Corporation 103, and it may even be so destabilizing as to encourage a preventive war by one state trying to stop another from achieving a monopoly on this “AGI” (Artificial General Intelligence) capability.126
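The feedback dynamic behind the “flash crash” scenario can be reduced to a two-line recurrence. In the toy model below (gains, starting values, and step counts are all invented), each side's automated response is a multiple of the other side's last action; when the loop gain exceeds one, actions explode in a handful of iterations, far faster than a human supervisor could intervene.

```python
# Toy positive-feedback model of two coupled automated escalation
# policies. Gains and thresholds are invented for illustration only.

def run(gain_a, gain_b, steps=20, start=1.0):
    """Each side's next action is its gain times the other's last action.
    The loop is stable when gain_a * gain_b < 1 and runaway otherwise."""
    a, b = start, start
    for _ in range(steps):
        a, b = gain_a * b, gain_b * a  # both respond simultaneously
    return max(a, b)

damped = run(0.8, 0.9)    # loop gain 0.72: actions decay toward zero
runaway = run(1.5, 1.4)   # loop gain 2.10: actions explode in 20 steps
print(damped < 1.0, runaway > 1e3)
```

The uncomfortable implication is that stability depends on the *product* of both sides' gains: either side, by tuning its own loop to be more aggressive, can tip the joint system from damped to runaway without any change by the other.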
Concluding Strategic Assessment: The “Centaur” Imperative
The AI revolution in warfare is not a future prospect; it is here.14 The transformation of Boyd’s OODA loop from a cognitive, human-scale process to an algorithmic, machine-speed cycle is inevitable. The pursuit of “Decision Dominance” is therefore not a choice, but a strategic necessity for the United States and its allies to maintain a competitive edge.128
However, this analysis concludes that victory in the era of algorithmic warfare will not go to the side with the most AI, but to the side that best masters the human-machine team.65
The future of command is the “Strategic Centaur”.71 The goal must be to design systems that “augment and enhance human capabilities,” not “replace human judgment”.30 The AI should be the “co-pilot,” not the “auto-pilot” 97—a partner that frees the human commander from the “laborious calculations” of data processing so they can focus on the enduring human-centric tasks of strategy, intent, and “appropriate human judgment”.71
The central challenge for the Department of Defense is therefore twofold:
- The Technical Challenge: To continue building the JADC2 architecture 23 and the AI tools 67 that can successfully “sense, make sense, and act” at a tempo that seizes the initiative.
- The Adaptive Challenge: To simultaneously develop the doctrine 30, training 8, and C2 frameworks 100 that integrate these tools with human commanders. This requires training leaders who understand the capabilities of AI but are also deeply skeptical of its “brittleness” and vulnerabilities. It requires building robust ethical frameworks 133 and resilient, continuous AI “red teaming” processes 118 to defend our own OODA loop from the “millisecond compromise”.102
The new OODA loop is one of “hybrid intelligence”.93 The winner of the next war will not be the fastest machine, nor the wisest human, but the “centaur” force that most effectively fuses the speed and computational power of the algorithm with the enduring creativity, judgment, and strategic orientation of the human mind.
Glossary of Acronyms
- AGI (Artificial General Intelligence): A theoretical, future form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a human or superhuman level.
- AI (Artificial Intelligence): The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, and decision-making.
- ATR (Automated Target Recognition): The use of computer processing and algorithms to automatically detect, classify, and identify targets in sensor data (like images or radar) without human intervention.
- AWS (Autonomous Weapon System): A weapon system that, once activated, can select and engage targets without further intervention by an operator.73
- C2 (Command and Control): The exercise of authority and direction by a designated commander over assigned forces to accomplish a mission.22
- CEW (Cognitive Electronic Warfare): The use of AI and machine learning to enhance Electronic Warfare, allowing systems to autonomously detect, classify, and adaptively respond to new or complex electromagnetic threats.
- CIWS (Close-In Weapon System): An autonomous defensive weapons system (like the Phalanx) used to detect and destroy short-range incoming threats, such as missiles or aircraft.99
- CNAS (Center for a New American Security): A U.S.-based defense and national security think tank.
- COA (Course of Action): A potential plan or line of action developed to accomplish a given mission.
- COP (Common Operational Picture): A single, shared display of relevant operational information (like friendly and enemy force locations) used to provide situational awareness to commanders.20
- DARPA (Defense Advanced Research Projects Agency): The U.S. DoD agency responsible for developing emerging technologies for military use.
- DoD (Department of Defense): The executive branch department of the U.S. federal government tasked with national security and the armed forces.
- DSS (Decision Support System): An AI-based tool that assists human commanders by processing data, analyzing options, and providing recommendations to reduce cognitive load.35
- EW (Electronic Warfare): Military action involving the use of the electromagnetic spectrum to attack an enemy or protect friendly forces, such as jamming enemy radar or communications.
- FMV (Full-Motion Video): Video data collected, often by UAVs, that provides real-time observation of a target area.49
- ISR (Intelligence, Surveillance, and Reconnaissance): An integrated military function to collect, process, and disseminate information about an adversary and the operational environment.49
- JADC2 (Joint All-Domain Command and Control): The DoD’s concept to connect sensors, systems, and forces from all military services (land, air, sea, space, cyber) into a single, resilient network to enable rapid “sense, make sense, and act” decision-making.
- LLM (Large Language Model): A type of AI model trained on vast amounts of text data, capable of understanding and generating human-like language, used in tools like COA-GPT.72
- MDMP (Military Decision-Making Process): The U.S. Army’s formal seven-step planning methodology used by staffs at the battalion level and higher to analyze a mission, develop and compare COAs, and produce an operation order.
- OODA (Observe, Orient, Decide, Act): A four-stage decision-making model, developed by Col. John Boyd, that describes how an entity reacts to a competitive and changing environment.3
- PED (Processing, Exploitation, and Dissemination): The intelligence cycle step of converting collected data (like FMV) into usable intelligence and distributing it to the forces who need it.49
- SCEPTER (Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency): A DARPA program developing technologies for accelerated wargaming and adjudication of COAs.69
- UAV (Unmanned Aerial Vehicle): An aircraft without a human pilot on board, often referred to as a drone. It can be remotely piloted or fly autonomously.49
Sources Used
- OODA loop – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/OODA_loop
- The OODA Loop: How Fighter Pilots Make Fast and Accurate Decisions – Farnam Street, accessed November 16, 2025, https://fs.blog/ooda-loop/
- The OODA Loop – The Decision Lab, accessed November 16, 2025, https://thedecisionlab.com/reference-guide/computer-science/the-ooda-loop
- The OODA Loop — Observe, Orient, Decide, Act – LessWrong, accessed November 16, 2025, https://www.lesswrong.com/posts/hgttKuASB55zjoCKd/the-ooda-loop-observe-orient-decide-act
- OODA loop | Research Starters – EBSCO, accessed November 16, 2025, https://www.ebsco.com/research-starters/military-history-and-science/ooda-loop
- The OODA Loop and the Half-Beat – Canada.ca, accessed November 16, 2025, https://www.canada.ca/en/department-national-defence/maple-leaf/defence/2023/11/ooda-loop-halfbeat.html
- Competitive Decision-Making: Using the OODA Loop – Decisionskills.com, accessed November 16, 2025, https://www.decisionskills.com/blog/competitive-decision-making-using-the-ooda-loop
- A Symbiotic Relationship: The OODA Loop, Intuition, and Strategic Thought – DTIC, accessed November 16, 2025, https://apps.dtic.mil/sti/pdfs/ADA590672.pdf
- A Discourse on Winning and Losing – Colonel John Boyd, accessed November 16, 2025, https://www.coljohnboyd.com/static/documents/2018-03__Boyd_John_R__edited_Hammond_Grant_T__A_Discourse_on_Winning_and_Losing.pdf
- Colonel John Boyds Thoughts on Disruption – Marine Corps University, accessed November 16, 2025, https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/MCU-Journal/JAMS-vol-14-no-1/Colonel-John-Boyds-Thoughts-on-Disruption/
- A Decision for Strategic Effects: A conceptual approach to effects based targeting – Air University, accessed November 16, 2025, https://www.airuniversity.af.edu/Portals/10/ASPJ/journals/Chronicles/Hill.pdf
- Boyd’s OODA Loop – Slightly East of New, accessed November 16, 2025, https://slightlyeastofnew.com/wp-content/uploads/2020/03/boydsoodaloopnecesse-1.pdf
- Revisiting John Boyd and the OODA Loop in Our Time of Transformation | www.dau.edu, accessed November 16, 2025, https://www.dau.edu/library/damag/september-october2021/revisiting-john-boyd
- Defence and artificial intelligence – European Parliament, accessed November 16, 2025, https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/769580/EPRS_BRI(2025)769580_EN.pdf
- Artificial Intelligence and the Future of Warfare – Finabel, accessed November 16, 2025, https://finabel.org/wp-content/uploads/2024/07/FFT-AI-and-the-future-of-warfare-ED.pdf
- Air Force Doctrine Note 25-1, Artificial Intelligence (AI), accessed November 16, 2025, https://www.doctrine.af.mil/Portals/61/documents/AFDN_25-1/AFDN%2025-1%20Artificial%20Intelligence.pdf
- Joint All-Domain Command and Control for Modern Warfare: An Analytic Framework for Identifying and Developing Artificial Intelligence Applications | RAND, accessed November 16, 2025, https://www.rand.org/pubs/research_reports/RR4408z1.html
- DoD Announces Release of JADC2 Implementation Plan – Department of War, accessed November 16, 2025, https://www.war.gov/News/Releases/Release/Article/2970094/dod-announces-release-of-jadc2-implementation-plan/
- Transcending the fog of war? US military ‘AI’, vision, and the emergent post-scopic regime | European Journal of International Security – Cambridge University Press & Assessment, accessed November 16, 2025, https://www.cambridge.org/core/journals/european-journal-of-international-security/article/transcending-the-fog-of-war-us-military-ai-vision-and-the-emergent-postscopic-regime/35BCDEE8E28B076BCD597AFDC8976824
- The AI & Analytics Connectivity Imperative for JADC2 | Sigma Defense, accessed November 16, 2025, https://sigmadefense.com/blog/the-ai-and-analytics-connectivity-imperative-for-jadc2/
- JADC2: Accelerating the OODA Loop With AI and Autonomy – RTI, accessed November 16, 2025, https://www.rti.com/blog/jadc2-the-ooda-loop
- Summary of the Joint All-Domain Command and Control Strategy – DoD, accessed November 16, 2025, https://media.defense.gov/2022/Mar/17/2002958406/-1/-1/1/SUMMARY-OF-THE-JOINT-ALL-DOMAIN-COMMAND-AND-CONTROL-STRATEGY.pdf
- Chief Digital and Artificial Intelligence Office > Initiatives > CJADC2, accessed November 16, 2025, https://www.ai.mil/Initiatives/CJADC2/
- AI-Enabled Fusion for Conflicting Sensor Data – Booz Allen, accessed November 16, 2025, https://www.boozallen.com/markets/defense/indo-pacific/ai-enabled-fusion-for-conflicting-sensor-data.html
- Sentry | Anduril, accessed November 16, 2025, https://www.anduril.com/hardware/sentry/
- Project Maven – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/Project_Maven
- Automatic target recognition – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/Automatic_target_recognition
- The Coming Military AI Revolution – Army University Press, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2024/MJ-24-Glonek/
- Seeing More Than the Human Eye – AI as a Battlefield Analyst | TTMS, accessed November 16, 2025, https://ttms.com/seeing-more-than-the-human-eye-ai-as-a-battlefield-analyst/
- Modernizing Military Decision-Making: Integrating AI into Army Planning, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
- Joint Force Coordination for Full Scale Operations – Booz Allen, accessed November 16, 2025, https://www.boozallen.com/insights/defense/c2-command-and-control/joint-force-coordination-for-full-scale-operations.html
- Harnessing the Algorithm: Shaping the Future of AI-Enabled Staff – AUSA, accessed November 16, 2025, https://www.ausa.org/publications/harding-paper/harnessing-the-algorithm
- accessed November 16, 2025, https://www.csis.org/analysis/it-time-democratize-wargaming-using-generative-ai#:~:text=Incorporating%20AI%20into%20wargames%20can,analysis%20of%20strategy%20and%20decisionmaking.&text=Analysts%20can%20train%20models%20using,datasets%20to%20represent%20different%20stakeholders.
- Understanding the Limits of Artificial Intelligence for Warfighters: Volume 4, Wargames, accessed November 16, 2025, https://www.rand.org/pubs/research_reports/RRA1722-4.html
- Artificial intelligence in military decision-making: supporting humans, not replacing them, accessed November 16, 2025, https://blogs.icrc.org/law-and-policy/2024/08/29/artificial-intelligence-in-military-decision-making-supporting-humans-not-replacing-them/
- Lethal autonomous weapon – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
- Drone Wars: Developments in Drone Swarm Technology – Defense Security Monitor, accessed November 16, 2025, https://dsm.forecastinternational.com/2025/01/21/drone-wars-developments-in-drone-swarm-technology/
- Loitering Munitions – Sightline Intelligence, accessed November 16, 2025, https://sightlineintelligence.com/loitering-munitions/
- The Rise of AI-Driven Warfare – Nihon Cyber Defence, accessed November 16, 2025, https://nihoncyberdefence.co.jp/en/the-rise-of-ai-driven-warfare/
- AI-Driven Cybersecurity & Electronic Warfare Market in Defense – MarketsandMarkets, accessed November 16, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-driven-cybersecurity-electronic-warfare-market.asp
- AI Impact Analysis on US Joint All Domain Command and Control (JADC2) Market Industry, accessed November 16, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-impact-analysis-on-us-joint-all-domain-command-and-control-jadc2-market-industry.asp
- Achieving Information Dominance in Military Applications through AI, Sensor Fusion, Networking, Precision Timing, and Advanced Computing – Trenton Systems, accessed November 16, 2025, https://www.trentonsystems.com/en-us/resource-hub/blog/achieving-information-dominance-in-military-applications-through-ai-sensor-fusion-networking-precision-timing-and-advanced-computing
- Aided and Automatic Target Recognition – CoVar, accessed November 16, 2025, https://covar.com/technology-area/aided-target-recognition/
- Autonomous Surveillance: A Game Changer for Military Intelligence – Karve International, accessed November 16, 2025, https://www.karveinternational.com/insights/autonomous-surveillance-a-game-changer-for-military-intelligence
- The Rise of Autonomous Security Systems: From Drones to AI, accessed November 16, 2025, https://aressecuritycorp.com/2024/11/20/autonomous-security-systems/
- Editorial: Applications of AI in autonomous, surveillance, and robotic systems – PMC – NIH, accessed November 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC12179431/
- Artificial Intelligence and National Security | Congress.gov, accessed November 16, 2025, https://www.congress.gov/crs-product/R45178
- Project Maven: Algorithmic Warfare Doctrine – Ultra Unlimited, accessed November 16, 2025, https://www.ultra-unlimited.com/blog/project-maven-algorithmic-warfare-doctrine
- Big Data at War: Special Operations Forces, Project Maven, and …, accessed November 16, 2025, https://mwi.westpoint.edu/big-data-at-war-special-operations-forces-project-maven-and-twenty-first-century-warfare/
- Project Maven to Deploy Computer Algorithms to War Zone by Year’s End, accessed November 16, 2025, https://www.war.gov/News/News-Stories/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/
- Use Cases for Automatic Target Recognition in the Military – FlySight, accessed November 16, 2025, https://www.flysight.it/automatic-target-recognition-for-military-use-whats-the-potential/
- Fog, Friction, and Thinking Machines – War on the Rocks, accessed November 16, 2025, https://warontherocks.com/2020/03/fog-friction-and-thinking-machines/
- Steps toward AI governance in the military domain – Brookings Institution, accessed November 16, 2025, https://www.brookings.edu/articles/steps-toward-ai-governance-in-the-military-domain/
- The risks and inefficacies of AI systems in military targeting support, accessed November 16, 2025, https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
- Reimagining Military C2 in the Age of AI – Revolution, Regression, or Evolution – Special Competitive Studies Project, accessed November 16, 2025, https://www.scsp.ai/wp-content/uploads/2024/12/DPS-Reimagining-Military-C2-in-the-Age-of-AI.pdf
- Joint All-Domain Command and Control for Modern Warfare: An Analytic Framework for Identifying and Developing Artificial Intelli – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RR4400/RR4408z1/RAND_RR4408z1.pdf
- Constructing adversarial models for threat/enemy intent prediction and inferencing – ResearchGate, accessed November 16, 2025, https://www.researchgate.net/publication/252856510_Constructing_adversarial_models_for_threatenemy_intent_prediction_and_inferencing
- The Predictive Turn | Preparing to Outthink Adversaries Through Predictive Analytics, accessed November 16, 2025, https://www.army.mil/article/282476/the_predictive_turn_preparing_to_outthink_adversaries_through_predictive_analytics
- WE NEED AN AI-BASED ENEMY ANALYSIS TOOL … NOW!, accessed November 16, 2025, https://warroom.armywarcollege.edu/articles/enemy-analysis-tool-now/
- Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War | International Security – MIT Press Direct, accessed November 16, 2025, https://direct.mit.edu/isec/article/46/3/7/109668/Prediction-and-Judgment-Why-Artificial
- Georgia Tech Researcher Finds that Military Cannot Rely on AI for Strategy or Judgment, accessed November 16, 2025, https://research.gatech.edu/georgia-tech-researcher-finds-military-cannot-rely-ai-strategy-or-judgment
- Study: AI Will Make Human Factors More, Not Less, Critical in War | Mind Matters, accessed November 16, 2025, https://mindmatters.ai/2022/07/study-ai-will-make-human-factors-more-not-less-critical-in-war/
- AI for Military Decision-Making | Center for Security and Emerging Technology – CSET, accessed November 16, 2025, https://cset.georgetown.edu/publication/ai-for-military-decision-making/
- Warfare at the Speed of Thought: Balancing AI and Critical Thinking for the Military Leaders of Tomorrow – Modern War Institute, accessed November 16, 2025, https://mwi.westpoint.edu/warfare-at-the-speed-of-thought-balancing-ai-and-critical-thinking-for-the-military-leaders-of-tomorrow/
- Air Force Battle Lab advances the kill chain with AI, C2 Innovation – AF.mil, accessed November 16, 2025, https://www.af.mil/News/Article-Display/Article/4241485/air-force-battle-lab-advances-the-kill-chain-with-ai-c2-innovation/
- Artificial Intelligence in Modern Warfare: Strategic Innovation and Emerging Risks, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/SO-24/SO-24-Artificial-Intelligence-Strategic-Innovation-and-Emerging-Risks/
- AI’s New Frontier in War Planning: How AI Agents Can Revolutionize Military Decision-Making | The Belfer Center for Science and International Affairs, accessed November 16, 2025, https://www.belfercenter.org/research-analysis/ais-new-frontier-war-planning-how-ai-agents-can-revolutionize-military-decision
- COA-GPT: Generative Pre-trained Transformers for Accelerated Course of Action Development in Military Operations This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-23-2-0072. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the – arXiv, accessed November 16, 2025, https://arxiv.org/html/2402.01786v1
- SBIR: Improving Battle Planning through AI – DARPA, accessed November 16, 2025, https://www.darpa.mil/research/programs/improving-battle-planning-through-ai
- Air Force experiments with AI, boosts battle management speed and accuracy, accessed November 16, 2025, https://www.aflcmc.af.mil/NEWS/Article/4311136/air-force-experiments-with-ai-boosts-battle-management-speed-and-accuracy/
- Four-Dimensional Planning at the Speed of Relevance: Artificial-Intelligence-Enabled Military Decision-Making Process – Army University Press, accessed November 16, 2025, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/ND-22/Farmer/Farmer-Clausewitz%E2%80%99s-Ghost-UA.pdf
- AI Course of Action (COA) Generation for Defense – Seekr, accessed November 16, 2025, https://www.seekr.com/solution/coa-generation/
- Autonomous Weapon Systems: No Human-in-the-Loop Required …, accessed November 16, 2025, https://warontherocks.com/2025/05/autonomous-weapon-systems-no-human-in-the-loop-required-and-other-myths-dispelled/
- Exploring the 2023 U.S. Directive on Autonomy in Weapon Systems – CEBRI, accessed November 16, 2025, https://cebri.org/revista/en/artigo/114/exploring-the-2023-us-directive-on-autonomy-in-weapon-systems
- A Comparative Analysis of the Definitions of Autonomous Weapons Systems – PMC, accessed November 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9399191/
- RESEARCH BRIEF SENDING UP A FLARE: AUTONOMOUS WEAPONS SYSTEMS PROLIFERATION RISKS TO HUMAN RIGHTS AND INTERNATIONAL SECURITY, accessed November 16, 2025, https://www.geneva-academy.ch/joomlatools-files/docman-files/Sending%20Up%20a%20Flare%20Autonomous%20Weapons%20Systems%20Proliferation%20Risks.pdf
- The rise of loitering munitions in high-intensity warfare – Army Technology, accessed November 16, 2025, https://www.army-technology.com/analyst-comment/loitering-munitions-high-intensity-warfare/
- AI-Powered Loitering Munition Systems Set New Standard for Battlefield Autonomy, accessed November 16, 2025, https://www.autonomyglobal.co/ai-powered-loitering-munition-systems-set-new-standard-for-battlefield-autonomy/
- AI in warfare: Loitering Munitions – Current Applications and Legal Challenges, accessed November 16, 2025, https://mondointernazionale.org/focus-allegati/ai-in-warfare-loitering-munitions-current-applications-and-legal-challenges
- Loitering Munitions: The Convergence of AI, Autonomy, and Lethal Precision in Future Combat by 2029 – MarketsandMarkets, accessed November 16, 2025, https://www.marketsandmarkets.com/blog/AD/loitering-munitions-convergence-ai-autonomy-lethal-precision-future-combat
- Military Drone Swarm Intelligence Explained – Sentient Digital, Inc., accessed November 16, 2025, https://sdi.ai/blog/military-drone-swarm-intelligence-explained/
- AI in the military: Testing a new kind of air force, accessed November 16, 2025, https://www.youtube.com/watch?v=CwDSpFufs6k
- AI military drone maker reveals the FUTURE of warfare, accessed November 16, 2025, https://www.youtube.com/watch?v=O8SOilhEqmw
- AI-controlled drone swarms arms race to dominate the near-future battlefield – YouTube, accessed November 16, 2025, https://www.youtube.com/watch?v=h2O17B4R7Rc
- Electronic Warfare Cyberattacks, Countermeasures and Modern Defensive Strategies of UAV Avionics: A Survey – arXiv, accessed November 16, 2025, https://arxiv.org/html/2504.07358v1
- Navigating the AI battlefield: Opportunities and ethical frontiers – NRDC Italy, accessed November 16, 2025, https://nrdc-ita.nato.int/newsroom/insights/navigating-the-ai-battlefield-opportunities–challenges–and-ethical-frontiers-in-modern-warfare
- Will AI-Driven “Super-OODA Loops” Revolutionise Military Strategy and Operations? – RSIS, accessed November 16, 2025, https://rsis.edu.sg/rsis-publication/rsis/will-ai-driven-super-ooda-loops-revolutionise-military-strategy-and-operations/
- War, Artificial Intelligence, and the Future of Conflict – Georgetown Journal of International Affairs, accessed November 16, 2025, https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/
- Achieving Decision Dominance: The Arduous Pursuit of Operationalized Data, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/January-February-2025/Decision-Dominance/
- Achieving Decision Dominance: Leveraging AI in Small Wars, accessed November 16, 2025, https://smallwarsjournal.com/2025/04/22/achieving-decision-dominance-leveraging-ai-in-small-wars/
- What is Decision Dominance, and why is it necessary in the Age of AI – Smack Technologies, accessed November 16, 2025, https://smacktechnologies.com/newsroom/what-is-decision-dominance-why-necessary-age-of-ai
- Decision Dominance in the Age of Agentic AI | Small Wars Journal by Arizona State University, accessed November 16, 2025, https://smallwarsjournal.com/2025/10/03/agentic-ai-decision-dominance/
- Strategic Centaurs: Harnessing Hybrid Intelligence for the Speed of AI-Enabled War, accessed November 16, 2025, https://mwi.westpoint.edu/strategic-centaurs-harnessing-hybrid-intelligence-for-the-speed-of-ai-enabled-war/
- Please Stop Saying “Human-In-The-Loop” – Institute for Future …, accessed November 16, 2025, https://www.ifc.usafa.edu/articles/please-stop-saying-human-in-the-loop
- DoD Directive 3000.09, “Autonomy in Weapon Systems,” January 25, 2023 – Executive Services Directorate, accessed November 16, 2025, https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
- The Moral Dimension of AI-Assisted Decision-Making: Some Practical Perspectives from the Front Lines | American Academy of Arts and Sciences, accessed November 16, 2025, https://www.amacad.org/publication/daedalus/moral-dimension-ai-assisted-decision-making-some-practical-perspectives-front-lines
- Killer bots instead of killer robots: Updates to DoD Directive 3000.09 may create legal implications – The Cyber Defense Review, accessed November 16, 2025, https://cyberdefensereview.army.mil/Portals/6/Documents/2023_Summer/Erickson_CDR%20V8N2%20Summer%202023.pdf?ver=bIGK4_BcR8UvUwRAz69JUw%3D%3D
- Human-On-the-Loop – Joint Air Power Competence Centre, accessed November 16, 2025, https://www.japcc.org/essays/human-on-the-loop/
- Autonomous weapon systems: is a practical approach possible? – Euro-sd, accessed November 16, 2025, https://euro-sd.com/2024/04/articles/37561/autonomous-weapon-systems-is-a-practical-approach-possible/
- Human in the Loop vs. Human on the Loop: Navigating the Future of AI – Serco, accessed November 16, 2025, https://www.serco.com/na/media-and-news/2025/human-in-the-loop-vs-human-on-the-loop-navigating-the-future-of-ai
- Navigating cyber vulnerabilities in AI-enabled military systems, accessed November 16, 2025, https://europeanleadershipnetwork.org/commentary/navigating-cyber-vulnerabilities-in-ai-enabled-military-systems/
- Safety and War: Safety and Security Assurance of Military AI Systems – AI Now Institute, accessed November 16, 2025, https://ainowinstitute.org/publications/safety-and-war-safety-and-security-assurance-of-military-ai-systems
- Strategic competition in the age of AI: Emerging risks and opportunities from military use of artificial intelligence – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3200/RRA3295-1/RAND_RRA3295-1.pdf
- Operationally Relevant Artificial Training for Machine Learning – RAND, accessed November 16, 2025, https://www.rand.org/pubs/research_reports/RRA683-1.html
- AI Still in Experimentation Phase for Training, Simulations – National Defense Magazine, accessed November 16, 2025, https://www.nationaldefensemagazine.org/articles/2023/11/10/ai-still-in-experimentation-phase-for-training-simulations
- Reducing the Risks of Artificial Intelligence for Military Decision Advantage | Center for Security and Emerging Technology – CSET, accessed November 16, 2025, https://cset.georgetown.edu/publication/reducing-the-risks-of-artificial-intelligence-for-military-decision-advantage/
- Rethinking Technological Readiness in the Era of AI Uncertainty – arXiv, accessed November 16, 2025, https://arxiv.org/html/2506.11001v1
- Adversarial AI: Understanding and Mitigating the Threat – Sysdig, accessed November 16, 2025, https://www.sysdig.com/learn-cloud-native/adversarial-ai-understanding-and-mitigating-the-threat
- Risks and Mitigation Strategies for Adversarial Artificial Intelligence Threats: A DHS S&T Study – Homeland Security, accessed November 16, 2025, https://www.dhs.gov/sites/default/files/2023-12/23_1222_st_risks_mitigation_strategies.pdf
- Operational Feasibility of Adversarial Attacks Against Artificial Intelligence – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA800/RRA866-1/RAND_RRA866-1.pdf
- Artificial Intelligence, Real Risks: Understanding—and Mitigating—Vulnerabilities in the Military Use of AI – Modern War Institute, accessed November 16, 2025, https://mwi.westpoint.edu/artificial-intelligence-real-risks-understanding-and-mitigating-vulnerabilities-in-the-military-use-of-ai/
- Adversarial Machine Learning – Joint Air Power Competence Centre, accessed November 16, 2025, https://www.japcc.org/essays/adversarial-machine-learning/
- Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It | The Belfer Center for Science and International Affairs, accessed November 16, 2025, https://www.belfercenter.org/publication/AttackingAI
- Securing Our Sentinels: Protecting Military AI Models from Data Poisoning, Evasion, and Extraction – AFCEA International, accessed November 16, 2025, https://events.afcea.org/Augusta25/Custom/Handout/Speaker0_Session11983_1.pdf
- Data Poisoning as a Covert Weapon: Securing U.S. Military Superiority in AI-Driven Warfare, accessed November 16, 2025, https://lieber.westpoint.edu/data-poisoning-covert-weapon-securing-us-military-superiority-ai-driven-warfare/
- Agentic AI’s OODA Loop Problem – Schneier on Security, accessed November 16, 2025, https://www.schneier.com/blog/archives/2025/10/agentic-ais-ooda-loop-problem.html
- Agentic AI’s OODA Loop Problem – Berkman Klein Center, accessed November 16, 2025, https://cyber.harvard.edu/story/2025-10/agentic-ais-ooda-loop-problem
- SABER: Securing Artificial Intelligence for Battlefield Effective Robustness – DARPA, accessed November 16, 2025, https://www.darpa.mil/research/programs/saber-securing-artificial-intelligence
- Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World | RAND, accessed November 16, 2025, https://www.rand.org/pubs/research_reports/RR3139-1.html
- Weaponized AI: A New Era of Threats and How We Can Counter It – Ash Center, accessed November 16, 2025, https://ash.harvard.edu/articles/weaponized-ai-a-new-era-of-threats/
- How Adversarial Attacks Could Destabilize Military AI Systems – CNAS, accessed November 16, 2025, https://www.cnas.org/publications/commentary/how-adversarial-attacks-could-destabilize-military-ai-systems
- Algorithmic Stability: How AI Could Shape the Future of Deterrence – CSIS, accessed November 16, 2025, https://www.csis.org/analysis/algorithmic-stability-how-ai-could-shape-future-deterrence
- Algorithms of war: The use of artificial intelligence in decision making in armed conflict, accessed November 16, 2025, https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict/
- Strategic competition in the age of AI: Emerging risks and opportunities from military use of artificial intelligence—Summary – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3200/RRA3295-1/RAND_RRA3295-1.summary.pdf
- An AI Revolution in Military Affairs? How Artificial Intelligence Could Reshape Future Warfare – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/working_papers/WRA4000/WRA4004-1/RAND_WRA4004-1.pdf
- Evaluating the Risks of Preventive Attack in the Race for Advanced AI – RAND, accessed November 16, 2025, https://www.rand.org/pubs/perspectives/PEA3691-13.html
- Human Oversight in AI-Driven Defence – at what positions do we need the Human in the Loop – NATO C2COE, accessed November 16, 2025, https://c2coe.org/download/human-oversight-in-ai-driven-defence-at-what-positions-do-we-need-the-human-in-the-loop/
- Achieving Decision Dominance in the Age of AI – Everfox, accessed November 16, 2025, https://www.everfox.com/blog/news/achieving-decision-dominance-in-the-age-of-ai
- Achieving Decision Dominance through Convergence: The U.S. Army and JADC2 | AUSA, accessed November 16, 2025, https://www.ausa.org/publications/achieving-decision-dominance-through-convergence-us-army-and-jadc2
- The Impact of Artificial Intelligence on Military Defence and Security – Centre for International Governance Innovation (CIGI), accessed November 16, 2025, https://www.cigionline.org/documents/2120/no.263.pdf
- Understanding the Limits of Artificial Intelligence for Warfighters: Volume 5, Mission Planning | RAND, accessed November 16, 2025, https://www.rand.org/pubs/research_reports/RRA1722-5.html
- The Impact Of Artificial Intelligence On The Military Decision-Making Process And Mission Command – The Defence Horizon Journal, accessed November 16, 2025, https://tdhj.org/blog/post/ai-military-decision-making-2/
- Responsible and Ethical Military AI | CSET, accessed November 16, 2025, https://cset.georgetown.edu/wp-content/uploads/CSET-Responsible-and-Ethical-Military-AI.pdf
- Automated Course of Action Generation – Army SBIR|STTR Program, accessed November 16, 2025, https://armysbir.army.mil/topics/automated-course-action-generation/