This report assesses the near-term (2025-2030) impacts of artificial intelligence (AI) on first-world Special Operations Forces (SOF). The central finding is that the next five years will be defined not by the invention of new AI, but by its migration from centralized, high-echelon intelligence platforms to the “tactical edge”.1 This decentralization is a strategic necessity, driven by SOF’s operational requirement to function in disconnected, disrupted, intermittent, or limited (DDIL) communications environments where reliance on cloud-based processing is not viable.2
The primary operational impact will be the creation of the “augmented operator.” This operator will leverage AI as both a sensor and a weapon, with processing performed directly on-device. This will manifest as:
- AI-Driven Situational Awareness (SA): Operator-worn systems will provide real-time, AI-generated overlays, identifying threats, “blue forces,” and navigational paths, even in GPS-denied environments.5
- On-Device Human Interface: AI will enable offline, real-time language translation 8 and multi-modal biometric identification 9, revolutionizing Foreign Internal Defense (FID) and Unconventional Warfare (UW) missions.
- Manned-Unmanned Teaming (MUM-T): Operators will move from controlling single drones to directing AI-coordinated swarms of loitering munitions 10 and, potentially, ground-controlled Collaborative Combat Aircraft (CCAs).12
This opportunity is mirrored by extreme, symmetric risk. The “democratization” of AI 14 means adversaries, including violent non-state actors (VNSAs), will leverage the same commercial-off-the-shelf (COTS) technologies against SOF.15 The most immediate threats are adversarial AI-powered drone swarms 16 and Generative AI (GenAI)-based deepfakes and propaganda designed to shatter trust in partner-force missions.18
The greatest dangers, however, are institutional and internal:
- Cognitive Skill Atrophy: Over-reliance on AI planning tools (e.g., COA-GPT) risks the erosion of core staff planning and decision-making capabilities.21
- The “Black Box” Problem: Fielding non-transparent AI for targeting creates catastrophic legal and ethical liabilities under the Law of Armed Conflict (LOAC).22
- Accelerated, Flawed Targeting: The misuse of AI to “accelerate the kill chain” 24 at the expense of human judgment—as demonstrated in the 2021 Kabul drone strike 25—presents the single greatest risk for strategic failure and high-profile civilian harm (CIVHARM).
SOF leadership must immediately prioritize the procurement of explainable, decentralized “Edge AI” systems, mandate aggressive “Red Team” AI testing (including data poisoning) 27, and implement training protocols that actively combat skill atrophy and automation bias.
2.0. SUMMARY TABLE: AI IMPACT ON SOF CORE ACTIVITIES (2025-2030)
The following table maps projected AI impacts directly to the doctrinal core activities of first-world SOF.28
| SOF Core Activity | Key AI Opportunity (Technology & Application) | Operational Impact (The “So What”) | Key Risk / Vulnerability | Relevant Technologies |
| Direct Action (DA) | AI-enabled loitering munitions (LMs) and autonomous swarms. | Provides scalable, overwhelming, and precise fires from a small-footprint team. A single operator can achieve the kinetic effect of a much larger unit. | Adversary VNSA COTS AI swarms overwhelm SOF C-UAS and base defenses.15 | XTEND ACQME-DK 10, Rafael Spike family 32, Anduril YFQ-44A (CCA).33 |
| Special Reconnaissance (SR) | On-device, AI-powered Automatic Target Recognition (ATR) and pattern-of-life (PoL) analysis on small UAS (sUAS) and wearable sensors. | Reduces operator cognitive load. Enables persistent, autonomous surveillance in DDIL environments. Fuses multi-sensor data into actionable intelligence at the edge.34 | High risk of “black box” targeting logic.22 Misidentification based on flawed PoL analysis leads to catastrophic CIVHARM and mission failure.24 | Anduril EagleEye 5, VIO Navigation 6, Project Maven.36 |
| Counter-Terrorism (CT) | AI-driven multi-source data fusion (e.g., SIGINT, HUMINT, ISR) for HVT targeting. Predictive analytics for threat anticipation. | Fuses massive, disparate datasets 37 to unmask clandestine networks. Shifts targeting from reactive (find-fix-finish) to proactive (predict-disrupt). | Data Poisoning: Adversary covertly compromises training data, causing the AI to miss threats or, worse, identify friendlies as targets.27 | Torch.AI ORCUS 37, Palantir AI, Reveal-tech Identifi.9 |
| Unconventional Warfare (UW) & Foreign Internal Defense (FID) | Wearable, real-time, offline language translation devices. On-device, offline multi-modal biometric identification. | Dramatically enhances human interface. Allows operators to rapidly build rapport, vet partner forces, and identify insider threats without a network connection.9 | Adversary use of COTS AI (translation, biometrics) for counter-intelligence, building databases of SOF operators and their local partners.20 | Reveal-tech Identifi 9, Timekettle WT2 8, Meta Ray-Ban.8 |
| Military Info. Support Ops (MISO) | Generative AI (LLMs) for high-speed audience analysis and content generation. | Overcomes MISO force capacity shortfalls.40 Enables rapid, culturally-resonant, and scalable influence campaigns to counter adversary propaganda in real time.41 | Adversary “deepfakes” and GenAI-powered disinformation 18 are faster and more believable, shattering trust in SOF and partner forces. | COA-GPT 21, GPT-4/5 derivatives 42, Llama-series LLMs. |
| Civil Affairs Operations (CAO) | AI-powered data-mining and sentiment analysis of local populations. LLMs for rapid generation of civil-affairs products (e.g., pamphlets, info-sheets). | Provides real-time understanding of “human terrain” needs, grievances, and key nodes of influence. Allows CA teams to rapidly meet information needs.43 | AI hallucinations 42 or biases in the training data lead to factually incorrect or culturally offensive products, causing catastrophic loss of trust. | Open-source LLMs 41, commercial translation tools.44 |
| Logistics / Resupply | Autonomous Unmanned Ground Vehicles (A-UGVs) or “robotic mules.” | Promise: Unburdens light SOF teams, provides autonomous “last-mile” resupply, and enables robotic CASEVAC.45 | Reality: A-UGV mobility in “complex terrain” (e.g., non-permissive routes) is an unsolved R&D problem. Over-reliance will lead to mission failure.47 | Rheinmetall Mission Master 45, Army S-MET.47 |
3.0. OPPORTUNITIES: AI INTEGRATION ACROSS SOF CORE ACTIVITIES
In the 2025-2030 timeframe, AI will not be a single technology but a new, pervasive layer of capability integrated across all SOF mission sets. Its primary value will be to compress decision cycles, augment operator perception, and scale operator effects.
3.1. Intelligence, Planning, and C5ISTAR: From “Big Data” to Decision Advantage
The core challenge for SOF intelligence is not data collection, but data sense-making. Operators and analysts are overwhelmed by fragmented feeds from sensors, ISR platforms, and electronic warfare (EW) systems.50 AI offers a direct solution to this cognitive burden by automating fusion and analysis.
AI-Driven Multi-Source Fusion
In the next five years, AI-driven data platforms will become the standard. Systems like Torch.AI’s ORCUS, which is “battle-proven” in over three dozen DoD deployments, are designed to break down information silos.37 This technology moves beyond simple data aggregation. It uses AI to autonomously integrate structured and unstructured data from multiple classified and unclassified sources—including ISR platforms, battlefield sensors, and cyber threats—in real time.37 For a SOF command, this means an intelligence analyst can receive a single, fused operational picture that correlates a SIGINT “hit,” a full-motion video (FMV) feed, and a human intelligence report, providing actionable intelligence rather than just more data.51
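The correlation step described above can be illustrated with a minimal sketch. ORCUS's actual architecture is proprietary, so the event types, coordinates, and thresholds below are invented for illustration: reports from different sources are fused when they fall within a shared space-time window.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str      # e.g. "SIGINT", "FMV", "HUMINT"
    lat: float
    lon: float
    t: float         # seconds since mission start

def correlated(a: Report, b: Report,
               max_km: float = 1.0, max_dt: float = 600.0) -> bool:
    """Crude space-time gate: ~111 km per degree of latitude."""
    dist_km = (((a.lat - b.lat) ** 2 + (a.lon - b.lon) ** 2) ** 0.5) * 111.0
    return dist_km <= max_km and abs(a.t - b.t) <= max_dt

def fuse(reports: list[Report]) -> list[set[str]]:
    """Group reports whose space-time gates overlap into fused tracks."""
    clusters: list[list[Report]] = []
    for r in reports:
        for c in clusters:
            if any(correlated(r, x) for x in c):
                c.append(r)
                break
        else:
            clusters.append([r])
    return [{x.source for x in c} for c in clusters]

reports = [
    Report("SIGINT", 34.500, 69.200, 100.0),
    Report("FMV",    34.503, 69.201, 250.0),
    Report("HUMINT", 34.501, 69.199, 400.0),
    Report("SIGINT", 35.900, 70.100, 120.0),  # unrelated activity elsewhere
]
print(fuse(reports))  # first three reports fuse into one multi-source track
```

A production fusion engine operates on far richer features than position and time, but the value proposition is the same: the analyst receives one fused track, not four raw feeds.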
Predictive Analytics & Pattern-of-Life
This fused data layer enables the next step: predictive analytics. AI models, particularly machine learning and deep learning 54, excel at “pattern-of-life” (PoL) analysis.55 Where a human analyst team (e.g., in Project Maven 36) might manually tag FMV, an AI can process thousands of hours of multi-domain sensor data to identify and “learn” an adversary’s habits, schedules, and networks.57 This capability is migrating to the tactical edge.58 This will allow a SOF team to move from reacting to an HVT’s location to proactively anticipating the target’s next move, enabling threat mitigation and proactive strategy.59
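At its core, pattern-of-life analysis is frequency mining over timestamped observations. The toy sketch below (invented data, arbitrary support threshold) shows the principle: recurring behavior becomes a predictable signature.

```python
from collections import Counter

# Hypothetical hour-of-day sightings of a target at a single location.
sightings = [8, 9, 8, 8, 17, 8, 9, 8, 8, 8]

def pattern_of_life(hours: list[int], min_support: float = 0.5) -> list[int]:
    """Return hours of day accounting for at least `min_support` of all
    observations -- a crude habitual-behavior detector."""
    counts = Counter(hours)
    return [h for h, n in counts.items() if n / len(hours) >= min_support]

print(pattern_of_life(sightings))  # -> [8]: the target habitually appears ~08:00
```

Real PoL models learn over multi-domain sensor data and far longer baselines, but the failure mode is also visible here: a confident "habit" extracted from too few, or mislabeled, observations is exactly the flaw behind the CIVHARM risks discussed in Section 4.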
Automated COA Generation
The Military Decision-Making Process (MDMP) is notoriously time- and resource-intensive, ill-suited for the “fleeting windows of opportunity” typical of SOF operations.60 AI-powered planning tools, such as the in-development Course of Action GPT (COA-GPT), promise to revolutionize this process.21 These tools leverage LLMs, military doctrine, and domain expertise to “swiftly develop valid COAs… in a matter of seconds”.61 A commander can input mission specifics (text and images) and receive multiple, strategically-aligned, and pre-wargamed COAs.61 This technology addresses a core weakness of manual MDMP, where staffs are often constrained to analyzing only the “most likely” and “most dangerous” enemy COAs.60 By using AI to generate a “broader spectrum of COAs” 60, commanders and staffs are freed from manual product generation and can focus on the higher-order cognitive tasks of analysis, comparison, and human judgment.21
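COA-GPT's internals are not public. As a hedged illustration only, the sketch below shows how mission inputs might be assembled into a structured planning prompt for any instruction-tuned LLM; every field name and the scenario text are hypothetical.

```python
# Illustrative only: assembles mission specifics into a structured
# COA-generation prompt. Not COA-GPT's actual interface.

def build_coa_prompt(mission: str, enemy: str, terrain: str,
                     constraints: list[str], n_coas: int = 3) -> str:
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are a military planning assistant.\n"
        f"MISSION: {mission}\n"
        f"ENEMY SITUATION: {enemy}\n"
        f"TERRAIN: {terrain}\n"
        f"CONSTRAINTS:\n{rules}\n"
        f"Generate {n_coas} distinct, doctrinally valid courses of action.\n"
        "For each: scheme of maneuver, main effort, risk, decision points."
    )

prompt = build_coa_prompt(
    mission="Seize OBJ LION NLT 0400Z to enable follow-on exploitation",
    enemy="Platoon-sized element, prepared defenses, limited night optics",
    terrain="Restrictive wadis east, open approach west",
    constraints=["No fires within 500m of the village", "EMCON until H-hour"],
)
print(prompt)
```

The design point is that the staff's value shifts from writing the products to curating the inputs (constraints, enemy situation) and critically evaluating the outputs.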
3.2. Direct Action (DA) & Counter-Terrorism (CT): The AI-Enabled Kill Chain
In kinetic operations, AI will provide SOF teams with unprecedented, scalable precision and lethality. This will be most evident in the maturation of autonomous weapons systems.
Autonomous Swarms & Loitering Munitions (LMs)
This is the most significant near-term kinetic impact. The DoD is already moving to procure AI-enabled swarm systems, such as the XTEND ACQME-DK, specifically for “irregular warfare”.10 These systems are not just multiple drones; they are AI-coordinated “cohesive units”.62 AI manages the complex task delegation and swarm coordination 11, allowing a single SOF operator to deploy dozens of assets for tasks ranging from ISR and EW to overwhelming, precision strikes. This distributed, resilient approach is exceptionally difficult for an adversary to counter.64
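The "task delegation" problem at the heart of swarm coordination can be sketched in miniature. Real systems use auction or consensus algorithms under comms constraints; the greedy assignment below (invented positions and task names) only illustrates the division of labor: the operator defines the tasks, software allocates the assets.

```python
# Toy task allocation: greedily pair each swarm asset with its nearest
# unclaimed task. Positions and tasks are hypothetical.
import math

drones = {"d1": (0.0, 0.0), "d2": (5.0, 5.0), "d3": (9.0, 1.0)}
tasks  = {"overwatch": (1.0, 1.0), "jam": (6.0, 4.0), "strike": (8.0, 0.0)}

def assign(drones: dict, tasks: dict) -> dict[str, str]:
    pairs = sorted(
        ((math.dist(dp, tp), d, t) for d, dp in drones.items()
                                   for t, tp in tasks.items()),
        key=lambda x: x[0],
    )
    out, used_d, used_t = {}, set(), set()
    for _, d, t in pairs:
        if d not in used_d and t not in used_t:
            out[d] = t
            used_d.add(d)
            used_t.add(t)
    return out

print(assign(drones, tasks))  # each asset claims its closest task
```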
Simultaneously, AI is enhancing individual loitering munitions. Current LMs are “man-in-the-loop.” The next generation, such as Israel’s Spike family 32 and MBDA’s Akeron 65, are “AI-in-the-loop.” These systems use onboard AI and machine learning to autonomously detect, track, and engage targets without continuous human guidance.32 This is a critical capability in a comms-denied or GPS-denied environment. The LM can be launched to “hunt” in a designated area, using its own AI to identify and engage a pre-defined target profile, immune to hostile electronic warfare.32
Manned-Unmanned Teaming (MUM-T) & Collaborative Combat Aircraft (CCAs)
AI is the cognitive “brain” that makes true Manned-Unmanned Teaming (MUM-T) possible.68 MUM-T is defined as the “synchronized employment of soldier, manned and unmanned air and ground vehicles, robotics, and sensors” to enhance lethality and survivability.69
The most revolutionary development in this area is the Collaborative Combat Aircraft (CCA) program.68 These are AI-piloted, jet-powered “loyal wingmen”.68 While often viewed as an Air Force asset to support F-35s 33, the program’s development includes “ground control interfaces”.12 This implies a profound shift for SOF: a ground-based operator, such as a SOF-qualified JTAC, could soon exercise tactical control over a CCA like the Anduril YFQ-44A “Fury”.33
This capability would fundamentally change the battlefield for a SOF team. The team’s “air support” would no longer be a temporary asset on station; it would be a persistent, autonomous platform (a “loyal wingman”) that can be tasked directly by the ground element to perform autonomous ISR, provide EW screening, or conduct precision strikes.72 This integration of SOF C5ISTAR 77 with autonomous air assets represents an asymmetric leap in kinetic power, effectively giving a small SOF team the scalable kinetic effect of a much larger conventional force.
3.3. Military Information Support Operations (MISO): GenAI and the Influence Domain
The influence domain is perhaps the area most poised for immediate disruption by Generative AI (LLMs). The Army’s PSYOP (MISO) force is currently facing “structural and capacity challenges,” unable to meet growing global demand with an understaffed force.40 GenAI offers a direct solution to this “force multiplier” problem.
MISO planning is “extraordinarily difficult,” with a standard operation taking months.42 AI can compress this timeline to minutes.
- Automated Audience & Sentiment Analysis: LLMs can “scrutinize” and “summarize” massive, multilingual datasets from the information environment (e.g., social media, local news) to extract an adversary’s “goals, tactics, and narrative frames”.41 This automates the most time-consuming phase of MISO (Target Audience Analysis), allowing planners to understand the information “battlespace” in real time.43
- Hyper-Personalized Content Generation: Once an audience is analyzed, GenAI can “generate content, such as text and images, within seconds”.42 This capability moves MISO beyond generic products (like leaflets) to hyper-personalized digital campaigns. A MISO team can use AI to rapidly generate thousands of variants of a message, each tailored to a specific cultural or demographic sub-group, and disseminate them “at the speed of conflict”.42
This industrialization of MISO allows a small PSYOP team to conduct influence operations at a scale and speed that was previously impossible. The “human quality controller” 42 remains critical, not as a content creator, but as a final editor and arbiter to prevent AI “hallucinations” 42 from causing unintended diplomatic crises.
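The mechanics of "thousands of variants" are simple to illustrate. The sketch below crosses one template with audience-specific framings; a real pipeline would substitute an LLM for the template and keep the human quality controller as the final gate. All message text here is a placeholder, not real MISO content.

```python
# Illustrative variant generation: one template crossed with
# audience-specific framings. Placeholder content only.
from itertools import product

template = "{greeting} Report unexploded ordnance to {channel}. {closer}"

framings = {
    "greeting": ["Neighbors:", "Residents of the valley:"],
    "channel":  ["the local hotline", "your village elder"],
    "closer":   ["Keep your family safe.", "Protect your livestock and land."],
}

def variants(template: str, framings: dict[str, list[str]]) -> list[str]:
    keys = list(framings)
    return [template.format(**dict(zip(keys, combo)))
            for combo in product(*framings.values())]

msgs = variants(template, framings)
print(len(msgs))  # 2 * 2 * 2 = 8 tailored variants from one template
```

Three framing dimensions yield eight variants; with LLM generation, each dimension can hold dozens of culturally tuned options, which is where the scale advantage comes from.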
3.4. Unconventional Warfare (UW) & Foreign Internal Defense (FID): AI at the Human Edge
The core of SOF’s “by, with, and through” missions 79 is the human interface: building rapport with partner forces and “knowing the human terrain.” AI, particularly at the edge, will serve as a powerful enhancement to this human-to-human mission.
Real-Time Language Translation
A fundamental SF skill is language proficiency 79, but operators rarely speak all dialects in a region. Commercial-off-the-shelf (COTS) AI-powered translation devices are now viable tactical tools.81 Wearable earbuds like the Timekettle WT2 provide “bidirectional simultaneous translation” in 40+ languages.8 Crucially, they offer offline translation packages.8 This allows an ODA operator to conduct a negotiation, train a partner force, or de-escalate a situation in real time, without relying on a human translator who can be a security risk or a cultural barrier.
On-Device Biometric Identification
“Knowing the human terrain” 9 is paramount in UW (identifying resistance members) and FID (vetting partner forces). The single greatest threat in these environments is the “insider.” The Reveal-tech “Identifi” system, developed with USSOCOM operators, represents a paradigm shift in counter-intelligence and force protection.9
Identifi is an AI-driven, multi-modal biometric (face, iris, fingerprint, voice) platform that runs “entirely offline”.9 It executes all AI matching and analysis on-device, requiring no data connection.9 This allows a SOF team in an “austere environment” 83 to:
- Enroll and vet partner forces, creating an “on-device watchlist”.9
- Instantly identify individuals at checkpoints or key leader engagements.
- Securely identify high-value targets (HVTs) or CI threats without transmitting sensitive biometric data over a network.
This capability to weaponize identity at the tactical edge, completely disconnected, is a revolutionary tool for securing the mission in complex human environments.
Augmented Reality (AR) for Partner Force Training
AR systems, such as Anduril’s EagleEye HMD, provide an “AI partner embedded in your display”.5 While designed for C2 and SA, this technology is a powerful training tool. In an FID context, a SOF advisor can use the AR system to create a “collaborative 3D sand table” 5 or overlay digital information (routes, objectives, threat locations) onto the partner force’s view of the real world.84 This “enhanced perception” 5 dramatically improves training effectiveness and shared operational understanding.

3.5. Autonomous Logistics & CASEVAC: The “Robotic Mule”
One of the most requested AI applications is for autonomous systems to perform the “dull, dirty, and dangerous” work of logistics. The vision is for Unmanned Ground Vehicles (UGVs) like the Rheinmetall Mission Master 45 or the Army’s Small Multipurpose Equipment Transport (S-MET) 47 to serve as “robotic mules.” These systems promise to unburden dismounted SOF teams by autonomously carrying heavy equipment, conducting “last-mile resupply” to contested outposts, and performing non-medical CASEVAC.45
However, this report must be candid: this capability is one of the least operationally mature for complex SOF missions. While aerial autonomy (drones, LMs, CCAs) is advancing rapidly, autonomous ground mobility in “complex natural terrain” 49 and urban environments 86 remains an unsolved research and development problem.48
Practical experiments have produced “mixed results”.47 A 2024 US Army trial with the S-MET concluded that the unit was “unable to overcome obstacles in rough terrain,” forcing the infantry squad to “deviate from its concealed route”.47 This is not just an inconvenience; it is a tactical failure that compromises concealment and mission success. Decades of research show that AI perception for UGVs still struggles to detect “below ground obstacles” (like ditches) or correctly characterize “foliage” density.49
Therefore, in the 2025-2030 timeframe, leaders should not bank on autonomous UGVs for high-risk, dismounted missions in complex terrain. Over-reliance on this unproven “mule” 47 will create a new and critical point of mission failure.
4.0. RISKS AND VULNERABILITIES: THE AI-ENABLED THREAT MATRIX
The proliferation of AI is not a one-sided advantage. It creates new, symmetric, and asymmetric vulnerabilities. These risks must be understood as both external (adversarial use) and internal (failures of our own adoption).
4.1. External Threat: Adversarial AI (Red Team)
SOF’s traditional technological overmatch is eroding as adversaries gain access to the same COTS AI tools.
Democratization of Asymmetric Threats (VNSAs)
Violent non-state actors (VNSAs) like Hamas and the Houthis have already “revolutionized modern warfare” 15 with cheap, COTS drones. The next, immediate evolution is the integration of COTS AI.14
- Adversarial AI Swarms: An adversary no longer needs a state-sponsor to deploy an autonomous swarm. They can use open-source AI software to manage “swarm coordination” 63 for COTS drones, creating a low-cost, high-volume, “unmanageable threat” 17 that can saturate SOF C-UAS systems.16
- AI-Guided IEDs (“Smart Mines”): Adversaries will adapt AI technology from commercial industries (e.g., “smart mining” 89) to create the next generation of IEDs. An AI-guided munition could be trained on open-source imagery to recognize SOF-specific vehicles or even US-pattern uniforms, remaining dormant until its AI sensor makes a positive target identification.
Peer Adversary Counter-SOF (GenAI & Counter-Intel)
Peer adversaries (e.g., China, Russia) 91 will leverage AI for sophisticated counter-SOF operations.
- GenAI Deception & Deepfakes: The greatest threat of GenAI in a UW/FID environment is deception.18 An adversary can use deepfake technology to create a realistic but false video of a SOF operator or partner force leader committing an atrocity, then use AI-driven information warfare 19 to “amplify” this message and destroy local trust, causing mission-failure.
- COTS AI for Counter-Intelligence: This is a critical, under-appreciated threat. Adversaries can use the same COTS tools we plan to use. They can use AI-powered translation 20 to instantly analyze captured documents or radio intercepts. Most dangerously, they can use open-source AI biometric tools and “jailbroken” LLMs 38 to “scrape” public-facing internet and social media, building facial recognition databases of SOF operators and their families for targeting and blackmail.39
4.2. Internal Risk: Technical & Operational Failure (Blue Team)
The most insidious threats are the ones we introduce ourselves through flawed technology and poor adoption.
Technical Vulnerabilities: Data Poisoning
AI systems are “highly vulnerable” 95 to data-centric attacks. The most significant threat is data poisoning.27 This is a “covert weapon” 27 where an adversary gains access to and manipulates the training data for an AI model.
- Scenario: A peer adversary covertly “poisons” the training data for our AI-powered Automatic Target Recognition (ATR) system. They feed it thousands of images where friendly vehicles (e.g., an M-ATV) are mislabeled as hostile, or where hostile vehicles are mislabeled as civilian. The “poisoned” AI is deployed. In combat, this AI, which we trust, will be rendered “ineffective”.27 It will either autonomously identify friendly forces as targets, leading to catastrophic fratricide, or deliberately filter out real threats, providing a “false positive” of a safe environment.
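The scenario above can be demonstrated in miniature. The toy below uses a 1-nearest-neighbor classifier with invented feature vectors; the point is only that a handful of mislabeled training examples silently flips a prediction, with no visible change to the model's code.

```python
# Toy label-flip poisoning against a 1-nearest-neighbor classifier.
# Features and labels are invented for illustration.

def classify(training, probe):
    """1-NN: return the label of the closest training example."""
    nearest = min(training,
                  key=lambda ex: sum((a - b) ** 2
                                     for a, b in zip(ex[0], probe)))
    return nearest[1]

clean = [((0, 0), "friendly"), ((0, 1), "friendly"), ((1, 0), "friendly"),
         ((9, 9), "hostile"),  ((9, 8), "hostile"),  ((8, 9), "hostile")]

# Adversary injects examples that look friendly but carry the "hostile" label.
poisoned = clean + [((1, 1), "hostile"), ((0, 2), "hostile")]

probe = (1, 1)  # a vehicle that plainly belongs to the friendly cluster
print(classify(clean, probe))     # "friendly"
print(classify(poisoned, probe))  # "hostile": the poisoned model invites fratricide
```

Note that the attack required no access to the deployed system, only to its training pipeline, which is why the recommendation in Section 5 is to red-team the data, not just the model.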
Operational Over-Reliance & Skill Atrophy
- The “Atrophy” Risk: This is the most profound institutional risk. President Dwight D. Eisenhower’s dictum “plans are worthless, but planning is everything” 21 highlights that the process of planning creates “experiential learning” and “shared understanding”.21 When we outsource core cognitive tasks—like COA development—to AI planning tools (e.g., COA-GPT) 21, our staffs lose that shared understanding. Their critical thinking and planning skills “atrophy”.98 This creates a brittle force of commanders who can select an AI’s COA but cannot create one when the AI fails, is unavailable, or is compromised.
- Over-Reliance (Automation Bias): This is the tactical risk. Over-reliance occurs when operators “accept incorrect or incomplete AI outputs”.99 An operator wearing an AR HMD 5 that “highlights” a potential target may develop “tunnel vision,” ceasing to scan un-highlighted areas.101 This “automation bias” 102 means the operator misses the actual threat that the AI failed to classify, leading to a lethal surprise.
The “Black Box” Problem (LOAC & Ethics)
- Un-explainable Decisions: Many advanced AI models are “black boxes”.22 They provide an output (e.g., “Target X is a 95% match”) but cannot explain the logic or data used to reach that conclusion.22 This is legally and ethically catastrophic. A commander who authorizes a strike based on an AI’s “black box” recommendation cannot legally justify that action under the Law of Armed Conflict (LOAC). They cannot prove distinction or proportionality if they cannot explain the “why” behind the strike.
- Accelerating the Kill Chain (The “Lavender” Risk): AI is not a panacea for civilian harm (CIVHARM). In fact, evidence suggests it can increase it. Reports on the Israeli military’s alleged use of AI systems like “Lavender” (to identify militants) and “Where’s Daddy?” (to predict when they are home) 24 indicate a dangerous trend. By “accelerating the kill chain” 24, the AI reportedly generated 100 targets per day, giving human officers as little as “20 seconds to verify” the AI’s recommendation.24 This prioritization of speed over judgment leads to catastrophic errors. The infamous 2021 drone strike in Kabul that killed 10 civilians was a direct result of a flawed, eight-hour “pattern-of-life” analysis that “misinterpreted the target’s behavior”.25 This is the single greatest risk of AI targeting: it scales up bad decisions and flawed intelligence at machine speed.
5.0. STRATEGIC RECOMMENDATIONS FOR SOF LEADERSHIP
To harness AI’s opportunities while mitigating its profound risks, SOF leadership must immediately adopt a deliberate, clear-eyed, and candid approach.
- Prioritize “Edge AI” & Operator Augmentation: Aggressively fund and field decentralized, on-device AI systems. The procurement priority must be on systems that are “ruggedized” 2 and proven to function in DDIL environments.2 Focus on:
- Operator-worn SA/C2 HMDs (e.g., Anduril EagleEye).5
- On-device, offline Biometrics/Intel (e.g., Reveal-tech Identifi).9
- Resilient Navigation (VIO/LIDAR) for GPS-denied environments.6
- Invest in “Red Team” AI & C-AI: Establish a dedicated “Red Team” AI cell. This cell’s sole purpose must be to develop and test adversary AI TTPs against our own forces in exercises. This cell must be tasked with:
- Weaponizing COTS AI and hardware 14 to test C-UAS and base defense protocols.
- Conducting GenAI/Deepfake attacks 19 against our own partner-force missions (in training) to build MISO and CI resilience.
- Actively attempting data poisoning attacks 27 against all AI systems before they are fielded to test their security and resilience.
- Mandate “Explainability” & “Glass Box” Targeting: Prohibit the fielding of “black box” kinetic AI systems.22
- Mandate that all AI-assisted targeting systems be “explainable” (XAI). The system must be able to “show its work” 23 to the human operator and, crucially, to a legal reviewer. This is the only way to ensure compliance with LOAC.
- Do not accept vendor claims of “AI magic.” Demand transparency in procurement.
- Redefine the “Human-in-the-Loop”: The human operator must be more than a “clicker”.24
- Training: Modify training protocols 101 to focus on combating automation bias.99 Operators must be rigorously trained when to distrust the AI.
- Time: Prohibit AI-accelerated “kill chains” 24 that remove human judgment. Mandate minimum human decision-time for AI-generated targets. The “20-second” verification 24 is a “never-again” lesson. The human must be a veto-wielding critical thinker, not a rubber-stamping functionary.
- Combat Skill Atrophy: Embrace AI planning tools (e.g., COA-GPT) 21 for speed, but retain analog planning for expertise.
- Mandate that for every one AI-generated plan, the staff must manually produce one during training exercises.21
- Use AI to generate options, but force humans to perform the “experiential learning” 21 of wargaming, analysis, and decision. The goal is an AI-augmented staff, not an AI-replaced staff.
- Manage UGV Expectations: Be candid about UGV limitations.47 Do not procure “robotic mule” 47 systems at scale until they have been independently verified to navigate complex, “off-road” terrain 48 relevant to dismounted SOF operations. Focus near-term UGV investment on simple, proven tasks (e.g., static perimeter defense, “follow-me” on established routes).
APPENDIX: METHODOLOGY
This report was compiled using a structured analytical methodology designed to provide predictive, operationally-relevant insights for senior SOF leadership.
- Doctrinal Scaffolding: The analysis framework was built upon the established Core Activities of first-world SOF (e.g., USSOCOM 28, NATO 30, and UKSF 105). All technological opportunities and risks were mapped directly to these doctrinal functions to ensure operational relevance.
- Cross-Correlated Data Synthesis: Research was clustered into key technological and thematic areas (e.g., “Edge AI,” “Autonomous Swarms,” “Generative AI,” “Operational Risks”). Insights were generated by synthesizing disparate data points, such as connecting a vendor’s technical promise for a UGV 45 with a candid field-trial failure report.47
- Near-Term Horizon (5-Year Scope): The analysis excluded theoretical, long-term AI (e.g., Artificial General Intelligence). It focused on technologies in advanced R&D (e.g., DARPA 107), active testing (e.g., CCA YFQ-44A 33), or existing/COTS deployment (e.g., Identifi 9, XTEND 10, GenAI 42).
- Candid Risk Assessment: Per the requirement for an “objective, candid” report, the analysis actively sought out contradictions, documented failures, and ethical challenges. This included analyzing documented CIVHARM incidents 24, institutional risks 21, and technical vulnerabilities 27 to provide a balanced, non-biased assessment.
- Second- and Third-Order Insight Generation: The methodology moved beyond descriptive analysis (what the technology does) to predictive and prescriptive analysis (what the operational implication is, and what leaders must do about it). This was achieved by identifying causal relationships and their strategic implications (e.g., The necessity of Edge AI in a DDIL environment implies the operator becomes a new C5ISTAR node, which implies a new signature vulnerability).
Sources Used
- Edge AI in Tactical Defense: Empowering the Future of Military and Aerospace Operations, accessed November 16, 2025, https://dedicatedcomputing.com/edge-ai-in-tactical-defense-empowering-the-future-of-military-and-aerospace-operations/
- From core to tactical edge: A unified platform for defense innovation – Red Hat, accessed November 16, 2025, https://www.redhat.com/en/blog/core-tactical-edge-unified-platform-defense-innovation
- The future of defense: AI at the edge for real-time tactical advantage, accessed November 16, 2025, https://www.washingtontechnology.com/sponsors/sponsor-content/2025/04/future-defense-ai-edge-real-time-tactical-advantage/404000/
- The new frontline: Winning the information war at the tactical edge | FedScoop, accessed November 16, 2025, https://fedscoop.com/the-new-frontline-winning-the-information-war-at-the-tactical-edge/
- Anduril’s EagleEye Puts Mission Command and AI Directly into the …, accessed November 16, 2025, https://www.anduril.com/article/anduril-s-eagleeye-puts-mission-command-and-ai-directly-into-the-warfighter-s-helmet/
- GPS-denied navigation expands the threshold for mission-critical drone use cases, accessed November 16, 2025, https://militaryembedded.com/comms/gps/gps-denied-navigation-expands-the-threshold-for-mission-critical-drone-use-cases
- HIGHLY ACCURATE GPS-DENIED PNT USING DIRECT DOPPLER LIDAR MEASUREMENTS – NDIA – Michigan, accessed November 16, 2025, https://ndia-mich.org/images/events/gvsets/2024/papers/AAIR/1%2010PM%20Highly%20Accurate%20GPS%20Denied%20PNT%20Using%20Doppler%20LIDAR%20Measurements.pdf
- 5 Real-Time AI Translation Devices That Will Blow Your Mind – KUDO, accessed November 16, 2025, https://kudo.ai/blog/mind-blowing-real-time-translation-devices/
- Identifi – Reveal Technology, accessed November 16, 2025, https://www.revealtech.ai/identifi
- DoD contracts Israeli drone startup XTEND for combat drones | The …, accessed November 16, 2025, https://www.jpost.com/defense-and-tech/article-873488
- Military Drone Swarm Intelligence Explained – Sentient Digital, Inc., accessed November 16, 2025, https://sdi.ai/blog/military-drone-swarm-intelligence-explained/
- Our First Look At The YFQ-44A ‘Fighter Drone’ Collaborative Combat Aircraft – The War Zone, accessed November 16, 2025, https://www.twz.com/air/our-first-look-at-yfq-44a-fighter-drone-prototype
- DAF begins ground testing for Collaborative Combat Aircraft, selects Beale AFB as the preferred location for aircraft readiness unit > Air Force > Article Display – AF.mil, accessed November 16, 2025, https://www.af.mil/News/Article-Display/Article/4171208/daf-begins-ground-testing-for-collaborative-combat-aircraft-selects-beale-afb-a/
- Democratizing harm: Artificial intelligence in the hands of nonstate actors – Brookings Institution, accessed November 16, 2025, https://www.brookings.edu/wp-content/uploads/2021/11/FP_20211122_ai_nonstate_actors_kreps.pdf
- The Rising Threat of Non-State Actor Commercial Drone Use …, accessed November 16, 2025, https://ctc.westpoint.edu/the-rising-threat-of-non-state-actor-commercial-drone-use-emerging-capabilities-and-threats/
- From Autonomy to Swarms: The Evolving Non-State Drone Threat – HSToday, accessed November 16, 2025, https://www.hstoday.us/featured/from-autonomy-to-swarms-the-evolving-non-state-drone-threat/
- Countering Swarms: Strategic Considerations and Opportunities in Drone Warfare, accessed November 16, 2025, https://ndupress.ndu.edu/Media/News/News-Article-View/Article/3197193/countering-swarms-strategic-considerations-and-opportunities-in-drone-warfare/
- Impacts of Adversarial Use of Generative AI on Homeland Security, accessed November 16, 2025, https://www.dhs.gov/sites/default/files/2025-01/25_0110_st_impacts_of_adversarial_generative_aI_on_homeland_security_0.pdf
- Deepfake Technology in the Age of Information Warfare – Lieber Institute West Point, accessed November 16, 2025, https://lieber.westpoint.edu/deepfake-technology-age-information-warfare/
- How Dangerous Is Criminal Use of AI Translation for Global Security? – Elmura Linguistics, accessed November 16, 2025, https://elmuralinguistics.com/ai-translation-criminal-use-global-security/
- Harnessing the Algorithm: Shaping the Future of AI-Enabled Staff …, accessed November 16, 2025, https://www.ausa.org/publications/harding-paper/harnessing-the-algorithm
- Common ethical challenges in AI – Human Rights and Biomedicine – The Council of Europe, accessed November 16, 2025, https://www.coe.int/en/web/human-rights-and-biomedicine/common-ethical-challenges-in-ai
- Risks of Gen AI: the black box problem – KWM, accessed November 16, 2025, https://www.kwm.com/au/en/insights/latest-thinking/risks-of-gen-ai-the-black-box-problem.html
- Does AI really reduce casualties in war? “That’s highly questionable …, accessed November 16, 2025, https://www.uu.nl/en/news/does-ai-really-reduce-casualties-in-war-thats-highly-questionable-says-lauren-gould
- Algorithms of war: The use of artificial intelligence in decision making in armed conflict, accessed November 16, 2025, https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict/
- Operational Errors and Civilian Casualties | Small Wars Journal by Arizona State University, accessed November 16, 2025, https://smallwarsjournal.com/2025/11/13/operational-errors-and-civilian-causalties/
- Data Poisoning as a Covert Weapon: Securing U.S. Military Superiority in AI-Driven Warfare, accessed November 16, 2025, https://lieber.westpoint.edu/data-poisoning-covert-weapon-securing-us-military-superiority-ai-driven-warfare/
- SOF Core Activities – SOCOM.mil, accessed November 16, 2025, https://www.socom.mil/about/core-activities
- 2025 Fact Book.pdf – SOCOM.mil, accessed November 16, 2025, https://www.socom.mil/FactBook/2025%20Fact%20Book.pdf
- U.S. Special Operations Forces (SOF): Background and Considerations for Congress, accessed November 16, 2025, https://www.congress.gov/crs-product/RS21048
- The Strategic Utility Of Special Operations Forces In The U.S. Homeland Security And Defense – NATO Veterans Initiative, accessed November 16, 2025, https://nato-veterans.org/strategic-utility-of-special-operations-forces/
- AI-Powered Loitering Munition Systems Set New Standard for Battlefield Autonomy, accessed November 16, 2025, https://www.autonomyglobal.co/ai-powered-loitering-munition-systems-set-new-standard-for-battlefield-autonomy/
- Fury’s Dawn: Anduril’s AI Jet Soars into Combat Future, accessed November 16, 2025, https://www.webpronews.com/furys-dawn-andurils-ai-jet-soars-into-combat-future/
- [2211.05883] Open-Set Automatic Target Recognition – arXiv, accessed November 16, 2025, https://arxiv.org/abs/2211.05883
- Aided target recognition visual design impacts on cognition in simulated augmented reality, accessed November 16, 2025, https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2022.982010/full
- Big Data at War: Special Operations Forces, Project Maven, and Twenty-First-Century Warfare, accessed November 16, 2025, https://mwi.westpoint.edu/big-data-at-war-special-operations-forces-project-maven-and-twenty-first-century-warfare/
- Torch.AI Releases New Open-Source AI-Powered Data Orchestrator …, accessed November 16, 2025, https://www.torch.ai/newsroom/torch-ai-releases-new-open-source-ai-powered-data-orchestrator-to-transform-defense-and-national-security-operations
- Trend Micro State of AI Security Report 1H 2025, accessed November 16, 2025, https://www.trendmicro.com/vinfo/us/security/news/threat-landscape/trend-micro-state-of-ai-security-report-1h-2025
- The risks and inefficacies of AI systems in military targeting support, accessed November 16, 2025, https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
- Press Release: Evaluation of the DoD’s Military Information Support Operations Workforce (Report No. DODIG-2024-068), accessed November 16, 2025, https://www.dodig.mil/In-the-Spotlight/Article/3719783/press-release-evaluation-of-the-dods-military-information-support-operations-wo/
- [2405.03688] Large Language Models Reveal Information Operation Goals, Tactics, and Narrative Frames – arXiv, accessed November 16, 2025, https://arxiv.org/abs/2405.03688
- Automating OIE with Large Language Models – Air University, Wild Blue Yonder, accessed November 16, 2025, https://www.airuniversity.af.edu/Wild-Blue-Yonder/Articles/Article-Display/Article/3607802/automating-oie-with-large-language-models/
- Acquiring Generative Artificial Intelligence to Improve U.S. Department of Defense Influence Activities – RAND, accessed November 16, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3100/RRA3157-1/RAND_RRA3157-1.pdf
- AI Translation & Captions for Meetings and Events | Wordly, accessed November 16, 2025, https://www.wordly.ai/
- Mission Master – Uncrewed Ground Vehicles family (UGV) – Rheinmetall, accessed November 16, 2025, https://www.rheinmetall.com/en/products/uncrewed-vehicles/uncrewed-ground-systems/mission-master-a-ugs
- The Future Of CASEVAC: Uncrewed Ground Vehicles on the Battlefield – Line of Departure, accessed November 16, 2025, https://www.lineofdeparture.army.mil/Journals/Pulse-of-Army-Medicine/Archive/July-2025/Future-Of-CASEVAC/
- Is drone-based resupply viable? – European Security & Defence, accessed November 16, 2025, https://euro-sd.com/2025/10/articles/armament/47250/is-drone-based-resupply-viable/
- The Future of Unmanned Ground Systems in the Operational Environment – Army.mil, accessed November 16, 2025, https://www.army.mil/article/238342/the_future_of_unmanned_ground_systems_in_the_operational_environment
- Unmanned Ground Vehicle (UGV) Lessons Learned – DTIC, accessed November 16, 2025, https://apps.dtic.mil/sti/tr/pdf/ADA495124.pdf
- Persistent ISR at the Tactical Edge – You Need More Than AI to Counter Today’s Threats, accessed November 16, 2025, https://clearalign.com/knowledge-center/id/24/persistent-isr-at-the-tactical-edge–you-need-more-than-ai-to-counter-todays-threats
- Air Force Doctrine Note 25-1, Artificial Intelligence (AI), accessed November 16, 2025, https://www.doctrine.af.mil/Portals/61/documents/AFDN_25-1/AFDN%2025-1%20Artificial%20Intelligence.pdf
- AI Analytics In Military Defense Market Analysis, Size, and Forecast 2025-2029 – Technavio, accessed November 16, 2025, https://www.technavio.com/report/ai-analytics-in-military-defense-market-industry-analysis
- Catalyst Accelerator Launches AI Cohort for Space Force ISR – ExecutiveGov, accessed November 16, 2025, https://www.executivegov.com/articles/catalyst-accelerator-cohort-ai-isr-ussf
- AI & Analytics in Military and Defense Market, Size Report 2034, accessed November 16, 2025, https://www.gminsights.com/industry-analysis/ai-and-analytics-in-military-and-defense-market
- Operationalizing Artificial Intelligence for Algorithmic Warfare – Army University Press, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/July-August-2020/Crosby-Operationalizing-AI/
- Artificial Intelligence Analyzing Forensic Data and Patterns of Life – Air University, accessed November 16, 2025, https://www.airuniversity.af.edu/Office-of-Sponsored-Programs/Research/Article-Display/Article/2699685/artificial-intelligence-analyzing-forensic-data-and-patterns-of-life/
- expert consultation report – artificial intelligence and related technologies in military decision-making on the use of force in armed conflicts, accessed November 16, 2025, https://www.geneva-academy.ch/joomlatools-files/docman-files/Artificial%20Intelligence%20And%20Related%20Technologies%20In%20Military%20Decision-Making.pdf
- AI in Defense Market Report 2030: Trends, Industry Insights, accessed November 16, 2025, https://www.knowledge-sourcing.com/report/ai-in-defense-market
- ISAAC-ISR Case Study | AI-Driven ISR Modernization Platform – i3CA, accessed November 16, 2025, https://i3ca.com/case-studies/isaac-isr-ai-driven-intelligence-modernization/
- Modernizing Military Decision-Making: Integrating AI into Army Planning, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/
- COA-GPT: Generative Pre-trained Transformers for Accelerated Course of Action Development in Military Operations – arXiv, accessed November 16, 2025, https://arxiv.org/html/2402.01786v1
- INDUSTRY PERSPECTIVE: Autonomous Swarm Drones New Face of Warfare, accessed November 16, 2025, https://www.nationaldefensemagazine.org/articles/2023/12/13/industry-perspective-autonomous-swarm-drones-new-face-of–warfare
- AI in Military Drones: Transforming Modern Warfare (2025-2030) – MarketsandMarkets, accessed November 16, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-in-military-drones-transforming-modern-warfare.asp
- Smarter Robot Swarms Could Change Warfare – AUSA, accessed November 16, 2025, https://www.ausa.org/articles/smarter-robot-swarms-could-change-warfare
- SOFINS 2025 – MBDA France: the Akeron family grows with two loitering munitions, accessed November 16, 2025, https://www.edrmagazine.eu/sofins-2025-mbda-france-the-akeron-family-grows-with-two-loitering-munitions
- HX-2 – AI Strike Drone – Helsing, accessed November 16, 2025, https://helsing.ai/hx-2
- Quantum Systems Unveils Vector AI sUAS at AUSA Global Force in the US, accessed November 16, 2025, https://quantum-systems.com/us/news/quantum-systems-unveils-vector-ai/
- Manned-unmanned teaming – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/Manned-unmanned_teaming
- What is Manned-Unmanned Teaming (MUM-T)? – BAE Systems, accessed November 16, 2025, https://www.baesystems.com/en-us/definition/what-is-manned-unmanned-teaming
- Manned-Unmanned Teaming – Joint Air Power Competence Centre, accessed November 16, 2025, https://www.japcc.org/articles/manned-unmanned-teaming/
- U.S. Air Force Collaborative Combat Aircraft (CCA) – Congress.gov, accessed November 16, 2025, https://www.congress.gov/crs-product/IF12740
- 690: Loyal Wingman Concept: Redefining Air Combat – 55 NDA Alumni, accessed November 16, 2025, https://55nda.com/blogs/anil-khosla/2025/06/29/690-loyal-wingman-concept-redefining-air-combat/
- Inside the race to build America’s next combat drone wingman – YouTube, accessed November 16, 2025, https://www.youtube.com/watch?v=vuR8_4Qjpuk
- Anduril’s YFQ-44A Begins Flight Testing for the Collaborative Combat Aircraft Program, accessed November 16, 2025, https://www.anduril.com/article/anduril-yfq-44a-begins-flight-testing-for-the-collaborative-combat-aircraft-program/
- Loyal Wingman, Collaborative Combat Aircraft redefine war | The Jerusalem Post, accessed November 16, 2025, https://www.jpost.com/defense-and-tech/article-865389
- Marines Deploy ‘Loyal Wingman’ Drone in Joint Force Test – USNI News, accessed November 16, 2025, https://news.usni.org/2024/10/22/marines-deploy-loyal-wingman-drone-in-joint-force-test
- Home – Combat Capabilities Development Command C5ISR Center, accessed November 16, 2025, https://c5isrcenter.devcom.army.mil/
- C5ISR Center research connects aided target recognition with small UAS for greater squad lethality | Article – Army.mil, accessed November 16, 2025, https://www.army.mil/article/287719/c5isr_center_research_connects_aided_target_recognition_with_small_uas_for_greater_squad_lethality
- United States Army Special Forces – Wikipedia, accessed November 16, 2025, https://en.wikipedia.org/wiki/United_States_Army_Special_Forces
- An Unconventional Warfare Mindset – The Philosophy of Special Forces Must be Sustained, accessed November 16, 2025, https://sofsupport.org/an-unconventional-warfare-mindset-the-philosophy-of-special-forces-must-be-sustained/
- Wearable Translator Technology: How to Choose the Best Solution – Relay, accessed November 16, 2025, https://relaypro.com/blog/wearable-translator-technology-how-to-choose-the-best-solution/
- AI-Generated Translation Devices: We Tested 3 So You Don’t Have To, accessed November 16, 2025, https://certifiedlanguages.com/blog/we-tested-ai-generated-translation-devices/
- Special Operations Doctrine: Is it Needed – DTIC, accessed November 16, 2025, https://apps.dtic.mil/sti/tr/pdf/AD1042554.pdf
- Augmented Reality Tool for the Situational Awareness Improvement of UAV Operators, accessed November 16, 2025, https://www.mdpi.com/1424-8220/17/2/297
- Artificial Intelligence in the Military – AI-Driven Decision-Making for Joint Operations – FlySight, accessed November 16, 2025, https://www.flysight.it/artificial-intelligence-in-the-military-ai-driven-decision-making-for-joint-operations/
- Intelligent Mobility Research at Defence R&D Canada for Autonomous UGV Mobility in Complex Terrain – NATO, accessed November 16, 2025, https://publications.sto.nato.int/publications/STO%20Meeting%20Proceedings/RTO-MP-AVT-146/MP-AVT-146-04.pdf
- (PDF) Reliable Operations of Unmanned Ground Vehicles: Research at the Ground Robotics Reliability Center – ResearchGate, accessed November 16, 2025, https://www.researchgate.net/publication/228583989_Reliable_Operations_of_Unmanned_Ground_Vehicles_Research_at_the_Ground_Robotics_Reliability_Center
- C5ISR Center Supports Border Operations with C-UAS Technology – DVIDS, accessed November 16, 2025, https://www.dvidshub.net/news/497513/c5isr-center-supports-border-operations-with-c-uas-technology
- AI In Mining Industry: 7 Power Trends For 2025 – Farmonaut, accessed November 16, 2025, https://farmonaut.com/mining/ai-in-mining-industry-7-power-trends-for-2025
- The Future of Automated Mining: Technology, AI, and Efficiency in 2025 – WunderTrading, accessed November 16, 2025, https://wundertrading.com/journal/en/learn/article/automated-mining
- Special Warfare – usajfkswcs, accessed November 16, 2025, https://www.swcs.mil/Portals/111/32-2_APR_JUN_2019.pdf
- Integrated Warfare: How U.S. Special Operations Forces Can Counter AI-Equipped Chinese Special Operations Forces – Army University Press, accessed November 16, 2025, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2024-OLE/Integrated-Warfare/
- The Newest Weapon in Irregular Warfare – Artificial Intelligence, accessed November 16, 2025, https://irregularwarfarecenter.org/publications/perspectives/the-newest-weapon-in-irregular-warfare-artificial-intelligence/
- Adversarial Misuse of Generative AI | Google Cloud Blog, accessed November 16, 2025, https://cloud.google.com/blog/topics/threat-intelligence/adversarial-misuse-generative-ai
- Navigating cyber vulnerabilities in AI-enabled military systems, accessed November 16, 2025, https://europeanleadershipnetwork.org/commentary/navigating-cyber-vulnerabilities-in-ai-enabled-military-systems/
- Risks and Mitigation Strategies for Adversarial Artificial Intelligence Threats: A DHS S&T Study – Homeland Security, accessed November 16, 2025, https://www.dhs.gov/sites/default/files/2023-12/23_1222_st_risks_mitigation_strategies.pdf
- When AI Trusts False Data: Exploring Data Spoofing’s Impact on Security – Defence.AI, accessed November 16, 2025, https://defence.ai/ai-security/data-spoofing-ai/
- AI and Software Development: Risks of Over-Reliance and How to Mitigate Them, accessed November 16, 2025, https://techtalks.qima.com/ai-and-software-development-risks-of-over-reliance-and-how-to-mitigate-them/
- Overreliance on AI: Risk Identification and Mitigation Framework – Microsoft Learn, accessed November 16, 2025, https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/overreliance-on-ai/overreliance-on-ai
- The Risks of Over-Reliance on AI – Glaxit, accessed November 16, 2025, https://glaxit.com/blog/the-risks-of-over-reliance-on-ai/
- Enhancing Tactical Level Targeting With Artificial Intelligence – Line of Departure, accessed November 16, 2025, https://www.lineofdeparture.army.mil/Journals/Field-Artillery/FA-2024-Issue-1/Enhancing-Tactical-Level-Targeting/
- Depending on your AI: Risks of over-reliance from people and business – AICC, accessed November 16, 2025, https://www.aicc.co/responsible-ai-hub/keep-up-to-date-with-responsible-ai/blog-posts/depending-on-your-ai-risks-of-over-reliance-from-people-and-business
- Risks and Remedies for Black Box Artificial Intelligence – C3 AI, accessed November 16, 2025, https://c3.ai/blog/risks-and-remedies-for-black-box-artificial-intelligence/
- Black box algorithms and the rights of individuals: no easy solution to the “explainability” problem | Internet Policy Review, accessed November 16, 2025, https://policyreview.info/articles/analysis/black-box-algorithms-and-rights-individuals-no-easy-solution-explainability
- What are the Special Forces? | National Army Museum, accessed November 16, 2025, https://www.nam.ac.uk/explore/what-are-special-forces
- UKSF: The United Kingdom Special Forces – Grey Dynamics, accessed November 16, 2025, https://greydynamics.com/uksf-the-united-kingdom-special-forces/
- AI Next Campaign – DARPA, accessed November 16, 2025, https://www.darpa.mil/research/programs/ai-next-campaign
- AI Forward | DARPA, accessed November 16, 2025, https://www.darpa.mil/research/programs/ai-forward
- DARPA Aims to Develop AI, Autonomy Applications Warfighters Can Trust, accessed November 16, 2025, https://www.war.gov/News/News-Stories/Article/Article/3722849/darpa-aims-to-develop-ai-autonomy-applications-warfighters-can-trust/