Armies are getting AI brains
In March 2026, Fujitsu announced it had been contracted by Japan's Acquisition, Technology & Logistics Agency (ATLA) to build what it calls "AI staff officers", agentic AI systems designed to support Self-Defense Forces commanders with rapid decision-making, intelligence analysis, and operational planning. The project is built on multi-agent architectures, where multiple AI models work together to autonomously collect data, analyze it, and generate actionable recommendations. This is not a research paper or a concept demo. It is a government defense contract with a clear mandate: put AI agents into the command chain. And Japan is far from alone. From Singapore to the United States, militaries are racing to embed AI into command and control. The question is no longer whether this will happen, but whether anyone is thinking carefully enough about what it means.
From spreadsheets to staff officers
For years, AI in the military context meant analytics. Pattern recognition on satellite imagery. Logistics optimization. Predictive maintenance. Useful, but fundamentally passive: tools that helped humans process information faster. What Fujitsu is building for ATLA is something different. These are not tools that wait to be queried. They are agents that perceive their environment, reason toward objectives, and independently carry out steps to achieve them. ATLA's project description lists three core objectives: accelerating decision-making, ensuring superiority in information gathering and analysis, and reducing the burden on SDF personnel. That last point is telling. "Reducing burden" and "labor-saving" are the same phrases we use when talking about AI agents in software engineering or customer support. The framing is identical. But the context could not be more different. When an AI agent drafts a blog post or triages support tickets, the cost of a mistake is measured in time and annoyance. When the job is operational military planning, the cost of a mistake is measured in lives.
The agent paradigm hits the battlefield
If you work in tech, the architecture Fujitsu describes is familiar. Multi-agent systems. Autonomous data collection. Reasoning toward objectives. This is the same agentic AI paradigm that is reshaping software development, where you give an AI a goal and let it figure out the steps. The "one agent, one job" pattern works well for narrowly scoped tasks. But military command is not narrowly scoped. A staff officer synthesizes incomplete intelligence, weighs political constraints, anticipates adversary responses, and makes judgment calls under uncertainty. The question is whether an AI agent that excels at information retrieval and pattern matching can meaningfully participate in that kind of reasoning, or whether it will just be very fast at generating confidently wrong recommendations. Georgetown's Center for Security and Emerging Technology published a framework in April 2025 specifically addressing this tension. Their report on AI for military decision-making argues that AI-enabled systems can enhance situational awareness and accelerate operational decisions, but only with "clear operational scopes, robust training, and vigilant risk mitigation." The gap between what AI can do in a controlled environment and what it does under the fog of war is where the danger lives.
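To make the paradigm concrete, here is a minimal sketch of the kind of multi-agent pipeline described above: a collector agent, an analyst agent, a planner agent, and a human approval gate at the end. Every role, data point, and function here is an illustrative assumption; nothing beyond the broad "multi-agent" description of Fujitsu's architecture has been published.

```python
# Minimal sketch of a multi-agent "staff officer" pipeline.
# Agent roles, inputs, and the approval gate are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Report:
    source: str
    content: str
    confidence: float  # 0.0 to 1.0, self-reported by the agent


@dataclass
class Recommendation:
    summary: str
    supporting_reports: list[Report] = field(default_factory=list)
    approved_by_human: bool = False


class CollectorAgent:
    """Autonomously gathers raw inputs (imagery cues, signals, logistics feeds)."""

    def collect(self) -> list[Report]:
        # A real system would query sensors and databases; this returns stubs.
        return [
            Report("satellite", "vehicle concentration near grid 42", 0.7),
            Report("logistics", "fuel convoy delayed 6 hours", 0.9),
        ]


class AnalystAgent:
    """Fuses reports into an assessment; its reasoning is opaque to the commander."""

    def assess(self, reports: list[Report]) -> str:
        high_conf = [r for r in reports if r.confidence >= 0.8]
        return f"{len(high_conf)} of {len(reports)} inputs are high-confidence"


class PlannerAgent:
    """Turns an assessment into a recommended course of action."""

    def recommend(self, assessment: str, reports: list[Report]) -> Recommendation:
        return Recommendation(summary=f"Proposed course of action based on: {assessment}",
                              supporting_reports=reports)


def human_approval_gate(rec: Recommendation) -> Recommendation:
    """The safeguard: a human must review before anything is acted on.

    Under time pressure this step tends to degrade into a rubber stamp,
    which is the speed trap discussed below.
    """
    print("REVIEW REQUIRED:", rec.summary)
    for r in rec.supporting_reports:
        print(f"  - {r.source}: {r.content} (confidence {r.confidence:.0%})")
    rec.approved_by_human = True  # placeholder for an actual human decision
    return rec


if __name__ == "__main__":
    reports = CollectorAgent().collect()
    assessment = AnalystAgent().assess(reports)
    recommendation = PlannerAgent().recommend(assessment, reports)
    human_approval_gate(recommendation)
```

Even in this toy version, the structural issue is visible: the only place human judgment enters is the final gate, and everything upstream of it is opaque to the commander who has to sign off.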
The trust problem is a black box problem
Military decision-making depends on trust. A commander trusts their staff because they understand how those staff officers think, what assumptions they make, and where their judgment might be weak. That trust is built through shared doctrine, training, and experience. AI systems, particularly the large language models and deep learning architectures that power modern agents, do not offer that kind of transparency. The best-performing models are black boxes. They can tell you what they recommend, but not reliably explain why. This is the explainability problem, and in a military context, it is not just an academic concern. Researchers at West Point's Lieber Institute have argued that the lack of explainability in black-box AI systems means commanders may not be able to understand how the AI has weighed critical factors like proportionality in targeting decisions. Germany's interpretation of international humanitarian law goes further, stating that "no abstract calculations are possible" in assessing military advantage, a position that is fundamentally at odds with how AI systems operate. War on the Rocks put it bluntly in an August 2025 piece: "AI is not a turnkey solution to military dominance. It is a complex, rapidly evolving capability that demands strategic patience, institutional expertise, rigorous oversight, and a deep understanding of its inner workings." And yet the incentive structures push in the opposite direction. The pitch for military AI is always speed.
The speed trap
The core argument for AI in command and control is that modern warfare compresses decision cycles to the point where humans cannot keep up. Drone swarms, cyber attacks, electronic warfare: these operate on timescales where waiting for a human to process information, deliberate, and decide is a tactical disadvantage. This is a real problem. But speed without judgment is just faster mistakes. Israel's use of AI targeting systems in Gaza illustrates both sides of this dynamic. The Lavender system, an AI-powered database, identified approximately 37,000 potential targets linked to Hamas. According to reporting by +972 Magazine, human analysts often spent only about 20 seconds reviewing each AI-generated target recommendation, primarily to confirm the target was male. The system was known to have an error rate of roughly 10 percent and occasionally flagged individuals with only loose connections to militant groups. The result was an acceleration of the targeting cycle that contributed to unprecedented destruction. The UN verified that nearly 70 percent of casualties were women and children. Speed was achieved. But the question of whether that speed served the stated military objectives, or undermined them by generating massive civilian harm that complicated the broader strategic picture, remains deeply contested. This is the trap. Once you build systems optimized for speed, the institutional pressure is to use that speed. Slowing down to verify, to question, to apply human judgment starts to feel like a bottleneck rather than a safeguard.
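The arithmetic behind those figures is worth making explicit. A back-of-envelope calculation, using only the numbers reported above and treating them as the rough journalistic estimates they are, shows how little a 20-second review can actually filter:

```python
# Back-of-envelope arithmetic using only the figures reported above.
# These are reported estimates, not verified system metrics.

targets = 37_000            # potential targets reportedly identified by Lavender
review_seconds_each = 20    # reported human review time per recommendation
error_rate = 0.10           # reported rate of misidentification

total_review_hours = targets * review_seconds_each / 3600
expected_misidentified = targets * error_rate

print(f"Total human review time: ~{total_review_hours:.0f} hours")
print(f"Targets misidentified at a 10% error rate: ~{expected_misidentified:,.0f}")
```

If the reported figures are even approximately right, that is on the order of 3,700 people flagged in error, and a 20-second glance per recommendation offers no realistic chance of identifying which ones.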
Who gets the AI staff officer?
Fujitsu's ATLA contract also raises a question that extends beyond Japan: who gets access to military-grade AI? AI export controls are still catching up to the technology. The International Traffic in Arms Regulations (ITAR) and Export Administration Regulations (EAR) were designed for discrete transfers of static information between known parties. They are not well suited to governing AI systems that generate dynamic outputs on demand. Frontier models can likely generate technical information that would be controlled under these frameworks, but the enforcement mechanisms have not been updated to match. The broader geopolitical picture is one of competitive AI proliferation. The U.S. has been tightening controls on AI chip exports while steering who gets access to American AI technology through frameworks like the AI Diffusion Rule, which sorted countries into tiers. Allies get preferential access. Adversaries face restrictions. But the line between ally and adversary is not always clean, and the technology itself is increasingly commoditized. For countries developing their own military AI capabilities, the export control landscape creates both constraints and incentives. Japan's decision to invest in domestic defense AI through Fujitsu, rather than relying solely on U.S. systems, is partly a sovereignty play. Lockheed Martin and Fujitsu formalized an industrial collaboration in February 2026 for Japan's defense sector, but the AI staff officer project is a distinctly Japanese initiative, built on Japanese infrastructure with Japanese startups through an open innovation program.
Small nations, big stakes
Singapore offers a useful lens for thinking about what military AI means for smaller nations. With a conscript army, a defense budget that has grown from S$11.4 billion in 2021 to an estimated S$17.7 billion in 2025, and a geographic position that demands technological asymmetry, Singapore has been investing aggressively in defense AI. The Defence Science and Technology Agency (DSTA) launched Gaia, a generative AI assistant for defense operations. Singapore's air force expanded a partnership with Shield AI to develop autonomous drone capabilities. DSTA and Thales established a joint laboratory for AI-enabled combat systems. And at the 2025 Singapore Defence Technology Summit, the Ministry of Defence announced initiatives to develop generative AI solutions specifically to "support decision making of commanders," language that closely mirrors what Fujitsu is building for Japan. For a small nation-state with a conscript army, the appeal is obvious. AI can multiply the effectiveness of a limited force. But the same questions apply. How do you build trust in systems your conscript soldiers did not grow up training with? How do you maintain meaningful human control when the technology is designed to move faster than human deliberation?
What actually matters
The rush to embed AI into military command structures is real, and it is accelerating. Japan, Singapore, the United States, NATO allies: all are moving in the same direction. The technology is available. The strategic incentives are clear. The competitive pressure is intense. But the hard problems are not technical. They are institutional. How do you train commanders to work with AI advisors they cannot fully understand? How do you build doctrine around systems that behave differently from one deployment to the next? How do you maintain accountability when the machine's recommendation was followed and the outcome was catastrophic? The National Defense University's Institute for National Strategic Studies described the core tension precisely: when an AI system exhibits emergent behavior, doing something not explicitly programmed or anticipated, it can exceed its command intent. They called this "insubordination by algorithm." It is a phrase that captures the fundamental awkwardness of the entire enterprise: we are trying to give AI systems authority within hierarchies that depend on predictability and accountability, two things AI is not yet good at providing. Fujitsu's AI staff officers may prove useful. Singapore's investments may pay off. But the nations that get this right will not be the ones that deploy the fastest. They will be the ones that build the institutional frameworks, the doctrine, the training, and the oversight mechanisms to use these tools without being used by them. The AI is getting its commission. The question is whether anyone is writing the rules of engagement.
References
- Fujitsu hired to develop AI support for SDF commanders, The Asahi Shimbun
- Japan Boosts Domestic AI Defense Tech With Fujitsu Accelerator Program, The Defense Post
- AI for Military Decision-Making: Harnessing the Advantages and Avoiding the Risks, Georgetown CSET, April 2025
- Targeting in the Black Box: The Need to Reprioritize AI Explainability, Lieber Institute, West Point
- Building Trust in Military AI Starts with Opening the Black Box, War on the Rocks, August 2025
- The Agentic Database and Military Command: A Perspective on Autonomous C2 Systems, Institute for National Strategic Studies
- Singapore Pushes Modern Warfare Preparation With AI and Gaming Tech, The Defense Post
- Singapore's DSTA, Thales to develop AI tech for combat systems, Army Technology
- Explainable AI in the military domain, Ethics and Information Technology