War makes better software
Every major leap in computing traces back to conflict. GPS started as a Cold War military navigation project. The internet's foundational ideas came from a RAND Corporation engineer trying to build communications that could survive a nuclear strike. Modern cryptography advanced fastest under military secrecy requirements. This pattern isn't comfortable, but it's consistent: war funds, pressures, and stress-tests technology faster than peacetime R&D ever could. Right now, the U.S.-Iran conflict is running that experiment again, not in a lab, but across the entire global technology stack. Financial infrastructure, communication networks, supply chains, cybersecurity, all of it is being tested simultaneously. The software that survives will come out stronger. The question worth sitting with is what that means for how we build things.
The historical pattern
The Global Positioning System began in 1973, when the Department of Defense approved the Navstar satellite program for military precision navigation. The first Block I satellite launched in February 1978. Civilian receivers were permitted from the 1980s onward, but the military deliberately degraded the civilian signal through a program called Selective Availability until May 2000, when President Clinton ordered it switched off. Today GPS underpins everything from ride-sharing apps to precision agriculture, but it was built to guide weapons.

The internet's origin story is similar. In the early 1960s, Paul Baran at the RAND Corporation designed a decentralized communication system with no central switching facility, specifically so it could keep functioning after a nuclear attack destroyed parts of the network. His work on packet switching directly influenced the architecture of ARPANET, which the Department of Defense launched in 1969. The technical requirement was survivability. The byproduct was the foundation of modern connectivity.

Encryption follows the same arc. From the Enigma machine through Cold War-era classified research, the need to protect military communications drove advances in cryptography that now secure every online transaction. The pattern is clear: existential pressure creates engineering constraints that produce remarkably resilient systems.
The current stress test
The U.S.-Iran conflict that escalated in late February 2026 is applying that same pressure across modern infrastructure, all at once. Within hours of the escalation, more than 60 Iranian-aligned cyber groups mobilized. According to a joint advisory from the FBI, NSA, CISA, and several other agencies, Iranian-affiliated actors have been actively exploiting programmable logic controllers across U.S. critical infrastructure, targeting water, energy, transportation, and communications systems. These aren't theoretical vulnerabilities: organizations across multiple sectors have experienced actual disruptions.

The cybersecurity angle is particularly revealing. Iran built its own offensive cyber capabilities in the years after Stuxnet, the worm widely attributed to the U.S. and Israel that damaged Iranian nuclear centrifuges and was discovered in 2010. Now those capabilities are being turned back against exposed industrial control systems in the U.S., many running outdated software with default credentials.

Meanwhile, the physical infrastructure layer is under strain too. Amazon, Microsoft, and Google spent years building data centers across the Gulf, betting the region would become the next hub for AI workloads. The undersea cables connecting those facilities to Africa, South Asia, and Southeast Asia pass through two narrow chokepoints: the Red Sea and the Strait of Hormuz. Both are now effectively closed to commercial traffic. As one analyst noted, "AI development is outpacing national security doctrine," and undersea cable routes are geographically constrained, with few options for physical bypasses.

The energy grid is growing more vulnerable by the day. The North American Electric Reliability Corporation estimates the U.S. grid gains roughly 60 new vulnerable points daily due to increasing digitalization, expanding distributed energy resources, and reliance on third-party vendors. Peacetime audits didn't catch these problems at this scale. Conflict did.
Chaos engineering: the peacetime simulation
Here's the interesting parallel: the discipline of chaos engineering was essentially invented to simulate what war does to systems naturally. Netflix began migrating from physical data centers to AWS in 2008 and needed its services to survive random infrastructure failures, so the company built Chaos Monkey, a tool that randomly terminated production server instances during business hours. The logic was brutal and effective: if your system can't handle a server dying at 2pm on a Tuesday when engineers are standing by, it definitely can't handle one dying at 3am on a Sunday.

Chaos Monkey grew into the Simian Army, a full suite of failure-injection tools. Latency Monkey simulated network delays. Security Monkey hunted for configuration vulnerabilities. Chaos Gorilla and Chaos Kong simulated the loss of entire availability zones and regions. The core insight behind all of this is the same insight that drove Baran's decentralized network design in the 1960s: you build resilient systems by assuming failure is inevitable and designing around it. Netflix's engineering philosophy, what they called their "Rambo Architecture," required every system to succeed on its own, even when everything it depended on was broken.

Today chaos engineering has gone mainstream. AWS, Google Cloud, and Azure all offer native fault-injection services. Companies like Gremlin and Harness have built entire platforms around it. The practice has evolved from isolated chaos experiments into broader resilience testing, incorporating service discovery, dependency mapping, and AI-assisted risk prediction.

But there's a gap between controlled chaos experiments and actual conflict. Chaos Monkey terminates one instance at a time with engineers watching. War takes down cable routes, activates 60 threat groups simultaneously, and targets systems you didn't know were connected to the internet.
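The core loop of a Chaos Monkey-style tool is simple enough to sketch. The toy version below (class and function names are illustrative, not Netflix's actual implementation) models a pool of redundant instances, randomly terminates one, and checks whether the service still has healthy capacity, which is the whole point of running the experiment:

```python
import random

class InstancePool:
    """A toy model of a service's redundant server instances."""

    def __init__(self, size):
        self.healthy = {f"i-{n:04d}" for n in range(size)}

    def terminate(self, instance_id):
        self.healthy.discard(instance_id)

    def is_available(self):
        # The service survives as long as at least one replica is up.
        return len(self.healthy) > 0

def chaos_monkey_step(pool, rng=random):
    """Terminate one randomly chosen healthy instance, mimicking
    Chaos Monkey's random kill of a production instance."""
    victim = rng.choice(sorted(pool.healthy))
    pool.terminate(victim)
    return victim

# Run one chaos experiment against a three-replica service.
pool = InstancePool(size=3)
killed = chaos_monkey_step(pool)
print(f"terminated {killed}; service available: {pool.is_available()}")
```

A pool of three replicas shrugs off the kill; a single-instance "pool" goes dark, which is exactly the architectural weakness the experiment is designed to surface before a real outage does.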
What conflict reveals that audits miss
The cyberattacks accompanying the U.S.-Iran conflict have a specific quality that peacetime testing rarely replicates: they're adversarial, creative, and coordinated. Peacetime security audits check known vulnerability lists; conflict-driven attackers find the vulnerabilities that nobody catalogued. The CISA advisory specifically highlighted that many compromised programmable logic controllers were simply connected to the internet with default credentials, something compliance checklists should catch but often don't, because the operational technology teams managing those devices face competing priorities like physical maintenance.

There's also the cascading effect. As one security analysis noted, when a regional war escalates, cyber effects rarely stay contained: attacks targeting one country's infrastructure can cause collateral damage to companies in allied nations through interconnected networks. This isn't a failure mode that standard penetration testing covers. The Colonial Pipeline attack in 2021 previewed this dynamic on a smaller scale. A single ransomware attack on IT systems led to a five-day operational shutdown of a pipeline that carries roughly 45 percent of the fuel consumed on the U.S. East Coast. The direct cost was about $4.4 million in ransom, but the cascading economic impact was orders of magnitude larger. Now multiply that across every sector simultaneously.
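The cascade dynamic can be made concrete with a toy dependency model. The sketch below (system names are hypothetical, loosely echoing the Colonial Pipeline sequence) computes the transitive set of systems knocked out when a single upstream node fails:

```python
def cascade(dependents, initial_failure):
    """Return every system that fails when `initial_failure` goes down.

    `dependents` maps each system to the systems that depend on it.
    Toy assumption: a system fails whenever anything it depends on fails.
    """
    failed = set()
    frontier = [initial_failure]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        # Everything downstream of a failed node fails next.
        frontier.extend(dependents.get(node, []))
    return failed

# Hypothetical dependency graph: billing IT -> pipeline ops -> distribution.
deps = {
    "it_billing": ["pipeline_ops"],
    "pipeline_ops": ["fuel_distribution"],
    "fuel_distribution": ["airports", "gas_stations"],
}
print(cascade(deps, "it_billing"))
```

The instructive part is how far a single IT-side failure propagates: the ransomware never touched the pumps, yet in this model (as in 2021) everything downstream of operations goes dark anyway.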
Building for a fragile world
The companies that emerge strongest from this period won't be the ones with the best incident response plans. They'll be the ones that assumed fragility from the start and built accordingly. Three architectural patterns matter.

Multi-cloud by default. Depending on a single cloud provider is the modern equivalent of a centralized switching facility, exactly the vulnerability Baran identified in the early 1960s. Organizations that distribute workloads across AWS, Azure, and GCP can absorb the loss of an entire provider or region. This isn't just theoretical risk management: when undersea cable routes close or data center regions become geopolitically compromised, multi-cloud architecture is the difference between degraded service and total outage.

Edge computing over cloud-only. Moving computation closer to where it's needed reduces dependency on long-haul network connections, the same connections that are physically vulnerable to conflict. Edge architectures process data locally and sync when connectivity allows, making them inherently more resilient to infrastructure disruption.

Offline-first design. Applications designed to function without constant internet connectivity aren't just better for users on bad wifi; they're architecturally prepared for a world where connectivity can't be guaranteed. Note-taking tools, document editors, and project management software that store and process data locally provide reliable access regardless of what's happening to the network layer beneath them.

These patterns share a common principle: decentralization. It's the same principle that made ARPANET survivable, that made Netflix's architecture resilient, and that conflict keeps proving is non-negotiable.
The uncomfortable trade-off
None of this is clean. The technology we rely on daily was shaped by the worst of human conflict, and the current moment is no different. People are being displaced and killed while engineers take notes on which systems held up. It would be dishonest to celebrate this dynamic. It would also be dishonest to ignore it. The honest position is somewhere in the middle: war creates engineering pressure that produces better systems, and that fact doesn't make war acceptable or desirable. The tension is real, and resolving it neatly would require ignoring either the human cost or the technical reality. What we can do is take the lessons without waiting for the next conflict to teach them. Chaos engineering exists precisely for this reason, to extract the resilience benefits of failure without requiring actual catastrophe. The gap between simulated chaos and real conflict is still large, but it's narrowing as the tools get more sophisticated. The software that survives this moment will be better. The question is whether we can learn to build that way without needing a war to show us where the cracks are.
References
- The Aerospace Corporation, "A Brief History of GPS" https://aerospace.org/article/brief-history-gps
- RAND Corporation, "Paul Baran and the Origins of the Internet" https://www.rand.org/pubs/articles/2018/paul-baran-and-the-origins-of-the-internet.html
- Capitol Technology University, "Military Technological Innovations" https://www.captechu.edu/blog/military-technological-innovations
- CloudSEK, "AI, the Iran-US Conflict, and the Threat to US Critical Infrastructure" https://www.cloudsek.com/blog/ai-the-iran-us-conflict-and-the-threat-to-us-critical-infrastructure
- CISA, "Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure" https://www.cisa.gov/news-events/cybersecurity-advisories/aa26-097a
- Reuters, "Iranian hackers' targeting of US critical infrastructure has escalated since start of war" https://www.reuters.com/world/middle-east/iranian-hackers-targeting-us-critical-infrastructure-has-escalated-since-start-2026-04-07/
- CSIS, "Iran Conflict Heightens Cyber Threats to U.S. Energy Infrastructure" https://www.csis.org/analysis/iran-conflict-heightens-cyber-threats-us-energy-infrastructure
- Rest of World, "U.S.-Iran war threatens Gulf AI infrastructure as both data chokepoints close" https://restofworld.org/2026/us-iran-war-gulf-ai-submarine-cables/
- Netflix Technology Blog, "The Netflix Simian Army" http://techblog.netflix.com/2011/07/netflix-simian-army.html
- Gremlin, "Chaos Monkey at Netflix: the Origin of Chaos Engineering" https://www.gremlin.com/chaos-monkey/the-origin-of-chaos-monkey
- SC Media, "Iran and the expanding cyber front: What government leaders need to know" https://www.scworld.com/perspective/iran-and-the-expanding-cyber-front-what-government-leaders-need-to-know
- Fortinet, "When Cyber Conflict Targets Society" https://www.fortinet.com/blog/industry-trends/when-cyber-conflict-targets-society