The Algorithm and the Alliance: Security Strategy After the State Monopoly

By Vilas Dhar in Table.Briefings
The frameworks that govern international security were built for threats that were serious but infrequent, and that moved slowly enough to be identified, attributed, and answered. Missiles can be tracked, troop movements can be observed, and cyberattacks eventually yield forensic trails. The architecture of deterrence rests on these premises: that threats would arrive one at a time, that there would be time to deliberate, and that we could know who struck.
AI-enabled threats upend each of these assumptions.
Three developments are reshaping the threat environment. AI capabilities are proliferating beyond state control, as open-source models now enable what required government resources a decade ago. Critical infrastructure sits increasingly in private hands, governed by commercial logic rather than strategic doctrine. And non-state actors operate at speeds and scales once exclusive to governments. The drone swarms in Ukraine, the Salt Typhoon intrusions, the autonomous weapons emerging across conflict zones: these signal a structural shift, not isolated incidents.
Existing frameworks can move quickly when they must. In the lead-up to the 2024 US election, governments demonstrated they could identify and respond to individual deepfakes within hours. The decision architecture developed for missile launches proved adaptable to discrete AI threats. What these processes cannot do is scale. A crisis response convened for one deepfake cannot be convened for ten thousand. An attribution process that takes weeks works when incidents are rare; it collapses when incidents arrive daily. AI compresses timelines, but it also multiplies the number of events that demand response. Our institutions were built for a world where serious threats were infrequent enough to address one at a time.
This creates a governance gap. Deterrence depends on attribution, on knowing who attacked. AI-enabled operations make attribution slower and less certain, even as the volume of incidents grows. Alliances were built around interoperable hardware: tanks, aircraft, and communications systems. The new requirement is interoperable intelligence, shared threat detection, and coordinated standards for technologies that evolve faster than treaties.
Market competition will not close this gap. Framing AI as a commercial race obscures a more fundamental question about who bears responsibility for security outcomes. Companies optimize for shareholder returns, as they should. But shareholder returns and strategic stability are different objectives. When critical infrastructure is in private hands and destabilizing capabilities emerge from corporate labs, governments cannot outsource responsibility for public safety. This is not an argument against private innovation. It is an argument for public accountability.
What would an evolved doctrine look like? It would treat digital infrastructure as strategic terrain. It would build alliance-level standards for AI governance, including shared protocols for threat intelligence and incident response. It would create mechanisms for coordinated action below the threshold of armed conflict. And it would invest in resilience for threats that cannot be deterred and arrive too frequently to address one by one.
The Munich Security Conference convenes as these pressures mount. The technology exists, and the incidents are accumulating. Alliances must learn to share intelligence as fluidly as they share ammunition, governance frameworks must impose real obligations on private infrastructure, and response mechanisms must finally match both the speed and the volume of the threats they face. The state monopoly on disruption is ending. Public institutions will fill the gap, or others will.
Author: Vilas Dhar is the President of the Patrick J. McGovern Foundation. He advises governments and international bodies on AI governance and serves on multilateral initiatives addressing technology and global security.