    Geopolitics in the Age of Artificial Intelligence

    Everyone has a theory of artificial intelligence. Some believe the technology is progressing toward superintelligence—powerful AI that will bring epochal changes beyond any previous technology. Others expect that it will boost productivity and scientific discovery but will follow a more uneven and potentially less dramatic path.

    People also disagree about how easily breakthroughs can be replicated. Some argue that rivals will fast-follow (that is, quickly imitate), whereas others believe catching up will become slower and costlier, giving first movers lasting advantage. And whereas many are sure China is determined to beat the United States at the frontier, others insist it is focused on deployment of existing technology while seeking to distill and reproduce leading-edge American innovations once they appear.

    Every confident policy argument rests on hidden assumptions about which of these stories is true. Those prioritizing frontier innovation assume breakthroughs will compound and be difficult to replicate, whereas those focused on spreading American systems abroad often assume the opposite. If those assumptions are wrong, the strategies built on them will waste resources and could cost the United States its lead.

    Betting everything on a single story is tempting but dangerous. Washington does not need another prediction about the AI age. It needs a way to make choices under uncertainty—one that secures the United States’ advantage across multiple possible futures and adapts as the shape of the AI era comes into view.

    EIGHT WORLDS

    However the AI future ultimately unfolds, U.S. strategy should begin with a clear definition of success. Washington should use AI to strengthen national security, broad-based prosperity, and democratic values both at home and among allies. When aligned with the public good, AI can drive scientific and technological progress to improve lives; help address global challenges such as public health, development, and climate change; and sustain and extend American military, economic, technological, and diplomatic advantages vis-à-vis China. The United States can do all of this while responsibly managing the very real risks that AI creates.

    The challenge is how to get there. To make hidden assumptions explicit and to test strategies against different futures, those thinking about AI strategy should consider a simple framework. It turns on three questions: Will AI progress accelerate toward superintelligence, or plateau for an extended period? Will breakthroughs be easy to copy, or will catching up become difficult and costly? And is China truly racing for the frontier, or is it putting its resources elsewhere on the assumption that it can imitate and commodify later? Each question has two plausible answers. Considering every combination yields a three-dimensional matrix—a 2×2×2 diagram with eight possible worlds.

    The first axis is the nature of AI progress. At one end lies superintelligence: an AI that far outpaces humans and is capable of recursive self-improvement, teaching itself to become ever smarter and inventing new things at an ever-faster pace. At the other end lies bounded and jagged intelligence: impressive scientific, economic, and military applications, but not a singular break with history. It is bounded because the progress it makes eventually hits limits, at least for a while. And it is jagged because it is uneven; systems may reach incredible performance in areas such as math or coding but struggle with judgment, creativity, or certain physical applications. If progress leads to superintelligence, even a narrow lead could prove decisive, justifying massive frontier investments. If it is bounded and jagged, channeling unlimited resources to moonshots is less compelling than prioritizing adoption and diffusion.

    The second axis is the ease of catching up—the fast-follow problem. In one world, catching up is easy. Breakthroughs can be copied quickly through espionage; leaked weights, in which a trained model’s internal parameters are stolen or released; innovative training on older hardware; or model distillation, in which a less capable system is trained to imitate a more advanced one. In the other, catching up is hard: frontier capability depends on the full technological stack—proprietary hardware, institutional expertise, vast and often unique datasets, a vibrant ecosystem of talent, and structural factors that cannot be foreseen. The model, or software layer, may be easy to copy, but the quality and scale of hardware, infrastructure, and human capital behind training and inference may be far more difficult to reproduce. When catching up is easy, the contest is more about diffusion, embedding American systems abroad before rivals can spread their own. When it is hard, diffusion still matters, but strategy places greater emphasis on defending the underlying foundations of frontier capability—that is, the inputs and know-how that allow advances to compound over time. Across the whole axis, the question is not whether AI spreads, but how quickly, to whom, and on what terms.

    AI processors on display at a tech conference in Tongxiang, China, November 2025 (Tingshu Wang / Reuters)

    The third axis is China’s strategy. At one extreme, Beijing is racing aggressively to the frontier, funding massive training runs and competing labs. At the other extreme, Beijing is not racing but prioritizing adoption and diffusion and occasionally producing large models to signal progress and spur the United States into focusing on the frontier. China may not have a perfectly coherent national plan—indeed, different institutions within the country may act differently—but at the system level, China’s behavior will still approximate either racing or not racing. This dimension of the framework focuses on China because, at present, it is the United States’ dominant competitor at the frontier. If other actors emerge, the matrix would need to adjust to reflect their racing calculus, as well.

    Reality is, of course, more complicated than any diagram. More axes could be added, and each axis could be treated as a spectrum. China may pursue a middle path in frontier R & D. Catching up may be only somewhat hard. AI may be truly powerful but still have certain limitations. Although considering binary outcomes can make strategic planning easier, policymakers can still account for the intermediate possibilities by thinking probabilistically along each axis. A partial Chinese investment strategy, for instance, increases the odds that Beijing narrowly follows the United States or even unexpectedly closes the gap.
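
    To make that probabilistic thinking concrete, consider a minimal sketch, in Python, of how per-axis probabilities combine into odds for each of the eight worlds. The numbers are placeholders rather than estimates, and the sketch assumes for simplicity that the three axes are independent:

```python
from itertools import product

# Illustrative per-axis probabilities: placeholders, not estimates.
p_super = 0.4  # P(progress reaches superintelligence vs. stays bounded)
p_easy = 0.5   # P(catching up is easy vs. hard)
p_race = 0.6   # P(China races to the frontier vs. does not)

axes = [
    [("superintelligence", p_super), ("bounded and jagged", 1 - p_super)],
    [("easy catch-up", p_easy), ("hard catch-up", 1 - p_easy)],
    [("China races", p_race), ("China does not race", 1 - p_race)],
]

# Enumerate the 2x2x2 matrix: eight worlds, each weighted by the
# product of its axis probabilities (independence assumed).
worlds = []
for combo in product(*axes):
    labels = " / ".join(name for name, _ in combo)
    prob = 1.0
    for _, p in combo:
        prob *= p
    worlds.append((labels, prob))

for labels, prob in sorted(worlds, key=lambda w: -w[1]):
    print(f"{prob:.2f}  {labels}")
```

    In practice, the axes are unlikely to be independent. Whether China races, for example, probably depends on how hard Beijing believes catching up will be, so a fuller treatment would assign a joint distribution rather than multiply marginal probabilities.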

    Finally, policymakers’ own decisions can shape which AI future emerges, at least on the margins. U.S. actions can make catching up harder or easier, particularly by tightening or loosening export controls. Whether China races or holds back will depend in part on how Beijing judges the pace of AI progress and the difficulty of catching up. Still, by making uncertainty part of the policy framework, policymakers will at least be forced to confront their own assumptions and plan for multiple futures rather than one.

    SOURCES OF AI POWER

    Before turning to that planning exercise, it is worth pausing to ask two questions: Who actually sets U.S. AI strategy? And what tools does Washington have to shape the trajectory of AI? After all, the government doesn’t own the country’s leading labs or decide what they build. It can’t set production targets or direct investment flows the way Beijing can. Yet Washington’s policy choices and signaling significantly influence the AI ecosystem, even if indirectly.

    Many American policies amount to an implicit subsidy for the domestic AI industry. Export controls and investment restrictions have limited China’s access to advanced chips and U.S. capital. They have raised the value of American and allied firms by constraining their strongest competitors and channeling private capital toward them.

    Expectations amplify that effect. When senior officials describe AI leadership as a national priority, companies and investors anticipate favorable rulemaking, administrative streamlining, and closer coordination with government. Those assumptions influence how much risk firms take on and where investors place their bets—perhaps even more than a slow-to-deploy congressional appropriation would.

    Washington’s direct support complements these signals. R & D tax credits, infrastructure investments, federal research grants, and a host of executive branch decisions—on permitting, immigration, and much else—collectively influence where and how AI capacity grows. Meanwhile, federal procurement and partnerships are becoming a meaningful demand signal as agencies begin testing and adopting AI systems at scale. If diffusion becomes as strategically important as frontier breakthroughs, Washington may need to use more of the tools at its disposal, offering partners a trusted alternative to Beijing’s AI stack and working through institutions such as the Development Finance Corporation to fund deployment abroad in places the market alone will not serve. This also includes thinking about how open or closed American AI systems should be. The United States must decide whether to rely on tightly controlled proprietary models or promote open-source alternatives as a way to shape global adoption.

    Still, the private sector remains the engine of this race, and its incentives do not always align with the country’s interests. Many leading labs in the United States are betting on superintelligence, pouring resources into massive training runs rather than safe deployment or broad diffusion. Some would prefer to build and operate the infrastructure for large-scale training runs overseas, drawn by looser rules, cheaper energy, and additional capital. Managing that tension will remain one of Washington’s most difficult tasks.


    The United States’ strength has never been central planning but rather the deployment of a mix of tools to steer a decentralized system toward shared goals. Washington creates policy incentives, shapes expectations, and coaxes capital toward national purpose. How to use these tools to maintain U.S. leadership in AI depends on which future ultimately emerges. Some policies that make sense in one scenario may be counterproductive in another. But a few priorities will hold across most of them—core elements of national power that most versions of the AI future are likely to require, even as their relative importance varies from one world to another.

    Compute, or computing power, remains the foundation of AI capability. Control over chips, data centers, and the energy to run them determines who can train and deploy the systems that set the pace of progress. Robotics and advanced manufacturing extend that power into the physical world, turning digital intelligence into productive capacity. None of it endures without a strong industrial-scientific base. The United States needs basic research both to advance today’s technologies and to explore new approaches to AI development; talent, both homegrown and attracted from around the world; the manufacturing capacity to build at scale; and energy that keeps it all running. If AI firms lack sufficient access to electric power, in particular, that bottleneck could limit overall progress.

    Risk management, often regarded as a constraint because it can slow deployment and limit experimentation, can also be a source of stability and legitimacy. It is what keeps competition from collapsing into unintended escalation, whether through accidents, deliberate misuse of AI systems, or the deployment of systems whose behavior humans can no longer reliably control. Just as important is ensuring that safety protocols and domestic political support develop fast enough to keep pace with capability gains. Some futures give Washington room to build that foundation; others compress the timeline.

    Then there’s the question of diffusion—the spread and adoption of AI systems abroad. The systems that take root will decide whose values and governance ideals define the digital order, and which country or countries draw the most economic and strategic gains. Beijing already treats AI governance itself as a strategic export, using its systems, standards, and regulatory templates to shape how other countries use and oversee the technology. Washington professes conviction on diffusion but has yet to prove it in practice.

    U.S. allies and partners are the last critical piece of this puzzle. Working in concert with trusted partners multiplies American capacity and improves the chances that democratic systems—not authoritarian ones—define the shape of the AI age.

    WORLD ONE

    The three axes—superintelligence versus bounded and jagged intelligence, ease versus difficulty in catching up to another’s breakthrough, and a China that races to the frontier versus a China that does not—create eight possible worlds. The task of policymakers is to fill in this matrix with a range of reasonable policy choices in each one.

    First, consider a world in which superintelligence is achievable, the technology is hard to imitate quickly, and China is racing at full speed. This world looks and feels like something between an arms race and a space race: the contest would become a struggle to reach and secure the frontier first. The stakes would be immense. Whoever develops and controls the most advanced systems could gain enduring technological, economic, and military advantages. At the extreme end of this scenario, some argue that once recursive self-improvement begins, the lead may become self-reinforcing, making meaningful catch-up not merely difficult but effectively impossible. This framework treats that possibility as the limiting case of “hard to catch up” rather than assuming it as a baseline, and tests strategy accordingly.

    The United States might have to consider a Manhattan Project 2.0, which would entail the mobilization of public resources, extraordinary coordination between government and industry, and a level of secrecy more typical of military programs, potentially requiring new authorities or expanded use of the 1950 Defense Production Act, which grants the president broad authority to regulate industry for purposes of national defense. Such an effort would force policymakers to choose between centralizing development in a single entity to ensure strict security oversight or maintaining competition among multiple frontier laboratories on the assumption that parallel experimentation would yield results faster.

    A robot on display at a tech event in Taipei, Taiwan, November 2025 (Ann Wang / Reuters)

    Under these conditions, Washington would tighten export controls to the limits of enforceability. Every layer of the semiconductor supply chain would fall under stricter regimes, and coordination with allies would be essential to prevent circumvention. Model weights (the numerical parameters that determine how a system behaves), training data, and data centers would need to be hardened against theft and sabotage.

    Risk management with China, based on a shared interest in avoiding loss of human control of superintelligence, would move center stage. The faster systems advance, the greater the chance of accidents and unintended escalation as autonomous systems interact in ways neither side fully anticipates. One plausible move would be a mutual restraint agreement, limiting development while both Beijing and Washington build safety systems that can keep pace. But such an arrangement would be fragile and hard to sustain, given mutual distrust, verification challenges, and the potential gains from breaking the agreement and racing ahead.

    Because catching up is difficult and China’s success is not inevitable in this world, the United States might find itself with a narrow window in which it has reached superintelligence first. In that moment, Washington would face a decision: whether to take steps to prevent others from reaching the same capability. The opposite scenario is equally important: if Beijing reaches the frontier first, Washington would need to be ready to manage and mitigate the harms. And if both powers cross the threshold, they would need to reduce risk with clear guardrails, communication, and restraint while also working to prevent loss of control and the adoption of superintelligence by rogue states or nonstate actors.

    WORLD TWO

    In another world, superintelligence is still achievable and it is still hard to catch up to new technologies, but China is not racing toward the frontier. This scenario sees the United States achieve a unipolar AI moment. Even if Beijing pursued a strategy of partial frontier investment, the difficulty of catching up would all but guarantee that the United States would stand alone at the technological peak, with a real chance to define the structure of the world that follows. The central question would no longer be how to win the race, but how to wield and manage a lead.

    At the industrial level, AI development could progress at a more measured pace. Although R & D spending should remain elevated enough to reach superintelligence, no Manhattan Project–style mobilization would likely be needed. The United States would have to keep the frontier secure—protecting model weights, compute, and key talent—while allowing the innovation ecosystem to operate dynamically. Notably, as the market matures and some AI companies fail, China should not be allowed to buy up their intellectual property.

    This future would make many other countries uneasy. Concentrating such transformational power in one country would raise doubts about whether Washington would lead responsibly or pursue a narrower national interest. The task for the United States would be to build and maintain a democratic AI order that generates trust in American leadership at the frontier—a similar undertaking to the one Washington faced in 1945, but far more difficult in today’s political and geopolitical landscape. With no immediate rival at the cusp of superintelligence, the United States could more comfortably exercise unilateral restraint, pacing frontier development efforts to ensure safety keeps up. Diffusion would be strategic and selective: extending secure access to allies and partners while preventing uncontrolled proliferation.

    Domestically, the United States could focus on building a new social contract. If AI delivered enormous productivity and capability gains, the challenge would turn to channeling those gains into broad-based prosperity while reinforcing society’s resilience to AI-driven disruptions. Sensible regulation would ensure safety and accountability without stifling progress.

    Of course, this unipolar moment would not be guaranteed to be permanent. If the United States reached superintelligence, China would likely flip into racing mode overnight, and other powers would not stay idle for long. Washington would have to decide how to respond and how to use its position to shape how and where the technology spreads.

    WORLD THREE

    A third possibility is a world of all-out proliferation: superintelligence can be reached, it is easy to catch up, and China is racing ahead. Breakthroughs would compound quickly, but copying them would be quick, too. In this world, the task for the United States would be less about containment and more about resilience—that is, preparing the nation’s cyber, biosecurity, infrastructure, and defense systems to withstand the full range of AI-enabled threats.

    Whether to race or fast-follow would become a strategic choice. If breakthroughs proliferated quickly, the advantage gained from reaching the frontier first may be short-lived, but letting others get there first, even for a short period, would still create a meaningful window of vulnerability. And if progress continued to compound rapidly, arriving first would matter even more, because the early mover would begin climbing the curve first. The likely optimal path would be to race defensively, maintaining high R & D spending and frontier capability while matching advances with new layers of security and resilience.

    The innovation ecosystem itself would face stress. A single national champion would provide little security value, since whatever it builds would quickly be copied, and sustaining many private firms that work on leading-edge technology would be difficult if investors see profits vanish as innovations are quickly copied. Many of these companies would fail as superintelligence becomes commoditized. The firms that innovate to build better business models to capture value would succeed, but the firms that innovate to build better AI models may not.

    Risk management would rise in importance, and not only with regard to managing escalation and miscalculation. To mitigate the threat of uncontrolled proliferation to nonstate actors and rogue states, the United States would have to build new layers of global cooperation, with both allies and China, to slow or stop irresponsible players from gaining access to the technology. Although a joint U.S.-Chinese restraint agreement would still be difficult to enforce, the two countries’ awareness of the heightened danger in this scenario could make a deal more viable.

    Export controls could still be useful, but their effectiveness would depend on why catching up is easy. If China developed a viable alternative compute stack, then chip controls would become essentially useless and competition would shift to global deployment. If the ease of catching up stemmed from other factors (such as model distillation, theft, or the rapid spread of new algorithms and practical know-how), then chip controls would be less compelling than in other scenarios but still useful as a tool for buying time and slowing diffusion.

    WORLD FOUR

    In a world where superintelligence can be achieved, catching up is easy, and China is not racing, the United States would find itself in a fleeting unipolar window. The United States could reach artificial superintelligence first, but others could follow quickly once they began to race. With China holding back, the logic of forgoing a major push to the frontier would be somewhat more compelling, especially if doing so could avoid the all-out proliferation scenario. Still, that path would be risky: China could secretly race, or another actor could conceivably advance beyond American capabilities.

    If the United States continued to race, it would have to decide how to use its lead. Washington could attempt to use the narrow window to block others from reaching the frontier. Alternatively, it could use even a brief period of uncontested superintelligence to strengthen its own and allied defenses and work to implement safeguards against loss of control and unbounded proliferation scenarios.

    Since Beijing would not be racing, it would likely pursue a different strategy, positioning itself to commoditize American breakthroughs, embedding Chinese systems globally through low-cost AI exports, and linking AI to the physical world through robotics. That would make diffusion an important contest. The United States would need to invest in robotics and advanced manufacturing to translate digital breakthroughs into physical and industrial applications and move decisively to spread safe, democratic systems abroad before China filled the vacuum.

    WORLD FIVE

    Superintelligence is no longer on the table in the next set of possible worlds. In one of these scenarios, it is hard to catch up to breakthrough technologies, and China is racing to the frontier. The United States and China would enter a grinding innovation race. Although the stakes would be high, they would be lower than in the superintelligence scenarios. It would remain important to invest in R & D, even if not at emergency levels, and to support that spending with long-term industrial policy that builds durable robotics and advanced manufacturing capabilities. Policymakers would have to be mindful that markets often misjudge turning points—investors may panic and declare a “bubble” before AI reaches its full potential, or they may keep spending long after the technology has matured. Risk management would have to focus less on loss of control and more on misuse in biological, cyber, or military applications.

    The importance of diffusion and deployment would increase significantly. The United States would have to push aggressive adoption of AI across domestic industry and the military and move quickly to spread American and allied systems abroad. Even nonfrontier models—when well integrated, cheaply priced, or paired with robust infrastructure—could capture massive market share, as Beijing well knows from past experience. The security of models and data centers would still matter, since catching up would not be trivial, and frontier models would remain essential for securing U.S. and allied systems, but the overriding task would be to get capable systems into wide use early, building familiarity and dependence before Chinese alternatives took hold. Export controls would remain valuable to slow China’s advance, but the United States would have to be mindful not to hinder deployment abroad.

    WORLD SIX

    In a world without superintelligence, where catching up is hard, and where China is not racing, the United States would hold a comfortable lead and have a meaningful window to entrench its advantage, using AI to develop new lifesaving medicines, expand education, and revitalize lagging American industries. China would not necessarily exit AI entirely, but Beijing would limit its investment in frontier model development so much that it would effectively be out of the race for cutting-edge capability. Instead, China would focus on applications and commoditizing U.S. breakthroughs. Meanwhile, Washington would be able to focus on safety, accountability, and ensuring that AI-driven gains translate into broad-based prosperity.

    Internationally, the United States would have space to develop a positive vision for an AI-infused world, welcoming partners into its AI ecosystem and offering access to models, data, and infrastructure but keeping critical elements anchored at home. The aim would not be to diffuse American systems as widely and quickly as possible, but to ensure that the systems that spread are safe and aligned with democratic values.

    WORLD SEVEN

    The second-to-last scenario sees bounded and uneven AI, easy catch-up, and China racing to the frontier. In this world, the United States and China engage in a diffusion race. Because breakthroughs would be easy to imitate, no country could monopolize intelligence for long; advantage would come from developing and commercializing faster than one’s rivals.

    Private capital would be harder to corral. If the technology was easily copied, investors would likely underinvest, seeing little defensible return. But the United States would still need to run the race; the systems that spread first would shape the global environment and should reflect U.S. values. And because China would be racing, the United States would need to innovate at the same pace or faster to prevent Beijing from compromising American cybersecurity, biosecurity, and military and intelligence advantages.

    Diffusion would become not just a component of AI strategy but a core pillar of U.S. foreign policy. China already systematically pushes its technology into foreign markets, often bundling it with financing and large-scale development projects. The United States would rightly have serious concerns about allowing the world’s digital infrastructure to be built on Chinese models that can exfiltrate data, monitor communications, and run far-reaching influence operations. Washington would need to embed AI diffusion into its statecraft, expanding the remit and deployable capital of institutions such as the Development Finance Corporation to help American and allied firms build data centers, networks, and regionally tailored systems around the world. That would require an American leadership focused not on short-term profit but on bringing about a world that runs to a much greater extent on American systems than on Chinese ones.

    If copying was easy and proliferation inevitable, secrecy would offer little return. The better play may be to open-source or widely license safe versions of key systems, ensuring that they would be run on American or allied platforms rather than adversarial ones. In this world, export controls would offer less benefit and may in some extreme cases even undermine the diffusion race because China could reliably bypass them by quickly replicating American technologies.

    WORLD EIGHT

    In the final world, AI would resemble many past major technologies. The United States would lead in innovation, but advances would be easy to copy. This free-riding would make private investment in large frontier pushes harder to mobilize, and with China not racing, the national security rationale for public spending would become less all-encompassing. Instead, AI investment would follow projected revenue from diffusion. Open-source models would likely dominate.

    The race for AI leadership would also be primarily a race for diffusion. It would resemble earlier contests, such as the one over 5G, which was driven by deployment and scale. Washington’s task would be to ensure that trusted American and allied systems become the default infrastructure for global industry, leaving less space for Beijing to establish a low-cost viable alternative.

    FROM SCENARIOS TO STRATEGY

    Strategy in the AI age will be less about predicting a single outcome or one right policy and more about thinking in probabilities. To make use of this matrix, policymakers should start by selecting a base case—the world they believe is most likely. Each major policy proposal should be tested against that base case: Does the policy make sense in the world one thinks one is in? Policymakers must also determine what can be done to avoid or blunt the worst possible outcomes in the worlds where the United States is most exposed and the stakes are highest, such as in World One—even if they do not think those worlds are most likely. From there, they should hedge, aligning strategy to the base case while also making it resilient across the most challenging worlds. That means identifying which policies work across multiple worlds, which can be reversed if the predicted future shifts, and which would be damaging if the base case proves false.
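
    That hedging logic can be sketched in miniature. The following Python snippet scores hypothetical policies across the eight worlds; the scores and probabilities are purely illustrative, not analytic judgments:

```python
# Hypothetical scores (0-10) for how well each policy performs in each
# of the eight worlds; both scores and probabilities are placeholders.
world_probs = {1: 0.10, 2: 0.05, 3: 0.15, 4: 0.05,
               5: 0.25, 6: 0.10, 7: 0.20, 8: 0.10}

policies = {
    "tighten export controls": {1: 9, 2: 7, 3: 4, 4: 5, 5: 7, 6: 6, 7: 2, 8: 3},
    "push diffusion abroad":   {1: 3, 2: 5, 3: 7, 4: 7, 5: 8, 6: 6, 7: 9, 8: 8},
}

for name, scores in policies.items():
    expected = sum(world_probs[w] * scores[w] for w in world_probs)
    worst = min(scores.values())  # exposure if the base case proves wrong
    print(f"{name}: expected value {expected:.1f}, worst case {worst}")
```

    A policy that looks strong on the probability-weighted average but fails badly in a high-stakes world such as World One is exactly the kind of bet this exercise is designed to flag.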

    For each of the eight worlds, the government should have a ready-to-execute plan that can be adapted as conditions shift. That requires institutions to think probabilistically. The National Security Council should use the matrix to stress-test U.S. policy against alternative futures. And the intelligence community should track signals of movement along the three axes (such as the pace of progress at the frontier, the speed with which new capabilities are replicated, or shifts in Chinese investment) and update the odds of each future accordingly. Senior national security officials should be prepared to recommend policy adjustments when it begins to look as though a different world is most likely. The task is not to make perfect predictions but to balance risk and reward, adjust priorities as probabilities change, redraw the matrix as circumstances demand, and establish the systems and processes to do these things.
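
    Updating those odds as signals arrive is, at bottom, an exercise in Bayes’ rule. A minimal sketch, with likelihoods that are illustrative assumptions rather than intelligence assessments:

```python
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Update P(hypothesis) after observing a signal, via Bayes' rule."""
    evidence = p_signal_if_true * prior + p_signal_if_false * (1 - prior)
    return p_signal_if_true * prior / evidence

# Hypothetical signal: a surge in Chinese frontier-scale training runs.
# Assume the surge is far likelier if China is racing (0.7) than if it
# is not (0.2); both likelihoods are illustrative, not assessments.
p_race = bayes_update(prior=0.6, p_signal_if_true=0.7, p_signal_if_false=0.2)
print(f"Updated P(China races) = {p_race:.2f}")  # 0.84
```

    Applied across many signals and all three axes, this kind of updating is what turns the matrix from a static diagram into a living planning tool.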

    This framework is not only for policymakers. It also offers a practical way for anyone to engage in debates about AI and geopolitics. These arguments too often end in two sides talking past each other; they could become more productive if the participants pin down which future is being assumed. Is AI expected to race toward something transformative or plateau? Will breakthroughs spread quickly or remain hard to replicate? And is China racing for the frontier or positioning itself to follow and commoditize? Asking these questions and mapping each side’s argument onto the matrix often reveals whether disagreements really lie in policy recommendations or in assumed futures.

    The point of this framework is not to forecast the final world but to discipline strategy in the face of uncertainty—to make assumptions explicit and test them against alternatives. The framework is also meant to evolve. There are more dimensions to the progression of AI than the three axes presented here; some of the questions that seem most pertinent today may eventually be resolved, and new ones will emerge. If it becomes apparent that superintelligence is within reach, for example, the possibility of more limited advancement will become irrelevant, and the matrix may feature a new axis that considers two new possibilities: beneficial superintelligence and dangerous superintelligence. Actors other than China could grow more important, too, as the technological landscape shifts. What matters is having a policy framework that can adapt as evidence accumulates.

    Geopolitics in the age of AI will not be simple. But without a disciplined way of thinking, strategy will collapse under the weight of hidden assumptions and agendas. By mapping possible worlds and the choices they demand, this framework offers a way to see through the fog. The task for policymakers now is clear: treat AI not as a single story but as a shifting landscape. If American leaders learn to think this way, they will define whatever AI age emerges. If not, others will do it for them.
