Can We “Win” the AI Race Together?
by Kent O. Bhupathi
At the IndiaAI Impact Summit 2026, everyone agreed that artificial intelligence should improve lives. Nobody agreed on how to get there while the world keeps treating AI like both a public utility and a strategic weapon.
On stage, ministers praised open ecosystems, shared benchmarking, coordinated safety science, and cross-border infrastructure as if collaboration were simply a matter of goodwill and calendar invites. In more private settings, the conversation turned blunt: who owns the compute, where the data sits, whether a government can deploy models without a foreign permission slip, and what happens when access becomes a bargaining chip.
That tension comes from incentives. And those incentives show up plainly, as trade-offs, in the official policy language of the major blocs. That’s where this economist enters the chat…
The “AI arms race” framing has moved beyond metaphor. The United States openly frames its approach as a plan to “win the AI race,” pairing innovation acceleration with infrastructure buildout and geopolitical positioning. The European Union frames its ambition as becoming an “AI continent,” explicitly tying AI leadership to competitiveness, democratic values, and security. Once governments adopt race framing, they stop optimizing purely for economic efficiency and start optimizing for resilience and control.
And that shift matters because AI as a technology rewards scale, interoperability, and network effects. AI as a geopolitical asset, on the other hand, rewards command over dependencies, such as data flows, compute access, chips, energy, standards, and talent. Put those incentives in the same room and you start asking, “can meaningful sovereignty and deep collaboration coexist without collapsing into fragmentation?”
What “Sovereignty” Actually Means
“Sovereignty” gets thrown around as if it means planting a flag on a data center. In reality, it’s all operational. Essentially, AI sovereignty means a state’s credible capacity to decide, enforce, and sustain its preferred AI development and deployment path under normal conditions and under stress, including geopolitical shocks, supply constraints, and cyber incidents. Understandably, there are layers to this…
Data sovereignty covers rules and enforcement around collection, access, localization, cross-border transfer, and downstream reuse for training and evaluation.
Compute sovereignty covers resilience and governance control over compute access, including cloud regions, domestic clusters, chip supply chains, and advanced packaging. It also includes the less glamorous constraints concerning energy and grid capacity.
Model sovereignty covers the ability to train, fine-tune, deploy, and govern key models without structural dependence on foreign APIs or unilateral licensing decisions.
Standards and governance sovereignty covers the capacity to shape domestic rules and influence international norms, including audits, procurement, liability, and certification.
Talent and organizational sovereignty covers the ability to attract and retain talent and operationalize policy through credible institutions and procurement capacity.
Once you define sovereignty this way, the full stack starts looking like a luxury good. Most states cannot realistically own or control everything across chips, hyperscale compute, frontier training pipelines, evaluation capacity, and a full governance apparatus without massive opportunity costs. Compute and frontier model development exhibit economies of scale and supply-chain concentration, and the energy bottleneck makes “just build more” a lot less simple than it sounds.
So, governments compromise. They pursue modular regimes. They identify politically sensitive dependencies and secure those first. It’s risk management under constraint.
The Constraint Policymakers Keep Circling Around
If you want to understand why AI sovereignty has momentum, follow the physical inputs. The International Energy Agency projects that data center electricity demand will more than double by 2030 to roughly 945 TWh, with AI as the most significant driver. It also projects that AI-optimized data centers could more than quadruple their electricity demand by 2030.
That moves AI strategy out of pure software optimism and into power-system planning. And electricity cannot be imported with a click. Governments have to permit it, finance it, generate it, transmit it, and then defend the political choices that come with siting and pricing. The energy reality drags AI policy into the world of grid constraints. It also creates an uncomfortable truth:
If you cannot power the data center, you cannot scale the model. And if you cannot scale the model, you cannot compete in the parts of AI that reward scale.
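To put the IEA projection in perspective, a quick back-of-envelope calculation shows the annual growth rate it implies. The 2024 baseline of roughly 415 TWh is an assumption taken from the same IEA report’s headline figures; the point is the method, not the precise number:

```python
# Back-of-envelope: implied compound annual growth rate (CAGR) of
# data center electricity demand, from the IEA projection of roughly
# 945 TWh by 2030. The ~415 TWh 2024 baseline is an assumption.
baseline_twh = 415   # assumed 2024 demand
projected_twh = 945  # IEA 2030 projection
years = 2030 - 2024

cagr = (projected_twh / baseline_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints: Implied CAGR: 14.7%
```

Sustaining roughly 15% annual growth in a single sector’s electricity demand is exactly why permitting, financing, and grid planning now dominate the conversation.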
Energy constraints also change the geopolitics of compute. Here, governments start treating compute less like a commodity service and more like strategic infrastructure.
The “Cloud” Has Borders (an aside)
“Cloud” is a soothing word. It makes compute sound soft and mobile. But all digital clouds sit under jurisdiction and inside regulatory reach. And when tensions rise, physical location determines which laws apply, which export controls bite, and which agencies can compel action. Even without crisis, jurisdiction affects latency, local spillovers, and enforceability.
OECD mapping of public-cloud AI compute highlights how concentrated this layer is, focusing on nine providers that represent over 70% of global public cloud spending. That kind of concentration means access is never just technical.
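Concentration of this kind can be made concrete with a Herfindahl-Hirschman index (HHI), the standard measure regulators use for market concentration. The shares below are hypothetical, chosen only to mirror the OECD’s observation that nine providers account for over 70% of global public cloud spending:

```python
# Hypothetical market shares (fractions of global public cloud spend):
# nine large providers summing to ~72%, with the remaining 28% spread
# across a long tail of small providers at 1% each.
big_nine = [0.32, 0.22, 0.11, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01]
long_tail = [0.01] * 28
shares = big_nine + long_tail

# HHI: sum of squared percentage shares. The 2023 U.S. Merger
# Guidelines treat an HHI above 1,800 as highly concentrated.
hhi = sum((s * 100) ** 2 for s in shares)
print(f"HHI: {hhi:.0f}")  # prints: HHI: 1666
```

Even with a long tail of small providers, the top three alone push the index toward the thresholds regulators flag, which is the OECD’s point: access depends on a handful of actors.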
Moreover, for businesses, the investment question is less about “who’s cheapest?” (assuming similar quality) but “who stays available when rules change?” Remember, the market rewards efficiency, right up to the moment it invoices fragility.
Why Collaboration Doesn’t Go Away, Even in a Race
Sovereignty pressures do not erase collaboration pressures, because AI produces externalities that cross borders by default.
Safety failures travel. So does misuse. A major incident in one market triggers policy responses in many others. That’s why the Seoul Statement of Intent emphasized international coordination on AI safety science based on openness, transparency, and reciprocity. In practice, that means shared testing methods, shared incident learning, and interoperable evaluation norms. In essence, countries don’t need to share frontier advantage to share the tools that reduce systemic risk.
Standards and interoperability create similar incentives. Without shared technical and governance foundations, compliance costs rise and diffusion slows. The updated OECD Recommendation on AI provides an intergovernmental normative anchor that states can adopt while still pursuing national industrial strategies. That offers alignment without demanding identical law everywhere.
Trade fragmentation adds another cost. WTO analysis warns that AI can drive growth, but regulatory divergence and uneven digital infrastructure can prevent inclusive gains and deepen fragmentation of the digital economy. And businesses will pay that price first, through duplicated compliance tracks, region-specific product lines, and slower market entry.
So, collaboration remains rational. The difficulty comes from where it collides with advantage.
Layered Coexistence
Collaboration survives where it pays for itself. Safety science, benchmarking, incident-sharing, and standards harmonization reduce shared risk and lower the friction that slows adoption, and they do it without asking governments to hand over the crown jewels. Move toward frontier compute, chip supply chains, and sensitive data, and the mood changes. Export controls and industrial policy mark out the boundary line, because states treat chips, compute, and infrastructure as strategic assets worth funding and fencing.
Open source sits in the hybrid zone. While a work in progress, the EU AI Act offers unusually explicit legal language recognizing free and open-source data, models, and more as contributors to innovation, while carving out conditional exemptions that narrow once systemic risk emerges. It also notes that open licensing does not automatically disclose training data or resolve copyright compliance questions.
But this layered dynamic explains why summit rhetoric can sound contradictory. Officials can advocate openness and still pursue domestic control because they are optimizing across different layers of the AI stack.
How the Major Players Operationalize the Tension
The United States blends cooperative safety language with competitive instruments. It participates in international coordination on safety science while enforcing export controls that restrict diffusion of frontier compute capabilities. It also treats infrastructure as strategic through executive action aimed at advancing AI infrastructure and uses standards leadership as a sovereignty lever through NIST-centered governance work. This includes the transformation of the U.S. AI Safety Institute into a Center for AI Standards and Innovation.
The European Union leans on regulatory harmonization and pooled capacity. The AI Act anchors governance across Member States, and the EU has expanded its network of AI Factories, reporting 19 across 16 Member States, to broaden access to AI-optimized supercomputing and strengthen regional ecosystems. Europe’s model pursues sovereignty through bloc-level pooling rather than isolated national buildouts, and it exports regulatory influence through market access and compliance expectations.
China operationalizes governance sovereignty through enforceable rules. Its AI-generated content labeling requirements impose obligations on service providers and distribution platforms. China has also released an AI Safety Governance Framework emphasizing risk management and system security while using “open cooperation” language. The combination signals selective engagement under tight domestic control.
India’s IndiaAI Mission illustrates modular sovereignty oriented toward development. India frames capacity-building as mission-mode, and the IndiaAI portal describes compute capacity such as “10,000+ GPUs via public-private partnerships.” This signals sovereignty-through-access and procurement design rather than full state ownership. India has also launched BharatGen, a government-supported multimodal foundation model initiative with an explicit open-source ecosystem orientation. Instead of chasing self-sufficiency, it builds domestic capability and delivery muscle while keeping diffusion and collaboration on the table.
While much is shared, they split on how sovereignty is achieved. Where the US, EU, and China mostly secure sovereignty by conditioning diffusion, India bets on sovereignty-through-access: modular capacity-building and open ecosystems that scale domestic capability quickly.
What Decision-makers Should Take from This
AI increasingly resembles a club good. Collaboration deepens inside trusted blocs and among aligned partners, while global openness narrows around frontier capabilities, sensitive datasets, and strategic infrastructure. Firms that rely heavily on cross-border data flows, foreign model APIs, or geopolitically exposed compute should treat access risk as a first-order variable, not an edge case for the legal team.
Energy now sits on the critical path, with data center expansions at every exit. The IEA’s projections should force a sober rewrite of many scale assumptions, especially for markets where electricity infrastructure and permitting capacity already face stress.
Standards and compliance frameworks increasingly function as channels of competitive influence. When jurisdictions define audit expectations, documentation norms, and procurement requirements, they export governance power through market gravity. Firms that align early reduce redesign cycles and speed procurement. Firms that ignore standards tend to discover the cost right before a launch, which is the most expensive time to learn anything.
Open source remains economically central but increasingly governed by thresholds tied to capability and deployment risk. Businesses should plan for regimes where the question isn’t “open or closed,” but “open up to what capability level, under what deployment context, with what safety and transparency obligations.” No solutions, only trade-offs!
Lastly, domestic labor politics will amplify the sovereignty pull. OECD and ILO work points to uneven exposure across regions and occupations, and that uneven exposure creates legitimacy pressure. And the moment jobs look vulnerable, governments tighten the rulebook and start asking what stays onshore. Exposure becomes salience fast.
Is the Balance Worth Trying?
Yes, because the alternatives cost more.
Unrestrained competition wastes capital, duplicates infrastructure, and fractures standards. And naive global collaboration snaps under stress because it asks governments to ignore incentives they have already written into export controls, infrastructure policy, and domestic politics.
A workable path demands discipline. Modular sovereignty secures critical dependencies without chasing self-sufficiency. Layered collaboration keeps safety science, benchmarking, incident-sharing, and interoperability moving, and it does so without forcing states to hand over strategic advantage. That, at least to me, is the compromise the summit made visible.
The race rhetoric will keep funding budgets (creating enemies is unfortunately a sexy way to inspire action…). The real contest will be over reliability, interoperability, and institutional trust. No country can indefinitely scale AI alone. But no country needs to give up control entirely.
If this technology is to persist, the next decade will need to be negotiated in layers. And the winners will be the ones building for that reality instead of performing unity on stage.
Sources:
“AI Action Plan,” AI.gov, https://www.ai.gov/action-plan.
“The AI Continent Action Plan,” Digital Strategy—European Commission, April 9, 2025, https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan.
Bacchetta, Marc, Eddy Bekkers, Emmanuelle Ganne, and Ankai Xu. “Harnessing AI for Inclusive Growth.” Blog post, September 29, 2025. https://ttd.wto.org/en/news-blog/harnessing-ai-for-inclusive-growth.
Center for AI Standards and Innovation (CAISI), NIST, https://www.nist.gov/caisi.
China Cyberspace Administration. Artificial Intelligence Security Governance Framework, Version 1.0, September 9, 2024. https://www.cac.gov.cn/2024-09/09/c_1727567886199789.htm.
China Cyberspace Administration. Notice on Printing and Distributing the “Measures for the Identification of Artificial Intelligence Generated Synthetic Content”, March 14, 2025. https://www.cac.gov.cn/2025-03/14/c_1743654684782215.htm.
Council of Europe. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Council of Europe Treaty Series No. 225, Vilnius, 5 September 2024. https://rm.coe.int/1680afae3c.
“Ensuring a National Policy Framework for Artificial Intelligence,” Executive Order, December 11, 2025, https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
“EU Expands Network of AI Factories, Strengthening Its ‘AI Continent’ Ambition,” Digital Strategy—European Commission, October 10, 2025, https://digital-strategy.ec.europa.eu/en/news/eu-expands-network-ai-factories-strengthening-its-ai-continent-ambition.
“Generative AI Set to Exacerbate Regional Divide in OECD Countries, Says First Regional Analysis on Its Impact on Local Job Markets,” press release, Organisation for Economic Co-operation and Development, November 28, 2024, https://www.oecd.org/en/about/news/press-releases/2024/11/generative-ai-set-to-exacerbate-regional-divide-in-oecd-countries-says-first-regional-analysis-on-its-impact-on-local-job-markets.html.
Gmyrek, Paweł, Janine Berg, Karol Kamiński, Filip Konopczyński, Agnieszka Ładna, Balint Nafradi, Konrad Rosłaniec, and Marek Troszyński. Generative AI and Jobs: A 2025. Geneva: International Labour Organization, May 2025. https://www.ilo.org/sites/default/files/2025-05/Research%20brief_FINAL_15May2025_21.05.25_1.pdf.
Guzman, Isabella Casillas. Federal Register, Vol. 89, No. 233: Rules and Regulations, December 4, 2024. https://www.govinfo.gov/content/pkg/FR-2024-12-04/pdf/2024-28423.pdf.
Huergo, Jennifer. “Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety.” February 8, 2024. https://www.nist.gov/news-events/news/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated-ai.
IndiaAI Compute Capacity. https://indiaai.gov.in/hub/indiaai-compute-capacity.
International Energy Agency, Energy and AI (April 2025), https://iea.blob.core.windows.net/assets/de9dea13-b07d-42c5-a398-d1b3ae17d866/EnergyandAI.pdf.
Launch of BharatGen: The First Government Supported Multimodal Large Language Model Initiative, Department of Science and Technology, Government of India. https://dst.gov.in/launch-bharatgen-first-government-supported-multimodal-large-language-model-initiative.
Lehdonvirta, Vili, Boxi Wu, Zoe Jay Hawkins, Celine Caira, and Lucia Russo. Measuring Domestic Public Cloud Compute Availability for Artificial Intelligence. OECD Artificial Intelligence Papers No. 49. October 2025. https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/10/measuring-domestic-public-cloud-compute-availability-for-artificial-intelligence_39fa6b0e/8602a322-en.pdf.
Maslej, Nestor, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, Toby Walsh, Armin Hamrah, Lapo Santarlasci, Julia Betts Lotufo, Alexandra Rome, Andrew Shi, and Sukrut Oak. “The AI Index 2025 Annual Report.” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2025. https://doi.org/10.48550/arXiv.2504.07139.
Ministry of Electronics & IT. “Cabinet Approves Ambitious IndiaAI Mission to Strengthen the AI Innovation Ecosystem.” Press Information Bureau, Delhi, March 7, 2024. https://www.pib.gov.in/PressReleaseIframePage.aspx?PRID=2012357&reg=3&lang=2.
Ministry of Electronics and Information Technology. Press Release, January 30, 2025. https://negd.gov.in/wp-content/uploads/2025/02/Draft-Press-Release-Updated-1.pdf.
Office of the Spokesperson. “United Nations General Assembly Adopts by Consensus U.S.-Led Resolution on Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development.” Fact Sheet, March 21, 2024. https://2021-2025.state.gov/united-nations-general-assembly-adopts-by-consensus-u-s-led-resolution-on-seizing-the-opportunities-of-safe-secure-and-trustworthy-artificial-intelligence-systems-for-sustainable-development/.
Organisation for Economic Co-operation and Development, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449 (amended May 3, 2024), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
“Seoul Statement of Intent toward International Cooperation on AI Safety Science, AI Seoul Summit 2024 (Annex),” GOV.UK, May 21, 2024, https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex.
“The Seoul Declaration by Countries Attending the AI Seoul Summit, 21-22 May 2024.” Department of Industry, Science and Resources (Australia), May 24, 2024. https://www.industry.gov.au/publications/seoul-declaration-countries-attending-ai-seoul-summit-21-22-may-2024.
Tobey, Danny, Ashley Carr, Karley Buckley, Coran Darling, and Kyle Kloeppel. “NIST Releases Its Generative Artificial Intelligence Profile: Key Points.” DLA Piper, July 30, 2024. https://www.dlapiper.com/en/insights/publications/ai-outlook/2024/nist-releases-its-generative-artificial-intelligence-profile.
U.S. Department of Commerce, Statement from U.S. Secretary of Commerce Howard Lutnick on Transforming the U.S. AI Safety Institute into the Pro-Innovation, Pro-Science U.S. Center for AI Standards and Innovation (June 3, 2025), https://www.commerce.gov/news/press-releases/2025/06/statement-us-secretary-commerce-howard-lutnick-transforming-us-ai.
U.S. Government Accountability Office. Artificial Intelligence: Federal Efforts Guided by Requirements and Advisory Groups. GAO-25-107933, September 9, 2025. https://www.gao.gov/products/gao-25-107933.
World Bank Group. Digital Progress and Trends Report 2025: Strengthening AI Foundations. 978-1-4648-2265-0. https://openknowledge.worldbank.org/server/api/core/bitstreams/f2509a0f-7153-4f32-b180-bc11e90c4940/content.

