The Algorithmic Oligarchies: How Big Models Rewrite Geopolitics

Executive Summary
Artificial intelligence (AI) is no longer just a technological curiosity; it has become a geopolitical instrument. The ability to design, train and deploy frontier models depends on a chain of inputs—specialised chips, lithography tools, fabrication plants, cloud platforms, data, and cultural resources. Because each layer of this geostrategic stack is controlled by a handful of actors, the world risks drifting toward algorithmic oligarchies. Compute chokepoints — points where control over hardware, software or infrastructure can restrict access — resemble energy chokepoints of the 20th century. As AI permeates finance, defence, culture and policymaking, nations that lack sovereignty over compute risk becoming dependent on foreign power brokers. A balanced strategy must therefore go beyond building fabs to include cloud governance, evaluation capacity, digital public infrastructure and cultural pluralism.
This article examines three dimensions of algorithmic sovereignty. First, it traces hardware concentration: from extreme ultraviolet (EUV) lithography and fab capacity to the export‑control regimes that police the flow of advanced chips and model weights. Second, it analyses compute governance: how nations use laws and standards to restrict or encourage model development and how evaluation institutes help governments understand risks. Third, it explores cultural sovereignty, arguing that training data and platform design can flatten linguistic diversity and undermine national cultures unless public institutions build alternative digital rails. We conclude with a strategic scorecard that ranks the United States, European Union, United Kingdom, India and China across five levers of algorithmic sovereignty — fabrication, governance, evaluation, digital rails and cultural pluralism — and propose actions for leaders who want to avoid a world dominated by a few algorithmic oligarchies.
The Age of Algorithmic Oligarchies
Over the past decade, AI development has become concentrated in an elite club of nations and corporations. Three metrics illustrate this concentration. First, only one company—ASML—can supply extreme ultraviolet lithography machines, the tools needed to print features on chips smaller than 7 nanometres. A July 2024 analysis notes that ASML became the sole supplier of EUV machines after a 30‑year innovation race, effectively granting it a monopoly on the tool essential for fabricating leading‑edge semiconductors[1]. This monopoly means governments rely on Dutch export policies to decide who can build advanced chips.
Second, fabrication capacity is highly geographically concentrated. A May 2024 report for the Semiconductor Industry Association and Boston Consulting Group finds that Taiwan has captured over 70 % of advanced‑node (<10 nm) chip manufacturing[2]. South Korean firms such as Samsung and SK Hynix hold leading positions in DRAM and NAND flash, while all advanced logic fabrication (<10 nm) is located in Asia and 97 % of assembly, test and packaging occurs outside the United States[3]. Europe’s share of global fabrication is below 10 %[4], and the EU Court of Auditors warns that the bloc’s 20 % target by 2030 is unrealistic; auditors project only 11.7 % by 2030 and note that the EU contributes just €4.5 billion out of an estimated €43 billion needed.
Third, the economics of compute create massive barriers to entry. McKinsey estimates that companies will need to invest more than US$5 trillion in data‑centre capacity by 2030 just to meet AI demand[5]; this sum exceeds the GDP of every country except the United States and China. Such capital intensity advantages firms with access to deep capital markets and stable energy supplies and makes it hard for latecomers to catch up.
These metrics show why algorithmic oligarchies are emerging. When a small cluster of firms controls lithography tools, a handful of Asian countries fabricate most advanced chips, and only two countries can finance multi‑trillion‑dollar data‑centre investments, it is not surprising that AI development is dominated by the United States and China. The rest of the world must choose between dependence, alliances or sovereign alternatives.
Compute as the New Geopolitical Chokepoint
The Semiconductor Frontline
The semiconductor stack comprises design, lithography equipment, fabrication, packaging and materials. EUV lithography sits at the heart of this stack. Developed over decades through international collaboration, EUV enables feature sizes below 10 nm. ASML’s monopoly on EUV machines means that export decisions by the Netherlands or the United States can effectively bar entire nations from advanced chipmaking[1]. In 2019, when the first devices using EUV hit the market, the technology was hailed as “the machine that saved Moore’s law,” and by 2024 ASML’s tools were indispensable for AI accelerators[1].
Yet supply chains are more global and fragile than they appear. Research shows that EUV emerged from a collaborative network of Japanese, American and European researchers, with public‑private partnerships and consortia overcoming technical hurdles over 30 years[6]. The CSET report notes that although ASML commercialised the technology, the underlying research community and supply chain remain global[7]. This means that while export controls can restrict tool sales, innovations still depend on international collaboration in materials science, plasma physics and optics. Policymakers thus face a dilemma: restrict technology to protect national security or foster global cooperation to advance research.
Advanced node capacity is another chokepoint. The same BCG/SIA report cited above observes that Taiwan’s TSMC is the go‑to foundry for fabless companies, producing more than 70 % of sub‑10 nm chips[2]. South Korea’s Samsung and SK Hynix dominate DRAM and NAND flash markets[2]. Meanwhile, China leads assembly, test and packaging for chips older than 28 nm and has invested US$63 billion in 73 fabs over the past five years[8]. Japan is trying to catch up with multi‑billion‑dollar subsidies and a consortium called Rapidus aimed at producing 2 nm chips[9].
Because advanced fabs cost more than US$20 billion each and take years to build, export controls become powerful levers. In late 2024 and early 2025, the U.S. Bureau of Industry and Security (BIS) issued interim final rules expanding export controls on advanced integrated circuits, semiconductor manufacturing equipment and AI model parameters[10]. The new rules created a classification for high‑bandwidth memory (HBM) chips (ECCN 3A090.c), imposed worldwide licence requirements for supplying advanced chips to China and introduced restrictions on items containing such chips[11]. The BIS also signalled future controls on AI model weights and advocated for due‑diligence practices when exporting AI chips and software. These measures show that the United States views compute as part of its national‑security arsenal.
Europe, meanwhile, hopes to build its own capacity. The EU Chips Act aims for a 20 % share of global semiconductor production by 2030, but auditors warn that Europe would need to quadruple its capacity and overcome dependencies on raw materials and energy. European industry groups are already calling for a “Chips Act 2.0” that focuses on design, materials and equipment rather than just subsidies[12]. Without changes, Europe risks remaining an equipment supplier — ASML leads in EUV — while relying on Asia and the United States for fabs and cloud services.
The Policy–Market Timeline (2022–2025)
The relationship between policy and market dynamics can be visualised through a timeline of export controls and industry responses:
1. October 2022 – The United States imposes controls on advanced chip exports to China, targeting GPU accelerators and AI chips.
2. October 2023 – A second round of U.S. controls expands restrictions to include chips used for AI training and some foundry services, signalling a shift from hardware to services.
3. Late 2024–January 2025 – BIS introduces interim final rules adding HBM chips, cloud services and AI model weights to the control list[10][11]. The rules emphasise the need for companies to verify end‑users and restrict China’s access to the AI supply chain.
4. 2024–2025 – Nvidia, AMD and Intel launch China‑specific chips with reduced performance to comply with U.S. export limits. Huawei responds by announcing its own AI chip — the Ascend series — and doubling production capacity. Chinese cloud providers develop domestic AI models that circumvent U.S. chips.
5. 2025 – Calls for an EU “Chips Act 2.0” emerge, focusing on equipment and design[12]. Meanwhile, South Korea and Taiwan accelerate fab expansion. Japan invests billions in Rapidus and IBM collaboration[9].
This timeline shows how export controls spawn adaptive strategies. When the U.S. restricts chips, semiconductor firms design downgraded SKUs; when tools are denied, China invests in domestic equipment; when Europe subsidises fabs, Asian incumbents expand in response. Policymakers must therefore expect feedback loops and design controls accordingly.
Europe’s Ambition and Reality
Europe’s push for semiconductor sovereignty faces several obstacles. The EU’s goal of doubling its share of global production to 20 % by 2030 is hampered by the region’s limited fabrication base (less than 10 % today) and heavy reliance on external suppliers[4]. The European Court of Auditors reports that the Chips Act invests only €4.5 billion from the EU budget, with the remainder expected from member states and industry, while global chipmakers spent more than €405 billion from 2020–2023. The auditors predict the EU will reach only 11.7 % share by 2030, citing high energy costs, a lack of raw materials and regulatory fragmentation. Moreover, nearly 80 % of EU‑based chip input suppliers and 63 % of customers are outside the EU, amplifying geopolitical risks[13].
Industry groups have called for a second Chips Act to address these weaknesses[12]. A new act could prioritise design tools, materials, workforce development and open‑foundry models. However, Europe must balance sovereignty with openness. The EU remains the global leader in lithography (ASML) and process equipment but is not a major fab location; focusing solely on fabs may divert resources from areas of comparative advantage. A sound strategy should therefore link investments in fabrication with policies to retain high‑end equipment manufacturing and research.
Beyond Hardware: Sovereignty in the Model Era
Cloud & Compute Governance
Compute governance involves not only hardware but also cloud services, export controls and evaluation mechanisms. Governments are realising that access to cloud resources can be as strategically important as access to chips. For example, the U.S. BIS interim rules introduced in 2025 include requirements for cloud service providers to verify and, in some cases, block access to AI training capacity for restricted end‑users[10]. This underscores the shift from controlling goods to controlling services. In addition, states are considering measures to monitor use of advanced chips through cloud telemetry to ensure that exports are not circumvented, though detailed public information on enforcement technologies is limited.
Evaluation capacity is another key lever. The United Kingdom’s AI Safety Institute (AISI) was launched following the 2023 AI Safety Summit to equip governments with an empirical understanding of frontier model risks[14]. Within a year it built a large team of evaluators, signed a memorandum of understanding with the U.S. and published an open‑source evaluation framework[15]. The institute tests models for cyberattack assistance, chemical and biological misuse, autonomous agent capabilities and societal impacts. In its first year the AISI evaluated 16 large language models and developed proprietary tools and datasets[16][17]. It also chairs the International AI Safety Report, led by Yoshua Bengio, which will inform the Third AI Action Summit[18].
Evaluation capacity has geopolitical significance because it allows governments to verify safety claims and impose conditions on model deployment. Independent evaluations can reveal whether frontier models circumvent safeguards or leak confidential data. The AISI argues that repeated tests are essential because model behaviour changes with fine‑tuning and system updates[19]. Countries without evaluation infrastructure will have to rely on vendors’ assurances or foreign regulators, diminishing sovereignty. Therefore, building evaluation institutes and sharing methodologies through international coalitions is a sovereignty lever akin to building a fab.
Digital Public Infrastructure Alternatives
Digital public infrastructure (DPI) offers a path toward sovereignty that does not rely on building world‑leading fabs. DPI refers to secure, inclusive and interoperable systems for digital identity, payment and data exchange that are controlled by public institutions rather than proprietary platforms[20]. A 2025 policy paper from the Digital Cooperation Organization notes that DPI can bridge financial and gender gaps by enabling transparent and accountable services[21]. It cites India’s “India Stack”—a suite of open‑source protocols for identity (Aadhaar), payments (UPI) and data sharing—as an international benchmark[22].
DPI is relevant to algorithmic sovereignty for two reasons. First, it reduces reliance on foreign platforms for basic digital functions. Countries using India’s model can build domestic ecosystems for authentication, payments and data exchange, which fosters local innovation and reduces the need for Western cloud platforms. Second, DPI creates a basis for safe AI deployment. When identity and payment rails are public, governments can enforce accountability in AI services (e.g., verifying who requests AI‑generated content and tracking financial flows). DPI also enhances cultural pluralism by enabling local content distribution channels outside global platforms.
However, building DPI requires investment, governance and technical expertise. Countries must choose open standards, ensure interoperability and protect privacy. A risk is that DPI becomes a surveillance tool or replicates corporate oligarchies within the state. The DCO report emphasises the importance of transparency, trust and accountability to avoid these pitfalls[21].
Cultural Sovereignty and the Risk of Model Monocultures
While hardware and compute governance occupy headlines, cultural sovereignty—the ability of societies to maintain linguistic diversity, artistic expression and civic values—may prove just as important. Models trained on predominantly English, Mandarin or other large‑language corpora risk marginalising smaller languages and flattening cultural narratives. UNESCO’s 2005 Convention on the Protection and Promotion of the Diversity of Cultural Expressions recognises that cultural diversity is a precondition for societal development. In 2017, the Convention’s Conference of Parties approved Operational Guidelines for the Digital Environment to address discoverability, linguistic diversity and transparency on digital platforms[23][24]. In 2019 and 2023, UNESCO created a reflection group to assess AI’s impact on cultural industries and to propose recommendations for 2025[24][25].
The guidelines emphasise three principles: (1) linguistic diversity, requiring platforms to support a wide range of languages; (2) discoverability, ensuring national and local cultural content is visible in recommendation algorithms; and (3) transparency, mandating that platforms disclose how algorithms prioritise content[24]. Countries can implement these guidelines by requiring AI developers to include local language datasets, funding digital cultural repositories and mandating transparency in recommendation systems. Without such measures, training data will predominantly reflect cultures with abundant digital resources, leading to model monocultures that shape discourse and identity.
Civil society organisations add further dimensions to cultural sovereignty. Groups such as Mozilla, Access Now and the Electronic Frontier Foundation advocate for rights‑preserving AI, algorithmic accountability and open access to training data. By campaigning for contestability—the ability to challenge model outputs and training practices—these organisations help prevent a concentration of cultural power in a few corporate hands. They also highlight that AI systems, if unchecked, can reinforce biases and colonial patterns of data extraction. Combining UNESCO’s guidelines with civil society advocacy provides a framework for cultural sovereignty.
Compute Demand, Energy and Climate
Cultural concerns intersect with environmental ones. AI compute demand is straining electricity grids. A 2024 article summarising the International Energy Agency’s (IEA) Energy and AI report notes that global data‑centre electricity consumption could more than double between 2024 and 2030, reaching 945 terawatt-hours (TWh)—roughly the same as Japan’s current electricity demand[26]. The IEA identifies AI as the most important driver of this growth[27] and estimates that AI currently accounts for 5–15 % of data‑centre power use but could rise to 35–50 % by 2030[28]. Nearly half of data‑centre electricity consumption occurs in the United States, 25 % in China and 15 % in Europe[29]. In Ireland, data centres already consume about 21 % of electricity and could reach 32 % by 2026[30]. The IEA warns that data‑centre growth may consume 7–12 % of U.S. electricity by 2028[31] and that pressure on grids could force trade‑offs with electrification and climate goals[32].
These figures show that compute sovereignty is not just a security issue but also an energy and climate issue. Countries with limited grid capacity may find that building AI compute clusters requires either expanding fossil‑fuel use or diverting renewable energy from other sectors. In Europe, water and energy consumption have already become constraints on new fabs[33]. Without coordinating compute expansion with clean‑energy investments, algorithmic sovereignty may exacerbate climate risks.
Strategic Scorecard: How Nations Can Respond
To avoid a world of algorithmic oligarchies, policymakers must deploy a portfolio of levers. The following framework summarises five levers of algorithmic sovereignty and assesses the current position of major players:
1. Fabrication Capacity – building or partnering for advanced fabs, secure access to lithography tools, and investing in materials and packaging.
2. Compute Governance – export controls on chips and model weights, cloud‑service regulation, and transparency requirements.
3. Evaluation Capacity – independent institutes with technical expertise to test and certify frontier models; standards for robustness and safety.
4. Digital Rails – public digital identity, payment and data‑exchange systems that reduce dependence on proprietary platforms.
5. Cultural Pluralism – policies ensuring linguistic diversity, discoverability of local content and contestability of AI outputs.
Scorecard (Qualitative Assessment)
Region/Country |
Fabrication
Capacity |
Compute
Governance |
Evaluation
Capacity |
Digital
Rails |
Cultural
Pluralism |
United
States |
Strong
design ecosystem, limited onshore fabs; incentives via CHIPS Act but
<10 % capacity today[3]. |
Aggressive
export controls on chips and services; BIS rules on HBM chips and model
weights[10][11]. |
Emerging;
NIST and newly proposed safety institutes, but no permanent national
evaluator yet. |
Patchwork;
digital identity and payments largely private; government exploring U.S.
Digital ID. |
Limited
cultural policies; Section 230 debates; reliance on market solutions. |
European
Union |
Strong
equipment (ASML) and research; limited fabs (<10 %) and dependence on
Asia[4]; Chips
Act funding inadequate. |
Developing;
EU AI Act emphasises risk‑based regulation; exploring export controls;
reliant on member‑state enforcement. |
Nascent;
some national initiatives but no EU‑wide evaluation institute. |
Varied;
some member states offer digital ID and payments, but no unified DPI. |
UNESCO’s
Convention implemented; strong cultural policies; exploring discoverability
rules; calls for algorithmic transparency. |
United
Kingdom |
No
advanced fabs; strategy focuses on R&D, design and compound
semiconductors with £1 billion over 10 years[34][35]. |
Alignment
with U.S. export controls; exploring cloud oversight; still developing. |
Strong;
AI Safety Institute tests frontier models and shares frameworks[15][16]. |
Limited;
digital identity project (Gov.uk Verify) struggled; payments dominated by
private banks. |
Active in
cultural policy (BBC, Arts Council); engaged in UNESCO initiatives;
evaluating algorithmic impact on cultural diversity. |
China |
Rapidly
expanding capacity; 73 fabs built for chips >28 nm with
US$63 billion investment[8]; making
strides in advanced nodes despite export controls; heavy state subsidies. |
Subject
to U.S. export controls; implements strict tech restrictions domestically;
invests heavily in sovereign chips. |
Limited
transparency; evaluation occurs within state labs; limited international
collaboration. |
Developing
digital yuan and national digital ID; leading in mobile payments; building
data‑exchange platforms. |
Tight
state control over culture; censorship restricts pluralism; invests in
Mandarin‑first AI models. |
India |
Minimal
advanced fabs; focusing on design talent; exploring incentives for
semiconductor manufacturing; reliant on foreign chips. |
Participates
in U.S. export regimes; exploring guidelines; digital regulations emphasise
data localisation. |
Limited;
no dedicated evaluation institute yet. |
Strong;
India Stack offers digital identity, payments and data consent platforms[22]. |
Diverse
languages and cultural policies; concerns over content moderation; potential
to integrate UNESCO guidelines. |
This qualitative assessment suggests that no country yet has full algorithmic sovereignty. The United States leads in compute governance and design but lacks onshore fabrication and digital rails. The EU excels in equipment and cultural policy but trails in fabs and evaluation capacity. The UK has pioneered evaluation but lacks fabs and digital infrastructure. China is building a vertically integrated stack but faces external controls and internal censorship. India is strong in digital rails and design talent but lags in fabrication and evaluation. Policymakers can use this scorecard to benchmark progress and prioritise investments.
Conclusion: The Future of Geopolitical Power
The 20th century taught that nations with control over oil reserves and shipping lanes could shape global politics. In the 21st century, the chokepoints are chips, clouds and algorithms. Lithography tools like EUV machines are monopolised by a single firm[1]. Advanced-node fabrication is concentrated in Taiwan and South Korea, leaving the rest of the world dependent[3]. Export controls on chips and cloud services extend national security into cyberspace[10]. Meanwhile, the cost of building data centres required for AI training exceeds US$5 trillion[5] and strains electricity grids[36]. Cultural power is also at stake, as training data and recommendation systems can flatten linguistic diversity unless governments enforce discoverability and transparency[24].
Nations that ignore these dynamics risk becoming clients of algorithmic oligarchies. Fabs alone will not deliver sovereignty; states must also govern cloud access, build evaluation institutes, invest in digital public infrastructure and protect cultural diversity. Conversely, hard‑nosed industrial policy without openness risks stifling innovation. EUV emerged from decades of public‑private collaboration and global research[6][7]; replicating that success requires balancing security with cooperation.
Finally, algorithmic sovereignty is intertwined with energy and climate. Doubling data‑centre electricity demand by 2030 may derail decarbonisation unless countries accelerate clean‑energy investments and design AI systems that prioritise efficiency[32]. Policies that align compute expansion with renewable capacity will determine whether AI accelerates sustainability or exacerbates emissions.
The race for algorithmic power is not zero‑sum. International agreements on export controls, evaluation standards and cultural guidelines can mitigate harmful competition. Coalitions such as the Global Partnership on AI, UNESCO’s reflection group and the DCO offer platforms for shared governance. By investing in the five levers—fabrication, governance, evaluation, digital rails and cultural pluralism—states can negotiate a future where AI empowers societies instead of entrenching oligarchies. This article offers a roadmap for leaders who seek to harness big models without surrendering sovereignty.
FAQ: Key Questions on Algorithmic Oligarchies
1. Why are EUV lithography tools considered the “single point of failure” in global AI geopolitics?
Because ASML remains the sole supplier, any disruption—whether regulatory, technical, or geopolitical—can throttle the entire advanced-node chip pipeline. This turns a corporate monopoly into a geopolitical chokehold.
2. How do export controls on AI chips reshape global power dynamics?
Controls now extend beyond GPUs to CPUs and cloud access, meaning the U.S. is effectively setting the rules of who can—and cannot—train large models at scale. It’s statecraft executed through supply chains.
3. Why does the EU’s “Chips Act 2.0” struggle to deliver sovereignty?
Ambition outpaces capacity: Europe is pledging billions but lacks fabrication scale, skilled labour pipelines, and power grid readiness. This creates a sovereignty gap between rhetoric and reality.
4. What role does the UK’s AI Safety Institute play in sovereignty?
Unlike fabs, evaluation capacity cannot be offshored. By defining tests and benchmarks for frontier models, institutes like AISI turn governance into a form of geopolitical leverage.
5. How does Digital Public Infrastructure (DPI) challenge the platform oligarchies?
IndiaStack shows that sovereign rails for identity, payments, and data exchange can rebalance power. Instead of depending on U.S. or Chinese platforms, states can create their own digital commons.
6. What is “cultural sovereignty” in the age of big models?
It’s the ability of nations to preserve and project their linguistic, historical, and artistic diversity in digital ecosystems. UNESCO’s 2005 Convention is a legal anchor, but AI risks eroding pluralism by training on homogenised data.
7. What should executives and policymakers measure to avoid falling into algorithmic dependency?
Key indicators include: concentration of advanced-node supply, national grid resilience, number of trained evaluators, DPI adoption rates, and cultural content discoverability. These metrics convert abstract debates into actionable dashboards.
[1] [6] [7] Tracing the Emergence of Extreme Ultraviolet Lithography | Center for Security and Emerging Technology
https://cset.georgetown.edu/publication/tracing-the-emergence-of-extreme-ultraviolet-lithography/
[2] [3] [8] [9] Report_Emerging-Resilience-in-the-Semiconductor-Supply-Chain.pdf
[4] [13] [33] Addressing Vulnerabilities in the EU’s Semiconductor Value Chain - PromethEUs
[5] PowerPoint Presentation
[10] [11] BIS Further Restricts Exports of Artificial Intelligence and Advanced Chips to China | Cleary Foreign Investment and International Trade Watch
[12] Semiconductor firms call for EU Chips Act 2.0 | Reuters
https://www.reuters.com/technology/semiconductor-firms-call-eu-chips-act-20-2025-03-19/
[14] [16] [17] [18] Our First Year | AISI Work
https://www.aisi.gov.uk/work/our-first-year
[15] [19] Early lessons from evaluating frontier AI systems | AISI Work
https://www.aisi.gov.uk/work/early-lessons-from-evaluating-frontier-ai-systems
[20] [21] [22] DPI-Policy-Paper.pdf
https://dco.org/wp-content/uploads/2025/06/DPI-Policy-Paper.pdf
[23] [24] [25] Appel_experts_numerique_en_sept_2023.pdf
https://www.unesco.at/fileadmin/user_upload/Appel_experts_numerique_en_sept_2023.pdf
[26] [27] [28] [29] [30] [31] [32] [36] AI: Five charts that put data-centre energy use – and emissions – into context - Carbon Brief