195. That is the number of countries that will eventually need to decide whose AI governance architecture they live under — Washington’s, Beijing’s, or something negotiated uncomfortably in between. The clock on that decision is moving faster than most boardrooms appreciate.
In the summer of 2025, China released a formal Action Plan for Global AI Governance, a document that reads less like a domestic regulatory update and more like a bid for custodianship of the world’s most consequential technology. For executives charting capital allocation and market strategy across borders, dismissing this as geopolitical noise would be a costly misread.
Beijing Is Not Playing Defense Anymore
For years, the dominant narrative positioned China as a fast follower in AI — voracious in data collection, capable in engineering, but reactive to norms set elsewhere. That framing is now obsolete. China’s 2025 action plan articulates an affirmative vision: to become a global AI innovation leader by 2030, with governance frameworks to match. This is not a compliance document. It is a standard-setting manifesto.
The plan explicitly calls for international cooperation on AI safety, ethical development, and technical standards — language that sounds anodyne until you consider that whoever writes the definitions controls the game. When Beijing calls for “balanced” AI governance, it means governance that does not privilege the liability frameworks of Brussels or the export-control architectures of Washington. It means a third way, one that China intends to anchor in multilateral bodies where it has spent years accumulating influence.
The pattern is familiar. China deployed the same playbook in telecommunications, where early positioning in 3GPP standards bodies helped Huawei embed its technical assumptions into 5G specifications that the rest of the world then adopted. Mayer Brown’s analysis of the action plan notes that Beijing has simultaneously issued draft ethics rules and AI labelling requirements — the granular technical work that, once internationalized, becomes infrastructure others must conform to.
Three Regulatory Worlds, One Fractured Market
Any executive running a multinational technology operation is already navigating at least two AI governance regimes simultaneously. In reality, there are now three distinct philosophies competing for global adoption, and they are not converging.
| Dimension | European Union | United States | China |
|---|---|---|---|
| Primary Logic | Rights-based precaution | Innovation-first, sectoral | State-directed development with security overlay |
| Key Instrument | EU AI Act (risk tiers) | Executive orders, NIST frameworks | Action plan + draft ethics and labelling rules |
| Standards Ambition | Export EU norms via market access | Bilateral agreements, export controls | Multilateral bodies, Global South outreach |
| 2030 Target | Trustworthy AI leadership | Maintained technological supremacy | Global AI innovation leader |
| Compliance Cost Driver | Documentation, audits, liability | Export licensing, sector rules | Content controls, ethics certification, labelling |
The compliance burden for any firm operating across all three jurisdictions is not additive — it is multiplicative. A model trained and deployed in accordance with EU transparency requirements may simultaneously violate Chinese content governance rules. An algorithm optimized under US national security carve-outs may be treated as an unethical system under Beijing’s draft ethics framework. There is no harmonization roadmap in sight.
The Global South Is the Prize Nobody Is Discussing
Analysts focused on the US-China binary are missing the more consequential contest: which governance model becomes the default for the roughly 100 nations that lack the regulatory capacity to write their own AI frameworks from scratch.
China’s action plan is explicitly oriented toward this audience. Beijing has spent the past decade building infrastructure relationships across Africa, Southeast Asia, and Latin America through the Belt and Road Initiative. The AI governance action plan is the digital-era extension of that strategy. Countries that receive Chinese AI infrastructure — cloud platforms, surveillance systems, agricultural analytics — will also receive Chinese governance assumptions embedded in those systems. The model exports itself.
“Standards are geopolitics by other means. The country that defines what ‘safe AI’ means in Lagos, Jakarta, and Bogotá will shape the technology environment for billions of people who have no seat at the table where those definitions are written.”
The EU has recognized this dynamic and is attempting to use trade relationships and development finance to export its own AI governance frameworks to partner countries. The United States, by contrast, has leaned heavily on export controls and chip restrictions — tools designed to deny China capability, but not tools that offer alternative governance infrastructure to developing nations. That asymmetry matters enormously in the long run.
China’s Domestic Architecture Is More Sophisticated Than the West Assumes
A second analytical error is treating China’s AI governance as purely political theater — heavy on state control rhetoric, light on technical substance. The peer-reviewed literature on China’s AI governance norms tells a more complicated story. Beijing has developed a layered system involving state agencies, industry bodies, and academic institutions in ways that parallel — and in some respects outpace — Western multi-stakeholder models.
China’s regulatory sequencing has been notably adaptive. Rather than attempting a comprehensive AI law on the EU model, Beijing issued targeted regulations on specific applications: algorithmic recommendation systems and deep synthesis (deepfake) technologies in 2022, generative AI in 2023, and now the broader governance action plan in 2025. This incremental approach allowed regulators to learn from each deployment before overcommitting to rigid statutory frameworks. The draft ethics rules and labelling requirements released alongside the 2025 action plan represent the maturation of that learning process.
For multinationals, this means the compliance landscape in China is not static. It is an evolving system that will continue to add layers. Companies that built China market strategies around the assumption that Beijing’s AI rules were aspirational rather than enforceable should revisit that judgment urgently.
What the Standards War Means for Capital Allocation
Investors evaluating AI-exposed companies need to price regulatory fragmentation as a structural cost, not an episodic risk. The divergence between governance regimes creates at least three identifiable headwinds.
First, model architecture decisions made today will carry jurisdictional assumptions that are difficult to unwind. A foundation model built to satisfy Chinese content governance requirements will have different training data constraints than one optimized for EU transparency rules. These are not superficial differences. They affect what the model can and cannot do, and therefore what markets it can and cannot serve.
Second, the talent and data pipelines that feed AI development are themselves becoming subject to governance restrictions. China’s data export rules, the EU’s GDPR-adjacent provisions in the AI Act, and US restrictions on data flows to adversarial nations create an environment where the free movement of training data — a core input to AI capability — is increasingly constrained by political geography.
Third, the labelling and certification requirements emerging from multiple jurisdictions will create compliance overhead that disproportionately burdens smaller AI developers. This is not necessarily bad news for large incumbents, who can absorb compliance costs that function as barriers to entry. But it does mean that the AI market structure will be shaped as much by regulatory economics as by technical merit.
The Multilateral Illusion
There is a recurring hope in policy circles that some combination of the UN, the OECD, or the G20 will eventually produce a unified global AI governance framework that resolves these tensions. That hope should be treated with extreme skepticism by anyone making decisions that need to survive the next five years.
The structural incentives pushing the major powers toward divergent frameworks are stronger than the diplomatic incentives toward harmonization. The United States and China are engaged in a technology competition where the rules of AI governance are themselves a competitive variable. The EU has built an entire regulatory identity around being the global standard for rights-based digital governance. None of these actors has a compelling reason to surrender their framework for a compromise that dilutes their strategic position.
China’s 2025 action plan, for all its cooperative language about international AI governance, is best understood not as an olive branch but as a competing architecture. It is an invitation to adopt Beijing’s terms, dressed in the vocabulary of multilateralism. The difference matters for anyone sitting across the negotiating table.
FetchLogic Take
The AI governance fragmentation of 2025 will produce a market structure by 2028 that resembles the internet’s own geographic splintering — what technologists call the “splinternet” — but operating at the level of model capability rather than content access. Companies that attempt to build single global AI products will face an impossible optimization problem: satisfying Chinese ethics certification, EU explainability mandates, and US national security carve-outs simultaneously will require fundamentally different model architectures, not just different documentation. The winners in this environment will not be the firms with the best models. They will be the firms that master regulatory arbitrage — building modular AI systems that can be reconfigured by jurisdiction, and deploying lobbying infrastructure in Geneva and multilateral standards bodies with the same seriousness they currently apply to Washington and Brussels. The China AI governance action plan is the starting gun on that race. Most boardrooms have not yet heard the shot.