Chinese AI Models and the High-Stakes Fight for AI Neutrality

Chinese and Western large language models are reshaping global information power, embedding political world views into the systems that increasingly mediate public discourse and geopolitical influence.

January 14, 2026
DeepSeek is challenging the long-standing dominance of US firms in the AI market. (Dado Ruvic/REUTERS)

When DeepSeek-R1 was released in January 2025, the Chinese artificial intelligence (AI) start-up stunned the world, triggering a wave of downloads well beyond China and challenging the long-standing dominance of US firms in the AI market. Chinese developers are now emerging as rivals in open-weight models, a category in which model parameters are publicly released and can be downloaded, fine-tuned and inspected by anyone. The shift is measurable: According to a recent study of the open-weight AI model ecosystem written by a group of American and international researchers, Chinese open-weight models such as DeepSeek and Alibaba’s Qwen now account for 30 percent of all AI downloads globally, surpassing the United States (15.7 percent).

Several factors explain this rapid adoption. Open-weight models enable developers, researchers and smaller organizations to run systems locally, adapt them to specific use cases and innovate without relying on proprietary US tech platforms. DeepSeek, in particular, is much cheaper to access and operate than American alternatives, making it attractive to universities, start-ups and public-sector institutions worldwide that lack large-scale resources. The company’s forthcoming DeepSeek-R2 model, expected in 2026, is likely to intensify competition further and accelerate the global spread of highly capable, low-cost AI systems.
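To illustrate what running an open-weight system locally means in practice, the minimal sketch below assumes the widely used Hugging Face transformers library with a PyTorch backend; the model identifier is illustrative, and any openly released checkpoint could be substituted.

```python
# Minimal sketch: downloading and running an open-weight model locally with the
# Hugging Face "transformers" library. Assumes PyTorch is installed; the model
# identifier below is illustrative, and any openly released checkpoint would work.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example open-weight checkpoint

# The weights are downloaded once and cached; inference then runs entirely on
# the user's own hardware, with no calls to a proprietary hosted API.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what an open-weight language model is in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on the user’s own machine, they can also be fine-tuned or inspected, which is precisely what makes open-weight releases attractive to institutions without access to large proprietary platforms.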

This global proliferation is unfolding at the same time as China expands state-backed investment into strategic AI sectors and deploys AI across governance, industry and military modernization, according to a study by the Australian Strategic Policy Institute (ASPI) published in December 2025. These shifts must prompt policy makers, technologists and civil society to confront a deeper question: What does it mean when an authoritarian state begins to shape the technical foundations of systems that will mediate information flows, civic discourse and economic life worldwide?

At the Montreal International Security Summit in October 2025, Qiang Xiao, a research scientist at the University of California, Berkeley’s School of Information and the founder and editor-in-chief of China Digital Times, described this phenomenon as “infrastructure colonization”: the idea that broad adoption of Chinese large language models (LLMs) by developers and institutions embeds foreign political assumptions into the architectures of software, workflows and public knowledge systems. Understanding these risks requires situating Chinese AI development within its domestic and global contexts.

China’s Domestic AI Strategy: Control Through Intelligent Infrastructure

AI has become central to China’s long-term strategic vision. Since the introduction of the New Generation Artificial Intelligence Development Plan in 2017, Beijing has treated AI as an instrument of national power, economic resilience and ideological control. Complementary policies, such as Made in China 2025, reinforce this ambition by linking AI to industrial modernization and geopolitical leverage.

Domestically, China has deployed AI-enabled surveillance networks, predictive policing tools, biometric tracking systems and algorithmic monitoring across social media to manage its population at an unprecedented scale. AI-driven disinformation and content-shaping systems reinforce state narratives and suppress dissent, while the updated AI Safety Governance Framework 2.0 codifies Beijing’s dual priorities: accelerating innovation while embedding regime-aligned “security” and “compliance” requirements into technical standards. Carnegie Endowment for International Peace fellows Matt Sheehan and Scott Singer, in their article “How China Views AI Risks and What to Do About Them,” warn that these standards risk legitimizing state control, concentrating regulatory power and constraining independent innovation.

China’s Global Ambition: Exporting Infrastructure and Norms

Abroad, China’s AI strategy extends into the Belt and Road Initiative, where AI-enabled smart-city solutions, surveillance systems and digital public-administration tools create long-term technological dependencies, especially in developing nations in Africa, Asia and Latin America. Chinese firms also use AI to consolidate global positions in industrial automation, supply-chain management and digital finance. Initiatives such as China Standards 2035, the “whole-of-nation” approach to developing AI foundation models, and the State Council’s and Chinese Communist Party’s (CCP’s) 2021 standards strategy aim to shape global technology norms in ways that reflect the party’s governance models, appealing to other authoritarian or illiberal states.

Rebecca Arcesati, in a study for the Mercator Institute for China Studies, describes China’s AI system as a “local-global hybrid model” built on three pillars: massive state investment in computing power and data centres; a whole-of-nation development strategy mobilizing companies such as Alibaba, Baidu and Huawei alongside smaller subsidized players; and selective international engagement that preserves access to global innovation even as China reduces strategic dependence on foreign technologies.

Even with the political constraints imposed by the government, including a governance approach that prioritizes security and control and the challenge of balancing rapid AI development with meaningful safety measures, Chinese models are innovative and now excel in specific domains, including cost efficiency and practical applications.

The Security and Political Implications of Chinese LLM Diffusion

The international uptake of Chinese LLMs is not simply a technical trend of interest to technologists alone. As companies and individuals around the globe adopt these models, it could reshape global information ecosystems. Unlike US models, Chinese LLMs are trained within a tightly regulated political environment in which certain topics and narratives are constrained by law and state ideology.

Recent research comparing ChatGPT and DeepSeek, published in the study “Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models,” underscores the consequences. The three researchers posed dozens of questions relating to geopolitics, international relations, human rights and governance, and found that while US and Chinese models often produced similar factual content, their framing diverged in clear and systematic ways. US models anchored explanations in international law, multilateral institutions and democratic norms such as individual rights. Chinese models, by contrast, emphasized state sovereignty, national unity, historical grievances and geopolitical stability, themes dear to the CCP. On sensitive topics such as Taiwan, DeepSeek declined to answer entirely, indicating hard-coded restrictions reflecting domestic political red lines.

These are not instances of overt propaganda. Rather, they represent subtle, structurally embedded biases that users may not detect, which makes them all the more insidious. As Chinese models become integrated into search engines, productivity software, commercial platforms and government systems, these biases can scale globally, shaping political understanding and public discourse far beyond China’s borders without ever reading as propaganda.

ASPI’s new study, “The party’s AI: How China’s new AI systems are reshaping human rights,” confirms the risks. The authors argue that China is increasingly using LLMs to consolidate state control and suppress dissent as AI is embedded across multiple sectors, from governance to law enforcement and content moderation. In one alarming example, the authors show how China is developing minority-language LLMs to monitor Korean, Mongolian, Tibetan and Uyghur communications, extending surveillance to diaspora communities abroad. As noted above, China also exports AI-powered surveillance and control systems, influencing practices in other countries.

Political control and ideology are thus embedded into technological innovation, which, as Xiao argued at the international security summit, can be regarded as a form of “infrastructure colonization.” The adoption of Chinese LLMs therefore raises critical questions not only about market competition but also about the political values embedded in our digital infrastructure and the risks to privacy, freedom of expression and the rule of law. What happens when the tools we use are trained within an authoritarian information ecosystem? The answer bears on the integrity and resilience of public discourse worldwide. As ASPI senior analyst Fergus Ryan wrote on his Substack “Red Packet,” “open-weight model releases, cheap API access and aggressive international partnerships mean these systems are no longer staying inside China’s borders.” The question now is not whether Chinese models are “best” overall, but whether they achieve enough capability and adoption to reshape AI’s geopolitical landscape.

Grok: A Western Case Study in Ideological Drift, Bias and the Erosion of AI Neutrality

As fears intensify around Chinese LLMs and their embedded political biases, the case of Elon Musk’s Grok offers an important counterpoint: ideological distortion and influence in AI are not confined to authoritarian states. Grok, which is tightly integrated into X, has become an example of how models can come to mirror the ideology, interests and personality of their creators.

An analysis by the British media outlet Sky News found that X’s algorithm heavily amplified right-wing and extreme content in the United Kingdom, irrespective of user preferences. Musk himself has become a political actor on the platform, elevating fringe figures, boosting new right-wing parties and provoking what UK politicians such as Ed Miliband and Ed Davey describe as a “toxic” threat to democratic debate. In this ecosystem, Grok appears as an extension of a broader ideological project.

A new investigation by Adi Robertson, a senior tech and policy editor at The Verge, shows that recent interactions with Grok have shifted in tone: The chatbot increasingly praises Musk in exaggerated, almost devotional language, calling him a “visionary,” the “greatest mind of our time” and, in troll-baited scenarios, even “a better role model than Jesus Christ.” It has gone so far as to assert Musk’s superiority to basketball player LeBron James and to European dictators. These outputs are not random glitches but reflect the intentional design of a chatbot aligned with Musk’s public persona. Similar issues have occurred with Grokipedia, Musk’s attempted alternative to Wikipedia. Early entries cited Kremlin sources and fringe forums such as Infowars and Stormfront, pushed far-right talking points and emphasized Musk’s physical prowess. Musk created the platform to rival what he calls “Wokepedia” (Wikipedia), but the result raises the question of whether Grokipedia functions less as an encyclopedia than as a curated knowledge environment shaped by Musk’s world view.

Beyond the cult of personality, the Grok case highlights several risks: the erosion of neutrality, the amplification of misinformation and the centralization of narrative power in private hands, particularly tech giants that now appear no less powerful than states. ChatGPT and Gemini are far from perfect in terms of content safety, moderation and neutrality, but Grok openly aligns with Musk’s ideology.

There are also legal implications. Most recently, Grok has come under fire over non-consensual sexually explicit deepfakes. In addition, France has launched a formal investigation into Grok after the model produced Holocaust-denial content, which is illegal under French law and a potential violation of the European Union’s Digital Services Act and AI Act. Regulators are already signalling the need for greater transparency around training data, moderation systems and human oversight for sensitive topics. European officials see Grok as a test case for AI accountability, especially as personality-driven models intersect with real-world legal and historical boundaries.

As we learn how AI models shape public discourse, distort information environments and challenge democratic norms, the Grok case underscores the urgent need for global — not just China-focused — scrutiny of how AI systems embed ideology, construct knowledge and wield influence in society as well as in national and international politics.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Marie Lamensch is the global affairs officer at the Montreal Institute for Global Security and is an expert in global and human security.