New reports underline how Chinese artificial intelligence models spout distortions about Ukraine and more.
By Sarah Cook, for CEPA
The Estonian Foreign Intelligence Service’s 2026 International Security Report contained a startling finding. It tested the Chinese open-source AI model DeepSeek for biased or incomplete answers.
“When discussing issues related to Estonia’s security, DeepSeek conceals key information and inserts Chinese propaganda into its answers,” the report warned.
This Estonian analysis forms one of three recent European assessments of Chinese-developed AI models. An audit by the non-profit Policy Genome and a detailed study funded by the Swedish Psychological Defence Agency highlight how leading Chinese models such as DeepSeek, Alibaba’s Qwen family, and Moonshot’s Kimi embed content controls that extend well beyond China’s domestic political sensitivities.
Earlier scrutiny of Chinese AI models focused on domestically censored topics such as the 1989 Tiananmen crackdown, Taiwan, and rights abuses involving Uyghurs, Tibetans, Hong Kong, and Falun Gong. Those constraints not only limit knowledge of China but also suppress topics that matter to European citizens from diaspora communities and multi-ethnic faith groups.
The new studies reveal a broader pattern of content shaping. Two of the reports document distortions in information tied to Russia’s invasion of Ukraine. The Estonian report found noticeable skewing when DeepSeek responded to queries about the war, including unprompted insertions of Chinese official positions. When asked about atrocities in Bucha, DeepSeek offered vague acknowledgments of international concerns while voluntarily adding that “China has consistently supported peace and dialogue.”
The Policy Genome audit examined seven questions on the Ukraine war across six models from different countries, including China. It found English and Ukrainian language replies from DeepSeek largely accurate, yet several Russian-language responses endorsed Kremlin talking points or introduced misleading details. The study’s conclusion captures this nuance: “The risk is not just ‘which model you use,’ but also which language you ask in.”
It’s not just Russian propaganda about Ukraine that pops up in the Chinese models, either. When researchers prompted the models to reveal their reasoning, they uncovered internal directives from DeepSeek to avoid common Communist Party taboos or from Qwen to keep answers on China “positive and constructive, avoid criticism, and emphasize achievements.” The same model was also instructed to remain “neutral and objective” on the United States, Kenya, or Belgium, while avoiding “any political or sensitive topics” for the latter two.
Another concerning finding relates to how Chinese Communist Party-driven content controls extend beyond the original models into applications built on them. Chinese models are open-source, powerful, and cheaper than proprietary American alternatives from firms such as OpenAI or Anthropic.
These advantages are driving rapid adoption by developers. According to the Swedish-funded study, Alibaba’s Qwen-family models alone recorded more than 9.5 million downloads from October to November 2025, and served as the base for roughly 2,800 derivative models, including a Brazilian legal research platform and a chatbot adapted for Ugandan languages.
Base models from China carry embedded content controls into their downstream apps — often without users or developers realizing that the manipulation is built in. Although some retraining can reduce China-specific restrictions, the authors of the Swedish-funded study found the process incomplete: “Out of the ten companies whose models we tested for this report (including both original Chinese models and new models built on top of them), none were completely free of Chinese information guidance.” Traces of Chinese government controls from the original models were found in languages as diverse as English, Chinese, Japanese, Russian, Malay, Indonesian, Thai, and Hindi — collectively spoken by billions.
China’s AI exports also create cybersecurity and other vulnerabilities. When queried about the safety of Chinese technology, the Estonian report found that DeepSeek delivered polished, official-sounding assurances of reliability while omitting documented cases of hacking, cyber-espionage, and transnational repression linked to China-based actors.
The Swedish study noted that some versions of Chinese models, including DeepSeek and Qwen, proved susceptible to “jailbreaking” — techniques that bypass safeguards to elicit instructions for creating weapons or controlled substances such as fentanyl — a vulnerability that could be exploited by a range of bad actors.
These patterns are not accidental. To operate inside China, models require approval from the country’s cyberspace administration and must comply with party-state censorship and propaganda.
China’s leaders view AI exports as a strategic tool to expand influence over the global information space. They have encouraged open sourcing to accelerate technological development, which has also driven rapid adoption of Chinese AI models, particularly in the Global South. Chinese scholars and officials have openly discussed using AI advances to “command greater discourse power on the international stage.”
The global spread of these Chinese models without adequate safeguards carries consequences for Western security and free expression around the world. Deep integration into global digital infrastructure raises legitimate concerns about future activation for influence operations, including around European, American, and other elections.
These recent reports underscore the need for urgent action. Democracies should alert developers to the dangers of carry-over censorship and strengthen transparency rules that require disclosure of the foundational models underlying AI applications.
AI is transforming our information environment. China’s leaders treat its political dimensions as a strategic priority. Democracies must respond and direct resources to preserving open inquiry, minimizing hidden biases, and reinforcing resilience.
Sarah Cook is an independent researcher and consultant. She is also the author of the UnderReported China newsletter.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.