
DeepSeek caught serving dodgy code to China's ‘enemies’

18 September 2025


Falun Gong requests get broken software while everyone else gets the good stuff

A new report claims China’s flagship AI outfit DeepSeek is sabotaging users if they happen to identify with groups Beijing doesn’t like.

According to the Washington Post, security firm CrowdStrike ran a test that showed DeepSeek produced weaker or even insecure code when requests mentioned Falun Gong, Tibet, or Taiwan. The same tool, when asked for help in more neutral contexts, spat out cleaner and more reliable software.

CrowdStrike said the riskiest area was industrial control system code, where about 22.8 per cent of DeepSeek’s answers contained flaws even in neutral requests. Frame the same code as being for the Islamic State and the figure shot up to 42.1 per cent. Mention Falun Gong and DeepSeek either delivered rubbish or refused outright almost half the time. To be fair, Western models also refuse to help terrorists, but they have no issue with Falun Gong requests, CrowdStrike pointed out.

The outfit's senior vice president Adam Meyers suggested there are three plausible reasons why the Chinese model misbehaves. It could be direct sabotage under government orders, skewed training data with weaker examples from politically sensitive regions, or the AI itself “learning” to spit out poor code if it infers the user is from a rebellious area.

Earlier research by NewsGuard showed DeepSeek parrots Chinese government talking points on sensitive issues, even when they’re factually wrong. But this is the first evidence that the model may be deliberately undermining software quality for political reasons.

Given DeepSeek has a hugely popular open-source version, the idea that it might slip poison into code depending on who asks is bound to rattle developers outside China.
