
Chinese artificial intelligence (AI) chatbots often refuse to answer political questions or echo official state narratives, suggesting that they may be censored, according to a new study.

The study, published in the journal PNAS Nexus, compared how leading AI chatbots in China, including BaiChuan, DeepSeek, and ChatGLM, responded to more than 100 questions about state politics, benchmarking them against models developed outside of China.

Researchers flagged responses as potentially censored if a chatbot declined to answer or provided inaccurate information.

Questions about the status of Taiwan, ethnic minorities, or well-known pro-democracy activists triggered refusals, deflections, or government talking points from the Chinese models, the study noted.

“Our findings have implications for how censorship by China-based LLMs may shape users’ access to information and their very awareness of being censored,” the researchers said, noting that China is one of the few countries besides the United States capable of building foundational AI models.

When the Chinese models did respond to the prompts, their answers were shorter and less accurate, often omitting key information or challenging the premise of the question.

BaiChuan and ChatGLM had the lowest inaccuracy rates among Chinese models at 8 percent, while DeepSeek reached 22 percent, which is more than double the 10 percent ceiling seen in non-Chinese models.

AI censorship could ‘quietly shape decision-making’

In one example, a Chinese model asked about internet censorship failed to mention the country’s “Great Firewall”, a system which Stanford University describes as a state-controlled internet monitoring and censorship programme that regulates what can and cannot be seen online. Popular American sites such as Google, Facebook and Yahoo are blocked in China through the initiative.

Instead of mentioning the firewall, the chatbots said that authorities “manage the internet in accordance with the law.”

The study warns that this type of censorship could be harder for users to detect, because chatbots often apologise or offer a justification for not answering directly. It’s a subtle approach that could “quietly shape perceptions, decision-making and behaviours,” the study reads.

Chinese rules introduced in 2023 require AI companies to uphold “core socialist values” and bar them from generating content that “incites subversion of national sovereignty or the overturn of the socialist system … or harming the nation’s image.”

Companies whose services could enable “social mobilisation” must undergo security assessments and file their algorithms with the Cyberspace Administration of China (CAC), the rules added.

Researchers said the regulations “have the potential to influence the outputs of large language models that are developed within China.”

However, they cautioned that not all differences in how the chatbots respond come from state pressure.

Chinese models may also be trained on datasets that reflect “China’s cultural, social, and linguistic context” and that might not be used to train models developed outside China, the study notes.
