This news brought to you by: INTER PRESS SERVICE
By CIVICUS | 18.Mar.26
CHINA: ‘The State Is Using Generative AI to Engineer Reality Through Informational Gaslighting’

Mar 18 2026 (IPS) -  
CIVICUS discusses China’s tech-enabled repression with Fergus Ryan, a Senior Analyst at the Australian Strategic Policy Institute (ASPI), where he specialises in how the Chinese Communist Party shapes global information environments through censorship, propaganda and platform governance. His research includes a major study on China’s AI ecosystem and its human rights impacts, as well as investigations into China’s use of foreign influencers.


Fergus Ryan

China’s authoritarian government is deploying AI at scale to censor, control and monitor its population. As these tools grow more sophisticated and are exported abroad, the implications for civic space extend far beyond China’s borders.

What AI systems is China developing?

Based on our research, China is rapidly developing a multi-layered AI ecosystem designed to expand state control.

Tech giants are building multimodal large language models (LLMs) such as Alibaba’s Qwen and Baidu’s Ernie Bot, which censor and reshape descriptions of politically sensitive images. Hardware companies including Dahua, Hikvision and SenseTime supply the camera networks that feed into these systems.

The state is building what amounts to an AI-driven criminal justice pipeline. This includes City Brain operations centres, such as the one in Shanghai’s Pudong district, which process massive volumes of surveillance data, as well as the 206 System, developed by iFlyTek, which analyses evidence and recommends criminal sentences. Inside prisons, AI monitors inmates’ facial expressions and tracks their emotions.

AI-enabled satellites such as the Xinjiang Jiaotong-01 allow autonomous real-time tracking over politically sensitive regions. Additionally, AI-enabled fishing platforms such as Sea Eagle expand economic extraction in the exclusive economic zones of countries including Mauritania and Vanuatu, displacing artisanal fishing communities.

How does China use AI for censorship and policing?

China relies on a hybrid model of censorship that fuses the speed of AI with human political judgement. The government requires companies to self-censor, creating a commercial market for AI moderation tools. Tech giants such as Baidu and Tencent have industrialised this process: systems automatically scan images, text and videos to detect content deemed to be risky in real time, while human reviewers handle nuanced or coded speech.

In policing, City Brains ingest data from millions of cameras, drones and Internet of Things sensors and use AI to identify suspects, track vehicles and predict unrest before it happens. In Xinjiang, the Integrated Joint Operations Platform aggregates data from cameras, phone scanners and informants to generate risk scores for individuals, enabling pre-emptive detention based on behavioural patterns rather than specific crimes.

On platforms such as Douyin, the state does not just delete content; it algorithmically suppresses dissent while amplifying ‘positive energy’. AI links surveillance data directly to narrative control and police action.

What are the human rights impacts?

These AI systems erode the rights to freedom of expression, privacy and a fair trial.

Historically, online censorship meant deleting a post. Today, generative AI engages in ‘informational gaslighting’. When ASPI researchers showed an Alibaba LLM a photograph of a protest against human rights violations in Xinjiang, the AI described it as ‘individuals in a public setting holding signs with incorrect statements’ based on ‘prejudice and lies’. The technology subtly engineers reality, preventing users from accessing objective historical truths.

AI also undermines the right to a fair trial. In courts that lack judicial independence, AI systems that recommend sentences or predict recidivism act as a black box that defence lawyers cannot scrutinise.

Pervasive surveillance changes behaviour even when not actively used, so its chilling effect may be as significant as direct deployment. Knowing their conversations may be monitored, people self-censor online and in private messaging. Emotion recognition in prisons takes this further: people can theoretically be flagged for their internal states of mind. It’s not just actions that are punished, but also thoughts.

Which groups are most affected?

While AI-enabled surveillance affects all people, ethnic minorities such as Koreans, Mongolians, Tibetans and Uyghurs are disproportionately targeted.

Mainstream LLMs are trained primarily in Mandarin, leaving little commercial incentive to develop AI for minority languages. The Chinese state, however, views those languages as a security vulnerability. State-funded institutions, including the National Key Laboratory at Minzu University, are building LLMs in minority languages, not for cultural preservation, but to power public-opinion control and prevention platforms. These scan text, audio and video in Tibetan and Uyghur to detect cultural advocacy, dissent or religious activity.

Feminist activists, human rights lawyers — particularly since the 709 crackdown in 2015 — labour activists and religious minorities including Falun Gong practitioners face disproportionate targeting. Chinese models consistently adopt state-aligned narratives about such groups, labelling Falun Gong a cult and avoiding human rights framing. Since 2020, Hong Kongers have also been subject to National Security Law surveillance using many of the same tools deployed on the mainland, a reminder that this infrastructure can be rapidly extended.

How can activists in China protect themselves?

Protecting oneself inside China is increasingly difficult. AI leaves very few blind spots. But the system is not perfectly omniscient.

Activists have historically relied on coded speech, euphemisms and satire, the classic example being the use of ‘Winnie the Pooh’ to refer to President Xi Jinping. Because AI struggles with cultural nuance and evolving memes, new linguistic workarounds can temporarily bypass automated filters. But this is a relentless game of Whac-a-Mole: Chinese tech companies employ thousands of human content reviewers whose only job is to catch new memes and feed them back into the AI.

The most practical steps are to use VPNs to access blocked platforms, secure communications apps such as Signal and separate devices for sensitive work. None of these is foolproof: VPN use is technically illegal and increasingly detected, and Signal can only be accessed via a VPN. It helps to keep a minimal digital footprint and communicate face-to-face on sensitive matters. For activists in Xinjiang, however, surveillance is so pervasive that individual precautions offer little protection. Strong international networks and rigorous documentation practices are essential.

Is China exporting these technologies?

China is the world’s largest exporter of AI-powered surveillance technology, marketing these systems globally, particularly to the global south.

The Chinese state is purposefully expanding its minority-language public-opinion monitoring software throughout Belt and Road Initiative countries, effectively extending its censorship apparatus to monitor Tibetan and Uyghur diaspora communities abroad. Chinese companies including Dahua, Hikvision, Huawei and ZTE have deployed surveillance and ‘safe city’ systems across over 100 countries, with Saudi Arabia and the United Arab Emirates among the most significant recipients. Critically, these companies operate under China’s 2017 National Intelligence Law, which requires cooperation with state intelligence, meaning data flowing through these systems could be accessible to Beijing as well as to purchasing governments.

China is also exporting its governance model through the open-source release of its LLMs, embedding Chinese censorship norms into foundational infrastructure used by developers worldwide.

What should the international community do?

The international community must recognise that countering this requires regulatory pushback.

First, democratic states should set minimum transparency standards for public procurement. This means refusing to purchase AI models that conceal political or historical censorship and mandating that providers publish a ‘moderation log’ with refusal reason codes so users know when content is restricted for political reasons.

Second, states should enact ‘safe-harbour laws’ to protect civil society organisations, journalists and researchers who audit AI models for hidden censorship. Currently, doing so can breach corporate terms of service.

Third, strict export controls should block the transfer of repression-enabling technologies to authoritarian regimes, while companies providing public-opinion management services should be excluded from democratic markets. Existing targeted sanctions on companies such as Dahua and Hikvision for their role in Xinjiang should be enforced more rigorously.

Finally, the international community must recognise that Chinese surveillance extends beyond China’s borders. Spyware targeting Tibetan and Uyghur activists in exile is well-documented, as is pressure on family members remaining in China. Rigorous documentation by international civil society remains essential for building the evidentiary record for future accountability.

CIVICUS interviews a wide range of civil society activists, experts and leaders to gather diverse perspectives on civil society action and current issues for publication on its CIVICUS Lens platform. The views expressed in interviews are the interviewees’ and do not necessarily reflect those of CIVICUS. Publication does not imply endorsement of interviewees or the organisations they represent.


SEE ALSO
Technology: innovation without accountability CIVICUS | 2026 State of Civil Society Report
The silencing of Hong Kong CIVICUS Lens 25.Jun.2025
The long reach of authoritarianism CIVICUS Lens 20.Mar.2024

