Argument

China’s Censors Are Afraid of What Chatbots Might Say

Artificial intelligence development may get held up for political reasons.

By Nicholas Welch, an editor at ChinaTalk, and Jordan Schneider, the host of the ChinaTalk podcast.
An illustrative picture shows ChatGPT artificial intelligence software, which generates human-like conversation, on Feb. 3. Nicolas Maeterlinck/Belgamag/AFP via Getty Images

ChatGPT has made quite the stir in China: virtually every major tech company is keen on developing its own artificial intelligence chatbot. Baidu has announced plans to release its own version sometime next month. This newfound obsession is in line with paramount Chinese leader Xi Jinping’s strategic prioritization of AI development—dating back to at least 2017—in China’s race to become the world’s dominant AI player and ultimately a “science and technology superpower.” And while the development of large language model (LLM) bots such as ChatGPT is just one facet of the future of AI, LLMs will, as one leading AI scientist recently put it, “define artificial intelligence.” Indeed, the sudden popularity of ChatGPT has “upended the work of numerous groups inside [Google] to respond to the threat that ChatGPT poses”—a clarion indicator of the arguably outsized importance of LLMs.

Yet China’s aspirations to become a world-leading AI superpower are fast approaching a head-on collision with none other than its own censorship regime. The Chinese Communist Party (CCP) prioritizes controlling the information space over innovation and creativity, human or otherwise. That may dramatically hinder the development and rollout of LLMs, leaving China a step behind the West in the AI race.

According to a bombshell report from Nikkei Asia, Chinese regulators have instructed key Chinese tech companies not to offer ChatGPT services “amid growing alarm in Beijing over the AI-powered chatbot’s uncensored replies to user queries.” A cited justification, from state-sponsored newspaper China Daily, is that such chatbots “could provide a helping hand to the U.S. government in its spread of disinformation and its manipulation of global narratives for its own geopolitical interests.”

The fundamental problem is that plenty of speech is forbidden in China—and the political penalties for straying over the line are harsh. A chatbot that produces racist content or threatens to stalk a user makes for an embarrassing story in the United States; a chatbot that implies Taiwan is an independent country or says Tiananmen Square was a massacre can bring down the wrath of the CCP on its parent company.

Ensuring that LLMs never say anything disparaging about the CCP is a genuinely herculean, and perhaps impossible, task. As Yonadav Shavit, a computer science Ph.D. student at Harvard University, put it: “Getting a chatbot to follow the rules 90% of the time is fairly easy. But getting it to follow the rules 99.99% of the time is a major unsolved research problem.” LLM output is unpredictable, and the models learn from natural language produced by humans, which is of course subject to inference, bias, and inaccuracy. Thus, users can with little effort “hypnotize or trick” models into producing outputs the developer fastidiously tries to prevent. Indeed, Shavit reminded us that, “so far, when clever users have actively tried to get a model to break its own rules, they’ve always succeeded.”

“Getting language models to consistently follow any rules at all, even simple rules like ‘never threaten your user,’ is the key research problem of the next generation of AI,” Shavit said.
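
To see why the gap between Shavit’s two figures matters at consumer scale, consider a back-of-the-envelope calculation (ours, for illustration, not Shavit’s): even a model that breaks its rules on just 0.01 percent of queries is all but certain to slip at least once across a million conversations.

```python
# Back-of-the-envelope illustration: if a chatbot breaks its rules
# independently on each query with probability p, the chance of at
# least one violation across n queries is 1 - (1 - p)^n.
def p_any_violation(p_per_query: float, n_queries: int) -> float:
    return 1 - (1 - p_per_query) ** n_queries

for p in (0.10, 0.0001):        # 90% vs. 99.99% rule-following
    for n in (100, 1_000_000):  # a small pilot vs. consumer scale
        print(f"p={p}, n={n:>9,}: "
              f"P(at least one slip) = {p_any_violation(p, n):.4f}")
```

Against a regulator’s standard of zero tolerance, in other words, even 99.99 percent compliance amounts to near-certain failure once a chatbot reaches a mass audience.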

What are Chinese engineers to do, then? The Cyberspace Administration of China (CAC) won’t take it easy on a Chinese tech company just because it’s hard to control its chatbot. One potential solution would be to prevent the model from learning about, say, the 1989 Tiananmen Square massacre. But as Shavit observed, “No one really knows how to get a model trained on most of the internet to not learn basic facts.”

Another option would be for the LLM to spit out a form response like, “As a Baidu chatbot, I cannot …” whenever there’s a chance that criticism of the CCP would follow—similar to how ChatGPT responds to emotive or erotic requests. But again, given the stochastic nature of chatbots, that option doesn’t guarantee that speech politically objectionable to the CCP could never arise.
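
A minimal sketch of what such a form-response guardrail might look like—with an entirely hypothetical blocklist and function names—shows where the approach breaks down: a string filter only catches phrasings its authors anticipated, while a model can surface the same content in endless paraphrases.

```python
# Minimal sketch of a form-response guardrail of the kind described above.
# The blocklist, names, and canned reply are hypothetical illustrations;
# a production system would be far more elaborate.
BLOCKED_TOPICS = ("tiananmen", "taiwan independence")  # hypothetical list

FORM_RESPONSE = "As a chatbot, I cannot discuss that topic."

def guard(draft_output: str) -> str:
    """Swap in a canned refusal if the model's draft touches a blocked topic."""
    lowered = draft_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return FORM_RESPONSE
    return draft_output

# Caught: the draft names a blocked topic verbatim.
print(guard("In 1989, Tiananmen Square was the site of..."))
# Missed: a paraphrase sails straight past the filter, which is why
# string matching cannot certify what a model will never say.
print(guard("The June 4 incident remains a sensitive subject..."))
```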

In that case, the de facto method by which Chinese AI companies compete with one another would involve feeding clever and suggestive prompts to an opponent’s AI chatbot, waiting until it produces material critical of the CCP, and forwarding a screenshot to the CAC. That’s what happened with Bluegogo, a bike-share company: in early June 2017, the app featured a promotion using tank icons around Tiananmen Square, and the $140 million company folded immediately. Although most guessed that Bluegogo had been hacked by a competitor, to the CCP that defense was clearly irrelevant. This one-off example may not capture the complexities of an eventual Chinese chatbot market—one could imagine, for example, the CCP leveraging LLMs to project its influence globally, as it already does with TikTok. But as Mercatus Center research fellow Matthew Mittelsteadt wrote, the fall of Bluegogo demonstrated quite well the CCP’s “regulatory brittleness,” which would need to change if China wants a “thriving generative AI industry.”

For what it’s worth, former Assistant Secretary for Policy at the U.S. Department of Homeland Security Stewart Baker on Feb. 20 publicized a lavishly generous offer, given his current salary at Steptoe & Johnson LLP: “The person who gets Baidu’s AI to say the rudest possible thing about Xi Jinping or the Chinese Communist Party will get a cash prize—and, if they are a Chinese national, I will personally represent them in their asylum filing in the United States. You’ll get a green card, you’ll get U.S. citizenship, and you’ll get a cash prize if you win this contest.”

Chinese tech companies have received, to say the least, mixed signals from the top. On one hand, government officials express routine confidence in China’s inexorable surge in AI development and the important role that LLMs will play. Chen Jiachang, the director-general of the Department of High and New Technology of the Ministry of Science and Technology, said at a Feb. 24 press conference, “the Ministry of Science and Technology is committed to supporting AI as a strategic emerging industry and a key driver of economic growth,” and added that one of the “important directions for the development of AI is human-machine dialogue based on natural language understanding.”

Wang Zhigang, the minister of science and technology, followed up: “We have taken corresponding measures in terms of ethics for any new technology, including AI technology, to ensure that the development of science and technology is beneficial and poses no harm and to leverage its benefits better.” And Yi Weidong, an American-educated professor at the University of the Chinese Academy of Sciences, wrote, “We are confident that China has the ability to surpass the world’s advanced level in the field of artificial intelligence applications.”

But the government’s own censoriousness over ChatGPT already suggests serious problems. So far regulators have focused on foreign products. In a recent writeup, Zhou Ting (dean of the School of Government and Public Affairs at the Communication University of China) and Pu Cheng (a Ph.D. student there) write that the dangers of AI chatbots include “becoming a tool in cognitive warfare,” prolonging international conflicts, damaging cybersecurity, and exacerbating global digital inequality. For example, Zhou and Pu cite an unverified ChatGPT conversation in which the bot justified the United States shooting down a hypothetical Chinese civilian balloon floating over U.S. airspace, yet answered that China should not shoot down such a balloon originating from the United States.

Interestingly, those at the top haven’t explicitly mentioned their censorship concerns or demands, instead relying on traditional and wholly expected anti-Western propaganda narratives. But their angst is felt nonetheless, and it’s not hard to see where it fundamentally comes from: Xi has no tolerance for dissent, and that fear leads in a straight line to regulatory reticence in China’s AI rollout.

And now is not a good time to send mixed signals about—let alone put the brakes on—the rollout of potentially game-changing technology. After all, one reason underscoring Xi’s goal to transform China into a science and technology superpower is that such an achievement would alleviate some of the impending perils of demographic trends and slowing growth that may catch China in the dreaded middle-income trap.

To weather these challenges in the long run—and to fully embrace revolutionary technology of all stripes—the CCP needs an economic system able to stomach creative destruction without falling apart. The absence of such a system would be precarious: If Xi grows worried that, for instance, AI-powered automation will displace too many jobs and thus metastasize the risk of social unrest, he would have to make a hard choice between staying competitive in the tech race and mitigating short-term unrest.

But there he would have to pick his poison, as either option, ironically, would result in increased political insecurity. No doubt Xi recalls that the rapid economic changes of the 1980s, including high inflation and failed price reforms, contributed to the unrest which culminated in 1989 at Tiananmen Square.

To be sure, even in democracies with liberal protections of speech and expression, AI regulations are still very much a work in progress. In the United States, for example, University of North Carolina professor Matt Perault noted that courts would likely find ChatGPT and other LLMs to be “information content providers”—i.e., content creators—because they “develop,” at least in part, the information provided to a content host. If that happens, ChatGPT won’t qualify for Section 230 immunity (given to online platforms under U.S. law to prevent them from being held responsible for content provided by third parties) and could thereby be held legally liable for the content it outputs. Due to the risk of costly legal battles, Perault wrote, companies will “narrow the scope and scale of [LLM] deployment dramatically” and will “inevitably censor legitimate speech as well.” Moreover, while Perault suggests several common-sense proposals to avert such an outcome—such as adding a time-delay LLM carveout to Section 230—he admits none of them is “likely to be politically feasible” in today’s U.S. Congress.

Recently, Brookings Institution fellow Alex Engler discussed a wide gamut of potential AI regulations the United States and EU may consider. They include watermarking (hidden patterns in AI-generated content to distinguish between AI- and human-generated content), “model cards” (disclosures on how an AI model performs in various conditions), human review of AI-generated content, and information-sharing requirements. Engler, however, repeatedly observed that such regulations are insufficient, no panacea, and in any case “raise many key questions,” such as in the realm of enforcement and social impact.
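
To make the watermarking idea concrete, here is a toy sketch of one approach from the research literature (a statistical “green list” watermark; this is our illustration, not a scheme Engler describes): the generator nudges its sampling toward a pseudorandomly chosen half of the vocabulary, and a detector later checks whether a text contains suspiciously many of those favored tokens.

```python
import hashlib
import math

# Toy sketch of a statistical "green list" watermark detector; a simplified
# illustration of one research proposal, not any vendor's actual scheme.

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign roughly half the vocabulary to a 'green list'
    keyed on the preceding token; a watermarking generator would bias its
    sampling toward green tokens."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    """How far the observed green-token count deviates from the ~50%
    expected in unwatermarked text; large positive values suggest the
    text was generated with the green-list bias."""
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Ordinary human text should score near zero.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split()))
```

Even this sketch hints at the caveats Engler raises: a determined user can paraphrase or lightly edit the output until the statistical signal washes out, which is one reason enforcement remains an open question.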

Even so, these Western regulatory hurdles pale in comparison to what Chinese chatbot-developing companies will be up against—if for no other reason than Chinese regulators will require AI companies to do the impossible: guarantee, somehow, that their probabilistic LLM never says anything bad about the CCP. And given that pre-chatbot AI was already testing the limits of the CCP’s human censors—overwhelmed censors are one potential, albeit partial, explanation for how the white paper protests of late 2022 swelled into a nationwide movement—the CCP is all the more likely to fear what generative AI may do to its surveillance complex.

If LLMs end up being a genuinely transformative technology rather than an amusing online plaything, whichever country discovers how to best harness their power will come out on top. Doing so will take good data, efficient algorithms, top talent, and access to computing power—but it will also take institutions that can usher in effective productivity changes. Particularly if AI tech diffuses relatively smoothly across borders, it is the regulatory response which will determine how governments and firms wield its power.

Nicholas Welch is an editor at ChinaTalk.

Jordan Schneider is the host of the ChinaTalk podcast.

