The global disinformation order
China’s global disinformation campaign—which recently painted Hong Kong’s democracy advocates as violent and unpopular radicals—should cause concern for democracies around the world.
In addition to highlighting the many ways in which technology can be used to suppress freedom of speech, China’s rise in the global disinformation order points to a deeper, more insidious trend: governments increasingly see social media as a powerful tool for manipulating public opinion, both domestically and abroad.
At Oxford University, we have tracked how governments use social media to spread computational propaganda. Over the past three years, we have found evidence of disinformation campaigns run by state actors in more than 70 countries around the world.
Many of these countries are authoritarian regimes that use networks of fake accounts to spread pro-government propaganda, drown out opposing voices, and threaten activists and journalists with hate and violence. But the case of China is particularly worrying.
The Chinese government has a long history of censorship and information control. Even before ‘fake news’ was weaponised by US President Donald Trump, Chinese officials used the term to crack down on political dissent and to discredit opinions that challenged the government’s position.
And it is well known that fake online commentators employed by the Chinese government fabricate hundreds of millions of posts on China’s domestic platforms every year to divert criticism away from the state. Until recently, however, China’s disinformation efforts had largely remained within the boundaries of its Great Firewall.
Before the Hong Kong protests, Chinese foreign influence operations were small in scale and relatively unsophisticated. We know that crudely automated accounts were used to attack political figures in Taiwan and that social media disinformation campaigns targeted Falun Gong and Tibetans in exile.
But now—with Facebook, Twitter and Google taking down hundreds of accounts, pages and channels operated by the Chinese government—China has demonstrated its capability and willingness to target global audiences with disinformation.
China’s quick, calculated and deliberate rise in the global disinformation order has two implications for democratic societies.
First, of the numerous countries experimenting with computational propaganda, China is the one poised to become the next disinformation superpower. It has quickly become a global leader in artificial intelligence.
Combined with the vast amounts of data collected from China’s social credit system and critical infrastructure rollouts around the world, AI will boost the state’s ability to target, tailor and amplify disinformation campaigns.
Democracies already have to contend with disinformation from domestic sources, as well as with a growing number of sophisticated actors using social media for foreign influence operations. With its technological prowess, China is now a significant player in the disinformation realm.
Second, China’s disinformation campaign against protesters in Hong Kong shows how little progress democracies have made in responding. We remain beholden to social media firms to tell us when there is trouble, and not all governments are committed to combating disinformation.
Major technology firms don’t always stand up to China, so it is a great sign that they have exposed such interference. But independent researchers are still on the outside, unable to look under the bonnet because the most valuable data about public life remains in private hands.
The question of public funding for better election administration is on hold in the US. Highly trusted public broadcasters in many countries, which effectively inoculate the public against a lot of misinformation, have shrinking budgets.
There are a host of good ideas for using social media to strengthen democracy. Facilitating ‘data donation’ by users, contributing data to public archives, and setting up industry-funded but independent auditing bodies would all help put civic life back into social media.
But until the technology giants become better corporate citizens and politicians commit to securing elections, voters in democracies will remain soft targets for computational propaganda—from a growing number of countries with increasingly sophisticated technologies.
This article was written by Philip N. Howard and Samantha Bradshaw and was published by The Strategist. Howard is a professor of sociology, information and international affairs, the director of the Oxford Internet Institute and of its computational propaganda research project, and the author, most recently, of Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up; his next book, Lie Machines, will be published in 2020 by Yale University Press. Bradshaw is a PhD candidate at the Oxford Internet Institute and a researcher on the computational propaganda project.