Articles

Censorship or rumour management? How Weibo constructs “truth” around crisis events

Examining the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

On 12 August 2015, a series of explosions killed 173 people and injured hundreds at a container storage station at the Port of Tianjin. Tianjin Port by Matthias Catón (Flickr CC BY-NC-ND 2.0).

As social media become increasingly important as a source of news and information for citizens, there is growing concern over the impact of social media platforms on information quality—as evidenced by the furore over “fake news”. Driven in part by the apparently substantial impact of social media on the outcomes of Brexit and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, including recent efforts to improve fact-checking.

There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat it as a technical matter, and little attention has been paid to the social and political aspects of rumour. In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts”, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. In China, Internet companies are licensed by the state, and their businesses must therefore be compliant with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given that most Chinese citizens rely heavily on Chinese social media services as alternative information sources, or as grassroots “truth”, these anti-rumour policies have raised widespread concern about the implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content, and any associated impact on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both exploring mechanisms to try to combat it. On the flip-side we have state-sponsored propaganda now suddenly very visible (e.g. Russia), in an attempt to reduce trust, destabilise institutions, and inject rumour into the public sphere. What is the difference between rumour, propaganda and fake news; and how do they play out online in China?

Jing / Chung-hong / King-wa: The definition of rumour is very fuzzy, and it is very common to see ‘rumour’ being used interchangeably with other related concepts. Our study drew the definition of rumour from the fields of sociology and social psychology, wherein this concept has been most thoroughly articulated.

Rumour is a form of unverified information circulated in uncertain circumstances. The major difference between rumour and propaganda lies in their functions. Rumour sharing is a social practice of sense-making: it helps people make meaning of an uncertain situation. In contrast, the concept of propaganda is more political. Propaganda is a form of information strategically used to mobilise political support for a political force.

Fake news is a new buzzword that works closely with another buzz term, post-truth. There is no established and widely accepted definition of fake news, and its true meaning(s) should be understood with respect to specific contexts. For example, Donald Trump’s use of “fake news” in his tweets aims to attack a few media outlets that have reported unfavourable stories about him, whereas ungrounded and speculative “fake news” is also created and widely circulated by the public on social media. If we simply understand fake news as a form of fabricated news, I would argue that fake news can operate as rumour, as propaganda, or as both.

It is worth pointing out that, in the Chinese context, rumour may not always be fake, and propaganda is not necessarily bad. As pointed out by different scholars, rumour can function as a social protest against the authoritarian state’s information control. And in the Chinese language, the Mandarin term Xuanchuan (‘propaganda’) does not always carry the same negative connotation as its English counterpart.

Ed.: You mention previous research finding that the “Chinese government’s propaganda and censorship policies were mainly used by the authoritarian regime to prevent collective action and to maintain social stability” — is that what you found as well? i.e. that criticism of the Government is tolerated, but not organised protest?

Jing / Chung-hong / King-wa: This study examined rumour communication around the 2015 Tianjin blasts, so our analyses did not directly address Weibo users’ attempts to organise protest. However, regarding the Chinese government’s response to Weibo users’ criticism of its handling of the crisis, our study suggested that some criticism of the government was tolerated. For example, messages about local government officials’ mishandling of the crisis were not heavily censored. Instead, what we found seems to confirm that social stability is of paramount importance to the ruling regime, and that online censorship was used as a means to maintain it. This explains Weibo’s decision to silence discussion of the assault on a CNN reporter, the chaotic aftermath of the blasts, and the local media’s reluctance to broadcast the blasts.

Ed.: What are people’s responses to obvious government attempts to censor or head-off online rumour, e.g. by deleting posts or issuing statements? And are people generally supportive of efforts to have a “clean, rumour-free Internet”, or cynical about the ultimate intentions or effects of censorship?

Jing / Chung-hong / King-wa: From our time series analysis, we found different responses from netizens depending on the topic, but we could not find a consistent pattern of a chilling effect. Basically, Weibo’s rumour management strategies, whether deleting posts or refuting them, usually stimulated more public interest. At least as shown in our data, netizens were not supportive of those censorship efforts and somehow ended up posting more rumour messages as a counter-reaction.

Ed.: Is online rumour particularly a feature of contemporary Chinese society — or do you think that’s just a human thing (we’ve certainly seen lots of lying in the Brexit and Trump campaigns)? How might rumour relate more generally to levels of trust in institutions, and the presence of a strong, free press?

Jing / Chung-hong / King-wa: Online rumour is common in China, but it can also be pervasive in any country where the use of digital communication technologies is prevalent. Rumour sharing is a human thing, yes, you can say that. But it is more accurate to say it is a socially constructed thing. As mentioned earlier, rumour is a social practice of collective sense-making under uncertain circumstances.

Levels of public trust in governmental organisations and the media can directly affect rumour circulation and rumour-debunking efforts. When there is a lack of public trust in official sources of information, this opens up room for rumour circulation. Likewise, when the authorities have low credibility, official rumour-debunking efforts can backfire, because the public may think the authorities are trying to hide something. This might explain what we observed in our study.

Ed.: I guess we live in interesting times; Theresa May now wants to control the Internet, Trump is attacking the very institution of the press, social media companies are under pressure to accept responsibility for the content they host. What can we learn from the Chinese case, of a very sophisticated system focused on social control and stability?

Jing / Chung-hong / King-wa: The most important implication of this study is that even the most sophisticated rumour control mechanism can only be built on a good understanding of the social roots of rumour. As our study shows, without addressing the more fundamental social causes of rumour, rumour-debunking efforts can backfire.


Read the full article: Jing Zeng, Chung-hong Chan and King-wa Fu (2017) How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts. Policy & Internet 9 (3): 297–320. DOI: 10.1002/poi3.155

Jing Zeng, Chung-hong Chan and King-wa Fu were talking to blog editor David Sutcliffe.