Censorship or rumour management? How Weibo constructs “truth” around crisis events

As social media become increasingly important as a source of news and information for citizens, there is growing concern over the impact of social media platforms on information quality, as evidenced by the furore over “fake news”. Driven in part by the apparently substantial influence of social media on the outcomes of the Brexit referendum and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, including recent efforts to improve fact-checking.

There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat rumour management as a technical problem, paying little attention to the social and political dimensions of rumour. In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts”, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. Chinese Internet companies are licensed by the state, and their businesses must therefore comply with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given that most Chinese citizens rely heavily on Chinese social media services as alternative information sources or as grassroots “truth”, these anti-rumour policies have raised widespread concern about their implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content and what impact this has on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both exploring mechanisms to try to combat it. On the flip-side we have state-sponsored propaganda now suddenly very visible (e.g. Russia), in an attempt to reduce trust, destabilise institutions, and inject rumour into the public sphere. What is the difference between rumour, propaganda and fake news; and how do they play out online in China?

Jing / Chung-hong / King-wa: The definition of rumour is very fuzzy, and it is very common to see ‘rumour’ being used interchangeably with other related concepts. Our study drew the definition of rumour from the fields of sociology and social psychology, wherein this concept has been most thoroughly articulated.

Rumour is a form of unverified information circulated in uncertain circumstances. The major difference between rumour and propaganda lies in their functions. Rumour sharing is a social practice of sense-making: it helps people make meaning of an uncertain situation. Propaganda, in contrast, is a more political concept: it is information used strategically to mobilise support for a political force.

Fake news is a new buzzword and works closely with another buzz term, post-truth. There is no established and widely accepted definition of fake news, and its meaning should be understood with respect to specific contexts. For example, Donald Trump uses “fake news” in his tweets to attack media outlets that have reported unfavourable stories about him, whereas ungrounded and speculative “fake news” is also created and circulated widely by the public on social media. If we simply understand fake news as a form of fabricated news, I would argue that fake news can operate as rumour, as propaganda, or as both.

It is worth pointing out that, in the Chinese context, rumour may not always be fake, and propaganda is not necessarily bad. As various scholars have pointed out, rumour can function as a form of social protest against the authoritarian state’s information control. And in the Chinese language, the Mandarin term Xuanchuan (‘propaganda’) does not always carry the same negative connotation as its English counterpart.

Ed.: You mention previous research finding that the “Chinese government’s propaganda and censorship policies were mainly used by the authoritarian regime to prevent collective action and to maintain social stability” — is that what you found as well? i.e. that criticism of the Government is tolerated, but not organised protest?

Jing / Chung-hong / King-wa: This study examined rumour communication around the 2015 Tianjin blasts, so our analyses did not directly address Weibo users’ attempts to organise protest. However, regarding the Chinese government’s response to Weibo users’ criticism of its handling of the crisis, our study suggests that some criticism of the government was tolerated. For example, messages about local government officials’ mishandling of the crisis were not heavily censored. Rather, what we found seems to confirm that social stability is of paramount importance for the ruling regime, and that online censorship was used as a means of maintaining it. This explains Weibo’s decision to silence discussion of the assault on a CNN reporter, the chaotic aftermath of the blasts, and the local media’s reluctance to broadcast the blasts.

Ed.: What are people’s responses to obvious government attempts to censor or head-off online rumour, e.g. by deleting posts or issuing statements? And are people generally supportive of efforts to have a “clean, rumour-free Internet”, or cynical about the ultimate intentions or effects of censorship?

Jing / Chung-hong / King-wa: From our time series analysis, we found that netizens responded differently depending on the topic, but we could not find a consistent pattern of a chilling effect. Basically, Weibo’s rumour management strategies, whether deleting posts or refuting them, usually stimulated more public interest. At least in our data, netizens were not supportive of these censorship efforts and tended to post more rumour-related messages as a counter-reaction.

Ed.: Is online rumour particularly a feature of contemporary Chinese society — or do you think that’s just a human thing (we’ve certainly seen lots of lying in the Brexit and Trump campaigns)? How might rumour relate more generally to levels of trust in institutions, and the presence of a strong, free press?

Jing / Chung-hong / King-wa: Online rumour is common in China, but it can be pervasive in any country where digital communication technologies are widely used. Rumour sharing is a human thing, yes, you can say that. But it is more accurate to say that it is a socially constructed thing. As mentioned earlier, rumour is a social practice of collective sense-making under uncertain circumstances.

Levels of public trust in governmental organisations and the media directly affect both rumour circulation and rumour-debunking efforts. A lack of public trust in official sources of information opens up room for rumour to circulate. Likewise, when the authorities have low credibility, official rumour-debunking efforts can backfire, because the public may think the authorities are trying to hide something. This might explain what we observed in our study.

Ed.: I guess we live in interesting times; Theresa May now wants to control the Internet, Trump is attacking the very institution of the press, social media companies are under pressure to accept responsibility for the content they host. What can we learn from the Chinese case, of a very sophisticated system focused on social control and stability?

Jing / Chung-hong / King-wa: The most important implication of this study is that even the most sophisticated rumour control mechanism can only be built on a good understanding of the social roots of rumour. As our study shows, without addressing the more fundamental social causes of rumour, rumour-debunking efforts can backfire.


Read the full article: Jing Zeng, Chung-hong Chan and King-wa Fu (2017) How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts. Policy & Internet 9 (3) 297-320. DOI: 10.1002/poi3.155

Jing Zeng, Chung-hong Chan and King-wa Fu were talking to blog editor David Sutcliffe.

Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom?

‘Code’ or ‘law’? Image from an Ushahidi development meetup by afropicmusing.

In ‘Code and Other Laws of Cyberspace’, Lawrence Lessig (2006) writes that computer code (or what he calls ‘West Coast code’) can have the same regulatory effect as the laws and legal code developed in Washington D.C., so-called ‘East Coast code’. Computer code shapes a person’s behaviour by virtue of its essentially restrictive architecture: on some websites you must enter a password before you gain access; on others you can enter unidentified. The problem with computer code, Lessig argues, is that it is invisible, and that it makes it easy to regulate people’s behaviour directly and often without recourse.

For example, fair use provisions in US copyright law enable certain uses of copyrighted works, such as copying for research or teaching purposes. However the architecture of many online publishing systems heavily regulates what one can do with an e-book: how many times it can be transferred to another device, how many times it can be printed, whether it can be moved to a different format – activities that have been unregulated until now, or that are enabled by the law but effectively ‘closed off’ by code. In this case code works to reshape behaviour, upsetting the balance between the rights of copyright holders and the rights of the public to access works to support values like education and innovation.

Working as an ethnographic researcher for Ushahidi, the non-profit technology company that makes tools for people to crowdsource crisis information, has made me acutely aware of the many ways in which ‘code’ can become ‘law’. During my time at Ushahidi, I studied the practices that people were using to verify reports by people affected by a variety of events – from earthquakes to elections, from floods to bomb blasts. I then compared these processes with those followed by Wikipedians when editing articles about breaking news events. In order to understand how to best design architecture to enable particular behaviour, it becomes important to understand how such behaviour actually occurs in practice.

In addition to the impact of code on the behaviour of users, norms, the market and laws also play a role. By interviewing both the users and designers of crowdsourcing tools I soon realized that ‘human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process. It involves negotiation between different narratives of what happened and why; identifying the sources of information and assessing their reputation among groups who are considered important users of that information; and identifying gatekeeping and fact checking processes where the source is a group or institution, amongst other factors.

One disjuncture between verification ‘practice’ and the architecture of Ushahidi’s verification code was that verification categories were switched on by default, whereas some users of the platform wanted the verification process to be invisible to external users. Items would show up as ‘unverified’ unless they had been explicitly marked as ‘verified’, leaving readers unsure whether an item was unverified because the team had not yet checked it, or because it had been found to be inaccurate. Some user groups therefore wanted to be able to turn the feature off when they could not take responsibility for data verification. In the case of the Christchurch Recovery Map, set up by volunteers in the aftermath of the 2011 New Zealand earthquake, the government officials the volunteers were working with wanted to turn the feature off because they could not guarantee that reports had actually been verified, and displaying the category (as ‘unverified’ until ‘verified’) implied that they were engaged in some kind of verification process.
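To make that design tension concrete, here is a minimal, hypothetical sketch of how a report’s verification status might be modelled, separating ‘not yet checked’ from ‘checked and found inaccurate’ and adding the kind of deployment-level switch the Christchurch team wanted. It is not Ushahidi’s actual code; all class and field names are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Verification(Enum):
    """Three states avoid conflating 'not yet checked' with 'found inaccurate'."""
    UNCHECKED = "unchecked"
    VERIFIED = "verified"
    INACCURATE = "inaccurate"


@dataclass
class Report:
    text: str
    # The default is where 'code becomes law': every report starts life
    # carrying some label, whether or not anyone intends to verify it.
    status: Verification = Verification.UNCHECKED


@dataclass
class Deployment:
    name: str
    # A deployment-level switch lets teams that cannot take responsibility
    # for verification hide the label from external users altogether.
    show_verification: bool = True

    def public_label(self, report: Report):
        """Return the label shown to external users, or None to show nothing."""
        if not self.show_verification:
            return None  # no label shown, so no verification process is implied
        return report.status.value


if __name__ == "__main__":
    christchurch = Deployment("Christchurch Recovery Map", show_verification=False)
    kenya = Deployment("Kenya election monitoring", show_verification=True)
    report = Report("Water point reported dry near the stadium")
    print(christchurch.public_label(report))  # None: label suppressed
    print(kenya.public_label(report))         # 'unchecked': awaiting review
```

The point of the sketch is simply that whichever value sits in the default, and whether the label is shown at all, is a design decision taken by developers long before any particular crisis, which is precisely the sense in which ‘code’ acts as ‘law’.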

The existence of a default verification category affected the Christchurch Recovery Map group’s ability to gain support from multiple stakeholders, including the government, but this feature of the platform’s architecture did not have the same effect in other places and at other times. For other users, like the original Ushahidi Kenya team who worked to collate instances of violence after the Kenyan elections in 2007/08, this detailed verification workflow was essential to counter the misinformation and rumour that dogged those events. As Ushahidi’s use cases have diversified, from reporting death and damage during natural disasters to political events including elections, civil war and revolutions, the architecture of its code base has needed to expand. Ushahidi has recognised that code plays a defining role in the experience of verification practices, but also that code’s impact will not be the same at all times and in all circumstances. This is why it invested in research about user diversity, in a bid to understand the contexts in which code runs and how these contexts result in a variety of different impacts.

A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or by machines. Verification is not a binary choice: there is a spectrum between human and non-human verification work, and projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different mechanisms along it. Wikipedia uses a set of policies and practices about how content should be added and reviewed, such as ‘citation needed’ tags for information that sounds controversial and should be backed up by a reliable source. Galaxy Zoo uses an algorithm to assess whether contributions are accurate by comparing them with classifications of the same objects made by other volunteers.
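The machine end of that spectrum can be sketched as a simple agreement rule: an item counts as verified only when enough independent volunteers converge on the same answer. The function and thresholds below are illustrative assumptions rather than Galaxy Zoo’s published pipeline.

```python
from collections import Counter


def consensus_label(classifications, min_votes=5, threshold=0.8):
    """Accept a crowd-sourced label only when enough volunteers agree strongly.

    classifications: labels (e.g. 'spiral', 'elliptical') submitted by
    different volunteers for the same item. Returns the agreed label,
    or None if there is no sufficient consensus yet.
    """
    if len(classifications) < min_votes:
        return None  # too few independent judgements to trust
    label, votes = Counter(classifications).most_common(1)[0]
    agreement = votes / len(classifications)
    return label if agreement >= threshold else None


# Strong agreement yields an automatically verified label; a split vote does not.
print(consensus_label(["spiral"] * 8 + ["elliptical"] * 2))  # 'spiral'
print(consensus_label(["spiral"] * 5 + ["elliptical"] * 5))  # None
```

Raising the agreement threshold trades coverage for confidence: fewer items receive an automatic label, but the labels that are issued rest on stronger agreement, leaving the contested cases for human judgement.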

Ushahidi leaves it up to individual deployers of its tools and platform to make decisions about verification policies and practices, and is designing new defaults to accommodate this variety of use. In parallel, Veri.ly, a project by Patrick Meier (formerly of Ushahidi) with the organisations Masdar and QCRI, responds to the large amounts of unverified and often contradictory information that appear on social media following natural disasters by enabling social media users to collectively evaluate the credibility of rapidly crowdsourced evidence. The project was inspired by MIT’s winning entry to DARPA’s ‘Red Balloon Challenge’, which was intended to highlight social networking’s potential to solve widely distributed, time-sensitive problems, in this case by correctly identifying the GPS coordinates of 10 balloons suspended at fixed, undisclosed locations across the US. The winning MIT team crowdsourced the problem using a monetary incentive structure: $2,000 to the first person who submitted the correct coordinates for a single balloon, $1,000 to the person who invited that finder to the challenge, $500 to the person who invited the inviter, and so on. The scheme quickly took root, spawning geographically broad, dense branches of connections. After eight hours and 52 minutes, the MIT team had identified the correct coordinates for all 10 balloons.
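The arithmetic behind that incentive structure is worth spelling out, because it explains why the scheme could scale: each step up the invitation chain receives half the previous reward, so the total paid per balloon is bounded no matter how long the chain grows. The short sketch below, with an invented function name and chain lengths chosen only for illustration, shows the geometric series at work.

```python
def payouts(chain_length, finder_reward=2000.0):
    """Rewards for one balloon: the finder, their inviter, the inviter's
    inviter, and so on, each receiving half of the previous amount."""
    return [finder_reward / (2 ** i) for i in range(chain_length)]


# A finder plus three levels of inviters:
rewards = payouts(4)
print(rewards)                  # [2000.0, 1000.0, 500.0, 250.0]
print(sum(rewards))             # 3750.0

# The geometric series converges, so however deep the invitation chain,
# the total paid out per balloon never reaches $4,000.
print(sum(payouts(50)) < 4000)  # True
```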

Veri.ly aims to apply MIT’s approach to the process of rapidly collecting and evaluating critical evidence during disasters: “Instead of looking for weather balloons across an entire country in less than 9 hours, we hope Veri.ly will facilitate the crowdsourced collection of multimedia evidence for individual disasters in under 9 minutes.” It is still unclear how (or whether) Veri.ly will be able to reproduce the same incentive structure, but a bigger question concerns the scale and spread of social media in the majority of countries where humanitarian assistance is needed. Most Ushahidi or Crowdmap installations are, for example, still “small data” projects, many of them focused on areas that still require offline verification procedures (such as calling volunteers or paid staff stationed across a country, as was the case in Sudan [3]). In these cases, where the social media presence may be insignificant, a team’s ability to achieve a strong local presence will define the quality of its verification practices, and consequently the level of trust accorded to its project.

If code is law and if other aspects in addition to code determine how we can act in the world, it is important to understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources. Only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

For more on Ushahidi verification practices and the management of sources on Wikipedia during breaking news events, see:

[1] Ford, H. (2012) Wikipedia Sources: Managing Sources in Rapidly Evolving Global News Articles on the English Wikipedia. SSRN Electronic Journal. doi:10.2139/ssrn.2127204

[2] Ford, H. (2012) Crowd Wisdom. Index on Censorship 41(4), 33–39. doi:10.1177/0306422012465800

[3] Ford, H. (2011) Verifying information from the crowd. Ushahidi.


Heather Ford has worked as a researcher, activist, journalist, educator and strategist in the fields of online collaboration, intellectual property reform, information privacy and open source software in South Africa, the United Kingdom and the United States. She is currently a DPhil student at the OII, where she is studying how Wikipedia editors write history as it happens in a format that is unprecedented in the history of encyclopedias. Before this, she worked as an ethnographer for Ushahidi. Read Heather’s blog.

For more on the Christchurch earthquake, and the role of the digital humanities in preserving the digital record of its impact, see: Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project on this blog.

Did Libyan crisis mapping create usable military intelligence?

The Middle East has recently witnessed a series of popular uprisings against autocratic rulers. In mid-January 2011, Tunisian President Zine El Abidine Ben Ali fled his country, and just four weeks later, protesters overthrew the regime of Egyptian President Hosni Mubarak. Yemen’s government was also overthrown in 2011, and Morocco, Jordan, and Oman saw significant governmental reforms leading, if only modestly, toward the implementation of additional civil liberties.

Protesters in Libya called for their own ‘day of rage’ on February 17, 2011, marked by violent protests in several major cities, including the capital, Tripoli. As they transformed from ‘protesters’ to ‘Opposition forces’ they began pushing information onto Twitter, Facebook, and YouTube, reporting their firsthand experiences of what had turned into a civil war virtually overnight. The evolving humanitarian crisis prompted the United Nations to request the creation of the Libya Crisis Map, which was made public on March 6, 2011. Other, more focused crisis maps followed, and were widely distributed on Twitter.

While the map was initially populated with humanitarian information pulled from the media and online social networks, as the imposition of an internationally enforced No Fly Zone (NFZ) over Libya became imminent, information of an apparently tactical military nature began to appear on it. While many people continued to contribute conventional humanitarian information to the map, the sudden shift toward information that could aid international military intervention was unmistakable.

How useful was this information, though? Agencies in the U.S. Intelligence Community convert raw data into usable information (incorporated into finished intelligence) using some form of the Intelligence Process. As outlined in the U.S. military’s joint intelligence manual, this consists of six interrelated steps, all centered on a specific mission. It is interesting that many Twitter users, though perhaps unaware of the intelligence process, replicated each of its steps during the Libyan civil war, producing finished intelligence adequate for consumption by NATO commanders and rebel leadership.

It was clear from the beginning of the Libyan civil war that very few people knew exactly what was happening on the ground. Even NATO, according to one of the organization’s spokesmen, lacked the ground-level informants necessary to get a full picture of the situation in Libya. There is no public information about the extent to which military commanders used information from crisis maps during the Libyan civil war. According to one NATO official, “Any military campaign relies on something that we call ‘fused information’. So we will take information from every source we can… We’ll get information from open source on the internet, we’ll get Twitter, you name any source of media and our fusion centre will deliver all of that into useable intelligence.”

The data in these crisis maps came from a variety of sources, including journalists, official press releases, and civilians on the ground who updated blogs and/or maintained telephone contact. The @feb17voices Twitter feed (translated into English and used to support the creation of The Guardian’s and the UN’s Libya Crisis Map) included accounts of live phone calls from people on the ground in areas where the Internet was blocked, and where there was little or no media coverage. Twitter users began compiling data and information: they tweeted and retweeted data they had collected, information they had filtered and processed, and their own requests for specific data and clarifications.

Information from various Twitter feeds was then published in detailed maps of major events that contained information pertinent to military and humanitarian operations. For example, as fighting intensified, @LibyaMap’s updates began to provide a general picture of the battlefield, including specific, sourced intelligence about the progress of fighting, humanitarian and supply needs, and the success of some NATO missions. Although the feed did not explicitly state that its purpose was to spread mission-relevant intelligence, the nature of the information makes alternative motivations highly unlikely.

Interestingly, the Twitter users featured in a June 2011 article by the Guardian had already explicitly expressed their intention of affecting military outcomes in Libya by providing NATO forces with specific geographical coordinates to target Qadhafi regime forces. We could speculate at this point about the extent to which the Intelligence Community might have guided Twitter users to participate in the intelligence process; while NATO and the Libyan Opposition issued no explicit intelligence requirements to the public, they tweeted stories about social network users trying to help NATO, likely leading their online supporters to draw their own conclusions.

It appears from similar maps created during the ongoing uprisings in Syria that the creation of finished intelligence products by crisis mappers may become a regular occurrence. Future study should focus on determining the motivations of mappers for collecting, processing, and distributing intelligence, particularly as a better understanding of their motivations could inform research on the ethics of crisis mapping. It is reasonable to believe that some (or possibly many) crisis mappers would be averse to their efforts being used by military commanders to target “enemy” forces and infrastructure.

Indeed, some are already questioning the direction of crisis mapping in the absence of professional oversight (Global Brief 2011): “[If] crisis mappers do not develop a set of best practices and shared ethical standards, they will not only lose the trust of the populations that they seek to serve and the policymakers that they seek to influence, but (…) they could unwittingly increase the number of civilians being hurt, arrested or even killed without knowing that they are in fact doing so.”


Read the full paper: Stottlemyre, S., and Stottlemyre, S. (2012) Crisis Mapping Intelligence Information During the Libyan Civil War: An Exploratory Case Study. Policy & Internet 4 (3-4).

Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project

The 6.2 magnitude earthquake that struck the centre of Christchurch on 22 February 2011 claimed 185 lives, damaged 80% of the central city beyond repair, and forced the abandonment of 6000 homes. It was the third costliest insurance event in history. The CEISMIC archive developed at the University of Canterbury will soon have collected almost 100,000 digital objects documenting the experiences of the people and communities affected by the earthquake, all of it available for study.

The Internet can be hugely useful for coordinating disaster relief efforts, or for helping to rebuild affected communities. Paul Millar came to the OII on 21 May 2012 to discuss the CEISMIC archive project and the role of the digital humanities after a major disaster. We talked to him afterwards.
