Can “We the People” really help draft a national constitution? (sort of..)

As innovations like social media and open government initiatives have become an integral part of politics in the twenty-first century, there is increasing interest in the possibility of citizens directly participating in the drafting of legislation. Indeed, there is a clear trend towards greater public participation in constitution making, and with the growth of e-democracy tools, this trend is likely to continue. However, this view is certainly not universally held, and a number of recent studies have been much more skeptical about the value of public participation, questioning whether it has any real impact on the text of a constitution.

Following the banking crisis, and a groundswell of popular opposition to the existing political system in 2009, the people of Iceland embarked on a unique process of constitutional reform. Having opened the entire drafting process to public input and scrutiny, these efforts culminated in Iceland’s 2011 draft crowdsourced constitution: reputedly the world’s first. In his Policy & Internet article “When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution”, Alexander Hudson examines the impact that the Icelandic public had on the development of the draft constitution. He finds that almost 10 percent of the written proposals submitted generated a change in the draft text, particularly in the area of rights.

This remarkably high number is likely explained by the drafters’ isolation from both political parties and special interests, which made them more reliant on and open to input from the public. However, although this would appear to be an example of successful public crowdsourcing, the new constitution was ultimately rejected by parliament. Iceland’s experiment with participatory drafting therefore demonstrates the possibility of successful online public engagement — but also the need to connect the masses with the political elites. It was the disconnect between these groups that triggered the initial protests and constitutional reform, but that also led to the draft’s ultimate failure.

We caught up with Alexander to discuss his findings.

Ed: We know from Wikipedia (and other studies) that group decisions are better, and crowds can be trusted. However, I guess (re US, UK) I also feel increasingly nervous about the idea of “the public” having a say over anything important and binding. How do we distribute power and consultation, while avoiding populist chaos?  

Alexander: That’s a large and important question, which I can probably answer only in part. One thing we need to be careful of is what kind of public we are talking about. In many cases, we view self-selection as a bad thing — it can’t be representative. However, in cases like Wikipedia, we see self-selected individuals with specialized knowledge and an uncommon level of interest collaborating. I would suggest that there is an important difference between the kind of decisions that are made by careful and informed participants in citizens’ juries, deliberative polls, or Wikipedia editing, and the oversimplified binary choices that we make in elections or referendums.

So, while there is research to suggest that large numbers of ordinary people can make better decisions, there are some conditions in terms of prior knowledge and careful consideration attached to that. I have high hopes for these more deliberative forms of public participation, but we are right to be cautious about referendums. The Icelandic constitutional reform process actually involved several forms of public participation, including two randomly selected deliberative fora, self-selected online participation, and a popular referendum with several questions.

Ed: A constitution is a very technical piece of text: how much could non-experts realistically contribute to its development — or was there also contribution from specialised interest groups? Presumably there was a team of lawyers and drafters managing the process? 

Alexander: All of these things were going on in Iceland’s drafting process. In my research here and on a few other constitution-making processes in other countries, I’ve been impressed by the ability of citizens to engage at a high level with fundamental questions about the nature of the state, constitutional rights, and legal theory. Assuming a reasonable level of literacy, people are fully capable of reading some literature on constitutional law and political philosophy, and writing very well-informed submissions that express what they would like to see in the constitutional text. A small, self-selected set of the public in many countries seeks to engage in spirited and for the most part respectful debate on these issues. In the Icelandic case, these debates have continued from 2009 to the present.

I would also add that public interest is not distributed uniformly across all the topics that constitutions cover. Members of the public show much more interest in discussing issues of human rights, and have more success in seeing proposals on that theme included in the draft constitution. Some NGOs were involved in submitting proposals to the Icelandic Constitutional Council, but interest groups do not appear to have been a major factor in the process. Unlike some constitution-making processes, the Icelandic Constitutional Council had a limited staff, and the drafters themselves were very engaged with the public on social media.

Ed: I guess Iceland is fairly small, but also unusually homogeneous. That helps, presumably, in creating a general consensus across a society? Or will party / political leaning always tend to trump any sense of common purpose and destiny, when defining the form and identity of the nation?

Alexander: You are certainly right that Iceland is unusual in these respects, and this raises important questions of what this is a case of, and how the findings here can inform us about what might happen in other contexts. I would not say that the Icelandic people reached any sort of broad, national-level consensus about how the constitution should change. During the early part of the drafting process, it seems that those who had strong disagreements with what was taking place absented themselves from the proceedings. They did turn up later to some extent (especially after the 2012 referendum), and sought to prevent this draft from becoming law.

Where the small size and homogeneous population really came into play in Iceland is through the level of knowledge that those who participated had of one another before entering into the constitution-making process. While this has been overemphasized in some discussions of Iceland, there are communities of shared interests where people all seem to know each other, or at least know of each other. This makes forming new societies, NGOs, or interest groups easier, and probably helped to launch the constitution-making project in the first place.

Ed: How many people were involved in the process — and how were bad suggestions rejected, discussed, or improved? I imagine there must have been divisive issues, that someone would have had to arbitrate? 

Alexander: The number of people who interacted with the process in some way, either by attending one of the public forums that took place early in the process, voting in the election for the Constitutional Council, or engaging with the process on social media, is certainly in the tens of thousands. In fact, one of the striking things about this case is that 522 people stood for election to the 25-member Constitutional Council which drafted the new constitution. So there was certainly a high level of interest in participating in this process.

My research here focused on the written proposals that were posted to the Constitutional Council’s website. 204 individuals participated in that more intensive way. As the members of the Constitutional Council tell it, they would read some of the comments on social media, and the formal submissions on their website during their committee meetings, and discuss amongst themselves which ideas should be carried forward into the draft. The vast majority of the submissions were well-informed, on topic, and conveyed a collegial tone. In this case at least, there was very little of the kind of abusive participation that we observe in some online networks. 

Ed: You say that despite the success in creating a crowd-sourced constitution (that passed a public referendum), it was never ratified by parliament — why is that? And what lessons can we learn from this?

Alexander: Yes, this is one of the most interesting aspects of the whole thing for scholars, and certainly a source of some outrage for those Icelanders who are still active in trying to see this draft constitution become law. Some of this relates to the specifics of Iceland’s constitutional amendment process (which disincentivizes parliament from approving changes in between elections), but I think that there are also a couple of broadly applicable things going on here. First, the constitution-making process arose as a response to the way that the Icelandic government was perceived to have failed in governing the financial system in the late 2000s. By the time a last-ditch attempt to bring the draft constitution up for a vote in parliament occurred right before the 2013 election, almost five years had passed since the crisis that began this whole saga, and the economic situation had begun to improve. So legislators were not feeling pressure to address those issues any more.

Second, since political parties were not active in the drafting process, too few members of parliament had a stake in the issue. If one of the larger parties had taken ownership of this draft constitution, we might have seen a different outcome. I think this is one of the most important lessons from this case: if the success of the project depends on action by elite political actors, they should be involved in the earlier stages of the process. For various reasons, the Icelanders chose to exclude professional politicians from the process, but that meant that the Constitutional Council had too few friends in parliament to ratify the draft.

Read the full article: Hudson, A. (2018) When Does Public Participation Make a Difference? Evidence From Iceland’s Crowdsourced Constitution. Policy & Internet 10 (2) 185-217. DOI: https://doi.org/10.1002/poi3.167

Alexander Hudson was talking to blog editor David Sutcliffe.

Bursting the bubbles of the Arab Spring: the brokers who bridge ideology on Twitter

Online activism has become increasingly visible, with social media platforms being used to express protest and dissent from the Arab Spring to #MeToo. Scholarly interest in online activism has grown with its use, together with disagreement about its impact. Do social media really challenge traditional politics? Some claim that social media have had a profound and positive effect on modern protest — the speed of information sharing making online networks highly effective in building revolutionary movements. Others argue that this activity is merely symbolic: online activism has little or no impact, dilutes offline activism, and weakens social movements. Given online activity doesn’t involve the degree of risk, trust, or effort required on the ground, they argue that it can’t be considered to be “real” activism. In this view, the Arab Spring wasn’t simply a series of “Twitter revolutions”.

Despite much work on offline social movements and coalition building, few studies have used social network analysis to examine the influence of brokers among online activists (i.e. those who act as a bridge between different ideological groups), or their role in information diffusion across a network. In her Policy & Internet article “Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution”, Deena Abul-Fottouh tests whether the social movements theory of networks and coalition building — developed to explain brokerage roles in offline networks, between established parties and organisations — can also be used to explain what happens online.

Social movements theory suggests that actors who occupy an intermediary structural position between different ideological groups are more influential than those embedded only in their own faction. That is, the “bridging ties” that link across political ideologies have a greater impact on mobilization than the bonding ties within a faction. Indeed, examining the Egyptian revolution and ensuing crisis, Deena finds that these online brokers were more evident during the first phase of movement solidarity between liberals, Islamists, and socialists than in the period of schism and crisis (2011–2014) that followed the initial protests. However, she also found that the online brokers didn’t match the brokers on the ground: they played different roles, complementing rather than mirroring each other in advancing the revolutionary movement.
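For readers less familiar with the network terminology, here is a minimal sketch (made-up accounts, Python’s networkx library) of how a bridging broker shows up in this kind of analysis. Betweenness centrality is used here only as a simple stand-in for the more specific brokerage measures applied in the article.

```python
# Toy network: two tight ideological clusters plus one account bridging them.
# The bridging account dominates betweenness centrality, which is one simple
# way brokers stand out in Twitter network data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("lib1", "lib2"), ("lib2", "lib3"), ("lib1", "lib3"),   # liberal cluster
    ("isl1", "isl2"), ("isl2", "isl3"), ("isl1", "isl3"),   # Islamist cluster
    ("broker", "lib1"), ("broker", "isl1"),                 # bridging ties
])

betweenness = nx.betweenness_centrality(G)
for account, score in sorted(betweenness.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.2f}")   # the "broker" account tops the ranking
```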

We caught up with Deena to discuss her findings:

Ed: Firstly: is the “Arab Spring” a useful term? Does it help to think of the events that took place across parts of the Middle East and North Africa under this umbrella term — which I suppose implies some common cause or mechanism?

Deena: Well, I believe it’s useful to an extent. It helps describe some positive features that were common across the region: dissatisfaction with the existing regimes, a dissatisfaction that was transformed from the domain of advocacy to the domain of high-risk activism; a common feeling among the people that they could make a difference, even though it did not last long; and evidence that there are young people in the region who are willing to sacrifice for their freedom. On the other hand, structural forces in the region, such as the power of deep states and the forces of counter-revolution, were capable of halting this Arab Spring before it burgeoned or bore fruit, so maybe the term “Spring” is no longer relevant.

Ed: Revolutions have been happening for centuries, i.e. they obviously don’t need Twitter or Facebook to happen. How significant do you think social media were in this case, either in sparking or sustaining the protests? And how useful are these new social media data as a means to examine the mechanisms of protest?

Deena: Social media platforms have proven useful in facilitating protests, for example by sharing information quickly and widely across borders. People in Egypt and other places in the region were influenced by Tunisia, and protest tactics were shared online. In other words, social media platforms definitely facilitate the diffusion of protests. They are also hubs for creating a common identity and culture among activists, which is crucial for the success of social movements. I also believe that social media present activists with various ways to circumvent the policing of activism (e.g. using pseudonyms to hide activists’ identities, sharing information about places to avoid in times of protests, and the closed groups that many platforms offer, where activists have enough privacy to discuss non-public matters).

However, social media ties are weak ties. These platforms are not necessarily efficient in building the trust needed to bond social movements, especially in times of schism and at the level of high-risk activism. That is why, as I discuss in my article, we can see that the type of brokerage that is formed online is brokerage that is built on weak ties, not necessarily the same as offline brokerage that usually requires high trust.

Ed: It’s interesting that you could detect bridging between groups. Given schism seems to be fairly standard in society (Cf filter bubbles etc.) .. has enough attention been paid to this process of temporary shifting alignments, to advance a common cause? And are these incidental, or intentional acts of brokerage?

Deena: I believe further studies need to be made on the concepts of solidarity, schism and brokerage within social movements, both online and offline. Little attention has been given to how movements come together or break apart online. The Egyptian revolution is a rich case for studying these concepts, as the many changes in the revolution’s path over its first five years, and the intervention of different forces, led to multiple shifts of alliances that deserve study. Acts of brokerage do not necessarily have to be intentional. In social movements studies, researchers have examined incidental acts that could eventually lead to the formation of alliances, such as considering co-members of various social movement organizations as brokers between these organizations.

I believe that the same happens online. Brokerage could start with incidental acts such as activists following each other on Twitter for example, which could develop into stronger ties through mentioning each other. This could also build up to coordinating activities online and offline. In the case of the Egyptian revolution, many activists who met in protests on the ground were also friends online. The same happened in Moldova where activists coordinated tactics online and met on the ground. Thus, incidental acts that start with following each other online could develop into intentional coordinated activism offline. I believe further qualitative interviews need to be conducted with activists to study how they coordinate between online and offline activism, as there are certain mechanisms that cannot be observed through just studying the public profiles of activists or their structural networks.

Ed: The “Arab Spring” has had a mixed outcome across the region — and is also now perhaps a bit forgotten in the West. There have been various network studies of the 2011 protests: but what about the time between visible protests .. isn’t that in a way more important? What would a social network study of the current situation in Egypt look like, do you think?

Deena: Yes, the in-between times of waves of protests are as important to study as the waves themselves, as they reveal a lot about what could happen, and we usually study them retroactively after the big shocks happen. A social network of the current situation in Egypt would probably include many “isolates” and tiny “components”, to use social network analysis terms. This started showing in 2014 as an effect of schism in the movement. I believe this became aggravated over time as the military coup d’état got a stronger grip over the country, suppressing all opposition. Many activists are either detained or have left the country. A quick look at their online profiles does not reveal strong communication between them. Yet, this is only what is apparent from public profiles. One of the levers that social media platforms offer is the ability to create private or “closed” groups online.

I believe these groups might include rich data about activists’ communication. However, it is very difficult, almost impossible to study these groups, unless you are a member or they give you permission. In other words, there might be some sort of communication occurring between activists but at a level that researchers unfortunately cannot access. I think we might call it the “underground of online activism”, which I believe is potentially a very rich area of study.
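For readers less familiar with the two network terms Deena uses above, here is a minimal sketch (invented accounts, again using networkx) of what “isolates” and “components” look like in practice.

```python
# Toy sparse activist network: "isolates" are accounts with no remaining ties;
# "components" are the separate connected clusters that are left.
import networkx as nx

G = nx.Graph()
G.add_nodes_from(["a", "b", "c", "d", "e", "f"])        # six activist accounts
G.add_edges_from([("a", "b"), ("c", "d"), ("d", "e")])  # only a few ties survive

print("isolates:", list(nx.isolates(G)))                              # ['f']
print("components:", [sorted(c) for c in nx.connected_components(G)])
# -> [['a', 'b'], ['c', 'd', 'e'], ['f']]
```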

Ed: A standard criticism of “Twitter network studies” is that they aren’t very rich — they may show who’s following whom, but not necessarily why, or with what effect. Have there been any larger, more detailed studies of the Arab Spring that take in all sides: networks, politics, ethnography, history — both online and offline?

Deena: To my knowledge, there haven’t been studies that have included all these aspects together. Yet there are many studies that covered each of them separately, especially the politics, ethnography, and history of the Arab Spring (see for example: Egypt’s Tahrir Revolution 2013, edited by D. Tschirgi, W. Kazziha and S. F. McMahon). Similarly, very few studies have tried to compare the online and offline repertoires (see for example: Weber, Garimella and Batayneh 2013, Abul-Fottouh and Fetner 2018). In my doctoral dissertation (2018 from McMaster University), I tried to include many of these elements.

Read the full article: Abul-Fottouh, D. (2018) Brokerage Roles and Strategic Positions in Twitter Networks of the 2011 Egyptian Revolution. Policy & Internet 10: 218-240. doi:10.1002/poi3.169

Deena Abul-Fottouh was talking to blog editor David Sutcliffe.

Call for Papers: Government, Industry, Civil Society Responses to Online Extremism

We are calling for articles for a Special Issue of the journal Policy & Internet on “Online Extremism: Government, Private Sector, and Civil Society Responses”, edited by Jonathan Bright and Bharath Ganesh, to be published in 2019. The submission deadline is October 30, 2018.

Issue Outline

Governments, the private sector, and civil society are beginning to work together to challenge extremist exploitation of digital communications. Both Islamic and right-wing extremists use websites, blogs, social media, encrypted messaging, and filesharing websites to spread narratives and propaganda, influence mainstream public spheres, recruit members, and advise audiences on undertaking attacks.

Across the world, public-private partnerships have emerged to counter this problem. For example, the Global Internet Forum to Counter Terrorism (GIFCT) organized by the UN Counter-Terrorism Executive Directorate has organized a “shared hash database” that provides “digital fingerprints” of ISIS visual content to help platforms quickly take down content. In another case, the UK government funded ASI Data Science to build a tool to accurately detect jihadist content. Elsewhere, Jigsaw (a Google-owned company) has developed techniques to use content recommendations on YouTube to “redirect” viewers of extremist content to content that might challenge their views.

While these are important and admirable efforts, their impact and effectiveness are unclear. The purpose of this special issue is to map and evaluate emerging public-private partnerships, technologies, and responses to online extremism. There are three main areas of concern that the issue will address:

(1) the changing role of content moderation, including taking down content and user accounts, as well as the use of AI techniques to assist;

(2) the increasing focus on “counter-narrative” campaigns and strategic communication; and

(3) the inclusion of global civil society in this agenda.

This mapping will contribute to understanding how power is distributed across these actors, the ways in which technology is expected to address the problem, and the design of the measures currently being undertaken.

Topics of Interest

Papers exploring one or more of the following areas are invited for consideration:

Content moderation

  • Efficacy of user and content takedown (and the effects it has on extremist audiences);
  • Navigating the politics of freedom of speech in light of the proliferation of hateful and extreme speech online;
  • Development of content and community guidelines on social media platforms;
  • Effect of government policy, recent inquiries, and civil society on content moderation practices by the private sector (e.g. recent laws in Germany, Parliamentary inquiries in the UK);
  • Role and efficacy of Artificial Intelligence (AI) and machine learning in countering extremism.

Counter-narrative Campaigns and Strategic Communication

  • Effectiveness of counter-narrative campaigns in dissuading potential extremists;
  • Formal and informal approaches to counter narratives;
  • Emerging governmental or parastatal bodies to produce and disseminate counter-narratives;
  • Involvement of media and third sector in counter-narrative programming;
  • Research on counter-narrative practitioners;
  • Use of technology in supporting counter-narrative production and dissemination.

Inclusion of Global Civil Society

  • Concentration of decision making power between government, private sector, and civil society actors;
  • Diversity of global civil society actors involved in informing content moderation and counter-narrative campaigns;
  • Extent to which inclusion of diverse civil society/third sector actors improves content moderation and counter-narrative campaigns;
  • Challenges and opportunities faced by global civil society in informing agendas to respond to online extremism.

Submitting your Paper

We encourage interested scholars to submit 6,000 to 8,000 word papers that address one or more of the issues raised in the call. Submissions should be made through Policy & Internet’s manuscript submission system. Interested authors are encouraged to contact Jonathan Bright (jonathan.bright@oii.ox.ac.uk) and Bharath Ganesh (bharath.ganesh@oii.ox.ac.uk) to check the suitability of their paper.

Special Issue Schedule

The special issue will proceed according to the following timeline:

Paper submission: 30 October 2018

First round of reviews: January 2019

Revisions received: March 2019

Final review and decision: May 2019

Publication (estimated): December 2019

The special issue as a whole will be published at some time in late 2019, though individual papers will be published online in EarlyView as soon as they are accepted.

How can we encourage participation in online political deliberation?

Political parties have been criticized for failing to link citizen preferences to political decision-making. But in an attempt to enhance policy representation, many political parties have established online platforms to allow discussion of policy issues and proposals, and to open up their decision-making processes. The Internet — and particularly the social web — seems to provide an obvious opportunity to strengthen intra-party democracy and mobilize passive party members. However, these mobilizing capacities are limited, and in most instances, participation has been low.

In their Policy & Internet article “Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party,” Katharina Gerl, Stefan Marschall, and Nadja Wilker examine the German Greens’ online collaboration platform to ask why only some party members and supporters use it. The platform aims to improve the inclusion of party supporters and members in the party’s opinion-formation and decision-making process, but it has failed to reach inactive members. Instead, those who have already been active in the party also use the online platform. It also seems that classical resources such as education and employment status do not (directly) explain differences in participation; instead, participation is motivated by process-related and ideological incentives.

We caught up with the authors to discuss their findings:

Ed.: You say “When it comes to explaining political online participation within parties, we face a conceptual and empirical void” .. can you explain briefly what the offline models are, and why they don’t work for the Internet age?

Katharina / Stefan / Nadja: According to Verba et al. (1995), the reasons for political non-participation can be boiled down to three factors: (1) citizens do not want to participate, (2) they cannot, (3) nobody asked them to. In terms of models, we can distinguish three perspectives: citizens need certain resources like education, information, time and civic skills to participate (the resource model and civic voluntarism model). The social psychological model looks at the role of attitudes and political interest, which are supposed to increase participation. In addition to resources and attitudes, the general incentives model analyses how motives, costs and benefits influence participation.

These models can be applied to online participation as well, but findings for the online context indicate that the mechanisms do not always work as they do offline. For example, age plays out differently for online participation. Generally, the models have to be specified for each participation context. This applies especially to the online context, as forms of online participation sometimes demand different resources, skills or motivational factors. Therefore, we have to adapt and supplement the models with additional online factors like internet skills and internet sophistication.

Ed.: What’s the value to a political party of involving its members in policy discussion? (i.e. why go through the bother?)

Katharina / Stefan / Nadja: Broadly speaking, there are normative and rational reasons for that. At least for the German parties, intra-party democracy plays a crucial role. The involvement of members in policy discussion can serve as a means to strengthen the integration and legitimation power of a party. Additionally, the involvement of members can have a mobilizing effect on the party on the ground. This can positively influence the linkage between the party in central office, the party on the ground, and the societal base. Furthermore, member participation can be a way to react to dissatisfaction within a party.

Ed.: Are there any examples of successful “public deliberation” — i.e. is this maybe just a problem of getting disparate voices to usefully engage online, rather than a failure of political parties per se?

Katharina / Stefan / Nadja: This is definitely not unique to political parties. The problems we observe regarding online public deliberation in political parties also apply to other online participation platforms: political participation and especially public deliberation require time and effort for participants, so they will only be willing to engage if they feel they benefit from it. But the benefits of participation may remain unclear as public deliberation – by parties or other initiators – often takes place without a clear goal or a real say in decision-making for the participants. Initiators of public deliberation often fail to integrate processes of public deliberation into formal and meaningful decision-making procedures. This leads to disappointment for potential participants who might have different expectations concerning their role and scope of influence. There is a risk of a vicious circle and disappointed expectations on both sides.

Ed.: Based on your findings, what would you suggest that the Greens do in order to increase participation by their members on their platform?

Katharina / Stefan / Nadja: Our study shows that the members of the Greens are generally willing to participate online and appreciate this opportunity. However, the survey also revealed that the most important incentive for them is to have an influence on the party’s decision-making. We would suggest that the Greens create an actual cause for participation, meaning to set clear goals and to integrate it into specific and relevant decisions. Participation should not be an end in itself!

Ed.: How far do political parties try to harness deliberation where it happens in the wild e.g. on social media, rather than trying to get people to use bespoke party channels? Or might social media users see this as takeover by the very “establishment politics” they might have abandoned, or be reacting against?

Katharina / Stefan / Nadja: Parties do not constrain their online activities to their own official platforms and channels, but also try to develop strategies for influencing discourses in the wild. However, this works much better, and carries much more authenticity and credibility, when it is not parties as abstract organizations but individual politicians, such as members of parliament, who engage in person on social media, for example on Twitter.

Ed.: How far have political scientists understood the reasons behind the so-called “crisis of democracy”, and how to address it? And even if academics came up with “the answer” — what is the process for getting academic work and knowledge put into practice by political parties?

Katharina / Stefan / Nadja: The alleged “crisis of democracy” is primarily seen as a crisis of representation, in which the gap between political elites and citizens has widened drastically in recent years, giving room to populist movements and parties in many democracies. Our impression is that, facing the rise of populism in many countries, politicians have become more and more attentive to discussions and findings in political science, which has been addressing these linkage problems for years. But perhaps this is like shutting the stable door after the horse has bolted.

Read the full article: Gerl, K., Marschall, S., and Wilker, N. (2016) Does the Internet Encourage Political Participation? Use of an Online Platform by Members of a German Political Party. Policy & Internet doi:10.1002/poi3.149

Katharina Gerl, Stefan Marschall, and Nadja Wilker were talking to blog editor David Sutcliffe.

Making crowdsourcing work as a space for democratic deliberation

There are many instances of crowdsourcing in both local and national governance across the world, as governments implement crowdsourcing as part of open government practices aimed at fostering civic engagement and knowledge discovery for policies. But is crowdsourcing conducive to deliberation among citizens, or is it essentially just a consulting mechanism for information gathering? If it is conducive to deliberation, what kind of deliberation is it? (And is it democratic?) And how representative are the online deliberative exchanges of the wishes and priorities of the larger population?

In their Policy & Internet article “Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland”, Tanja Aitamurto and Hélène Landemore examine a partially crowdsourced reform of the Finnish off-road traffic law. The aim of the process was to search for knowledge and ideas from the crowd, enhance people’s understanding of the law, and to increase the perception of the policy’s legitimacy. The participants could propose ideas on the platform, vote others’ ideas up or down, and comment.

The authors find that despite the lack of explicit incentives for deliberation in the crowdsourced process, crowdsourcing indeed functioned as a space for democratic deliberation; that is, an exchange of arguments among participants characterized by a degree of freedom, equality, and inclusiveness. An important finding, in particular, is that despite the lack of statistical representativeness among the participants, the deliberative exchanges reflected a diversity of viewpoints and opinions, tempering to a degree the worry about the bias likely introduced by the self-selected nature of citizen participation.

They introduce the term “crowdsourced deliberation” to mean the deliberation that happens (intentionally or unintentionally) in crowdsourcing, even when the primary aim is to gather knowledge rather than to generate deliberation. In their assessment, crowdsourcing in the Finnish experiment was conducive to some degree of democratic deliberation, even though, strikingly, the process was not designed for it.

We caught up with the authors to discuss their findings:

Ed.: There’s a lot of discussion currently about “filter bubbles” (and indeed fake news) damaging public deliberation. Do you think collaborative crowdsourced efforts (that include things like Wikipedia) help at all more generally, or .. are we all damned to our individual echo chambers?

Tanja and Hélène: Deliberation, whether taking place within a crowdsourced policymaking process or in another context, has a positive impact on society, when the participants exchange knowledge and arguments. While all deliberative processes are, to a certain extent, their own microcosms, there is typically at least some cross-cutting exposure of opinions and perspectives among the crowd. The more diverse the participant crowd is and the larger the number of participants, the more likely there is diversity also in the opinions, preventing strictly siloed echo chambers.

Moreover, it all comes down to design and incentives in the end. In our crowdsourcing platform we did not particularly try to attract a cross-cutting section of the population, so there was a risk of having only a relatively homogeneous population self-selecting into the process, which is what happened to a degree, demographically at least (over 90% of our participants were educated male professionals). In terms of ideas, though, the pool was much more diverse than the demography would have suggested, and techniques we used (like clustering) helped maintain the visibility (to the researchers) of the minority views.

That said, if what you are after is maximal openness and cross-cutting exposure, nothing beats random selection, like that used in mini-publics of all kinds, from citizens’ juries to deliberative polls to citizens’ assemblies… That’s what Facebook and Twitter should use in order to break the filter bubbles in which people lock themselves: algorithms that randomize the content of our newsfeed and expose us to a vast range of opinions, rather than algorithms that maximize similarity with what we already like.

But for us the goal was different and so our design was different. Our goal was to gather knowledge and ideas, and for this self-selection (the sort also at play in Wikipedia) is better than random selection: whereas with random selection you shut the door on most people, with a crowdsourcing platform you leave the door open to anyone who can self-identify as having a relevant form of knowledge and has the motivation to participate. The remarkable thing in our case is that even though we didn’t design the process for democratic deliberation, it occurred anyway, between the cracks of the design so to speak.
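Tanja and Hélène mention clustering as one technique that kept minority views visible. As a purely illustrative sketch (toy proposals, an arbitrary cluster count, and scikit-learn rather than whatever tooling the project actually used), grouping submissions by textual similarity might look something like this:

```python
# Group crowdsourced proposals by textual similarity, so that small clusters
# of minority viewpoints stay visible alongside the dominant themes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

proposals = [
    "off-road traffic should be banned in nature reserves",
    "ban all driving in nature reserves entirely",
    "allow snowmobiles on marked routes only",
    "landowners must consent to off-road driving on their land",
]

X = TfidfVectorizer().fit_transform(proposals)          # bag-of-words features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, proposal in sorted(zip(labels, proposals)):
    print(label, "|", proposal)
```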

Ed.: I suppose crowdsourcing won’t work unless there is useful cooperation: do you think these successful relationships self-select on a platform, or do things perhaps work precisely because people may NOT be discussing other, divisive things (like immigration) when working together on something apparently unrelated, like an off-road law?

Tanja and Hélène: There is a varying degree of collaboration in crowdsourcing. In crowdsourced policymaking, the crowd does not typically collaborate on drafting the law (unlike the crowd in Wikipedia writing), but rather responds to the crowdsourcer’s (in this case, the government’s) prompts. In this type of crowdsourcing, which was the case in the crowdsourced off-road traffic law reform, the crowd members don’t need to collaborate with each other in order for the process to achieve its goal of finding new knowledge. The crowd can, of course, decide not to collaborate with the government and not answer the prompts, or start sabotaging the process.

The degree and success of collaboration will depend on the design and the goals of your experiment. In our case, crowdsourcing might have worked even without collaboration because our goal was to gather knowledge and information, which can be done by harvesting the contributions of the individual members of the crowd without them interacting with each other. But if what you are after is co-creation or deliberation, then yes you need to create the background conditions and incentives for cooperation.

Cooperation may require bracketing some sensitive topics, or else learning to disagree in respectful ways. Deliberation, and more broadly cooperation, are social skills — human technologies, you might say — that we still don’t know how to use very well. This comes in part from the fact that our school systems do not teach those skills, focused as they are on promoting individual rather than collaborative success and on creating an ecosystem of zero-sum competition between students, when in the real world there is almost nothing you can do all by yourself, and we would be much better off nurturing collaborative skills and the art or technology of deliberation.

Ed.: Have there been any other examples in Finland — i.e. is crowdsourcing (and deliberation) something that is seen as useful and successful by the government?

Tanja and Hélène: Yes, there have been several crowdsourced policymaking processes in Finland. One is a crowdsourced reform of the Limited Liability Housing Company Law, organized by the Finnish Ministry of Justice. We examined the quality of deliberation in that case, and the findings show that the quality of deliberation, as measured by the Discourse Quality Index, was pretty good.

Read the full article: Aitamurto, T. and Landemore, H. (2016) Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland. Policy & Internet 8 (2) doi:10.1002/poi3.115.


Tanja Aitamurto and Hélène Landemore were talking to blog editor David Sutcliffe.

Habermas by design: designing public deliberation into online platforms

Advocates of deliberative democracy have always hoped that the Internet would provide the means for an improved public sphere. But what particular platform features should we look to, to promote deliberative debate online? In their Policy & Internet article “Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms”, Katharina Esau, Dennis Friess, and Christiane Eilders show how differences in the design of various news platforms result in significant variation in the quality of deliberation, measured as rationality, reciprocity, respect, and constructiveness.

The empirical findings of their comparative analysis across three types of news platforms broadly support the assumption that platform design affects the level of deliberative quality of user comments. Deliberation was most likely to be found in news fora, which are of course specifically designed to initiate user discussions. News websites showed a lower level of deliberative quality, with Facebook coming last in terms of meeting deliberative design criteria and sustaining deliberation. However, while Facebook performed poorly in terms of overall level of deliberative quality, it did promote a high degree of general engagement among users.

The study’s findings suggest that deliberative discourse in the virtual public sphere of the Internet is indeed possible, which is good news for advocates of deliberative theory. However, it will only be possible by carefully considering how platforms function and how they are designed. Some may argue that the “power of design” (shaped by organizers like media companies) contradicts the basic idea of open debate amongst equals, where the only necessary force is Habermas’s “forceless force of the better argument”. These advocates of an utterly free virtual public sphere may be disappointed, given it’s clear that deliberation is only likely to emerge if the platform is designed in a particular way.

We caught up with the authors to discuss their findings:

Ed: Just briefly: what design features did you find helped support public deliberation, i.e. reasoned, reciprocal, respectful, constructive discussion?

Katharina / Dennis / Christiane: There are several design features which are known to influence online deliberation. However, in this study we particularly focus on moderation, asynchronous discussion, clear topic definition, and the availability of information, which we have found to have a positive influence on the quality of online deliberation.

Ed.: I associate “Internet as a deliberative space” with Habermas, but have never read him: what’s the short version of what he thinks about “the public sphere” — and how the Internet might support this?

Katharina / Dennis / Christiane: Well, Habermas describes the public sphere as a space where free and equal people discuss topics of public import in a specific way. The respectful exchange of rational reasons is crucial in this normative ideal. Due to its open architecture, the Internet has often been presented as providing the infrastructure for large scale deliberation processes. However, Habermas himself is very skeptical as to whether online spaces support his ideas on deliberation. Ironically, he is one of the most influential authors in online deliberation scholarship.

Ed.: What do advocates of the Internet as a “deliberation space” hope for — simply that people will feel part of a social space / community if they can like things or comment on them (and see similar viewpoints); or that it will result in actual rational debate, and people changing their minds to “better” viewpoints, whatever they may be? I can personally see a value for the former, but I can’t imagine the latter ever working, i.e. given people basically don’t change?

Katharina / Dennis / Christiane: We are thinking that both hopes are present in the current debate, and we partly agree with your perception that changing minds seems to be difficult. But we may also be facing some methodological or empirical issues here, because changing of minds is not an easy thing to measure. We know from other studies that deliberation can indeed cause changes of opinion. However, most of this probably takes place within the individual’s mind. Robert E. Goodin has called this process “deliberation within” and this is not accessible through content analysis. People do not articulate “Oh, thanks for this argument, I have changed my mind”, but they probably take something away from online discussions which makes them more open minded.

Ed.: Does Wikipedia provide an example where strangers have (oddly!) come together to create something of genuine value — but maybe only because they’re actually making a specific public good? Is the basic problem of the idea of the “Internet supporting public discourse” that this is just too aimless an activity, with no obvious individual or collective benefit?

Katharina / Dennis / Christiane: We think Wikipedia is a very particular case. However, we can learn from this case that the collective goal plays a very important role for the quality of contributions. We know from empirical research that if people have the intention of contributing to something meaningful, discussion quality is significantly higher than in online spaces without that desire to have an impact.

Ed.: I wonder: isn’t Twitter the place where “deliberation” now takes place? How does it fit into, or inform, the deliberation literature, which I am assuming has largely focused on things like discussion fora?

Katharina / Dennis / Christiane: This depends on the definition of the term “deliberation”. We would argue that the limitation to 280 characters is probably not the best design feature for meaningful deliberation. However, we may have to think about deliberation in less complex contexts in order to reach more people; but this is a polarizing debate.

Ed.: You say that “outsourcing discussions to social networking sites such as Facebook is not advisable due to the low level of deliberative quality compared to other news platforms”. Facebook has now decided that instead of “connecting the world” it’s going to “bring people closer together” — what would you recommend that they do to support this, in terms of the design of the interactive (or deliberative) features of the platform?

Katharina / Dennis / Christiane: This is a difficult one! We think that the quality of deliberation on Facebook would strongly benefit from moderators, who should be more present on the platform to structure the discussions. By this we mean not only professional moderators but also participative forms of moderation, which could be encouraged by mechanisms that support such behaviour.

Read the full article: Katharina Esau, Dennis Friess, and Christiane Eilders (2017) Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms. Policy & Internet 9 (3) 321-342.

Katharina (@kathaesa), Dennis, and Christiane were talking to blog editor David Sutcliffe.

Could Counterfactuals Explain Algorithmic Decisions Without Opening the Black Box?

The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem—to put it mildly.

In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR” which is forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm.

Relying on counterfactual explanations as a means to help us act rather than merely to understand could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.

We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:

Ed: There’s a lot of discussion about algorithmic “black boxes” — where decisions are made about us, using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?

Sandra: Basically, every decision that can be made by a human can now be made by an algorithm. Which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and correlations that even experienced humans might miss, for example in predicting disease. They are also very cost efficient—they don’t get tired, and they don’t need holidays. This could help to cut costs, for example in healthcare.

Algorithms are also certainly more consistent than humans in making decisions. We have the famous example of judges varying the severity of their judgements depending on whether or not they’ve had lunch. That wouldn’t happen with an algorithm. That’s not to say algorithms are always going to make better decisions: but they do make more consistent ones. If the decision is bad, it’ll be distributed equally, but still be bad. Of course, in a certain way humans are also black boxes—we don’t understand what humans do either. But you can at least try to understand an algorithm: it can’t lie, for example.

Brent: In principle, any sector involving human decision-making could be prone to decision-making by algorithms. In practice, we already see algorithmic systems either making automated decisions or producing recommendations for human decision-makers in online search, advertising, shopping, medicine, criminal justice, etc. The information you consume online, the products you are recommended when shopping, the friends and contacts you are encouraged to engage with, even assessments of your likelihood to commit a crime in the immediate and long-term future—all of these tasks can currently be affected by algorithmic decision-making.

Ed: I can see that algorithmic decision-making could be faster and better than human decisions in many situations. Are there downsides?

Sandra: Simple algorithms that follow a basic decision tree (with parameters decided by people) can be easily understood. But we’re now also using much more complex systems like neural nets that act in a very unpredictable way, and that’s the problem. The system is also starting to become autonomous, rather than being under the full control of the operator. You will see the output, but not necessarily why it got there. This also happens with humans, of course: I could be told by a recruiter that my failure to land a job had nothing to do with my gender (even if it did); an algorithm, however, would not intentionally lie. But of course the algorithm might be biased against me if it’s trained on biased data—thereby reproducing the biases of our world.

We have seen that the COMPAS algorithm used by US judges to calculate the probability of re-offending when making sentencing and parole decisions is a major source of discrimination. Data provenance is massively important, and probably one of the reasons why we have biased decisions. We don’t necessarily know where the data comes from, and whether it’s accurate, complete, biased, etc. We need to have lots of standards in place to ensure that the data set is unbiased. Only then can the algorithm produce nondiscriminatory results.

A more fundamental problem with predictions is that you might never know what would have happened—as you’re just dealing with probabilities; with correlations in a population, rather than with causalities. Another problem is that algorithms might produce correct decisions, but not necessarily fair ones. We’ve been wrestling with the concept of fairness for centuries, without consensus. But a lack of fairness is certainly not something the system will correct by itself—that is something society must correct.

Brent: The biases and inequalities that exist in the real world and in real people can easily be transferred to algorithmic systems. Humans training learning systems can inadvertently or purposefully embed biases into the model, for example through labelling content as ‘offensive’ or ‘inoffensive’ based on personal taste. Once learned, these biases can spread at scale, exacerbating existing inequalities. Eliminating these biases can be very difficult, hence we currently see much research done on the measurement of fairness or detection of discrimination in algorithmic systems.

These systems can also be very difficult—if not impossible—to understand, for experts as well as the general public. We might traditionally expect to be able to question the reasoning of a human decision-maker, even if imperfectly, but the rationale of many complex algorithmic systems can be highly inaccessible to people affected by their decisions. These potential risks aren’t necessarily reasons to forego algorithmic decision-making altogether; rather, they can be seen as potential effects to be mitigated through other means (e.g. a loan programme weighted towards historically disadvantaged communities), or at least to be weighed against the potential benefits when choosing whether or not to adopt a system.
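To give a flavour of what “measuring fairness” can look like in practice, here is a minimal, entirely hypothetical sketch that compares approval rates across two groups of applicants; the data, column names, and choice of metric (a simple demographic parity gap) are illustrative assumptions, not anything taken from the article.

```python
# A minimal, hypothetical sketch of one common fairness check: comparing
# approval rates across groups ("demographic parity"). All data are invented.
import pandas as pd

# Hypothetical decisions produced by some model: 1 = approved, 0 = refused
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)                                    # approval rate per group
print(f"Demographic parity gap: {parity_gap:.2f}")
# A large gap is a prompt to inspect the training data and model, not proof of
# discrimination by itself; other metrics (e.g. equalised odds) can disagree.
```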

Ed: So it sounds like many algorithmic decisions could be too complex to “explain” to someone, even if a right to explanation became law. But you propose “counterfactual explanations” as an alternative, i.e. explaining to the subject what would have to change (e.g. about a job application) for a different decision to be arrived at. How does this simplify things?

Brent: So rather than trying to explain the entire rationale of a highly complex decision-making process, counterfactuals allow us to provide simple statements about what would have needed to be different about an individual’s situation to get a different, preferred outcome. You basically work from the outcome: you say “I am here; what is the minimum I need to do to get there?” By providing simple statements that are generally meaningful, and that reveal a small bit of the rationale of a decision, the individual has grounds to change their situation or contest the decision, regardless of their technical expertise. Understanding even a bit of how a decision is made is better than being told “sorry, you wouldn’t understand”—at least in terms of fostering trust in the system.

Sandra: And the nice thing about counterfactuals is that they work with highly complex systems, like neural nets. They don’t explain why something happened, but they explain what happened. And three things people might want to know are:

(1) What happened: why did I not get the loan (or get refused parole, etc.)?

(2) Information so I can contest the decision if I think it’s inaccurate or unfair.

(3) Even if the decision was accurate and fair, tell me what I can do to improve my chances in the future.

Machine learning and neural nets make use of so much information that individuals have really no oversight of what they’re processing, so it’s much easier to give someone an explanation of the key variables that affected the decision. With the counterfactual idea of a “close possible world” you give an indication of the minimal changes required to get what you actually want.
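To make the “close possible world” idea concrete, here is a toy sketch that searches for the smallest income increase that would flip a refusal into an approval under a made-up loan scorer. The scoring function, feature names, and brute-force search are invented for illustration; the article itself treats counterfactual generation more generally, as an optimisation over the model’s inputs rather than a one-feature search.

```python
# Toy sketch of a counterfactual explanation: find the smallest change to one
# feature (income) that flips the decision of a hypothetical loan scorer.
import numpy as np

def loan_score(income, debt):
    """Made-up scorer: approve when the score is positive."""
    return 0.004 * income - 0.02 * debt - 100

def counterfactual_income(income, debt, step=500, max_income=200_000):
    """Smallest income (searched in £500 steps) at which the loan is approved."""
    if loan_score(income, debt) > 0:
        return None  # already approved; no counterfactual needed
    for candidate in np.arange(income, max_income, step):
        if loan_score(candidate, debt) > 0:
            return float(candidate)
    return None  # no attainable counterfactual within the search range

needed = counterfactual_income(income=24_000, debt=3_000)
print(f"You would have been approved with an income of about £{needed:,.0f}")
```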

Ed: So would a series of counterfactuals (e.g. “over 18” “no prior convictions” “no debt”) essentially define a space within which a certain decision is likely to be reached? This decision space could presumably be graphed quite easily, to help people understand what factors will likely be important in reaching a decision?

Brent: This would only work for highly simplistic, linear models, which are not normally the type that confound human capacities for understanding. The complex systems that we refer to as ‘black boxes’ are highly dimensional and involve a multitude of (probabilistic) dependencies between variables that can’t be graphed simply. It may be the case that if I were aged between 35 and 40 with an income of £30,000, I would not get a loan. But I could be told that if I had an income of £35,000, I would have gotten the loan. I may then assume that an income over £35,000 guarantees me a loan in the future. But it may turn out that I would be refused a loan with an income above £40,000 because of a change in tax bracket. Non-linear relationships of this type make it misleading to graph decision spaces. For simple linear models such a graph may be a very good idea, but for black box systems it could in fact be highly misleading.

Chris: As Brent says, we’re concerned with understanding complicated algorithms that don’t just use hard cut-offs based on binary features. To use your example, maybe a little bit of debt is acceptable, but it would increase your risk of default slightly, so the amount of money you need to earn would go up. Or maybe certain past convictions only increase your risk of defaulting slightly, and can be compensated for with a higher salary. It’s not at all obvious how you could graph these complicated interdependencies over many variables together. This is why we settled on counterfactuals as a way to give people a direct and easy-to-understand path to move from the decision they have now to a more favourable one at a later date.
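A toy example of the kind of non-linearity Brent and Chris describe: a single cliff edge in a made-up scoring rule breaks the intuition that more income always helps, so a simple one-dimensional graph of the decision space would mislead. The benefit rule and all numbers below are invented.

```python
# Hypothetical non-monotonic decision rule: the model counts a means-tested
# benefit that is withdrawn entirely above £40,000, creating a cliff edge.
def approved(income):
    benefit = 8_000 if income <= 40_000 else 0
    return income + benefit > 42_000

for income in (35_000, 39_000, 41_000, 45_000):
    print(f"£{income:,}: {'approved' if approved(income) else 'refused'}")
# £35,000 and £39,000 are approved, £41,000 is refused, £45,000 is approved
# again: a single "income threshold" graph could not represent this decision.
```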

Ed: But could a counterfactual approach just end up kicking the can down the road, if we know “how” a particular decision was reached, but not “why” the algorithm was weighted in such a way to produce that decision?

Brent: It depends what we mean by “why”. If this is “why” in the sense of why the system was designed this way, to consider this type of data for this task, then we should be asking these questions while these systems are designed and deployed. Counterfactuals address decisions that have already been made, but they can still reveal uncomfortable knowledge about a system’s design and functionality, so they can certainly inform “why” questions.

Sandra: Just to echo Brent, we don’t want to imply that asking the “why” is unimportant—I think it’s very important, and interpretability as a field has to be pursued, particularly if we’re using algorithms in highly sensitive areas. Even if we have the “what”, the “why” question is still necessary to ensure the safety of those systems.

Chris: And anyone who’s talked to a three-year-old knows there is an endless stream of “why” questions that can be asked. But already, counterfactuals provide a major step forward in answering “why”, compared to previous approaches that were concerned with providing approximate descriptions of how algorithms make decisions—but not the “why” or the external facts leading to that decision. I think when judging the strength of an explanation, you also have to look at questions like “How easy is this to understand?” and “How does this help the person I’m explaining things to?” For me, counterfactuals are a more immediately useful explanation than something that explains where the weights came from. Even if you did know, what could you do with that information?

Ed: I guess the question of algorithmic decision-making in society involves a hugely complex intersection of industry, research, and policy-making? Are we in control of things?

Sandra: Artificial intelligence (and the technology supporting it) is an area where many sectors are now trying to work together, including in the crucial areas of fairness, transparency and accountability of algorithmic decision-making. I feel that at the moment we see a very multi-stakeholder approach, and I hope that continues in the future. We can see, for example, that industry is very concerned with it—the Partnership on AI is addressing these topics and trying to come up with a set of industry guidelines, recognising the responsibilities inherent in producing these systems. There are also lots of data scientists (e.g. at the OII and the Turing Institute) working on these questions. Policy-makers around the world (e.g. in the UK, EU, US, and China) are preparing their countries for the AI future, so it’s on everybody’s mind at the moment. It’s an extremely important topic.

Law and ethics obviously have an important role to play. The opacity and unpredictability of AI, and its potentially discriminatory nature, require that we think about the legal and ethical implications very early on. That starts with educating the coding community, and ensuring diversity. At the same time, it’s important to have an interdisciplinary approach. At the moment we’re focusing a bit too much on the STEM subjects; there’s a lot of funding going to those areas (which makes sense, obviously), but the social sciences are currently a bit neglected despite the major role they play in recognising things like discrimination and bias, which you might not recognise from just looking at code.

Brent: Yes—and we’ll need much greater interaction and collaboration between these sectors to stay ‘in control’ of things, so to speak. Policy always has a tendency to lag behind technological developments; the challenge here is to stay close enough to the curve to prevent major issues from arising. The potential for algorithms to transform society is massive, so ensuring a quicker and more reflexive relationship between these sectors than normal is absolutely critical.

Read the full article: Sandra Wachter, Brent Mittelstadt, Chris Russell (2018) Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology (Forthcoming).

This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.


Sandra Wachter, Brent Mittelstadt and Chris Russell were talking to blog editor David Sutcliffe.

Why we shouldn’t be pathologizing online gaming before the evidence is in

Internet-based video games are a ubiquitous form of recreation pursued by the majority of adults and young people. With sales eclipsing box office receipts, games are now an integral part of modern leisure. However, the American Psychiatric Association (APA) recently identified Internet Gaming Disorder (IGD) as a potential psychiatric condition and has called for research to investigate the potential disorder’s validity and its impacts on health and behaviour.

Research responding to this call for a better understanding of IGD is still at a formative stage, and there are active debates surrounding it. A growing literature suggests that excessive or problematic gaming may be related to poorer health, though findings in this area are mixed. Some argue for a theoretical framing akin to a substance abuse disorder (i.e. where gaming is considered to be inherently addictive), while others frame Internet-based gaming as a self-regulatory challenge for individuals.

In their article “A prospective study of the motivational and health dynamics of Internet Gaming Disorder”, Netta Weinstein, the OII’s Andrew Przybylski, and Kou Murayama address this gap in the literature by linking self-regulation and Internet Gaming Disorder research. Drawing on a representative sample of 5,777 American adults, they examine how problematic gaming emerges from a state of individual “dysregulation” and how it predicts health — finding no evidence directly linking IGD to health over time.

This negative finding indicates that IGD may not, in itself, be robustly associated with important clinical outcomes. As such, it may be premature to invest in managing IGD with the same kinds of approaches taken in response to substance-based addiction disorders. Further, the findings suggest that more high-quality evidence regarding clinical and behavioural effects is needed before concluding that IGD is a legitimate candidate for inclusion in future revisions of the Diagnostic and Statistical Manual of Mental Disorders.

We caught up with Andy to explore the implications of the study:

Ed: To ask a blunt question upfront: do you feel that Internet Gaming Disorder is a valid psychiatric condition (and that “games can cause problems”)? Or is it still too early to say?

Andy: No, it is not. It’s difficult to overstate how sceptical the public should be of researchers who claim, and communicate their research, as if Internet addiction, gaming addiction, or Internet gaming disorder (IGD) are recognized psychiatric disorders. The fact of the matter is that American psychiatrists working on the most recent revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) highlighted that problematic online play was a topic they were interested in learning more about. These concerns appear in Section III of the DSM-5 (entitled “Emerging Measures and Models”). For those interested in this debate, see this position paper.

Ed: Internet gaming seems like quite a specific activity to worry about: how does it differ from things like offline video games, online gambling and casino games; or indeed the various “problematic” uses of the Internet that lead to people admitting themselves to digital detox camps?

Andy: In some ways computer games, and Internet games in particular, are distinct from other activities. They are frequently updated to meet players’ expectations, and some business models, such as pay-to-play, are explicitly designed to encourage highly engaged players to spend real money on in-game advantages. Detox camps are very worrying to me as a scientist because they have no scientific basis, many of those who run them have financial conflicts of interest when they comment in the press, and there have been a number of deaths at these facilities.

Ed: You say there are two schools of thought: first, that if IGD is indeed a valid condition, it should be framed as an addiction, i.e. that there’s something inherently addictive about certain games; or alternatively, that it should be framed as a self-regulatory challenge relating to an individual’s self-control. I guess intuitively it might involve a bit of both: online environments can be very persuasive, and some people are easily persuaded?

Andy: Indeed it could be. As researchers mainly interested in self-regulation, we’re most interested in gaming as one of many activities that can be successfully (or unsuccessfully) integrated into everyday life. Unfortunately we don’t know much for sure about whether there is something inherently addictive about games, because the research literature is based largely on inferences drawn from correlational data, collected from convenience samples, with post-hoc analyses. Because the evidence base is of such low quality, most of the published findings (i.e. correlations and factor analyses) presented in support of gaming addiction as a valid condition likely suffer from the Texas Sharpshooter Fallacy.

Ed: Did you examine the question of whether online games may trigger things like anxiety, depression, violence, isolation etc. — or whether these conditions (if pre-existing) might influence the development of IGD?

Andy: Well, our modelling focused on the links between Internet Gaming Disorder, health (mental, physical, and social), and motivational factors (feeling competent, choiceful, and a sense of belonging) examined at two time points six months apart. We found that those who had their motivational needs met at the start of the study were more likely to have higher levels of health six months later and were less likely to say they experienced some of the symptoms of Internet Gaming Disorder.

Though there was no direct link between Internet Gaming Disorder and health six months later, we performed an exploratory analysis (one we did not pre-register) and found an indirect link between Internet Gaming Disorder and health by way of motivational factors. In other words, Internet Gaming Disorder was linked to lower levels of feeling competent, choiceful, and connected, which were in turn linked to lower levels of health.
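For readers curious what an “indirect link” means statistically, here is a minimal mediation-style sketch on simulated data. It is not the authors’ pre-registered or exploratory model; the variable names, effect sizes, and simulated sample are invented purely to illustrate the idea of an association that runs through a mediator.

```python
# Hypothetical sketch of probing an indirect (mediated) effect with two OLS
# regressions: IGD symptoms -> need satisfaction -> health. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_777                                  # matches the study's sample size; data are fake
igd = rng.normal(size=n)                   # IGD symptom score
needs = -0.3 * igd + rng.normal(size=n)    # need satisfaction (competence, choice, belonging)
health = 0.4 * needs + rng.normal(size=n)  # health, with no direct IGD effect built in

# Path a: does IGD predict need satisfaction?
a = sm.OLS(needs, sm.add_constant(igd)).fit().params[1]

# Path b and the direct effect c': do needs predict health, controlling for IGD?
fit = sm.OLS(health, sm.add_constant(np.column_stack([needs, igd]))).fit()
b, c_prime = fit.params[1], fit.params[2]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
```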

Ed: All games are different. How would a clinician identify if someone was genuinely “addicted” to a particular game — there would presumably have to be game-by-game ratings of their addictive potential (like there are with drugs). How would anyone find the time to do that? Or would diagnosis focus more on the individual’s behaviour, rather than what games they play? I suppose this goes back to the question of whether “some games are addictive” or whether just “some people have poor self-control”?

Andy: No one knows. In fact, the APA doesn’t define what “Internet Games” are. In our research we ask participants to define it for themselves by “Think[ing] about the Internet games you may play on Facebook (e.g. Farmville), Tablet/Smartphones (e.g. Candy Crush), or Computer/Consoles (e.g. Minecraft).” It’s very difficult to overstate how suboptimal this state of affairs is from a scientific perspective.

Ed: Is it odd that it was the APA’s Substance-Related Disorders Work Group that called for research into IGD? Are “Internet Games” unique in being classed alongside substances, or are there other information-based behaviours that fall under the group’s remit?

Andy: Yes, it’s very odd. Our research group is not privy to these discussions, but my understanding is that a range of behaviours and other technology-related activities, such as general Internet use, have been discussed.

Ed: A huge amount of money must be spent on developing and refining these games, i.e. to get people to spend as much time (and money) as possible playing them. Are academics (and clinicians) always going to be playing catch-up to industry?

Andy: I’m not sure that there is one answer to this. One useful way to think about online games is the example of a gym. Gyms are most profitable when many people pay for (and don’t cancel) their memberships while the owners maintain a small footprint. The world’s most successful gym might be a one-square-metre facility with seven billion members, none of whom ever goes. Many online games are like this: some costs scale nicely, but others, like servers, community management, upkeep, and power, are high. There are many people studying the addictive potential of games, but because they constantly reinvent the wheel by creating duplicate survey instruments (there are literally dozens that have been used only once or twice), very little of real-world relevance is ever learned or transmitted to the public.

Ed: It can’t be trivial to admit another condition into the Diagnostic and Statistical Manual of Mental Disorders (DSM-5)? Presumably there must be firm (reproducible) evidence that it is a (persistent) problem for certain people, with a specific (identifiable) cause — given it could presumably be admitted in courts as a mitigating condition, and possibly also have implications for health insurance and health policy? What are the wider implications if it does end up being admitted to the DSM-5?

Andy: It is very serious stuff. Opening the door to pathologizing one of the world’s most popular recreational activities risks stigmatizing hundreds of millions of people, and pushing already overstretched mental health systems past breaking point.

Ed: You note that your study followed a “pre-registered analysis plan” — what does that mean?

Andy: We’ve discussed the wider problems in social, psychological, and medical science before. But basically, pre-registration and Registered Reports give scientists a way to record their hypotheses in advance of data collection. This improves the quality of the inferences researchers draw from experiments and from large-scale social data science. In this study, as in our other work, we recorded our sampling plan, our analysis plan, and our materials before we collected our data.

Ed: And finally: what follow up studies are you planning?

Andy: We are now conducting a series of studies investigating problematic play in younger participants with a focus on child-caregiver dynamics.

Read the full article: Weinstein N, Przybylski AK, Murayama K. (2017) A prospective study of the motivational and health dynamics of Internet Gaming Disorder. PeerJ 5:e3838 https://doi.org/10.7717/peerj.3838

Additional peer-reviewed articles in this area by Andy include:

Przybylski, A.K. & Weinstein N. (2017). A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital Screens and the Mental Well-Being of Adolescents. Psychological Science. DOI: 10.1177/0956797616678438.

Przybylski, A. K., Weinstein, N., & Murayama, K. (2016). Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2016.16020224.

Przybylski, A. K. (2016). Mischievous responding in Internet Gaming Disorder research. PeerJ, 4, e2401. https://doi.org/10.7717/peerj.2401

For more on the ongoing “crisis in psychology” and how pre-registration of studies might offer a solution, see this discussion with Andy and Malte Elson: Psychology is in crisis, and here’s how to fix it.

Andy Przybylski was talking to blog editor David Sutcliffe.