Interviews

Examining the supply of channels for digital politics distributed by Swedish municipalities and understanding the drivers of variation in local online engagement.

Sweden is a leader in terms of digitalisation, but poorer municipalities struggle to find the resources to develop digital forms of politics. Image: Stockholm by Peter Tandlund (Flickr CC BY-NC-ND 2.0)

While much of the modern political process is now carried out digitally, ICTs have yet to bring democracies to their full utopian ideal. The drivers of involvement in digital politics from an individual perspective are well studied, but less attention has been paid to the supply side of online engagement in politics. In his Policy & Internet article “Inequality in Local Digital Politics: How Different Preconditions for Citizen Engagement Can Be Explained,” Gustav Lidén examines the supply of channels for digital politics distributed by Swedish municipalities, in order to understand the drivers of variation in local online engagement. He finds a positive trajectory for digital politics in Swedish municipalities, but with significant variation between municipalities when it comes to opportunities for engagement in local politics via their websites. These patterns are explained primarily by population size (digital politics is costly, and larger societies are probably better able to carry these costs), but also by economic conditions and education levels. He also finds that a lack of policies and unenthusiastic politicians creates poor possibilities for development, verifying previous findings that without citizen demand—and ambitious politicians—successful provision of channels for digital politics will be hard to achieve. We caught up with Gustav to discuss his findings: Ed.: I guess there must be a huge literature (also in development studies) on the interactions between connectivity, education, the economy, and supply and demand for digital government; and what the influencers are in each of these relationships. Not to mention causality. I’m guessing “everything is important, but nothing is clear”—is that fair? And do you think any “general principles” explaining demand and supply of electronic government/democracy could ever be established, if they haven’t already? 
Gustav: Although the literature in this field is becoming vast, the subfield that I am primarily engaged in, that is, the conditions for digital policy at the subnational level, has only recently attracted greater numbers of scholars. Even if predictors of these…

We might expect bot interactions to be relatively predictable and uneventful.

Wikipedia uses editing bots to clean articles: but what happens when their interactions go bad? Image of "Nomade", a sculpture in downtown Des Moines by Jason Mrachina (Flickr CC BY-NC-ND 2.0).

Recent years have seen a huge increase in the number of bots online—including search engine Web crawlers, online customer service chat bots, social media spambots, and content-editing bots in online collaborative communities like Wikipedia. (Bots are important contributors to Wikipedia, completing about 15% of all Wikipedia edits in 2014 overall, and more than 50% in certain language editions.) While the online world has turned into an ecosystem of bots (by which we mean computer scripts that automatically handle repetitive and mundane tasks), our knowledge of how these automated agents interact with each other is rather poor. But since bots are automata without capacity for emotions, meaning-making, creativity, or sociality, we might expect their interactions to be relatively predictable and uneventful. In their PLOS ONE article “Even good bots fight: The case of Wikipedia”, Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri analyse the interactions between bots that edit articles on Wikipedia. They track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction outcomes. Although Wikipedia bots are intended to support the encyclopaedia—identifying and undoing vandalism, enforcing bans, checking spelling, creating inter-language links, importing content automatically, mining data, identifying copyright violations, greeting newcomers, etc.—the authors find they often undid each other’s edits, with these sterile “fights” sometimes continuing for years. They suggest that even relatively “dumb” bots may give rise to complex interactions, carrying important implications for Artificial Intelligence research. Understanding these bot-bot interactions will be crucial for managing social media, providing adequate cyber-security, and designing autonomous vehicles (that don’t crash).
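The basic measurement behind the study, reverts between pairs of bots, can be illustrated with a minimal sketch. The bot names and revert log below are invented; in the actual research, reverts were identified from Wikipedia's edit histories:

```python
from collections import Counter

# Hypothetical revert log: each entry records that `reverter` undid an
# edit by `reverted` on a given article. (Invented names for illustration;
# the study extracted such events from Wikipedia edit histories.)
reverts = [
    ("Niels_Bohr", "BotA", "BotB"),
    ("Niels_Bohr", "BotB", "BotA"),
    ("Niels_Bohr", "BotA", "BotB"),
    ("Free_will",  "BotC", "BotD"),
]

# How often each bot undid another bot on each article.
revert_counts = Counter(
    (article, reverter, reverted) for article, reverter, reverted in reverts
)

def mutual_revert_pairs(reverts):
    """Return (article, pair) entries where two bots reverted each other
    on the same article: a simple signature of a potential bot 'fight'."""
    seen = {(a, x, y) for a, x, y in reverts}
    fights = set()
    for a, x, y in seen:
        if (a, y, x) in seen:  # the revert was reciprocated
            fights.add((a, frozenset((x, y))))
    return fights

fights = mutual_revert_pairs(reverts)
```

A pair of bots that kept reappearing in `fights` over months or years would be a candidate for the sterile, long-running conflicts the authors describe; one-directional reverts (like BotC undoing BotD above) are not counted.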
We caught up with Taha Yasseri and Luciano Floridi to discuss the implications of the findings: Ed.: Is there any particular difference between the way individual bots interact (and maybe get bogged down in conflict), and lines of vast and complex code interacting badly, or having unforeseen results (e.g. flash-crashes in automated trading):…

A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for.

To what extent do VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises? Image: ep_jhu (Flickr CC BY-NC 2.0)

In many countries, Voting Advice Applications (VAAs) have become an almost indispensable part of the electoral process, playing an important role in the campaigning activities of parties and candidates, forming an essential element of media coverage of the elections, and being widely used by citizens. A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for. These applications are based on the idea of issue and proximity voting—the parties and candidates recommended by VAAs are those with the highest number of matching positions on a number of political questions and issues. Many of these questions are much more specific and detailed than party programs and electoral platforms, and show the voters exactly what the party or candidates stand for and how they will vote in parliament once elected. In his Policy & Internet article “Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote,” Andreas Ladner examines the extent to which VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises. His main hypothesis is that VAAs lead to “promissory representation”—where parties and candidates are elected for their promises and sanctioned by the electorate if they don’t keep them. He suggests that as these tools become more popular, the “delegate model” is likely to increase in popularity: i.e. one in which politicians are regarded as delegates voted into parliament to keep their promises, rather than being given a free mandate to act as they see fit (the “trustee model”). We caught up with Andreas to discuss his findings: Ed.: You found that issue-voters were more likely (than other voters) to say they would sanction a politician who broke their election promises. But also that issue voters are less politically engaged. So is this…
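The proximity-matching idea underlying VAAs can be sketched in a few lines. This is a toy scheme with invented parties, issues, and a simple agreement count; real applications such as Smartvote use far longer questionnaires and more refined scoring:

```python
# Positions coded -1 = disagree, 0 = neutral, +1 = agree (assumed coding).
voter = {"eu_membership": 1, "fuel_tax": -1, "pension_age": 0, "immigration_cap": 1}

parties = {
    "Party_A": {"eu_membership": 1, "fuel_tax": 1, "pension_age": 0, "immigration_cap": -1},
    "Party_B": {"eu_membership": 1, "fuel_tax": -1, "pension_age": 0, "immigration_cap": 1},
    "Party_C": {"eu_membership": -1, "fuel_tax": -1, "pension_age": 1, "immigration_cap": -1},
}

def match_score(voter, party):
    """Count the issues on which voter and party take the same position."""
    return sum(voter[q] == party[q] for q in voter)

# Rank parties by number of matching positions; the top entry is the
# party a VAA of this kind would recommend.
ranked = sorted(parties, key=lambda p: match_score(voter, parties[p]), reverse=True)
```

The point of the example is only that the recommendation is driven by issue-by-issue agreement, which is exactly why such tools may push users toward issue voting rather than party loyalty.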

The U.S.–Mexico border is a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border.

The U.S.–Mexico border is the location of a legal flow of economic trade of $300 billion each year, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. Image: BBC World Service (Flickr CC BY-NC 2.0)

The U.S.–Mexico border region is home to approximately 12 million people, and is the most-crossed international border in the world. Unlike the current physical border, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention-grabbing news stories. However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border, and the people associated with it. The common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of economic trade of $300 billion each year, a line which millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.
In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of…

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges.

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community—i.e. the ability of people to act individually or collectively to benefit the community—is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions—including opportunities and skills development, resource mobilisation, leadership, participatory decision making, etc.—all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured. A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables. The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose—being accurate, able to deal with small data sets and nonlinear data, and providing information about how each variable in the dataset contributes to predictive accuracy. 
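As a rough illustration of the approach (not the authors' actual pipeline, data, or variables), the sketch below fits a Random Forest on synthetic stand-ins for open-data features and reads off each variable's contribution to predictive accuracy, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200

# Invented stand-ins for secondary open data: e.g. median income, share of
# owner-occupied housing, population density (all scaled to [0, 1]).
X = rng.random((n, 3))

# Synthetic target standing in for a survey-derived community-capacity
# score, driven mainly by the first feature, weakly by the second.
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importances indicate how much each variable contributes to
# predictive accuracy, which is the diagnostic the article highlights.
importances = model.feature_importances_
```

On real data one would of course hold out a test set and compare accuracy against a baseline (e.g. linear regression), mirroring the article's comparison of Random Forests with traditional statistical models.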
We caught up with the authors to discuss their findings: Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys? Authors: Our research stemmed from an observation of the measures of social…

Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Lord of the Rings slot machines at the Flamingo. Image by jenneze (Flickr CC BY-NC 2.0). Unlike gambling played for real money, “social casino games” generally have no monetary prizes.

Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry—social casino game revenues grew 97 percent between 2012 and 2013, with a US$3.5 billion market size by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetised features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators. Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points the two are indistinguishable. However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that advertisement imagery typically features images likely to appeal to young adults, with message themes including the glamorisation and normalisation of gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.
Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers.…

Concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put.

Haitians set up impromptu tent cities throughout the capital after an earthquake measuring over 7 on the Richter scale rocked Port-au-Prince, Haiti, just before 5 pm on January 12, 2010.

User-generated content can provide a useful source of information during humanitarian crises like armed conflict or natural disasters. With the rise of interactive websites, social media, and online mapping tools, volunteer crisis mappers are now able to compile geographic data as a humanitarian crisis unfolds, allowing individuals across the world to organise as ad hoc groups to participate in data collection. Crisis mappers have created maps of earthquake damage and trapped victims, analysed satellite imagery for signs of armed conflict, and cleaned Twitter data sets to uncover useful information about unfolding extreme weather events like typhoons. Although these volunteers provide useful technical assistance to humanitarian efforts (e.g. when maps and records don’t exist or are lost), their lack of affiliation with “formal” actors, such as the United Nations, and the very fact that they are volunteers, makes them a dubious data source. Indeed, concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put. Most of these concerns assume that volunteers have no professional training. And herein lies the contradiction: by doing the work for free and of their own volition, volunteers make these efforts possible and innovative, but this is also why crisis mapping is doubted and questioned by experts. By investigating crisis-mapping volunteers and organisations, Elizabeth Resor’s Policy & Internet article “The Neo-Humanitarians: Assessing the Credibility of Organised Volunteer Crisis Mappers” presents evidence of a more professional cadre of volunteers and a means to distinguish between different types of volunteer organisations. Given that these organisations now play an increasingly integrated role in humanitarian responses, it’s crucial that their differences are understood and that concerns about the volunteers are answered.
We caught up with Elizabeth to discuss her findings: Ed.: We have seen from Citizen Science (and Wikipedia) that large crowds of non-professional volunteers can produce work of incredible value, if projects are set up right. Are…

The Left–Right dimension is the most common way of conceptualising ideological difference. But in an ever more globalised world, are the concepts of Left and Right still relevant?

Theresa May meets European Council President Donald Tusk in April, ahead of the start of Brexit talks. Image: European Council President (Flickr CC BY-NC-ND 2.0)

The Left–Right dimension—based on the traditional cleavage in society between capital and labour—is the most common way of conceptualising ideological difference. But in an ever more globalised world, are the concepts of Left and Right still relevant? In recent years political scientists have increasingly come to talk of a two-dimensional politics in Europe, defined by an economic (Left–Right) dimension, and a cultural dimension that relates to voter and party positions on sociocultural issues. In his Policy & Internet article “Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data”, Jonathan Wheatley argues that the cleavage that exists in many European societies between “winners” and “losers” of globalisation has engendered a new ideological dimension pitting “cosmopolitans” against “communitarians”, one that draws on cultural issues relating to identity—rather than economic issues. He identifies latent dimensions from opinion data generated by two Voting Advice Applications deployed in England in 2014 and 2015—finding that the political space in England is defined by two main ideological dimensions: an economic Left–Right dimension and a cultural communitarian–cosmopolitan dimension. While they co-vary to a significant degree, with economic rightists tending to be more communitarian and economic leftists tending to be more cosmopolitan, these tendencies do not always hold and the two dimensions should be considered as separate. The identification of the communitarian–cosmopolitan dimension lends weight to the hypothesis of Kriesi et al. (2006) that politics is increasingly defined by a cleavage between “winners” and “losers” of globalisation, with “losers” tending to adopt a position of cultural demarcation and to perceive “outsiders”, such as immigrants and the EU, as a threat.
If an economic dimension pitting Left against Right (or labour against capital) defined the political arena in Europe in the twentieth century, maybe it’s a cultural cleavage that pits cosmopolitans against communitarians that defines politics in the twenty-first. We caught up with Jonathan to discuss his findings: Ed.: The big thing that happened…

It is important for policymakers to ask how policy can bridge economic inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables?

The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Cameroonian Cybercafe by SarahTz (Flickr CC BY 2.0).

There is a consensus among researchers that ICT is an engine for growth, and it’s also considered by the OECD to be a part of fundamental infrastructure, like electricity and roads. The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Some African countries have an Internet penetration of over 50 percent (such as the Seychelles and South Africa) whereas some resemble digital deserts, not even reaching two percent. Even more surprisingly, countries that are seemingly comparable in terms of economic development often show considerable differences in terms of Internet access (e.g., Kenya and Ghana). Being excluded from the Internet economy has negative economic and social implications; it is therefore important for policymakers to ask how policy can bridge this inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables? In their Policy & Internet article “Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter?”, Robert Wentrup, Xiangxuan Xu, H. Richard Nakamura, and Patrik Ström address the dearth of research assessing the interplay between policy and Internet penetration by identifying Internet penetration-related policy variables and institutional constructs in Sub-Saharan Africa. It is a first attempt to investigate whether Internet policy variables have any effect on Internet penetration in Sub-Saharan Africa, and to shed light on them. Based on a literature review and the available data, they examine four variables: (i) free flow of information (e.g. level of censorship); (ii) market concentration (i.e. whether or not internet provision is monopolistic); (iii) the activity level of the Universal Service Fund (a public policy promoted by some governments and international telecom organizations to address digital inclusion); and (iv) total tax on computer equipment, including import tariffs on personal computers. 
The results show that only the activity level of the USF and low total tax on computer equipment are significantly positively related to Internet penetration…

The popularity of technologies and services that reveal insights about our daily lives paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection—and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance. This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By framing privacy as a choice between involvement in (or isolation from) various social and economic communities, they present information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist”—an autonomous individual who makes informed decisions about the disclosure of their personal information. But there is increasing evidence that “control” is a problematic framework through which to operationalise privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference—a framework that persists in academic, regulatory, and commercial discourses in the United States.
Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and…