Articles

A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for.

To what extent do VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises? Image: ep_jhu (Flickr CC BY-NC 2.0)

In many countries, Voting Advice Applications (VAAs) have become an almost indispensable part of the electoral process: they play an important role in the campaigning activities of parties and candidates, form an essential element of media coverage of elections, and are widely used by citizens. A number of studies have shown that VAA use has an impact on the cognitive behaviour of users, on their likelihood to participate in elections, and on the choice of the party they vote for. These applications are based on the idea of issue and proximity voting—the parties and candidates recommended by VAAs are those with the highest number of matching positions on a number of political questions and issues. Many of these questions are much more specific and detailed than party programmes and electoral platforms, and show voters exactly what the party or candidates stand for and how they will vote in parliament once elected. In his Policy & Internet article “Do VAAs Encourage Issue Voting and Promissory Representation? Evidence From the Swiss Smartvote,” Andreas Ladner examines the extent to which VAAs alter the way voters perceive the meaning of elections, and encourage them to hold politicians to account for election promises. His main hypothesis is that VAAs lead to “promissory representation”—where parties and candidates are elected for their promises and sanctioned by the electorate if they don’t keep them. He suggests that as these tools become more popular, the “delegate model” is likely to increase in popularity: i.e. one in which politicians are regarded as delegates voted into parliament to keep their promises, rather than being given a free mandate to act as they see fit (the “trustee model”).

We caught up with Andreas to discuss his findings: Ed.: You found that issue-voters were more likely (than other voters) to say they would sanction a politician who broke their election promises. But also that issue voters are less politically engaged. So is this…
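The proximity-matching idea at the heart of VAAs can be sketched in a few lines: a user's answers to a set of issue statements are compared with each party's stated positions, and parties are ranked by their share of matching positions. The issue names, answer scale, and party positions below are invented for illustration; real VAAs such as Smartvote use richer answer scales and weighting schemes.

```python
def match_score(user_answers, party_positions):
    """Share of issue statements on which the user and a party agree."""
    matches = sum(1 for issue, answer in user_answers.items()
                  if party_positions.get(issue) == answer)
    return matches / len(user_answers)

# Hypothetical answers on three issue statements ("agree"/"disagree")
user = {"raise_taxes": "agree", "eu_membership": "agree", "immigration_caps": "disagree"}
parties = {
    "Party A": {"raise_taxes": "agree", "eu_membership": "agree", "immigration_caps": "agree"},
    "Party B": {"raise_taxes": "agree", "eu_membership": "agree", "immigration_caps": "disagree"},
}

# Rank parties from best to worst match
ranking = sorted(parties, key=lambda p: match_score(user, parties[p]), reverse=True)
print(ranking)  # → ['Party B', 'Party A']
```

The more specific the issue statements are, the more precisely such a score pins parties to concrete commitments — which is exactly what makes VAAs a potential driver of promissory representation.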

The U.S.–Mexico border is a complex region encompassing both positives and negatives — but understanding these narratives could have a real-world impact on policy along the border.

The U.S.–Mexico border is the location of a legal flow of economic trade worth $300 billion each year, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. Image: BBC World Service (Flickr CC BY-NC 2.0)

The U.S.–Mexico border region is home to approximately 12 million people, and is the most-crossed international border in the world. Unlike the physical border itself, the image people hold of “the border” is not firmly established, and can be modified. One way is via narratives (or stories), which are a powerful tool for gaining support for public policies. Politicians’ narratives about the border have historically been perpetuated by the traditional media, particularly when this allows them to publish sensational and attention-grabbing news stories. However, new social media, including YouTube, provide opportunities for less-mainstream narratives of cooperation. In their Policy & Internet article “Do New Media Support New Policy Narratives? The Social Construction of the U.S.–Mexico Border on YouTube”, Donna L. Lybecker, Mark K. McBeth, Maria A. Husmann, and Nicholas Pelikan find that YouTube videos about the U.S.–Mexico border focus (perhaps unsurprisingly) on mainstream, divisive issues such as security and violence, immigration, and drugs. However, the videos appear to construct more favourable perspectives of the border region than traditional media, with around half constructing a sympathetic view of the border and the people associated with it. Common perceptions of the border generally take two distinct forms. One holds the U.S.–Mexico border to be the location of a legal flow of economic trade worth $300 billion each year, a line that millions of people legally cross annually, the frontier of 100 years of peaceful coexistence between two countries, and the point of integration for the U.S.–Mexico relationship. An alternative perspective (particularly common since 9/11) focuses less on economic trade and legal crossing and more on undocumented immigration, violence and drug wars, and a U.S.-centric view of “us versus them”.
In order to garner public support for their “solutions” to these issues, politicians often define the border using one of these perspectives. Acceptance of the first view might well allow policymakers to find cooperative solutions to joint problems. Acceptance of…

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges.

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community—i.e. the ability of people to act individually or collectively to benefit the community—is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions—including opportunities and skills development, resource mobilisation, leadership, participatory decision making, etc.—all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time-consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured. A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables. The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose—being accurate, able to deal with small data sets and nonlinear data, and providing information about how each variable in the dataset contributes to predictive accuracy.
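The general approach can be sketched with scikit-learn's `RandomForestRegressor`. This is a rough, hypothetical illustration, not the authors' actual pipeline: the indicator names are invented, and both the open-data features and the survey-derived capacity score are simulated stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented open-government indicators for 200 hypothetical areas
features = ["median_income", "pct_owner_occupied", "pct_under_25", "housing_condition_index"]
X = rng.normal(size=(200, len(features)))

# Simulated survey-derived capacity score, mostly driven by income and housing condition
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Feature importances indicate how much each variable contributes to prediction
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```

The per-variable importances are the property highlighted in the article: unlike a black-box predictor, a Random Forest reports which open-data variables carry the predictive signal.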
We caught up with the authors to discuss their findings: Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys? Authors: Our research stemmed from an observation of the measures of social…

Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Lord of the Rings slot machines at the Flamingo, image by jenneze (Flickr CC BY-NC 2.0). Unlike gambling played for real money, “social casino games” generally have no monetary prizes.

Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry—social casino game revenues grew 97 percent between 2012 and 2013, with a US$3.5 billion market size by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetised features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators. Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points, the two are indistinguishable. However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap. In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that the advertisements typically feature imagery likely to appeal to young adults, with message themes including the glamorisation and normalisation of gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.
Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers…

Concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put.

Haitians set up impromptu tent cities throughout the capital after an earthquake measuring over 7 on the Richter scale rocked Port-au-Prince, Haiti, just before 5 pm on January 12, 2010.

User-generated content can provide a useful source of information during humanitarian crises like armed conflict or natural disasters. With the rise of interactive websites, social media, and online mapping tools, volunteer crisis mappers are now able to compile geographic data as a humanitarian crisis unfolds, allowing individuals across the world to organise as ad hoc groups to participate in data collection. Crisis mappers have created maps of earthquake damage and trapped victims, analysed satellite imagery for signs of armed conflict, and cleaned Twitter data sets to uncover useful information about unfolding extreme weather events like typhoons. Although these volunteers provide useful technical assistance to humanitarian efforts (e.g. when maps and records don’t exist or are lost), their lack of affiliation with “formal” actors, such as the United Nations, and the very fact that they are volunteers, make them a dubious data source. Indeed, concerns have been raised about the quality of amateur mapping and data efforts, and the uses to which they are put. Most of these concerns assume that volunteers have no professional training. And herein lies the contradiction: by doing the work for free and of their own free will, the volunteers make these efforts possible and innovative, but this is also why crisis mapping is doubted and questioned by experts. By investigating crisis-mapping volunteers and organisations, Elizabeth Resor’s article “The Neo-Humanitarians: Assessing the Credibility of Organised Volunteer Crisis Mappers” published in Policy & Internet presents evidence of a more professional cadre of volunteers and a means to distinguish between different types of volunteer organisations. Given these organisations now play an increasingly integrated role in humanitarian responses, it’s crucial that their differences are understood and that concerns about the volunteers are answered.
We caught up with Elizabeth to discuss her findings: Ed.: We have seen from Citizen Science (and Wikipedia) that large crowds of non-professional volunteers can produce work of incredible value, if projects are set up right. Are…

The Left–Right dimension is the most common way of conceptualising ideological difference. But in an ever more globalised world, are the concepts of Left and Right still relevant?

Theresa May meets European Council President Donald Tusk in April, ahead of the start of Brexit talks. Image: European Council President (Flickr CC BY-NC-ND 2.0)

The Left–Right dimension—based on the traditional cleavage in society between capital and labour—is the most common way of conceptualising ideological difference. But in an ever more globalised world, are the concepts of Left and Right still relevant? In recent years political scientists have increasingly come to talk of a two-dimensional politics in Europe, defined by an economic (Left–Right) dimension, and a cultural dimension that relates to voter and party positions on sociocultural issues. In his Policy & Internet article “Cleavage Structures and Dimensions of Ideology in English Politics: Evidence From Voting Advice Application Data”, Jonathan Wheatley argues that the cleavage that exists in many European societies between “winners” and “losers” of globalisation has engendered a new ideological dimension pitting “cosmopolitans” against “communitarians” and that draws on cultural issues relating to identity—rather than economic issues. He identifies latent dimensions from opinion data generated by two Voting Advice Applications deployed in England in 2014 and 2015—finding that the political space in England is defined by two main ideological dimensions: an economic Left–Right dimension and a cultural communitarian–cosmopolitan dimension. While they co-vary to a significant degree, with economic rightists tending to be more communitarian and economic leftists tending to be more cosmopolitan, these tendencies do not always hold and the two dimensions should be considered as separate. The identification of the communitarian–cosmopolitan dimension lends weight to the hypothesis of Kriesi et al. (2006) that politics is increasingly defined by a cleavage between “winners” and “losers” of globalisation, with “losers” tending to adopt a position of cultural demarcation and to perceive “outsiders”, such as immigrants and the EU, as a threat.
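The idea of recovering latent ideological dimensions from VAA opinion items can be illustrated generically with principal component analysis. This is not necessarily the method used in the article, and the data here are simulated: two hidden traits (one "economic", one "cultural") each drive a different subset of Likert-style items, and dimension reduction recovers that two dimensions explain most of the variation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 500  # simulated respondents

econ = rng.normal(size=n)   # latent economic Left-Right position
cult = rng.normal(size=n)   # latent communitarian-cosmopolitan position

# Six opinion items: the first three load on the economic trait,
# the last three on the cultural one, each with small response noise
items = np.column_stack(
    [econ + rng.normal(scale=0.3, size=n) for _ in range(3)]
    + [cult + rng.normal(scale=0.3, size=n) for _ in range(3)]
)

pca = PCA(n_components=2).fit(items)
print(pca.explained_variance_ratio_)  # two components dominate, one per latent trait
```

With real VAA data the picture is messier — as the article notes, the two dimensions co-vary — but the principle is the same: structure in how people answer many issue questions reveals a small number of underlying ideological dimensions.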
If an economic dimension pitting Left against Right (or labour against capital) defined the political arena in Europe in the twentieth century, maybe it’s a cultural cleavage that pits cosmopolitans against communitarians that defines politics in the twenty-first. We caught up with Jonathan to discuss his findings: Ed.: The big thing that happened…

It’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving digital developments.

“Whether your child is an artist, a storyteller, a singer or a scientist, I’m the lovable little friend that will bring that out!” says the Fisher-Price Smart Toy Bear.

Everyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to firmly confine this distinction to history: the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh around children going about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smartphones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries, and health centre waiting rooms.

2. Research confirms that apps on smartphones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles, and smart TVs) pick up data produced by children.

Marketing about smart toys tells us they are enhancing children’s play, augmenting children’s learning, incentivising children’s healthy habits, and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images.
This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star…

It is important for policymakers to ask how policy can bridge economic inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables?

The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Cameroonian Cybercafe by SarahTz (Flickr CC BY 2.0).

There is a consensus among researchers that ICT is an engine for growth, and it’s also considered by the OECD to be a part of fundamental infrastructure, like electricity and roads. The last decade has seen a rapid growth of Internet access across Africa, although it has not been evenly distributed. Some African countries have an Internet penetration of over 50 percent (such as the Seychelles and South Africa) whereas some resemble digital deserts, not even reaching two percent. Even more surprisingly, countries that are seemingly comparable in terms of economic development often show considerable differences in terms of Internet access (e.g., Kenya and Ghana). Being excluded from the Internet economy has negative economic and social implications; it is therefore important for policymakers to ask how policy can bridge this inequality. But does policy actually have an effect on these differences? And if so, which specific policy variables? In their Policy & Internet article “Crossing the Digital Desert in Sub-Saharan Africa: Does Policy Matter?”, Robert Wentrup, Xiangxuan Xu, H. Richard Nakamura, and Patrik Ström address the dearth of research assessing the interplay between policy and Internet penetration by identifying Internet penetration-related policy variables and institutional constructs in Sub-Saharan Africa. It is a first attempt to investigate whether Internet policy variables have any effect on Internet penetration in Sub-Saharan Africa. Based on a literature review and the available data, they examine four variables: (i) free flow of information (e.g. level of censorship); (ii) market concentration (i.e. whether or not Internet provision is monopolistic); (iii) the activity level of the Universal Service Fund (a public policy promoted by some governments and international telecom organisations to address digital inclusion); and (iv) total tax on computer equipment, including import tariffs on personal computers.
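The kind of cross-country analysis described above can be sketched as a simple regression of Internet penetration on policy variables. This is a hypothetical illustration only: the data are simulated, the variable names are stand-ins, and the article's actual data and model specification may well differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40  # hypothetical countries

# Invented policy variables
usf_activity = rng.uniform(0, 1, n)       # Universal Service Fund activity level
equipment_tax = rng.uniform(0, 0.4, n)    # total tax on computer equipment
censorship = rng.uniform(0, 1, n)         # level of censorship

# Simulated penetration: driven by USF activity (+) and equipment tax (-)
penetration = 0.3 + 0.4 * usf_activity - 0.5 * equipment_tax + rng.normal(scale=0.05, size=n)

# Ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n), usf_activity, equipment_tax, censorship])
beta, *_ = np.linalg.lstsq(X, penetration, rcond=None)
print(beta)  # coefficients: intercept, USF (+), equipment tax (-), censorship (~0)
```

In a sketch like this, the sign and magnitude of each coefficient indicate which policy levers are associated with penetration once the others are held constant; a real analysis would also report significance and control for economic covariates.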
The results show that only the activity level of the USF and low total tax on computer equipment are significantly positively related to Internet penetration…

The popularity of technologies and services that reveal insights about our daily lives paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection—and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance. This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By framing privacy as a choice between involvement in (or isolation from) various social and economic communities, they present information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist”—an autonomous individual who makes informed decisions about the disclosure of their personal information. But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference—a framework that persists in academic, regulatory, and commercial discourses in the United States.
Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and…

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed.

Reading of the NetMundial outcome document, by mikiwoz (Flickr CC BY-SA 2.0)

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed, and the role of the public in Internet governance debates is a critical issue for policymaking. The current dominant mechanism for public inclusion is the multistakeholder approach, i.e. one that includes governments, industry, and civil society in governance debates. Despite at times being used as a shorthand for public inclusion, multistakeholder governance is implemented in many different ways and has faced criticism, with some arguing that multistakeholder discussions serve as a cover for the growth of state dominance over the Web, and enable oligarchic domination of discourses that are ostensibly open and democratic. In her Policy & Internet article “Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial”, Sarah Myers West examines the role of the public in Internet governance debates, with reference to public inclusion at the 2014 Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial). NETmundial emerged at a point when public legitimacy was a particular concern for the Internet governance community, so finding ways to include the rapidly growing, and increasingly diverse, group of stakeholders in the governance debate was especially important for the meeting’s success. This is particularly significant as the Internet governance community faces problems of increasing complexity and diversity of views. The growth of the Internet has made the public central to Internet governance—but introduces problems around the growing number of stakeholders speaking different languages, with different technical backgrounds, and different perspectives on the future of the Internet.
However, the NETmundial example suggests that rather than unifying behind a single institution or achieving public consensus through a single deliberative forum, the Internet community may fragment into multiple publics, redistributing into a more networked and “agonistic” model. This…