Articles

If we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger

Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network. Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms.

This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.
Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm…

There are more Wikipedia articles in English than Arabic about almost every Arabic-speaking country in the Middle East.

Image of rock paintings in the Tadrart Acacus region of Libya by Luca Galuzzi.

Wikipedia is often seen to be both an enabler and an equaliser. Every day hundreds of thousands of people collaborate on an (encyclopaedic) range of topics; writing, editing and discussing articles, and uploading images and video content. This structural openness combined with Wikipedia’s tremendous visibility has led some commentators to highlight it as “a technology to equalise the opportunity that people have to access and participate in the construction of knowledge and culture, regardless of their geographic placing” (Lessig 2003).

However, despite Wikipedia’s openness, there are also fears that the platform is simply reproducing worldviews and knowledge created in the Global North at the expense of Southern viewpoints (Graham 2011; Ford 2011). Indeed, there are indications that global coverage in the encyclopaedia is far from ‘equal’, with some parts of the world heavily represented on the platform, and others largely left out (Hecht and Gergle 2009; Graham 2011, 2013, 2014). These second-generation digital divides are not merely divides of Internet access (much discussed in the late 1990s), but gaps in representation and participation (Hargittai and Walejko 2008). Whereas most Wikipedia articles about European and East Asian countries are written in their dominant languages, for much of the Global South we see a dominance of articles written in English. These geographic differences in the coverage of different language versions of Wikipedia matter, because fundamentally different narratives can be (and are) created about places and topics in different languages (Graham and Zook 2013; Graham 2014).

If we undertake a ‘global analysis’ of this pattern by examining the number of geocoded articles (ie about a specific place) across Wikipedia’s main language versions (Figure 1), the first thing we can observe is the incredible human effort that has gone into describing ‘place’ in Wikipedia.
The second is the clear and highly uneven geography of information, with Europe and North America home to 84% of all geolocated articles. Almost all of Africa is…
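The kind of aggregation behind such an analysis can be sketched in a few lines. This is a minimal illustration, not the authors’ actual pipeline: the sample records and the `english_share` helper are invented for the example, whereas a real study would draw on millions of geocoded articles across all language editions.

```python
from collections import Counter

# Hypothetical sample of geotagged articles, each recorded as
# (language edition, country the article describes).
articles = [
    ("en", "Egypt"), ("en", "Egypt"), ("ar", "Egypt"),
    ("en", "France"), ("fr", "France"), ("fr", "France"),
]

# Count geocoded articles per (language, country) pair.
counts = Counter(articles)

def english_share(country):
    """Fraction of a country's geocoded articles written in English."""
    total = sum(n for (lang, c), n in counts.items() if c == country)
    return counts[("en", country)] / total if total else 0.0

print(f"Egypt:  {english_share('Egypt'):.0%} of geotagged articles in English")
print(f"France: {english_share('France'):.0%} of geotagged articles in English")
```

Comparing such shares across countries is one simple way of making the dominance of English-language coverage of the Global South visible.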

Arabic is one of the least represented major world languages on Wikipedia: few languages have more speakers and fewer articles than Arabic.

Image of the Umayyad Mosque (Damascus) by Travel Aficionado

Wikipedia currently contains over 9 million articles in 272 languages, far surpassing any other publicly available information repository. Being the first point of contact for most general topics (and therefore an effective site for framing any subsequent representations), it is an important platform from which we can learn whether the Internet facilitates increased open participation across cultures, or reinforces existing global hierarchies and power dynamics. Because the underlying political, geographic and social structures of Wikipedia are hidden from users, and because there have not been any large-scale studies of the geography of these structures and their relationship to online participation, entire groups of people (and regions) may be marginalised without their knowledge.

This process is important to understand, for the simple reason that Wikipedia content has begun to form a central part of services offered elsewhere on the Internet. When you look for information about a place on Facebook, the description of that place (including its geographic coordinates) comes from Wikipedia. If you “check in” to a museum in Doha to signify to your friends that you were there, the place you check in to was created with Wikipedia data. When you Google “House of Saud” you are presented not only with a list of links (with Wikipedia at the top) but also with a special ‘card’ summarising the House: this data comes from Wikipedia. When you look for people or places, Google now has these terms inside its ‘knowledge graph’, a network of related concepts with data coming directly from Wikipedia. Similarly, on Google Maps, Wikipedia descriptions for landmarks are presented as part of the default information.

Ironically, Wikipedia editorship is actually in slow and steady decline, even as its content and readership increase year on year.
Since 2007 and the introduction of significant devolution of administrative powers to volunteers, Wikipedia has not been able to effectively retain newcomers, something which has been noted as a concern by…

Understanding these economies is therefore crucial to anyone who is interested in the social dynamics and power relations of digital media today.

Vili discusses his new book from MIT Press (with E. Castronova): Virtual Economies: Design and Analysis.

Digital gaming, once a stigmatised hobby, is now a mainstream cultural activity. According to the Oxford Internet Survey, more than half of British Internet users play games online; more, in fact, than watch films or pornography online. Most new games today contain some kind of virtual economy: that is, a set of processes for the production, allocation, and consumption of artificially scarce virtual goods. Often the virtual economy is very simple; sometimes, as in the massively multiplayer online game EVE Online, it starts to approach the scale and complexity of a small national economy.

Just like national economies, virtual economies incentivise certain behaviours and discourage others; they ask people to make choices between mutually exclusive options; they ask people to coordinate. They can also propagate value systems setting out what modes of participation are considered valuable. These virtual economies are now built into many of the most popular areas of the Internet, including social media sites and knowledge commons, with their systems of artificially scarce likes, stars, votes, and badges. Understanding these economies is therefore crucial to anyone who is interested in the social dynamics and power relations of digital media today.

But a question I am asked a lot is: what can ‘real’ economies and the economists who run them learn from these virtual economies? We might start by imagining how a textbook economist would approach the economy of an online game. In EVE Online, hundreds of thousands of players trade minerals, spaceship components and other virtual commodities on a number of regional marketplaces. These marketplaces are very sophisticated, resembling real commodity spot markets. Our economist would doubtless point out several ways their efficiency could be radically improved. For example, EVE players can only see prices quoted in their current region, likely missing a better deal available elsewhere.
(In physical commodity markets, prices are instantly broadcast worldwide: you wouldn’t pay more for gold in Tokyo than you would in New…
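The inefficiency the economist would flag can be made concrete: when markets are segmented by region, the same commodity trades at different prices, and a trader who could see every region would simply buy where it is cheapest. A toy sketch of this, with all prices invented for illustration (the region and mineral names are real EVE Online terms, but the figures are not market data):

```python
# Lowest sell offer for one unit of tritanium per region (invented prices).
ask_prices = {
    "The Forge": 5.2,
    "Domain": 4.1,
    "Heimatar": 6.0,
}

def best_deal(prices):
    """Region with the cheapest ask -- invisible to a player who can only
    see their current region's market."""
    return min(prices, key=prices.get)

cheapest = best_deal(ask_prices)
# The spread between regions is the arbitrage a global price feed would erase.
spread = max(ask_prices.values()) - min(ask_prices.values())
print(f"Cheapest region: {cheapest}, inter-regional spread: {spread:.1f} ISK")
```

The persistence of such spreads is exactly what region-limited price visibility produces, and what instantly broadcast prices in physical commodity markets prevent.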

What’s new about companies and academic researchers doing this kind of research to manipulate people’s behaviour?

Reports about the Facebook study ‘Experimental evidence of massive-scale emotional contagion through social networks’ have resulted in something of a media storm. Yet it can be predicted that ultimately this debate will come down to the question: so what’s new about companies and academic researchers doing this kind of research to manipulate people’s behaviour? Isn’t that what a lot of advertising and marketing research does already—changing people’s minds about things? And don’t researchers sometimes deceive subjects in experiments about their behaviour? What’s new?

This way of thinking about the study has a serious defect, because there are three issues raised by this research. The first is the legality of the study, which, as the authors correctly point out, falls within Facebook users’ giving informed consent when they sign up to the service. Laws or regulation may be required here to prevent this kind of manipulation, but may also be difficult to frame, since it will be hard to draw a line between this experiment and other forms of manipulating people’s responses to media. However, Facebook may not want to lose users, for whom this way of manipulating them via the service may ‘cause anxiety’ (as the first author of the study, Adam Kramer, acknowledged in a blog post response to the outcry). In short, it may be bad for business, and hence Facebook may abandon this kind of research (but we’ll come back to this later). But this—companies using techniques that users don’t like, so that they are forced to change course—is not new.

The second issue is academic research ethics. This study was carried out by two academic researchers (the other two authors of the study). In retrospect, it is hard to see how this study would have received approval from an institutional review board (IRB), the boards at which academic institutions check the ethics of studies. Perhaps stricter guidelines are needed here, since a) big data research is becoming much more prominent…

Without detailed information about small areas we can’t identify which areas would benefit most from policy intervention to encourage Internet use and improve access.

Britain has one of the largest Internet economies in the industrial world. The Internet contributes an estimated 8.3% to Britain’s GDP (Dean et al. 2012), and strongly supports domestic job and income growth by enabling access to new customers, markets and ideas. People benefit from better communications, and businesses are more likely to locate in areas with good digital access, thereby boosting local economies (Malecki & Moriset 2008). While the Internet brings clear benefits, there is also a marked inequality in its uptake and use (the so-called ‘digital divide’). We already know from the Oxford Internet Surveys (OxIS) that Internet use in Britain is strongly stratified by age, by income and by education; and yet we know almost nothing about local patterns of Internet use across the country.

A problem with national sample surveys (the usual source of data about Internet use and non-use) is that the sample sizes become too small to allow accurate generalisation for smaller, sub-national areas. No one knows, for example, the proportion of Internet users in Glasgow, because national surveys simply won’t have enough respondents to make reliable city-level estimates. We know that Internet use is not evenly distributed at the regional level; Ofcom reports on broadband speeds and penetration at the county level (Ofcom 2011), and we know that London and the southeast are the most wired parts of the country (Dean et al. 2012). But given the importance of the Internet, the lack of knowledge about local patterns of access and use in Britain is surprising. This is a problem because without detailed information about small areas we can’t identify which areas would benefit most from policy intervention to encourage Internet use and improve access.

We have begun to address this lack of information by combining two important but separate datasets—the 2011 national census and the 2013 OxIS survey—using the technique of small area estimation.
By definition, census data are available for very small…
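The basic idea behind such estimates can be sketched very simply: estimate use rates for demographic groups from the survey, then weight those rates by each small area’s census composition. This is a minimal synthetic-estimation sketch of the general principle, not the authors’ actual method, and every figure below is invented rather than drawn from OxIS or the census:

```python
# Survey-derived Internet-use rate by age band (hypothetical figures).
use_rate = {"16-34": 0.95, "35-64": 0.80, "65+": 0.40}

# Census population counts by age band for two invented small areas.
areas = {
    "Area A": {"16-34": 500, "35-64": 900, "65+": 600},
    "Area B": {"16-34": 1200, "35-64": 700, "65+": 100},
}

def estimated_use(area):
    """Population-weighted estimate of an area's Internet-use rate."""
    pop = areas[area]
    total = sum(pop.values())
    users = sum(use_rate[band] * n for band, n in pop.items())
    return users / total

for name in areas:
    print(f"{name}: {estimated_use(name):.1%} estimated Internet users")
```

Even in this toy version, the older age profile of “Area A” pulls its estimated use rate well below that of “Area B”, which is the kind of local variation that national-level figures conceal.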

It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society.

We can't understand, analyse or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) "Stop Watching Us" Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyse or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that the impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of Policy & Internet was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose.

Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision. At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles).
The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams…

Looking at “networked cultural production”—ie the creation of cultural goods like films through crowdsourcing platforms—specifically in the ‘wreckamovie’ community

Nomad, the perky-looking Mars rover from the crowdsourced documentary Solar System 3D (Wreckamovie).

Ed: You have been looking at “networked cultural production”—ie the creation of cultural goods like films through crowdsourcing platforms—specifically in the ‘wreckamovie’ community. What is wreckamovie?

Isis: Wreckamovie is an open online platform that is designed to facilitate collaborative film production. The main advantage of the platform is that it encourages a granular and modular approach to cultural production; this means that the whole process is broken down into small, specific tasks. In doing so, it allows a diverse range of geographically dispersed, self-selected members to contribute in accordance with their expertise, interests and skills. The platform was launched by a group of young Finnish filmmakers in 2008, having successfully produced films with the aid of an online forum since the late 1990s. Officially, there are more than 11,000 Wreckamovie members, but the active core, the community, consists of fewer than 300 individuals.

Ed: You mentioned a tendency in the literature to regard production systems as being either ‘market driven’ (eg Hollywood) or ‘not market driven’ (eg open or crowdsourced things); is that a distinction you recognised in your research?

Isis: There’s been a lot of talk about the disruptive and transformative powers nested in networked technologies, and most often Wikipedia or open source software are highlighted as examples of new production models, denoting a discontinuity from established practices of the cultural industries. Typically, the production models are discriminated based on their relation to the market: are they market-driven, or fuelled by virtues such as sharing and collaboration? This way of explaining differences in cultural production isn’t just present in contemporary literature dealing with networked phenomena, though.
For example, the sociologist Bourdieu similarly theorised cultural production by drawing a distinction between market and non-market production, portraying the irreconcilable differences in their underlying value systems, as proposed in his The Rules of Art. However, one of the key findings of my research is that the shaping force of these productions is…

Were firms adopting the Internet as it became cheaper? Had this new connectivity had the effects that were anticipated, or was it purely hype?

Ed: There has been a lot of excitement about the potential of increased connectivity in the region: where did this come from? And what sort of benefits were promised?

Chris: Yes, at the end of the 2000s when the first fibre cables landed in East Africa, there was much anticipation about what this new connectivity would mean for the region. I remember I was in Tanzania at the time, and people were very excited about this development—being tired of the slow and expensive satellite connections, where even simple websites could take a minute to load. The perception, both in the international press and from East African politicians, was that the cables would be a game changer. Firms would be able to market and sell more directly to customers and reduce inefficient ‘intermediaries’. Connectivity would allow new types of digitally driven business, and it would provide an opportunity for small and medium firms to become part of the global economy. We wanted to revisit this discussion. Were firms adopting the Internet as it became cheaper? Had this new connectivity had the effects that were anticipated, or was it purely hype?

Ed: So what is the current level and quality of broadband access in Rwanda? ie how connected are people on the ground?

Chris: Internet access has greatly improved over the past few years, and the costs of bandwidth have declined markedly. The government has installed a ‘backbone’ fibre network, and in the private sector there has also been growth in the number of firms providing Internet service. There are still some problems, though. Prices are still quite high, particularly for dedicated broadband connections, and in the industries we looked at (tea and tourism) many firms couldn’t afford it. Secondly, we heard a lot of complaints that lower-bandwidth connections—WiMax and mobile internet—are unreliable and become saturated at peak times. So, Rwanda has come a long way, but we expect there will be more…

Key to successful adoption of Internet-based health records is how much trust a patient places in the assurance that data will be properly secured from inadvertent leakage.

In an attempt to reduce costs and improve quality, digital health records are permeating health systems all over the world. Internet-based access to them creates new opportunities for access and sharing—while at the same time causing nightmares for many patients: medical data floating around freely within the clouds, unprotected from strangers, abused to target and discriminate against people without their knowledge? Individuals often have little knowledge about the actual risks, and single instances of breaches are exaggerated in the media. Key to successful adoption of Internet-based health records is, however, how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage, and trust that it will not be accessed by unauthorised strangers.

Situated in this context, my own research has taken a closer look at the structural and institutional factors influencing patient trust in Internet-based health records. Utilising a survey and interviews, the research has looked specifically at Germany—a very suitable environment for this question, given its wide range of actors in the health system and its reputation as a “hard-line privacy country”. Germany has struggled for years with the introduction of smart cards linked to centralised Electronic Health Records, not only changing the design features over several iterations, but also battling negative press coverage about data security.

The first element of this question of patient trust is the “who”: that is, does it make a difference whether the health record is maintained by a medical or a non-medical entity, and whether that entity is public or private? I found that patients clearly expressed higher trust in medical operators, evidence of a certain “halo effect” surrounding medical professionals and organisations, driven by patient faith in their good intentions.
This overrode the concern that medical operators might be less adept at securing the data than (for example) most non-medical IT firms. The distinction between public and private operators is…