Governance & Security

So are young people completely unconcerned about their privacy online, gaily granting access to everything to everyone? Well, in a word, no.

A pretty good idea of what not to do on a social media site. Image by Sean MacEntee.

Standing on a stage in San Francisco in early 2010, Facebook founder Mark Zuckerberg, partly responding to criticism of the site’s decision to change the privacy settings of its 350 million users, announced that as Internet users had become more comfortable sharing information online, privacy was no longer a “social norm”. Of course, he had an obvious commercial interest in relaxing norms surrounding online privacy, but this attitude has nevertheless been widely echoed in the popular media. Young people are supposed to be sharing their private lives online—and providing huge amounts of data for commercial and government entities—because they don’t fully understand the implications of the public nature of the Internet.

There has actually been little systematic research on the privacy behaviour of different age groups in online settings. But there is certainly evidence of a growing (general) concern about online privacy (Marwick et al., 2010), with a 2013 Pew study finding that 50 percent of Internet users were worried about the information available about them online, up from 30 percent in 2009. Following the recent revelations about the NSA’s surveillance activities, a Washington Post-ABC poll reported 40 percent of its U.S. respondents as saying that it was more important to protect citizens’ privacy even if this limited the ability of the government to investigate terrorist threats.

But what of young people, specifically? Do they really care less about their online privacy than older users? Privacy concerns an individual’s ability to control what personal information about them is disclosed, to whom, when, and under what circumstances. We present different versions of ourselves to different audiences, and the expectations and norms of the particular audience (or context) will determine what personal information is presented or kept hidden. This highlights a fundamental problem with privacy in some SNSs: that of ‘context collapse’ (Marwick and boyd 2011).…

Informing the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights.

Jon Penney presenting on the US experience of Internet-related corporate transparency reporting.

根据相关法律法规和政策,部分搜索结果未予显示 could be a warning message we will see displayed more often on the Internet, or more likely, translations of it. In Chinese, it means “according to the relevant laws, regulations, and policies, a portion of search results have not been displayed.” The control of information flows on the Internet is becoming more commonplace, in authoritarian regimes as well as in liberal democracies, whether via technical or regulatory means. Such information controls can be defined as “[…] actions conducted in or through information and communications technologies (ICTs), which seek to deny (such as web filtering), disrupt (such as denial-of-service attacks), shape (such as throttling), secure (such as through encryption or circumvention) or monitor (such as passive or targeted surveillance) information for political ends. Information controls can also be non-technical and can be implemented through legal and regulatory frameworks, including informal pressures placed on private companies. […]” Information controls are not intrinsically good or bad, but much remains to be explored and analysed about their use, for political or commercial purposes.

The University of Toronto’s Citizen Lab organised a one-week summer institute titled “Monitoring Internet Openness and Rights” to inform the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights. A week full of presentations and workshops on the intersection of technical tools, social science research, ethical and legal reflections and policy implications was attended by a distinguished group of about 60 community members, amongst whom were two OII DPhil students: Jon Penney and Ben Zevenbergen.

Conducting Internet measurements may be considered terra incognita in terms of methodology and data collection, but its relevance and impact for Internet policy-making, geopolitics and network management are obvious and undisputed. The Citizen Lab prides itself on being a “hacker hothouse”, or an “intelligence agency for civil society”, where security expertise, politics, and ethics intersect. Their research adds the much-needed geopolitical angle to…

If we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger

Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network.

Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions. Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm…

Key to successful adoption of Internet-based health records is how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage.

In an attempt to reduce costs and improve quality, digital health records are permeating health systems all over the world. Internet-based access to them creates new opportunities for access and sharing—while at the same time causing nightmares for many patients: medical data floating around freely within the clouds, unprotected from strangers, abused to target and discriminate against people without their knowledge? Individuals often have little knowledge of the actual risks, and single instances of breaches are exaggerated in the media. Key to successful adoption of Internet-based health records is, however, how much trust a patient places in the technology: trust that data will be properly secured from inadvertent leakage, and trust that it will not be accessed by unauthorised strangers.

Situated in this context, my own research has taken a closer look at the structural and institutional factors influencing patient trust in Internet-based health records. Utilising a survey and interviews, the research has looked specifically at Germany—a very suitable environment for this question, given the wide range of actors in its health system and its reputation as a “hard-line privacy country”. Germany has struggled for years with the introduction of smart cards linked to centralised Electronic Health Records, not only changing the design features over several iterations, but also battling negative press coverage about data security.

The first element of this question of patient trust is the “who”: that is, does it make a difference whether the health record is maintained by a medical or a non-medical entity, and whether the entity is public or private? I found that patients clearly expressed higher trust in medical operators, evidence of a certain “halo effect” surrounding medical professionals and organisations, driven by patient faith in their good intentions. This overrode the concern that medical operators might be less adept at securing the data than (for example) most non-medical IT firms. The distinction between public and private operators is…

One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data.

Cloud services are not meant to recognise national frontiers, but to thrive on economies of scale and scope globally, presenting particular challenges to government. Image by NASA Goddard Photo and Video

Ed: You open your recent Policy and Internet article by noting that “the modern treasury of public institutions is where the wealth of public information is stored and processed”; what are the challenges of government use of cloud services?

Kristina: The public sector is a very large user of information technology, but data handling policies, vendor accreditation and procurement often predate the era of cloud computing. Governments first have to put in place new internal policies to ensure the security and integrity of their information assets residing in the cloud. Through this process governments are discovering that their traditional notions of control are challenged, because cloud services are virtual, dynamic, and operate across borders. One central concern of those governments that are leading in the public sector’s migration to cloud computing is how to retain unconditional sovereignty over their data—after all, public sector information embodies the past, the present, and the future of a country. The ability to govern presupposes command and control over government information to the extent necessary to deliver public services, protect citizens’ personal data and ensure the integrity of the state, among other considerations. One could even assert that in today’s interconnected world, national sovereignty is conditional upon adequate data sovereignty.

Ed: A basic question: if a country’s health records (in the cloud) temporarily reside on, or are processed on, commercial servers in a different country: who is liable for the integrity and protection of that data, and under whose legal scheme? I.e. can a country actually, technically, lose sovereignty over its data?

Kristina: There is always one line of responsibility flowing from the contract with the cloud service provider. However, when these health records cross borders they are effectively governed under a third country’s jurisdiction, where disclosure authorities vis-à-vis the cloud service provider can likely be invoked. In some situations the geographical whereabouts of the public health records is not even that important, because certain countries’…

Parents have different and often conflicting views about what’s best for their children. What’s helpful to one group of parents may not actually benefit parents or youth as a whole.

Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd/Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole. As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices—both the good and the bad. Our goal is to introduce research that can help contextualise socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd/Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogeneous. Parents have different and often conflicting views about what’s best…

Measuring the mobile Internet can expose information about an individual’s location, contact details, and communications metadata.

Four of the 6.8 billion mobile phones worldwide. Measuring the mobile Internet can expose information about an individual's location, contact details, and communications metadata. Image by Cocoarmani.

Ed: GCHQ / the NSA aside, who collects mobile data and for what purpose? How can you tell if your data are being collected and passed on?

Ben: Data collected from mobile phones is used for a wide range of (divergent) purposes. First and foremost, mobile operators need information about mobile phones in real time to be able to communicate with individual handsets. Apps can also collect all sorts of information, which may be necessary to provide entertainment or location-specific services, to conduct network research, or for many other reasons. Mobile phone users usually consent to the collection of their data by clicking “I agree” or other legally relevant buttons, but this is not always the case. Sometimes data is collected lawfully without consent, for example for the provision of a mobile connectivity service. Other times it is harder to substantiate a relevant legal basis. Many applications keep track of the information that is generated by a mobile phone, and it is often not possible to find out how the receiver processes this data.

Ed: How are data subjects typically recruited for a mobile research project? And how many subjects might a typical research data set contain?

Ben: This depends on the research design; some research projects provide data subjects with a specific app, which they can use to conduct measurements (so-called ‘active measurements’). Other apps collect data in the background and, in effect, conduct local surveillance of the mobile phone’s use (so-called ‘passive measurements’). Other research uses existing datasets, for example provided by telecom operators, which will generally be de-identified in some way. We purposely do not use the term ‘anonymisation’ in the report, because much research and several case studies have shown that real anonymisation is very difficult to achieve if the original raw data is collected about individuals. Datasets can be re-identified by techniques such as fingerprinting or by linking them with existing, auxiliary datasets. The size…
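Ben’s point about linking auxiliary datasets can be made concrete with a toy sketch. Everything below is hypothetical (the records, field names and values are invented for illustration, not drawn from the report): a “de-identified” measurement dataset is matched back to named individuals purely through shared quasi-identifiers such as birth year and postcode.

```python
# Minimal sketch of a linkage (re-identification) attack on a
# "de-identified" dataset. All records here are made up.

# De-identified measurement data: direct identifiers removed,
# but quasi-identifiers (birth year, postcode) retained.
measurements = [
    {"birth_year": 1985, "postcode": "OX1 3JS", "sites_visited": 142},
    {"birth_year": 1990, "postcode": "OX2 6NN", "sites_visited": 87},
]

# Auxiliary dataset, e.g. drawn from a public register or scraped
# profiles, containing the same quasi-identifiers plus names.
auxiliary = [
    {"name": "Alice Example", "birth_year": 1985, "postcode": "OX1 3JS"},
    {"name": "Bob Example", "birth_year": 1990, "postcode": "OX2 6NN"},
]

# Join the two datasets on the shared quasi-identifiers.
for record in measurements:
    matches = [
        person for person in auxiliary
        if (person["birth_year"], person["postcode"])
        == (record["birth_year"], record["postcode"])
    ]
    if len(matches) == 1:  # a unique match re-identifies the record
        print(f'{matches[0]["name"]} visited {record["sites_visited"]} sites')
```

Real linkage attacks work the same way at scale: the more attributes two datasets share, the more likely each combination of values is unique to a single person, which is why Ben avoids calling such datasets “anonymised”.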

Broadly speaking, most of the online services we think we’re using for “free”—that is, the ones we’re paying for with the currency of our attention—have some sort of persuasive design goal.

We’re living through a crisis of distraction. Image: "What’s on my iPhone" by Erik Mallinson

Ed: What persuasive technologies might we routinely meet online? And how are they designed to guide us towards certain decisions?

There’s a broad spectrum, from the very simple to the very complex. A simple example would be something like Amazon’s “one-click” purchase feature, which compresses the entire checkout process down to a split-second decision. This uses a persuasive technique known as “reduction” to minimise the perceived cost to a user of going through with a purchase, making it more likely that they’ll transact. At the more complex end of the spectrum, you have the whole set of systems and subsystems that is online advertising. As it becomes easier to measure people’s behaviour over time and across media, advertisers are increasingly able to customise messages to potential customers and guide them down the path toward a purchase.

It isn’t just commerce, though: mobile behaviour-change apps have seen really vibrant growth in the past couple of years, in particular in health and fitness: products like Nike+, Map My Run, and Fitbit let you monitor your exercise, share your performance with friends, use social motivation to help you define and reach your fitness goals, and so on. One interesting example I came across recently is called “Zombies, Run!”, which motivates by fright, spawning virtual zombies to chase you down the street while you’re on your run.

As one final example, if you’ve ever tried to deactivate your Facebook account, you’ve probably seen a good example of social persuasive technology: the screen that comes up saying, “If you leave Facebook, these people will miss you”, and then shows you pictures of your friends. Broadly speaking, most of the online services we think we’re using for “free”—that is, the ones we’re paying for with the currency of our attention—have some sort of persuasive design goal. And this can be particularly apparent when people are entering or exiting the system.

Ed: Advertising has been around for centuries, so…

Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children.

The recent announcement by ‘Anonymous Belgium’ (above) that they would ‘liberate the Belgian Web’ on 15 July 2013, in response to the blocking of websites by the Belgian government, was revealed to be a promotional stunt by a commercial law firm protesting non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory, and to endeavour to obtain the removal of such websites hosted outside it, leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective, as they can all be circumvented very easily. The use of widely available technologies (like encryption or proxy servers) or tiny changes in computer configuration (for instance, the choice of DNS server), which may also be made for better performance or to enhance security or privacy, enables circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content, while offenders often use other technologies such as peer-to-peer systems, newsgroups or email.

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites—notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children. Blocking measures are usually advocated on the basis of the argument that access to these images must be prevented, thereby avoiding users stumbling upon child pornography inadvertently. Whereas this seems reasonable with regard to this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures. It is, in our view, especially important at a…
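The authors’ point about the choice of DNS server is easy to illustrate. Below is a minimal sketch, assuming the third-party dnspython library; the domain and the resolver address are placeholders, not taken from the article. If an ISP enforces a blocklist only in its own default resolver, a one-line switch to any resolver outside the ISP’s control sidesteps it:

```python
# Minimal sketch of why DNS-based blocking is easy to circumvent.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

domain = "example.com"  # stand-in for a domain on an ISP blocklist

# Query 1: the system's default resolver (typically the ISP's),
# which is where a DNS blocklist would normally be enforced.
default_resolver = dns.resolver.Resolver()  # reads the system config
default_answer = default_resolver.resolve(domain, "A")
print("Default resolver:", [rr.address for rr in default_answer])

# Query 2: a resolver chosen by the user. This configuration change
# bypasses any blocklist held only in the ISP's resolver.
alt_resolver = dns.resolver.Resolver(configure=False)
alt_resolver.nameservers = ["8.8.8.8"]  # e.g. a public resolver
alt_answer = alt_resolver.resolve(domain, "A")
print("Alternative resolver:", [rr.address for rr in alt_answer])
```

The same change can be made once in a device’s network settings rather than per query, which is why the authors describe DNS-based blocking as trivially circumventable.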

We stress the importance of digital environments for providing contenders of copyright reform with a robust discursive opportunity structure.

Anti-HADOPI march in Paris, 2009. Image by kurto.

In the past few years, many governments have attempted to curb online “piracy” by enforcing harsher copyright control upon Internet users. This trend is now well documented in the academic literature, as in Jon Bright and José Agustina’s or Sebastian Haunss’ recent reviews of such developments. However, as the digital copyright control bills of the 21st century reached parliamentary floors, several of them failed to pass. Many of these legislative failures, such as the postponement of the SOPA and PIPA bills in the United States, mobilised large audiences and received widespread media coverage. Writing about these bills and the related events that led to the demise of the similarly intentioned Anti-Counterfeiting Trade Agreement (ACTA), Susan Sell, a seasoned analyst of intellectual property enforcement, points to the transnational coalition of Internet users at the heart of these outcomes. As she puts it: “In key respects, this is a David and Goliath story in which relatively weak activists were able to achieve surprising success against the strong.”

That analogy also appears in our recently published article in Policy & Internet, which focuses on the groups that fought several digital copyright control bills as they went through the European and French parliaments in 2007-2009—most notably the EU “Telecoms Package” and the French “HADOPI” laws. Like Susan Sell’s, our analysis shows “David” civil society groups, formed by socially and technically skilled activists, disrupting the work of “Goliath” coalitions of powerful actors that had previously been successful at converting the interests of the so-called “creative industries” into copyright law.

To explain this process, we stress the importance of digital environments for providing contenders of copyright reform with a robust discursive opportunity structure—a space in which activist groups could defend and diffuse alternative understandings and practices of copyright control and telecommunication reform. These counter-frames and practices refer to the Internet as a public good, and make openness, sharing and creativity central features of the new…