Ben Zevenbergen

The United Nations Human Rights Council has reaffirmed many times that “the same rights that people have offline must also be protected online”.

The increased reliance on Internet technology impacts human rights. Image: Bruno Cordioli (Flickr CC BY 2.0).

The Internet has drastically reshaped communication practices across the globe, and with them many aspects of modern life. This increased reliance on Internet technology also impacts human rights. The United Nations Human Rights Council has reaffirmed many times (most recently in a 2016 resolution) that “the same rights that people have offline must also be protected online”. However, international human rights monitoring bodies and courts give only limited guidance on how to apply human rights law to the design and use of Internet technology, especially when it is developed by non-state actors. And while the Internet can certainly facilitate the exercise and fulfilment of human rights, it is also conducive to human rights violations, with many Internet organisations and companies currently grappling with their responsibilities in this area. To help understand how digital technology can support the exercise of human rights, we—Corinne Cath, Ben Zevenbergen, and Christiaan van Veen—organised a workshop at the 2017 Citizen Lab Summer Institute in Toronto on ‘Coding Human Rights Law’. By bringing together academics, technologists, human rights experts, lawyers, government officials, and NGO employees, we hoped to pool experience and scope the field in order to:

1. Explore the relationship between connected technology and human rights;
2. Understand how this technology can support the exercise of human rights;
3. Identify current bottlenecks for integrating human rights considerations into Internet technology; and
4. List recommendations to provide guidance to the various stakeholders working on human-rights-strengthening technology.

In the workshop report “Coding Human Rights Law: Citizen Lab Summer Institute 2017 Workshop Report”, we give an overview of the discussion. We address multiple legal and technical concerns. We consider the legal issues arising from human rights law being state-centric, while most connected technologies are developed by the private sector. We also discuss the applicability of current international human rights frameworks to debates about new technologies. We cover the technical issues that arise when trying to code for human rights, in…

Sharing instructive primers for developers interested in creating technologies for those affected by gender-based violence.

Image by ijclark (Flickr CC BY 2.0).

Digital technologies are increasingly proposed as innovative solutions to the problems and threats faced by vulnerable groups such as children, women, and LGBTQ people. However, there is a structural lack of consideration for gender and power relations in the design of Internet technologies, as previously discussed by scholars in media and communication studies (Barocas & Nissenbaum, 2009; Boyd, 2001; Thakor, 2015) and technology studies (Balsamo, 2011; MacKenzie & Wajcman, 1999). The intersection between gender-based violence and technology nonetheless deserves greater attention. To this end, scholars from the Center for Information Technology Policy at Princeton and the Oxford Internet Institute organised a workshop at Princeton in the spring of 2017 to explore the design ethics of gender-based violence and safety technologies. The workshop welcomed a wide range of advocates working on intimate partner violence and sex work, as well as engineers, designers, developers, and academics working on IT ethics. The objectives of the day were threefold: (1) to better understand the lack of gender considerations in technology design; (2) to formulate critical questions for functional requirement discussions between advocates and developers of gender-based violence applications; and (3) to establish a set of criteria by which new applications can be assessed from a gender perspective. Following three conceptual takeaways from the workshop, we share instructive primers for developers interested in creating technologies for those affected by gender-based violence.

Survivors, sex workers, and young people are intentional technology users

Increasing public awareness of the prevalence of gender-based violence, both on- and offline, often frames survivors of gender-based violence, activists, and young people as vulnerable and helpless. Contrary to this representation, those affected by gender-based violence are intentional technology users, choosing to adopt or abandon tools as they see fit. For example, sexual assault victims strategically disclose their stories on specific social media platforms to mobilise collective action. Sex workers adopt locative technologies to make safety plans. Young people utilise secure search tools to find information about sexual health resources…

Experimentation and research on the Internet require ethical scrutiny in order to give useful feedback to engineers and researchers about the social impact of their work.

The image shows the paths taken through the Internet to reach a large number of DNS servers in China, used in experiments on DNS censorship by Joss Wright and Ning Wang. They queried blocked domain names across China to discover where the network filtered DNS requests, and how it responded.

To maintain an open and working Internet, we need to make sense of how this complex and decentralised technical system operates. Research groups, governments, and companies have dedicated teams working on highly technical research and experimentation to make sense of information flows, and of how these can be affected by new developments, whether intentional or the unforeseen consequences of decisions made in another domain. These teams, composed of network engineers and computer scientists, analyse Internet data transfers, typically by collecting data from the devices of large groups of individuals as well as from organisations. The Internet, however, has become a complex and global socio-technical information system that mediates a significant amount of our social and professional activities, relationships, and mental processes. Experimentation and research on the Internet therefore require ethical scrutiny in order to give useful feedback to engineers and researchers about the social impact of their work. The organising committee of the Association for Computing Machinery (ACM) SIGCOMM (Special Interest Group on Data Communication) conference has regularly encountered paper submissions that can be considered dubious from an ethical point of view. A strong debate on the ACM’s research ethics was sparked by, among other submissions to the 2015 conference, the paper entitled “Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests”. In the study, the researchers had the browsers of unsuspecting Internet users request specified URLs that could be blocked in their jurisdiction, in order to test potential censorship systems in their country. Concerns were raised about whether this could be considered ‘human subject research’ and whether the unsuspecting users could be harmed as a result of the experiment. Consider, for example, a Chinese citizen whose Beijing-based laptop continuously requests the Falun Gong website without their knowledge whatsoever. As a result of these discussions, the ACM realised that there was no formal procedure or methodology in place to make informed decisions about the ethical dimensions of such…
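To give a concrete sense of the kind of measurement at issue, below is a minimal sketch of an active DNS censorship probe, in the spirit of the experiment pictured above. It is written in Python against the dnspython library; the resolver addresses and test domains are placeholders rather than those used in any actual study. The underlying idea is that divergent answers across vantage points, forged NXDOMAIN responses, and silent timeouts are all signatures of interference. Note that, unlike Encore, a probe like this runs only from machines the researcher controls.

```python
# Minimal sketch of an active DNS censorship probe (assumes dnspython:
# pip install dnspython). All IPs and domains below are placeholders.
import dns.exception
import dns.resolver

# Hypothetical open resolvers inside and outside the filtered network.
RESOLVERS = {
    "inside": "203.0.113.1",    # placeholder address (TEST-NET-3)
    "outside": "198.51.100.1",  # placeholder address (TEST-NET-2)
}
DOMAINS = ["example.com", "example.org"]  # substitute the domains under test


def probe(resolver_ip: str, domain: str) -> str:
    """Resolve `domain` via `resolver_ip`; return the answers or an error tag."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    resolver.lifetime = 5.0  # seconds to wait before treating it as a drop
    try:
        answer = resolver.resolve(domain, "A")
        return ",".join(sorted(rr.address for rr in answer))
    except dns.exception.Timeout:
        return "TIMEOUT"    # silent drops are one filtering signature
    except dns.resolver.NXDOMAIN:
        return "NXDOMAIN"   # forged 'no such domain' answers are another
    except dns.exception.DNSException as exc:
        return f"ERROR:{type(exc).__name__}"


if __name__ == "__main__":
    for domain in DOMAINS:
        results = {loc: probe(ip, domain) for loc, ip in RESOLVERS.items()}
        # Disagreement between vantage points suggests tampering en route.
        verdict = "DIVERGENT" if len(set(results.values())) > 1 else "consistent"
        print(domain, results, verdict)
```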

Informing the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights.

Jon Penney presenting on the US experience of Internet-related corporate transparency reporting.

根据相关法律法规和政策,部分搜索结果未予显示 could be a warning message we will see displayed more often on the Internet, albeit more likely in translation. In Chinese, it means “according to the relevant laws, regulations, and policies, a portion of search results have not been displayed.” The control of information flows on the Internet is becoming more commonplace, in authoritarian regimes as well as in liberal democracies, via either technical or regulatory means. Such information controls can be defined as “[…] actions conducted in or through information and communications technologies (ICTs), which seek to deny (such as web filtering), disrupt (such as denial-of-service attacks), shape (such as throttling), secure (such as through encryption or circumvention) or monitor (such as passive or targeted surveillance) information for political ends. Information controls can also be non-technical and can be implemented through legal and regulatory frameworks, including informal pressures placed on private companies. […]” Information controls are not intrinsically good or bad, but much remains to be explored and analysed about their use for political or commercial purposes. The University of Toronto’s Citizen Lab organised a one-week summer institute titled “Monitoring Internet Openness and Rights” to inform the global discussions on information control research and practice in the fields of censorship, circumvention, surveillance and adherence to human rights. A week full of presentations and workshops on the intersection of technical tools, social science research, ethical and legal reflections, and policy implications was attended by a distinguished group of about 60 community members, amongst whom were two OII DPhil students, Jon Penney and Ben Zevenbergen. Conducting Internet measurements may be considered a terra incognita in terms of methodology and data collection, but their relevance and impact for Internet policy-making, geopolitics, and network management are obvious and undisputed. The Citizen Lab prides itself on being a “hacker hothouse”, or an “intelligence agency for civil society”, where security expertise, politics, and ethics intersect. Their research adds the much-needed geopolitical angle to…

Measuring the mobile Internet can expose information about an individual’s location, contact details, and communications metadata.

Four of the 6.8 billion mobile phones worldwide. Measuring the mobile Internet can expose information about an individual's location, contact details, and communications metadata. Image by Cocoarmani.

Ed: GCHQ / the NSA aside, who collects mobile data and for what purpose? How can you tell if your data are being collected and passed on?

Ben: Data collected from mobile phones is used for a wide range of (divergent) purposes. First and foremost, mobile operators need information about mobile phones in real time to be able to communicate with individual handsets. Apps can also collect all sorts of information, which may be necessary to provide entertainment or location-specific services, to conduct network research, or for many other reasons. Mobile phone users usually consent to the collection of their data by clicking “I agree” or other legally relevant buttons, but this is not always the case. Sometimes data is collected lawfully without consent, for example for the provision of a mobile connectivity service. Other times it is harder to substantiate a relevant legal basis. Many applications keep track of the information that is generated by a mobile phone, and it is often not possible to find out how the receiver processes this data.

Ed: How are data subjects typically recruited for a mobile research project? And how many subjects might a typical research dataset contain?

Ben: This depends on the research design; some research projects provide data subjects with a specific app, which they can use to conduct measurements (so-called ‘active measurements’). Other apps collect data in the background and, in effect, conduct local surveillance of the phone’s use (so-called ‘passive measurements’). Other research uses existing datasets, for example provided by telecom operators, which will generally be de-identified in some way. We purposely do not use the term anonymisation in the report, because much research and several case studies have shown that real anonymisation is very difficult to achieve if the original raw data is collected about individuals. Datasets can be re-identified by techniques such as fingerprinting or by linking them with existing, auxiliary datasets. The size…
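To make the linkage risk mentioned above concrete, here is a toy sketch in Python (with pandas) of how a “de-identified” dataset can be re-identified by joining it to an auxiliary dataset on shared quasi-identifiers. All column names and values are invented for illustration; real linkage attacks exploit far richer fingerprints, such as browsing patterns or location traces.

```python
# Toy illustration (entirely invented data) of re-identification by linking
# a "de-identified" dataset with an auxiliary dataset on quasi-identifiers.
import pandas as pd

# "De-identified" mobile measurements: names removed, but coarse location
# and device attributes remain.
measurements = pd.DataFrame({
    "postcode":   ["OX1", "OX1", "EC1"],
    "handset":    ["PhoneA", "PhoneB", "PhoneA"],
    "os_version": ["9.1", "8.0", "9.1"],
    "visited":    ["news.example", "health.example", "vpn.example"],
})

# Auxiliary dataset (e.g. public profiles) sharing the same attributes.
auxiliary = pd.DataFrame({
    "name":       ["Alice", "Bob", "Carol"],
    "postcode":   ["OX1", "OX1", "EC1"],
    "handset":    ["PhoneA", "PhoneB", "PhoneA"],
    "os_version": ["9.1", "8.0", "9.1"],
})

quasi_identifiers = ["postcode", "handset", "os_version"]

# Any attribute combination that is unique to one person re-identifies them.
counts = auxiliary.groupby(quasi_identifiers).size()
print(f"{(counts == 1).sum()} of {len(counts)} attribute combinations are unique")

# Joining on the quasi-identifiers re-attaches names to browsing records.
linked = measurements.merge(auxiliary, on=quasi_identifiers, how="inner")
print(linked[["name", "visited"]])
```

In this toy example every attribute combination is unique, so every record links back to a name; in real datasets, the share of unique combinations is what determines the re-identification risk.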