Ethics

The United Nations Human Rights Council has reaffirmed many times that “the same rights that people have offline must also be protected online”.

The increased reliance on Internet technology impacts human rights. Image: Bruno Cordioli (Flickr CC BY 2.0).

The Internet has drastically reshaped communication practices across the globe, touching many aspects of modern life. This increased reliance on Internet technology also impacts human rights. The United Nations Human Rights Council has reaffirmed many times (most recently in a 2016 resolution) that “the same rights that people have offline must also be protected online”. However, international human rights monitoring bodies and courts give only limited guidance on how to apply human rights law to the design and use of Internet technology, especially when it is developed by non-state actors. And while the Internet can certainly facilitate the exercise and fulfilment of human rights, it is also conducive to human rights violations, with many Internet organisations and companies currently grappling with their responsibilities in this area.

To help understand how digital technology can support the exercise of human rights, we (Corinne Cath, Ben Zevenbergen, and Christiaan van Veen) organised a workshop on ‘Coding Human Rights Law’ at the 2017 Citizen Lab Summer Institute in Toronto. By bringing together academics, technologists, human rights experts, lawyers, government officials, and NGO employees, we hoped to pool experience and scope the field in order to:

1. Explore the relationship between connected technology and human rights;
2. Understand how this technology can support the exercise of human rights;
3. Identify current bottlenecks for integrating human rights considerations into Internet technology; and
4. List recommendations to provide guidance to the various stakeholders working on human rights-strengthening technology.

In the workshop report “Coding Human Rights Law: Citizen Lab Summer Institute 2017 Workshop Report”, we give an overview of the discussion. We address multiple legal and technical concerns. We consider the legal issues arising from human rights law being state-centric, while most connected technologies are developed by the private sector. We also discuss the applicability of current international human rights frameworks to debates about new technologies. We cover the technical issues that arise when trying to code for human rights, in…

Exploring the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice

Algorithmic systems (such as those used to decide mortgage applications or support sentencing decisions) can be very difficult to understand, for experts as well as the general public. Image: Ken Lane (CC BY-NC 2.0).

The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is, to put it mildly, a technically challenging problem.

In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”, forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm. Relying on counterfactual explanations as a means to help us act, rather than merely to understand, could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.

We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:

Ed.: There’s a lot of discussion about algorithmic “black boxes”, where decisions are made about us using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?

Sandra: Basically, every decision that can be made by a human can now be made by an algorithm, which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and…
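To make the idea concrete, here is a minimal sketch of a counterfactual explanation in code. The toy loan model, the made-up features, and the random-search procedure below are illustrative assumptions, not the optimisation method described in the paper: the point is simply that one can report the smallest change that would have flipped a decision without exposing the model’s internals.

```python
# Toy counterfactual explanation: find the smallest sampled change to a
# rejected applicant's features that would have produced an approval.
# Model, data, and search strategy are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: [income in £1000s, existing debt in £1000s]
X = rng.uniform([10, 0], [100, 50], size=(500, 2))
y = (X[:, 0] - 1.5 * X[:, 1] > 20).astype(int)        # made-up approval rule
model = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, model, desired=1, n_samples=20_000, scale=30.0):
    """Among sampled perturbations of x that get the desired outcome,
    return the one closest to x (smallest L2 change)."""
    candidates = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    flipped = candidates[model.predict(candidates) == desired]
    if flipped.size == 0:
        return None
    return flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]

applicant = np.array([30.0, 20.0])                     # a rejected case
print("decision:", model.predict(applicant.reshape(1, -1))[0])
cf = counterfactual(applicant, model)
print("counterfactual:", np.round(cf, 1), "change needed:", np.round(cf - applicant, 1))
```

The output is a statement of the form “had your income been X and your debt Y, the loan would have been approved”, which gives the data subject something to act on even while the model itself stays closed.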

Examining the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

On 12 August 2015, a series of explosions killed 173 people and injured hundreds at a container storage station at the Port of Tianjin. Tianjin Port by Matthias Catón (Flickr CC BY-NC-ND 2.0).

As social media become increasingly important as a source of news and information for citizens, there is growing concern over the impact of social media platforms on information quality, as evidenced by the furore over “fake news”. Driven in part by the apparently substantial influence of social media on the outcomes of Brexit and the US Presidential election, various attempts have been made to hold social media platforms to account for presiding over misinformation, including recent efforts to improve fact-checking. There is a large and growing body of research examining rumour management on social media platforms. However, most of these studies treat it as a technical matter, and little attention has been paid to the social and political aspects of rumour.

In their Policy & Internet article “How Social Media Construct ‘Truth’ Around Crisis Events: Weibo’s Rumor Management Strategies after the 2015 Tianjin Blasts”, Jing Zeng, Chung-hong Chan and King-wa Fu examine the content moderation strategies of Sina Weibo, China’s largest microblogging platform, in regulating discussion of rumours following the 2015 Tianjin blasts.

Studying rumour communication in relation to the manipulation of social media platforms is particularly important in the context of China. Internet companies there are licensed by the state, and their businesses must therefore comply with Chinese law and collaborate with the government in monitoring and censoring politically sensitive topics. Given that most Chinese citizens rely heavily on Chinese social media services as alternative information sources or as grassroots “truth”, the anti-rumour policies have raised widespread concern over the implications for China’s online sphere. As there is virtually no transparency in rumour management on Chinese social media, it is an important task for researchers to investigate how Internet platforms engage with rumour content and any associated impact on public discussion.

We caught up with the authors to discuss their findings:

Ed.: “Fake news” is currently a very hot issue, with Twitter and Facebook both…

What is the responsibility of the private sector, which runs and owns much of the Internet, towards human rights?

The Human Rights Council in Geneva, Switzerland. Image: United Nations Photo (Flickr CC BY-NC-ND 2.0).

“The digital access industry is in the business of digital expression […] since privately owned networks are indispensable to the contemporary exercise of freedom of expression, their operators also assume critical social and public functions. The industry’s decisions […] can directly impact freedom of expression and related human rights in both beneficial and detrimental ways.” [Report of the Special Rapporteur on the right to freedom of expression, June 2017]

The Internet is often portrayed as a disruptive equaliser, an information medium able to give individuals direct access to information and a platform to share their opinions unmediated. But the Internet is also a tool for surveillance, censorship, and information warfare. States often drive such practices, but the private sector increasingly plays a role too. While states have a clear obligation to protect human rights on the Internet, the human rights accountability of the private sector remains unclear. This raises the question: what is the responsibility of the private sector, which runs and owns much of the Internet, towards human rights?

During the 35th session of the United Nations (UN) Human Rights Council this month, David Kaye, UN Special Rapporteur (UNSR) for the right to freedom of expression, presented his latest report [1], which focuses on the role of the private sector in the provision of Internet and telecommunications access. The UNSR on freedom of expression is an independent expert, appointed by the Human Rights Council to analyse, document, and report on the state of freedom of expression globally [2]. The rapporteur is also expected to make recommendations towards ‘better promoting and protection of the right to freedom of expression’ [3]. In recent years, the UNSRs on freedom of expression have increasingly focused on the intersection between access to information, expression, and the Internet [4]. This most recent report is a landmark document. Its focus on the role and responsibilities of the private sector towards the right to freedom of…

We might expect bot interactions to be relatively predictable and uneventful.

Wikipedia uses editing bots to clean articles: but what happens when their interactions go bad? Image of "Nomade", a sculpture in downtown Des Moines by Jason Mrachina (Flickr CC BY-NC-ND 2.0).

Recent years have seen a huge increase in the number of bots online, including search engine Web crawlers, online customer service chat bots, social media spambots, and content-editing bots in online collaborative communities like Wikipedia. (Bots are important contributors to Wikipedia, completing about 15% of all Wikipedia edits in 2014, and more than 50% in certain language editions.) While the online world has turned into an ecosystem of bots (by which we mean computer scripts that automatically handle repetitive and mundane tasks), our knowledge of how these automated agents interact with each other is rather poor. Still, since bots are automata with no capacity for emotions, meaning-making, creativity, or sociality, we might expect their interactions to be relatively predictable and uneventful.

In their PLOS ONE article “Even good bots fight: The case of Wikipedia”, Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri analyse the interactions between bots that edit articles on Wikipedia. They track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction outcomes. Although Wikipedia bots are intended to support the encyclopaedia (identifying and undoing vandalism, enforcing bans, checking spelling, creating inter-language links, importing content automatically, mining data, identifying copyright violations, greeting newcomers, and so on), the authors find they often undid each other’s edits, with these sterile “fights” sometimes continuing for years. They suggest that even relatively “dumb” bots may give rise to complex interactions, carrying important implications for Artificial Intelligence research. Understanding these bot-bot interactions will be crucial for managing social media, providing adequate cyber-security, and designing autonomous vehicles (that don’t crash).

We caught up with Taha Yasseri and Luciano Floridi to discuss the implications of the findings:

Ed.: Is there any particular difference between the way individual bots interact (and maybe get bogged down in conflict), and lines of vast and complex code interacting badly, or having unforeseen results (e.g. flash-crashes in automated trading):…
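The analysis hinges on spotting reverts between bot accounts in article edit histories. The sketch below is a simplified illustration of that bookkeeping, not the authors’ pipeline: it flags an edit as a revert when it restores an article to an earlier content state (matched here by content hash) and tallies which bot undid which. The revision records and bot names are made up for illustration.

```python
# Simplified sketch of detecting bot-on-bot reverts in one article's history.
# A revert is flagged when an edit restores an earlier content hash; every
# intervening bot edit is then counted as having been undone.
from collections import Counter

revisions = [
    # (article, timestamp, editor, content_hash) -- hypothetical records
    ("Example", 1, "AlphaBot", "h1"),
    ("Example", 2, "BetaBot",  "h2"),
    ("Example", 3, "AlphaBot", "h1"),   # restores h1 -> reverts BetaBot
    ("Example", 4, "BetaBot",  "h2"),   # restores h2 -> reverts AlphaBot
]
bots = {"AlphaBot", "BetaBot"}

def bot_revert_pairs(revisions, bots):
    """Yield (reverting_bot, reverted_bot) pairs from a revision stream."""
    seen = {}      # content_hash -> index of the revision that first produced it
    history = []   # revisions seen so far, in order
    for i, (article, ts, editor, h) in enumerate(revisions):
        if h in seen and editor in bots:
            # every editor between the restored revision and now was undone
            for _, _, undone_editor, _ in history[seen[h] + 1:]:
                if undone_editor in bots and undone_editor != editor:
                    yield (editor, undone_editor)
        history.append((article, ts, editor, h))
        seen.setdefault(h, i)

print(Counter(bot_revert_pairs(revisions, bots)))
# Counter({('AlphaBot', 'BetaBot'): 1, ('BetaBot', 'AlphaBot'): 1})
```

On real Wikipedia data this has to be repeated per article and per language edition, and self-reverts and human edits need to be separated out before any pattern over years can be read off.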

Things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

This is the third post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

The addictive gameplay of Pokémon GO has led to police departments warning people to be more careful about revealing their locations, players injuring themselves and finding dead bodies, and even the Holocaust Museum telling people to play elsewhere. Our environments are increasingly augmented with digital information: but how do we assert our rights over how and where this information is used? And should we be paying more attention to the design of persuasive technologies in increasingly attention-scarce environments? Or should we maybe just bin all our devices and pack ourselves off to digital detox camp?

1. James Williams: Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun
23 July 2016 | 2500 words | 12 min | Gross misuses of the “Poké-” prefix: 6

“The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets.”

Pokémon GO signals the first mainstream adoption of a type of game (always on, always with you) that requires you to ‘Bring Your Own Boundaries’, says James Williams. Regulation of these games falls on the user, presenting us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.

2. James Williams: Orwell, Huxley, Banksy
24 May 2014 | 1000 words | 5 min

“Orwell worried that what we fear could ultimately come to control us: the “boot stamping on a human…

Advocates of “digital detoxing” view digital communication as eroding our ability to concentrate, to empathise, and to have meaningful conversations.

The new (old) inbox. Camp Grounded tries to build up attendees’ confidence to be silly and playful, with their identities less tied to their work persona—in a backlash against Silicon Valley’s intense work ethic. Photo by Pumpernickle.

As our social interactions become increasingly entangled with the online world, there are some who insist on the benefits of disconnecting entirely from digital technology. These advocates of “digital detoxing” view digital communication as eroding our ability to concentrate, to empathise, and to have meaningful conversations. A 2016 survey by OnePoll found that 40% of respondents felt they had “not truly experienced valuable moments such as a child’s first steps or graduation” because “technology got in the way”, and Ofcom’s 2016 survey showed that 15 million British Internet users (a third of those online) have already tried a digital detox. In recent years, America has sought to pathologise a perceived over-use of digital technology as “Internet addiction”. While the term is not recognised by the DSM, the idea is commonly used in media rhetoric and forms an important backdrop to digital detoxing.

The First Monday article “Disconnect to reconnect: The food/technology metaphor in digital detoxing” by Theodora Sutton presents a short ethnography of the digital detoxing community in the San Francisco Bay Area. Her informants attend Camp Grounded, an annual four-day digital detox and summer camp for adults held in the Californian forest. She attended two Camp Grounded sessions in 2014, and followed up with semi-structured interviews with eight detoxers. We caught up with Theodora to examine the implications of the study and to learn more about her PhD research, which focuses on the same field site.

Ed.: In your forthcoming article you say that Camp Grounded attendees used food metaphors (and words like “snacking” and “nutrition”) to understand their own use of technology and behaviour. How useful is this as an analogy?

Theodora: The food/technology analogy is an incredibly neat way to talk about something we think of as immaterial in a more tangible way. We know that our digital world relies on physical connections, but we forget that all the time. Another thing it does in lending a dietary…

Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing?

This is the second post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Fake News and Filter Bubbles!

Fake news, post-truth, “alternative facts”, filter bubbles: this is the news and media environment we apparently now inhabit, and that has formed the fabric and backdrop of Brexit (“£350 million a week”) and Trump (“This was the largest audience to ever witness an inauguration—period”). Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing? How much can we do with machine-automated or crowd-sourced verification of facts? And are things really any worse now than when Bacon complained in 1620 about the false notions that “are now in possession of the human understanding, and have taken deep root therein”?

1. Bernie Hogan: How Facebook divides us [Times Literary Supplement]
27 October 2016 | 1000 words | 5 minutes

“Filter bubbles can create an increasingly fractured population, such as the one developing in America. For the many people shocked by the result of the British EU referendum, we can also partially blame filter bubbles: Facebook literally filters our friends’ views that are least palatable to us, yielding a doctored account of their personalities.”

Bernie Hogan says it’s time Facebook considered ways to use the information it has about us to bring us together across political, ideological and cultural lines, rather than hide us from each other or push us into polarised and hostile camps. He says it’s not only possible for Facebook to help mitigate the issues of filter bubbles and context collapse; it’s imperative, and it’s surprisingly simple.

2. Luciano Floridi: Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis [the Guardian]
29 November 2016 | 1000…

The algorithms that content personalisation technologies rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

The Facebook Wall, by René C. Nielsen (Flickr).

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information, and content personalisation systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalisation systems. First, he explains the value of transparency to political discourse and suggests how content personalisation systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalisation systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers: content personalisation systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalisation systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalised content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalisation systems. This is perhaps the most pressing area for future work: to develop practical methods for algorithmic auditing. The right to transparency in political discourse may seem unusual and farfetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine (no longer in force) and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealised version of political discourse described here. Both precedents…
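As a hedged sketch of what one form of algorithmic auditing could look like in practice, the snippet below imitates a “sock puppet” style audit: feed otherwise-comparable simulated profiles to a personalisation system and compare the political lean of what each is shown against the lean of the full content pool. Everything here (the recommend() stand-in, the lean scores, the profile names) is a hypothetical placeholder, not a method proposed in the article.

```python
# Hypothetical "sock puppet" audit: compare the average political lean of
# content shown to different simulated profiles against the whole pool.
import random
import statistics

# Hypothetical content pool: each item carries a political-lean score in [-1, 1].
CONTENT = {f"item{i}": random.uniform(-1, 1) for i in range(1000)}

def recommend(profile, k=50):
    """Stand-in for a personalisation system: returns k item ids for a profile."""
    rng = random.Random(profile)
    return rng.sample(sorted(CONTENT), k)

def mean_lean(items):
    return statistics.mean(CONTENT[i] for i in items)

baseline = mean_lean(CONTENT)             # lean of the whole content pool
for profile in ["left_seeded", "right_seeded", "control"]:
    exposed = mean_lean(recommend(profile))
    print(f"{profile:>12}: shown {exposed:+.2f} vs pool {baseline:+.2f}")
```

In a real audit the hard parts are exactly the ones the article flags: obtaining reliable political value labels for content, and getting enough access to an opaque system for the comparison to mean anything.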

Advancing the practical and theoretical basis for how we conceptualise and shape the infosphere.

Photograph of workshop participants by David Peter Simon.

On June 27 the Ethics and Philosophy of Information Cluster at the OII hosted a workshop to foster a dialogue between the disciplines of Information Architecture (IA) and the Philosophy of Information (PI), and to advance the practical and theoretical basis for how we conceptualise and shape the infosphere. A core topic of concern is how we should develop better principles to understand design practices. This concern surfaces when IA looks to other disciplines, such as linguistics, design thinking, new media studies and architecture, to develop the theoretical foundations that can back and/or inform its practice. Within the philosophy of information, the need to understand general principles of (conceptual or informational) design arises in relation to the question of how we develop and adopt the right level of abstraction (what Luciano Floridi calls the logic of design).

This suggests a two-way interaction between PI and IA. On the one hand, PI can become part of the theoretical background that informs Information Architecture, as one of the disciplines from which it can borrow concepts and theories. On the other hand, the philosophy of information can benefit from the rich practice of IA and the growing body of critical reflection on how, within a particular context, access to online information is best designed.

Throughout the workshop, two themes emerged: the need for more integrated ways to reason about and describe (a) informational artefacts and infrastructures, (b) the design processes that lead to their creation, and (c) the requirements to which they should conform. This presupposes a convergence between the things we build (informational artefacts) and the conceptual apparatus we rely on (the levels of abstraction we adopt), which surfaces in IA as well as in PI. At the same time, it also calls for novel frameworks and linguistic abstractions. This need to reframe the ways that we observe informational phenomena could be discerned in several contributions to the workshop. It surfaced in the more…