Latest Report by UN Special Rapporteur for the Right to Freedom of Expression is a Landmark Document

The Human Rights Council in Geneva, Switzerland. Image: United Nations Photo (Flickr CC BY-NC-ND 2.0).

“The digital access industry is in the business of digital expression […] since privately owned networks are indispensable to the contemporary exercise of freedom of expression, their operators also assume critical social and public functions. The industry’s decisions […] can directly impact freedom of expression and related human rights in both beneficial and detrimental ways.” [Report of the Special Rapporteur on the right to freedom of expression, June 2017]

The Internet is often portrayed as a disruptive equaliser: an information medium that gives individuals direct access to information and an unmediated platform to share their opinions. But the Internet is also a tool for surveillance, censorship, and information warfare. States often drive such practices, but the private sector increasingly plays a role as well. While states have a clear obligation to protect human rights on the Internet, the human rights accountability of the private sector remains unclear. This raises the question: what responsibility does private industry, which owns and runs much of the Internet, bear towards human rights?

During the 35th session of the United Nations (UN) Human Rights Council this month, David Kaye, UN Special Rapporteur (UNSR) for the right to freedom of expression, presented his latest report [1], which focuses on the role of the private sector in the provision of Internet and telecommunications access. The UNSR on freedom of expression is an independent expert, appointed by the Human Rights Council to analyse, document, and report on the state of freedom of expression globally [2]. The rapporteur is also expected to make recommendations towards ‘better promoting and protection of the right to freedom of expression’ [3]. In recent years, the UNSRs on freedom of expression have increasingly focused on the intersection between access to information, expression, and the Internet [4].

This most recent report is a landmark document. Its focus on the role and responsibilities of the private sector towards the right to freedom of expression presents a necessary step forward in the debate about the responsibility for the realisation of human rights online. The report takes on the legal difficulties surrounding the increased reliance of states on access to privately owned networks and data, whether by necessity, through cooperation, or through coercion, for surveillance, security, and service provision. It also tackles the legal responsibilities that private organisations have to respect human rights.

The first half of Kaye’s report emphasises the role of states in protecting the right to freedom of expression and access to information online, in particular in the context of state-mandated Internet shutdowns and private-public data sharing. Kaye highlights several major Internet shutdowns across the world and argues that considering ‘the number of essential activities and services they affect, shutdowns restrict expression and interfere with other fundamental rights’ [5]. In order to address this issue, he recommends that the Human Rights Council supplement and specify resolution 32/13, on ‘the promotion, protection and enjoyment of human rights on the Internet’ [6], in which it condemns such disruptions to the network. On the interaction between private actors and the state, Kaye walks a delicate line. On the one hand, he argues that governments should not pressure or threaten companies to provide them with access to data. On the other hand, he also argues that states should not allow companies to make network management decisions that treat data differentially based on its origin.

The second half of the report focuses on the responsibility of the private sector. In this context, the UNSR highlights the responsibilities of private actors towards the right to freedom of expression. Kaye argues that this sector plays a crucial role in providing access to information and communication services to millions across the globe. He looks specifically at the role of telecommunication and Internet service providers, Internet exchange points, content delivery networks, network equipment vendors, and other private actors. He argues that four contextual factors are relevant to understanding the responsibility of private actors vis-à-vis human rights:

(1) private actors provide access to ‘a public good;’
(2) due to the technical nature of the Internet, any restrictions on access affect freedom of expression on a global level;
(3) the private sector is vulnerable to state pressure; but
(4) it is also in a unique position to respect users’ rights.

The report draws out the dilemma of the boundaries of responsibility. When should companies decide to comply with state policies that might undermine the rights of Internet end-users? What remedies should they offer end-users if they are complicit in human rights violations? How can private actors assess what impact their technologies might have on human rights?

Private actors across the spectrum, from multinational social media platforms to garage-based start-ups, are likely to run into these questions. As the Internet underpins a large part of the functioning of our societies, and will only continue to do so as physical devices increasingly become part of the network (the so-called Internet of Things), it is all the more important to understand and allocate private sector responsibility for protecting human rights.

The report has a dedicated addendum [7] that specifically details the responsibility of Internet Standard Developing Organizations (SDOs). In it, Kaye relies on the article written by Corinne Cath and Luciano Floridi of the Oxford Internet Institute (OII) entitled ‘The Design of the Internet’s Architecture by the Internet Engineering Task Force (IETF) and Human Rights’ [8] to support his argument that SDOs should take on a credible approach to human rights accountability.

Overall, Kaye argues that companies should adopt the UN Guiding Principles on Business and Human Rights [9], which would provide a ‘minimum baseline for corporate human rights accountability’. To operationalise this commitment, the private sector will need to take several urgent steps. It should ensure that sufficient resources are reserved for meeting its responsibility towards human rights, and it should integrate the principles of due diligence, human rights by design, stakeholder engagement, mitigation of the harms of government-imposed restrictions, transparency, and effective remedies to complement its ‘high level commitment to human rights’.

While this report is not binding [10] on states or companies, it does set out a much-needed, detailed blueprint of how to address questions of corporate responsibility towards human rights in the digital age.


[5] The author of this blog has written about this issue here:

Assessing the Ethics and Politics of Policing the Internet for Extremist Material

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).

Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organise, radicalise people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organisation, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn’t gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people’s first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely serve as an incubator of extremism and can serve to inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance—including civil society, academics, large corporations and governments.

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn’t ask government to participate in the social network: bureaucracies often struggle to be really flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to “speak the language” of those who might be vulnerable to radicalisation. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are ‘at risk’ or ‘a risk’. What we try to point to is that people can go from one to the other. They start by being ‘at risk’ of radicalisation, but if they do get radicalised and become a violent threat to society, which only happens in a minority of cases, then they become ‘a risk’. Engaging with people who are ‘at risk’ highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly suited for being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you’re right. In the report we try to take what is a very complicated concept—extremist material—and divide it into more manageable chunks of meaning. We talk about three hierarchical levels. The degree of legal consensus over whether content should be banned decreases as it gets less extreme. The first level we identified was straight-up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can’t incite racial hatred, you can’t incite people to crimes, and you can’t promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different than with provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called ‘extremist’ even though they are non-violent. One of the interviewees that we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things besides privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorised processes. When you drill down into how the right to privacy should be balanced with security, it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I., and Cowls, J., (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.

Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.