Governance & Security

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed.

Reading of the NetMundial outcome document, by mikiwoz (Flickr CC BY-SA 2.0)

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed, and the role of the public in Internet governance debates is a critical issue for policymaking. The current dominant mechanism for public inclusion is the multistakeholder approach, i.e. one that includes governments, industry and civil society in governance debates. Despite at times being used as a shorthand for public inclusion, multistakeholder governance is implemented in many different ways and has faced criticism, with some arguing that multistakeholder discussions serve as a cover for the growth of state dominance over the Web, and enable oligarchic domination of discourses that are ostensibly open and democratic. In her Policy & Internet article “Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial”, Sarah Myers West examines the role of the public in Internet governance debates, with reference to public inclusion at the 2014 Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial). NETmundial emerged at a point when public legitimacy was a particular concern for the Internet governance community, so finding ways to include the rapidly growing and increasingly diverse group of stakeholders in the governance debate was especially important for the meeting’s success. This is particularly significant as the Internet governance community faces problems of increasing complexity and diversity of views. The growth of the Internet has made the public central to Internet governance—but it also introduces problems around the growing number of stakeholders speaking different languages, with different technical backgrounds, and different perspectives on the future of the Internet. However, rather than attempting to unify behind a single institution or achieve public consensus through a single, deliberative forum, the NETmundial example suggests that the Internet community may further fragment into multiple publics, redistributing into a more networked and “agonistic” model. This…

Peter John and Toby Blume design and report a randomised control trial that encouraged users of a disability parking scheme to renew online.

A randomised control trial that “nudged” users of a disability parking scheme to renew online showed a six percentage point increase in online renewals. Image: Wendell (Flickr).

In an era when most transactions occur online, it’s natural for public authorities to want the vast bulk of their contacts with citizens to occur through the Internet. But they also face a minority for whom paper and face-to-face interactions are still preferred or needed—leading to fears that efforts to move services online “by default” might reinforce or even encourage exclusion. Notwithstanding these fears, it might be possible to “nudge” citizens from long-held habits by making online submission advantageous and other routes more difficult to use. Behavioural public policy has been strongly advocated in recent years as a low-cost means to shift citizen behaviour, and has been used to reform many standard administrative processes in government. But how can we design non-obtrusive nudges to make users shift channels without them losing access to services? In their new Policy & Internet article “Nudges That Promote Channel Shift: A Randomised Evaluation of Messages to Encourage Citizens to Renew Benefits Online”, Peter John and Toby Blume design and report a randomised control trial that encouraged users of a disability parking scheme to renew online. They found that by simplifying messages and adding incentives (i.e. signalling the collective benefit of moving online), users were encouraged to switch from paper to online channels by about six percentage points. As a result of the intervention and ongoing efforts by the Council, virtually all the parking scheme users now renew online. The finding that it’s possible to appeal to citizens’ willingness to act for collective benefit is encouraging. The results also support the more general literature that shows that citizens’ use of online services is based on trust and confidence in public services and that interventions should go with the grain of citizen preferences and norms. We caught up with Peter John to discuss his findings, and the role of behavioural public policy in government: Ed.: Is it fair to say that the real innovation of behavioural…
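
For readers unfamiliar with how a headline figure like this is produced, the sketch below shows how a difference in online-renewal rates between a treatment and a control arm of a trial is typically estimated, together with a rough confidence interval. The group sizes and counts are invented for illustration and are not figures from John and Blume’s study.

```python
# Minimal sketch of estimating a channel-shift effect from a two-arm randomised trial.
# The group sizes and renewal counts below are illustrative placeholders only.
from math import sqrt

control_n, control_online = 2000, 500   # hypothetical control arm (standard letter)
treat_n, treat_online = 2000, 620       # hypothetical treatment arm (simplified "nudge" message)

p_control = control_online / control_n
p_treat = treat_online / treat_n
effect = p_treat - p_control            # difference in proportions (here 0.06, i.e. six percentage points)

# Standard error of the difference and a rough 95% confidence interval
se = sqrt(p_control * (1 - p_control) / control_n + p_treat * (1 - p_treat) / treat_n)
ci = (effect - 1.96 * se, effect + 1.96 * se)

print(f"Estimated effect: {effect:.1%} (95% CI {ci[0]:.1%} to {ci[1]:.1%})")
```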

Things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

This is the third post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun! The addictive gameplay of Pokémon GO has led to police departments warning people that they should be more careful about revealing their locations, players injuring themselves or finding dead bodies, and even the Holocaust Museum telling people to play elsewhere. Our environments are increasingly augmented with digital information: but how do we assert our rights over how and where this information is used? And should we be paying more attention to the design of persuasive technologies in increasingly attention-scarce environments? Or should we maybe just bin all our devices and pack ourselves off to digital detox camp? 1. James Williams: Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun 23 July 2016 | 2500 words | 12 min | Gross misuses of the “Poké-” prefix: 6 “The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets.” Pokémon GO signals the first mainstream adoption of a type of game—always on, always with you—that requires you to ‘Bring Your Own Boundaries’, says James Williams. Regulation of such games falls on the user, presenting us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology. 2. James Williams: Orwell, Huxley, Banksy 24 May 2014 | 1000 words | 5 min “Orwell worried that what we fear could ultimately come to control us: the “boot stamping on a human…

Are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure?

“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study. The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific—very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good? Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal. Here’s where “data financing” comes in. It’s a term we coined that’s based on innovative financing, a concept increasingly used in the philanthropic world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the…
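
As a back-of-envelope check on that scale comparison, the snippet below works through the arithmetic. It assumes the UN development aid spending goal referred to is the familiar target of 0.7% of gross national income (our assumption, not spelled out in the excerpt) and treats GDP and GNI as roughly interchangeable for this rough comparison.

```python
# Back-of-envelope check of the scale comparison in the text:
# the data economy (~7% of GDP in rich industrialised countries) versus the
# UN development aid spending target (assumed here to be 0.7% of GNI).
data_economy_share = 0.07    # ~7% of GDP, the estimate cited in the text
un_aid_target_share = 0.007  # assumed UN ODA target of 0.7% of GNI

ratio = data_economy_share / un_aid_target_share
print(f"The data economy is roughly {ratio:.0f}x the UN aid spending target")  # -> 10x
```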

The algorithms that content personalisation systems rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

The Facebook Wall, by René C. Nielsen (Flickr).

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information—and content personalisation systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse. A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalisation systems. First, he explains the value of transparency to political discourse and suggests how content personalisation systems undermine open exchange of ideas and evidence among participants: at a minimum, personalisation systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers—content personalisation systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalisation systems. He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalised content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalisation systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing. The right to transparency in political discourse may seem unusual and far-fetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine—no longer in force—and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealised version of political discourse described here. Both precedents…
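
To make the auditing idea a little more concrete, here is a minimal, hypothetical sketch of one step such an audit might involve: comparing the mix of political content a personalisation system delivered to a user against the mix in the unpersonalised candidate pool. It assumes the hard problem the article highlights, namely assigning political value labels to content, has already been solved; all data, names and thresholds are invented.

```python
# Hypothetical sketch of one auditing step: compare the political-label mix of content
# a personalisation system delivered to a user with the mix in the unpersonalised
# candidate pool. Labels are assumed to have been assigned already; the data is made up.
from collections import Counter

def label_shares(items):
    counts = Counter(items)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

def total_variation_distance(p, q):
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

candidate_pool = ["left"] * 50 + ["centre"] * 50 + ["right"] * 50   # unpersonalised baseline
delivered_feed = ["left"] * 40 + ["centre"] * 15 + ["right"] * 5    # what one user actually saw

skew = total_variation_distance(label_shares(delivered_feed), label_shares(candidate_pool))
print(f"Distributional skew between feed and pool: {skew:.2f}")     # 0 would mean an identical mix
```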

Applying elementary institutional economics to examine what blockchain technologies really do in terms of economic organisation, and what problems this gives rise to.

Bitcoin’s underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a “paradigm shift in the very idea of economic organisation”. But the OII’s Professor Vili Lehdonvirta contends that such revolutionary potentials may be undermined by a fundamental paradox that has to do with the governance of the technology. I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionise industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organisation, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively, you can watch a video of the talk below. https://www.youtube.com/watch?v=eNrzE_UfkTw First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies. In each case, I think it is fair to say that the reason that so many people are so excited about blockchain today is not the technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design…
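
For readers who have not met the data structure before, the sketch below illustrates the record-keeping core of a blockchain: an append-only ledger in which each block commits to its predecessor by hash, so that tampering with any past entry is detectable. It deliberately omits the consensus mechanism (mining, peer-to-peer validation) that the rest of the argument is concerned with; it illustrates the ledger idea only, not Bitcoin’s actual implementation.

```python
# A minimal sketch of an append-only, hash-linked ledger. Altering any past record
# changes that block's hash and breaks the link stored in the next block.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})
    return chain

def verify_chain(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify_chain(ledger))                       # True
ledger[0]["transactions"][0]["amount"] = 500      # tamper with an old record...
print(verify_chain(ledger))                       # ...and verification fails: False
```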

Exploring the complexities of policing the web for extremist material, and its implications for security, privacy and human rights.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project: Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.* *please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown. Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic? Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others. We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research. Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organise,…

For data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice.

Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data. Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb and the other data mongers who have built services based solely on digital assets. The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details. Then there are the companies who have invited you to sign away the rights to your data and possibly your…

Government involvement in crowdsourcing efforts can actually be used to control and regulate volunteers from the top down—not just to “mobilise them”.

Piled-up wood in a forest near Ryazan, Russia (May 2011), one winter after the devastating forest fires of 2010. Image: Max Mayorov (Flickr).

There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov’s article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy & Internet 7(3)) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down—not just to “mobilise them”. My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realised that a crowdsourcing platform can bring the participation of the citizen to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well-coordinated operations. What was also important was that both the needs and the forms of participation required to address them were defined by the users themselves. To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in the case of a natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different. Apparently the emergence of independent, citizen-based collective action in response to a disaster was considered a threat of some kind by the institutional actors. First, it was a threat to…
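
To give a flavour of what such a platform does mechanically, here is a hypothetical sketch of the matching idea behind a project like Help Map: citizens post both needs and offers of help, and the platform pairs them by category and rough proximity. The field names, coordinates and distance threshold are illustrative assumptions, not details of the actual project.

```python
# Hypothetical sketch of matching citizen-posted help requests to citizen-posted offers
# by category and rough geographic proximity. All data and thresholds are invented.
from math import hypot

requests = [
    {"id": 1, "category": "evacuation", "location": (54.6, 39.7)},
    {"id": 2, "category": "supplies",   "location": (55.0, 38.9)},
]
offers = [
    {"id": 10, "category": "supplies",   "location": (55.1, 39.0)},
    {"id": 11, "category": "evacuation", "location": (54.7, 39.6)},
]

def distance(a, b):
    # Crude planar distance in degrees, adequate for a toy example
    return hypot(a[0] - b[0], a[1] - b[1])

def match(requests, offers, max_degrees=0.5):
    """Pair each request with the nearest offer of the same category, if close enough."""
    pairs = []
    for req in requests:
        candidates = [o for o in offers if o["category"] == req["category"]]
        if not candidates:
            continue
        nearest = min(candidates, key=lambda o: distance(o["location"], req["location"]))
        if distance(nearest["location"], req["location"]) <= max_degrees:
            pairs.append((req["id"], nearest["id"]))
    return pairs

print(match(requests, offers))   # [(1, 11), (2, 10)]
```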