Governance & Security

Examining the extent to which local governments in the UK are using intelligence from big data, in light of the structural barriers they face when trying to exploit it.

Many local governments have reams of data (both hard data and soft data) on local inhabitants and local businesses. Image: Chris Dawkins (Flickr CC BY-NC-ND 2.0).

The concept of Big Data has become very popular over the last decade, with many large technology companies successfully building their business models around its exploitation. The UK’s public sector has tried to follow suit, with local governments in particular trying to introduce new models of service delivery based on the routine extraction of information from their own big data. These attempts have been hailed as the beginning of a new era for the public sector, with some commentators suggesting that it could help local governments transition toward a model of service delivery where the quantity and quality of commissioned services are underpinned by data intelligence on users and their current and future needs.

In their Policy & Internet article “Data Intelligence for Local Government? Assessing the Benefits and Barriers to Use of Big Data in the Public Sector”, Fola Malomo and Vania Sena examine the extent to which local governments in the UK are indeed using intelligence from big data, in light of the structural barriers they face when trying to exploit it. Their analysis suggests that the ambitions around the development of big data capabilities in local government are not reflected in actual use. Indeed, these methods have mostly been employed to develop new digital channels for service delivery, and even if the financial benefits of these initiatives are documented, very little is known about the benefits they generate for local communities. While this is slowly changing as councils start to develop their big data capability, the overall impression gained from even a cursory overview is that the full potential of big data is yet to be exploited.

We caught up with the authors to discuss their findings:

Ed.: So what actually is “the full potential” that local government is supposed to be aiming for? What exactly is the promise of “big data” in this context?

Fola / Vania: Local governments seek to improve service delivery…

Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content.

Lord of the Rings slot machines at the Flamingo, image by jenneze (Flickr CC BY-NC 2.0). Unlike gambling played for real money, “social casino games” generally have no monetary prizes.

Social casino gaming, which simulates gambling games on a social platform such as Facebook, is a nascent but rapidly growing industry—social casino game revenues grew 97 percent between 2012 and 2013, with the market reaching US$3.5 billion by the end of 2015. Unlike gambling played for real money, social casino games generally have no monetary prizes and are free-to-play, although they may include some optional monetised features. The size of the market and users’ demonstrated interest in gambling-themed activities mean that social casino gamers are an attractive market for many gambling operators, and several large international gambling companies have merged with social casino game operators. Some operators consider the games to be a source of additional revenue in jurisdictions where online gambling is largely illegal, or a way to attract new customers to a land-based gambling venue. Hybrid models are also emerging, with the potential for tangible rewards for playing social casino games. This merging of gaming and gambling means that many previously established boundaries are becoming blurred, and at many points the two are indistinguishable. However, content analysis of game content and advertising can help researchers, industry, and policymakers better understand how the two entertainment forms overlap.

In their Policy & Internet article “Gambling Games on Social Platforms: How Do Advertisements for Social Casino Games Target Young Adults?”, Brett Abarbanel, Sally M. Gainsbury, Daniel King, Nerilee Hing, and Paul H. Delfabbro undertake a content analysis of 115 social casino gaming advertisements captured by young adults during their regular Internet use. They find that advertisement imagery typically features images likely to appeal to young adults, with message themes including the glamorisation and normalisation of gambling. Notably, nearly 90 percent of the advertisements contained no responsible or problem gambling language, despite the gambling-like content. Gambling advertisements currently face much stricter restrictions on exposure and distribution than do social casino game advertisements, despite the latter containing much gambling-themed content designed to attract consumers…

The popularity of technologies and services that reveal insights about our daily lives paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection—and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance. This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By framing privacy as a choice between involvement in (or isolation from) various social and economic communities, they present information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist”—an autonomous individual who makes informed decisions about the disclosure of their personal information. But there is increasing evidence that “control” is a problematic framework through which to operationalise privacy.

In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference—a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and…

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed.

Reading of the NetMundial outcome document, by mikiwoz (Flickr CC BY-SA 2.0)

The Internet is neither purely public nor private, but combines public and private networks, platforms, and interests. Given its complexity and global importance, there is clearly a public interest in how it is governed, and the role of the public in Internet governance debates is a critical issue for policymaking. The current dominant mechanism for public inclusion is the multistakeholder approach, i.e. one that includes governments, industry and civil society in governance debates. Despite at times being used as a shorthand for public inclusion, multistakeholder governance is implemented in many different ways and has faced criticism, with some arguing that multistakeholder discussions serve as a cover for the growth of state dominance over the Web, and enable oligarchic domination of discourses that are ostensibly open and democratic.

In her Policy & Internet article “Searching for the Public in Internet Governance: Examining Infrastructures of Participation at NETmundial”, Sarah Myers West examines the role of the public in Internet governance debates, with reference to public inclusion at the 2014 Global Multistakeholder Meeting on the Future of Internet Governance (NETmundial). NETmundial emerged at a point when public legitimacy was a particular concern for the Internet governance community, so finding ways to include the rapidly growing and increasingly diverse group of stakeholders in the governance debate was especially important for the meeting’s success. This is particularly significant as the Internet governance community faces problems of increasing complexity and diversity of views. The growth of the Internet has made the public central to Internet governance—but it also introduces problems around the growing number of stakeholders speaking different languages, with different technical backgrounds, and different perspectives on the future of the Internet. However, rather than attempting to unify behind a single institution or achieve public consensus through a single, deliberative forum, the article suggests that the Internet community may further fragment into multiple publics, redistributing into a more networked and “agonistic” model. This…

Peter John and Toby Blume design and report a randomised controlled trial that encouraged users of a disability parking scheme to renew online.

A randomised controlled trial that “nudged” users of a disability parking scheme to renew online showed a six percentage point increase in online renewals. Image: Wendell (Flickr).

In an era when most transactions occur online, it’s natural for public authorities to want the vast bulk of their contacts with citizens to occur through the Internet. But they also face a minority for whom paper and face-to-face interactions are still preferred or needed—leading to fears that efforts to move services online “by default” might reinforce or even encourage exclusion. Notwithstanding these fears, it might be possible to “nudge” citizens from long-held habits by making online submission advantageous and other routes of use more difficult. Behavioural public policy has been strongly advocated in recent years as a low-cost means to shift citizen behaviour, and has been used to reform many standard administrative processes in government. But how can we design non-obtrusive nudges to make users shift channels without them losing access to services?

In their new Policy & Internet article “Nudges That Promote Channel Shift: A Randomised Evaluation of Messages to Encourage Citizens to Renew Benefits Online”, Peter John and Toby Blume design and report a randomised controlled trial that encouraged users of a disability parking scheme to renew online. They found that simplifying messages and adding incentives (i.e. signalling the collective benefit of moving online) encouraged users to switch from paper to online channels, increasing online renewals by about six percentage points. As a result of the intervention and ongoing efforts by the Council, virtually all the parking scheme users now renew online. The finding that it’s possible to appeal to citizens’ willingness to act for collective benefit is encouraging. The results also support the more general literature showing that citizens’ use of online services is based on trust and confidence in public services, and that interventions should go with the grain of citizen preferences and norms.

We caught up with Peter John to discuss his findings, and the role of behavioural public policy in government:

Ed.: Is it fair to say that the real innovation of behavioural…
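As a rough illustration of where a channel-shift effect of this size comes from, the sketch below computes the difference in online-renewal rates between a treatment and a control group, with a normal-approximation confidence interval. The counts are hypothetical and are not the figures from John and Blume’s trial.

```python
# A minimal sketch (not the authors' analysis) of estimating a channel-shift
# effect as a difference in online-renewal proportions between trial arms.
from math import sqrt

def effect_with_ci(treat_online, treat_n, ctrl_online, ctrl_n, z=1.96):
    """Difference in proportions with a normal-approximation 95% CI."""
    p_t = treat_online / treat_n
    p_c = ctrl_online / ctrl_n
    diff = p_t - p_c
    se = sqrt(p_t * (1 - p_t) / treat_n + p_c * (1 - p_c) / ctrl_n)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts: 1,000 scheme users per arm.
diff, ci = effect_with_ci(treat_online=460, treat_n=1000, ctrl_online=400, ctrl_n=1000)
print(f"Effect: {diff:+.1%} (95% CI {ci[0]:+.1%} to {ci[1]:+.1%})")
```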

Things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

This is the third post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Reality, Augmented Reality and Ambient Fun!

The addictive gameplay of Pokémon GO has led to police departments warning people that they should be more careful about revealing their locations, players injuring themselves, finding dead bodies, and even the Holocaust Museum telling people to play elsewhere. Our environments are increasingly augmented with digital information: but how do we assert our rights over how and where this information is used? And should we be paying more attention to the design of persuasive technologies in increasingly attention-scarce environments? Or should we maybe just bin all our devices and pack ourselves off to digital detox camp?

1. James Williams: Bring Your Own Boundaries: Pokémon GO and the Challenge of Ambient Fun
23 July 2016 | 2500 words | 12 min | Gross misuses of the “Poké-” prefix: 6

“The slogan of the Pokémon franchise is ‘Gotta catch ‘em all!’ This phrase has always seemed to me an apt slogan for the digital era as a whole. It expresses an important element of the attitude we’re expected to have as we grapple with the Sisyphean boulder of information abundance using our woefully insufficient cognitive toolsets.”

Pokémon GO signals the first mainstream adoption of a type of game—always on, always with you—that requires you to ‘Bring Your Own Boundaries’, says James Williams. Regulation of the games falls on the user, presenting us with a unique opportunity to advance the conversation about the ethics of self-regulation and self-determination in environments of increasingly persuasive technology.

2. James Williams: Orwell, Huxley, Banksy
24 May 2014 | 1000 words | 5 min

“Orwell worried that what we fear could ultimately come to control us: the “boot stamping on a human…

Are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure?

“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.

The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific—very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good? Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal.

Here’s where “data financing” comes in. It’s a term we coined that’s based on innovative financing, a concept increasingly used in the philanthropic world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the…

The algorithms that content personalisation systems rely upon create a new type of curated media that can undermine the fairness and quality of political discourse.

The Facebook Wall, by René C. Nielsen (Flickr).

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information—and content personalisation systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalisation systems. First, he explains the value of transparency to political discourse and suggests how content personalisation systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalisation systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers—content personalisation systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalisation systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalised content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalisation systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing. The right to transparency in political discourse may seem unusual and far-fetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine—no longer in force—and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealised version of political discourse described here. Both precedents…
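To make the idea of auditing for political bias more concrete, here is a minimal sketch of one check a service provider might run: compare how political value labels are distributed across the content delivered to different users. The label names, feeds, and flagging threshold are invented for illustration and are not drawn from Mittelstadt’s article.

```python
# A minimal, hypothetical audit: compare the distribution of political value
# labels across two users' personalised feeds and flag large gaps.
from collections import Counter

def label_shares(feed):
    """Share of each political label in a list of labelled content items."""
    counts = Counter(item["label"] for item in feed)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def audit(feed_a, feed_b, threshold=0.10):
    """Flag labels whose share differs between the two feeds by more than `threshold`."""
    a, b = label_shares(feed_a), label_shares(feed_b)
    return {
        label: round(abs(a.get(label, 0.0) - b.get(label, 0.0)), 3)
        for label in set(a) | set(b)
        if abs(a.get(label, 0.0) - b.get(label, 0.0)) > threshold
    }

# Hypothetical feeds served to two otherwise similar users.
feed_user1 = [{"label": "pro-policy-X"}] * 70 + [{"label": "anti-policy-X"}] * 30
feed_user2 = [{"label": "pro-policy-X"}] * 40 + [{"label": "anti-policy-X"}] * 60
print(audit(feed_user1, feed_user2))  # e.g. {'pro-policy-X': 0.3, 'anti-policy-X': 0.3}
```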

Applying elementary institutional economics to examine what blockchain technologies really do in terms of economic organisation, and what problems this gives rise to.

Bitcoin’s underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a “paradigm shift in the very idea of economic organisation”. But the OII’s Professor Vili Lehdonvirta contends that such revolutionary potential may be undermined by a fundamental paradox that has to do with the governance of the technology.

I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionise industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organisation, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively, you can watch a video of the talk below.

https://www.youtube.com/watch?v=eNrzE_UfkTw

First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies.

In each case, I think it is fair to say that the reason so many people are so excited about blockchain today is not its technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design…
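As a concrete illustration of what an “ongoing ledger of transactions” means at the data-structure level, here is a minimal, assumption-laden sketch of a hash-chained ledger. It is a toy, not Bitcoin’s actual implementation: it omits proof-of-work, the peer-to-peer network, and the consensus rules that make the real system work.

```python
# A minimal sketch of the hash-chained ledger idea behind blockchains and
# distributed ledger technologies. Illustrative only: real systems such as
# Bitcoin add proof-of-work, Merkle trees, and peer-to-peer consensus.
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the hash of the previous block."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

def is_valid(chain):
    """Check that every block still matches the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(ledger))                        # True
ledger[0]["transactions"][0]["amount"] = 500   # tamper with the recorded history
print(is_valid(ledger))                        # False: the chain no longer verifies
```

The point of the sketch is simply that each block commits to its predecessor’s hash, so retrospectively altering a recorded transaction invalidates every later block unless the whole chain is rewritten, which is what makes a distributed ledger tamper-evident.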