algorithms

Exploring the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice

Algorithmic systems (such as those deciding mortgage applications or informing sentencing decisions) can be very difficult to understand, for experts as well as the general public. Image: Ken Lane (CC BY-NC 2.0).

The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem—to put it mildly. In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR” which is forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm. Relying on counterfactual explanations as a means to help us act rather than merely to understand could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.

We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:

Ed: There’s a lot of discussion about algorithmic “black boxes” — where decisions are made about us, using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?

Sandra: Basically, every decision that can be made by a human can now be made by an algorithm, which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and…
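The idea is easiest to see in code. Below is a minimal sketch of a counterfactual search, not the authors’ method: the paper frames it as an optimisation problem (find the closest input to the original for which the model’s decision flips), whereas this toy version uses an invented two-feature “loan model” and naive random sampling.

```python
# Toy counterfactual search: the smallest change to a denied loan application
# that this model would approve. The features, model, and random-search
# strategy are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "loan model": approve when (normalised) income exceeds debt.
X = rng.normal(size=(500, 2))                    # columns: income, debt
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # 1 = approved
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, target, n_samples=20000, sigma=1.0):
    """Closest randomly sampled input that the model maps to `target`."""
    candidates = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    hits = candidates[model.predict(candidates) == target]
    if len(hits) == 0:
        return None
    # L1 distance prefers changing few features, by small amounts.
    dists = np.abs(hits - x).sum(axis=1)
    return hits[np.argmin(dists)]

applicant = np.array([-0.5, 0.8])                # low income, high debt
print(model.predict(applicant.reshape(1, -1)))   # [0] -> loan denied
cf = counterfactual(model, applicant, target=1)
print(np.round(cf - applicant, 2))               # minimal changes for approval
```

The returned difference is exactly the kind of statement a counterfactual explanation makes—“your loan would have been approved had your income been this much higher”—without exposing the model’s internals.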

Mark Zuckerberg has responded with the strange claim that his company does not influence people’s decisions. So what role did social media play in the political events of 2016?

After Brexit and the election of Donald Trump, 2016 will be remembered as the year of cataclysmic democratic events on both sides of the Atlantic. Social media has been implicated in the wave of populism that led to both these developments. Attention has focused on echo chambers, with many arguing that social media users exist in ideological filter bubbles, narrowly focused on their own preferences, prey to fake news and political bots, reinforcing polarisation and leading voters to turn away from the mainstream. Mark Zuckerberg has responded with the strange claim that his company (built on $5 billion of advertising revenue) does not influence people’s decisions. So what role did social media play in the political events of 2016?

Political turbulence and the new populism

There is no doubt that social media has brought change to politics. From the waves of protest and unrest in response to the 2008 financial crisis, to the Arab spring of 2011, there has been a generalised feeling that political mobilisation is on the rise, and that social media had something to do with it. Our book investigating the relationship between social media and collective action, Political Turbulence, focuses on how social media allows new, “tiny acts” of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory around. Rather than identifying with issues, forming collective identity and then acting to support the interests of that identity—or voting for a political party that supports it—in a social media world, people act first, and think about it, or identify with others later, if at all.

These tiny acts of participation can scale up to large-scale mobilisations, such as demonstrations, protests or campaigns for policy change. But they almost always don’t. The overwhelming majority (99.99%) of petitions to the UK or US governments fail to get the 100,000 signatures required for a parliamentary debate (UK) or an official response (US). The very few that…

The algorithms technology relies upon create a new type of curated media that can undermine the fairness and quality of political discourse.

The Facebook Wall, by René C. Nielsen (Flickr).

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information—and content personalisation systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse. A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalisation systems.

First, he explains the value of transparency to political discourse and suggests how content personalisation systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalisation systems can undermine political discourse by curbing the diversity of ideas that participants encounter.

Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias.

Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers—content personalisation systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks.

Finally, Brent concludes with reflections on the need for regulation of content personalisation systems. He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalised content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalisation systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.

The right to transparency in political discourse may seem unusual and far-fetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine—no longer in force—and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealised version of political discourse described here. Both precedents…
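To make the auditing step concrete, here is a minimal sketch assuming the hard problem Brent identifies (assigning political value labels to content) were already solved: with labelled items, a provider could compare the viewpoint diversity of a personalised feed against the unpersonalised catalogue. The labels, feeds, and entropy-based diversity measure are all illustrative assumptions, not a method from the article.

```python
# Sketch of a diversity audit for a personalised feed, assuming each item
# already carries a political value label (the open problem noted above).
from collections import Counter
from math import log2

def viewpoint_entropy(labels):
    """Shannon entropy of the label distribution; 0 bits = one viewpoint only."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical labelled content: the full catalogue vs one user's feed.
catalogue = ["left", "right", "centre", "left", "right", "centre"]
user_feed = ["left", "left", "left", "centre", "left", "left"]

print(f"catalogue diversity: {viewpoint_entropy(catalogue):.2f} bits")  # 1.58
print(f"feed diversity:      {viewpoint_entropy(user_feed):.2f} bits")  # 0.65
# A sustained drop across many users would be evidence that personalisation
# is curbing the diversity of ideas participants encounter.
```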

Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits.

Donald Trump in Reno, Nevada, by Darron Birgenheier (Flickr).

This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits. Platforms like Twitter and Facebook now provide a structure for our political lives. We’ve always relied on many kinds of sources for our political news and information. Family, friends, news organisations and charismatic politicians certainly predate the internet. But whereas those are sources of information, social media now provides the structure for political conversation. And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends. Many of us just assume that there is a modicum of truth-in-advertising: we expect it from advertisements for commercial goods and services, but not from politicians and political parties. Occasionally a political actor gets punished for betraying the public trust through their misinformation campaigns. But in the United States “political speech” is completely free from reasonable public oversight, and in most other countries the media organisations and public offices for watching politicians are legally constrained, poorly financed, or themselves untrustworthy. Research demonstrates that during the campaigns for Brexit and the U.S. presidency, large volumes of fake news stories, false factoids, and absurd claims were passed over social media networks, often by Twitter’s highly automated accounts and Facebook’s algorithms.

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”. When offered the choice of who to spend time with or which organisations to trust, we prefer to strengthen our ties to the people and organisations we already know and like. When offered a choice of news stories, we prefer to read about the issues we already care about,…

Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good.

The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy-Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings.

The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of world-wide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the Summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI—with the academic partners in listening mode. We hoped that the participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations.

At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does—collect tax from the whole population, or give money away at scale, or possess the legitimate use of force—it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are…