filter bubbles

Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing?

This is the second post in a series that will uncover great writing by faculty and students at the Oxford Internet Institute, things you should probably know, and things that deserve to be brought out for another viewing. This week: Fake News and Filter Bubbles!

Fake news, post-truth, “alternative facts”, filter bubbles—this is the news and media environment we apparently now inhabit, and that has formed the fabric and backdrop of Brexit (“£350 million a week”) and Trump (“This was the largest audience to ever witness an inauguration—period”). Do social media divide us, hide us from each other? Are you particularly aware of what content is personalised for you, what it is you’re not seeing? How much can we do with machine-automated or crowd-sourced verification of facts? And are things really any worse now than when Bacon complained in 1620 about the false notions that “are now in possession of the human understanding, and have taken deep root therein”?

1. Bernie Hogan: How Facebook divides us [Times Literary Supplement]

27 October 2016 | 1000 words | 5 minutes

“Filter bubbles can create an increasingly fractured population, such as the one developing in America. For the many people shocked by the result of the British EU referendum, we can also partially blame filter bubbles: Facebook literally filters our friends’ views that are least palatable to us, yielding a doctored account of their personalities.”

Bernie Hogan says it’s time Facebook considered ways to use the information it has about us to bring us together across political, ideological and cultural lines, rather than hide us from each other or push us into polarised and hostile camps. He says it’s not only possible for Facebook to help mitigate the issues of filter bubbles and context collapse; it’s imperative, and it’s surprisingly simple (a toy sketch of how this kind of filtering arises follows below).

2. Luciano Floridi: Fake news and a 400-year-old problem: we need to resolve the ‘post-truth’ crisis [the Guardian]

29 November 2016 | 1000…
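As a purely illustrative aside (not taken from Hogan’s piece), the kind of filtering he describes can arise from nothing more exotic than ranking posts by predicted engagement. The hypothetical snippet below sorts a friend’s posts by an invented “predicted_engagement” score and truncates the feed, so the least palatable views simply never surface; the function and field names are assumptions made for the example.

```python
# Illustrative sketch only: how engagement-optimised ranking can hide disagreeable views.
# The "predicted_engagement" scores are hypothetical; real systems learn them from behaviour.

def personalised_feed(posts, feed_size=3):
    """Keep only the posts the user is predicted to engage with most."""
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    return ranked[:feed_size]

friend_posts = [
    {"text": "Holiday photos",            "predicted_engagement": 0.9},
    {"text": "Cat video",                 "predicted_engagement": 0.8},
    {"text": "Agrees with your politics", "predicted_engagement": 0.7},
    {"text": "Challenges your politics",  "predicted_engagement": 0.2},  # never shown
]
for post in personalised_feed(friend_posts):
    print(post["text"])
```

Nothing in this toy example is adversarial: optimising for engagement alone is enough to yield the “doctored account” of a friend’s personality that Hogan describes.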

The algorithms that technology relies upon create a new type of curated media that can undermine the fairness and quality of political discourse.

The Facebook Wall, by René C. Nielsen (Flickr).

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information—and content personalisation systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse. A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalisation systems.

First, he explains the value of transparency to political discourse and suggests how content personalisation systems undermine the open exchange of ideas and evidence among participants: at a minimum, personalisation systems can undermine political discourse by curbing the diversity of ideas that participants encounter.

Second, he explores work on the detection of discrimination in algorithmic decision-making, including techniques of algorithmic auditing that service providers can employ to detect political bias.

Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers—content personalisation systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks.

Finally, Brent concludes with reflections on the need for regulation of content personalisation systems. He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalised content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalisation systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.

The right to transparency in political discourse may seem unusual and far-fetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine—no longer in force—and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealised version of political discourse described here. Both precedents…
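To make the idea of such an audit concrete, here is a minimal sketch, not drawn from the article itself: it assumes each feed item has already been given a hypothetical political-lean label (“left”, “centre”, “right”), and then compares the label shares and the diversity (Shannon entropy) of the feeds different users actually receive. Assigning those labels reliably is precisely the open problem the article flags.

```python
# Illustrative sketch only: a toy audit of political diversity in personalised feeds.
# Lean labels and the flagging threshold are assumptions made for this example.
from collections import Counter
from math import log2

def lean_distribution(feed):
    """Share of feed items per political-lean label."""
    counts = Counter(item["lean"] for item in feed)
    total = sum(counts.values())
    return {lean: n / total for lean, n in counts.items()}

def diversity(distribution):
    """Shannon entropy of the lean distribution (0 = one-sided, higher = more diverse)."""
    return -sum(p * log2(p) for p in distribution.values() if p > 0)

def audit(feeds_by_user):
    """Report per-user lean shares and diversity, flagging feeds below an arbitrary threshold."""
    for user, feed in feeds_by_user.items():
        dist = lean_distribution(feed)
        ent = diversity(dist)
        flag = "LOW DIVERSITY" if ent < 1.0 else "ok"
        print(f"{user}: {dist} entropy={ent:.2f} [{flag}]")

# Hypothetical personalised feeds for two users
feeds = {
    "alice": [{"lean": "left"}] * 9 + [{"lean": "right"}] * 1,
    "bob":   [{"lean": "left"}] * 4 + [{"lean": "centre"}] * 3 + [{"lean": "right"}] * 3,
}
audit(feeds)
```

A real audit would need principled, validated label assignments and agreed thresholds rather than the invented ones above, which is exactly the standards-setting work Brent identifies as the pressing area for future research.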