Articles

Explaining why many political mobilisations of our times seem to come from nowhere.

Cross-posted from the Princeton University Press blog. The authors of Political Turbulence discuss how the explosive rises, non-normal distributions and lack of organisation that characterise contemporary politics as a chaotic system can explain why many political mobilisations of our times seem to come from nowhere.

On 23rd June 2016, a majority of the British public voted in a referendum on whether to leave the European Union. The Leave or so-called #Brexit option was victorious, with a margin of 52% to 48% across the country, although Scotland, Northern Ireland, London and some towns voted to remain. The result was a shock to leave and remain supporters alike. US readers might note that when the polls closed, the odds on futures markets of Brexit (15%) were longer than those of Trump being elected President. Political scientists are reeling from the sheer volume of politics that has been packed into the month after the result. From the Prime Minister’s morning-after resignation on 24th June, the country was mired in political chaos, with almost every political institution challenged and under question in the aftermath of the vote, including both the Conservative and Labour parties and the existence of the United Kingdom itself, given Scotland’s resistance to leaving the EU. The eventual formation of a government under a new prime minister, Theresa May, has brought some stability. But she was not elected, and her government has a tiny majority of only 12 Members of Parliament. A cartoon by Matt in the Telegraph on July 2nd (which would work for almost any day) showed two students, one of them saying ‘I’m studying politics. The course covers the period from 8am on Thursday to lunchtime on Friday.’ All these events—the campaigns to remain or leave, the post-referendum turmoil, resignations, sackings and appointments—were played out on social media; the speed of change and the unpredictability of events were far too great for conventional media to keep pace. So our book, Political Turbulence: How Social Media Shape Collective Action, can provide a way to think about the past weeks. The book focuses on how social media allow new, ‘tiny acts’ of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory…

The Government Digital Service (GDS) isn’t perfect, but to erase the progress it has put in place would be a terrible loss.

Technology and the public sector have rarely been happy bedfellows in the UK, where every government technology project seems doomed to arrive late, underperform and come in over budget. The Government Digital Service (GDS) was created to drag the civil service into the 21st century, making services “digital by default”, cheaper, faster, and easier to use. It quickly won accolades for its approach and early cost savings. But then its leadership departed, not once or twice but three times—the latter two within the last few months. The largest government departments have begun to reassert their authority over GDS expert advice, and digital government looks likely to be dragged back towards the deeply dysfunctional old ways of doing things. GDS isn’t perfect, but to erase the progress it has put in place would be a terrible loss. The UK government’s use of technology has previously lagged far behind that of other countries. Low usage of digital services rendered them expensive and inefficient. Digital operations were often handicapped by complex networks of legacy systems, some dating right back to the 1970s. The development of the long-promised “digital era governance” was mired in a series of mega-contracts: huge in terms of cost, scope and timescale, bigger than any attempted by other governments worldwide, and to be delivered by the same handful of giant global computer consulting firms that rarely saw any challenge to their grip on public contracts. Departmental silos ensured there were no economies of scale, shared services failed, and the Treasury negotiated with 24 departments individually for their IT expenditure. Some commentators (including this one) were a little sceptical on our first encounter with GDS. We had seen it before: the Office of the e-Envoy set up by Tony Blair in 1999, superseded by the E-government Unit (2004-7), and then Directgov until 2010.

Successes and failures

In many ways GDS has been a success story, with former prime minister David Cameron calling it one of the “great unsung triumphs…

Advancing the practical and theoretical basis for how we conceptualise and shape the infosphere.

Photograph of workshop participants by David Peter Simon.

On June 27 the Ethics and Philosophy of Information Cluster at the OII hosted a workshop to foster a dialogue between the discipline of Information Architecture (IA) and the Philosophy of Information (PI), and advance the practical and theoretical basis for how we conceptualise and shape the infosphere. A core topic of concern is how we should develop better principles to understand design practices. This need surfaces when IA looks to other disciplines, such as linguistics, design thinking, new media studies and architecture, to develop the theoretical foundations that can back and/or inform its practice. Within the philosophy of information, the need to understand general principles of (conceptual or informational) design arises in relation to the question of how we develop and adopt the right level of abstraction (what Luciano Floridi calls the logic of design). This suggests a two-way interaction between PI and IA. On the one hand, PI can become part of the theoretical background that informs Information Architecture, as one of the disciplines from which it can borrow concepts and theories. On the other hand, the philosophy of information can benefit from the rich practice of IA and the growing body of critical reflection on how, within a particular context, access to online information is best designed. Throughout the workshop, two themes emerged: the need for more integrated ways to reason about and describe (a) informational artefacts and infrastructures, (b) the design processes that lead to their creation, and (c) the requirements to which they should conform. This presupposes a convergence between the things we build (informational artefacts) and the conceptual apparatus we rely on (the levels of abstraction we adopt), which surfaces in IA as well as in PI. At the same time, it also calls for novel frameworks and linguistic abstractions. This need to reframe the ways that we observe informational phenomena could be discerned in several contributions to the workshop. It surfaced in the more…

Drawing on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project.

The Everyday Sexism Project catalogues instances of sexism experienced by women on a day-to-day basis. We will be using computational techniques to extract the most commonly occurring sexism-related topics.

As Laura Bates, founder of the Everyday Sexism Project, has recently highlighted, “it seems to be increasingly difficult to talk about sexism, equality, and women’s rights” (Everyday Sexism Project, 2015). With many theorists suggesting that we have entered a so-called “post-feminist” era in which gender equality has been achieved (cf. McRobbie, 2008; Modleski, 1991), to complain about sexism not only risks being labelled as “uptight”, “prudish”, or a “militant feminist”, but also exposes those who speak out to sustained, and at times vicious, personal attacks (Everyday Sexism Project, 2015). Despite this, thousands of women are speaking out, through Bates’ project, about their experiences of everyday sexism. Our research seeks to draw on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project and the lived experiences of those who post them. Here, we outline the literature which contextualises our study. Studies on sexism are far from new. Indeed, particularly amongst feminist theorists and sociologists, the analysis (and deconstruction) of “inequality based on sex or gender categorisation” (Harper, 2008) has formed a central tenet of both academic inquiry and a radical politics of female emancipation for several decades (De Beauvoir, 1949; Friedan, 1963; Millett, 1971; Rubin, 1975). Reflecting its feminist origins, historical research on sexism has broadly focused on defining sexist interactions (cf. Glick and Fiske, 1997) and on highlighting the problematic, biologically rooted ‘gender roles’ that form the foundation of inequality between men and women (Millett, 1971; Renzetti and Curran, 1992; Chodorow, 1995). More recent studies, particularly in the field of psychology, have shifted the focus away from whether and how sexism exists, towards an examination of the psychological, personal, and social implications that sexist incidents have for the women who experience them. As such, theorists such as Matteson and Moradi (2005), Swim et al. (2001) and Jost and…

Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good.

The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy-Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings. The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of worldwide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the Summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI—with the academic partners in listening mode. We hoped that participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations. At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does—collect tax from the whole population, give money away at scale, or exercise the legitimate use of force—it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are…

Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care.

Online forums are an important means for people living with health conditions to obtain both emotional and informational support from others in a similar situation. Pictured: The Alzheimer Society of B.C. unveiled three life-size ice sculptures depicting important moments in life. The ice sculptures will melt, representing the fading of life memories on the dementia journey. Image: bcgovphotos (Flickr)

Online support groups are being used increasingly by individuals who suffer from a wide range of medical conditions. OII DPhil student Ulrike Deetjen’s recent article with John Powell, “Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis”, uses machine learning to examine the role of online support groups in the healthcare process. They categorise 40,000 online posts from one of the most widely used forums to show how users with different conditions receive different types of support. Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care. They provide a platform for health discussions formerly restricted by time and place, enable individuals to connect with others in similar situations, and facilitate open, anonymous communication. Previous studies have identified that individuals primarily obtain two kinds of support from online support groups: informational (for example, advice on treatments, medication, symptom relief, and diet) and emotional (for example, receiving encouragement, being told they are in others’ prayers, receiving “hugs”, or being told that they are not alone). However, existing research has been limited in that it has often used hand-coded qualitative approaches to contrast both forms of support, thereby examining relatively few posts (<1,000) for one or two conditions. In contrast, our research employed a machine-learning approach suitable for uncovering patterns in “big data”. Using this method, a computer (which initially has no knowledge of online support groups) is given examples of informational and emotional posts (2,000 examples in our study). It then “learns” which words are associated with each category (emotional: prayers, sorry, hugs, glad, thoughts, deal, welcome, thank, god, loved, strength, alone, support, wonderful, sending; informational: effects, started, weight, blood, eating, drink, dose, night, recently, taking, side, using, twice, meal). The computer then uses this knowledge to assess new posts and to decide whether they contain more emotional or informational support. With this approach we were able to determine the emotional or informational content of 40,000…
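The study’s actual pipeline is not reproduced in this excerpt, but the kind of classifier described can be sketched in a few lines. Below is a minimal, illustrative example using scikit-learn’s multinomial Naive Bayes (one common “Bayesian approach” to text classification); the training posts and labels are invented stand-ins for the study’s 2,000 hand-coded examples, not the authors’ data.

```python
# Illustrative sketch only, not the authors' code: a multinomial Naive Bayes
# classifier that learns word-category associations from labelled posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the 2,000 hand-labelled training posts.
train_posts = [
    "Sending hugs and prayers, you are not alone.",
    "So glad you have support, stay strong, you are in my thoughts.",
    "Try taking the dose twice a day with a meal to reduce side effects.",
    "Watch your weight and blood sugar when starting the medication.",
]
train_labels = ["emotional", "emotional", "informational", "informational"]

# CountVectorizer turns each post into word counts; Naive Bayes estimates
# how strongly each word is associated with each category.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_posts, train_labels)

# The trained model can then label the remaining corpus of posts.
new_posts = ["Thank you all so much, sending strength and hugs."]
print(model.predict(new_posts))        # e.g. ['emotional']
print(model.predict_proba(new_posts))  # per-category probabilities
```

In practice one would train on thousands of labelled posts and validate against held-out hand-coded data before labelling the full 40,000-post corpus.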

How does the topic modelling algorithm ‘discover’ the topics within the context of everyday sexism?

We recently announced the start of an exciting new research project that will involve the use of topic modelling in understanding the patterns in submitted stories to the Everyday Sexism website. Here, we briefly explain our text analysis approach, “topic modelling”. At its very core, topic modelling is a technique that seeks to automatically discover the topics contained within a group of documents. ‘Documents’ in this context could refer to text items as lengthy as individual books, or as short as sentences within a paragraph. Let’s take the idea of sentences-as-documents as an example:

Document 1: I like to eat kippers for breakfast.
Document 2: I love all animals, but kittens are the cutest.
Document 3: My kitten eats kippers too.

Assume that each sentence contains a mixture of different topics, and that a ‘topic’ can be understood as a collection of words (of any part of speech) that have different probabilities of appearing in passages discussing the topic. How, then, does the topic modelling algorithm ‘discover’ the topics within these sentences? The algorithm is initiated by setting the number of topics that it needs to extract. Of course, it is hard to guess this number without insight into the topics themselves, but one can think of it as a resolution tuning parameter: the smaller the number of topics, the more general the bag of words in each topic, and the looser the connections between them. The algorithm loops through all of the words in each document, assigning every word to one of our topics in a temporary and semi-random manner. This initial assignment is arbitrary, and it is easy to show that different initialisations lead to the same results in the long run. Once each word has been assigned a temporary topic, the algorithm then re-iterates through each word in each document to update the topic assignment using two criteria: 1) How prevalent is the word in question across topics? And 2) How prevalent are the…
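To make the procedure concrete, here is a minimal collapsed Gibbs sampler for LDA run over the three toy sentences above. This is an illustrative sketch, not our project code: the number of topics K, the hyperparameters alpha and beta, the number of sweeps, and the hand tokenisation are all assumptions made for the example.

```python
# Illustrative sketch: collapsed Gibbs sampling for LDA on a toy corpus.
import random
from collections import defaultdict

docs = [
    ["eat", "kippers", "breakfast"],
    ["love", "animals", "kittens", "cutest"],
    ["kitten", "eats", "kippers"],
]
K = 2                     # number of topics, set before the algorithm runs
alpha, beta = 0.1, 0.01   # smoothing hyperparameters (assumed values)
V = len({w for d in docs for w in d})  # vocabulary size

random.seed(0)

# Counts the sampler maintains.
ndk = [[0] * K for _ in docs]               # topic counts per document
nkw = [defaultdict(int) for _ in range(K)]  # word counts per topic
nk = [0] * K                                # total words per topic
z = []                                      # topic assignment of each word

# Step 1: assign every word to a topic semi-randomly.
for d, doc in enumerate(docs):
    zd = []
    for w in doc:
        t = random.randrange(K)
        zd.append(t)
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    z.append(zd)

# Step 2: repeatedly revisit each word and resample its topic using
# the two criteria described in the text.
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [
                # criterion 1: how prevalent is this word within topic k?
                (nkw[k][w] + beta) / (nk[k] + V * beta)
                # criterion 2: how prevalent is topic k in this document?
                * (ndk[d][k] + alpha)
                for k in range(K)
            ]
            t = random.choices(range(K), weights)[0]
            z[d][i] = t
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Inspect the most frequent words per topic.
for k in range(K):
    top = sorted(nkw[k], key=nkw[k].get, reverse=True)
    print(f"Topic {k}:", top[:3])
```

After enough sweeps the assignments stabilise: the per-topic word counts give the ‘bag of words’ for each topic, and the per-document topic counts give each document’s topic mixture.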

Homejoy was slated to become the Uber of domestic cleaning services. It was a platform that allowed customers to summon a cleaner as easily as they could hail a ride. So why did it fail?

Homejoy CEO Adora Cheung appears on stage at the 2014 TechCrunch Disrupt Europe/London, at The Old Billingsgate on October 21, 2014 in London, England. Image: TechCrunch (Flickr)

Platforms that enable users to come together and buy and sell services with confidence, such as Uber, have become remarkably popular, with the companies often transforming the industries they enter. In this blog post the OII’s Vili Lehdonvirta analyses why the domestic cleaning platform Homejoy failed to achieve such success. He argues that when buyers and sellers enter into repeated transactions they can communicate directly, and as such often abandon the platform. Homejoy was slated to become the Uber of domestic cleaning services. It was a platform that allowed customers to summon a cleaner as easily as they could hail a ride. Regular cleanups were just as easy to schedule. Ratings from previous clients attested to the skill and trustworthiness of each cleaner. There was no need to go through a cleaning services agency, or scour local classifieds to find a cleaner directly: the platform made it easy for both customers and people working as cleaners to find each other. Homejoy made its money by taking a cut of each transaction. Given how incredibly successful Uber and Airbnb had been in applying the same model to their industries, Homejoy was widely expected to become the next big success story. It was to be the next step in the inexorable uberisation of every industry in the economy. On 17 July 2015, Homejoy announced that it was shutting down. Usage had grown more slowly than expected, revenues remained poor, technical glitches hurt operations, and the company was being hit with lawsuits over contractor misclassification. Investors’ money and patience had finally run out. Journalists wrote interesting analyses of Homejoy’s demise (Forbes, TechCrunch, Backchannel). The root causes of any major business failure (or indeed success) are complex and hard to pinpoint. However, one of the possible explanations identified in these stories stands out, because it corresponds strongly with what theory on platforms and markets could have predicted. Homejoy wasn’t growing and making money because clients and cleaners were taking their relationships off-platform:…