Articles

New research suggests that very few of those who play internet-based video games have symptoms suggesting they may be addicted.

New research by Andrew Przybylski (OII, Oxford University), Netta Weinstein (Cardiff University), and Kou Murayama (Reading University), published today in the American Journal of Psychiatry, suggests that very few of those who play internet-based video games have symptoms suggesting they may be addicted. The article also says that gaming, though popular, is unlikely to be as addictive as gambling. Two years ago the American Psychiatric Association (APA) identified a critical need for good research into whether internet gamers run a risk of becoming addicted, and asked how such an addiction might be diagnosed properly. To the authors’ knowledge, these are the first findings from a large-scale project to produce robust evidence on the potential new problem of “internet gaming disorder”. The authors surveyed 19,000 men and women from nationally representative samples from the UK, the United States, Canada and Germany, with over half saying they had played internet games recently. Out of the total sample, 1% of young adults (18-24 year olds) and 0.5% of the general population (aged 18 or older) reported symptoms linking play to possible addictive behaviour—less than half of recently reported rates for gambling. They warn that researchers studying the potential “darker sides” of internet-based games must be cautious: extrapolating from their data, as many as a million American adults might meet the proposed DSM-5 criteria for addiction to online games, a large cohort of people potentially struggling with clinically dysregulated behaviour. However, because the authors found no evidence of a clear link to clinical outcomes, they warn that more evidence for clinical and behavioural effects is needed before concluding that internet gaming disorder is a legitimate candidate for inclusion in future revisions of the DSM. If adopted, internet gaming disorder would vie for limited therapeutic resources with a range of serious psychiatric disorders. Read the full article: Andrew K. Przybylski, Netta Weinstein, Kou Murayama (2016) Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon.…

What are the dangers or new opportunities of digital media?

Did Twitter lead to Donald Trump’s rise and success to date in the American campaign for the presidency? Image: Gage Skidmore (Flickr).

One of the major debates in relation to digital media in the United States has been whether they contribute to political polarisation. I argue in a new paper (Rethinking Digital Media and Political Change) that Twitter led to Donald Trump’s rise and success to date in the American campaign for the presidency. There is plenty of evidence to show that Trump received a disproportionate amount of attention on Twitter, which in turn generated a disproportionate amount of attention in the mainstream media. The strong correlation between the two suggests that Trump was able to bypass the gatekeepers of the traditional media. A second ingredient in his success has been populism, which rails against dominant political elites (including the Republican party) and the ‘biased’ media. Populism also rests on the notion of an ‘authentic’ people—by implication excluding ‘others’ such as immigrants and foreign powers like the Chinese—to whom the leader appeals directly. The paper draws parallels with the strength of the Sweden Democrats, an anti-immigrant party which, in a similar way, has been able to appeal to its following via social media and online newspapers, again bypassing mainstream media with its populist message. There is a difference, however: in the US, commercial media compete for audience share, so Trump’s controversial tweets have been eagerly embraced by journalists seeking high viewership and readership ratings. In Sweden, where public media dominate and there is far less of the ‘horserace’ coverage that characterises American politics, the Sweden Democrats have been more locked out of the mainstream media and of politics. In short, Twitter plus populism has led to Trump. I argue that dominating the mediated attention space is crucial. How this story ends will be known in November. But whatever the outcome, it is already clear that the role of the media in politics, and how they can be circumvented by new media, requires fundamental rethinking.
Ralph Schroeder is Professor and director…

Explaining why many political mobilisations of our times seem to come from nowhere.

Cross-posted from the Princeton University Press blog. The authors of Political Turbulence discuss how the explosive rise, non-normal distribution and lack of organisation that characterise contemporary politics as a chaotic system can explain why many political mobilisations of our times seem to come from nowhere.

On 23rd June 2016, a majority of the British public voted in a referendum on whether to leave the European Union. The Leave or so-called #Brexit option was victorious, with a margin of 52% to 48% across the country, although Scotland, Northern Ireland, London and some towns voted to remain. The result was a shock to leave and remain supporters alike. US readers might note that when the polls closed, the odds on futures markets of Brexit (15%) were longer than those of Trump being elected President. Political scientists are reeling from the sheer volume of politics that has been packed into the month after the result. From the Prime Minister’s morning-after resignation on 24th June, the country was mired in political chaos, with almost every political institution challenged and under question in the aftermath of the vote, including both the Conservative and Labour parties and the existence of the United Kingdom itself, given Scotland’s resistance to leaving the EU. The eventual formation of a government under a new prime minister, Theresa May, has brought some stability. But she was not elected and her government has a tiny majority of only 12 Members of Parliament. A cartoon by Matt in the Telegraph on July 2nd (which would work for almost any day) showed two students, one of them saying ‘I’m studying politics. The course covers the period from 8am on Thursday to lunchtime on Friday.’ All these events—the campaigns to remain or leave, the post-referendum turmoil, resignations, sackings and appointments—were played out on social media; the speed of change and the unpredictability of events were far too great for conventional media to keep pace. So our book, Political Turbulence: How Social Media Shape Collective Action, can provide a way to think about the past weeks. The book focuses on how social media allow new, ‘tiny acts’ of political participation (liking, tweeting, viewing, following, signing petitions and so on), which turn social movement theory…

The Government Digital Service (GDS) isn’t perfect, but to erase the progress it has put in place would be a terrible loss.

Technology and the public sector have rarely been happy bedfellows in the UK, where every government technology project seems doomed to arrive late, underperform and come in over budget. The Government Digital Service (GDS) was created to drag the civil service into the 21st century, making services “digital by default”, cheaper, faster, and easier to use. It quickly won accolades for its approach and early cost savings. But then its leadership departed, not once or twice but three times—the last two within recent months. The largest government departments have begun to reassert their authority over GDS expert advice, and digital government looks likely to be dragged back towards the deeply dysfunctional old ways of doing things. GDS isn’t perfect, but to erase the progress it has put in place would be a terrible loss. The UK government’s use of technology has previously lagged far behind that of other countries. Low usage of digital services rendered them expensive and inefficient. Digital operations were often handicapped by complex networks of legacy systems, some dating right back to the 1970s. The development of the long-promised “digital era governance” was mired in a series of mega-contracts: huge in terms of cost, scope and timescale, bigger than any attempted by other governments worldwide, and to be delivered by the same handful of giant global computer consulting firms that rarely saw any challenge to their grip on public contracts. Departmental silos ensured there were no economies of scale, shared services failed, and the Treasury negotiated with 24 departments individually for their IT expenditure. Some commentators (including this one) were a little sceptical on our first encounter with GDS. We had seen it before: the Office of the e-Envoy set up by Tony Blair in 1999, superseded by the E-government Unit (2004-7), and then Directgov until 2010.
Successes and failures

In many ways GDS has been a success story, with former prime minister David Cameron calling it one of the “great unsung triumphs…

Advancing the practical and theoretical basis for how we conceptualise and shape the infosphere.

Photograph of workshop participants by David Peter Simon.

On June 27 the Ethics and Philosophy of Information Cluster at the OII hosted a workshop to foster a dialogue between the discipline of Information Architecture (IA) and the Philosophy of Information (PI), and advance the practical and theoretical basis for how we conceptualise and shape the infosphere. A core topic of concern is how we should develop better principles to understand design practices. This need surfaces when IA looks to other disciplines, such as linguistics, design thinking, new media studies and architecture, to develop the theoretical foundations that can back and/or inform its practice. Within the philosophy of information, the need to understand general principles of (conceptual or informational) design arises in relation to the question of how we develop and adopt the right level of abstraction (what Luciano Floridi calls the logic of design). This suggests a two-way interaction between PI and IA. On the one hand, PI can become part of the theoretical background that informs Information Architecture, as one of the disciplines from which it can borrow concepts and theories. The philosophy of information, on the other hand, can benefit from the rich practice of IA and the growing body of critical reflection on how, within a particular context, access to online information is best designed. Throughout the workshop, two themes emerged: the need for more integrated ways to reason about and describe (a) informational artefacts and infrastructures, (b) the design processes that lead to their creation, and (c) the requirements to which they should conform. This presupposes a convergence between the things we build (informational artefacts) and the conceptual apparatus we rely on (the levels of abstraction we adopt), which surfaces in IA as well as in PI. At the same time, it also calls for novel frameworks and linguistic abstractions.
This need to reframe the ways that we observe informational phenomena could be discerned in several contributions to the workshop. It surfaced in the more…

Drawing on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project.

The Everyday Sexism Project catalogues instances of sexism experienced by women on a day-to-day basis. We will be using computational techniques to extract the most commonly occurring sexism-related topics.

As Laura Bates, founder of the Everyday Sexism project, has recently highlighted, “it seems to be increasingly difficult to talk about sexism, equality, and women’s rights” (Everyday Sexism Project, 2015). With many theorists suggesting that we have entered a so-called “post-feminist” era in which gender equality has been achieved (cf. McRobbie, 2008; Modleski, 1991), to complain about sexism not only risks being labelled as “uptight”, “prudish”, or a “militant feminist”, but also exposes those who speak out to sustained, and at times vicious, personal attacks (Everyday Sexism Project, 2015). Despite this, thousands of women are speaking out, through Bates’ project, about their experiences of everyday sexism. Our research seeks to draw on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project and the lived experiences of those who post them. Here, we outline the literature which contextualises our study. Studies on sexism are far from new. Indeed, particularly amongst feminist theorists and sociologists, the analysis (and deconstruction) of “inequality based on sex or gender categorisation” (Harper, 2008) has formed a central tenet of both academic inquiry and a radical politics of female emancipation for several decades (De Beauvoir, 1949; Friedan, 1963; Rubin, 1975; Millett, 1971). Reflecting its feminist origins, historical research on sexism has broadly focused on defining sexist interactions (cf. Glick and Fiske, 1997) and on highlighting the problematic, biologically rooted ‘gender roles’ that form the foundation of inequality between men and women (Millett, 1971; Renzetti and Curran, 1992; Chodorow, 1995). 
More recent studies, particularly in the field of psychology, have shifted the focus away from whether and how sexism exists, towards an examination of the psychological, personal, and social implications that sexist incidents have for the women who experience them. As such, theorists such as Matteson and Moradi (2005), Swim et al (2001) and Jost and…
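Full topic modelling (for example, latent Dirichlet allocation) is beyond a short sketch, but the first step of surfacing the most commonly occurring terms across a corpus of reports can be illustrated minimally. Everything below is invented for illustration: the example reports are not real submissions, and the stop-word list is an arbitrary toy choice.

```python
from collections import Counter
import re

# Invented example reports standing in for Everyday Sexism submissions.
reports = [
    "A man on the bus commented on my body and would not stop",
    "My boss said the project needed a man to lead it",
    "Catcalled on my walk to work again this morning",
    "A colleague repeated my idea in the meeting and took the credit",
]

# Toy stop-word list; a real pipeline would use a curated one.
STOPWORDS = {"a", "on", "my", "the", "and", "to", "it",
             "in", "not", "would", "said", "this"}

def frequent_terms(texts, top_n=5):
    """Count non-stopword terms across all reports as a crude topic signal."""
    words = Counter()
    for t in texts:
        words.update(w for w in re.findall(r"[a-z]+", t.lower())
                     if w not in STOPWORDS)
    return words.most_common(top_n)

print(frequent_terms(reports))  # "man" tops the list, appearing in two reports
```

A genuine topic model goes further, grouping co-occurring terms into latent themes rather than ranking words individually, but the tokenise-filter-count step above underlies both.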

Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good.

The benefits of big data and data science for the private sector are well recognised. So far, considerably less attention has been paid to the power and potential of the growing field of data science for policy-making and public services. On Monday 14th March 2016 the Oxford Internet Institute (OII) and the Alan Turing Institute (ATI) hosted a Summit on Data Science for Government and Policy Making, funded by the EPSRC. Leading policy makers, data scientists and academics came together to discuss how the ATI and government could work together to develop data science for the public good. The convenors of the Summit, Professors Helen Margetts (OII) and Tom Melham (Computer Science), report on the day’s proceedings. The Alan Turing Institute will build on the UK’s existing academic strengths in the analysis and application of big data and algorithm research to place the UK at the forefront of world-wide research in data science. The University of Oxford is one of five university partners, and the OII is the only partnering department in the social sciences. The aim of the summit on Data Science for Government and Policy-Making was to understand how government can make better use of big data and the ATI—with the academic partners in listening mode. We hoped that the participants would bring forward their own stories, hopes and fears regarding data science for the public good. Crucially, we wanted to work out a roadmap for how different stakeholders can work together on the distinct challenges facing government, as opposed to commercial organisations. At the same time, data science research and development has much to gain from the policy-making community. Some of the things that government does—collect tax from the whole population, or give money away at scale, or possess the legitimate use of force—it does by virtue of being government. So the sources of data and some of the data science challenges that public agencies face are…

Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care.

Online forums are an important means for people living with health conditions to obtain both emotional and informational support from others in a similar situation. Pictured: The Alzheimer Society of B.C. unveiled three life-size ice sculptures depicting important moments in life. The ice sculptures will melt, representing the fading of life memories on the dementia journey. Image: bcgovphotos (Flickr)

Online support groups are being used increasingly by individuals who suffer from a wide range of medical conditions. OII DPhil student Ulrike Deetjen’s recent article with John Powell, “Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis”, uses machine learning to examine the role of online support groups in the healthcare process. They categorise 40,000 online posts from one of the most widely used forums to show how users with different conditions receive different types of support. Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care. They provide a platform for health discussions formerly restricted by time and place, enable individuals to connect with others in similar situations, and facilitate open, anonymous communication. Previous studies have identified that individuals primarily obtain two kinds of support from online support groups: informational (for example, advice on treatments, medication, symptom relief, and diet) and emotional (for example, receiving encouragement, being told they are in others’ prayers, receiving “hugs”, or being told that they are not alone). However, existing research has been limited as it has often used hand-coded qualitative approaches to contrast both forms of support, thereby only examining relatively few posts (<1,000) for one or two conditions. In contrast, our research employed a machine-learning approach suitable for uncovering patterns in “big data”. Using this method, a computer (which initially has no knowledge of online support groups) is given examples of informational and emotional posts (2,000 examples in our study).
It then “learns” what words are associated with each category (emotional: prayers, sorry, hugs, glad, thoughts, deal, welcome, thank, god, loved, strength, alone, support, wonderful, sending; informational: effects, started, weight, blood, eating, drink, dose, night, recently, taking, side, using, twice, meal). The computer then uses this knowledge to assess new posts, and decide whether they contain more emotional or informational support. With this approach we were able to determine the emotional or informational content of 40,000…
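The learn-then-classify step described above can be sketched as a minimal naive Bayes text classifier. Everything below is illustrative: the six training sentences are invented (echoing the word lists reported in the article), and the study’s actual Bayesian model and 2,000 hand-coded posts are not reproduced here.

```python
import math
from collections import Counter

# Toy labelled examples standing in for the hand-coded training posts.
train = [
    ("sending hugs and prayers you are not alone", "emotional"),
    ("so sorry glad you have support and strength", "emotional"),
    ("thank you for your kind thoughts god bless", "emotional"),
    ("started taking the dose twice a day with a meal", "informational"),
    ("side effects include weight gain watch your blood sugar", "informational"),
    ("drink plenty of water and avoid eating late at night", "informational"),
]

def train_nb(examples):
    """Estimate per-class word log-probabilities with Laplace smoothing."""
    counts = {"emotional": Counter(), "informational": Counter()}
    class_docs = Counter()
    for text, label in examples:
        class_docs[label] += 1
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    total_docs = sum(class_docs.values())
    model = {}
    for label, c in counts.items():
        n = sum(c.values())  # total words seen in this class
        model[label] = {
            "prior": math.log(class_docs[label] / total_docs),
            "logp": {w: math.log((c[w] + 1) / (n + len(vocab))) for w in vocab},
            "unseen": math.log(1 / (n + len(vocab))),  # smoothed out-of-vocab mass
        }
    return model

def classify(model, text):
    """Return the class with the highest posterior log-probability."""
    scores = {
        label: m["prior"] + sum(m["logp"].get(w, m["unseen"]) for w in text.split())
        for label, m in model.items()
    }
    return max(scores, key=scores.get)

model = train_nb(train)
print(classify(model, "sending prayers and hugs you are not alone"))   # emotional
print(classify(model, "started the dose twice with side effects"))    # informational
```

A real pipeline would add proper tokenisation, stop-word handling and held-out evaluation, but the principle is the same: each class accumulates word probabilities from labelled examples, and new posts are assigned to whichever class makes their words most likely.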