Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research

The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms.
Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network. Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.

Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm and activities online, albeit often outside the social sciences. With support from the OUP Fell Fund, I worked with colleagues Vera Slavtcheva-Petkova and Monica Bulger to review the extent of evidence available across these other disciplines. Looking at journal articles published between 1997 and 2012, we aimed to identify any empirical evidence detailing Internet-related harms experienced by children and adolescents and to gain a sense of the types of harm recorded, their severity and frequency.

Our findings demonstrate that there are many good studies out there which do address questions of harm, rather than just risk. The narrowly drawn search found 148 empirical studies which either clearly delineated evidence of very specific harms, or offered some evidence of less well-defined harms. Further, these studies offer rich insights into three broad types of harm: health-related (including harms relating to the exacerbation of eating disorders, self-harming behaviour and suicide attempts); sex-related (largely focused on studies of online solicitation and child abuse); and bullying-related (including the effects on mental health and behaviour). Such a range of coverage would come as no surprise to most researchers focusing on children’s Internet use – these are generally well-documented areas, albeit with the focus more normally on risk rather than harm. Perhaps more surprising was the absence in our search of evidence of harm in relation to privacy violations or economic well-being, both of which are increasingly discussed as significant concerns or risks for minors using the Internet. This gap might have been an artefact of our search terms, of course, but given the policy relevance of both issues, more empirical study of not just risk but actual harm would seem to be merited in these areas.

Another important gap concerned the absence of evidence that severe harms often befall those without prior vulnerability or risky behaviour. For example, in relation to websites promoting self-harm or eating disorders, there is little evidence that young people previously unaffected by self-harm or eating disorders are influenced by these websites. This isn’t unexpected – other researchers have shown that harm more often befalls those who display riskier behaviour, but this is important to bear in mind when devising treatment or policy strategies for reducing such harms.

It’s also worth noting how difficult it is to determine the prevalence of harms. The best-documented cases are often those where medical, police or court records provide great depth of qualitative detail about individual suffering in cases of online grooming and abuse, eating disorders or self-harm. Yet these cases provide little insight into prevalence. And whilst survey research offers more sense of scale, we found substantial disparities in the levels of harm reported on some issues, with the prevalence of cyber-bullying, for example, varying from 9% to 72% across studies with similar age groups of children. It’s also clear that we quite simply need much more research and policy attention on certain issues. The studies relating to the online grooming of children and production of abuse images are an excellent example of how a broad research base can make an important contribution to our understanding of online risks and harms. Here, journal articles offered a remarkably rich understanding, drawing on data from police reports, court records or clinical files as well as surveys and interviews with victims, perpetrators and carers. There would be real benefits to taking a similarly thorough approach to the study of users of pro-eating disorder, self-harm and pro-suicide websites.

Our review flagged up some important lessons for policy-makers. First, whilst we (justifiably) devote a wealth of resources to the small proportion of children experiencing severe harms as a result of online experiences, the number experiencing more minor harms, such as those caused by online bullying, is likely much higher and may thus deserve more attention than it currently receives. Second, the diversity of topics discussed and types of harm identified suggests that a one-size-fits-all solution will not work when it comes to the online protection of minors. Simply banning or filtering all potentially harmful websites, pages or groups might be more damaging than useful if it drives users to less public means of communicating. Further, whilst some content, such as child sexual abuse images, is clearly illegal and generates great harms, other content and sites are less easy to condemn where the balance between perpetuating harmful behaviour and providing valued peer support is hard to call.

Finally, this study makes an important contribution to public debates about child online safety by reminding us that risk and harm are not equivalent and should not be conflated. More children and young people are exposed to online risks than are actually harmed as a result, and our policy responses should reflect this. In this context, the need to protect minors from online harms must always be balanced against their rights and opportunities to freely express themselves and seek information online.

A more detailed account of our findings can be found in this Information, Communication and Society journal article: Evidence on the extent of harms experienced by children as a result of online risks: implications for policy and research. If you can’t access this, please e-mail me for a copy.

Victoria Nash is a Policy and Research Fellow at the Oxford Internet Institute (OII), responsible for connecting OII research with policy and practice. Her own particular research interests draw on her background as a political theorist, and concern the theoretical and practical application of fundamental liberal values in the Internet era. Recent projects have included efforts to map the legal and regulatory trends shaping freedom of expression online for UNESCO, analysis of age verification as a tool to protect and empower children online, and the role of information and Internet access in the development of moral autonomy.

Responsible research agendas for public policy in the era of big data

Last week the OII went to Harvard. Against the backdrop of a gathering storm of interest around the potential of computational social science to contribute to the public good, we sought to bring together leading social science academics with senior government agency staff to discuss its public policy potential. Supported by the OII-edited journal Policy and Internet and its owners, the Washington-based Policy Studies Organization (PSO), this one-day workshop facilitated a thought-provoking conversation between leading big data researchers such as David Lazer, Brooke Foucault-Welles and Sandra Gonzalez-Bailon, e-government experts such as Cary Coglianese, Helen Margetts and Jane Fountain, and senior agency staff from US federal bureaus including Labor Statistics, Census, and the Office of Management and Budget.

It’s often difficult to appreciate the impact of research beyond the ivory tower, but what this productive workshop demonstrated is that policy-makers and academics share many similar hopes and challenges in relation to the exploitation of ‘big data’. Our motivations and approaches may differ, but insofar as the youth of the ‘big data’ concept explains the lack of common language and understanding, there is value in mutual exploration of the issues. Although it’s impossible to do justice to the richness of the day’s interactions, some of the most pertinent and interesting conversations arose around the following four issues.

Managing a diversity of data sources. In a world where our capacity to ask important questions often exceeds the availability of data to answer them, many participants spoke of the difficulties of managing a diversity of data sources. For agency staff this issue comes into sharp focus when available administrative data that is supposed to inform policy formulation is either incomplete or inadequate. Consider, for example, the challenge of regulating an economy in a situation of fundamental data asymmetry, where private sector institutions track, record and analyse every transaction, whilst the state only has access to far more basic performance metrics and accounts. Such asymmetric data practices also affect academic research, where once again private sector tech companies such as Google, Facebook and Twitter often offer access only to portions of their data. In both cases participants gave examples of creative solutions using merged or blended data sources, which raise significant methodological and also ethical difficulties which merit further attention. The Berkman Center’s Rob Faris also noted the challenges of combining ‘intentional’ and ‘found’ data, where the former allow far greater certainty about the circumstances of their collection.

Data dictating the questions. If participants expressed the need to expend more effort on getting the most out of available but diverse data sources, several also cautioned against the dangers of letting data availability dictate the questions that could be asked. As we’ve experienced at the OII, for example, the availability of Wikipedia or Twitter data means that questions of unequal digital access (to political resources, knowledge production etc.) can often be addressed through the lens of these applications or platforms. But these data can provide only a snapshot, and large questions of great social or political importance may not easily be answered through such proxy measurements. Similarly, big data may be very helpful in providing insights into policy-relevant patterns or correlations, such as identifying early indicators of seasonal diseases or neighbourhood decline, but seem ill-suited to answering difficult questions regarding, say, the efficacy of small-scale family interventions. Just because the latter are harder to answer using currently vogue-ish tools doesn’t mean we should cease to ask these questions.

Ethics. Concerns about privacy are frequently raised as a significant limitation of the usefulness of big data. Given that supposedly anonymous data subjects may be re-identified by combining two or more data sets, the general consensus seems to be that ‘privacy is dead’. Whilst all participants recognised the importance of public debate around this issue, several academics and policy-makers expressed a desire to get beyond this discussion to a more nuanced consideration of appropriate ethical standards. Accountability and transparency are often held up as more realistic means of protecting citizens’ interests, but one workshop participant also suggested it would be helpful to encourage more public debate about acceptable and unacceptable uses of our data, to determine whether some uses might simply be deemed ‘off-limits’, whilst other uses could be accepted as offering few risks.

Accountability. Following on from this debate about the ethical limits of our uses of big data, discussion exposed the starkly differing standards to which government and academics (to say nothing of industry) are held accountable. As agency officials noted on several occasions it matters less what they actually do with citizens’ data, than what they are perceived to do with it, or even what it’s feared they might do. One of the greatest hurdles to be overcome here concerns the fundamental complexity of big data research, and the sheer difficulty of communicating to the public how it informs policy decisions. Quite apart from the opacity of the algorithms underlying big data analysis, the explicit focus on correlation rather than causation or explanation presents a new challenge for the justification of policy decisions, and consequently, for public acceptance of their legitimacy. As Greg Elin of Gitmachines emphasised, policy decisions are still the result of explicitly normative political discussion, but the justifiability of such decisions may be rendered more difficult given the nature of the evidence employed.

We could not resolve all these issues over the course of the day, but they served as pivot points for honest and productive discussion amongst the group. If nothing else, they demonstrate the value of interaction between academics and policy-makers in a research field where the stakes are set very high. We plan to reconvene in Washington in the spring.

*We are very grateful to the Policy Studies Organization (PSO) and the American Public University for their generous support of this workshop. The workshop “Responsible Research Agendas for Public Policy in the Era of Big Data” was held at the Harvard Faculty Club on 13 September 2013.

Also read: Big Data and Public Policy Workshop by Eric Meyer, workshop attendee and PI of the OII project Accessing and Using Big Data to Advance Social Science Knowledge.

Victoria Nash received her M.Phil in Politics from Magdalen College in 1996, after completing a First Class BA (Hons) Degree in Politics, Philosophy and Economics, before going on to complete a D.Phil in Politics from Nuffield College, Oxford University in 1999. She was a Research Fellow at the Institute of Public Policy Research prior to joining the OII in 2002. As Research and Policy Fellow at the OII, her work seeks to connect OII research with policy and practice, identifying and communicating the broader implications of OII’s research into Internet and technology use.

Personal data protection vs the digital economy? OII policy forum considers our digital footprints

Catching a bus, picking up some groceries, calling home to check on the children – all simple, seemingly private activities that characterise many people’s end to the working day. Yet each of these activities leaves a data trail that enables companies, even the state, to track the most mundane aspects of our lives. Add to this the range and quantity of personal data that many of us willingly post online on our blogs, Facebook walls or Google docs, and it is clear that the trail of digital footprints we leave is long and hard to erase.

Even if, in most cases, this data is only likely to be used in an anonymised and aggregated form to identify trends in transport or shopping patterns, or to personalise the Internet services available to us, the fact that its collection is now so routine and so extensive should make us question whether the regulatory system governing data collection, storage and use is fit for purpose. A forthcoming OII policy forum on Tracing the Policy Implications of the Future Digital Economy (16 Feb) will consider this question, bringing together leading academics from across several disciplines with policy-makers and industry experts.

This is a topic which the OII is well-placed to address. Ian Brown’s Privacy Values Network project addresses a major knowledge gap, measuring the various costs and benefits to individuals of handing over data in different contexts, as without this we simply don’t know how much people value their privacy (or indeed understand its limits). The last Oxford Internet Survey (OxIS) rather surprisingly showed that in 2009 people were significantly less concerned about privacy online in the UK than in previous years (45% of all those surveyed in 2009 against 66% in 2007); we wait to see whether this finding is repeated when OxIS 2011 goes into the field next month.

Our faculty also have much to say about the adequacy (or otherwise) of the regulatory framework: a recent report by Ian Brown and Douwe Korff on New Challenges to Data Protection identified for the European Commission the scale of the challenges presented to the current data protection regime, whilst Viktor Mayer-Schoenberger’s book Delete: The Virtue of Forgetting in the Digital Age has rightly raised the suggestion that personal information online should have an expiration date, to ensure it doesn’t hang around to embarrass us years later.

The forum will consider the way in which the market for information storage and collection is rapidly changing with the advent of new technologies, and on this point one conclusion is clear: if we accept Helen Nissenbaum’s contention that personal information and data should be collected and protected according to the social norms governing different social contexts, then we need to get to grips, and fast, with how these technologies are reshaping the ways in which we work, play, learn and consume.