Child Safety

It’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving digital developments.

“Whether your child is an artist, a storyteller, a singer or a scientist, I’m the lovable little friend that will bring that out!” says the Fisher-Price Smart Toy Bear.

Everyone of a certain age remembers logging on over a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to confine this distinction firmly to history: the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh children as they go about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smart phones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries and health centre waiting rooms.

2. Research confirms that apps on smart phones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles and smart TVs) are picking up data produced by children.

Marketing about smart toys tells us they are enhancing children’s play, augmenting children’s learning, incentivising children’s healthy habits, and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images. This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star…

So are young people completely unconcerned about their privacy online, gaily granting access to everything to everyone? Well, in a word, no.

A pretty good idea of what not to do on a social media site. Image by Sean MacEntee.

Standing on a stage in San Francisco in early 2010, Facebook founder Mark Zuckerberg, partly responding to the site’s decision to change the privacy settings of its 350 million users, announced that as Internet users had become more comfortable sharing information online, privacy was no longer a “social norm”. Of course, he had an obvious commercial interest in relaxing norms surrounding online privacy, but this attitude has nevertheless been widely echoed in the popular media. Young people are supposed to be sharing their private lives online—and providing huge amounts of data for commercial and government entities—because they don’t fully understand the implications of the public nature of the Internet.

There has actually been little systematic research on the privacy behaviour of different age groups in online settings. But there is certainly evidence of a growing (general) concern about online privacy (Marwick et al., 2010), with a 2013 Pew study finding that 50 percent of Internet users were worried about the information available about them online, up from 30 percent in 2009. Following the recent revelations about the NSA’s surveillance activities, a Washington Post-ABC poll reported 40 percent of its U.S. respondents as saying that it was more important to protect citizens’ privacy even if it limited the ability of the government to investigate terrorist threats.

But what of young people, specifically? Do they really care less about their online privacy than older users? Privacy concerns an individual’s ability to control what personal information about them is disclosed, to whom, when, and under what circumstances. We present different versions of ourselves to different audiences, and the expectations and norms of the particular audience (or context) will determine what personal information is presented or kept hidden. This highlights a fundamental problem with privacy in some SNSs: that of ‘context collapse’ (Marwick and boyd 2011).…

If we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger.

Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, under which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network.

Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions. Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm…

Parents have different and often conflicting views about what’s best for their children. What’s helpful to one group of parents may not actually benefit parents or youth as a whole.

Ed: You’ve spent a great deal of time studying the way that children and young people use the Internet, much of which focuses on the positive experiences that result. Why do you think this is so under-represented in public debate?

boyd/Hargittai: The public has many myths about young people’s use of technology. This is often perpetuated by media coverage that focuses on the extremes. Salacious negative headlines often capture people’s attention, even if the practices or incidents described are outliers and do not represent the majority’s experiences. While focusing on extremely negative and horrific incidents is a great way to attract attention and get readers, it does a disservice to young people, their parents, and ultimately society as a whole. As researchers, we believe that it’s important to understand the nuances of what people experience when they engage with technology. Thus, we are interested in gaining a better understanding of their everyday practices—both the good and the bad. Our goal is to introduce research that can help contextualise socio-technical practices and provide insight into the diversity of viewpoints and perspectives that shape young people’s use of technology.

Ed: Your paper suggests we need a more granular understanding of how parental concerns relating to the Internet can vary across different groups. Why is this important? What are the main policy implications of this research?

boyd/Hargittai: Parents are often seen as the target of policy interventions. Many lawmakers imagine that they’re designing laws to help empower parents, but when you ask them to explain which parents they are empowering, it becomes clear that there’s an imagined parent that is not always representative of the diverse views and perspectives of all parents. We’re not opposed to laws that enable parents to protect their children, but we’re concerned whenever a class of people, especially a class as large as “parents,” is viewed as homogenous. Parents have different and often conflicting views about what’s best…

Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children.

The recent announcement by ‘Anonymous Belgium’ (above) that it would ‘liberate the Belgian Web’ on 15 July 2013 in response to the blocking of websites by the Belgian government was revealed to be a promotional stunt by a commercial law firm protesting non-transparent blocking of online content.

Ed: European legislation introduced in 2011 requires Member States to ensure the prompt removal of child pornography websites hosted in their territory and to endeavour to obtain the removal of such websites hosted outside; leaving open the option to block access by users within their own territory. What is problematic about this blocking?

Authors: From a technical point of view, all possible blocking methods that could be used by Member States are ineffective, as they can all be circumvented very easily. Widely available technologies (like encryption or proxy servers) or tiny changes in computer configurations (for instance the choice of DNS server), which may also be used for better performance or the enhancement of security or privacy, enable circumvention of blocking methods. Another problem arises from the fact that this legislation only targets website content, while offenders often use other technologies such as peer-to-peer systems, newsgroups or email.

Ed: Many of these blocking activities stem from European efforts to combat child pornography, but you suggest that child protection may be used as a way to add other types of content to lists of blocked sites—notably those that purportedly violate copyright. Can you explain how this “mission creep” is occurring, and what the risks are?

Authors: Combating child pornography and child abuse is a universal and legitimate concern. With regard to this subject there is a worldwide consensus that action must be undertaken in order to punish abusers and protect children. Blocking measures are usually advocated on the basis of the argument that access to these images must be prevented, thereby preventing users from inadvertently stumbling upon child pornography. Whereas this seems reasonable with regard to this particular type of content, in some countries governments increasingly use blocking mechanisms for other ‘illegal’ content, such as gambling or copyright-infringing content, often in a very non-transparent way, without clear or established procedures. It is, in our view, especially important at a…
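As a purely illustrative aside on the point about DNS configuration (this sketch is not drawn from the interview): the snippet below sends the same A-record lookup to two different resolvers. Because the resolver is a user-changeable client setting, a domain filtered by an ISP’s default resolver can usually still be resolved through any unfiltered public resolver. The domain, the resolver addresses (192.0.2.1 standing in for a filtering ISP resolver, 8.8.8.8 as a public one) and the simplified response parsing are assumptions made to keep the example short.

```python
# Minimal sketch (standard library only) of why resolver-level blocking is easy
# to sidestep: the same DNS query sent to two different resolvers can return
# different answers, and changing resolver is a simple client-side setting.
# The domain and resolver IPs below are illustrative placeholders.
import socket
import struct

def dns_a_query(domain: str, resolver_ip: str, timeout: float = 3.0) -> list[str]:
    """Send one UDP DNS query for an A record and return the IPv4 answers."""
    # Header: fixed ID, standard query with recursion desired, one question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in domain.split(".")
    ) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(query, (resolver_ip, 53))
        response, _ = sock.recvfrom(512)

    # Tiny parser: the question section is echoed back, so answers start right
    # after it. Assumes the common case of compressed (2-byte) answer names.
    answer_count = struct.unpack(">H", response[6:8])[0]
    pos = len(query)
    addresses = []
    for _ in range(answer_count):
        pos += 2  # skip the compressed name pointer
        rtype, _rclass, _ttl, rdlength = struct.unpack(">HHIH", response[pos:pos + 10])
        pos += 10
        if rtype == 1 and rdlength == 4:  # A record
            addresses.append(socket.inet_ntoa(response[pos:pos + 4]))
        pos += rdlength
    return addresses

if __name__ == "__main__":
    domain = "example.com"  # placeholder domain, not a blocked site
    # 192.0.2.1 is a documentation address standing in for an ISP resolver;
    # 8.8.8.8 is Google's public resolver.
    for resolver in ("192.0.2.1", "8.8.8.8"):
        try:
            print(resolver, "->", dns_a_query(domain, resolver))
        except OSError as exc:
            print(resolver, "-> query failed:", exc)
```

The same logic explains why proxies and encryption defeat URL- or content-level blocking: the filtering point simply never sees the request it is supposed to intercept.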

Despite large investments of law enforcement resources, online child exploitation is nowhere near under control, and while there are numerous technological products to aid this, they still require substantial human intervention.

The Internet has provided the social, individual, and technological circumstances needed for child pornography to flourish. Sex offenders have been able to utilise the Internet for dissemination of child pornographic content, for social networking with other pedophiles through chatrooms and newsgroups, and for sexual communication with children. A 2009 United Nations report estimates that there are more than four million websites containing child pornography, with 35 percent of them depicting serious sexual assault [1]. Even if this report or others exaggerate the true prevalence of those websites by a wide margin, the fact of the matter is that such websites are pervasive on the Web.

Despite large investments of law enforcement resources, online child exploitation is nowhere near under control, and while there are numerous technological products to aid in finding child pornography online, they still require substantial human intervention. Nevertheless, steps can be taken to increase the automation of these searches, to reduce the amount of content police officers have to examine, and to increase the time they can spend on investigating individuals. While law enforcement agencies will aim for maximum disruption of online child exploitation networks by targeting the most connected players, there is a general lack of research on the structural nature of these networks; something we aimed to address in our study, by developing a method to extract child exploitation networks, map their structure, and analyse their content.

Our custom-written Child Exploitation Network Extractor (CENE) automatically crawls the Web from a user-specified seed page, collecting information about the pages it visits by recursively following the links out of each page; the result of the crawl is a network structure containing information about the content of the websites, and the linkages between them [2]. We chose ten websites as starting points for the crawls; four were selected from a list of known child pornography websites while the other six were selected and…
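To make the crawling pattern described above concrete, the sketch below shows a generic link-structure crawler: a breadth-first traversal from a seed page that follows hyperlinks and records the result as a directed graph. It is not the authors’ CENE tool; the seed URL, the page limit, and the omission of any content analysis or classification are assumptions made to keep the example short and self-contained.

```python
# Generic sketch of a link-structure crawler: breadth-first from a seed page,
# following hyperlinks and recording which pages link to which. This is an
# illustration of the general technique only, not the CENE tool described in
# the text; content analysis and classification are deliberately omitted.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed: str, max_pages: int = 50) -> dict[str, set[str]]:
    """Return an adjacency map: page URL -> set of URLs it links to."""
    graph: dict[str, set[str]] = {}
    queue = deque([seed])
    seen = {seed}
    while queue and len(graph) < max_pages:
        url = queue.popleft()
        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable or non-responding pages
        parser = LinkExtractor()
        parser.feed(html)
        outlinks = set()
        for href in parser.links:
            target = urljoin(url, href)  # resolve relative links
            if urlparse(target).scheme in ("http", "https"):
                outlinks.add(target)
                if target not in seen:
                    seen.add(target)
                    queue.append(target)
        graph[url] = outlinks
    return graph

if __name__ == "__main__":
    network = crawl("https://example.com")  # placeholder seed page
    for page, targets in network.items():
        print(page, "->", len(targets), "outgoing links")
```

The adjacency map returned by a crawl like this is what makes the structural analysis mentioned in the study possible: once pages and links are represented as a graph, measures such as how connected a given site is can be computed directly.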