Why we shouldn’t be pathologizing online gaming before the evidence is in

Internet-based video games are a ubiquitous form of recreation pursued by the majority of adults and young people. With sales eclipsing box office receipts, games are now an integral part of modern leisure. However, the American Psychiatric Association (APA) recently identified Internet Gaming Disorder (IGD) as a potential psychiatric condition and has called for research to investigate the potential disorder’s validity and its impacts on health and behaviour.

Research responding to this call for a better understanding of IGD is still at a formative stage, and there are active debates surrounding it. A growing literature suggests that excessive or problematic gaming may be related to poorer health, though findings in this area are mixed. Some argue for a theoretical framing akin to a substance abuse disorder (i.e. where gaming is considered to be inherently addictive), while others frame Internet-based gaming as a self-regulatory challenge for individuals.

In their article “A prospective study of the motivational and health dynamics of Internet Gaming Disorder”, Netta Weinstein, the OII’s Andrew Przybylski, and Kou Murayama address this gap in the literature by linking self-regulation and Internet Gaming Disorder research. Drawing on a representative sample of 5,777 American adults, they examine how problematic gaming emerges from a state of individual “dysregulation” and how it predicts health — finding no evidence directly linking IGD to health over time.

This negative finding indicates that IGD may not, in itself, be robustly associated with important clinical outcomes. As such, it may be premature to invest in management of IGD using the same kinds of approaches taken in response to substance-based addiction disorders. Further, the findings suggest that more high-quality evidence regarding clinical and behavioural effects is needed before concluding that IGD is a legitimate candidate for inclusion in future revisions of the Diagnostic and Statistical Manual of Mental Disorders.

We caught up with Andy to explore the implications of the study:

Ed: To ask a blunt question upfront: do you feel that Internet Gaming Disorder is a valid psychiatric condition (and that “games can cause problems”)? Or is it still too early to say?

Andy: No, it is not. It’s difficult to overstate how sceptical the public should be of researchers who claim, and communicate their research, as if Internet addiction, gaming addiction, or Internet gaming disorder (IGD) are recognized psychiatric disorders. The fact of the matter is that American psychiatrists working on the most recent revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) highlighted that problematic online play was a topic they were interested in learning more about. These concerns are highlighted in Section III of the DSM-5 (entitled “Emerging Measures and Models”). For those interested in this debate see this position paper.

Ed: Internet gaming seems like quite a specific activity to worry about: how does it differ from things like offline video games, online gambling and casino games; or indeed the various “problematic” uses of the Internet that lead to people admitting themselves to digital detox camps?

Andy: In some ways computer games, and Internet ones, are distinct from other activities. They are frequently updated to meet players’ expectations, and some business models, such as pay-to-play, explicitly target highly engaged players, encouraging them to spend real money for in-game advantages. Detox camps are very worrying to me as a scientist because they have no scientific basis, many of those who run them have financial conflicts of interest when they comment in the press, and there have been a number of deaths at these facilities.

Ed: You say there are two schools of thought: that if IGD is indeed a valid condition, it should be framed as an addiction, i.e. that there’s something inherently addictive about certain games. Alternatively, that it should be framed as a self-regulatory challenge, relating to an individual’s self-control. I guess intuitively it might involve a bit of both: online environments can be very persuasive, and some people are easily persuaded?

Andy: Indeed it could be. As researchers mainly interested in self-regulation we’re most interested in gaming as one of many activities that can be successfully (or unsuccessfully) integrated into everyday life. Unfortunately we don’t know much for sure about whether there is something inherently addictive about games, because the research literature is based largely on inferences drawn from correlational data, taken from convenience samples, with post-hoc analyses. Because the evidence base is of such low quality, most of the published findings (i.e. correlations/factor analyses) supporting gaming addiction as a valid condition likely suffer from the Texas Sharpshooter Fallacy.

Ed: Did you examine the question of whether online games may trigger things like anxiety, depression, violence, isolation etc. — or whether these conditions (if pre-existing) might influence the development of IGD?

Andy: Well, our modelling focused on the links between Internet Gaming Disorder, health (mental, physical, and social), and motivational factors (feeling competent, choiceful, and a sense of belonging) examined at two time points six months apart. We found that those who had their motivational needs met at the start of the study were more likely to have higher levels of health six months later and were less likely to say they experienced some of the symptoms of Internet Gaming Disorder.

Though there was no direct link between Internet Gaming Disorder and health six months later, we performed an exploratory analysis (one we did not pre-register) and found an indirect link between Internet Gaming Disorder and health by way of motivational factors. In other words, Internet Gaming Disorder was linked to lower levels of feeling competent, choiceful, and connected, which was in turn linked to lower levels of health.
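To make the shape of this analysis concrete, here is a purely illustrative sketch of how an indirect effect of this kind can be estimated: an IGD score predicting need satisfaction, which in turn predicts health. The data and variable names below are hypothetical and simplified; this is not the authors’ analysis code.

```python
# Illustrative indirect-effect (mediation) sketch -- NOT the authors' code.
# Hypothetical variables: igd = IGD symptoms (T1), needs = need satisfaction (T2),
# health = self-reported health (T2).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
igd = rng.poisson(1.0, n).astype(float)          # hypothetical IGD symptom counts
needs = 5 - 0.3 * igd + rng.normal(0, 1, n)      # simulated need satisfaction
health = 2 + 0.5 * needs + rng.normal(0, 1, n)   # simulated health

# Path a: IGD -> need satisfaction
a = sm.OLS(needs, sm.add_constant(igd)).fit().params[1]

# Path b: need satisfaction -> health, and direct path c': IGD -> health
X = sm.add_constant(np.column_stack([needs, igd]))
fit_b = sm.OLS(health, X).fit()
b, c_prime = fit_b.params[1], fit_b.params[2]

print(f"indirect effect (a*b): {a * b:.3f}")    # IGD -> needs -> health
print(f"direct effect (c'):    {c_prime:.3f}")  # IGD -> health, holding needs constant
```

In the published study the model is considerably richer (two time points, and mental, physical, and social health), but the logic of separating the direct path from the indirect path through motivational factors is the same.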

Ed: All games are different. How would a clinician identify if someone was genuinely “addicted” to a particular game — there would presumably have to be game-by-game ratings of their addictive potential (like there are with drugs). How would anyone find the time to do that? Or would diagnosis focus more on the individual’s behaviour, rather than what games they play? I suppose this goes back to the question of whether “some games are addictive” or whether just “some people have poor self-control”?

Andy: No one knows. In fact, the APA doesn’t define what “Internet Games” are. In our research we ask participants to define it for themselves by “Think[ing] about the Internet games you may play on Facebook (e.g. Farmville), Tablet/Smartphones (e.g. Candy Crush), or Computer/Consoles (e.g. Minecraft).” It’s very difficult to overstate how suboptimal this state of affairs is from a scientific perspective.

Ed: Is it odd that it was the APA’s Substance-Related Disorders Work Group that has called for research into IGD? Are “Internet Games” unique in being classed as a substance, or are there other information-based behaviours that fall under the group’s remit?

Andy: Yes it’s very odd. Our research group is not privy to these discussions, but my understanding is that a range of behaviours and other technology-related activities, such as general Internet use, have been discussed.

Ed: A huge amount of money must be spent on developing and refining these games, i.e. to get people to spend as much time (and money) as possible playing them. Are academics (and clinicians) always going to be playing catch-up to industry?

Andy: I’m not sure that there is one answer to this. One useful way to think of online games is using the example of a gym. Gyms are most profitable when many people are paying for (and don’t cancel) their memberships while the owners maintain a small footprint. The world’s most successful gym might be a one square metre facility with seven billion members that no one ever visits. Many online games are like this: some costs scale nicely, while others, such as servers, community management, upkeep, and power, remain high. There are many people studying the addictive potential of games, but because they constantly reinvent the wheel by creating duplicate survey instruments (there are literally dozens that are only used once or twice), very little of real-world relevance is ever learned or transmitted to the public.

Ed: It can’t be trivial to admit another condition into the Diagnostic and Statistical Manual of Mental Disorders (DSM-5)? Presumably there must be firm (reproducible) evidence that it is a (persistent) problem for certain people, with a specific (identifiable) cause — given it could presumably be admitted in courts as a mitigating condition, and possibly also have implications for health insurance and health policy? What are the wider implications if it does end up being admitted to the DSM-5?

Andy: It is very serious stuff. Opening the door to pathologizing one of the world’s most popular recreational activities risks stigmatizing hundreds of millions of people and pushing already overstretched mental health systems past breaking point.

Ed: You note that your study followed a “pre-registered analysis plan” — what does that mean?

Andy: We’ve discussed the wider problems in social, psychological, and medical science before. But basically, preregistration and Registered Reports provide scientists with a way to record their hypotheses in advance of data collection. This improves the quality of the inferences researchers draw from experiments and large-scale social data science. In this study, and also in our other work, we recorded our sampling plan, our analysis plan, and our materials before we collected our data.

Ed: And finally: what follow-up studies are you planning?

Andy: We are now conducting a series of studies investigating problematic play in younger participants with a focus on child-caregiver dynamics.

Read the full article: Weinstein N, Przybylski AK, Murayama K. (2017) A prospective study of the motivational and health dynamics of Internet Gaming Disorder. PeerJ 5:e3838 https://doi.org/10.7717/peerj.3838

Additional peer-reviewed articles in this area by Andy include:

Przybylski, A. K., & Weinstein, N. (2017). A Large-Scale Test of the Goldilocks Hypothesis: Quantifying the Relations Between Digital Screens and the Mental Well-Being of Adolescents. Psychological Science. DOI: 10.1177/0956797616678438.

Przybylski, A. K., Weinstein, N., & Murayama, K. (2016). Internet Gaming Disorder: Investigating the Clinical Relevance of a New Phenomenon. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2016.16020224.

Przybylski, A. K. (2016). Mischievous responding in Internet Gaming Disorder research. PeerJ, 4, e2401. https://doi.org/10.7717/peerj.2401

For more on the ongoing “crisis in psychology” and how pre-registration of studies might offer a solution, see this discussion with Andy and Malte Elson: Psychology is in crisis, and here’s how to fix it.

Andy Przybylski was talking to blog editor David Sutcliffe.

From private profit to public liabilities: how platform capitalism’s business model works for children

Two concepts have recently emerged that invite us to rethink the relationship between children and digital technology: the “datafied child” (Lupton & Williamson, 2017) and children’s digital rights (Livingstone & Third, 2017). The concept of the datafied child highlights the amount of data that is being harvested about children during their daily lives, and the children’s rights agenda includes a response to ethical and legal challenges the datafied child presents.

Children have never been afforded the full sovereignty of adulthood (Cunningham, 2009) but both these concepts suggest children have become the points of application for new forms of power that have emerged from the digitisation of society. The most dominant form of this power is called “platform capitalism” (Srnicek, 2016). As a result of platform capitalism’s success, there has never been a stronger association between data, young people’s private lives, their relationships with friends and family, their life at school, and the broader political economy. In this post I will define platform capitalism, outline why it has come to dominate children’s relationship to the internet and suggest two reasons in particular why this is problematic.

Children predominantly experience the Internet through platforms

‘At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects’ (Srnicek 2016, p43). Examples of platform capitalism include the technology superpowers – Google, Apple, Facebook, and Amazon. There are, however, many relevant instances of platforms that children and young people use. These include platforms for socialising, platforms for audio-visual content, platforms that communicate with smart devices and toys, platforms for games and sports franchises, and platforms that provide services (including within the public sector) that children or their parents use.

Young people choose to use platforms for play, socialising and expressing their identity. Adults have also introduced platforms into children’s lives: for example, Capita SIMS is a platform used by over 80% of schools in the UK for assessment and monitoring (over the coming months at the Oxford Internet Institute we will be studying such platforms, including SIMS, for The Oak Foundation). Platforms for personal use have been facilitated by the popularity of tablets and smartphones.

Amongst the young, there has been a sharp uptake in tablet and smartphone usage at the expense of PC or laptop use. Sixteen per cent of 3-4 year olds have their own tablet, with this incidence doubling for 5-7 year olds. By the age of 12, smartphone ownership begins to outstrip tablet ownership (Ofcom, 2016). In our own research at the OII, even when we included low-income families in our sample, 93% of teenagers owned a smartphone. This has brought forth the ‘appification’ of the web that Zittrain predicted in 2008. This means that children and young people predominantly experience the internet via platforms, which we can think of as controlled gateways to the open web.

Platforms exist to make money for investors

In public discourse some of these platforms are called social media. This term distracts us from the reason many of these publicly floated companies exist: to make money for their investors. It is only logical for all these companies to pursue the WeChat model that is becoming so popular in China. WeChat is a closed-circuit platform, in that it keeps all engagements with the internet, including shopping, betting, and video calls, within its corporate compound. This brings WeChat closer to a monopoly on data extraction.

Platforms have consolidated their success by buying out their competitors. Alphabet, Amazon, Apple, Facebook and Microsoft have made 436 acquisitions worth $131 billion over the last decade (Bloomberg, 2017). Alternatively, they just mimic the features of their competitors. For example, after acquiring Instagram, Facebook introduced Stories, a feature used by Snapchat, which lets users upload photos and videos as a ‘story’ that automatically expires after 24 hours.

The more data these companies capture that their competitors cannot, the more value they can extract from it and the better their business model works. It is unsurprising, therefore, that when we asked groups of teenagers during our research to draw a visual representation of what they thought the world wide web and internet looked like, almost all of them just drew corporate logos (they also told us they had no idea that Facebook owns WhatsApp and Instagram, or that Google owns YouTube). Platform capitalism dominates and controls their digital experiences — but what provisions do these platforms make for children?

The General Data Protection Regulation (GDPR) (set to be implemented in all EU states, including the UK, in 2018) says that the collection of data about children below the age of 13 shall only be lawful if and to the extent that consent is given or authorised by the child’s parent or custodian. Because most platforms are American-owned, they tend to apply a piece of US federal legislation known as COPPA (the Children’s Online Privacy Protection Act); the age of consent for using Snapchat, WhatsApp, Facebook, and Twitter, for example, is therefore set at 13. Yet the BBC found last year that 78% of children aged 10 to 12 had signed up to a platform, including Facebook, Instagram, Snapchat and WhatsApp.

Platform capitalism offloads its responsibilities onto the user

Why is this a problem? Firstly, because platform capitalism offloads any responsibility onto problematically normative constructs of childhood, parenting, and paternal relations. The owners of platforms assume children will always consult their parents before using their services, and that parents will read and understand their terms and conditions, which, research confirms, few users, whether children or adults, actually look at.

Moreover, we found in our research many parents don’t have the knowledge, expertise, or time to monitor what their children are doing online. Some parents, for instance, worked night shifts or had more than one job. We talked to children who regularly moved between homes and whose estranged parents didn’t communicate with each other to supervise their children online. We found that parents who are in financial difficulties, or affected by mental and physical illness, are often unable to keep on top of their children’s digital lives.

We also interviewed children who use strategies to manage their parents’ anxieties so that they will be left alone. They would, for example, allow their parents to be their friends on Facebook, but do all their personal communication on other platforms that their parents knew nothing about. Often, then, the most vulnerable children offline (children in care, for example) are also the most vulnerable children online. My colleagues at the OII found that 9 out of 10 teenagers who are bullied online also face regular ‘traditional’ bullying. Helping these children requires extra investment from their families, as well as teachers, charities and social services. The burden also falls on schools to address the problem of fake news and extremism, such as Holocaust denialism, that children can encounter on platforms.

This is typical of platform capitalism. It monetises what are called social graphs: the networks of users on its platforms that it then makes available to advertisers. Social graphs are more than just nodes and edges representing our social lives: they are embodiments of often intimate or very sensitive data (that can often be de-anonymised by linking, matching and combining digital profiles). When graphs become dysfunctional and manifest social problems (such as abuse, doxxing, stalking, and grooming), local social systems and institutions — that are usually publicly funded — have to deal with the fall-out. These institutions are often either under-resourced and ill-equipped to solve such problems, or they are already overburdened.

Are platforms too powerful?

The second problem is the ecosystems of dependency that emerge, within which smaller companies or other corporations try to monetise their associations with successful platforms: they seek to get in on the monopolies of data extraction that the big platforms are creating. Many of these companies are not wealthy corporations, and therefore don’t have the infrastructure or expertise to develop their own robust security measures. They may cut costs by neglecting security, or subcontract services out to yet more companies, which are then added to the network of data sharers.

Again, the platforms offload any responsibility onto the user. For example, WhatsApp tells its users: “Please note that when you use third-party services, their own terms and privacy policies will govern your use of those services”. These ecosystems are networks that are only as strong as their weakest link. There are many infamous examples that illustrate this, including the so-called ‘Snappening’, where sexually explicit pictures harvested from Snapchat — a platform that is popular with teenagers — were released on to the open web. There is also a growing industry in fake apps that enable illegal data capture and fraud by leveraging the implicit trust users have in corporate walled gardens.

What can we do about these problems? Platform capitalism is restructuring labour markets and social relations in such a way that opting out of it is becoming an option available only to a privileged few. Moreover, we found that teenagers whose parents prohibited them from using social platforms often felt socially isolated and stigmatised. In the real world of messy social reality, platforms can’t continue to offload their responsibilities onto parents and schools.

We need some solutions fast because, by tacitly accepting the terms and conditions of platform capitalism – particularly when they tell us it is not responsible for the harms its business model can facilitate – we may now be passing an event horizon where these companies become too powerful, unaccountable, and distant from our local reality.

References

Hugh Cunningham (2009) Children and Childhood in Western Society Since 1500. Routledge.

Sonia Livingstone, Amanda Third (2017) Children and young people’s rights in the digital age: An emerging agenda. New Media and Society 19 (5).

Deborah Lupton, Ben Williamson (2017) The datafied child: The dataveillance of children and implications for their rights. New Media and Society 19 (5).

Nick Srnicek (2016) Platform Capitalism. Wiley.

Design ethics for gender-based violence and safety technologies

Digital technologies are increasingly proposed as innovative solutions to the problems and threats faced by vulnerable groups such as children, women, and LGBTQ people. However, there is a structural lack of consideration for gender and power relations in the design of Internet technologies, as previously discussed by scholars in media and communication studies (Barocas & Nissenbaum, 2009; boyd, 2001; Thakor, 2015) and technology studies (Balsamo, 2011; MacKenzie and Wajcman, 1999). The intersection between gender-based violence and technology nevertheless deserves greater attention. To this end, scholars from the Center for Information Technology Policy at Princeton and the Oxford Internet Institute organized a workshop to explore the design ethics of gender-based violence and safety technologies at Princeton in the spring of 2017.

The workshop brought together a wide range of participants: advocates working in the areas of intimate partner violence and sex work; engineers, designers, and developers; and academics working on IT ethics. The objectives of the day were threefold:

(1) to better understand the lack of gender considerations in technology design;

(2) to formulate critical questions for functional requirement discussions between advocates and developers of gender-based violence applications; and

(3) to establish a set of criteria by which new applications can be assessed from a gender perspective.

Below we present three conceptual takeaways from the workshop, followed by instructive primers for developers interested in creating technologies for those affected by gender-based violence.

Survivors, sex workers, and young people are intentional technology users

Increasing public awareness of the prevalence of gender-based violence, both on and offline, often frames survivors of gender-based violence, activists, and young people as vulnerable and helpless. Contrary to this representation, those affected by gender-based violence are intentional technology users, choosing to adopt or abandon tools as they see fit. For example, sexual assault victims strategically disclose their stories on specific social media platforms to mobilize collective action. Sex workers adopt locative technologies to make safety plans. Young people utilize secure search tools to find information about sexual health resources near them. To fully understand how and why some technologies appear to do more for these communities, developers need to pay greater attention to the depth of these communities’ lived experience with technology.

Context matters

Technologies designed with good intentions do not inherently achieve their stated objectives. Functions that we take for granted to be neutral, such as a ‘Find my iPhone’ feature, can have unintended consequences. In contexts of gender-based violence, abusers and survivors alike appropriate these technological tools. For example, survivors and sex workers can use such a feature to share their whereabouts with friends in times of need. Abusers, on the other hand, can use the locative functions to stalk their victims. It is crucial to consider the context within which a technology is used, the user’s relationship to their environment, and their needs and interests, so that technologies can begin to support those affected by gender-based violence.

Vulnerable communities perceive unique affordances

Drawing from ecological psychology, technology scholars have described this tension between design and use as affordance: a user’s perception of what can and cannot be done on a device informs their use of it. Designers may create a technology with a specific use in mind, but users will appropriate, resist, and improvise their use of its features as they see fit. The use of hashtags like #SurvivorPrivilege, for example, shows how rape victims create in-groups on Twitter to engage in supportive discussions, without any intention of going viral.

Action Items

1. Predict unintended outcomes

Relatedly, the idea of devices as having affordances allows us to detect how technologies lead to unintended outcomes. Facebook’s ‘authentic name’ policy may have been instituted to promote safety for victims of relationship violence. The social and political contexts in which this policy is applied, however, disproportionately affect the safety of human rights activists, drag queens, sex workers, and others — including survivors of partner violence.

2. Question the default

Technology developers are in a position to design the default settings of their technology. Since such settings are typically left unchanged by users, developers must take into account their effect on the target end users. For example, the default notification setting for text messages displays the full message content on the home screen. A smartphone user may experience texting as a private activity, but the default setting makes that content visible to other people who are physically co-present. Opting out of this default setting requires some technical knowledge from the user. In abusive relationships, the abuser can therefore easily access the victim’s text messages through this default setting. So, in designing smartphone applications for survivors, developers should question the default privacy settings.
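As a minimal, purely hypothetical sketch of what questioning the default might look like in practice (the settings object, field names, and behaviour below are illustrative, not any real application’s API), the safest options can simply be made the defaults, so that protecting message content requires no action or technical knowledge from the user:

```python
# Illustrative "privacy by default" sketch -- hypothetical settings for a
# messaging feature, not any real application's API. The safest options are the
# defaults; exposing content requires an explicit opt-in by the user.
from dataclasses import dataclass

@dataclass
class NotificationSettings:
    show_sender_name: bool = False      # hide who the message is from on the lock/home screen
    show_message_preview: bool = False  # hide message content from co-present others
    badge_only: bool = True             # show only an unobtrusive badge by default

def render_notification(settings: NotificationSettings, sender: str, text: str) -> str:
    """Build the notification line actually shown on the lock/home screen."""
    if settings.show_message_preview:
        return f"{sender}: {text}"
    if settings.show_sender_name:
        return f"New message from {sender}"
    return "New message"

# With the defaults, a physically co-present abuser sees no content:
print(render_notification(NotificationSettings(), "Alex", "Meet at the shelter at 6"))
```

The design choice here is simply that a user must opt in to exposing content, rather than having to work out how to opt out of exposing it.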

3. Inclusivity is not generalizability

There appears to be an equation of generalizability with inclusivity. An alarm button that claims to serve general safety purposes may take a one-size-fits-all approach by automatically connecting the user to law enforcement. In cases of sexual assault, survivors who are people of color, sex workers, or LGBTQ are especially likely to avoid such features precisely because of their connection to law enforcement. This means that those who are most vulnerable are inadvertently excluded from the feature. Alternatively, an alarm feature that centers on these communities may direct the user to local resources. Thus, a feature that is generalizable may overlook the target groups it aims to support; a more targeted feature may have less reach, but meet its objective. Just as communities’ needs are context-based, inclusivity, too, is contextual. Developers should realize that the broader mission of inclusivity can in fact be accomplished by addressing a specific need, though this may reduce the scope of end-users.

4. Consider co-designing

How, then, can we develop targeted technologies? Workshop participants suggested co-design (similarly, user-participatory design) as a process through which marginalized communities can take a leading role in developing new technologies. Instead of thinking about communities as passive recipients of technological tools, co-design positions both target communities and technologists as active agents who share skills and knowledge to develop innovative, technological interventions.

5. Involve funders and donors

Breakout group discussions pointed out how developers’ organizational and funding structures play a key role in shaping the kind of technologies they create. Suggested strategies included (1) educating donors about the specific social issue being addressed, (2) carefully considering whether funding sources meet developers’ objectives, and (3) ensuring diversity in the development team.

6. Do no harm with your research

In conducting user research, academics and technologists aim to better understand marginalized groups’ technology uses because they are typically at the forefront of adopting and appropriating digital tools. While it is important to expand our understanding of vulnerable communities’ everyday experience with technology, research on this topic can be used by authorities to further marginalize and target these communities. Take, for example, how tech startups like this align with law enforcement in ways that negatively affect sex workers. To ensure that research done about communities can actually contribute to supporting those communities, academics and developers must be vigilant and cautious about conducting ethical research that protects its subjects.

7. Should this app exist?

The most important question to address at the beginning of a technology design process should be: should there even be an app for this? The idea that technologies can solve social problems as long as the technologists just “nerd harder” continues to guide the development and funding of new technologies. Many social problems are not data problems that can be solved by an efficient design and padded with enhanced privacy features. One necessary early strategy of intervention is simply to raise the question of whether technology truly has a place in the particular context and, if so, whether it addresses a specific need.

Our workshop began with big questions about the intersections of gender-based violence and technology, and concluded with a simple but piercing question: Who designs what for whom? Implicated here are the complex workings of gender, sexuality, and power embedded in the lifetime of newly emerging devices from design to use. Apps and platforms can certainly have their place when confronting social problems, but the flow of data and the revealed information must be carefully tailored to the target context.

If you want to be involved with these future projects, please contact Kate Sim or Ben Zevenbergen.

The workshop was funded by Princeton’s Center for Information Technology Policy (CITP), Princeton’s University Center for Human Values, the Ford Foundation, the Mozilla Foundation, and Princeton’s Council on Science and Technology.

This post was originally posted on CITP’s Freedom to Tinker blog.

Cyberbullying is far less prevalent than offline bullying, but still needs addressing

Bullying is a major public health problem, with systematic reviews supporting an association between adolescent bullying and poor mental wellbeing outcomes. In their Lancet article “Cyberbullying and adolescent well-being in England: a population-based cross-sectional study”, Andrew Przybylski and Lucy Bowes report the largest study to date on the prevalence of traditional and cyberbullying, based on a nationally representative sample of 120,115 adolescents in England.

While nearly a third of the adolescent respondents reported experiencing significant bullying in the past few months, cyberbullying was much less common, with around five percent of respondents reporting recent significant experiences. Both traditional and cyberbullying were independently associated with lower mental well-being, but only the relation between traditional bullying and well-being was robust. This supports the view that cyberbullying is unlikely to provide a source for new victims, but rather presents an avenue for further victimisation of those already suffering from traditional forms of bullying.

This stands in stark contrast to media reports and the popular perception that young people are now more likely to be victims of cyberbullying than of traditional forms. The results also suggest that interventions to address cyberbullying will only be effective if they also consider the dynamics of traditional forms of bullying, supporting the urgent need for evidence-based interventions that target *both* forms of bullying in adolescence. That said, as social media and Internet connectivity become an increasingly intrinsic part of modern childhood, initiatives fostering resilience in online and everyday contexts will be required.

We caught up with Andy and Lucy to discuss their findings:

Ed.: You say that given “the rise in the use of mobile and online technologies among young people, an up to date estimation of the current prevalence of cyberbullying in the UK is needed.” Having undertaken that—what are your initial thoughts on the results?

Andy: I think a really compelling thing we learned in this project is that researchers and policymakers have to think very carefully about what constitutes a meaningful degree of bullying or cyberbullying. Many of the studies and reports we reviewed were really loose on details here while a smaller core of work was precise and informative. When we started our study it was difficult to sort through the noise but we settled on a solid standard—at least two or three experiences of bullying in the past month—to base our prevalence numbers and statistical models on.
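As a purely illustrative aside on why that threshold matters (the response scale, probabilities, and data below are hypothetical, not the study’s actual instrument), the headline prevalence figure can swing dramatically depending on where the cut-off is set:

```python
# Illustrative sketch of how a frequency threshold turns survey responses into a
# prevalence estimate -- hypothetical coding, not the study's instrument.
# Suppose respondents report how often they were bullied in the past month on a
# 0-4 scale (0 = never, 1 = once, 2 = two or three times, 3 = weekly, 4 = daily).
import numpy as np

rng = np.random.default_rng(1)
responses = rng.choice([0, 1, 2, 3, 4], size=10_000, p=[0.55, 0.17, 0.15, 0.08, 0.05])

loose_prevalence = np.mean(responses >= 1)   # "any" experience, however rare
strict_prevalence = np.mean(responses >= 2)  # at least two or three times a month

print(f"any experience at all:      {loose_prevalence:.1%}")
print(f"two or three times or more: {strict_prevalence:.1%}")
```

The point of the sketch is simply that a loosely defined cut-off can more than double the reported prevalence, which is one reason studies in this area report such divergent numbers.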

Lucy: One of the issues here is that studies often use different measures, so it is hard to compare like for like, but in general our study supports other recent studies indicating that relatively few adolescents report being cyberbullied only—one study by Dieter Wolke and colleagues, which collected data between 2014 and 2015, found that whilst 29% of school students reported being bullied, only 1% of 11-16 year olds reported only cyberbullying. Whilst that study covered only a handful of schools in one part of England, the findings are strikingly similar to our own. In general then it seems that rates of cyberbullying are not increasing dramatically; though it is concerning that prevalence rates of both forms of bullying—particularly traditional bullying—have remained unacceptably high.

Ed.: Is there a policy distinction drawn between “bullying” (i.e. young people) and “harassment” (i.e. the rest of us, including in the workplace)—and also between “bullying” and “cyber-bullying”? These are all basically the same thing, aren’t they—why distinguish?

Lucy: I think this is a good point; people do refer to ‘bullying’ in the workplace as well. Bullying, at its core, is defined as intentional, repeated aggression targeted against a person who is less able to defend him or herself—for example, a younger or more vulnerable person. Cyberbullying has the additional definition of occurring only in an online format—but I agree that this is the same action or behaviour, just taking place in a different context. Whilst in practice bullying and harassment have very similar meanings and may be used interchangeably, harassment is unlawful under the Equality Act 2010, whilst bullying actually isn’t a legal term at all. However certain acts of bullying could be considered harassment and therefore be prosecuted. I think this really just reflects the fact that we often ‘carve up’ human behaviour and experience according to our different policies, practices and research fields—when in reality they are not so distinct.

Ed.: I suppose online bullying of young people might be more difficult to deal with, given it can occur under the radar, and in social spaces that might not easily admit adults (though conversely, leave actual evidence, if reported..). Why do you think there’s a moral panic about cyberbullying — is it just newspapers selling copy, or does it say something interesting about the Internet as a medium — a space that’s both very open and very closed? And does any of this hysteria affect actual policy?

Andy: I think our concern arises from the uncertainty and unfamiliarity people have about the possibilities the Internet provides. Because it is full of potential—for good and ill—and is always changing, wild claims about it capture our imagination and fears. That said, the panic absolutely does affect policy and parenting discussions in the UK. Statistics and figures coming from pressure groups and well-meaning charities do put the prevalence of cyberbullying at terrifying, and unrealistically high, levels. This certainly has affected the way parents see things. Policymakers tend to seize on the worst-case scenario and interpret things through this lens. Unfortunately this can be a distraction when there are known health and behavioural challenges facing young people.

Lucy: For me, I think we do tend to panic and highlight the negative impacts of the online world—often at the expense of the many positive impacts. That said, there was—and remains—a worry that cyberbullying could have the potential to be more widespread, and to be more difficult to resolve. The perpetrator’s identity may be unknown, the bullying may follow the child home from school, and it may be persistent—in that it may be difficult to remove hurtful comments or photos from the Internet. It is reassuring that our findings, as well as others’, suggest that cyberbullying may not be associated with as great an impact on well-being as people have suggested.

Ed.: Obviously something as deeply complex and social as bullying requires a complex, multivalent response: but (that said), do you think there are any low-hanging interventions that might help address online bullying, like age verification, reporting tools, more information in online spaces about available help, more discussion of it as a problem (etc.)?

Andy: No easy ones. Given that cyber- and traditional bullying aren’t dissimilar, parental engagement and keeping lines of communication open are key. This means parents should learn about the technology their young people are using, and kids should know they’re safe disclosing it when something scary or distressing happens.

Lucy: Bullying is certainly complex; school-based interventions that have been successful in reducing more traditional forms of bullying have tended to involve those students who are not directly involved but who act as ‘bystanders’—encouraging them to take a more active stance against bullying rather than remaining silent and implicitly suggesting that it is acceptable. Online equivalents are being developed, and greater education that discourages people (both children and adults) from sharing negative images or words, or encourages them to actively ‘dislike’ such negative posts, shows promise. I also think it’s important that targeted advice and support is provided for those directly affected.

Ed.: Who’s seen as the primary body responsible for dealing with bullying online: is it schools? NGOs? Or the platform owners who actually (if not intentionally) host this abuse? And does this topic bump up against wider current concerns about (e.g.) the moral responsibilities of social media companies?

Andy: There is no single body that takes responsibility for this for young people. Some charities and government agencies, like the Child Exploitation and Online Protection command (CEOP), are doing great work. They provide information for parents, professionals, and kids, stratified by age, and easy-to-complete forms that young people or carers can use to get help. Most industry-based solutions require users to report and flag offensive content, and they’re pretty far behind the ball on this because we don’t know what works and what doesn’t. At present cyberbullying consultants occupy the space and the services they provide are of dubious empirical value. If industry and the government want to improve things on this front they need to make direct investments in supporting robust, open, basic scientific research into cyberbullying and trials of promising intervention approaches.

Lucy: There was an interesting discussion by the NSPCC about this recently, and it seems that people are very mixed in their opinions—some would also say parents play an important role, as well as Government. I think this reflects the fact that cyberbullying is a complex social issue. It is important that social media companies are aware, and work with government, NGOs and young people to safeguard against harm (as many are doing), but equally schools and parents play an important role in educating children about cyberbullying—how to stay safe, how to play an active role in reducing cyberbullying, and who to turn to if children are experiencing cyberbullying.

Ed.: You mention various limitations to the study; what further evidence do you think we need, in order to more completely understand this issue, and support good interventions?

Lucy: I think we need to know more about how to support children directly affected by bullying, and more work is needed in developing effective interventions for cyberbullying. There are some very good school-based interventions with a strong evidence base to suggest that they reduce the prevalence of at least traditional forms of bullying, but they are not being widely implemented in the UK, and this is a missed opportunity.

Andy: I agree—a focus on flashy cyberbullying headlines presents the real risk of distracting us from developing and implementing evidence-based interventions. The Internet cannot be turned off and there are no simple solutions.

Ed.: You say the UK is ranked 20th of 27 EU countries on the mental well-being index, and also note the link between well-being and productivity. Do you think there’s enough discussion and effort being put into well-being, generally? And is there even a general public understanding of what “well-being” encompasses?

Lucy: I think the public understanding of well-being is probably pretty close to the research definition—people have a good sense that this involves more than not having psychological difficulties, for example, and that it refers to friendships, relationships, and doing well; one’s overall quality of life. Both research and policy are placing more of an emphasis on well-being—in part because large international studies have suggested that the UK may score particularly poorly on measures of well-being. This is very important if we are going to raise standards and improve people’s quality of life.


Read the full article: Andrew Przybylski and Lucy Bowes (2017) Cyberbullying and adolescent well-being in England: a population-based cross-sectional study. The Lancet Child & Adolescent Health.

Andrew Przybylski is an experimental psychologist based at the Oxford Internet Institute. His research focuses on applying motivational theory to understand the universal aspects of video games and social media that draw people in, the role of game structure and content on human aggression, and the factors that lead to successful versus unsuccessful self-regulation of gaming contexts and social media use. @ShuhBillSkee

Lucy Bowes is a Leverhulme Early Career Research Fellow at Oxford’s Department of Experimental Psychology. Her research focuses on the impact of early life stress on psychological and behavioural development, integrating social epidemiology, developmental psychology and behavioural genetics to understand the complex genetic and environmental influences that promote resilience to victimization and early life stress. @DrLucyBowes

Andy Przybylski and Lucy Bowes were talking to the Oxford Internet Institute’s Managing Editor, David Sutcliffe.

Social media is nothing like drugs, despite all the horror stories

Nothing like Instagram. cliplab.pro/Shutterstock

Letting your child use social media is like giving them cocaine, alcohol and cigarettes – all at once, or so we’re told. If you have been following recent press reports about the effects of social media on young people, you may well believe this. But there is no scientific evidence to support such extreme claims.

An article in The Independent likening smartphone use to cocaine. Source: The Independent.

The real story is far more complex. It is very difficult to predict how social media will affect any specific individual – the effect depends on things like their personality, type of social media use and social surroundings. In reality, social media can have both positive and negative outcomes.

Media reports that compare social media to drug use are ignoring evidence of positive effects, while exaggerating and generalising the evidence of negative effects. This is scaremongering – and it does not promote healthy social media use. We would not liken giving children sweets to giving children drugs, even though having sweets for every meal could have serious health consequences. We should therefore not liken social media to drugs either.

An article in The Conversation likening social media use to alcohol and drugs.

For a claim to be proved scientifically it needs to be thoroughly tested. To fully confirm The Independent’s headline that “Giving your child a smartphone is like giving them a gram of cocaine, says top addiction expert”, you would need to give children both a gram of cocaine and a smartphone and then compare the effects. Similarly, you would need to provide millennials with social media, drugs and alcohol to test The Conversation’s headline that “Social media is as harmful as alcohol and drugs for millennials”. But university ethical guidelines were put in place precisely so that such studies will never be done.

The diversity of social media

Maybe news headlines should be discounted, as exaggerations are often used to grab the reader’s attention. But even ignoring these grand claims, the media coverage of social media is still misleading. For example, reports about the effects of social media often oversimplify reality. Social media is incredibly diverse, with different sites providing a host of different features. This makes it extremely difficult to generalise about social media’s effects.

A recent review of past research concluded that the effect of Facebook depends on which of the platform’s features you use. A dialogue with friends over Facebook Messenger can improve your mood, while comparing your life to other people’s photos in the News Feed can do the opposite. By treating all social media sites and features as one concept, the media is oversimplifying something that is very complex.

Focusing on the negative

An article from the Pakistani Express Tribune. Source: The Express Tribune.

Past media coverage has not only oversimplified social media, but has often focused only on its negative aspects. Yet scientific research demonstrates that there are both positive and negative outcomes of social media use. Research has shown that Facebook increases self-esteem and promotes feeling connected to others. People’s physiological reactions also indicate that they react positively to Facebook use.

By contrast, it has also been found that social media can decrease well-being and increase social anxiety. An analysis of 57 scientific studies found that social media use is associated with slightly higher levels of narcissism. This array of conflicting evidence suggests that social media has both negative and positive effects, not just one or the other.

The amount matters

The effect of social media also depends on the amount of time you spend using it. In a recent study we conducted of more than 120,000 UK teenagers, we found that moderate social media use is not harmful to mental health.

We examined the relationship between screen time and well-being, and found that those who used screens a moderate amount – between one and three hours each day – reported higher well-being compared with those who didn’t use social media at all and those who used it more than three hours a day. So, unlike drugs, those who practise abstinence do not appear to fare better.
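To make the shape of that relationship concrete, here is a minimal, purely illustrative sketch (simulated data, not the study’s actual dataset or model) of how one can test for an inverted-U pattern by adding a quadratic term to a regression of well-being on daily screen hours.

```python
# Illustrative sketch only: simulated data, not the study's dataset or model.
# Fit well-being on screen hours with a quadratic term; a negative coefficient
# on hours**2 is consistent with an inverted-U ("moderate use is fine") pattern.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hours = rng.uniform(0, 8, size=5000)                     # hypothetical daily screen hours
wellbeing = 40 + 3 * hours - 0.8 * hours**2 + rng.normal(0, 5, size=hours.size)

X = sm.add_constant(np.column_stack([hours, hours**2]))  # intercept, linear, quadratic
fit = sm.OLS(wellbeing, X).fit()

print(fit.params)                                        # [intercept, linear, quadratic]
print(-fit.params[1] / (2 * fit.params[2]))              # hours at which well-being peaks
```

In the published work the relationship was of course estimated on survey data with appropriate controls; the sketch only shows the shape of the curve being tested.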

Recent media reports may have made parents unnecessarily anxious about their child’s use of social media. A flashy quote or headline can often distract from the real challenges of parenting. It’s time the media covered not only the bad, but also the beneficial and complex sides of social media. The effects of social media cannot be summarised by comparing social media to drugs. It is just not that simple.


Andy Przybylski, Associate Professor and Senior Research Fellow, University of Oxford and Amy C Orben, College Lecturer and DPhil Candidate, University of Oxford

This article was originally published on The Conversation. Read the original article.

How and why is children’s digital data being harvested?

Everyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to firmly confine this distinction to history. These are the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh around children going about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smart phones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries and health centre waiting rooms.

2. Research confirms apps on smart phones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles and smart TVs) are picking up data produced by children. Marketing about smart toys tells us they are enhancing children’s play, augmenting children’s learning, incentivising children’s healthy habits, and even reclaiming family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images. This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star Wars BB-8); toys-to-life, which connect action figures to video games (e.g. Skylanders, Amiibo); puzzle and building games (e.g. Osmo, Lego Fusion); and children’s GPS-enabled wearables such as smart watches and fitness trackers. We need to look beyond the marketing to see what is making this technology ubiquitous.

The commercial incentives to collect children’s data

Service providers now use free Wi-Fi as an additional enticement to their customers, including families. Apps offer companies opportunities to contain children’s usage in a walled garden so that they can capture valuable marketing data, or offer children and parents opportunities to make in-app purchases. Therefore, more and more companies, especially companies that have no background in technology such as bus operators and cereal manufacturers, use Wi-Fi and apps to engage with children.

The smart label is also a new way for companies to differentiate their products from others in saturated markets that overwhelm consumers with choice. However, security is an additional cost that manufacturers of smart technologies are unwilling to pay. The microprocessors in smart toys often don’t have the processing power required for strong security measures and secure communication, such as encryption (e.g. an 8-bit microcontroller cannot support the industry standard SSL to encrypt communications). Therefore these devices are designed without the ability to accommodate software or firmware updates. Some smart toys transmit data in clear text (parents of course are unaware of such details when purchasing these toys).

While children are using their devices they are constantly emitting data. Because this data is so valuable to businesses it has become a cliché to frame it as an exploitable ‘natural’ resource like oil. This means every digitisable movement, transaction and interaction we make is potentially commodifiable. Moreover, the networks of specialist companies, partners and affiliates that capture, store, process, broker and resell the new oil are becoming so complex they are impenetrable. This includes the involvement of commercial actors in public institutions such as schools.

Lupton & Williamson (2017) use the term ‘datafied child’ to draw attention to this creeping normalisation of harvesting data about children. As its provenance becomes more opaque the data is orphaned and vulnerable to further commodification. And when it is shared across unencrypted channels or stored using weak security (as high profile cases show) it is easily hacked. The implications of this are only beginning to emerge. In response, children’s rights, privacy and protection; the particular ethics of the capture and management of children’s data; and its potential for commercial exploitation are all beginning to receive more attention.

Refocusing on children

Apart from a ticked box, companies have no way of knowing if a parent or child has given their consent. Children, or their parents, will often sign away their data to quickly dispatch any impediment to accessing the Wi-Fi. When children use public Wi-Fi they are opening channels to their devices that are often unencrypted. We need to start mapping the range of actors who are collecting data in this way and find out if they have any provisions for protecting children’s data.

Similarly, when children use their apps, companies assume that a responsible adult has agreed to the terms and conditions. Parents are expected to be gatekeepers, boundary setters, and supervisors. However, for various reasons, there may not be an informed, (digitally) literate adult on hand. For example, parents may be too busy with work or too ill to stay on top of their children’s complex digital lives. Children are educated in year groups but they share digital networks and practices with older children and teenagers, including siblings, extended family members, and friends who may enable risky practices.

We may need to start looking at additional ways of protecting children that transfer the burden away from the family and towards the companies that are capturing and monetising the data. This includes being realistic about the efficacy of current legislation. Because children can simply enter a fake birthdate, application of the US Children’s Online Privacy Protection Act to restrict the collection of children’s personal data online has been fairly ineffectual (boyd et al., 2011). In Europe, the incoming General Data Protection Regulation allows EU states to set a minimum age of 16 under which children cannot consent to having their data processed, potentially encouraging an even larger population of minors to lie about their age online.

We need to ask what data capture and management would look like if guided by a children’s framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions as they emerge over the coming months.


This work is part of the OII project “Child safety on the Internet: looking beyond ICT actors”, which maps the range of non-ICT companies engaging digitally with children and identifies areas where their actions might affect a child’s exposure to online risks such as data theft, adverse online experiences or sexual exploitation. It is funded by the Oak Foundation.


We should look to automation to relieve the current pressures on healthcare

Image by TheeErin (Flickr CC BY-NC-ND 2.0), who writes: “Working on a national cancer research project. This is the usual volume of mail that comes in two-days time.”

In many sectors, automation is seen as a threat due to the potential for job losses. By contrast, automation is seen as an opportunity in healthcare, as a way to address pressures including staff shortages, increasing demand and workloads, reduced budgets, skills shortages, and decreased consultation times. Automation may address these pressures in primary care, while also reconfiguring staff roles and changing the patient-doctor relationship.

In the interview below, Matt Willis discusses a project, funded by The Health Foundation, which looks at opportunities and challenges to automation in NHS England general practice services. While the main goal of the project is to classify work tasks and then calculate the probability that each task will be automated, Matt is currently conducting ethnographic fieldwork in primary care sites to understand the work practices of surgery staff and clinicians.
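As a purely illustrative sketch of what “calculating the probability that each task will be automated” could look like in code (hypothetical task features and labels; this is not the project’s actual model or data), a simple classifier trained on hand-coded task attributes can output such probabilities:

```python
# Illustrative sketch only: hypothetical features and labels, not the project's model.
# Train a simple classifier on hand-coded task attributes, then estimate the
# probability that a new primary-care task could be automated.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [is_repetitive, needs_social_intelligence, rule_based, uses_structured_data]
tasks = np.array([
    [1, 0, 1, 1],   # e.g. processing a repeat prescription request
    [0, 1, 0, 0],   # e.g. reassuring an anxious patient
    [1, 0, 1, 0],   # e.g. triaging incoming letters by urgency
    [0, 1, 0, 1],   # e.g. explaining test results in a consultation
])
automatable = np.array([1, 0, 1, 0])          # hypothetical expert labels

clf = LogisticRegression().fit(tasks, automatable)

new_task = np.array([[1, 0, 1, 1]])           # e.g. medical coding from a letter
print(clf.predict_proba(new_task)[0, 1])      # estimated probability of automation
```

A real study would rest on far richer task descriptions, expert judgements, and a more careful model; the sketch only shows where a per-task probability comes from.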

Since the first automated pill counting machine was introduced in 1970, the role of the pharmacist has expanded: pharmacists now perform more patient consultations, consult with primary care physicians, and require greater technical skill (including a Pharm.D degree). While this provides one clear way in which a medical profession has responded to automation, the research team is now looking at how automation will reconfigure other professions in primary care, and how it will shape its technical and digital infrastructures.

We caught up with Matt Willis to explore the implications of automation in primary care.

Ed.: One finding from an analysis by Frey and Osborne is that most healthcare occupations (that involve things like social intelligence, caring etc.) show a remarkably low probability for computerisation. But what sorts of things could be automated, despite that?

Matt: While providing care is the most important work that happens in primary care, there are many tasks that support that care. Many of those tasks are highly structured and repetitive, which makes them ideal candidates for automation. There is an incredible amount of what I call “letter work” that occurs in primary care. It’s tasks like responding to a request for information from secondary care, an information request from a medical supplier, processing a trusted assessment, and so on.

There is also generating the letters that are sent to other parts of the NHS — and letters are also triaged at the beginning of each day depending on the urgency of the request. Medical coding is another task that can be automated, as can medication orders and renewals. All of these tasks require someone working with paper or digital text documents and gathering information according to a set of criteria. Often surgeries are overwhelmed with paperwork, so automation is a potential way to make a dent in the way information is processed.

Ed.: I suppose that the increasing digitisation of sensors and data capture (e.g. digital thermometers) and patient records actually helps in this: i.e. automation sounds like the obvious next step in an increasingly digital environment? But is it really as simple as that?

Matt: Well, it’s never as simple as you think it’s going to be. The commonality of data originating in a digital format usually does make data easier to work with, manipulate, analyze, and make actionable. Even when information is entirely digital there can be barriers of interoperability between systems. Automation could even be automating the use of data from one system to the next. There are also social and policy barriers to the use of digital data for automation. Think back to the recent care.data debacle that was supposed to centralize much of the NHS data from disparate silos.

Ed.: So will automation of these tasks be driven by government / within the NHS, or by industry / the market? i.e. is there already a market for automating aspects of healthcare?

Matt: Oh yes, I think it will be a variety of those forces you mention. There is already partial automation in many little ways all over NHS. Automation of messages and notifications, blood pressure cuffs, and other medical devices. Automation is not entirely new to healthcare. The pharmacist is an exemplar health profession to look at if we want to see how automation has changed the tasks of a profession for decades. Many of the electronic health record providers in the UK have different workflow automation features or let clinicians develop workflow efficiency protocols that may automate things in specific ways.

Ed.: You say that one of the bottlenecks to automating healthcare is lack of detailed knowledge of the sorts of tasks that could actually be automated. Is this what you’re working on now?

Matt: Absolutely. The data from labour statistics is self-reported, and many of the occupations were lumped together, meaning all receptionists in different sectors are just listed under ‘receptionist’. One early finding I have been thinking about is how a receptionist in the healthcare sector differs in their information work from a receptionist in another sector. I see this with occupations across health: there are unique features that differentiate health occupations from otherwise similar occupations. This highlights the need to tease out those details in the data.

Additionally, we need to understand the use of technologies in primary care and what tasks those technologies perform. One of the most important links I am trying to understand is that between the tasks of people and the tasks of technologies. I am working on not only understanding the opportunities and challenges of automation in primary care but also what are the precursors that exist that may support the implementation of automation.

Ed.: When I started in journals publishing I went to the post room every day to mail out hardcopy proofs to authors. Now everything I do is electronic. I’m not really aware of when the shift happened, or what I do with the time freed up (blog, I suppose..). Do you think it will be similarly difficult in healthcare to pinpoint a moment when “things got automated”?

Matt: Well, oftentimes with technology and the change of social practices it’s rarely something that happens overnight. You probably started to gradually send out fewer and fewer paper manuscripts over a period of time. It’s the frog sitting in a pot where the heat is slowly turned up. There is a theory that technological change comes in swarm patterns — meaning it’s not one technological change that upends everything, but the advent of numerous technologies that start to create big change.

For example, one of the many reasons that the application of automation technologies is increasing is the swarming of prior technologies like “big data” sets, advances in machine vision, machine learning, machine pattern recognition, mobile robotics, the proliferation of sensors, and further development of autonomous technologies. These kinds of things drive big advances forward.

Ed.: I don’t know if people in the publishing house I worked in lost their jobs when things like post rooms and tea trolleys got replaced by email and coffee machines — or were simply moved to different types of jobs. Do you think people will “lose their jobs” as automation spreads through the health sector, or will it just drive a shift to people doing something else instead?

Matt: One of the justifications in the project is that in many sectors automation is seen as a threat, however, automation is seen as an opportunity in healthcare. This is in great part due to the current state of the NHS and that the smart and appropriate application of automation technologies can be a force multiplier, particularly in primary care.

I see it as not that people will be put out of jobs, but that you’ll be less likely to have to work 12 hours when you should be working 8, and less likely to have a pile of documents stacking up that you are three months behind in processing. The demand for healthcare is increasing, the population is aging, and people live longer. One of the ways to keep up with this trend is to implement automation technologies that support healthcare workers and management.

I think we are a long way away from the science fiction future where a patient lies in an entirely automated medical pod that scans them and administers whatever drug, treatment, procedure, or surgery they need. A person’s tasks and the allocation of work will shift in part due to technology, but that has been happening for decades. There is also a longstanding debate about whether technology creates more jobs in the long term than it destroys. It’s likely that in healthcare we will see new occupational roles, job titles, and tasks emerge that are in part automation related. And tasks like filing paperwork or writing a letter will seem barbaric when a computer can, with little time and effort, do that for you.


Matthew Willis was talking to blog editor David Sutcliffe.