Digital platforms are governing systems — so it’s time we examined them in more detail

Digital platforms are not just software-based media; they are governing systems that control, interact, and accumulate. As surfaces on which social action takes place, digital platforms mediate — and to a considerable extent, dictate — economic relationships and social action. By automating market exchanges they solidify relationships into material infrastructure, lend a degree of immutability and traceability to engagements, and turn what previously would have been informal exchanges into much more formalized rules.

In his Policy & Internet article “Platform Logic: An Interdisciplinary Approach to the Platform-based Economy”, Jonas Andersson Schwarz argues that digital platforms enact a twofold logic of micro-level technocentric control and macro-level geopolitical domination, while supporting a range of generative outcomes between the two levels. Technology isn’t ‘neutral’, and what designers want may clash with what users want: so it’s important that we take a multi-perspective view of the role of digital platforms in contemporary society. For example, if we only consider the technical, we’ll notice modularity, compatibility, compliance, flexibility, mutual subsistence, and cross-subsidization. By contrast, if we consider ownership and organizational control, we’ll observe issues of consolidation, privatization, enclosure, financialization and protectionism.

When focusing on local interactions (e.g. with users), the digital nature of platforms is seen to strongly determine structure; essentially representing an absolute or totalitarian form of control. When we focus on geopolitical power arrangements in the “platform society”, patterns can be observed that are worryingly suggestive of market dominance, colonization, and consolidation. Concerns have been expressed that these (overwhelmingly US-biased) platform giants are not only enacting hegemony, but are on a road to “usurpation through tech — a worry that these companies could grow so large and become so deeply entrenched in world economies that they could effectively make their own laws”.

We caught up with Jonas to discuss his findings:

Ed.: You say that there are lots of different ways of considering “platforms”: what (briefly) are some of these different approaches, and why should they be linked up a bit? Certainly the conference your paper was presented at (“IPP2016: The Platform Society”) seemed to have struck an incredibly rich seam in this topic, and I think showed the value of approaching an issue like digital platforms from multiple disciplinary angles.

Jonas: In my article I’ve chosen to exclusively theorize *digital* platforms, which of course narrows down the meaning of the concept, to begin with. There are different interpretations as to what actually constitutes a digital platform. There has to be an element of proprietary control over the surface on which interaction takes place, for example. Free software and open protocols, while ubiquitous digital tools, need not necessarily be considered platforms, whereas proprietary operating systems should.

Within contemporary media studies there is considerable divergence as to whether one should define so-called over-the-top streaming services as platforms or not. Netflix, for example: In a strict technical sense, it’s not a platform for self-publishing and sharing in the way that YouTube is—but, in an economic sense, Netflix definitely enacts a multi-sided market, which is one of the key components of what a platform does, economically speaking. Since platforms crystallize economic relationships into material infrastructure, conceptual conflation of this kind is unavoidable—different scholars tend to put different emphasis on different things.

Hence, when it comes to normative concerns, there are numerous approaches, ranging from largely apolitical computer science and design management studies, brandishing a largely optimistic view where blithe conceptions of innovation and generativity are emphasized, to critical approaches in political economy, where things like market dominance and consolidation are emphasized.

In my article, I try to relate to both of these schools of thought, by noting that they each are normative — albeit in vastly different ways — and by noting that not only do they each have a somewhat different focus, they actually bring different research objects to the table: Usually, “efficacy” in purely technical interaction design is something altogether different from “efficacy” in matters of societal power relations, for example. While both notions can be said to be true, their respective validity might differ, depending on which matter of concern we are dealing with in each respective inquiry.

Ed.: You note in your article that platforms have a “twofold logic of micro-level technocentric control and macro-level geopolitical domination” .. which sounds quite a lot like what government does. Do you think “platform as government” is a useful way to think about this, i.e. are there any analogies?

Jonas: Sure, especially if we understand how platforms enact governance in really quite rigid forms. Platforms literally transform market relations into infrastructure. Compared to informal or spontaneous social structures, where there’s a lot of elasticity and ambiguity — put simply, giving-and-taking — automated digital infrastructure operates by unambiguous implementations of computer code. As Lawrence Lessig and others have argued, perhaps the most dangerous aspect of this is when digital infrastructures implement highly centralized modes of governance, often literally having only one point of command-and-control. The platform owner flicks a switch, and then certain listings and settings are allowed or disallowed, and so on…

This should worry any liberal, since it is a mode of governance that is totalitarian by nature; it runs counter to any democratic, liberal notion of spontaneous, emergent civic action. Funnily, a lot of Silicon Valley ideology appears to be indebted to theorists like Friedrich von Hayek, who observed a calculative rationality emerging out of heterogeneous, spontaneous market activity — but at the same time, Hayek’s call to arms was in itself a reaction to central planning of the very kind that I think digital platforms, when designed in too rigid a way, risk erecting.

Ed.: Is there a sense (in hindsight) that these platforms are basically the logical outcome of the ruthless pursuit of market efficiency, i.e. enabled by digital technologies? But is there also a danger that they could lock out equitable development and innovation if they become too powerful (e.g. leading to worries about market concentration and anti-trust issues)? At one point you ask: “Why is society collectively acquiescing to this development?” .. why do you think that is?

Jonas: The governance aspect above rests on a kind of managerialist fantasy of perfect calculative rationality that is conferred upon the platform as an allegedly neutral agent or intermediary; scholars like Frank Pasquale have begun to unravel some of the rather dodgy ideology underpinning this informational idealism, or “dataism,” as José van Dijck calls it. However, it’s important to note how much of this risk for overly rigid structures comes down to sheer design implementation; I truly believe there is scope for more democratically adaptive, benign platforms, but that can only be achieved either through real incentives at the design stage (e.g. Wikipedia, and the ways in which its core business idea involves quality control by design), or through ex-post regulation, forcing platform owners to consider certain societally desirable consequences.

Ed.: A lot of this discussion seems to be based on control. Is there a general theory of “control” — i.e. are these companies creating systems of user management and control that follow similar conceptual / theoretical lines, or just doing “what seems right” to them in their own particular contexts?

Jonas: Down the stack, there is always a binary logic of control at play in any digital infrastructure. Still, on a higher level in the stack, as more complexity is added, we should expect to see more non-linear, adaptive functionality that can handle complexity and context. And where computational logic falls short, we should demand tolerable degrees of human moderation, more than there is now, to be sure. Regulators are going this way when it comes to things like Facebook and hate speech, and I think there is considerable consumer demand for it, as when disputes arise on Airbnb and similar markets.

Ed.: What do you think are the main worries with the way things are going with these mega-platforms, i.e. the things that policy-makers should hopefully be concentrating on, and looking out for?

Jonas: Policymakers are beginning to realize the unexpected synergies that big data gives rise to. As The Economist recently pointed out, once you control portable smartphones, you’ll have instant geopositioning data on a massive scale — you’ll want to own and control map services because you’ll then also have data on car traffic in real time, which means you’d be likely to have the transportation market cornered, self-driving cars especially… If one takes an agnostic, heterodox view on companies like Alphabet, some of their far-flung projects actually begin to make sense, if synergy is taken into consideration. For automated systems, the more detailed the data becomes, the better the system will perform; vast pools of data get to act as protective moats.

One solution that The Economist suggests, and that has been championed for years by internet veteran Doc Searls, is to press for vastly increased transparency in terms of user data, so that individuals can improve their own sovereignty, control their relationships with platform companies, and thereby collectively demand that the companies in question disclose the value of this data — which would, by extension, improve signalling of the actual value of the company itself. If today’s platform companies are reluctant to do this, is that because it would perhaps reveal some of them to be less valuable than what they are held out to be?

Another potentially useful, proactive measure that I describe in my article is the establishment of vital competitors or supplements to the services that so many of us have gotten used to being provided for by platform giants. Instead of Facebook monopolizing identity management online, which sadly seems to have become the norm in some countries, look to the Scandinavian example of BankID, which is a platform service run by a regional bank consortium, offering a much safer and more nationally controllable identity management solution.

Alternative platform services like these could be built by private companies as well as state-funded ones; alongside privately owned consortia of this kind, it would be interesting to see innovation within the public service remit, exploring how that concept could be re-thought in an era of platform capitalism.


Read the full article: Jonas Andersson Schwarz (2017) Platform Logic: An Interdisciplinary Approach to the Platform-based Economy. Policy & Internet DOI: 10.1002/poi3.159.

Jonas Andersson Schwarz was talking to blog editor David Sutcliffe.

Design ethics for gender-based violence and safety technologies

Digital technologies are increasingly proposed as innovative solutions to the problems and threats faced by vulnerable groups such as children, women, and LGBTQ people. However, there exists a structural lack of consideration for gender and power relations in the design of Internet technologies, as previously discussed by scholars in media and communication studies (Barocas & Nissenbaum, 2009; boyd, 2001; Thakor, 2015) and technology studies (Balsamo, 2011; MacKenzie and Wajcman, 1999). But the intersection between gender-based violence and technology deserves greater attention. To this end, scholars from the Center for Information Technology Policy at Princeton and the Oxford Internet Institute organized a workshop to explore the design ethics of gender-based violence and safety technologies at Princeton in the spring of 2017.

The workshop welcomed a wide range of advocates working on intimate partner violence and sex work, alongside engineers, designers, developers, and academics working on IT ethics. The objectives of the day were threefold:

(1) to better understand the lack of gender considerations in technology design,

(2) to formulate critical questions for functional requirement discussions between advocates and developers of gender-based violence applications; and

(3) to establish a set of criteria by which new applications can be assessed from a gender perspective.

Below we share three conceptual takeaways from the workshop, followed by instructive primers for developers interested in creating technologies for those affected by gender-based violence.

Survivors, sex workers, and young people are intentional technology users

Increasing public awareness of the prevalence of gender-based violence, both on and offline, often frames survivors of gender-based violence, activists, and young people as vulnerable and helpless. Contrary to this representation, those affected by gender-based violence are intentional technology users, choosing to adopt or abandon tools as they see fit. For example, sexual assault victims strategically disclose their stories on specific social media platforms to mobilize collective action. Sex workers adopt locative technologies to make safety plans. Young people utilize secure search tools to find information about sexual health resources near them. To fully understand how and why some technologies appear to do more for these communities, developers need to pay greater attention to the depth of their lived experience with technology.

Context matters

Technologies designed with good intentions do not inherently achieve their stated objectives. Functions that we take for granted to be neutral, such as a ‘Find my iPhone’ feature, can have unintended consequences. In contexts of gender-based violence, abusers and survivors appropriate these technological tools. For example, survivors and sex workers can use such a feature to share their whereabouts with friends in times of need. Abusers, on the other hand, can use the locative functions to stalk their victims. It is crucial to consider the context within which a technology is used, the user’s relationship to their environment, and their needs and interests, so that technologies can begin to support those affected by gender-based violence.

Vulnerable communities perceive unique affordances

Drawing from ecological psychology, technology scholars have described this tension between design and use as affordance, to explain how a user’s perception of what can and cannot be done on a device informs their use. Designers may create a technology with a specific use in mind, but users will appropriate, resist, and improvise their use of the features as they see fit. For example, hashtags like #SurvivorPrivilege show how rape victims create in-groups on Twitter to engage in supportive discussions, without the intention of them going viral.

Action Items

1. Predict unintended outcomes

Relatedly, the idea of devices as having affordances allows us to anticipate how technologies lead to unintended outcomes. Facebook’s ‘authentic name’ policy may have been instituted to promote safety for victims of relationship violence. The social and political contexts in which this policy is used, however, disproportionately affect the safety of human rights activists, drag queens, sex workers, and others — including survivors of partner violence.

2. Question the default

Technology developers are in a position to design the default settings of their technology. Since such settings are typically left unchanged by users, developers must take into account the effect on their target end users. For example, the default notification setting for text messages displays the full message content on the home screen. A smartphone user may experience texting as a private activity, but the default setting enables other people who are physically co-present to read along. Opting out of this default setting requires some technical knowledge from the user. In abusive relationships, the abuser can therefore easily access the victim’s text messages through this default setting. So, in designing smartphone applications for survivors, developers should question the default privacy setting.
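To make this concrete, here is a minimal sketch — in Python, with invented names and settings — of what privacy-protective defaults might look like in a hypothetical messaging or safety app. It is illustrative only, not a prescription drawn from the workshop:

```python
from dataclasses import dataclass

@dataclass
class NotificationSettings:
    """Hypothetical per-user settings for a messaging or safety app.

    Defaults are chosen to protect a user whose phone may be seen or
    handled by an abuser: no message content on the lock screen, and
    no location sharing unless explicitly enabled.
    """
    show_preview_on_lock_screen: bool = False   # hide message content by default
    share_location_with_contacts: bool = False  # strictly opt-in
    require_pin_to_open_app: bool = True        # extra barrier to casual snooping

def render_lock_screen_notification(settings: NotificationSettings,
                                    sender: str, body: str) -> str:
    """Return the text shown on the lock screen for an incoming message."""
    if settings.show_preview_on_lock_screen:
        return f"{sender}: {body}"
    # Default path: reveal that a message arrived, but not from whom or what it says.
    return "New message"

if __name__ == "__main__":
    settings = NotificationSettings()  # a new user gets the protective defaults
    print(render_lock_screen_notification(settings, "Alex", "Are you home alone tonight?"))
    # prints: New message
```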

3. Inclusivity is not generalizability

There appears to be an equation of generalizability with inclusivity. An alarm button that claims to be for general safety purposes may take a one-size-fits-all approach by automatically connecting the user to law enforcement. In cases of sexual assault, especially involving those who are of color, in sex work, or of LGBTQ identities, survivors are likely to avoid such features precisely because of their connection to law enforcement. This means that those who are most vulnerable are inadvertently excluded from the feature. Alternatively, an alarm feature that centers on these communities may direct the user to local resources. Thus, a feature that is generalizable may overlook the target groups it aims to support; a more targeted feature may have less reach, but meet its objective. Just as communities’ needs are context-based, inclusivity, too, is contextualized. Developers should realize that the broader mission of inclusivity can in fact be advanced by addressing a specific need, though this may reduce the scope of end users.

4. Consider co-designing

How, then, can we develop targeted technologies? Workshop participants suggested co-design (similarly, user-participatory design) as a process through which marginalized communities can take a leading role in developing new technologies. Instead of thinking about communities as passive recipients of technological tools, co-design positions both target communities and technologists as active agents who share skills and knowledge to develop innovative, technological interventions.

5. Involve funders and donors

Breakout group discussions pointed out how developers’ organizational and funding structures play a key role in shaping the kind of technologies they create. Suggested strategies included (1) educating donors about the specific social issue being addressed, (2) carefully considering whether funding sources meet developers’ objectives, and (3) ensuring diversity in the development team.

6. Do no harm with your research

In conducting user research, academics and technologists aim to better understand marginalized groups’ technology uses because they are typically at the forefront of adopting and appropriating digital tools. While it is important to expand our understanding of vulnerable communities’ everyday experience with technology, research on this topic can be used by authorities to further marginalize and target these communities. Take, for example, how tech startups like this align with law enforcement in ways that negatively affect sex workers. To ensure that research done about communities can actually contribute to supporting those communities, academics and developers must be vigilant and cautious about conducting ethical research that protects its subjects.

7. Should this app exist?

The most important question to address at the beginning of a technology design process should be: Should there even be an app for this? The idea that technologies can solve social problems as long as the technologists just “nerd harder” continues to guide the development and funding of new technologies. Many social problems are not necessarily data problems that can be solved by an efficient design and padded with enhanced privacy features. One necessary early strategy of intervention is to simply raise the question of whether technologies truly have a place in the particular context and, if so, whether they address a specific need.

Our workshop began with big questions about the intersections of gender-based violence and technology, and concluded with a simple but piercing question: Who designs what for whom? Implicated here are the complex workings of gender, sexuality, and power embedded in the lifetime of newly emerging devices from design to use. Apps and platforms can certainly have their place when confronting social problems, but the flow of data and the revealed information must be carefully tailored to the target context.

If you want to be involved with these future projects, please contact Kate Sim or Ben Zevenbergen.

The workshop was funded by Princeton’s Center for Information Technology Policy (CITP), Princeton’s University Center for Human Values, the Ford Foundation, the Mozilla Foundation, and Princeton’s Council on Science and Technology.

This post originally appeared on CITP’s Freedom to Tinker blog.

How and why is children’s digital data being harvested?

Everyone of a certain age remembers logging on to a noisy dial-up modem and surfing the Web via AOL or AltaVista. Back then, the distinction between offline and online made much more sense. Today, three trends are conspiring to firmly confine this distinction to history. These are the mass proliferation of Wi-Fi, the appification of the Web, and the rapid expansion of the Internet of (smart) Things. Combined, they are engineering multi-layered information ecosystems that enmesh around children going about their everyday lives. But it’s time to refocus on our responsibilities to children before they are eclipsed by the commercial incentives that are driving these developments.

Three Trends

1. The proliferation of Wi-Fi means children can use smartphones or tablets in a variety of new contexts, including on buses and trains, in hotels and restaurants, and in schools, libraries and health centre waiting rooms.

2. Research confirms apps on smartphones and tablets are now children’s primary gateway to the Web. This is the appification of the Web that Jonathan Zittrain predicted: the WeChat app, popular in China, is becoming its full realisation.

3. Simultaneously, the rapid expansion of the Internet of Things means everything is becoming ‘smart’ – phones, cars, toys, baby monitors, watches, toasters: we are even promised smart cities. Essentially, this means these devices have an IP address that allows them to receive, process, and transmit data on the Internet. Often these devices (including personal assistants like Alexa, game consoles and smart TVs) are picking up data produced by children. Marketing about smart toys tells us they are enhancing children’s play, augmenting children’s learning, incentivising children’s healthy habits, and can even reclaim family time. Salient examples include Hello Barbie and Smart Toy Bear, which use voice and/or image recognition and connect to the cloud to analyse, process, and respond to children’s conversations and images. This sector is expanding to include app-enabled toys such as toy drones, cars, and droids (e.g. Star Wars BB-8); toys-to-life, which connect action figures to video games (e.g. Skylanders, Amiibo); puzzle and building games (e.g. Osmo, Lego Fusion); and children’s GPS-enabled wearables such as smart watches and fitness trackers. We need to look beyond the marketing to see what is making this technology ubiquitous.

The commercial incentives to collect children’s data

Service providers now use free Wi-Fi as an additional enticement to their customers, including families. Apps offer companies opportunities to contain children’s usage in a walled garden so that they can capture valuable marketing data, or offer children and parents opportunities to make in-app purchases. Therefore, more and more companies, especially companies that have no background in technology such as bus operators and cereal manufacturers, use Wi-Fi and apps to engage with children.

The smart label is also a new way for companies to differentiate their products from others in saturated markets that overwhelm consumers with choice. However, security is an additional cost that manufacturers of smart technologies are unwilling to pay. The microprocessors in smart toys often don’t have the processing power required for strong security measures and secure communication, such as encryption (e.g. an 8-bit microcontroller cannot support the industry-standard SSL to encrypt communications). These devices are therefore designed without the ability to accommodate software or firmware updates. Some smart toys transmit data in clear text (parents, of course, are unaware of such details when purchasing these toys).

While children are using their devices they are constantly emitting data. Because this data is so valuable to businesses it has become a cliché to frame it as an exploitable ‘natural’ resource like oil. This means every digitisable movement, transaction and interaction we make is potentially commodifiable. Moreover, the networks of specialist companies, partners and affiliates that capture, store, process, broker and resell the new oil are becoming so complex they are impenetrable. This includes the involvement of commercial actors in public institutions such as schools.

Lupton & Williamson (2017) use the term ‘datafied child’ to draw attention to this creeping normalisation of harvesting data about children. As its provenance becomes more opaque the data is orphaned and vulnerable to further commodification. And when it is shared across unencrypted channels or stored using weak security (as high profile cases show) it is easily hacked. The implications of this are only beginning to emerge. In response, children’s rights, privacy and protection; the particular ethics of the capture and management of children’s data; and its potential for commercial exploitation are all beginning to receive more attention.

Refocusing on children

Apart from a ticked box, companies have no way of knowing if a parent or child has given their consent. Children, or their parents, will often sign away their data to quickly dispatch any impediment to accessing the Wi-Fi. When children use public Wi-Fi they are opening channels to their devices that are often unencrypted. We need to start mapping the range of actors who are collecting data in this way and find out if they have any provisions for protecting children’s data.

Similarly, when children use their apps, companies assume that a responsible adult has agreed to the terms and conditions. Parents are expected to be gatekeepers, boundary setters, and supervisors. However, for various reasons, there may not be an informed, (digitally) literate adult on hand. For example, parents may be too busy with work or too ill to stay on top of their children’s complex digital lives. Children are educated in year groups but they share digital networks and practices with older children and teenagers, including siblings, extended family members, and friends who may enable risky practices.

We may need to start looking at additional ways of protecting children that transfer the burden away from the family and to the companies that are capturing and monetising the data. This includes being realistic about the efficacy of current legislation. Because children can simply enter a fake birthdate, application of the US Children’s Online Privacy Protection Act to restrict the collection of children’s personal data online has been fairly ineffectual (boyd et al., 2011). In Europe, the incoming General Data Protection Regulation allows EU states to set a minimum age of 16 under which children cannot consent to having their data processed, potentially encouraging an even larger population of minors to lie about their age online.

We need to ask what data capture and management would look like if it were guided by a children’s framework such as the one developed by Sonia Livingstone and endorsed by the Children’s Commissioner. Perhaps only companies that complied with strong security and anonymisation procedures would be licensed to trade in the UK? Given the financial drivers at work, an ideal solution would possibly make better regulation a commercial incentive. We will be exploring these and other similar questions that emerge over the coming months.


This work is part of the OII project “Child safety on the Internet: looking beyond ICT actors”, which maps the range of non-ICT companies engaging digitally with children and identifies areas where their actions might affect a child’s exposure to online risks such as data theft, adverse online experiences or sexual exploitation. It is funded by the Oak Foundation.


We aren’t “rational actors” when it comes to privacy — and we need protecting

We are increasingly exposed to new practices of data collection. Image by ijclark (Flickr CC BY 2.0).

As digital technologies and platforms are increasingly incorporated into our lives, we are exposed to new practices of data creation and collection — and there is evidence that American citizens are deeply concerned about the consequences of these practices. But despite these concerns, the public has not abandoned technologies that produce data and collect personal information. In fact, the popularity of technologies and services that reveal insights about our health, fitness, medical conditions, and family histories in exchange for extensive monitoring and tracking paints a picture of a public that is voluntarily offering itself up to increasingly invasive forms of surveillance.

This seeming inconsistency between intent and behaviour is routinely explained with reference to the “privacy paradox”. Advertisers, retailers, and others with a vested interest in avoiding the regulation of digital data collection have pointed to this so-called paradox as an argument against government intervention. By phrasing privacy as a choice between involvement in (or isolation from) various social and economic communities, they frame information disclosure as a strategic decision made by informed consumers. Indeed, discussions on digital privacy have been dominated by the idea of the “empowered consumer” or “privacy pragmatist” — an autonomous individual who makes informed decisions about the disclosure of their personal information.

But there is increasing evidence that “control” is a problematic framework through which to operationalize privacy. In her Policy & Internet article “From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates,” Nora A. Draper examines how the figure of the “privacy pragmatist” developed by the prominent privacy researcher Alan Westin has been used to frame privacy within a typology of personal preference — a framework that persists in academic, regulatory, and commercial discourses in the United States. Those in the pragmatist group are wary about the safety and security of their personal information, but make supposedly rational decisions about the conditions under which they are comfortable with disclosure, logically calculating the costs and benefits associated with information exchange.

Academic critiques of this model have tended to focus on the methodological and theoretical validity of the pragmatist framework; however, in light of two recent studies that suggest individuals are resigned to the loss of privacy online, this article argues for the need to examine a possibility that has been overlooked as a consequence of this focus on Westin’s typology of privacy preferences: that people have opted out of the discussion altogether. Considering a theory of resignation alters how the problem of privacy is framed and opens the door to alternative discussions around policy solutions.

We caught up with Nora to discuss her findings:

Ed.: How easy is it even to discuss privacy (and people’s “rational choices”), when we know so little about what data is collected about us through a vast number of individually innocuous channels — or the uses to which it is put?

Nora: This is a fundamental challenge in current discussions around privacy. There are steps that we can take as individuals that protect us from particular types of intrusion, but in an environment where seemingly benign data flows are used to understand and predict our behaviours, it is easy for personal privacy protection to feel like an uphill battle. In such an environment, it is increasingly important that we consider resigned inaction to be a rational choice.

Ed.: I’m not surprised that there will be people who basically give up in exhaustion, when faced with the job of managing their privacy (I mean, who actually reads the Google terms that pop up every so often?). Is there a danger that this lack of engagement with privacy will be normalised during a time that we should actually be paying more, not less, attention to it?

Nora: This feeling of powerlessness around our ability to secure opportunities for privacy has the potential to discourage individual or collective action around privacy. Anthropologists Peter Benson and Stuart Kirsch have described the cultivation of resignation as a strategy to discourage collective action against undesirable corporate practices. Whether or not these are deliberate efforts, the consequence of creating a nearly unnavigable privacy landscape is that people may accept undesirable practices as inevitable.

Ed.: I suppose another irony is the difficulty of getting people to care about something that nevertheless relates so fundamentally and intimately to themselves. How do we get privacy to seem more interesting and important to the general public?

Nora: People experience the threats of unwanted visibility very differently. For those who are used to the comfortable feeling of public invisibility — the types of anonymity we feel even in public spaces — the likelihood of an unwanted privacy breach can feel remote. This is one of the problems of thinking about privacy purely as a personal issue. When people internalize the idea that if they have done nothing wrong, they have no reason to be concerned about their privacy, it can become easy to dismiss violations when they happen to others. We can become comfortable with a narrative that if a person’s privacy has been violated, it’s likely because they failed to use the appropriate safeguards to protect their information.

This cultivation of a set of personal responsibilities around privacy is problematic not least because it has the potential to blame victims rather than those parties responsible for the privacy incursions. I believe there is real value in building empathy around this issue. Efforts to treat privacy as a community practice and, perhaps, a social obligation may encourage us to think about privacy as a collective rather than individual value.

Ed.: We have a forthcoming article that explores the privacy views of Facebook / Google (companies and employees), essentially pointing out that while the public may regard privacy as pertaining to whether or not companies collect information in the first place, the companies frame it as an issue of “control” — they collect it, but let users subsequently “control” what others see. Is this fundamental discrepancy (data collection vs control) something you recognise in the discussion?

Nora: The discursive and practical framing of privacy as a question of control brings together issues addressed in your previous two questions. By providing individuals with tools to manage particular aspects of their information, companies are able to cultivate an illusion of control. For example, we may feel empowered to determine who in our digital network has access to a particular posted image, but little ability to determine how information related to that image — for example, its associated metadata or details on who likes, comments, or reposts it — is used.

The “control” framework further encourages us to think about privacy as an individual responsibility. For example, we may assume that unwanted visibility related to that image is the result of an individual’s failure to correctly manage their privacy settings. The reality is usually much more complicated than this assigning of individual blame allows for.

Ed.: How much of the privacy debate and policy making (in the States) is skewed by economic interests — i.e. holding that it’s necessary for the public to provide data in order to keep business competitive? And is the “Europe favours privacy, US favours industry” truism broadly true?

Nora: I don’t have a satisfactory answer to this question. There is evidence from past surveys I’ve done with colleagues that people in the United States are more alarmed by the collection and use of personal information by political parties than they are by similar corporate practices. Even that distinction, however, may be too simplistic. Political parties have an established history of using consumer information to segment and target particular audience groups for political purposes. We know that the U.S. government has required private companies to share information about consumers to assist in various surveillance efforts. Discussions about privacy in the U.S. are often framed in terms of tradeoffs with, for example, technological and economic innovation. This is, however, only one of the ways in which the value of privacy is undermined through the creation of false tradeoffs. Daniel Solove, for example, has written extensively on how efforts to frame privacy in opposition to safety encourages capitulation to transparency in the service of national security.

Ed.: There are some truly terrible US laws (e.g. the General Mining Act of 1872) that were developed for one purpose, but are now hugely exploitable. What is the situation for privacy? Is the law still largely fit for purpose, in a world of ubiquitous data collection? Or is reform necessary?

Nora: One example of such a law is the Electronic Communications Privacy Act (ECPA) of 1986. This law was written before many Americans had email accounts, but continues to influence the scope authorities have to access digital communications. One of the key issues in the ECPA is the differential protection for messages depending on when they were sent. The ECPA, which was written when emails would have been downloaded from a server onto a personal computer, treats emails stored for more than 180 days as “abandoned.” While messages received in the past 180 days cannot be accessed without a warrant, so-called abandoned messages require only a subpoena. Although there is some debate about whether subpoenas offer adequate privacy protections for messages stored on remote servers, the issue is that the time-based distinction created by the “180-day rule” makes little sense when access to cloud storage allows people to save messages indefinitely. Bipartisan efforts to introduce the Email Privacy Act, which would extend warrant protections to digital communication that is over 180 days old, have received wide support from those in the tech industry as well as from privacy advocacy groups.
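As a purely illustrative aid (not legal advice), the short Python sketch below encodes the time-based distinction as it is characterised above; the function name, threshold constant, and dates are invented for the example:

```python
from datetime import date, timedelta

ABANDONMENT_THRESHOLD = timedelta(days=180)  # the "180-day rule" as characterised above

def required_legal_process(sent_on: date, today: date) -> str:
    """Didactic sketch of the ECPA's time-based distinction, as described in the
    interview: recent messages need a warrant, while messages stored for more
    than 180 days ("abandoned") can be sought with only a subpoena."""
    age = today - sent_on
    return "subpoena" if age > ABANDONMENT_THRESHOLD else "warrant"

if __name__ == "__main__":
    today = date(2017, 7, 1)
    print(required_legal_process(date(2017, 5, 1), today))  # recent message -> warrant
    print(required_legal_process(date(2016, 5, 1), today))  # long-stored message -> subpoena
```

The point the sketch makes is the arbitrariness of the cut-off: nothing about a message changes on day 181 except the legal process required to obtain it.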

Another challenge, which you alluded to in your first question, pertains to the regulation of algorithms and algorithmic decision-making. These technologies are often described as “black boxes” to reflect the difficulties in assessing how they work. While the consequences of algorithmic decision-making can be profound, the processes that lead to those decisions are often opaque. The result has been increased scholarly and regulatory attention on strategies to understand, evaluate, and regulate the processes by which algorithms make decisions about individuals.

Read the full article: Draper, N.A. (2017) From Privacy Pragmatist to Privacy Resigned: Challenging Narratives of Rational Choice in Digital Privacy Debates. Policy & Internet 9 (2). doi:10.1002/poi3.142.


Nora A. Draper was talking to blog editor David Sutcliffe.

New Voluntary Code: Guidance for Sharing Data Between Organisations

Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, never pass by webcams or sensors, and never use public transport or public services, then your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your banking and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you with better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Consumer Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts, as well as users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1 — namely Collect, Store, and Distribute — and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.
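To illustrate the shape of that exercise — not the Code’s actual seven maxims, which are not reproduced here — the short Python sketch below crosses the three accountability-map areas with the three aspects to generate a simple review checklist; the prompt questions are invented placeholders:

```python
from itertools import product

# The three accountability-map areas used by the Code, and the three aspects
# from the draft ISO 38505 guidance, as described above. The prompt questions
# are invented placeholders, not the Code's actual maxims.
AREAS = ["Collect", "Store", "Distribute"]
ASPECTS = {
    "Value": "What value does this activity create, and for whom?",
    "Risk": "What could go wrong, and how severe would the impact be?",
    "Constraint": "Which laws, contracts, or local policies limit this activity?",
}

def accountability_checklist():
    """Yield one review prompt per (area, aspect) pair for a data-sharing policy review."""
    for area, (aspect, question) in product(AREAS, ASPECTS.items()):
        yield f"[{area} / {aspect}] {question}"

if __name__ == "__main__":
    for prompt in accountability_checklist():
        print(prompt)
```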

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and for those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse your data in the future, how and where your data will be stored, and finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

Designing Internet technologies for the public good

MEPs failed to support a Green call to protect Edward Snowden as a whistleblower, in order to allow him to give his testimony to the European Parliament in March. Image by greensefa.

Computers have developed enormously since the Second World War: alongside a rough doubling of computer power every two years, communications bandwidth and storage capacity have grown just as quickly. Computers can now store much more personal data, process it much faster, and rapidly share it across networks.

Data is collected about us as we interact with digital technology, directly and via organisations. Many people volunteer data to social networking sites, and sensors – in smartphones, CCTV cameras, and “Internet of Things” objects – are making the physical world as trackable as the virtual. People are very often unaware of how much data is gathered about them – let alone the purposes for which it can be used. Also, most privacy risks are highly probabilistic, cumulative, and difficult to calculate. A student sharing a photo today might not be thinking about a future interview panel; or that the heart rate data shared from a fitness gadget might affect future decisions by insurance and financial services (Brown 2014).

Rather than organisations waiting for something to go wrong, then spending large amounts of time and money trying (and often failing) to fix privacy problems, computer scientists have been developing methods for designing privacy directly into new technologies and systems (Spiekermann and Cranor 2009). One of the most important principles is data minimization; that is, limiting the collection of personal data to that needed to provide a service – rather than storing everything that can be conveniently retrieved. This limits the impact of data losses and breaches, for example by corrupt staff with authorised access to data – a practice that the UK Information Commissioner’s Office (2006) has shown to be widespread.
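As a rough illustration of data minimisation at the point of collection, the Python sketch below keeps only the fields a hypothetical service declares it needs and discards everything else before storage; all field names and values are invented:

```python
# Fields the hypothetical service genuinely needs in order to operate; anything
# else the client happens to send is dropped before storage.
REQUIRED_FIELDS = {"email", "delivery_postcode"}
OPTIONAL_FIELDS = {"marketing_opt_in"}  # kept only if explicitly supplied

def minimise(submitted: dict) -> dict:
    """Keep only the fields the service needs; discard everything else."""
    allowed = REQUIRED_FIELDS | OPTIONAL_FIELDS
    missing = REQUIRED_FIELDS - submitted.keys()
    if missing:
        raise ValueError(f"Cannot provide the service without: {sorted(missing)}")
    return {key: value for key, value in submitted.items() if key in allowed}

if __name__ == "__main__":
    form = {
        "email": "user@example.org",
        "delivery_postcode": "OX1 1AA",
        "date_of_birth": "1990-01-01",  # conveniently available, but not needed
        "phone": "07700 900000",        # likewise discarded
    }
    print(minimise(form))  # only email and delivery_postcode are retained
```

The design choice is simply that what is never stored cannot later be lost, breached, or repurposed.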

Privacy by design also protects against function creep (Gürses et al. 2011). When an organisation invests significant resources to collect personal data for one reason, it can be very tempting to use it for other purposes. While this is limited in the EU by data protection law, government agencies are in a good position to push for changes to national laws if they wish, bypassing such “purpose limitations”. Nor do these rules tend to apply to intelligence agencies.

Another key aspect of putting users in control of their personal data is making sure they know what data is being collected, how it is being used – and ideally being asked for their consent. There have been some interesting experiments with privacy interfaces, for example helping smartphone users understand who is asking for their location data, and what data has been recently shared with whom.

Smartphones have enough storage and computing capacity to do some tasks, such as showing users adverts relevant to their known interests, without sharing any personal data with third parties such as advertisers (Haddadi et al. 2011). This kind of user-controlled data storage and processing has all kinds of applications – for example, with smart electricity meters (Danezis et al. 2013), and congestion charging for roads (Balasch et al. 2010).
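As a rough illustration of the general idea, rather than of any deployed system, the sketch below shows a hypothetical on-device ad selector: the handset fetches the same generic catalogue of adverts as every other device, matches it against an interest profile that never leaves the phone, and only the chosen advert is displayed. The privacy gain comes from the direction of data flow: the catalogue comes down to the device, but the profile never goes up.

```python
# Hypothetical sketch of user-controlled, on-device processing:
# the interest profile stays on the handset; only a generic ad
# catalogue is downloaded, and no profile data is sent back.

from dataclasses import dataclass

@dataclass
class Advert:
    ad_id: str
    keywords: set

# The same catalogue is served to every device, so requesting it
# reveals nothing about the individual user.
CATALOGUE = [
    Advert("ad-1", {"cycling", "outdoors"}),
    Advert("ad-2", {"cooking", "recipes"}),
    Advert("ad-3", {"travel", "flights"}),
]

def choose_ad(catalogue, local_interests: set) -> Advert:
    """Pick the advert with the largest keyword overlap, computed locally."""
    return max(catalogue, key=lambda ad: len(ad.keywords & local_interests))

# The profile is built and kept on the device only.
local_profile = {"cycling", "travel", "outdoors"}
print(choose_ad(CATALOGUE, local_profile).ad_id)   # -> "ad-1"
```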

What broader lessons can be drawn about shaping technologies for the public good? What is the public good, and who gets to define it? One option is to look at opinion polling about public concerns and values over long periods of time. The European Commission’s Eurobarometer polls reveal that in most European countries (including the UK), people have had significant concerns about data privacy for decades.

A more fundamental view of core social values can be found at the national level in constitutions, and between nations in human rights treaties. As well as the protection of private life and correspondence in the European Convention on Human Rights’ Article 8, the freedom of thought, expression, association and assembly rights in Articles 9-11 (and their equivalents in the US Bill of Rights, and the International Covenant on Civil and Political Rights) are also relevant.

This national and international law restricts how states use technology to infringe human rights – even for national security purposes. There are several US legal challenges to the constitutionality of NSA communications surveillance, with a federal court in Washington DC finding that bulk access to phone records is against the Fourth Amendment [1] (but another court in New York finding the opposite [2]). The UK campaign groups Big Brother Watch, Open Rights Group, and English PEN have taken a case to the European Court of Human Rights, arguing that UK law in this regard is incompatible with the Human Rights Convention.

Can technology development be shaped more broadly to reflect such constitutional values? One of the best-known attempts is the European Union’s data protection framework. Privacy is a core European political value, not least because of the horrors of the Nazi and Communist regimes of the 20th century. Germany, France and Sweden all developed data protection laws in the 1970s in response to the development of automated systems for processing personal data, followed by most other European countries. The EU’s Data Protection Directive (95/46/EC) harmonises these laws, and has provisions that encourage organisations to use technical measures to protect personal data.

An update of this Directive, which the European Parliament has been debating over the last year, more explicitly includes this type of regulation by technology. Under this General Data Protection Regulation, organisations that process personal data will have to implement appropriate technical measures to protect the rights individuals have under the Regulation. By default, organisations should only collect the minimum personal data they need, and allow individuals to control the distribution of their personal data. The Regulation would also require companies to make it easier for users to download all of their data, so that it could be uploaded to a competitor service (for example, one with better data protection) – bringing market pressure to bear (Brown and Marsden 2013).
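To indicate what the portability provision implies in engineering terms, here is a purely illustrative sketch. The data model is hypothetical and the format is not prescribed by the Regulation; the essential requirement is simply that everything an organisation holds about a user can be serialised into a machine-readable export the user can take to a competing service.

```python
# Purely illustrative sketch of data portability: everything an organisation
# holds about a user is serialised into a machine-readable export the user
# can take elsewhere. The data model here is hypothetical.
import json

user_records = {
    "profile": {"display_name": "Ana", "joined": "2012-05-01"},
    "posts": [{"id": 1, "text": "hello"}],
    "settings": {"privacy_level": "friends-only"},
}

def export_user_data(records: dict) -> str:
    """Return a complete, machine-readable copy of the user's data."""
    return json.dumps(records, indent=2, sort_keys=True)

print(export_user_data(user_records))
```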

This type of technology regulation is not uncontroversial. Viviane Reding, the European Commissioner responsible for the Data Protection Regulation until July, said that she had seen unprecedented and “absolutely fierce” lobbying against some of its provisions. Legislators would clearly be foolish to try to micro-manage the development of new technology. But the EU’s principles-based approach to privacy has been internationally influential, with over 100 countries now having adopted data protection laws based on, or similar to, the Directive (Greenleaf 2014).

If the EU can find the right balance in its Regulation, it has the opportunity to set the new global standard for privacy-protective technologies – a very significant opportunity indeed in the global marketplace.

[1] Klayman v. Obama, 2013 WL 6571596 (D.D.C. 2013)

[2] ACLU v. Clapper, No. 13-3994 (S.D. New York December 28, 2013)

References

Balasch, J., Rial, A., Troncoso, C., Preneel, B., Verbauwhede, I. and Geuens, C. (2010) PrETP: Privacy-preserving electronic toll pricing. 19th USENIX Security Symposium, pp. 63–78.

Brown, I. (2014) The economics of privacy, data protection and surveillance. In J.M. Bauer and M. Latzer (eds.) Research Handbook on the Economics of the Internet. Cheltenham: Edward Elgar.

Brown, I. and Marsden, C. (2013) Regulating Code: Good Governance and Better Regulation in the Information Age. Cambridge, MA: MIT Press.

Danezis, G., Fournet, C., Kohlweiss, M. and Zanella-Beguelin, S. (2013) Smart Meter Aggregation via Secret-Sharing. ACM Smart Energy Grid Security Workshop.

Greenleaf, G. (2014) Sheherezade and the 101 data privacy laws: Origins, significance and global trajectories. Journal of Law, Information & Science.

Gürses, S., Troncoso, C. and Diaz, C. (2011) Engineering Privacy by Design. Computers, Privacy & Data Protection.

Haddadi, H, Hui, P., Henderson, T. and Brown, I. (2011) Targeted Advertising on the Handset: Privacy and Security Challenges. In Müller, J., Alt, F., Michelis, D. (eds) Pervasive Advertising. Heidelberg: Springer, pp. 119-137.

Information Commissioner’s Office (2006) What price privacy? HC 1056.

Spiekermann, S. and Cranor, L.F. (2009) Engineering Privacy. IEEE Transactions on Software Engineering 35 (1).


Read the full article: Keeping our secrets? Designing Internet technologies for the public good, European Human Rights Law Review 4: 369-377. This article is adapted from Ian Brown’s 2014 Oxford London Lecture, given at Church House, Westminster, on 18 March 2014, supported by Oxford University’s Romanes fund.

Professor Ian Brown is Associate Director of Oxford University’s Cyber Security Centre and Senior Research Fellow at the Oxford Internet Institute. His research is focused on information security, privacy-enhancing technologies, and Internet regulation.

Young people are the most likely to take action to protect their privacy on social networking sites

A pretty good idea of what not to do on a social media site. Image by Sean MacEntee.

Standing on a stage in San Francisco in early 2010, Facebook founder Mark Zuckerberg, partly responding to the site’s decision to change the privacy settings of its 350 million users, announced that as Internet users had become more comfortable sharing information online, privacy was no longer a “social norm”. Of course, he had an obvious commercial interest in relaxing norms surrounding online privacy, but this attitude has nevertheless been widely echoed in the popular media. Young people are supposed to be sharing their private lives online — and providing huge amounts of data for commercial and government entities — because they don’t fully understand the implications of the public nature of the Internet.

There has actually been little systematic research on the privacy behaviour of different age groups in online settings. But there is certainly evidence of a growing (general) concern about online privacy (Marwick et al., 2010), with a 2013 Pew study finding that 50 percent of Internet users were worried about the information available about them online, up from 30 percent in 2009. Following the recent revelations about the NSA’s surveillance activities, a Washington Post-ABC poll reported 40 percent of its U.S. respondents as saying that it was more important to protect citizens’ privacy even if it limited the ability of the government to investigate terrorist threats. But what of young people, specifically? Do they really care less about their online privacy than older users?

Privacy concerns an individual’s ability to control what personal information about them is disclosed, to whom, when, and under what circumstances. We present different versions of ourselves to different audiences, and the expectations and norms of the particular audience (or context) will determine what personal information is presented or kept hidden. This highlights a fundamental problem with privacy in some SNSs: that of ‘context collapse’ (Marwick and boyd 2011). This describes what happens when audiences that are normally kept separate offline (such as employers and family) collapse into a single online context, such as a single Facebook account or Twitter channel. This can lead to problems when actions that are appropriate in one context are seen by members of another audience; consider, for example, the US high school teacher who was forced to resign after a parent complained about a Facebook photo of her holding a glass of wine while on holiday in Europe.

SNSs are particularly useful for investigating how people handle privacy. Their tendency to collapse the “circles of social life” may prompt users to reflect more about their online privacy (particularly if they have been primed by media coverage of people losing their jobs, going to prison, etc. as a result of injudicious postings). However, despite SNSs being an incredibly useful source of information about online behaviour, few articles in the large body of literature on online privacy draw on systematically collected data, and the results published so far are probably best described as conflicting (see the literature review in the full paper). Furthermore, they often use convenience samples of college students, meaning they are unable to adequately address either age effects or potentially related variables such as education and income. These ambiguities provide fertile ground for additional research, particularly research based on empirical data.

The OII’s own Oxford Internet Surveys (OxIS) collect data on British Internet users and non-users through nationally representative random samples of more than 2,000 individuals aged 14 and older, surveyed face-to-face. One of the (many) things we are interested in is online privacy behaviour, which we measure by asking respondents who have an SNS profile: “Thinking about all the social network sites you use, … on average how often do you check or change your privacy settings?” In addition to the demographic factors we collect about respondents (age, sex, location, education, income etc.), we can construct various non-demographic measures that might have a bearing on this question, such as: comfort revealing personal data; bad experiences online; concern with negative experiences; number of SNSs used; and self-reported ability using the Internet.

So are young people completely unconcerned about their privacy online, gaily granting access to everything to everyone? Well, in a word, no. We actually find a clear inverse relationship: almost 95% of 14-17-year-olds have checked or changed their SNS privacy settings, with the percentage steadily dropping to 32.5% of respondents aged 65 and over. The strength of this effect is remarkable: between the oldest and youngest the difference is over 62 percentage points, and we find little difference in the pattern between the 2013 and 2011 surveys. This immediately suggests that the common assumption that young people don’t care about — and won’t act on — privacy concerns is probably wrong.
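For readers curious about the mechanics, the sketch below shows the shape of the calculation behind an age breakdown of this kind. The numbers are invented for illustration; they are not OxIS data, which is reported in the paper.

```python
# Illustrative tabulation only: the rows below are invented, not OxIS data.
# It shows the kind of cross-tabulation behind the age breakdown reported above.
import pandas as pd

# Each row is one (hypothetical) SNS user: age group and whether they have
# ever checked or changed their privacy settings.
respondents = pd.DataFrame({
    "age_group": ["14-17", "14-17", "18-24", "25-44", "45-64", "65+", "65+"],
    "checked_settings": [True, True, True, True, False, False, True],
})

share_by_age = (respondents
                .groupby("age_group")["checked_settings"]
                .mean()
                .mul(100)
                .round(1))
print(share_by_age)   # percentage who have checked/changed settings, per age group
```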

[Figure: percentage of SNS users who have checked or changed their privacy settings, by age group.]

Comparing our own data with recent nationally representative surveys from Australia (OAIC 2013) and the US (Pew 2013), we see a striking similarity: young people are more, not less, likely than older people to have taken action to protect the privacy of their personal information on social networking sites. We find that this age effect remains significant even after controlling for other demographic variables (such as education). And none of the five non-demographic variables changes the age effect either (see the paper for the full data, analysis and modelling). The age effect appears to be real.
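In outline, checking whether the age effect survives controls amounts to regressing the privacy-behaviour measure on age alongside the other demographic variables. The sketch below is not the paper’s model or data: the variables, coefficients and simulated responses are invented, and it simply illustrates the general form of such a check using a logistic regression.

```python
# Illustrative only: invented data and hypothetical variable names,
# not the model or data used in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(14, 80, n),
    "education_years": rng.integers(9, 20, n),
    "income_band": rng.integers(1, 6, n),
})
# Simulate an outcome in which younger users are more likely to act on privacy.
p = 1 / (1 + np.exp(-(3.0 - 0.05 * df["age"] + 0.02 * df["education_years"])))
df["checked_settings"] = (rng.random(n) < p).astype(int)

# Does the age effect persist once education and income are controlled for?
model = smf.logit("checked_settings ~ age + education_years + C(income_band)",
                  data=df).fit()
print(model.summary())  # a significant negative age coefficient mirrors the finding
```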

So in short, and contrary to the prevailing discourse, we do not find young people to be apathetic when it comes to online privacy. Barnes (2006) outlined the original ‘privacy paradox’ by arguing that “adults are concerned about invasion of privacy, while teens freely give up personal information (…) because often teens are not aware of the public nature of the Internet.” This may once have been true, but it is certainly not the case today.

Existing theories are unable to explain why young people are more likely to act to protect privacy, but maybe the answer lies in the broad, fundamental characteristics of social life. It is social structure that creates context: people know each other based around shared life stages, experiences and purposes. Every person is the center of many social circles, and different circles have different norms for what is acceptable behavior, and thus for what is made public or kept private. If we think of privacy as a sort of meta-norm that arises between groups rather than within groups, it provides a way to smooth out some of the inevitable conflicts of the varied contexts of modern social life.

This might help explain why young people are particularly concerned about their online privacy. At a time when they’re leaving their families and establishing their own identities, they will often be doing activities in one circle (e.g. friends) that they do not want known in other circles (e.g. potential employers or parents). As an individual enters the work force, starts to pay taxes, and develops friendships and relationships farther from the home, the number of social circles increases, increasing the potential for conflicting privacy norms. Of course, while privacy may still be a strong social norm, it may not be in the interest of the SNS provider to cater for its differentiated nature.

The real paradox is that these sites have become so embedded in users’ social lives that maintaining those lives requires disclosing information on them, despite the significant privacy risks of doing so, and despite controls that are often inadequate for meeting users’ diverse and complex privacy needs.

Read the full paper: Blank, G., Bolsover, G., and Dubois, E. (2014) A New Privacy Paradox: Young people and privacy on social network sites. Prepared for the Annual Meeting of the American Sociological Association, 16-19 August 2014, San Francisco, California.

References

Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First Monday, 11(9).

Marwick, A. E., Murgia-Diaz, D., & Palfrey, J. G. (2010). Youth, Privacy and Reputation (Literature Review). SSRN Scholarly Paper No. ID 1588163. Rochester, NY: Social Science Research Network.

Marwick, A. E., & boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133. doi:10.1177/1461444810365313


Grant Blank is a Survey Research Fellow at the OII. He is a sociologist who studies the social and cultural impact of the Internet and other new communication media.

Past and Emerging Themes in Policy and Internet Studies

We can’t understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet. Image from the (post-PRISM) “Stop Watching Us” Berlin Demonstration (2013) by mw238.

In the journal’s inaugural issue, founding Editor-in-Chief Helen Margetts outlined what are essentially two central premises behind Policy & Internet’s launch. The first is that “we cannot understand, analyze or make public policy without understanding the technological, social and economic shifts associated with the Internet” (Margetts 2009, 1). It is simply not possible to consider public policy today without some regard for the intertwining of information technologies with everyday life and society. The second premise is that the rise of the Internet is associated with shifts in how policy itself is made. In particular, she proposed that impacts of Internet adoption would be felt in the tools through which policies are effected, and the values that policy processes embody.

The purpose of the Policy and Internet journal was to take up these two challenges: the public policy implications of Internet-related social change, and Internet-related changes in policy processes themselves. In recognition of the inherently multi-disciplinary nature of policy research, the journal is designed to act as a meeting place for all kinds of disciplinary and methodological approaches. Helen predicted that methodological approaches based on large-scale transactional data, network analysis, and experimentation would turn out to be particularly important for policy and Internet studies. Driving the advancement of these methods was therefore the journal’s third purpose. Today, the journal has reached a significant milestone: over one hundred high-quality peer-reviewed articles published. This seems an opportune moment to take stock of what kind of research we have published in practice, and see how it stacks up against the original vision.

At the most general level, the journal’s articles fall into three broad categories: the Internet and public policy (48 articles), the Internet and policy processes (51 articles), and discussion of novel methodologies (10 articles). The first of these categories, “the Internet and public policy,” can be further broken down into a number of subcategories. One of the most prominent of these streams is fundamental rights in a mediated society (11 articles), which focuses particularly on privacy and freedom of expression. Related streams are children and child protection (six articles), copyright and piracy (five articles), and general e-commerce regulation (six articles), including taxation. A recently emerged stream in the journal is hate speech and cybersecurity (four articles). Of course, an enduring research stream is Internet governance, or the regulation of technical infrastructures and economic institutions that constitute the material basis of the Internet (seven articles). In recent years, the research agenda in this stream has been influenced by national policy debates around broadband market competition and network neutrality (Hahn and Singer 2013). Another enduring stream deals with the Internet and public health (eight articles).

Looking specifically at “the Internet and policy processes” category, the largest stream is e-participation, or the role of the Internet in engaging citizens in national and local government policy processes, through methods such as online deliberation, petition platforms, and voting advice applications (18 articles). Two other streams are e-government, or the use of Internet technologies for government service provision (seven articles), and e-politics, or the use of the Internet in mainstream politics, such as election campaigning and communications of the political elite (nine articles). Another stream that has gained pace during recent years is online collective action, or the role of the Internet in activism, ‘clicktivism,’ and protest campaigns (16 articles). Last year the journal published a special issue on online collective action (Calderaro and Kavada 2013), and the next forthcoming issue includes an invited article on digital civics by Ethan Zuckerman, director of MIT’s Center for Civic Media, with commentary from prominent scholars of Internet activism. A trajectory discernible in this stream over the years is a movement from discussing mere potentials towards analyzing real impacts—including critical analyses of the sometimes inflated expectations and “democracy bubbles” created by digital media (Shulman 2009; Karpf 2012; Bryer 2011).

The final category, discussion of novel methodologies, consists of articles that develop, analyze, and reflect critically on methodological innovations in policy and Internet studies. Empirical articles published in the journal have made use of a wide range of conventional and novel research methods, from interviews and surveys to automated content analysis and advanced network analysis methods. But of those articles where methodology is the topic rather than merely the tool, the majority deal with so-called “big data,” or the use of large-scale transactional data sources in research, commerce, and evidence-based public policy (nine articles). The journal recently devoted a special issue to the potentials and pitfalls of big data for public policy (Margetts and Sutcliffe 2013), based on selected contributions to the journal’s 2012 big data conference: Big Data, Big Challenges? In general, the notion of data science and public policy is a growing research theme.

This brief analysis suggests that research published in the journal over the last five years has indeed followed the broad contours of the original vision. The two challenges, namely policy implications of Internet-related social change and Internet-related changes in policy processes, have both been addressed. In particular, research has addressed the implications of the Internet’s increasing role in social and political life. The journal has also furthered the development of new methodologies, especially the use of online network analysis techniques and large-scale transactional data sources (aka ‘big data’).

As expected, authors from a wide range of disciplines have contributed their perspectives to the journal, and engaged with other disciplines, while retaining the rigor of their own specialisms. The geographic scope of the contributions has been truly global, with authors and research contexts from six continents. I am also pleased to note that a characteristic common to all the published articles is polish; this is no doubt in part due to the high level of editorial support that the journal is able to afford to authors, including copyediting. The justifications for the journal’s establishment five years ago have clearly been borne out, so that the journal now performs an important function in fostering and bringing together research on the public policy implications of an increasingly Internet-mediated society.

And what of my own research interests as an editor? In the inaugural editorial, Helen Margetts highlighted work, finance, exchange, and economic themes in general as being among the prominent areas of Internet-related social change that are likely to have significant future policy implications. I think for the most part, these implications remain to be addressed, and this is an area that the journal can encourage authors to tackle better. As an editor, I will work to direct attention to this opportunity, and welcome manuscript submissions on all aspects of Internet-enabled economic change and its policy implications. This work will be kickstarted by the journal’s 2014 conference (26-27 September), which this year focuses on crowdsourcing and online labor.

Our published articles will continue to be highlighted here in the journal’s blog. Launched last year, the blog will, we believe, help to expand the reach and impact of research published in Policy and Internet to the wider academic and practitioner communities, promote discussion, and increase authors’ citations. After all, publication is only the start of an article’s public life: we want people reading, debating, citing, and offering responses to the research that we, and our excellent reviewers, feel is important and worth publishing.

Read the full editorial:  Lehdonvirta, V. (2014) Past and Emerging Themes in Policy and Internet Studies. Policy & Internet 6(2): 109-114.

References

Bryer, T.A. (2011) Online Public Engagement in the Obama Administration: Building a Democracy Bubble? Policy & Internet 3 (4).

Calderaro, A. and Kavada, A. (2013) Challenges and Opportunities of Online Collective Action for Policy Change. Policy & Internet 5 (1).

Hahn, R. and Singer, H. (2013) Is the U.S. Government’s Internet Policy Broken? Policy & Internet 5 (3) 340-363.

Karpf, D. (2012) Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism. Policy & Internet 2 (4) 7-41.

Margetts, H. (2009) The Internet and Public Policy. Policy & Internet 1 (1).

Margetts, H. and Sutcliffe, D. (2013) Addressing the Policy Challenges and Opportunities of ‘Big Data.’ Policy & Internet 5 (2) 139-146.

Shulman, S.W. (2009) The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking. Policy & Internet 1 (1) 23-53.