Open government policies are spreading across Europe—but what are the expected benefits?

The rhetoric of innovation and openness is bipartisan at the national level in Europe. Crowd celebrating the election victory of moderniser Emmanuel Macron, by Lorie Shaull (Flickr CC BY-SA 2.0).

Open government policies are spreading across Europe, challenging previous models of the public sector, and defining new forms of relationship between government, citizens, and digital technologies. In their Policy & Internet article “Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries,” Emiliana De Blasio and Donatella Selva present a qualitative analysis of policy documents from France, Italy, Spain, and the UK, in order to map out the different meanings of open government, and how it is framed by different national governments.

As a policy agenda, open government can be thought of as involving four variables: transparency, participation, collaboration, and digital technologies in democratic processes. Although the variables are all interpreted in different ways, participation, collaboration, and digital technology provide the greatest challenge to government, given that they imply a major restructuring of public administration, whereas transparency goals (i.e., the disclosure of open data and the provision of monitoring tools) do not. Indeed, transparency is mentioned in the earliest accounts of open government from the 1950s.

The authors show the emergence of competing models of open government in Europe, with transparency and digital technologies being the most prominent issues, and participation and collaboration being less considered and implemented. The standard model of open government seems to stress innovation and openness, and occasionally public-private collaboration, but fails to achieve open decision making, with the policy-making process typically rooted in existing mechanisms. However, the authors also see the emergence of a policy framework within which democratic innovations can develop—testament to the vibrancy of the relationship between citizens and the public administration in contemporary European democracies.

We caught up with the authors to discuss their findings:

Ed.: Would you say there are more similarities than differences between these countries’ approaches and expectations for open government? What were your main findings (briefly)?

Emiliana / Donatella: We can imagine the four European countries (France, Italy, Spain, and the UK) as positioned on a continuum between a participatory frame and an economic/innovation frame: on one side, French policies focus on open government in order to strengthen and innovate the tradition of débat public; at the opposite side, the roots of the UK’s open government are in cost-efficiency, accountability, and transparency arguments. Between those two poles, Italian and Spanish policies situate open government in the context of a massive reform of the public sector, intended to reduce the administrative burden and to restore citizen trust in institutions. Two years after we wrote the article, we can observe that both in Italy and Spain something has changed, and participation has regained attention as a public policy issue.

Ed.: How much does policy around open data change according to who’s in power? (Obama and Trump clearly have very different ideas about the value of opening up government). Or do civil services tend to smooth out any ideological differences around openness and transparency, even as parties enter and leave power?

Emiliana / Donatella: The case of open data is quite peculiar: it is one of the few policy issues directly addressed by the European Union Commission, and now by the transnational agreement on the G8 Open Data Charter, and for this reason we could say there is a homogenising trend. Moreover, opening up data is an ongoing process—started at least eight years ago—that will be too difficult for any new government to stop. As for openness and transparency in general, the Cameron (and now May), Hollande, Monti (and then Renzi), and Rajoy governments all wrote policies with a strong emphasis on innovation and openness as the key to a better future.

In fact, we observed that at the national level, the rhetoric of innovation and openness is bipartisan, and not dependent on political orientation—although the concrete policy instruments and implementation strategies might differ. It is also for this reason that governments tend to remain in the “comfort zone” of transparency and public-private partnerships: they still evoke a change in the relationship between the public sector and civil society, but they don’t actually address this change.

Still, we should highlight that at the regional and local levels open data, transparency and participation policies are mostly promoted by liberal and/or left-leaning administrations.

Ed.: Your results for France (i.e. almost no mention of the digital economy, growth, or reform of public services) are basically the opposite of Macron’s (winning) platform of innovation and reform. Did Macron identify a problem in France; and might you expect a change as he takes control?

Emiliana / Donatella: Macron’s electoral programme is based on what he already did while in charge at the Ministry of Economy: he pursued a French digital agenda aiming to attract foreign investment, to create digital productive hubs (the French Tech), and to innovate the whole economy. Interestingly, however, he did not frame those policies under the umbrella of open government, preferring to speak about “modernisation.” The importance given by Macron to innovation in the economy and public sector finds some antecedents in the policies we analysed: the issue of “modernisation” was prominent, and we expect it to become even more so now that he has gained the presidency.

Ed.: In your article you analyse policy documents, i.e. texts that set out hopes and intentions. But is there any sense of how much practical effect these have: particularly given how expensive it is to open up data? You note “the Spanish and Italian governments are especially focused on restoring trust in institutions, compensating for scandals, corruption, and a general distrust which is typical in Southern Europe” and yet the current Spanish government is still being rocked by corruption scandals.

Emiliana / Donatella: The efficacy of any kind of policy can vary depending on many factors—such as the internal political context, international constraints, economic resources, and the clarity of policy instruments. In addition, we should consider that at the national level, very few policies have an immediate consequence for citizens’ everyday lives. This is surely one of the worst problems of open government: on the one side, it is a policy agenda promoted from a top-down perspective—by international and/or national institutions; on the other side, it fails to engage local communities in a purposeful dialogue. As such, open government policies appear to be self-reflective acts by governments, as paradoxical as this might be.

Ed.: Despite terrible, terrible things like the Trump administration’s apparent deletion of climate data, do you see a general trend towards increased datafication, accountability, and efficiency (perhaps even driven by industry, as well as NGOs)? Or are public administrations far too subject to political currents and individual whim?

Emiliana / Donatella: As we face turbulent times, it would be very risky to assert that tomorrow’s world will be more open than today’s. But even if we observe some interruptions, the principles of open democracy and open government have colonised public agendas: as we have tried to stress in our article, openness, participation, collaboration and innovation can have different meanings and degrees, but they succeeded in acquiring the status of policy issues.

And as you rightly point out, the path towards accountability and openness is no longer the public sector’s prerogative alone: many actors from civil society and industry have already mobilised in order to influence government agendas and public opinion, and to inform citizens. As the first open government policies start to produce practical effects on people’s everyday lives, we might expect that public awareness will rise, and that no individual will be able to ignore it.

Ed.: And does the EU have any supra-national influence, in terms of promoting general principles of openness, transparency etc.? Or is it strictly left to individual countries to open up (if they want), and in whatever direction they like? I would have thought the EU would be the ideal force to promote rational technocratic things like open government?

Emiliana / Donatella: The EU has the power of stressing some policy issues, and letting others be “forgotten”. The complex legislative procedures of the EU, together with transnational conflict, produce policies with different degrees of enforcement. Generally speaking, some EU policies have a direct influence on national laws, whereas others don’t, leaving national governments to decide whether or not to act. In the case of open government, we see that the EU has been particularly influential in setting the Digital Agenda for 2020 and now the Sustainable Future Agenda for 2030; in both documents, Europe encourages Member States to dialogue and collaborate with private actors and civil society, in order to achieve certain objectives of economic development.

At the moment, initiatives like the Open Government Partnership—which runs outside the EU competence and involves many European countries—are tying up governments in trans-national networks converging on a set of principles and methods. Because of that Partnership, for example, countries like Italy and Spain have experimented with the first national co-drafting procedures.

Read the full article: De Blasio, E. and Selva, D. (2016) Why Choose Open Government? Motivations for the Adoption of Open Government Policies in Four European Countries. Policy & Internet 8 (3). DOI: 10.1002/poi3.118.

Emiliana De Blasio and Donatella Selva were talking to blog editor David Sutcliffe.

Using Open Government Data to predict sense of local community

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Community-based approaches are widely employed in programmes that monitor and promote socioeconomic development. And building the “capacity” of a community—i.e. the ability of people to act individually or collectively to benefit the community—is key to these approaches. The various definitions of community capacity all agree that it comprises a number of dimensions—including opportunities and skills development, resource mobilisation, leadership, participatory decision making, etc.—all of which can be measured in order to understand and monitor the implementation of community-based policy. However, measuring these dimensions (typically using surveys) is time consuming and expensive, and the absence of such measurements is reflected in a greater focus in the literature on describing the process of community capacity building, rather than on describing how it’s actually measured.

A cheaper way to measure these dimensions, for example by applying predictive algorithms to existing secondary data like socioeconomic characteristics, socio-demographics, and condition of housing stock, would certainly help policy makers gain a better understanding of local communities. In their Policy & Internet article “Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data”, Alessandro Piscopo, Ronald Siebes, and Lynda Hardman employ a machine-learning technique (“Random Forests”) to evaluate an estimate of community capacity derived from open government data, and determine the most important predictive variables.

The resulting models were found to be more accurate than those based on traditional statistics, demonstrating the feasibility of the Random Forests technique for this purpose—being accurate, able to deal with small data sets and nonlinear data, and providing information about how each variable in the dataset contributes to predictive accuracy.
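To make the approach concrete, here is a minimal sketch (using synthetic data—the feature names and coefficients are illustrative assumptions, not the authors’ actual variables or pipeline) of fitting a Random Forest to neighbourhood-level predictors and inspecting how much each variable contributes to predictive accuracy:

```python
# Minimal sketch, not the authors' pipeline: fit a Random Forest to predict
# a neighbourhood-level "sense of community" score from open-data-style
# predictors, then inspect the variable importances. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["median_age", "ethnic_fragmentation", "store_accessibility",
            "pct_recent_residents", "pct_intermediate_occupation"]

n = 300  # small data sets are one reason the authors favour Random Forests
X = rng.normal(size=(n, len(features)))
# Synthetic target loosely mirroring the correlations reported in the article:
# older areas score higher; fragmentation and accessibility lower the score.
y = 0.6 * X[:, 0] - 0.4 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
model.fit(X, y)

# Importances indicate how much each variable contributes to accuracy
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:30s} {imp:.3f}")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

Unlike coefficients in a linear regression, these importances capture nonlinear and interaction effects, which is part of what makes the technique suitable for small, messy administrative datasets.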

We caught up with the authors to discuss their findings:

Ed.: Just briefly: how did you do the study? Were you essentially trying to find which combinations of variables available in Open Government Data predicted “sense of community and participation” as already measured by surveys?

Authors: Our research stemmed from an observation of the measures of social characteristics available. These are generally obtained through expensive surveys, so we asked ourselves: “how could we generate them in a more economical and efficient way?” In recent years, the UK government has openly released a wealth of datasets, which could be used to provide information for purposes other than those for which they had been created—in our case, providing measures of sense of community and participation. We started our work by consulting papers from the social science domain, to understand which factors were associated with sense of community and participation. Afterwards, we matched the factors most commonly mentioned in the literature with “actual” variables found in UK Open Government Data sources.

Ed.: You say “the most determinant variables in our models were only partially in agreement with the most influential factors for sense of community and participation according to the social science literature”—which were they, and how do you account for the discrepancy?

Authors: We observed two types of discrepancy. The first was the case of variables that had roughly the same level of importance in our models and in others previously developed, but with a different rank. For instance, median age was by far the most determinant variable in our model for sense of community. This variable was not ranked among the top five variables in the literature, although it was listed among the significant variables.

The second type of discrepancy regarded variables which were highly important in our models and not influential in others, or vice versa. An example is the socioeconomic status of residents of a neighbourhood, which appeared to have no effect on participation in prior studies, but was the top-ranking variable in our participation model (operationalised as the number of people in intermediate occupation).

We believe that there are multiple explanations for these phenomena, all of which deserve further investigation. First, highly determinant predictors in conventional statistical models have been proven to have little or no importance in ensemble algorithms, such as the one we used [1]. Second, factors influencing sense of community and civic participation may vary according to the context (e.g. different countries; see [3] about sense of community in China for an example). Finally, different methods may measure different aspects related to a socially meaningful concept, leading to different partial explanations.

Ed.: What were the predictors for “lack of community”— i.e. what would a terrible community look like, according to your models?

Authors: Our work did not really focus on finding “good” and “bad” communities. However, we did notice some characteristics that were typical of communities with low sense of community or participation in our dataset. For example, sense of community had a strong negative correlation with work and store accessibility, with ethnic fragmentation, and with the number of people who had lived in the UK for less than 10 years. On the other hand, it was positively correlated with the age of residents. Participation, instead, was negatively correlated with household composition and the occupation of residents, whilst it had a positive relation with their level of education and weekly hours worked. Of course, these data would need to be interpreted by a social scientist, in order to properly contextualise and understand them.

Ed.: Do you see these techniques as being more useful to highlight issues and encourage discussion, or actually being used in planning? For example, I can see it might raise issues if machine-learning models “proved” that presence of immigrant populations, or neighbourhoods of mixed economic or ethnic backgrounds, were less cohesive than homogeneous ones (not sure if they are?).

Authors: How machine learning algorithms work is not always clear, even to specialists, and this has led some people to describe them as “black boxes”. We believe that models like those we developed can be extremely useful to challenge existing perspectives based on past data available in the social science literature, e.g. they can be used to confirm or reject previous measures in the literature. Additionally, machine learning models can serve as indicators that can be more frequently consulted: they are cheaper to produce, we can use them more often, and see whether policies have actually worked.

Ed.: It’s great that existing data (in this case, Open Government Data) can be used, rather than collecting new data from scratch. In practice, how easy is it to repurpose this data and build models with it—including in countries where this data may be more difficult to access? And were there any variables you were interested in that you couldn’t access?

Authors: Identifying relevant datasets and getting hold of them was a lengthy process, even in the UK, where plenty of work has been done to make government data openly available. We had to retrieve many datasets from the pages of the government departments that produced them, such as the Department for Work and Pensions or the Home Office, because we could not find them through the central government data portal. Next to this, the ONS website was another very useful resource, which we used to get census data.

The hurdles encountered in gathering the data led us to recommend the development of methods that would be able to more automatically retrieve datasets from a list of sources and select the ones that provide the best results for predictive models of social dimensions.

Ed.: The OII has done some similar work, estimating the local geography of Internet use across Britain, combining survey and national census data. The researchers said the small-area estimation technique wasn’t being used routinely in government, despite its power. What do you think of their work and discussion, in relation to your own?

Authors: One of the issues we were faced with in our research was the absence of nationwide data about sense of community and participation at a neighbourhood level. The small area estimation approach used by Blank et al., 2017 [2] could provide a suitable solution to the issue. However, the estimates produced by their approach understandably incorporate a certain amount of error. In order to use estimated values as training data for predictive models of community measures it would be key to understand how this error would be propagated to the predicted values.
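The error-propagation concern can be illustrated with a quick simulation (entirely synthetic, not taken from either study): train one model on true labels and another on noisy “estimated” labels, then evaluate both against clean ground truth.

```python
# Illustrative simulation: if training labels come from small-area *estimates*
# rather than direct measurement, their error propagates into the fitted model.
# We compare a Random Forest trained on true labels with one trained on noisy
# labels, evaluating both on clean held-out data. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 4))
y_true = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=600)
y_estimated = y_true + rng.normal(scale=1.0, size=600)  # estimation error

X_train, X_test = X[:400], X[400:]
clean = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    X_train, y_true[:400])
noisy = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    X_train, y_estimated[:400])

# Both models are scored against the *true* values of the held-out areas
r2_clean = r2_score(y_true[400:], clean.predict(X_test))
r2_noisy = r2_score(y_true[400:], noisy.predict(X_test))
print(f"trained on true labels:      R^2 = {r2_clean:.2f}")
print(f"trained on estimated labels: R^2 = {r2_noisy:.2f}")
```

The gap between the two scores is one simple way to quantify how label error in the training data degrades the downstream predictions.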

[1] Berk, R. (2006) “An Introduction to Ensemble Methods for Data Analysis.” Sociological Methods & Research 34 (3): 263–295.
[2] Blank, G., Graham, M., and Calvino, C. (2017) Local Geographies of Digital Inequality. Social Science Computer Review. DOI: 10.1177/0894439317693332.
[3] Xu, Q., Perkins, D.D., and Chow, J.C.C. (2010) Sense of Community, Neighboring, and Social Capital as Predictors of Local Political Participation in China. American Journal of Community Psychology 45 (3–4): 259–271.

Read the full article: Piscopo, A., Siebes, R. and Hardman, L. (2017) Predicting Sense of Community and Participation by Applying Machine Learning to Open Government Data. Policy & Internet 9 (1). DOI: 10.1002/poi3.145.

Alessandro Piscopo, Ronald Siebes, and Lynda Hardman were talking to blog editor David Sutcliffe.

Why does the Open Government Data agenda face such barriers?

Advocates hope that opening government data will increase government transparency, catalyse economic growth, and address social and environmental challenges. Image by the UK’s Open Data Institute.

Advocates of Open Government Data (OGD)—that is, data produced or commissioned by government or government-controlled entities that can be freely used, reused and redistributed by anyone—talk about the potential of such data to increase government transparency, catalyse economic growth, address social and environmental challenges, and boost democratic participation. This heady mix of potential benefits has proved persuasive to the UK Government (and governments around the world). Over the past decade, since the emergence of the OGD agenda, the UK Government has invested extensively in making more of its data open. This investment has included £10 million to establish the Open Data Institute and a £7.5 million fund to help public bodies overcome technical barriers to releasing open data.

Yet the transformative impacts claimed by OGD advocates, in government as well as NGOs such as the Open Knowledge Foundation, still seem a rather distant possibility. Even the more modest goal of integrating the creation and use of OGD into the mainstream practices of government, businesses and citizens remains to be achieved. In my recent article “Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective” (Policy & Internet 6:3) I reflect upon the barriers preventing the OGD agenda from making a breakthrough into the mainstream. These reflections centre on the five key findings of a survey exploring where key stakeholders within the UK OGD community perceive barriers to the OGD agenda. The key messages from the UK OGD community are that:

1. Barriers to the OGD agenda are perceived to be widespread 

Unsurprisingly, given the relatively limited impact of OGD to date, my research shows that barriers to the OGD agenda are perceived to be widespread and numerous in the UK’s OGD community. What I find rather more surprising is the expectation, amongst policy makers, that these barriers ought to just melt away when exposed to the OGD agenda’s transparently obvious value and virtue. Given that the breakthrough of the OGD agenda (in actual fact) will require changes across the complex socio-technical structures of government and society, many teething problems should be expected, and considerable work will be required to overcome them.

2. Barriers on the demand side are of great concern

Members of the UK OGD community are particularly concerned about the wide range of demand-side barriers, including the low level of demand for OGD across civil society and the public and private sectors. These concerns are likely to have arisen as a legacy of the OGD community’s focus on the supply of OGD (such as public spending, prescription and geospatial data), which has often led the community to overlook the need to nurture initiatives that make use of OGD: for example, innovators such as Carbon Culture, who use OGD to address environmental challenges.

Adopting a strategic approach to supporting niches of OGD use could help overcome some of the demand-side barriers. For example, such an approach could foster the social learning required to overcome barriers relating to the practices and business models of data users. Whilst there are encouraging signs that the UK’s Open Data Institute (a UK Government-supported not-for-profit organisation seeking to catalyse the use of open data) is supporting OGD use in the private sector, there remains a significant opportunity to improve the support offered to potential OGD users across civil society. It is also important to recognise that increasing the support for OGD users is not guaranteed to result in increased demand. Rather the possibility remains that demand for OGD is limited for many other reasons—including the possibility that the majority of businesses, citizens and community organisations find OGD of very little value.

3. The structures of government continue to act as barriers

Members of the UK OGD community are also concerned that major barriers remain on the supply side, particularly in the form of the established structures and institutions of government. For example, barriers were perceived in the form of the risk-averse cultures of government organisations and the ad hoc funding of OGD initiatives. Although resilient, these structures are dynamic, so proponents of OGD need to be aware of emerging ‘windows of opportunity’ as they open up. Such opportunities may take the form of tensions within the structures of government (e.g. where restrictions on data sharing between different parts of government present an opportunity for OGD to create efficiency savings); and external pressures on government (e.g. the pressure to transition to a low carbon economy could create opportunities for OGD initiatives and demand for OGD).

4. There are major challenges to mobilising resources to support the open government data agenda

The research results also showed that members of the UK’s OGD community see mobilising the resources required to support the OGD agenda as a major challenge. Concerns around securing funding are predictably prominent, but concerns also extend to developing the skills and knowledge required to use OGD across civil society, government and the private sector. These challenges are likely to persist whilst the post-financial crisis narrative of public deficit reduction through public spending reduction dominates the political agenda. This leaves OGD advocates to consider the politics and ethics of calling for investment in OGD initiatives, whilst spending reductions elsewhere are leading to the degradation of public services provision to vulnerable and socially excluded individuals.

5. The nature of some barriers remains contentious within the OGD community

OGD is often presented by advocates as a neutral, apolitical public good. However, my research highlights the important role that values and politics play in how individuals within the OGD community perceive the agenda and the barriers it faces. For example, there are considerable differences in opinion, within the OGD community, on whether or not a private sector focus on exploiting financial value from OGD is crowding out the creation of social and environmental value. So benefits may arise from advocates being more open about the values and politics that underpin and shape the agenda. At the same time, OGD-related policy and practice could create further opportunities for social learning that brings together the diverse values and perspectives that coexist within the OGD community.

Having considered the wide range of barriers to the breakthrough of the OGD agenda, and some approaches to overcoming these barriers, these discussions need to be set in a broader political context. If the agenda does indeed make a breakthrough into the mainstream, it remains unclear what form this will take. Will the OGD agenda make a breakthrough by conforming with, and reinforcing, prevailing neoliberal interests? Or will the agenda stretch the fabric of government, the economy and society, and transform the relationship between citizens and the state?

Read the full article: Martin, C. (2014) Barriers to the Open Government Data Agenda: Taking a Multi-Level Perspective. Policy & Internet 6 (3) 217-240.

Online crowd-sourcing of scientific data could document the worldwide loss of glaciers to climate change

Ed: Project Pressure has created a platform for crowdsourcing glacier imagery, often photographs taken by climbers and trekkers. Why are scientists interested in these images? And what’s the scientific value of the data set that’s being gathered by the platform?

Klaus: Comparative photography using historical images allows year-on-year comparisons that document glacier change. The platform aims to create long-lasting scientific value with minimal technical entry barriers—it is valuable to have a global resource that combines photographs generated by Project Pressure in less documented areas with crowdsourced images taken, for example, by climbers and trekkers, and with archival pictures. The platform is future focused and will hopefully allow an up-to-date view of glaciers across the planet.

The other ways for scientists to monitor glaciers take a lot of time and effort; direct measurement of snowfall is a complicated, resource-intensive and time-consuming process. And while glacier outlines can be traced from satellite imagery, this still needs to be done manually. Also, you can’t measure thickness from satellite images, which can be obscured by debris and cloud cover, and some areas just don’t have very many satellite fly-bys.

Ed: There are estimates that the glaciers of Montana’s Glacier National Park will likely be gone by 2020 and the Ugandan glaciers by 2025, and the Alps are rapidly turning into a region of lakes. These are the famous and very visible examples of glacier loss—what’s the scale of the missing data globally?

Klaus: There’s a lot of great research being conducted in this area; however, there are approximately 300,000 glaciers worldwide, with huge data gaps in South America and the Himalayas, for instance. Sharing of Himalayan data between Indian and Chinese scientists has been a sensitive issue, given glacier meltwater is an important strategic resource in the region. But this is a popular trekking route, and it is relatively easy to gather open-source data from the public. Furthermore, there are also numerous national and scientific archives with images lying around that don’t have a central home.

Ed: What metadata are being collected for the crowdsourced images?

Klaus: The public can upload their own photos embedded with GPS coordinates, compass direction, and date. This data is aggregated into a single managed platform. With GPS becoming standard in cameras, it’s very simple to contribute to the project—taking photos with embedded GPS data is almost foolproof. The public can also contribute by uploading archival images and adding GPS data to old photographs.
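As a hedged sketch of the kind of processing such a platform performs (the function name is illustrative, not Project Pressure’s actual code): cameras store EXIF latitude and longitude as degrees/minutes/seconds plus a hemisphere reference, which must be converted to signed decimal degrees before the photo can be placed on a map.

```python
# Illustrative sketch: converting EXIF-style GPS metadata to decimal degrees.
# Cameras record latitude/longitude as (degrees, minutes, seconds) rationals
# plus a hemisphere reference ("N"/"S" or "E"/"W").

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert degrees/minutes/seconds plus hemisphere to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention
    return -value if ref in ("S", "W") else value

# Example: a photo tagged near the Rhone Glacier, Switzerland
lat = dms_to_decimal(46, 34, 12.0, "N")
lon = dms_to_decimal(8, 23, 24.0, "E")
print(round(lat, 4), round(lon, 4))
```

Once normalised this way, coordinates from phone snapshots, professional cameras, and manually geotagged archival scans all become directly comparable on the platform’s map.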

Ed: So you are crowdsourcing the gathering of this data; are there any plans to crowdsource the actual analysis?

Klaus: It’s important to note that accuracy is very important in a database, and the automated (or semi-automated) process of data generation should result in good data. And while the analytical side should be done by professionals, we are making the data open source so it can be used in education, for instance. We need to harness what crowds are good at, and know what the limitations are.

Ed: You mentioned in your talk that the sheer amount of climate data—and also the way it is communicated—means that the public has become disconnected from the reality and urgency of climate change: how is the project working to address this? What are the future plans?

Klaus: Recent studies have demonstrated a disconnect between scientific information regarding climate change and the public. The problem is not access to scientific information, but the fact that it can be overwhelming. Project Pressure is working to reconnect the public with the urgency of the problem by inspiring people to action and participation, and to engage with climate change. Project Pressure is very scalable in terms of the scientific knowledge required to use the platform: from kids to scientists. On the interface one can navigate the world and find the locations and directions of photographs, and once funding permits we will also add the time dimension.

Ed: Project Pressure has deliberately taken a non-political stance on climate change: can you explain why?

Klaus: Climate change has unfortunately become a political subject, but we want to preserve our integrity by not taking a political stance. It’s important that everyone can engage with Project Pressure regardless of their political views. We want to be an independent, objective partner.

Ed: Finally, what’s your own background? How did you get involved?

Klaus: I’m the founder, and my background is in communication and photography. Input on how to strengthen the conceptualisation has come from a range of very smart people; in particular, Dr M. Zemph from the World Glacier Monitoring Service has been very valuable.

Klaus Thymann was talking at the OII on 18 March 2013; he talked later to blog editor David Sutcliffe.

Online collective action and policy change: new special issue from Policy and Internet

The Internet has multiplied the platforms available to influence public opinion and policy making. It has also provided citizens with a greater capacity for coordination and mobilisation, which can strengthen their voice and representation in the policy agenda. As waves of protest sweep both authoritarian regimes and liberal democracies, this rapidly developing field calls for more detailed enquiry. However, research exploring the relationship between online mobilisation and policy change is still limited. This special issue of ‘Policy and Internet’ addresses this gap through a variety of perspectives. Contributions to this issue view the Internet both as a tool that allows citizens to influence policy making, and as an object of new policies and regulations, such as data retention, privacy, and copyright laws, around which citizens are mobilising. Together, these articles offer a comprehensive empirical account of the interface between online collective action and policy making.

Within this framework, the first article in this issue, “Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena?” by Stefania Milan and Arne Hintz (2013), looks at the Internet as both a tool of collective action and an object of policy. The authors provide a comprehensive overview of how computer-mediated communication creates not only new forms of organisational structure for collective action, but also new contentious policy fields. By focusing on what the authors define as ‘techie activists,’ Milan and Hintz explore how new grassroots actors participate in policy debates around the governance of the Internet at different levels. This article provides empirical evidence for what Kriesi (1995) defines as “windows of opportunities” for collective action to contribute to the policy debate around this new space of contentious politics. Milan and Hintz demonstrate how this has happened from the first World Summit on the Information Society (WSIS) in 2003 to more recent debates about Internet regulation.

Yana Breindl and François Briatte’s (2013) article “Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union” complements Milan and Hintz’s analysis by looking at how the regulation of copyright issues opens up new spaces of contentious politics. The authors compare how online and offline initiatives and campaigns in France around the “Droit d’Auteur et les Droits Voisins dans la Société de l’Information” (DADVSI) and “Haute Autorité pour la diffusion des œuvres et la protection des droits sur Internet” (HADOPI) laws, and in Europe around the Telecoms Package Reform, have contributed to the deliberations within the EU Parliament. They thus add to the rich debate on the contentious issues of intellectual property rights, demonstrating how collective action contributes to this debate at the European level.

The remaining articles in this special issue focus more on the online tactics and strategies of collective actors and the opportunities opened by the Internet for them to influence policy makers. In her article, “Activism and The Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies?” Julie Uldam (2013) discusses the tactics used by London-based environmental activists to influence policy making during the 17th UN climate conference (COP17) in 2011. Based on ethnographic research, Uldam traces the relationship between online modes of action and problem identification and demands. She also discusses the differences between radical and reformist activists in both their preferences for online action and their attitudes towards policy makers. Drawing on Cammaerts’ (2012) framework of the mediation opportunity structure, Uldam shows that radical activists preferred online tactics that aimed at disrupting the conference, since they viewed COP17 as representative of an unjust system. However, their lack of technical skills and resources prevented them from disrupting the conference in the virtual realm. Reformist activists, on the other hand, considered COP17 as a legitimate adversary, and attempted to influence its politics mainly through the diffusion of alternative information online.

The article by Ariadne Vromen and William Coleman (2013), “Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia,” also investigates a climate change campaign but shifts the focus to the new ‘hybrid’ collective actors, who use the Internet extensively for campaigning. Based on a case study of GetUp!, Vromen and Coleman examine the storytelling strategies employed by the organisation in two separate campaigns, one around climate change, the other around mental health. The authors investigate the factors that led one campaign to be successful and the other to have limited resonance. They also skilfully highlight the difficulties encountered by new collective actors in gaining legitimacy and influencing policy making. In this respect, GetUp! used storytelling to set itself apart from traditional party-based politics and to emphasise its identity as an organiser and representative of grassroots communities, rather than as an insider lobbyist or disruptive protestor.

Romain Badouard and Laurence Monnoyer-Smith (2013), in their article “Hyperlinks as Political Resources: The European Commission Confronted with Online Activism,” explore some of the more structured ways in which citizens use online tools to engage with policy makers. They investigate the political opportunities offered by the e-participation and e-government platforms of the European Commission for activists wishing to make their voice heard in the European policy making sphere. They focus particularly on strategic uses of web technical resources and hyperlinks, which allows citizens to refine their proposals and thus increase their influence on European policy.

Finally, Jo Bates’ (2013) article “The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis” provides a pertinent framework that facilitates our understanding of the policy challenges posed by the issue of open data. The digitisation of data offers new opportunities for increasing transparency, traditionally considered a fundamental public good. By focusing on the Open Government Data initiative in the UK, Bates explores the policy challenges generated by increasing transparency via new Internet platforms, applying the established theoretical instruments of Gramscian ‘trasformismo.’ This article frames the open data debate in terms consistent with the literature on collective action, and provides empirical evidence as to how citizens have taken an active role in the debate on this issue, thereby challenging the policy debate on public transparency.

Taken together, these articles advance our understanding of the interface between online collective action and policy making. They introduce innovative theoretical frameworks and provide empirical evidence around the new forms of collective action, tactics, and contentious politics linked with the emergence of the Internet. If, as Melucci (1996) argues, contemporary social movements are sensors of new challenges within current societies, they can be an enriching resource for the policy debate arena. Gaining a better understanding of how the Internet might strengthen this process is a valuable line of enquiry.

Read the full article at: Calderaro, A., and Kavada, A. (2013) “Challenges and Opportunities of Online Collective Action for Policy Change,” Policy and Internet 5(1).

Twitter: @AnastasiaKavada / @andreacalderaro


References

Badouard, R., and Monnoyer-Smith, L. 2013. “Hyperlinks as Political Resources: The European Commission Confronted with Online Activism.” Policy and Internet 5(1).

Bates, J. 2013. “The Domestication of Open Government Data Advocacy in the UK: A Neo-Gramscian Analysis.” Policy and Internet 5(1).

Breindl, Y., and Briatte, F. 2013. “Digital Protest Skills and Online Activism Against Copyright Reform in France and the European Union.” Policy and Internet 5(1).

Cammaerts, B. 2012. “Protest Logics and the Mediation Opportunity Structure.” European Journal of Communication 27(2): 117–134.

Kriesi, H. 1995. “The Political Opportunity Structure of New Social Movements: Its Impact on Their Mobilization.” In The Politics of Social Protest, eds. J. C. Jenkins and B. Klandermans. London: UCL Press, pp. 167–198.

Melucci, A. 1996. Challenging Codes: Collective Action in the Information Age. Cambridge: Cambridge University Press.

Milan, S., and Hintz, A. 2013. “Networked Collective Action and the Institutionalized Policy Debate: Bringing Cyberactivism to the Policy Arena?” Policy and Internet 5(1).

Uldam, J. 2013. “Activism and the Online Mediation Opportunity Structure: Attempts to Impact Global Climate Change Policies?” Policy and Internet 5(1).

Vromen, A., and Coleman, W. 2013. “Online Campaigning Organizations and Storytelling Strategies: GetUp! in Australia.” Policy and Internet 5(1).

Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project

The 6.2 magnitude earthquake that struck the centre of Christchurch on 22 February 2011 claimed 185 lives, damaged 80% of the central city beyond repair, and forced the abandonment of 6000 homes. It was the third costliest insurance event in history. The CEISMIC archive developed at the University of Canterbury will soon have collected almost 100,000 digital objects documenting the experiences of the people and communities affected by the earthquake, all of it available for study.

The Internet can be hugely useful to coordinate disaster relief efforts, or to help rebuild affected communities. Paul Millar came to the OII on 21 May 2012 to discuss the CEISMIC archive project and the role of the digital humanities after a major disaster. We talked to him afterwards.

Ed: You have collected a huge amount of information about the earthquake and people’s experiences that would otherwise have been lost: how do you think it will be used?

Paul: From the beginning I was determined to avoid being prescriptive about eventual uses. The secret of our success has been to stick to the principles of open data, open access and collaboration—the more content we can collect, the better chance future generations have to understand and draw conclusions from our experiences, behaviour and decisions. We have already assisted a number of research projects in public health and the social and physical sciences—even accounting. One of my colleagues reads balance sheets the way I read novels, and discovers all sorts of earthquake-related signs of cause and effect in them. I’d never have envisaged such a use for the archive. We have made our ontology as detailed and flexible as possible in order to help with the re-purposing of primary material: we currently use three layers of metadata—machine-generated, human-curated and crowdsourced. We also intend to work more seriously on our GIS capabilities.
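[Editor's note: the three-layer metadata model Paul describes can be sketched as a record that keeps machine-generated, human-curated, and crowdsourced fields in separate layers, so each value's provenance survives re-purposing. The class and field names below are hypothetical, not CEISMIC's actual schema:]

```python
# A minimal sketch (hypothetical names, not CEISMIC's schema) of three-layer
# metadata: layers are stored separately and only flattened for display,
# with curated values overriding machine-generated ones, which in turn
# override unverified crowdsourced suggestions.

from dataclasses import dataclass, field


@dataclass
class ItemMetadata:
    machine: dict = field(default_factory=dict)   # e.g. EXIF timestamp, GPS
    curated: dict = field(default_factory=dict)   # archivist corrections
    crowd: dict = field(default_factory=dict)     # public annotations

    def merged(self) -> dict:
        """Flatten the layers: crowd < machine < curated precedence."""
        out = dict(self.crowd)
        out.update(self.machine)
        out.update(self.curated)
        return out


photo = ItemMetadata(
    machine={"taken": "2011-02-22T12:51", "gps": (-43.532, 172.636)},
    crowd={"subject": "ChristChurch Cathedral"},
    curated={"taken": "2011-02-22T12:51:42+13:00"},
)
print(photo.merged())  # curated timestamp wins; crowd subject survives
```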

Ed: How do you go about preserving this information during a period of tremendous stress and chaos? Was it difficult to convince people of the importance of this longer-term view?

Paul: There was no difficulty convincing people of the importance of what we were doing: everyone got it immediately. However, the scope of this disaster is difficult to comprehend, even for those of us who live with it every day. We’ve lost a lot of material already, and we’re losing more every day. Our major telecommunications provider recently switched off its CDMA network—all those redundant phones are gone, and with them any earthquake pictures or texts that might have been stored. One of the things I’d encourage every community to do now is make an effort to preserve key information against a day of disaster. If we’d digitised all our architectural plans of heritage buildings and linked them electronically to building reports and engineering assessments, we might have saved more.

Ed: It seems obvious in hindsight that the Internet can (and should) be tremendously useful in the event of this sort of disaster: how do we ensure that best use is made?

Paul: The first thing is to be prepared, even in a low-key way, for whatever might happen. Good decision-making during a disaster requires accurate, accessible, and comprehensive data: digitisation and data linking are key activities in the creation of such a resource—and robust processes to ensure that information is of high quality are vital. One of the reasons CEISMIC works is because it is a federated archive—an ideal model for this sort of event—and we were able to roll it out extremely quickly. We could also harness online expert communities, crowdsourcing efforts, open sourcing of planning processes, and robust vetting of information and auditing of outcomes. A lot of this needs to be done before a disaster strikes, though. For years I’ve encountered the mantra ‘we support research but we don’t fund databases’. We had to build CEISMIC because there was no equivalent, off-the-shelf product—but that development process lost us a year at least.

Ed: What equivalent efforts are there to preserve information about major disasters?

Paul: The obvious ones are the world-leading projects out of the Center for History and New Media at George Mason University, including their 9/11 Digital Archive. One problem for any archive of this nature is that information doesn’t exist in a free and unmediated space. For example, the only full record of the pre-quake Christchurch cityscape is historic Google Street View; one of the most immediate sources of quake information was Twitter; many people communicated with the world via Facebook, and so on. It’s a question we’re all engaging with: who owns that information? How will it be preserved and accessed? We’ve had a lot of interest in what we are doing, and plenty of consultation and discussion with groups who see our model as being of some relevance to them. The UC CEISMIC project is essentially a proof of concept—versions of it could be rolled out around the world and left to tick over in the background, quietly accumulating material in the event that it is needed one day. That’s a small cost alongside losing a community’s heritage.

Ed: What difficulties have you encountered in setting up the archive?

Paul: Where do I start? There were the personal difficulties—my home damaged, my family traumatised, the university damaged, staff and students all struggling in different ways to cope: it’s not the ideal environment in which to introduce a major IT project. But I felt I had to do something, partly as a therapeutic response. I saw my engineering and geosciences colleagues at the front of the disaster, explaining what was happening, helping to provide context and even reassurance. For quite a while I wondered what on earth a professor of literature could do. It was James Smithies—now CEISMIC’s Project Manager—who reminded me of the 9/11 Archive. The difficulties we’ve encountered since have been those that beset most under-resourced projects—trying to build a million-dollar project on a much smaller budget. A lot of the future development will be funding dependent, so much of my job will be getting the word out and looking for sponsors, supporters and partners. But although we’re understaffed, over-worked and living in a shaky city, the resilience, courage, humanity and good will of so many people never ceases to amaze and hearten me.

Ed: Your own research area is English Literature: has that had any influence on the sorts of content that have been collected, or your own personal responses to it?

Paul: My interest in digital archiving started when teaching New Zealand Literature at Victoria University of Wellington. In a country this small most books have a single print run of a few hundred; and even our best writers are lucky to have a text make it to a second edition. I therefore encountered the problem that many of the texts I wanted to prescribe were out of print: digitisation seemed like a good solution. In New Zealand the digital age has negated distance—the biggest factor preventing us from immediate and meaningful engagement with the rest of the world. CEISMIC actually started life as an acronym (the Canterbury Earthquakes Images, Stories and Media Integrated Collection), and the fact that ‘stories’ sits centrally certainly represents my own interest in the way we use narratives to make sense of experience. Everyone who went through the earthquakes has a story, and every story is different. I’m fascinated by the way a collective catastrophe becomes so much more meaningful when it is broken down into individual narratives. Ironically, despite the importance of this project to me, I find the earthquakes extremely difficult to write about in any personal or creative way. I haven’t written my own earthquake story yet.

Paul Millar was talking to blog editor David Sutcliffe.