Government

The World Economic Forum engages business, political, academic and other leaders of society to shape global, regional and industry agendas.

The World Economic Forum engages business, political, academic and other leaders of society to shape global, regional and industry agendas. Image by World Economic Forum.

Last week, I was at the World Economic Forum in Davos, the first time that the Oxford Internet Institute has been represented there. Being closeted in a Swiss ski resort with 2,500 of the great, the good and the super-rich provided me with a good chance to see what the global elite are thinking about technological change and its role in ‘The Reshaping of the World: Consequences for Society, Politics and Business’, the stated focus of the WEF Annual Meeting in 2014. What follows are those impressions that relate to public policy and the internet, and reflect only my own experience there. Outside the official programme there are whole hierarchies of breakfasts, lunches, dinners and other events, most of which a newcomer to Davos finds difficult to discover and some of which require one to be at least a president of a small to medium-sized state—or Matt Damon. There was much talk of hyperconnectivity, spirals of innovation, S-curves and exponential growth of technological diffusion, digitalisation and disruption. As you might expect, the pace of these developments was emphasised most by participants from the technology industry. The future of work in the face of leaps forward in robotics was a key theme, drawing on the new book by Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies, which is just out in the US. There were several sessions on digital health and the eventual fruition of decades of pilots in telehealth (a banned term now, apparently), as applications based on mobile technologies start to be used more widely. Indeed, all delegates were presented with a ‘Jawbone’ bracelet which tracks the wearer’s exercise and sleep patterns (7,801 steps so far today). And of course there was much talk about the possibilities afforded by big data, if not quite as much as I expected. The University of Oxford was represented in an…

How can social scientists help policy-makers in this changed environment, ensuring that social science research remains relevant?

As I discussed in a previous post on the promises and threats of big data for public policy-making, the making of public policy has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means citizens and governments leave digital traces that can be harvested to generate big data. This increasingly rich data environment poses both promises and threats to policy-makers. So how can social scientists help policy-makers in this changed environment, ensuring that social science research remains relevant? Social scientists have a good record of policy influence; indeed, as recent research from the LSE Public Policy Group has shown, in the UK that record is better than that of other academic fields, including medicine. Big data hold major promise for social science, and should enable us to extend this record in policy research further. We have access to a cornucopia of data of a kind more like that traditionally associated with so-called ‘hard’ science. Rather than being dependent on surveys, the traditional data staple of empirical social science, social media such as Wikipedia, Twitter, Facebook, and Google Search present us with the opportunity to scrape, generate, analyse and archive comparative data of unprecedented quantity. For example, at the OII over the last four years we have been generating a dataset of all petition signing in the UK and US, which contains the joining rate (updated every hour) for the 30,000 petitions created in the last three years. As a political scientist, I am very excited by this kind of data (up to now, we have had big data like this only for voting, and that only at election time), which will allow us to create a complete ecology of petition signing, one of the more popular acts of political participation in the UK. Likewise, we can look at the entire transaction history of online organisations like Wikipedia, or map the link structure of government’s online presence. But…
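As a concrete illustration of how such a petition dataset might be assembled, the short Python sketch below polls a petitions feed once an hour and appends timestamped signature counts, which can later be differenced to obtain hourly joining rates. The endpoint URL and JSON field names are hypothetical placeholders, not the OII's actual pipeline or any particular government API.

```python
"""Minimal sketch of hourly petition-count collection.

Purely illustrative: the endpoint URL, the JSON field names
('id', 'signature_count') and the output file are hypothetical,
standing in for whatever petitions feed is actually used.
"""
import csv
import json
import time
from datetime import datetime, timezone
from urllib.request import urlopen

PETITIONS_URL = "https://example.org/petitions.json"  # hypothetical endpoint
OUTPUT_FILE = "petition_counts.csv"
POLL_INTERVAL_SECONDS = 3600  # once an hour, matching the hourly joining rate


def fetch_counts(url: str) -> list[dict]:
    """Download the current list of petitions with their signature counts."""
    with urlopen(url) as response:
        return json.load(response)


def append_snapshot(petitions: list[dict], path: str) -> None:
    """Append one timestamped row per petition; successive snapshots can
    later be differenced to compute the joining rate per hour."""
    now = datetime.now(timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for p in petitions:
            writer.writerow([now, p["id"], p["signature_count"]])


if __name__ == "__main__":
    while True:
        append_snapshot(fetch_counts(PETITIONS_URL), OUTPUT_FILE)
        time.sleep(POLL_INTERVAL_SECONDS)
```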

Widespread use of digital technologies, the Internet and social media means both citizens and governments leave digital traces that can be harvested to generate big data.

The environment in which public policy is made has entered a period of dramatic change. Widespread use of digital technologies, the Internet and social media means both citizens and governments leave digital traces that can be harvested to generate big data. Policy-making takes place in an increasingly rich data environment, which poses both promises and threats to policy-makers. On the promise side, such data offers a chance for policy-making and implementation to be more citizen-focused, taking account of citizens’ needs, preferences and actual experience of public services, as recorded on social media platforms. As citizens express policy opinions on social networking sites such as Twitter and Facebook; rate or rank services or agencies on government applications such as NHS Choices; or enter discussions on the burgeoning range of social enterprise and NGO sites, such as Mumsnet, 38 Degrees and patientopinion.org, they generate a whole range of data that government agencies might harvest and put to good use. Policy-makers also have access to a huge range of data on citizens’ actual behaviour, as recorded digitally whenever citizens interact with government administration or undertake some act of civic engagement, such as signing a petition. Data mined from social media or administrative operations in this way can also enable government agencies to monitor, and improve, their own performance, for example through usage logs of their own electronic presence or transactions recorded on internal information systems, which are increasingly interlinked. And they can use data from social media for self-improvement, by understanding what people are saying about government, and which policies, services or providers are attracting negative opinions and complaints, enabling identification of a failing school, hospital or contractor, for example. They can solicit such data via their own sites, or those of social enterprises. And they can find out what people are concerned about or looking for, from the Google Search API or Google Trends, which record the search…
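To make the self-monitoring idea concrete, here is a minimal Python sketch that flags providers attracting an unusually high share of negative comments. The feedback records, field names, keyword lexicon and threshold are all invented for the example; a real system would work from harvested social media or consultation data and a proper sentiment classifier.

```python
"""Illustrative sketch: flag providers attracting unusually negative feedback.

The records and the crude keyword-based sentiment rule are invented; they
stand in for harvested social media data and a real sentiment classifier.
"""
from collections import defaultdict

NEGATIVE_WORDS = {"dirty", "rude", "delayed", "unsafe", "failing"}  # toy lexicon
FLAG_THRESHOLD = 0.5  # flag providers where more than half of comments are negative

# Hypothetical harvested feedback: (provider, free-text comment)
feedback = [
    ("St Example Hospital", "Ward was dirty and staff seemed rushed"),
    ("St Example Hospital", "Appointment delayed twice"),
    ("Example Primary School", "Friendly teachers, great communication"),
]


def is_negative(comment: str) -> bool:
    """Very crude proxy for sentiment: does the comment use a negative keyword?"""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & NEGATIVE_WORDS)


counts = defaultdict(lambda: [0, 0])  # provider -> [negative comments, total comments]
for provider, comment in feedback:
    counts[provider][1] += 1
    if is_negative(comment):
        counts[provider][0] += 1

for provider, (neg, total) in counts.items():
    share = neg / total
    if share > FLAG_THRESHOLD:
        print(f"FLAG {provider}: {neg}/{total} comments negative ({share:.0%})")
```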

There has been a major shift in the policies of governments concerning participatory governance—that is, engaged, collaborative, and community-focused public policy.

Policy-makers today must contend with two inescapable phenomena. On the one hand, there has been a major shift in the policies of governments concerning participatory governance—that is, engaged, collaborative, and community-focused public policy. At the same time, a significant proportion of government activities have now moved online, bringing about “a change to the whole information environment within which government operates” (Margetts 2009, 6). Indeed, the Internet has become the main medium of interaction between government and citizens, and numerous websites offer opportunities for online democratic participation. The Hansard Society, for instance, regularly runs e-consultations on behalf of UK parliamentary select committees. For example, e-consultations have been run on the Climate Change Bill (2007), the Human Tissue and Embryo Bill (2007), and on domestic violence and forced marriage (2008). Councils and boroughs also regularly invite citizens to take part in online consultations on issues affecting their area. The London Borough of Hammersmith and Fulham, for example, recently asked its residents for their views on Sex Entertainment Venues and Sex Establishment Licensing policy. However, citizen participation poses certain challenges for the design and analysis of public policy. In particular, governments and organisations must demonstrate that all opinions expressed through participatory exercises have been duly considered and carefully weighed before decisions are reached. One method for partly automating the interpretation of large quantities of online content typically produced by public consultations is text mining. Software products currently available range from those primarily used in qualitative research (integrating functions like tagging, indexing, and classification), to those integrating more quantitative and statistical tools, such as word frequency and cluster analysis (more information on text mining tools can be found at the National Centre for Text Mining). While these methods have certainly attracted criticism and scepticism in terms of the interpretability of the output, they offer four important advantages for the analyst: namely categorisation, data reduction, visualisation, and speed.

1. Categorisation. When analysing the results…
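To give a flavour of what such text mining involves, the sketch below uses scikit-learn (one possible toolkit among many, not necessarily those catalogued by the National Centre for Text Mining) on a handful of invented consultation responses: it computes word frequencies and clusters the responses with k-means, illustrating the categorisation and data-reduction advantages mentioned above.

```python
"""Minimal text-mining sketch for consultation responses.

The responses are invented and the tiny corpus is only meant to show word
frequency counts and k-means clustering, not a production analysis pipeline.
"""
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

responses = [
    "Licensing hours should be restricted near schools",
    "Restrict licensing near schools and residential areas",
    "The consultation ignores the impact on local businesses",
    "Local businesses need clearer guidance on the new policy",
]

# Data reduction: turn free text into a document-term matrix and count terms
counts = CountVectorizer(stop_words="english").fit(responses)
term_totals = counts.transform(responses).sum(axis=0)
freq = sorted(
    zip(counts.get_feature_names_out(), term_totals.tolist()[0]),
    key=lambda pair: -pair[1],
)
print("Most frequent terms:", freq[:5])

# Categorisation: cluster responses into broad themes with k-means on TF-IDF vectors
tfidf = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
for label, text in zip(labels, responses):
    print(f"cluster {label}: {text}")
```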

Bringing together leading social science academics with senior government agency staff to discuss its public policy potential.

Last week the OII went to Harvard. Against the backdrop of a gathering storm of interest around the potential of computational social science to contribute to the public good, we sought to bring together leading social science academics with senior government agency staff to discuss its public policy potential. Supported by the OII-edited journal Policy and Internet and its owners, the Washington-based Policy Studies Organization (PSO), this one-day workshop facilitated a thought-provoking conversation between leading big data researchers such as David Lazer, Brooke Foucault-Welles and Sandra Gonzalez-Bailon, e-government experts such as Cary Coglianese, Helen Margetts and Jane Fountain, and senior staff from US federal agencies including the Bureau of Labor Statistics, the Census Bureau, and the Office of Management and Budget. It’s often difficult to appreciate the impact of research beyond the ivory tower, but what this productive workshop demonstrated is that policy-makers and academics share many similar hopes and challenges in relation to the exploitation of ‘big data’. Our motivations and approaches may differ, but insofar as the youth of the ‘big data’ concept explains the lack of common language and understanding, there is value in mutual exploration of the issues. Although it’s impossible to do justice to the richness of the day’s interactions, some of the most pertinent and interesting conversations arose around the following four issues.

Managing a diversity of data sources. In a world where our capacity to ask important questions often exceeds the availability of data to answer them, many participants spoke of the difficulties of managing a diversity of data sources. For agency staff this issue comes into sharp focus when available administrative data that is supposed to inform policy formulation is either incomplete or inadequate. Consider, for example, the challenge of regulating an economy in a situation of fundamental data asymmetry, where private sector institutions track, record and analyse every transaction, whilst the state only has access to far more basic performance metrics and accounts.…

China has made concerted efforts to reduce corruption at the lowest levels of government, as a result of dissatisfaction from both the business communities and the general public.

China has made concerted efforts to reduce corruption at the lowest levels of government. Image of the 18th National Congress of the CPC in the Great Hall of the People, Beijing, by Bert van Dijk.

Ed: Investment by the Chinese government in internal monitoring systems has been substantial: what components make it up?

Jesper: Two different information systems are currently in use. Within the government there is one system directed towards administrative case-processing. In addition to this, the Communist Party has its own monitoring system, which is less sophisticated in terms of real-time surveillance, but which has a deeper structure, as it collects and cross-references personal information about party members working in the administration. These two systems parallel the existing institutional arrangements found in the dual structure consisting of the Discipline Inspection Commissions and the Bureaus of Supervision at different levels of government. As such, the e-monitoring system has particular ‘Chinese characteristics’, reflecting the bureaucracy’s Leninist heritage, where Party affairs and government affairs are handled separately, applying different sets of rules. On the government’s e-monitoring platform the Bureau of Supervision (the closest we get to an Ombudsman function in the Chinese public administration) can collect data from several other data systems, such as the e-government systems of the individual bureaus involved in case processing; feeds from surveillance cameras in different government organisations; and even geographical data from satellites. The e-monitoring platform does not, however, afford scanning of information outside the government systems. For instance, social media are not part of the administration’s surveillance infrastructure.

Ed: How centralised is it as a system? Is local or province-level monitoring of public officials linked up to the central government?

Jesper: The architecture of the e-monitoring systems integrates the information flows to the provincial level, but not to the central level. One reason for this may be found by following the money. Funding for these systems mainly comes from local sources, and construction was initially based on municipal-level systems supported by the provincial level. Hence, at the early stages the path towards individual local-level systems was the natural choice. A reason why the build-up was not initially envisioned to…