Do Finland’s digitally crowdsourced laws show a way to resolve democracy’s “legitimacy crisis”?

There is much discussion about a perceived “legitimacy crisis” in democracy. In his article The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation, Taneli Heikka (University of Jyväskylä) discusses the digitally crowdsourced law for same-sex marriage that was passed in Finland in 2014, analysing how the campaign used new digital tools and created practices that affect democratic citizenship and power-making.

Ed: There is much discussion about a perceived “legitimacy crisis” in democracy. For example, less than half of the Finnish electorate under 40 choose to vote. In your article you argue that Finland’s 2012 Citizens’ Initiative Act aimed to address this problem by allowing for the crowdsourcing of ideas for new legislation. How common is this idea? (And indeed, how successful?)

Taneli: The idea that digital participation could counter the “legitimacy crisis” is a fairly common one. Digital utopians have nurtured that idea from the early years of the internet, and have often been disappointed. A couple of things stand out in the Finnish experiment that make it worth a closer look.

First, the digital crowdsourcing system with strong digital identification is a reliable and potentially viral campaigning tool. Most civic initiative systems I have encountered rely on manual or otherwise cumbersome, and less reliable, signature collection methods.

Second, in the Finnish model, initiatives that break the threshold of 50,000 names must be treated in the Parliament equally to an initiative from a group of MPs. This gives the initiative constitutional and political weight.

Ed: The Act led to the passage of Finland’s first equal marriage law in 2014. In this case, online platforms were created for collecting signatures as well as drafting legislation. An NGO created a well-used platform, but it subsequently had to shut it down because it couldn’t afford the electronic signature system. Crowds are great, but not a silver bullet if something as prosaic as authentication is impossible. Where should the balance lie between NGOs and centrally funded services, i.e. government?

Taneli: The crucial thing in the success of a civic initiative system is whether it gives the people real power. This question is decided by the legal framework and constitutional basis of the initiative system. So, governments have a very important role in this early stage – designing a law for truly effective citizen initiatives.

When a framework for power-making is in place, service providers will emerge. Should the providers be public, private or third sector entities? I think that is defined by local political culture and history.

In the United States, the civic technology field is heavily funded by philanthropic foundations. There is an urge to make these tools commercially viable, though no one seems to have figured out the business model. In Europe there’s less philanthropic money, and in my experience experiments are more often government funded.

Both models have their pros and cons, but I’d like to see the two continents learning more from each other. American digital civic activists tell me enviously that the radically empowering Finnish model with a government-run service for crowdsourcing for law would be impossible in the US. In Europe, civic technologists say they wish they had the big foundations that Americans have.

Ed: But realistically, how useful is the input of non-lawyers in (technical) legislation drafting? And is there a critical threshold of people necessary to draft legislation?

Taneli: I believe that input is valuable from anyone who cares to invest some time in learning an issue. That said, having lawyers in the campaign team really helps. Writing legislation is a special skill. It’s a pity that the co-creation features in Finland’s Open Ministry website were shut down due to a lack of funding. In that model, help from lawyers could have been made more accessible for all campaign teams.

In terms of numbers, I don’t think the size of the group is an issue either way. A small group of skilled and committed people can do a lot in the drafting phase.

Ed: But can the drafting process become rather burdensome for contributors, given professional legislators will likely heavily rework, or even scrap, the text?

Taneli: Professional legislators will most likely rework the draft, and that is exactly what they are supposed to do. Initiating an idea, working on a draft, and collecting support for it are just phases in a complex process that continues in the parliament after the threshold of 50,000 signatures is reached. A well-written draft will make the legislators’ job easier, but it won’t replace them.

Ed: Do you think there’s a danger that crowdsourcing legislation might just end up reflecting the societal concerns of the web-savvy – or of campaigning and lobbying groups?

Taneli: That’s certainly a risk, but so far there is little evidence of it happening. The only initiative passed so far in Finland – the Equal Marriage Act – was supported by the majority of Finns and by the majority of political parties, too. The initiative system was used to bypass a political gridlock. The handful of initiatives that have reached the 50,000 signatures threshold and entered parliamentary proceedings represent a healthy variety of issues in the fields of education, crime and punishment, and health care. Most initiatives seem to echo the viewpoint of the ‘ordinary people’ instead of lobbies or traditional political and business interest groups.

Ed: You state in your article that the real-time nature of digital crowdsourcing appeals to a generation that likes and dislikes quickly; a generation that inhabits “the space of flows”. Is this a potential source of instability or chaos? And how can this rapid turnover of attention be harnessed efficiently so as to usefully contribute to a stable and democratic society?

Taneli: The Citizens’ Initiative Act in Finland is one fairly successful model to look at in terms of balancing stability and disruptive change. It is a radical law in its potential to empower the individual and affect real power-making. But it is by no means a shortcut to ‘legislation by a digital mob’, or anything of that sort. While the digital campaigning phase can be an explosive expression of the power of the people in the ‘time and space of flows’, the elected representatives retain the final say. Passing a law is still a tedious process, and often for good reasons.

Ed: You also write about the emergence of the “mediating citizen” – what do you mean by this?

Taneli: The starting point for developing the idea of the mediating citizen is Lance Bennett’s AC/DC theory, i.e. the dichotomy of the actualising and the dutiful citizen. The dutiful citizen is the traditional form of democratic citizenship – it values voting, following the mass media, and political parties. The actualising citizen, on the other hand, finds voting and parties less appealing, and prefers more flexible and individualised forms of political action, such as ad hoc campaigns and the use of interactive technology.

I find these models accurate, but I was not able to place within this duality the emerging typologies of civic action that I observed in the Finnish case. What we see is understanding and respect for parliamentary institutions and their power, but also strong faith in one’s skills and capability to improve the system in creative, technologically savvy ways. I used the concept of the mediating citizen to describe an actor who is able to move between the previous typologies, mediating between them. In the Finnish example, creative tools were developed to feed initiatives into the traditional power-making system of the parliament.

Ed: Do you think Finland’s Citizens’ Initiative Act is a model for other governments to follow when addressing concerns about “democratic legitimacy”?

Taneli: It is an interesting model to look at. But unfortunately the ‘legitimacy crisis’ is probably too complex a problem to be solved by a single participation tool. What I’d really like to see is a wave of experimentation, both online and offline, as well as cross-border learning from each other. And is that not what happened when the representative model spread, too?

Read the full article: Heikka, T., (2015) The Rise of the Mediating Citizen: Time, Space, and Citizenship in the Crowdsourcing of Finnish Legislation. Policy and Internet 7 (3) 268–291.

Taneli Heikka is a journalist, author, entrepreneur, and PhD student based in Washington.

Taneli Heikka was talking to Blog Editor Pamina Smith.

Mapping collective public opinion in the Russian blogosphere

Widely reported as fraudulent, the 2011 Russian Parliamentary elections provoked mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia. Image by Nikolai Vassiliev.

Blogs are becoming increasingly important for agenda setting and formation of collective public opinion on a wide range of issues. In countries like Russia where the Internet is not technically filtered, but where the traditional media is tightly controlled by the state, they may be particularly important. The Russian language blogosphere comprises about 85 million blogs – a number far beyond the capacity of any government to control – and the Russian search engine Yandex, with its blog rating service, serves as an important reference point for Russia’s educated public in its search for authoritative and independent sources of information. The blogosphere is thereby able to function as a mass medium of “public opinion” and also to exercise influence.

One topic that was particularly salient over the period we studied concerned the Russian Parliamentary elections of December 2011. Widely reported as fraudulent, they provoked immediate and mass street protest action by tens of thousands of people in Moscow and cities and towns across Russia, as well as corresponding activity in the blogosphere. Protesters made effective use of the Internet to organize a movement that demanded cancellation of the parliamentary election results, and the holding of new and fair elections. These protests continued until the following summer, gaining widespread national and international attention.

Most of the political and social discussion blogged in Russia is hosted on the blog platform LiveJournal. Some of these bloggers can claim a certain amount of influence; the top thirty bloggers have over 20,000 “friends” each – a reach comparable to the circulation of an average Russian newspaper. Part of the blogosphere may thereby resemble the traditional media; the deeper into the long tail of average bloggers, however, the more it functions as pure public opinion. This “top list” effect may be particularly important in societies (like Russia’s) where popularity lists exert a visible influence on bloggers’ competitive behavior and on public perceptions of their significance. Given the influence of these top bloggers, it may be claimed that, like the traditional media, they act as filters of issues to be thought about, and as definers of their relative importance and salience.

Gauging public opinion is of obvious interest to governments and politicians, and opinion polls are widely used to do this, but they have been consistently criticized for the imposition of agendas on respondents by pollsters, producing artefacts. Indeed, the public opinion literature has tended to regard opinion as something to be “extracted” by pollsters, which inevitably pre-structures the output. This literature doesn’t consider that public opinion might also exist in the form of natural language texts, such as blog posts, that have not been pre-structured by external observers.

There are two basic ways to detect topics in natural language texts: the first is manual coding of texts (i.e. traditional content analysis), and the other involves rapidly developing techniques of automatic topic modeling or text clustering. The media studies literature has relied heavily on traditional content analysis; however, these studies are inevitably limited by the volume of data a person can physically process, given there may be hundreds of issues and opinions to track — LiveJournal’s 2.8 million blog accounts, for example, generate 90,000 posts daily.

For large text collections, therefore, only the second approach is feasible. In our article we explored how methods for topic modeling developed in computer science may be applied to social science questions – such as how to efficiently track public opinion on particular (and evolving) issues across entire populations. Specifically, we demonstrate how automated topic modeling can identify public agendas, their composition, structure, the relative salience of different topics, and their evolution over time without prior knowledge of the issues being discussed and written about. This automated “discovery” of issues in texts involves division of texts into topically — or more precisely, lexically — similar groups that can later be interpreted and labeled by researchers. Although this approach has limitations in tackling subtle meanings and links, experiments where automated results have been checked against human coding show over 90 percent accuracy.
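The mechanics of this “division of texts into lexically similar groups” can be sketched in a few lines. The toy example below greedily clusters short texts by the Jaccard overlap of their word sets; it is an illustrative stand-in for the probabilistic topic models (such as LDA) actually used at blog scale, and the posts and similarity threshold are invented for the demonstration.

```python
# Toy lexical clustering: group texts whose word sets overlap strongly.
# A stand-in sketch for the topic-modeling methods used in the study,
# not the study's actual algorithm.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two word sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_texts(texts, threshold=0.3):
    """Greedily assign each text to the first cluster whose accumulated
    vocabulary is similar enough; otherwise start a new cluster."""
    clusters = []  # each cluster: {"words": set, "members": [indices]}
    for i, text in enumerate(texts):
        words = set(text.lower().split())
        for c in clusters:
            if jaccard(words, c["words"]) >= threshold:
                c["words"] |= words
                c["members"].append(i)
                break
        else:
            clusters.append({"words": set(words), "members": [i]})
    return clusters

posts = [
    "election fraud protest moscow election",
    "protest moscow election results fraud",
    "new school reform education funding",
    "education funding school teachers",
]
groups = [c["members"] for c in cluster_texts(posts)]
print(groups)  # [[0, 1], [2, 3]] -- two lexically distinct groups emerge
```

The groups themselves carry no labels; as in the study, a researcher would inspect each cluster’s vocabulary afterwards to interpret and name the topic.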

The computer science literature is flooded with methodological papers on automatic analysis of big textual data. While these methods can’t entirely replace manual work with texts, they can help reduce it to the most meaningful and representative areas of the textual space they help to map, and are the only means to monitor agendas and attitudes across multiple sources, over long periods and at scale. They can also help solve problems of insufficient and biased sampling, when entire populations become available for analysis. Because these approaches are recent, and mathematically and computationally complex, they are rarely applied by social scientists; to our knowledge, topic modeling has not previously been applied to the extraction of agendas from blogs in any social science research.

The natural extension of automated topic or issue extraction involves sentiment mining and analysis; as González-Bailón, Kaltenbrunner, and Banchs (2012) have pointed out, public opinion doesn’t just involve specific issues, but also encompasses the state of public emotion about these issues, including attitudes and preferences. This involves extracting opinions on the issues/agendas that are thought to be present in the texts, usually by dividing sentences into positive and negative. These techniques are based on human-coded dictionaries of emotive words, on algorithmic construction of sentiment dictionaries, or on machine learning techniques.
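The first of these families – a human-coded dictionary of emotive words – is simple enough to sketch directly. The word lists below are invented for illustration; real sentiment lexicons run to thousands of entries and typically weight words rather than just counting them.

```python
# Toy dictionary-based sentiment scorer. The lexicon here is illustrative
# only; production systems use large hand-coded or learned dictionaries.

POSITIVE = {"fair", "hope", "support", "good", "honest"}
NEGATIVE = {"fraud", "protest", "angry", "bad", "corrupt"}

def sentiment(text: str) -> int:
    """Return +1 (net positive), -1 (net negative) or 0 (neutral/mixed)
    based on counts of lexicon hits in the text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

print(sentiment("we hope for fair and honest elections"))    # 1
print(sentiment("angry crowds protest the election fraud"))  # -1
```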

Both topic modeling and sentiment analysis techniques are required to effectively monitor self-generated public opinion. When methods for tracking attitudes complement methods to build topic structures, a rich and powerful map of self-generated public opinion can be drawn. Of course this mapping can’t completely replace opinion polls; rather, it’s a new way of learning what people are thinking and talking about; a method that makes the vast amounts of user-generated content about society – such as the 65 million blogs that make up the Russian blogosphere — available for social and policy analysis.

Naturally, this approach to public opinion and attitudes is not free of limitations. First, the dataset is only representative of the self-selected population of those who have authored the texts, not of the whole population. Second, like regular polled public opinion, online public opinion only covers those attitudes that bloggers are willing to share in public. Furthermore, there is still a long way to go before the relevant instruments become mature, and this will demand the efforts of the whole research community: computer scientists and social scientists alike.

Read the full paper: Olessia Koltsova and Sergei Koltcov (2013) Mapping the public agenda with topic modeling: The case of the Russian livejournal. Policy and Internet 5 (2) 207–227.

Also read on this blog: Can text mining help handle the data deluge in public policy analysis? by Aude Bicquelet.


González-Bailón, S., A. Kaltenbrunner, and R.E. Banchs. 2012. “Emotions, Public Opinion and U.S. Presidential Approval Rates: A 5 Year Analysis of Online Political Discussions,” Human Communication Research 38 (2): 121–43.

Verification of crowd-sourced information: is this ‘crowd wisdom’ or machine wisdom?

‘Code’ or ‘law’? Image from an Ushahidi development meetup by afropicmusing.

In ‘Code and Other Laws of Cyberspace’, Lawrence Lessig (2006) writes that computer code (or what he calls ‘West Coast code’) can have the same regulatory effect as the laws and legal code developed in Washington D.C., so-called ‘East Coast code’. Computer code impacts on a person’s behaviour by virtue of its essentially restrictive architecture: on some websites you must enter a password before you gain access, in other places you can enter unidentified. The problem with computer code, Lessig argues, is that it is invisible, and that it makes it easy to regulate people’s behaviour directly and often without recourse.

For example, fair use provisions in US copyright law enable certain uses of copyrighted works, such as copying for research or teaching purposes. However the architecture of many online publishing systems heavily regulates what one can do with an e-book: how many times it can be transferred to another device, how many times it can be printed, whether it can be moved to a different format – activities that have been unregulated until now, or that are enabled by the law but effectively ‘closed off’ by code. In this case code works to reshape behaviour, upsetting the balance between the rights of copyright holders and the rights of the public to access works to support values like education and innovation.

Working as an ethnographic researcher for Ushahidi, the non-profit technology company that makes tools for people to crowdsource crisis information, has made me acutely aware of the many ways in which ‘code’ can become ‘law’. During my time at Ushahidi, I studied the practices that people were using to verify reports by people affected by a variety of events – from earthquakes to elections, from floods to bomb blasts. I then compared these processes with those followed by Wikipedians when editing articles about breaking news events. In order to understand how to best design architecture to enable particular behaviour, it becomes important to understand how such behaviour actually occurs in practice.

In addition to the impact of code on the behaviour of users, norms, the market and laws also play a role. By interviewing both the users and designers of crowdsourcing tools I soon realized that ‘human’ verification, a process of checking whether a particular report meets a group’s truth standards, is an acutely social process. It involves negotiation between different narratives of what happened and why; identifying the sources of information and assessing their reputation among groups who are considered important users of that information; and identifying gatekeeping and fact checking processes where the source is a group or institution, amongst other factors.

One disjuncture between verification ‘practice’ and the architecture of the verification code Ushahidi developed for users was that verification categories were set as a default feature, whereas some users of the platform wanted the verification process to be invisible to external users. Items would show up as ‘unverified’ unless they had been explicitly marked as ‘verified’, leaving users unsure whether an item was unverified because the team hadn’t yet checked it, or because it had been found to be inaccurate. Some user groups wanted to be able to turn off such features when they could not take responsibility for data verification. In the case of the Christchurch Recovery Map, set up in the aftermath of the 2011 New Zealand earthquake, the government officials working with the volunteers who ran the Ushahidi instance wanted to turn the feature off: they were concerned that they could not ensure reports were indeed verified, and having the category show up (as ‘unverified’ until ‘verified’) implied that they were engaged in some kind of verification process.

The existence of a default verification category impacted on the Christchurch Recovery Map group’s ability to gain support from multiple stakeholders, including the government, but this feature of the platform’s architecture did not have the same effect in other places and at other times. For other users, like the original Ushahidi Kenya team who worked to collate instances of violence after the Kenyan elections in 2007/08, this detailed verification workflow was essential to counter the misinformation and rumour that dogged those events. As Ushahidi’s use cases have diversified – from reporting death and damage during natural disasters to political events including elections, civil war and revolutions – the architecture of Ushahidi’s code base has needed to expand. Ushahidi has recognised that code plays a defining role in the experience of verification practices, but also that code’s impact will not be the same at all times, and in all circumstances. This is why it invested in research about user diversity, in a bid to understand the contexts in which code runs, and how these contexts result in a variety of different impacts.

A key question being asked in the design of future verification mechanisms is the extent to which verification work should be done by humans or non-humans (machines). Here, verification is not a binary categorisation, but rather there is a spectrum between human and non-human verification work, and indeed, projects like Ushahidi, Wikipedia and Galaxy Zoo have all developed different verification mechanisms. Wikipedia uses a set of policies and practices about how content should be added and reviewed, such as the use of ‘citation needed’ tags for information that sounds controversial and that should be backed up by a reliable source. Galaxy Zoo uses an algorithm to detect whether certain contributions are accurate by comparing them to the same work by other volunteers.
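The agreement-based checking attributed to Galaxy Zoo above can be sketched as a simple majority vote over volunteer labels. This is an illustrative simplification rather than their actual algorithm, and the quorum value is an assumption:

```python
from collections import Counter

def consensus(labels, quorum=0.6):
    """Accept a crowd label only if one answer wins a clear majority
    (at least `quorum` of the votes); otherwise return None, flagging
    the item for human review. A sketch of agreement-based verification,
    not Galaxy Zoo's actual algorithm."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label if n / len(labels) >= quorum else None

print(consensus(["spiral", "spiral", "spiral", "elliptical"]))  # spiral
print(consensus(["spiral", "elliptical"]))                      # None
```

The quorum parameter is exactly the kind of default whose right value, as the surrounding discussion suggests, depends on the deployment context.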

Ushahidi leaves it up to individual deployers of its tools and platform to make decisions about verification policies and practices, and is designing new defaults to accommodate this variety of use. In parallel, Verily – a project by Patrick Meier (formerly of Ushahidi) with the organisations Masdar and QCRI – is responding to the large amounts of unverified and often contradictory information that appears on social media following natural disasters by enabling social media users to collectively evaluate the credibility of rapidly crowdsourced evidence.

The project was inspired by MIT’s winning entry to DARPA’s ‘Red Balloon Challenge’, which was intended to highlight social networking’s potential to solve widely distributed, time-sensitive problems – in this case by correctly identifying the GPS coordinates of 10 balloons suspended at fixed, undisclosed locations across the US. The winning MIT team crowdsourced the problem using a monetary incentive structure: $2,000 to the first person to submit the correct coordinates for a single balloon, $1,000 to the person who invited that person to the challenge, $500 to the person who invited the inviter, and so on. The system quickly took root, spawning geographically broad, dense branches of connections, and after eight hours and 52 minutes the MIT team had identified the correct coordinates of all 10 balloons.

Verily aims to apply MIT’s approach to the process of rapidly collecting and evaluating critical evidence during disasters: “Instead of looking for weather balloons across an entire country in less than 9 hours, we hope Verily will facilitate the crowdsourced collection of multimedia evidence for individual disasters in under 9 minutes.” It is still unclear how (or whether) Verily will be able to reproduce the same incentive structure, and a bigger question lies in the scale and spread of social media in the majority of countries where humanitarian assistance is needed.
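The recursive incentive structure halves the reward at each step up the invitation chain, so the total committed per balloon is a geometric series that can never exceed twice the finder’s reward – which is what made the scheme affordable however long the chains grew. A small sketch:

```python
# The MIT Red Balloon incentive scheme: $2,000 to the finder, then half
# as much to each successive inviter up the chain. The per-balloon total
# is a geometric series bounded by 2 * finder_reward.

def payouts(chain_length, finder_reward=2000.0):
    """Rewards along an invitation chain of the given length:
    the finder first, then each inviter gets half the previous amount."""
    return [finder_reward / 2**i for i in range(chain_length)]

chain = payouts(4)
print(chain)       # [2000.0, 1000.0, 500.0, 250.0]
print(sum(chain))  # 3750.0 -- always strictly under 4000
```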
The majority of Ushahidi or Crowdmap installations are, for example, still “small data” projects, with many focused on areas that still require offline verification procedures (such as calling volunteers or paid staff who are stationed across a country, as was the case in Sudan [3]). In these cases – where the social media presence may be insignificant — a team’s ability to achieve a strong local presence will define the quality of verification practices, and consequently the level of trust accorded to their project.

If code is law and if other aspects in addition to code determine how we can act in the world, it is important to understand the context in which code is deployed. Verification is a practice that determines how we can trust information coming from a variety of sources. Only by illuminating such practices and the variety of impacts that code can have in different environments can we begin to understand how code regulates our actions in crowdsourcing environments.

For more on Ushahidi verification practices and the management of sources on Wikipedia during breaking news events, see:

[1] Ford, H. (2012) Wikipedia Sources: Managing Sources in Rapidly Evolving Global News Articles on the English Wikipedia. SSRN Electronic Journal. doi:10.2139/ssrn.2127204

[2] Ford, H. (2012) Crowd Wisdom. Index on Censorship 41(4), 33–39. doi:10.1177/0306422012465800

[3] Ford, H. (2011) Verifying information from the crowd. Ushahidi.

Heather Ford has worked as a researcher, activist, journalist, educator and strategist in the fields of online collaboration, intellectual property reform, information privacy and open source software in South Africa, the United Kingdom and the United States. She is currently a DPhil student at the OII, where she is studying how Wikipedia editors write history as it happens in a format that is unprecedented in the history of encyclopedias. Before this, she worked as an ethnographer for Ushahidi. Read Heather’s blog.

For more on the Christchurch earthquake, and the role of digital humanities in preserving the digital record of its impact, see: Preserving the digital record of major natural disasters: the CEISMIC Canterbury Earthquakes Digital Archive project on this blog.

Why do (some) political protest mobilisations succeed?

The communication technologies once used by rebels and protesters to gain global visibility now look burdensome and dated: much separates the once-futuristic-looking image of Subcomandante Marcos posing in the Chiapas jungle draped in electronic gear (1994) from the uprisings of the 2011 Egyptian revolution. While the only practical platform for amplifying a message was once provided by organisations, the rise of the Internet means that cross-national networks are now reachable by individuals—who are able to bypass organisations, ditch membership dues, and embrace self-organization. As social media and mobile applications increasingly blur the distinction between public and private, ordinary citizens are becoming crucial nodes in the contemporary protest network.

The personal networks that are the main channels of information flow in sites such as Facebook, Twitter and LinkedIn mean that we don’t need to actively seek out particular information; it can be served to us with no more effort than that of maintaining a connection with our contacts. News, opinions, and calls for justice are now shared and forwarded by our friends—and their friends—in a constant churn of information, all attached to familiar names and faces. Given we are more likely to pass on information if the source belongs to our social circle, this has had an important impact on the information environment within which protest movements are initiated and develop.

Mobile connectivity is also important for understanding contemporary protest, given that the ubiquitous streams of synchronous information we access anywhere are shortening our reaction times. This is important, as the evolution of mass recruitments—whether they result in flash mobilisations, slow burns, or simply damp squibs—can only be properly understood if we have a handle on the distribution of reaction times within a population. The increasing integration of the mainstream media into our personal networks is also important, given that online networks (and independent platforms like Indymedia) are not the clear-cut alternative to corporate media they once were. We can now write on the walls or feeds of mainstream media outlets, creating two-way communication channels and public discussion.

Online petitions have also transformed political protest; lower information diffusion costs mean that support (and signatures) can be scaled up much faster. These petitions provide a mine of information for researchers interested in what makes protests succeed or fail. The study of cascading behaviour in online networks suggests that most chain reactions fail quickly, and most petitions don’t gather that much attention anyway. While large cascades tend to start at the core of networks, network centrality is not always a guarantor of success.

So what does a successful cascade look like? Work by Duncan Watts has shown that the vast majority of cascades are small and simple, terminating within one degree of an initial adopting ‘seed.’ Research has also shown that adoptions resulting from chains of referrals are extremely rare; even for the largest cascades observed, the bulk of adoptions often took place within one degree of a few dominant individuals. Conversely, research on the spreading dynamics of a petition organised in opposition to the 2002-2003 Iraq war showed a narrow but very deep tree-like distribution, progressing through many steps and complex paths. The deepness and narrowness of the observed diffusion tree meant that it was fragile—and easily broken at any of the levels required for further distribution. Chain reactions are only successful with the right alignment of factors, and this becomes more likely as more attempts are launched. The rise of social media means that there are now more attempts.
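The fragility of such chain reactions can be illustrated with a toy branching-process simulation (the sharing probability and fan-out below are invented for illustration): whenever the average number of new shares per exposed person falls below one, almost every cascade terminates within a few steps, matching the pattern Watts observed.

```python
import random

def cascade_size(p_share=0.4, fanout=2, max_depth=20):
    """Simulate one sharing cascade as a branching process: each exposed
    person independently re-shares with probability p_share to `fanout`
    contacts. With p_share * fanout < 1 the process is subcritical, so
    most cascades die out almost immediately."""
    size, frontier, depth = 1, 1, 0
    while frontier and depth < max_depth:
        frontier = sum(random.random() < p_share
                       for _ in range(frontier * fanout))
        size += frontier
        depth += 1
    return size

random.seed(42)  # fixed seed for a reproducible run
sizes = [cascade_size() for _ in range(10000)]
small = sum(s <= 3 for s in sizes) / len(sizes)
print(f"{small:.0%} of simulated cascades reach at most 3 people")
```

Roughly two thirds of runs never spread beyond three people, yet the occasional cascade runs much larger – the “right alignment of factors” the text describes.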

One consequence of these—very recent—developments is the blurring of the public and the private. A significant portion of political information shared online travels through networks that are not necessarily political, but that can be activated for political purposes as circumstances arise. Online protest networks are decentralised structures that pull together local sources of information and create efficient channels for a potentially global diffusion, but they replicate the recruitment dynamics that operated in social networks prior to the emergence of the Internet.

The wave of protests seen in 2011—including the Arab Spring, the Spanish Indignados, and the Global Occupy Campaign—reflects this global interdependence of localised, personal networks, with protest movements emerging spontaneously from the individual actions of many thousands (or millions) of networked users. Political protest movements are seldom stable and fixed organisational structures, and online networks are inherently suited to channeling this fluid commitment and identity. However, systematic research to uncover the bridges and precise network mechanisms that facilitate cross-border diffusion is still lacking. Decentralized networks facilitate mobilisations of unprecedented reach and speed—but are actually not very good at maintaining momentum, or creating particularly stable structures. For this, traditional organisations are still relevant, even while they struggle to maintain a critical mass.

The general failure of traditional organisations to harness the power of these personal networks results from their complex structure, which complicates any attempts at prediction, planning, and engineering. Mobilization paths are difficult to predict because they depend on the right alignment of conditions on different levels—from the local information contexts of individuals who initiate or sustain diffusion chains, to the global assembly of separate diffusion branches. The networked chain reactions that result as people jump onto bandwagons follow complex paths; furthermore, the cumulative effects of these individual actions within the network are not linear, due to feedback mechanisms that can cause sudden changes and flips in mobilisation dynamics, such as exponential growth.

Of course, protest movements are not created by social media technologies; they provide just one mechanism by which a movement can emerge, given the right social, economic, and historical circumstances. We therefore need to focus less on the specific technologies and more on how they are used if we are to explain why most mobilisations fail, but some succeed. Technology is just a part of the story—and today’s Twitter accounts will soon look as dated as the electronic gizmos used by the Zapatistas in the Chiapas jungle.

Internet, Politics, Policy 2010: Campaigning in the 2010 UK General Election

The first day of the conference ended in style, with a well-received reception at Oxford’s fine Divinity School.

Day Two of the conference kicked off with panels on “Mobilisation and Agenda Setting”, “Virtual Goods” and “Comparative Campaigning”. ICTlogy has been busy summarising some of the panels at the conference, including this morning’s, with some interesting contributions on comparative campaigning.

The second round of panels included a number of scientific approaches to the role of the Internet for the recent UK election:

Gibson, Cantijoch and Ward, in their analysis of the UK elections, drew attention to the fact that the 2010 UK General Election was dominated not by the Internet but by a very traditional medium: the televised debates between party leaders. Importantly, they suggest treating eParticipation as a multi-dimensional concept, i.e. distinguishing different forms of eParticipation with differing degrees of involvement – in much the same way as we have come to treat traditional forms of participation.

Anstead and Jensen aimed to trace distinctions in election campaigning between the national and the local level. They found evidence that online campaigns are both decentralised (making little mention of national campaigns) and localised (emphasising horizontal links with the community).

Lilleker and Jackson looked at the extent to which party websites encouraged participation. They found that, first and foremost, parties promote their personnel, and are rather cautious about engaging in any interactive communication. Most efforts were aimed at the campaign rather than at getting input into policy. Even though more Web 2.0 features were in use than in previous years, participation remained low.

Sudulich and Wall were interested in the uptake of online campaigning (campaign websites, Facebook profiles) by election candidates. They took into account a range of factors, including bookmakers’ odds for candidates, but found little explanatory effect overall.