Datafication. Platformisation. Metaverse. Global Internet Policy or a Fractured Communication Future?
Special Issue Call for Papers, Volume 15, Issue 4
Datafication. Platformisation. Metaverse. What is the state of global internet policy? Within our current online and hyper-connected lives, is it possible to have such a thing as global internet policy? Building on the 2022 Policy & Internet Conference, this special issue addresses the multiple, complex perspectives on internet policy from around the globe.
As we evolve through the Anthropocene and attempt to navigate the significant challenges humanity currently faces, we are consistently reminded of the most pressing critical issues of our epoch. Economic systems are at breaking point, industrial action mobilised by unions is at an all-time high, inflation is rising, workers’ pay continues to fall, and the stability of our political systems has come into question. Our health systems are under unfathomable stress, refugee numbers are increasing through displacement, and the war in Ukraine continues, all of which adds to growing global societal, economic and political pressures. And yet, concurrently, our connectivity through digital media and its surrounding environments is at an all-time high, driven arguably by the rise of technology players providing suites of social media platforms and the supporting infrastructures that enable a seamless, convenient, always-on lifestyle. The same app that lets us chat with friends and family can also book our rideshares, order our food, pay for our purchases and tempt us to become internet celebrities. What was once framed as user-generated content activity has become a normalised cultural pastime, as TikTok influencers feed the demotic turn that sees ordinary folk become internet superstars in remarkably short timeframes.
At the same time, policymakers are reforming legislation to address the stark imbalance of power generated by technology giants. One of the immediate issues concerning users is their online privacy. In many instances, governments continue to struggle to bring large-scale social media platforms to account and to seek mutually beneficial outcomes. TikTok in particular has raised user privacy concerns, as the cybersecurity agencies that advise governments have no clear answer on how its use can be maintained when nobody knows what will happen to user data. Alongside user data issues, in some countries the relationship between technology providers and governments is blurred, and regulation is becoming a weaponised approach to citizen control. To counter these power imbalances, advocacy groups are consistently calling for safe, inclusive, affordable and reliable internet connectivity as the digital divide continues to widen. These groups have highlighted the urgency of healthy online civic spaces as a key focus, alongside the safety of the users of those spaces.
This special issue asks for responses to these contemporary issues and seeks to understand whether a global internet policy is possible. How might we incorporate co-design, open dialogue, stronger governance, interoperability and user-centred perspectives into policy discussions? What are the immediate issues for policymakers?
We welcome research that addresses, but is not limited to, the following areas of interest:
Takedowns, shadowbanning, throttling
Non-western approaches towards internet policy
Internet governance and infrastructures
Content moderation
Regulatory responses that address the growing digital divide
Communication and technology for positive economic development
Building strong communication systems during times of high societal pressure
Social media and labour concerns
Emerging digital communication for marginalised groups and individuals
Digital communication that bridges regional legislation
Communication and technology through comparative media systems
Regulation for diversity across media systems
Media automation for the next 10 years and beyond
Young people and social media
Innovative empirical examples of positive digital communication and/or technology development
Please send through your title and 150-200 word abstract to Jonathon Hutchinson [jonathon.hutchinson@sydney.edu.au] and Milly Stilinovic [milica.stilinovic@sydney.edu.au] with the subject line: Policy & Internet Special Issue by October 31 2022.
Policy & Internet Journal: CFP Special Issue – Issue 1, 2022
Special Issue Editors: Jonathon Hutchinson, University of Sydney & Milica Stilinovic, University of Sydney
The Internet Regulation Turn? Policy, internet and technology
With the recent media focus on the regulation of social media platforms within our society, users, citizens, human rights advocacy groups, policymakers and content producers have all questioned the validity of these communication technologies. Do these technologies offer ease of connectivity, or do they have the potential to be weaponised and misappropriated to further political agendas, disrupt democratic processes, and abuse an individual’s right to (or assumption of) privacy? Recently, we have observed governments calling on platforms to account for their misalignment with local media markets. Regulators are asking platform providers for increased transparency into their distribution processes. Advocacy groups are asking for increased visibility. The custodians of the internet (Gillespie, 2018) are asking for better tools to manage their communities. At the same time, users are questioning the uses of their data.
Nonetheless, our societies are enjoying the benefits of contemporary communication technologies for a variety of reasons. We see new markets emerging from platform economic models, increased connectivity in times of physical isolation, new trends and connections forming, new cultural conventions being forged between disparate individuals, and friends and families enjoying the increased ease of communicating with their loved ones.
To say ‘if you do not pay for the product, you are the product’ (Orlowski, 2020) grossly misrepresents the entirety of the social dilemma we have found ourselves in: the hyper-commercialised and politicised internet of the 2020s. To combat this, we are observing several versions of a ‘Balkinized splinternet’ (Lemley, 2020) emerging, where nations and users are designing and creating their own versions of what was conceived as a way to share and enjoy information across a connected and networked infrastructure. These new internet formations are accompanied by a variety of emerging economic models, such as cryptocurrency, signifying that a moment of change has arrived (Swartz, 2020). By looking backwards, we are sometimes able to understand how we will move forward.
This special issue of Policy & Internet calls on scholars, practitioners, policymakers and students of the internet to rethink our internet, its policy and the surrounding communication technology of our contemporary society. We are looking for papers that examine the current social and communication dilemmas of the internet, and that map out the trajectory of Policy & Internet for the next five years. What will internet researchers be examining in three years? Has the idea of the ‘nation state’ returned within the debates surrounding ‘big tech’ giants? What will civil society look like in five years? What does effective policy consider for the future of ourselves and our data in the several emerging versions of the internet?
Topics may include, but are not limited to:
Internet studies
Platformisation
Everyday social media
Algorithmic media
Internet governance
The ‘regulation turn’ of the internet
News distribution
Platform accountability
Critical race studies
Civil unrest and the internet
Queer internet
The Internet of Things (IoT)
Smart Devices/Smart Cities
Robots and/or automation
E-surveillance and e-governance
Design, coding and development of the internet and its protocols
Please send through your title and 150-200 word abstract to Jonathon Hutchinson [jonathon.hutchinson@sydney.edu.au] and Milly Stilinovic [milica.stilinovic@sydney.edu.au] with the subject line: Policy & Internet Special Issue by May 15 2021.
Algorithmic systems (such as those deciding mortgage applications, or sentencing decisions) can be very difficult to understand, for experts as well as the general public. Image: Ken Lane (CC BY-NC 2.0).
The EU General Data Protection Regulation (GDPR) has sparked much discussion about the “right to explanation” for the algorithm-supported decisions made about us in our everyday lives. While there’s an obvious need for transparency in the automated decisions that are increasingly being made in areas like policing, education, healthcare and recruitment, explaining how these complex algorithmic decision-making systems arrive at any particular decision is a technically challenging problem—to put it mildly.
In their article “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR”, forthcoming in the Harvard Journal of Law & Technology, Sandra Wachter, Brent Mittelstadt, and Chris Russell present the concept of “unconditional counterfactual explanations” as a novel type of explanation of automated decisions that could address many of these challenges. Counterfactual explanations describe the minimum conditions that would have led to an alternative decision (e.g. a bank loan being approved), without the need to describe the full logic of the algorithm.
Relying on counterfactual explanations as a means to help us act rather than merely to understand could help us gauge the scope and impact of automated decisions in our lives. They might also help bridge the gap between the interests of data subjects and data controllers, which might otherwise be a barrier to a legally binding right to explanation.
We caught up with the authors to explore the role of algorithms in our everyday lives, and how a “right to explanation” for decisions might be achievable in practice:
Ed: There’s a lot of discussion about algorithmic “black boxes” — where decisions are made about us, using data and algorithms about which we (and perhaps the operator) have no direct understanding. How prevalent are these systems?
Sandra: Basically, every decision that can be made by a human can now be made by an algorithm, which can be a good thing. Algorithms (when we talk about artificial intelligence) are very good at spotting patterns and correlations that even experienced humans might miss, for example in predicting disease. They are also very cost efficient—they don’t get tired, and they don’t need holidays. This could help to cut costs, for example in healthcare.
Algorithms are also certainly more consistent than humans in making decisions. We have the famous example of judges varying the severity of their judgements depending on whether or not they’ve had lunch. That wouldn’t happen with an algorithm. That’s not to say algorithms are always going to make better decisions: but they do make more consistent ones. If the decision is bad, it’ll be distributed equally, but still be bad. Of course, in a certain way humans are also black boxes—we don’t understand what humans do either. But you can at least try to understand an algorithm: it can’t lie, for example.
Brent: In principle, any sector involving human decision-making could be prone to decision-making by algorithms. In practice, we already see algorithmic systems either making automated decisions or producing recommendations for human decision-makers in online search, advertising, shopping, medicine, criminal justice, etc. The information you consume online, the products you are recommended when shopping, the friends and contacts you are encouraged to engage with, even assessments of your likelihood to commit a crime in the immediate and long-term future—all of these tasks can currently be affected by algorithmic decision-making.
Ed: I can see that algorithmic decision-making could be faster and better than human decisions in many situations. Are there downsides?
Sandra: Simple algorithms that follow a basic decision tree (with parameters decided by people) can be easily understood. But we’re now also using much more complex systems like neural nets that act in a very unpredictable way, and that’s the problem. The system is also starting to become autonomous, rather than being under the full control of the operator. You will see the output, but not necessarily why it got there. This also happens with humans, of course: I could be told by a recruiter that my failure to land a job had nothing to do with my gender (even if it did); an algorithm, however, would not intentionally lie. But of course the algorithm might be biased against me if it’s trained on biased data—thereby reproducing the biases of our world.
We have seen that the COMPAS algorithm used by US judges to calculate the probability of re-offending when making sentencing and parole decisions is a major source of discrimination. Data provenance is massively important, and probably one of the reasons why we have biased decisions. We don’t necessarily know where the data comes from, and whether it’s accurate, complete, biased, etc. We need to have lots of standards in place to ensure that the data set is unbiased. Only then can the algorithm produce nondiscriminatory results.
A more fundamental problem with predictions is that you might never know what would have happened—as you’re just dealing with probabilities; with correlations in a population, rather than with causalities. Another problem is that algorithms might produce correct decisions, but not necessarily fair ones. We’ve been wrestling with the concept of fairness for centuries, without consensus. But lack of fairness is certainly something the system won’t correct itself—that’s something that society must correct.
Brent: The biases and inequalities that exist in the real world and in real people can easily be transferred to algorithmic systems. Humans training learning systems can inadvertently or purposefully embed biases into the model, for example through labelling content as ‘offensive’ or ‘inoffensive’ based on personal taste. Once learned, these biases can spread at scale, exacerbating existing inequalities. Eliminating these biases can be very difficult, hence we currently see much research done on the measurement of fairness or detection of discrimination in algorithmic systems.
These systems can also be very difficult—if not impossible—to understand, for experts as well as the general public. We might traditionally expect to be able to question the reasoning of a human decision-maker, even if imperfectly, but the rationale of many complex algorithmic systems can be highly inaccessible to people affected by their decisions. These potential risks aren’t necessarily reasons to forego algorithmic decision-making altogether; rather, they can be seen as potential effects to be mitigated through other means (e.g. a loan programme weighted towards historically disadvantaged communities), or at least to be weighed against the potential benefits when choosing whether or not to adopt a system.
Ed: So it sounds like many algorithmic decisions could be too complex to “explain” to someone, even if a right to explanation became law. But you propose “counterfactual explanations” as an alternative—i.e. explaining to the subject what would have to change (e.g. about a job application) for a different decision to be arrived at. How does this simplify things?
Brent: So rather than trying to explain the entire rationale of a highly complex decision-making process, counterfactuals allow us to provide simple statements about what would have needed to be different about an individual’s situation to get a different, preferred outcome. You basically work from the outcome: you say “I am here; what is the minimum I need to do to get there?” By providing simple statements that are generally meaningful, and that reveal a small bit of the rationale of a decision, the individual has grounds to change their situation or contest the decision, regardless of their technical expertise. Understanding even a bit of how a decision is made is better than being told “sorry, you wouldn’t understand”—at least in terms of fostering trust in the system.
Sandra: And the nice thing about counterfactuals is that they work with highly complex systems, like neural nets. They don’t explain why something happened, but they explain what happened. And three things people might want to know are:
(1) What happened: why did I not get the loan (or get refused parole, etc.)?
(2) Information so I can contest the decision if I think it’s inaccurate or unfair.
(3) Even if the decision was accurate and fair, tell me what I can do to improve my chances in the future.
Machine learning and neural nets make use of so much information that individuals have really no oversight of what they’re processing, so it’s much easier to give someone an explanation of the key variables that affected the decision. With the counterfactual idea of a “close possible world” you give an indication of the minimal changes required to get what you actually want.
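To make the “close possible world” idea concrete, here is a minimal sketch of a counterfactual search in Python. The toy loan model, feature names, step sizes and distance function are illustrative assumptions, not the authors’ implementation; the point is simply that the closest decision-flipping change can be found by querying the model, without opening it up.

```python
# Minimal sketch of a counterfactual search (illustrative assumptions
# throughout; not the paper's reference implementation).
import itertools

def toy_loan_model(income, debt):
    """Stand-in 'black box': approve iff a weighted score clears a threshold."""
    return income - 2.5 * debt > 20_000  # True = loan approved

def counterfactual(model, income, debt, income_steps, debt_steps):
    """Return the closest perturbation (by L1 distance) that flips the decision."""
    best = None
    for di, dd in itertools.product(income_steps, debt_steps):
        if model(income + di, debt + dd):      # desired outcome reached?
            dist = abs(di) + abs(dd)           # distance from the applicant's situation
            if best is None or dist < best[0]:
                best = (dist, income + di, debt + dd)
    return best

# An applicant refused under the current rule:
result = counterfactual(
    toy_loan_model, income=25_000, debt=4_000,
    income_steps=range(0, 20_001, 1_000),  # could raise income
    debt_steps=range(-4_000, 1, 500),      # could pay down debt
)
if result:
    _, cf_income, cf_debt = result
    print(f"Closest approval: income £{cf_income:,}, debt £{cf_debt:,}")
    # -> "if your debt had been £1,500, the loan would have been approved"
```

A real system would involve many more variables and a smarter optimiser than a grid scan, but the counterfactual statement handed to the applicant stays just as short.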
Ed: So would a series of counterfactuals (e.g. “over 18” “no prior convictions” “no debt”) essentially define a space within which a certain decision is likely to be reached? This decision space could presumably be graphed quite easily, to help people understand what factors will likely be important in reaching a decision?
Brent: This would only work for highly simplistic, linear models, which are not normally the type that confound human capacities for understanding. The complex systems that we refer to as ‘black boxes’ are highly dimensional and involve a multitude of (probabilistic) dependencies between variables that can’t be graphed simply. It may be the case that if I were aged between 35 and 40 with an income of £30,000, I would not get a loan. But I could be told that if I had an income of £35,000, I would have gotten the loan. I may then assume that an income over £35,000 guarantees me a loan in the future. But it may turn out that I would be refused a loan with an income above £40,000 because of a change in tax bracket. Non-linear relationships of this type can make it misleading to graph decision spaces. For simple linear models, such a graph may be a very good idea, but not for black box systems; they could, in fact, be highly misleading.
Chris: As Brent says, we’re concerned with understanding complicated algorithms that don’t just use hard cut-offs based on binary features. To use your example, maybe a little bit of debt is acceptable, but it would increase your risk of default slightly, so the amount of money you need to earn would go up. Or maybe certain convictions committed in the past also only increase your risk of defaulting slightly, and can be compensated for with a higher salary. It’s not at all obvious how you could graph these complicated interdependencies over many variables together. This is why we settled on counterfactuals as a way to give people a direct and easy-to-understand path to move from the decision they got now to a more favourable one at a later date.
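Brent’s income example can be reproduced with a toy rule, sketched below with entirely hypothetical numbers. Because a tax-bracket effect interacts with income, a higher salary does not monotonically improve the outcome, so a single graph of income against the decision would mislead, while a per-applicant counterfactual stays readable.

```python
# Hypothetical loan rule (illustrative numbers only): more income is not
# always better, so the decision space can't be graphed as one cut-off.
def tax(income):
    # Toy two-bracket tax with a jump above £40,000.
    return 0.2 * income if income <= 40_000 else 0.4 * income

def toy_loan_rule(income):
    disposable = income - tax(income)
    return disposable > 27_500  # approve iff disposable income clears the bar

for income in (30_000, 35_000, 42_000):
    verdict = "approved" if toy_loan_rule(income) else "refused"
    print(f"£{income:,}: {verdict}")
# £30,000: refused   £35,000: approved   £42,000: refused
```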
Ed: But could a counterfactual approach just end up kicking the can down the road, if we know “how” a particular decision was reached, but not “why” the algorithm was weighted in such a way to produce that decision?
Brent: It depends what we mean by “why”. If this is “why” in the sense of, why was the system designed this way, to consider this type of data for this task, then we should be asking these questions while these systems are designed and deployed. Counterfactuals address decisions that have already been made, but can still reveal uncomfortable knowledge about a system’s design and functionality. So it can certainly inform “why” questions.
Sandra: Just to echo Brent, we don’t want to imply that asking the “why” is unimportant—I think it’s very important, and interpretability as a field has to be pursued, particularly if we’re using algorithms in highly sensitive areas. Even if we have the “what”, the “why” question is still necessary to ensure the safety of those systems.
Chris: And anyone who’s talked to a three-year-old knows there is an endless stream of “why” questions that can be asked. But already, counterfactuals provide a major step forward in answering why, compared to previous approaches that were concerned with providing approximate descriptions of how algorithms make decisions—but not the “why” or the external facts leading to that decision. I think when judging the strength of an explanation, you also have to look at questions like “How easy is this to understand?” and “How does this help the person I’m explaining things to?” For me, counterfactuals are a more immediately useful explanation than something which explains where the weights came from. Even if you did know, what could you do with that information?
Ed: I guess the question of algorithmic decision-making in society involves a hugely complex intersection of industry, research, and policymaking? Are we in control of things?
Sandra: Artificial intelligence (and the technology supporting it) is an area where many sectors are now trying to work together, including in the crucial areas of fairness, transparency and accountability of algorithmic decision-making. I feel at the moment we see a very multi-stakeholder approach, and I hope that continues in the future. We can see for example that industry is very concerned with it—the Partnership on AI is addressing these topics and trying to come up with a set of industry guidelines, recognising the responsibilities inherent in producing these systems. There are also lots of data scientists (e.g. at the OII and Turing Institute) working on these questions. Policy-makers around the world (e.g. UK, EU, US, China) are preparing their countries for the AI future, so it’s on everybody’s mind at the moment. It’s an extremely important topic.
Law and ethics obviously have an important role to play. The opacity and unpredictability of AI, and its potentially discriminatory nature, require that we think about the legal and ethical implications very early on. That starts with educating the coding community, and ensuring diversity. At the same time, it’s important to have an interdisciplinary approach. At the moment we’re focusing a bit too much on the STEM subjects; there’s a lot of funding going to those areas (which makes sense, obviously), but the social sciences are currently a bit neglected despite the major role they play in recognising things like discrimination and bias, which you might not recognise from just looking at code.
Brent: Yes—and we’ll need much greater interaction and collaboration between these sectors to stay ‘in control’ of things, so to speak. Policy always has a tendency to lag behind technological developments; the challenge here is to stay close enough to the curve to prevent major issues from arising. The potential for algorithms to transform society is massive, so ensuring a quicker and more reflexive relationship between these sectors than normal is absolutely critical.
Digital platforms strongly determine the structure of local interactions with users; essentially representing a totalitarian form of control. Image: Bruno Cordioli (Flickr CC BY 2.0).
Digital platforms are not just software-based media, they are governing systems that control, interact, and accumulate. As surfaces on which social action takes place, digital platforms mediate—and to a considerable extent, dictate—economic relationships and social action. By automating market exchanges they solidify relationships into material infrastructure, lend a degree of immutability and traceability to engagements, and turn what previously would have been informal exchanges into much more formalised rules.
In his Policy & Internet article “Platform Logic: An Interdisciplinary Approach to the Platform-based Economy”, Jonas Andersson Schwarz argues that digital platforms enact a twofold logic of micro-level technocentric control and macro-level geopolitical domination, while supporting a range of generative outcomes between the two levels. Technology isn’t ‘neutral’, and what designers want may clash with what users want: so it’s important that we take a multi-perspective view of the role of digital platforms in contemporary society. For example, if we only consider the technical, we’ll notice modularity, compatibility, compliance, flexibility, mutual subsistence, and cross-subsidisation. By contrast, if we consider ownership and organisational control, we’ll observe issues of consolidation, privatisation, enclosure, financialisation and protectionism.
When focusing on local interactions (e.g. with users), the digital nature of platforms is seen to strongly determine structure; essentially representing an absolute or totalitarian form of control. When we focus on geopolitical power arrangements in the “platform society”, patterns can be observed that are worryingly suggestive of market dominance, colonisation, and consolidation. Concerns have been expressed that these (overwhelmingly US-biased) platform giants are not only enacting hegemony, but are on a road to “usurpation through tech—a worry that these companies could grow so large and become so deeply entrenched in world economies that they could effectively make their own laws.”
We caught up with Jonas to discuss his findings:
Ed.: You say that there are lots of different ways of considering “platforms”: what (briefly) are some of these different approaches, and why should they be linked up a bit? Certainly the conference your paper was presented at, “IPP2016: The Platform Society”, seemed to have struck an incredibly rich seam in this topic, and I think showed the value of approaching an issue like digital platforms from multiple disciplinary angles.
Jonas: In my article I’ve chosen to theorise digital platforms exclusively, which of course narrows down the meaning of the concept to begin with. There are different interpretations as to what actually constitutes a digital platform. There has to be an element of proprietary control over the surface on which interaction takes place, for example. Free software and open protocols, while ubiquitous digital tools, need not necessarily be considered platforms, whereas proprietary operating systems should be.
Within contemporary media studies there is considerable divergence as to whether one should define so-called over-the-top streaming services as platforms or not. Netflix, for example, in a strict technical sense, is not a platform for self-publishing and sharing in the way that YouTube is. But, in an economic sense, Netflix definitely enacts a multi-sided market, which is one of the key components of what a platform does, economically speaking. Since platforms crystallise economic relationships into material infrastructure, conceptual conflation of this kind is unavoidable—different scholars tend to put different emphasis on different things.
Hence, when it comes to normative concerns, there are numerous approaches, ranging from largely apolitical computer science and design management studies, brandishing an optimistic view where blithe conceptions of innovation and generativity are emphasised, to critical approaches in political economy, where things like market dominance and consolidation are emphasised.
In my article, I try to relate to both of these schools of thought, by noting that they each are normative—albeit in vastly different ways—and by noting that not only do they each have somewhat different focus, they actually bring different research objects to the table: usually, “efficacy” in purely technical interaction design is something altogether different from “efficacy” in matters of societal power relations, for example. While both notions can be said to be true, their respective validity might differ, depending on which matter of concern we are dealing with in each respective inquiry.
Ed.: You note in your article that platforms have a “twofold logic of micro-level technocentric control and macro-level geopolitical domination” which sounds quite a lot like what government does. Do you think “platform as government” is a useful way to think about this, i.e. are there any analogies?
Jonas: Sure, especially if we understand how platforms enact governance in really quite rigid forms. Platforms literally transform market relations into infrastructure. Compared to informal or spontaneous social structures, where there’s a lot of elasticity and ambiguity—put simply, giving-and-taking—automated digital infrastructure operates by unambiguous implementations of computer code. As Lawrence Lessig and others have argued, the perhaps most dangerous aspect of this is when digital infrastructures implement highly centralised modes of governance, often literally only having one point of command-and-control. The platform owner flicks a switch, and then certain listings and settings are allowed or disallowed, and so on.
This should worry any liberal, since it is a mode of governance that is totalitarian by nature; it runs counter to any democratic, liberal notion of spontaneous, emergent civic action. Funnily, a lot of Silicon Valley ideology appears to be indebted to theorists like Friedrich von Hayek, who observed a calculative rationality emerging out of heterogeneous, spontaneous market activity—but at the same time, Hayek’s call to arms was in itself a reaction to central planning of the very kind that I think digital platforms, when designed in too rigid a way, risk erecting.
Ed.: Is there a sense (in hindsight) that these platforms are basically the logical outcome of the ruthless pursuit of market efficiency, i.e. enabled by digital technologies? But is there also a danger that they could lock out equitable development and innovation if they become too powerful (e.g. leading to worries about market concentration and anti-trust issues)? At one point you ask: “Why is society collectively acquiescing to this development?” Why do you think that is?
Jonas: The governance aspect above rests on a kind of managerialist fantasy of perfect calculative rationality that is conferred upon the platform as an allegedly neutral agent or intermediary; scholars like Frank Pasquale have begun to unravel some of the rather dodgy ideology underpinning this informational idealism, or “dataism,” as José van Dijck calls it. However, it’s important to note how much of this risk for overly rigid structures comes down to sheer design implementation; I truly believe there is scope for more democratically adaptive, benign platforms, but that can only be achieved either through real incentives at the design stage (e.g. Wikipedia, and the ways in which its core business idea involves quality control by design), or through ex-post regulation, forcing platform owners to consider certain societally desirable consequences.
Ed.: A lot of this discussion seems to be based on control. Is there a general theory of “control”—i.e. are these companies creating systems of user management and control that follow similar conceptual/theoretical lines, or just doing “what seems right” to them in their own particular contexts?
Jonas: Down the stack, there is always a binary logic of control at play in any digital infrastructure. Still, on a higher level in the stack, as more complexity is added, we should expect to see more non-linear, adaptive functionality that can handle complexity and context. And where computational logic falls short, we should demand tolerable degrees of human moderation, more than there is now, to be sure. Regulators are going this way when it comes to things like Facebook and hate speech, and I think there is considerable consumer demand for it, as when disputes arise on Airbnb and similar markets.
Ed.: What do you think are the main worries with the way things are going with these mega-platforms, i.e. the things that policy-makers should hopefully be concentrating on, and looking out for?
Jonas: Policymakers are beginning to realise the unexpected synergies that big data gives rise to. As The Economist recently pointed out, once you control portable smartphones, you’ll have instant geopositioning data on a massive scale—you’ll want to own and control map services because you’ll then also have data on car traffic in real time, which means you’d be likely to have the transportation market cornered, self-driving cars especially. If one takes an agnostic, heterodox view on companies like Alphabet, some of their far-flung projects actually begin to make sense, if synergy is taken into consideration. For automated systems, the more detailed the data becomes, the better the system will perform; vast pools of data get to act as protective moats.
One solution that The Economist suggests, and that has been championed for years by internet veteran Doc Searls, is to press for vastly increased transparency in terms of user data, so that individuals can improve their own sovereignty, control their relationships with platform companies, and thereby collectively demand that the companies in question disclose the value of this data—which would, by extension, improve signalling of the actual value of the company itself. If today’s platform companies are reluctant to do this, is that because it would perhaps reveal some of them to be less valuable than they are held out to be?
Another potentially useful, proactive measure that I describe in my article is the establishment of viable competitors or supplements to the services that so many of us have become used to having provided by platform giants. Instead of Facebook monopolising identity management online, which sadly seems to have become the norm in some countries, look to the Scandinavian example of BankID, a platform service run by a regional bank consortium that offers a much safer and more nationally controllable identity management solution.
Alternative platform services like these could be built by private companies as well as state-funded ones; alongside privately owned consortia of this kind, it would be interesting to see innovation within the public service remit, exploring how that concept could be re-thought in an era of platform capitalism.
“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.
The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific—very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good?
Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure development? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal.
Here’s where “data financing” comes in. It’s a term we coined, based on innovative financing, a concept increasingly used in the philanthropic world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the UNITAID air ticket levy used to advance global health.
Data financing, then, is a subset of innovative financing that refers to mechanisms that attempt to redirect a slice of the value created in the global data economy towards broader social objectives. For instance, a Global Internet Subsidy funded by large Internet companies could help to educate and build infrastructure in the world’s marginalised regions, in the long run also growing the market for Internet companies’ services. But such a model would need well-designed governance mechanisms to avoid the pitfalls of current Internet subsidisation initiatives, which risk failing because of well-founded concerns that they further entrench Internet giants’ dominance over emerging digital markets.
Besides the Global Internet Subsidy, other data financing models examined in the report are a Privacy Insurance for personal data processing, a Shared Knowledge Duty payable by businesses profiting from open and public data, and an Attention Levy to disincentivise intrusive marketing. Many of these have been considered before, and they come with significant economic, legal, political, and technical challenges. Our report considers these challenges in turn, assesses the feasibility of potential solutions, and presents rough estimates of potential financial impacts.
Some of the prevailing business models of the data economy—provoking users’ attention, extracting their personal information, and monetising it through advertising—are more or less taken for granted today. But they are something of a historical accident, an unanticipated corollary to some of the technical and political decisions made early in the Internet’s design. Certainly they are not any inherent feature of data as such. Although our report focuses on the technical, legal, and political practicalities of the idea of data financing, it also invites a careful reader to question some of the accepted truths on how a data-intensive economy could be organised, and what business models might be possible.
Read the report: Lehdonvirta, V., Mittelstadt, B. D., Taylor, G., Lu, Y. Y., Kadikov, A., and Margetts, H. (2016) Data Financing for Global Good: A Feasibility Study. University of Oxford: Oxford Internet Institute.
Bitcoin’s underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a “paradigm shift in the very idea of economic organisation”. But the OII’s Professor Vili Lehdonvirta contends that such revolutionary potential may be undermined by a fundamental paradox that has to do with the governance of the technology.
I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionise industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organisation, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively you can watch a video of the talk below.
First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies.
In each case, I think it is fair to say that the reason that so many people are so excited about blockchain today is not the technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design to process at most approximately seven transactions per second, whereas the Visa payment network has a peak capacity of 56,000 transactions per second. Other implementations may have better performance, and on some other metrics blockchain technologies can perhaps beat more conventional technologies. But technical performance is not why so many people think blockchain is revolutionary and paradigm-shifting.
The reason that blockchain is making waves is that it promises to change the very way economies are organised: to eliminate centralised third parties. Let me explain what this means in theoretical terms. Many economic transactions, such as long-distance trade, can be modeled as a game of Prisoners’ Dilemma. The buyer and the seller can either cooperate (send the shipment/payment as promised) or defect (not send the shipment/payment). If the buyer and the seller don’t trust each other, then the equilibrium solution is that neither player cooperates and no trade takes place. This is known as the fundamental problem of cooperation.
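The trade dilemma can be written out as a standard payoff matrix. The sketch below uses illustrative payoffs (my numbers, not from the talk) and checks every strategy profile for Nash equilibrium; the only profile from which neither trader gains by unilaterally switching is mutual defection, i.e. no trade.

```python
# One-shot Prisoners' Dilemma for long-distance trade (illustrative payoffs).
# Strategies: C = cooperate (send the shipment/payment), D = defect.
PAYOFFS = {  # (buyer, seller) -> (buyer payoff, seller payoff)
    ("C", "C"): (3, 3),  # trade happens, both gain
    ("C", "D"): (0, 5),  # buyer pays, seller keeps the goods
    ("D", "C"): (5, 0),  # seller ships, buyer keeps the money
    ("D", "D"): (1, 1),  # no trade
}

def is_nash(buyer, seller):
    """True if neither player can gain by unilaterally switching strategy."""
    b_pay, s_pay = PAYOFFS[(buyer, seller)]
    return (all(PAYOFFS[(alt, seller)][0] <= b_pay for alt in "CD")
            and all(PAYOFFS[(buyer, alt)][1] <= s_pay for alt in "CD"))

for profile in PAYOFFS:
    if is_nash(*profile):
        print("Equilibrium:", profile)  # -> ('D', 'D'): no one cooperates
```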
There are several classic solutions to the problem of cooperation. One is reputation. In a community of traders where members repeatedly engage in exchange, any trader who defects (fails to deliver on a promise) will gain a negative reputation, and other traders will refuse to trade with them out of self-interest. This threat of exclusion from the community acts as a deterrent against defection, and the equilibrium under certain conditions becomes that everyone will cooperate.
Reputation is only a limited solution, however. It only works within communities where reputational information spreads effectively, and traders may still defect if the payoff from doing so is greater than the loss of future trade. Modern large-scale market economies where people trade with strangers on a daily basis are only possible because of another solution: third-party enforcement. In particular, this means state-enforced contracts and bills of exchange enforced by banks. These third parties in essence force parties to cooperate and to follow through with their promises.
Besides trade, another example of the problem of cooperation is currency. Currency can be modeled as a multiplayer game of Prisoners’ Dilemma. Traders collectively have an interest in maintaining a stable currency, because it acts as a lubricant to trade. But each trader individually has an interest in debasing the currency, in the sense of paying with fake money (what in blockchain-speak is referred to as double spending). Again the classic solution to this dilemma is third-party enforcement: the state polices metal currencies and punishes counterfeiters, and banks control ledgers and prevent people from spending money they don’t have.
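In code, the enforcement rule itself is almost trivial; the interesting question is who gets to run it. The toy ledger below (an illustration, not a description of any real banking system) simply rejects any payment the payer cannot cover. A blockchain replaces the trusted operator of such a ledger with a distributed network applying the same rule.

```python
# Toy third-party enforcer: a bank ledger that blocks double spending.
class Bank:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, payer, payee, amount):
        # The enforced rule: you cannot spend money you don't have.
        if self.balances.get(payer, 0) < amount:
            raise ValueError(f"{payer} lacks funds: double spend blocked")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount

bank = Bank({"alice": 100, "bob": 0})
bank.transfer("alice", "bob", 100)      # fine: alice has the funds
try:
    bank.transfer("alice", "bob", 100)  # the same 100 can't be spent twice
except ValueError as err:
    print(err)                          # -> alice lacks funds: double spend blocked
```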
So third-party enforcement is the dominant model of economic organisation in today’s market economies. But it’s not without its problems. The enforcer is in a powerful position in relation to the enforced: banks could extract exorbitant fees, and states could abuse their power by debasing the currency, illegitimately freezing assets, or enforcing contracts in unfair ways. One classic solution to the problems of third-party enforcement is competition. Bank fees are kept in check by competition: the enforced can switch to another enforcer if the fees get excessive.
But competition is not always a viable solution: there is a very high cost to switching to another state (i.e. becoming a refugee) if your state starts to abuse its power. Another classic solution is accountability: democratic institutions that try to ensure the enforcer acts in the interest of the enforced. For instance, the interbank payment messaging network SWIFT is a cooperative society owned by its member banks. The members elect a Board of Directors that is the highest decision-making body in the organisation. This way, they attempt to ensure that SWIFT does not try to extract excessive fees from the member banks or abuse its power against them. Still, even accountability is not without its problems, since it comes with the politics of trying to reconcile different members’ diverging interests as best as possible.
Into this picture enters blockchain: a technology where third-party enforcers are replaced with a distributed network that enforces the rules. It can enforce contracts, prevent double spending, and cap the size of the money pool all without participants having to cede power to any particular third party who might abuse the power. No rent-seeking, no abuses of power, no politics—blockchain technologies can be used to create “math-based money” and “unstoppable” contracts that are enforced with the impartiality of a machine instead of the imperfect and capricious human bureaucracy of a state or a bank. This is why so many people are so excited about blockchain: its supposed ability to change economic organisation in a way that transforms dominant relationships of power.
Unfortunately this turns out to be a naive understanding of blockchain, and the reality is inevitably less exciting. Let me explain why. In economic organisation, we must distinguish between enforcing rules and making rules. Laws are rules enforced by state bureaucracy and made by a legislature. The SWIFT Protocol is a set of rules enforced by SWIFTNet (a centralised computational system) and made, ultimately, by SWIFT’s Board of Directors. The Bitcoin Protocol is a set of rules enforced by the Bitcoin Network (a distributed network of computers) made by—whom exactly? Who makes the rules matters at least as much as who enforces them. Blockchain technology may provide for completely impartial rule-enforcement, but that is of little comfort if the rules themselves are changed. This rule-making is what we refer to as governance.
Using Bitcoin as an example, the initial versions of the protocol (i.e. the rules) were written by the pseudonymous Satoshi Nakamoto, and later versions are released by a core development team. The development team is not autocratic: a complex set of social and technical entanglements means that other people are also influential in how Bitcoin’s rules are set; in particular, so-called mining pools, headed by a handful of individuals, are very influential. The point here is not to attempt to pick apart Bitcoin’s political order; the point is that Bitcoin has not in any sense eliminated human politics; humans are still very much in charge of setting the rules that the network enforces.
There is, however, no formal process for how governance works in Bitcoin, because for a very long time these politics were not explicitly recognised, and many people don’t recognise them, preferring instead the idea that Bitcoin is purely “math-based money” and that all the developers are doing is purely apolitical plumbing work. But what has started to make this position untenable and Bitcoin’s politics visible is the so-called “block size debate”—a big disagreement between factions of the Bitcoin community over the future direction of the rules. Different stakeholders have different interests in the matter, and in the absence of a robust governance mechanism that could reconcile between the interests, this has resulted in open “warfare” between the camps over social media and discussion forums.
Will competition solve the issue? Multiple “forks” of the Bitcoin protocol have emerged, each with slightly different rules. But network economics teaches us that competition does not work well at all in the presence of strong network effects: everyone prefers to be in the network where other people are, even if its rules are not exactly what they would prefer. Network markets tend to tip in favour of the largest network. Every fork/split diminishes the total value of the system, and those on the losing side of a fork may eventually find their assets worthless.
If competition doesn’t work, this leaves us with accountability. There is no obvious path by which Bitcoin could develop accountable governance institutions. But other blockchain projects, especially those that are gaining some kind of commercial or public sector legitimacy, are designed from the ground up with some level of accountable governance. For instance, R3 is a firm that develops blockchain technology for use in the financial services industry. It has enrolled a consortium of banks to guide the effort, and its documents talk about the “mandate” it has from its “member banks”. Its governance model thus sounds a lot like the beginnings of something like SWIFT. Another example is RSCoin, designed by my ATI colleagues George Danezis and Sarah Meiklejohn, which is intended to be governed by a central bank.
Regardless of the model, my point is that blockchain technologies cannot escape the problem of governance. Whether they recognise it or not, they face the same governance issues as conventional third-party enforcers. You can use technologies to potentially enhance the processes of governance (e.g. transparency, online deliberation, e-voting), but you can’t engineer away governance as such. All this leads me to wonder how revolutionary blockchain technologies really are. If you still rely on a Board of Directors or similar body to make it work, how much has economic organisation really changed?
And this leads me to my final point, a provocation: once you address the problem of governance, you no longer need blockchain; you can just as well use conventional technology that assumes a trusted central party to enforce the rules, because you’re already trusting somebody (or some organisation/process) to make the rules. I call this blockchain’s ‘governance paradox’: once you master it, you no longer need it. Indeed, R3’s design seems to have something called “uniqueness services”, which look a lot like trusted third-party enforcers (though this isn’t clear from the white paper). RSCoin likewise relies entirely on trusted third parties. The differences to conventional technology are no longer that apparent.
Perhaps blockchain technologies can still deliver better technical performance, like better availability and data integrity. But it’s not clear to me what real changes to economic organisation and power relations they could bring about. I’m very happy to be challenged on this, if you can point out a place in my reasoning where I’ve made an error. Understanding grows via debate. But for the time being, I can’t help but be very skeptical of the claims that blockchain will fundamentally transform the economy or government.
The governance of DLTs is also examined in this report chapter that I coauthored earlier this year:
Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.
Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb and the other data mongers who have built services based solely on digital assets.
The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.
Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too—the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.
So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, never pass by webcams or sensors, and never use public transport or public services, your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your banking and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you with better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.
So we most likely all have data that is being shared and it is generally in our interest to share it, but how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?
Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Community Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas for a governing body to apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value from organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.
I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process, involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.
In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts and users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1; namely Collect, Store, Distribute, and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated. New examples of good practice will be added as they come to light.
[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]
The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and for those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes. With new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least be assured that the organisation has considered how it is using your data now and how it might want to reuse it in the future, how and where your data will be stored, and finally how your data will be distributed or discarded. And that’s a good start!
Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.
What are the linkages between multistakeholder governance and crowdsourcing? Both are new—trendy, if you will—approaches to governance premised on the potential of collective wisdom, bringing together diverse groups in policy-shaping processes. Their interlinkage has so far remained underexplored. Our article recently published in Policy & Internet sought to investigate this in the context of Internet governance, in order to assess the extent to which crowdsourcing represents an emerging opportunity for participation in global public policymaking.
We examined two recent Internet governance initiatives that incorporated crowdsourcing, with mixed results: the first, the ICANN Strategy Panel on Multistakeholder Innovation, received only limited support from the online community; the second, NETmundial, attracted a significant number of online inputs from global stakeholders, who had the opportunity to engage via a platform for political participation set up specifically for the drafting of the outcome document. The study builds on these two cases to evaluate how crowdsourcing was used as a form of public consultation aimed at bringing the online voice of the “undefined many” (as opposed to the “elected few”) into Internet governance processes.
First, it emerged from the two cases that the design of consultation processes conducted via crowdsourcing platforms is key to overcoming barriers to participation. For instance, in the NETmundial process, the ability to submit comments and participate remotely via www.netmundial.br attracted inputs from all over the world from the preparatory phase of the meeting onwards. In addition, substantial public engagement from the local community was obtained in the drafting of the outcome document, through a platform for political participation—www.participa.br—that gathered comments in Portuguese. In contrast, the outreach efforts of the ICANN Strategy Panel on Multistakeholder Innovation remained limited; the crowdsourcing platform it used gathered input (exclusively in English) from only a small group of people, insufficient to give online public input a significant role in the reform of ICANN’s multistakeholder processes.
Second, questions around how crowdsourcing could and should be used to enhance the legitimacy of decision-making processes in Internet governance remain unanswered. A proper institutional setting that recognises a role for online multistakeholder participation is yet to be defined; in its absence, the initiatives we examined present a set of procedural limitations. For instance, in the NETmundial case, the Executive Multistakeholder Committee, which was in charge of drafting an outcome document for discussion during the meeting based on an analysis of the online contributions, favoured more “mainstream” and “uncontroversial” contributions. Additionally, no online deliberation mechanisms were in place for the different propositions put forward by the High-Level Multistakeholder Committee, which commented on the initial draft.
With regard to ICANN, online consultations have been used on a regular basis since its creation in 1998. Its target audience is the “ICANN community,” a group of stakeholders who volunteer their time and expertise to improve policy processes within the organisation. Despite this effort, initiatives such as the 2000 global election for the new At-Large Directors have revealed the difficulty of reaching as broad an audience as hoped. Our study discusses some of the obstacles to the implementation of this ambitious initiative, including limited information and awareness about the At-Large elections, and low Internet access and use in most developing countries, particularly in Africa and Latin America.
Third, there is a need for clear rules regarding the way contributions are evaluated in crowdsourcing efforts. When the deliberating body (or committee) is free to disregard inputs without giving any reasons, it triggers concerns about the broader transnational governance framework in which we operate, as nobody elects the few who end up determining which parts of the contributions are reflected in the outcome document. To avoid the agency problem arising from this lack of accountability over the incorporation of inputs, it is important that crowdsourcing initiatives pay particular attention to designing a clear and comprehensive assessment process.
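One way to picture such an assessment process is as a published audit trail in which every contribution receives an explicit decision and a stated motivation, so that no input can be dropped silently. The sketch below, in Python, is purely illustrative; the field names and example entries are invented and do not describe any actual committee's procedure.

```python
from dataclasses import dataclass

@dataclass
class ContributionDecision:
    """A published audit-trail entry for one crowdsourced input."""
    contribution_id: str
    summary: str
    decision: str      # e.g. "incorporated", "merged", "rejected"
    motivation: str    # the stated reason, published alongside the decision

# Hypothetical log entries, for illustration only:
log = [
    ContributionDecision("c-001", "add human-rights language to preamble",
                         "incorporated", "wide support across stakeholder groups"),
    ContributionDecision("c-002", "narrow the document to technical issues",
                         "rejected", "conflicts with the meeting's agreed scope"),
]

# The accountability rule: no input may be set aside without a motivation.
assert all(entry.motivation for entry in log)
```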
The “wisdom of the crowd” has long been drawn on in developing the Internet, yet it remains contested ground when it comes to the Internet’s governance. In multistakeholder set-ups, the diversity of voices and the collection of ideas and input from as many actors as possible—via online means—represent a desideratum rather than a reality. In our exploration of empowerment through online crowdsourcing for institutional reform, we identify three fundamental preconditions: first, the existence of sufficient community interest, able to leverage wide expertise beyond a purely technical discussion; second, the existence of procedures for the collection and screening of inputs, streamlining the ideas considered for implementation; and third, a commitment to institutionalising these procedures, especially by clearly defining the rules according to which feedback is incorporated and circumvention is avoided.
Roxana Radu is a PhD candidate in International Relations at the Graduate Institute of International and Development Studies in Geneva and a fellow at the Center for Media, Data and Society, Central European University (Budapest). Her current research explores the negotiation of internet policy-making in global and regional frameworks.
Nicolo Zingales is an assistant professor at Tilburg law school, a senior member of the Tilburg Law and Economics Center (TILEC), and a research associate of the Tilburg Institute for Law, Technology and Society (TILT). He researches various aspects of Internet governance and regulation, including multistakeholder processes, data-driven innovation and the role of online intermediaries.
Enrico Calandro (PhD) is a senior research fellow at Research ICT Africa, an ICT policy think-tank based in Cape Town. His academic research focuses on the accessibility and affordability of ICT, broadband policy, and internet governance issues from an African perspective.
The “Airbnb Law” was signed by Mayor Ed Lee in October 2014 at San Francisco City Hall, legalising short-term rentals in SF with many conditions. Image of protesters by Kevin Krejci (Flickr).
Ride-hailing app Uber is close to replacing government-licensed taxis in some cities, while Airbnb’s accommodation rental platform has become a serious competitor to government-regulated hotel markets. Many other apps and platforms are trying to do the same in other sectors of the economy. In my previous post, I argued that platforms can be viewed in social science terms as economic institutions that provide infrastructures necessary for markets to thrive. I explained how the natural selection theory of institutional change suggests that people are migrating from state institutions to these new code-based institutions because they provide a more efficient environment for doing business. In this article, I will discuss some of the problems with this theory, and outline a more nuanced theory of institutional change that suggests that platforms’ effects on society will be complex and influence different people in different ways.
Economic sociologists like Neil Fligstein have pointed out that not everyone is as free to choose the means through which they conduct their trade. For example, if buyers in a market switch to new institutions, sellers may have little choice but to follow, even if the new institutions leave them worse off than the old ones did. Even if taxi drivers don’t like Uber’s rules, they may find that there is little business to be had outside the platform, and switch anyway. In the end, the choice of institutions can boil down to power. Economists have shown that even a small group of participants with enough market power—like corporate buyers—may be able to force a whole market to tip in favour of particular institutions. Uber offers a special solution for corporate clients, though I don’t know if this has played any part in the platform’s success.
Even when everyone participates in an institutional arrangement willingly, we still can’t assume that it will contribute to the social good. Cambridge economic historian Sheilagh Ogilvie has pointed out that an institution that is efficient for everyone who participates in it can still be inefficient for society as a whole if it affects third parties. For example, when Airbnb is used to turn an ordinary flat into a hotel room, it can cause nuisance to neighbours in the form of noise, traffic, and guests unfamiliar with the local rules. The convenience and low cost of doing business through the platform is achieved in part at others’ expense. In the worst case, a platform can make society not more but less efficient—by creating a ‘free rider economy’.
In general, social scientists recognize that different people and groups in society often have conflicting interests in how economic institutions are shaped. These interests are reconciled—if they are reconciled—through political institutions. Many social scientists thus look not so much at efficiencies but at political institutions to understand why economic institutions are shaped the way they are. For example, a democratic local government in principle represents the interests of its citizens, through political institutions such as council elections and public consultations. Local governments consequently try to strike a balance between the conflicting interests of hoteliers and their neighbours, by limiting hotel business to certain zones. In contrast, Airbnb as a for-profit business must cater to the interests of its customers, the would-be hoteliers and their guests. It has no mechanism, and more importantly, no mandate, to address on an equal footing the interests of third parties like customers’ neighbours. Perhaps because of this, 74% of Airbnb’s properties are not in the main hotel districts, but in ordinary residential blocks.
That said, governments face their own challenges in producing fair and efficient economic institutions. Not least among these is the fact that government regulators are at risk of capture by incumbent market participants, or at the very least face the innovator’s dilemma: it is easier to craft rules that benefit incumbents than rules that provide great but uncertain benefits to future market participants. For example, cities around the world operate taxi licensing systems in which only a strictly limited number of license owners are allowed to operate taxicabs. Whatever benefits this system offers customers in terms of quality assurance, among its biggest beneficiaries are the license owners, and among its losers are the would-be drivers excluded from the market. Institutional insiders and outsiders have conflicting interests, and government political institutions are often such that it is easier for them to side with the insiders.
Against this background, platforms appear almost as radical reformers that provide market access to those whom the establishment has denied it. For example, Uber recently announced that it aims to create one million jobs for women by 2020, a bold pledge in the male-dominated transport industry, and one that would likely not be possible if it adhered to government licensing requirements, as most licenses are owned by men. Having said that, Uber’s definition of a ‘job’ is something much more precarious and entrepreneurial than the conventional definition. My point here is not to side with either Uber or the licensing system, but to show that their social implications are very different. Both possess at least some flaws as well as redeeming qualities, many of which can be traced back to their political institutions and whom they represent.
What kind of new economic institutions are platform developers creating? How efficient are they? What other consequences, including unintended ones, do they have, and for whom? Whose interests are they geared to represent—capital vs. labour, consumer vs. producer, Silicon Valley vs. local business, incumbent vs. marginalised? These are the questions that policymakers, journalists, and social scientists ought to be asking at this moment of transformation in our economic institutions. Instead of being forced to choose between established institutions and platforms as they currently are, I hope that we will be able to discover ways to take what is good in both, and create infrastructure for an economy that is as fair and inclusive as it is efficient and innovative.
Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.
Protest for fair taxi laws in Portland; organisers want city leaders to make ride-sharing companies play by the same rules as cabs and Town cars. Image: Aaron Parecki (Flickr).
Cars were smashed and tires burned in France last month in protests against the ride hailing app Uber. Less violent protests have also been staged against Airbnb, a platform for renting short-term accommodation. Despite the protests, neither platform shows any signs of faltering. Uber says it has a million users in France, and is available in 57 countries. Airbnb is available in over 190 countries, and boasts over a million rooms, more than hotel giants like Hilton and Marriott. Policy makers at the highest levels are starting to notice the rise of these and similar platforms. An EU Commission flagship strategy paper notes that “online platforms are playing an ever more central role in social and economic life,” while the Federal Trade Commission recently held a workshop on the topic in Washington.
Journalists and entrepreneurs have been quick to coin terms that try to capture the essence of the social and economic changes associated with online platforms: the sharing economy; the on-demand economy; the peer-to-peer economy; and so on. Each perhaps captures one aspect of the phenomenon, but none goes very far in helping us make sense of all its potential and contradictions, including why some people love it and some would like to smash it to pieces. Instead of starting from the assumption that everything we see today is new and unprecedented, what if we dug into existing social science theory to see what it has to say about economic transformation and the emergence of markets?
Economic sociologists are adamant that markets don’t just emerge by themselves: they are always based on some kind of underlying infrastructure that allows people to find out what goods and services are on offer, agree on prices and terms, pay, and have a reasonable expectation that the other party will honour the agreement. The oldest market infrastructure is the personal social network: traders hear what’s on offer through word of mouth and trade only with those whom they personally know and trust. But personal networks alone couldn’t sustain the immense scale of trading in today’s society. Every day we do business with strangers and trust them to provide for our most basic needs. This is possible because modern society has developed institutions—things like private property, enforceable contracts, standardised weights and measures, consumer protection, and many other general and sector-specific norms and facilities. By enabling and constraining everyone’s behaviour in predictable ways, institutions constitute a more robust and inclusive infrastructure for markets than personal social networks.
Modern institutions didn’t of course appear out of nowhere. Between prehistoric social networks and the contemporary institutions of the modern state, there is a long historical continuum of economic institutions, from ancient trade routes with their customs to medieval fairs with their codes of conduct to state-enforced trade laws of the early industrial era. Institutional economists led by Oliver Williamson and economic historians led by Douglass North theorised in the 1980s that economic institutions evolve towards more efficient forms through a process of natural selection. As new institutional forms become possible thanks to technological and organisational innovation, people switch to cheaper, easier, more secure, and overall more efficient institutions out of self-interest. Old and cumbersome institutions fall into disuse, and society becomes more efficient and economically prosperous as a result. Williamson and North both later received the Nobel Memorial Prize in Economic Sciences.
It is easy to frame platforms as the next step in such an evolutionary process. Even if platforms don’t replace state institutions, they can plug gaps that remain in the state-provided infrastructure. For example, enforcing a contract in court is often too expensive and unwieldy a way to secure transactions between individual consumers. Platforms provide cheaper and easier alternatives to formal contract enforcement, in the form of reputation systems that allow participants to rate each other’s conduct and view past ratings. Thanks to this, small transactions like sharing a commute, which previously only happened within personal networks, can now potentially take place on a wider scale, resulting in greater resource efficiency and prosperity (the ‘sharing economy’). Platforms are not the first companies to plug holes in state-provided market infrastructure, though: private arbitrators, recruitment agencies, and credit rating firms have been doing similar things for a long time.
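To make the mechanism concrete, here is a minimal sketch in Python of the kind of reputation system described above. It is a generic illustration, not any particular platform’s implementation; the class, the names and the 1–5 scoring scale are all assumptions.

```python
from collections import defaultdict
from typing import Optional

class ReputationSystem:
    """A toy reputation ledger: participants rate each other after a
    transaction, and anyone can check a counterparty's track record
    before agreeing to trade."""

    def __init__(self):
        self.ratings = defaultdict(list)  # user -> list of (score, comment)

    def rate(self, rater: str, ratee: str, score: int, comment: str = "") -> None:
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings[ratee].append((score, comment))

    def reputation(self, user: str) -> Optional[float]:
        """Average score, or None if the user has no history yet."""
        history = self.ratings[user]
        if not history:
            return None
        return sum(score for score, _ in history) / len(history)

# A small transaction between strangers, secured by ratings rather than courts:
system = ReputationSystem()
system.rate("passenger42", "driver7", 5, "smooth ride, on time")
system.rate("passenger99", "driver7", 4)
print(system.reputation("driver7"))  # 4.5
```

Even this toy version shows why reputation can stand in for formal contract enforcement: it is the visible record of past conduct, rather than the threat of litigation, that disciplines behaviour.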
What’s arguably new about platforms, though, is that some of the most popular ones are not mere complements, but almost complete substitutes for state-provided market infrastructures. Uber provides a complete substitute for government-licensed taxi infrastructure, addressing everything from quality and discovery to trust and payment. Airbnb provides a similarly sweeping solution for short-term accommodation rental. Both platforms have been hugely successful; in San Francisco, Uber has far surpassed the city’s official taxi market in size. The sellers on these platforms are not just consumers wanting to make better use of their resources, but also firms and professionals switching over from the state infrastructure. It is as if people and companies were abandoning their national institutions and emigrating en masse to Platform Nation.
From the natural selection perspective, this move from state institutions to platforms seems easy to understand. State institutions are designed by committee and carry all kinds of historical baggage, while platforms are designed from the ground up to address their users’ needs. Government institutions are geographically fragmented, while platforms offer a seamless experience from one city, country, and language area to the next. Government offices have opening hours and queues, while platforms use the latest technologies to provide services around the clock (the ‘on-demand economy’). Given the choice, people switch to the most efficient institutions, and society becomes more efficient as a result. The policy implications of the theory are that government shouldn’t try to stop people from using Uber and Airbnb, and that it shouldn’t try to impose its evidently less efficient norms on the platforms. Let competing platforms innovate new regulatory regimes, and let people vote with their feet; let there be a market for markets.
The natural selection theory of institutional change provides a compellingly simple way to explain the rise of platforms. However, it has difficulty explaining some important facts, such as why economic institutions have historically developed differently in different parts of the world, and why some people now protest vehemently against supposedly better institutions. Indeed, in the years since the theory was first introduced, social scientists have discovered significant problems with it. Economic sociologists like Neil Fligstein have noted that not everyone is as free to choose the institutions that they use. Economic historian Sheilagh Ogilvie has pointed out that even institutions that are efficient for those who participate in them can still sometimes be inefficient for society as a whole. These points suggest a different theory of institutional change, which I will apply to online platforms in my next post.
Vili Lehdonvirta is a Research Fellow and DPhil Programme Director at the Oxford Internet Institute, and an editor of the Policy & Internet journal. He is an economic sociologist who studies the social and economic dimensions of new information technologies around the world, with particular expertise in digital markets and crowdsourcing.