Mapping Fentanyl Trades on the Darknet

My colleagues Joss Wright, Martin Dittus and I have been scraping the world’s largest darknet marketplaces over the last few months, as part of our darknet mapping project. The data we collected allow us to explore a wide range of trading activities, including the trade in the synthetic opioid Fentanyl, one of the drugs blamed for the rapid rise in overdose deaths and widespread opioid addiction in the US.

The above map shows the global distribution of the Fentanyl trade on the darknet. The US accounts for almost 40% of global darknet Fentanyl sales, with Canada and Australia at 15% and 12%, respectively. The UK and Germany are the largest sellers in Europe, with 9% and 5% of sales. While China is often mentioned as an important source of the drug, it accounts for only 4% of darknet sales. However, this does not necessarily mean that China is not the ultimate site of production: many of the sellers in places like the US, Canada, and Western Europe are likely intermediaries rather than producers themselves.

In the next few months, we’ll be sharing more visualisations of the economic geographies of products on the darknet. In the meantime you can find out more about our work by Exploring the Darknet in Five Easy Questions.

Follow the project here: https://www.oii.ox.ac.uk/research/projects/economic-geog-darknet/

Twitter: @OiiDarknet

Could data pay for global development? Introducing data financing for global good

“If data is the new oil, then why aren’t we taxing it like we tax oil?” That was the essence of the provocative brief that set in motion our recent 6-month research project funded by the Rockefeller Foundation. The results are detailed in the new report: Data Financing for Global Good: A Feasibility Study.

The parallels between data and oil break down quickly once you start considering practicalities such as measuring and valuing data. Data is, after all, a highly heterogeneous good whose value is context-specific — very different from a commodity such as oil that can be measured and valued by the barrel. But even if the value of data can’t simply be metered and taxed, are there other ways in which the data economy could be more directly aligned with social good?

Data-intensive industries already contribute to social good by producing useful services and paying taxes on their profits (though some pay regrettably little). But are there ways in which the data economy could directly finance global causes such as climate change prevention, poverty alleviation and infrastructure? Such mechanisms should not just arbitrarily siphon off money from industry, but also contribute value back to the data economy by correcting market failures and investment gaps. The potential impacts are significant: estimates value the data economy at around seven percent of GDP in rich industrialised countries, or around ten times the value of the United Nations development aid spending goal.
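
To see where the “ten times” multiple comes from, here is a back-of-envelope sketch. It assumes the aid goal referred to is the long-standing UN target of 0.7% of national income, and treats GDP and GNI as interchangeable at this level of precision:

```python
# Back-of-envelope check of the "ten times" claim (all figures rough).
data_economy_share = 0.07    # data economy: ~7% of GDP (estimate cited above)
un_aid_target_share = 0.007  # UN development aid target: 0.7% of GNI

multiple = data_economy_share / un_aid_target_share
print(f"data economy is roughly {multiple:.0f}x the UN aid spending goal")  # ~10x
```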

Here’s where “data financing” comes in. It’s a term we coined that’s based on innovative financing, a concept increasingly used in the philanthropical world. Innovative financing refers to initiatives that seek to unlock private capital for the sake of global development and socially beneficial projects, which face substantial funding gaps globally. Since government funding towards addressing global challenges is not growing, the proponents of innovative financing are asking how else these critical causes could be funded. An existing example of innovative financing is the UNITAID air ticket levy used to advance global health.

Data financing, then, is a subset of innovative financing that refers to mechanisms that attempt to redirect a slice of the value created in the global data economy towards broader social objectives. For instance, a Global Internet Subsidy funded by large Internet companies could help to educate people and build infrastructure in the world’s marginalized regions, in the long run also growing the market for Internet companies’ services. But such a model would need well-designed governance mechanisms to avoid the pitfalls of current Internet subsidization initiatives, which risk failing because of well-founded concerns that they further entrench Internet giants’ dominance over emerging digital markets.

Besides the Global Internet Subsidy, other data financing models examined in the report are a Privacy Insurance for personal data processing, a Shared Knowledge Duty payable by businesses profiting from open and public data, and an Attention Levy to disincentivise intrusive marketing. Many of these have been considered before, and they come with significant economic, legal, political, and technical challenges. Our report considers these challenges in turn, assesses the feasibility of potential solutions, and presents rough estimates of potential financial impacts.

Some of the prevailing business models of the data economy — provoking users’ attention, extracting their personal information, and monetizing it through advertising — are more or less taken for granted today. But they are something of a historical accident, an unanticipated corollary to some of the technical and political decisions made early in the Internet’s design. Certainly they are not any inherent feature of data as such. Although our report focuses on the technical, legal, and political practicalities of the idea of data financing, it also invites a careful reader to question some of the accepted truths on how a data-intensive economy could be organized, and what business models might be possible.

Read the report: Lehdonvirta, V., Mittelstadt, B. D., Taylor, G., Lu, Y. Y., Kadikov, A., and Margetts, H. (2016) Data Financing for Global Good: A Feasibility Study. University of Oxford: Oxford Internet Institute.

Should there be a better accounting of the algorithms that choose our news for us?

A central ideal of democracy is that political discourse should allow a fair and critical exchange of ideas and values. But political discourse is unavoidably mediated by the mechanisms and technologies we use to communicate and receive information — and content personalization systems (think search engines, social media feeds and targeted advertising), and the algorithms they rely upon, create a new type of curated media that can undermine the fairness and quality of political discourse.

A new article by Brent Mittelstadt explores the challenges of enforcing a political right to transparency in content personalization systems. First, he explains the value of transparency to political discourse and suggests how content personalization systems undermine open exchange of ideas and evidence among participants: at a minimum, personalization systems can undermine political discourse by curbing the diversity of ideas that participants encounter. Second, he explores work on the detection of discrimination in algorithmic decision making, including techniques of algorithmic auditing that service providers can employ to detect political bias. Third, he identifies several factors that inhibit auditing and thus indicate reasonable limitations on the ethical duties incurred by service providers — content personalization systems can function opaquely and be resistant to auditing because of poor accessibility and interpretability of decision-making frameworks. Finally, Brent concludes with reflections on the need for regulation of content personalization systems.

He notes that no matter how auditing is pursued, standards to detect evidence of political bias in personalized content are urgently required. Methods are needed to routinely and consistently assign political value labels to content delivered by personalization systems. This is perhaps the most pressing area for future work—to develop practical methods for algorithmic auditing.

The right to transparency in political discourse may seem unusual and far-fetched. However, standards already set by the U.S. Federal Communications Commission’s fairness doctrine — no longer in force — and the British Broadcasting Corporation’s fairness principle both demonstrate the importance of the idealized version of political discourse described here. Both precedents promote balance in public political discourse by setting standards for delivery of politically relevant content. Whether it is appropriate to hold service providers that use content personalization systems to a similar standard remains a crucial question.

Read the full article: Mittelstadt, B. (2016) Auditing for Transparency in Content Personalization Systems. International Journal of Communication 10(2016), 4991–5002.

We caught up with Brent to explore the broader implications of the study:

Ed: We basically accept that the tabloids will be filled with gross bias, populism and lies (in order to sell copy) — and editorial decisions are not generally transparent to us. In terms of their impact on the democratic process, what is the difference between the editorial boardroom and a personalising social media algorithm?

Brent: There are a number of differences. First, although not necessarily transparent to the public, one hopes that editorial boardrooms are at least transparent to those within the news organisations. Editors can discuss and debate the tone and factual accuracy of their stories, explain their reasoning to one another, reflect upon the impact of their decisions on their readers, and generally have a fair debate about the merits and weaknesses of particular content.

This is not the case for a personalising social media algorithm; those working with the algorithm inside a social media company are often unable to explain why the algorithm is functioning in a particular way, or why it determined a particular story or topic to be ‘trending’ or displayed it to some users and not others. It is also far more difficult to ‘fact check’ algorithmically curated news; a news item can be widely disseminated merely by many users posting or interacting with it, without any purposeful dissemination or fact checking by the platform provider.

Another big difference is the degree to which users can be aware of the bias of the stories they are reading. Whereas a reader of The Daily Mail or The Guardian will have some idea of the values of the paper, the same cannot be said of platforms offering algorithmically curated news and information. The platform can be neutral insofar as it disseminates news items and information reflecting a range of values and political viewpoints. A user will encounter items reflecting her particular values (or, more accurately, her history of interactions with the platform and the values inferred from them), but these values, and their impact on her exposure to alternative viewpoints, may not be apparent to the user.

Ed: And how is content “personalisation” different to content filtering (e.g. as we see with the Great Firewall of China) that people get very worked up about? Should we be more worried about personalisation?

Brent: Personalisation and filtering are essentially the same mechanism; information is tailored to a user or users according to some prevailing criteria. One difference is whether content is merely infeasible to access, or technically inaccessible. Content of all types will typically still be accessible in principle when personalisation is used, but the user will have to make an effort to access content that is not recommended or otherwise given special attention. Filtering systems, in contrast, will impose technical measures to make particular content inaccessible from a particular device or geographical area.
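
A sketch in illustrative Python may help make the mechanical difference concrete. The functions and affinity scores below are invented for the purpose and reflect no real platform’s implementation:

```python
def personalise(items, user_affinity):
    """Personalisation: everything stays accessible, but items the system
    predicts the user cares about are ranked first; the rest take effort
    to find."""
    return sorted(items, key=lambda item: user_affinity.get(item, 0.0),
                  reverse=True)

def filter_content(items, blocklist):
    """Filtering: blocked items are removed outright; no amount of
    scrolling will surface them on this device or in this region."""
    return [item for item in items if item not in blocklist]

feed = ["news_a", "news_b", "news_c"]
print(personalise(feed, {"news_c": 0.9, "news_a": 0.2}))  # reordered, all present
print(filter_content(feed, {"news_b"}))                   # news_b is simply gone
```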

Another difference is the source of the criteria used to set the visibility of different types of content. In the case of personalisation, these criteria are typically based on the user’s (inferred) interests, values, past behaviours and explicit requests. Critically, these values are not necessarily apparent to the user. For filtering, criteria are typically externally determined by a third party, often a government. Some types of information are set off limits, according to the prevailing values of the third party. It is the imposition of external values, which limit the capacity of users to access content of their choosing, which often causes an outcry against filtering and censorship.

Importantly, the two mechanisms do not necessarily differ in terms of the transparency of the limiting factors or rules to users. In some cases, such as the recently proposed ban in the UK of adult websites that do not provide meaningful age verification mechanisms, the criteria that determine whether sites are off limits will be publicly known at a general level. In other cases, and especially with personalisation, the user inside the ‘filter bubble’ will be unaware of the rules that determine whether content is (in)accessible. And it is not always the case that the platform provider intentionally keeps these rules secret. Rather, the personalisation algorithms and background analytics that determine the rules can be too complex, inaccessible or poorly understood even by the provider to give the user any meaningful insight.

Ed: Where are these algorithms developed: are they basically all proprietary? i.e. how would you gain oversight of massively valuable and commercially sensitive intellectual property?

Brent: Personalisation algorithms tend to be proprietary, and thus are not normally open to public scrutiny in any meaningful sense. In one sense this is understandable; personalisation algorithms are valuable intellectual property. At the same time the lack of transparency is a problem, as personalisation fundamentally affects how users encounter and digest information on any number of topics. As recently argued, it may be the case that personalisation of news impacts on political and democratic processes. Existing regulatory mechanisms have not been successful in opening up the ‘black box’ so to speak.

It can be argued, however, that legal requirements should be adopted to require these algorithms to be open to public scrutiny due to the fundamental way they shape our consumption of news and information. Oversight can take a number of forms. As I argue in the article, algorithmic auditing is one promising route, performed both internally by the companies themselves, and externally by a government agency or researchers. A good starting point would be for the companies developing and deploying these algorithms to extend their cooperation with researchers, thereby allowing a third party to examine the effects these systems are having on political discourse, and society more broadly.

Ed: By “algorithm audit” — do you mean examining the code and inferring what the outcome might be in terms of bias, or checking the outcome (presumably statistically) and inferring that the algorithm must be introducing bias somewhere? And is it even possible to meaningfully audit personalisation algorithms, when they might rely on vast amounts of unpredictable user feedback to train the system?

Brent: Algorithm auditing can mean both of these things, and more. Audit studies are a tool already in use, whereby human participants introduce different inputs into a system, and examine the effect on the system’s outputs. Similar methods have long been used to detect discriminatory hiring practices, for instance. Code audits are another possibility, but are generally prohibitive due to problems of access and complexity. Also, even if you can access and understand the code of an algorithm, that tells you little about how the algorithm performs in practice when given certain input data. Both the algorithm and input data would need to be audited.

Alternatively, auditing can assess just the outputs of the algorithm; recent work to design mechanisms to detect disparate impact and discrimination, particularly in the Fairness, Accountability and Transparency in Machine Learning (FAT-ML) community, is a great example of this type of auditing. Algorithms can also be designed to attempt to prevent or detect discrimination and other harms as they occur. These methods are as much about the operation of the algorithm as they are about the nature of the training and input data, which may itself be biased. In short, auditing is very difficult, but there are promising avenues of research and development. Once we have reliable auditing methods, the next major challenge will be to tailor them to specific sectors; a one-size-fits-all approach to auditing is not on the cards.
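
To make output auditing concrete, here is a minimal sketch of a disparate-impact check of the kind discussed in the FAT-ML literature. The data are toy values, and the 80% threshold is borrowed from US employment-discrimination practice rather than from any established standard for news personalisation:

```python
def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Output-only audit: compare the rate of a favourable outcome
    (e.g. a story being shown) between two user groups, knowing
    nothing about the algorithm's internals."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = user was shown the item, 0 = not shown (toy data)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% shown
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% shown

ratio = disparate_impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("flag for review: possible disparate impact")
```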

Ed: Do you think this is a real problem for our democracy? And what is the solution if so?

Brent: It’s difficult to say, in part because access and data to study the effects of personalisation systems are hard to come by. It is one thing to prove that personalisation is occurring on a particular platform, or to show that users are systematically displayed content reflecting a narrow range of values or interests. It is quite another to prove that these effects are having an overall harmful effect on democracy. Digesting information is one of the most basic elements of social and political life, so any mechanism that fundamentally changes how information is encountered should be subject to serious and sustained scrutiny.

Assuming personalisation actually harms democracy or political discourse, mitigating its effects is quite a different issue. Transparency is often treated as the solution, but merely opening up algorithms to public and individual scrutiny will not in itself solve the problem. Information about the functionality and effects of personalisation must be meaningful to users if anything is going to be accomplished.

At a minimum, users of personalisation systems should be given more information about their blind spots, about the types of information they are not seeing, or where they lie on the map of values or criteria used by the system to tailor content to users. A promising step would be proactively giving the user some idea of what the system thinks it knows about them, or how they are being classified or profiled, without the user first needing to ask.


Brent Mittelstadt was talking to blog editor David Sutcliffe.

The blockchain paradox: Why distributed ledger technologies may do little to transform the economy

Bitcoin’s underlying technology, the blockchain, is widely expected to find applications far beyond digital payments. It is celebrated as a “paradigm shift in the very idea of economic organization”. But the OII’s Professor Vili Lehdonvirta contends that such revolutionary potentials may be undermined by a fundamental paradox that has to do with the governance of the technology.

I recently gave a talk at the Alan Turing Institute (ATI) under the title The Problem of Governance in Distributed Ledger Technologies. The starting point of my talk was that it is frequently posited that blockchain technologies will “revolutionize industries that rely on digital record keeping”, such as financial services and government. In the talk I applied elementary institutional economics to examine what blockchain technologies really do in terms of economic organization, and what problems this gives rise to. In this essay I present an abbreviated version of the argument. Alternatively you can watch a video of the talk below.

Watch the video of the talk: https://www.youtube.com/watch?v=eNrzE_UfkTw

First, it is necessary to note that there is quite a bit of confusion as to what exactly is meant by a blockchain. When people talk about “the” blockchain, they often refer to the Bitcoin blockchain, an ongoing ledger of transactions started in 2009 and maintained by the approximately 5,000 computers that form the Bitcoin peer-to-peer network. The term blockchain can also be used to refer to other instances or forks of the same technology (“a” blockchain). The term “distributed ledger technology” (DLT) has also gained currency recently as a more general label for related technologies.

In each case, I think it is fair to say that the reason that so many people are so excited about blockchain today is not the technical features as such. In terms of performance metrics like transactions per second, existing blockchain technologies are in many ways inferior to more conventional technologies. This is frequently illustrated with the point that the Bitcoin network is limited by design to process at most approximately seven transactions per second, whereas the Visa payment network has a peak capacity of 56,000 transactions per second. Other implementations may have better performance, and on some other metrics blockchain technologies can perhaps beat more conventional technologies. But technical performance is not why so many people think blockchain is revolutionary and paradigm-shifting.

The reason that blockchain is making waves is that it promises to change the very way economies are organized: to eliminate centralized third parties. Let me explain what this means in theoretical terms. Many economic transactions, such as long-distance trade, can be modeled as a game of Prisoners’ Dilemma. The buyer and the seller can either cooperate (send the shipment/payment as promised) or defect (not send the shipment/payment). If the buyer and the seller don’t trust each other, then the equilibrium solution is that neither player cooperates and no trade takes place. This is known as the fundamental problem of cooperation.
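
A minimal sketch with standard textbook payoffs (the numbers are illustrative) shows why mutual defection is the equilibrium in the absence of enforcement:

```python
# One-shot Prisoners' Dilemma between buyer and seller (illustrative payoffs).
# Each entry: (row player's payoff, column player's payoff).
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),  # both honour the deal: gains from trade for both
    (C, D): (0, 5),  # you ship, they don't pay: you lose, they gain
    (D, C): (5, 0),
    (D, D): (1, 1),  # no trade: the inefficient equilibrium
}

def best_response(opponent_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max([C, D], key=lambda my_move: payoffs[(my_move, opponent_move)][0])

# Whatever the other side does, defecting pays more...
print(best_response(C), best_response(D))  # defect defect
# ...so without trust or enforcement, (defect, defect) is the equilibrium.
```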

There are several classic solutions to the problem of cooperation. One is reputation. In a community of traders where members repeatedly engage in exchange, any trader who defects (fails to deliver on a promise) will gain a negative reputation, and other traders will refuse to trade with them out of self-interest. This threat of exclusion from the community acts as a deterrent against defection, and the equilibrium under certain conditions becomes that everyone will cooperate.

Reputation is only a limited solution, however. It only works within communities where reputational information spreads effectively, and traders may still defect if the payoff from doing so is greater than the loss of future trade. Modern large-scale market economies where people trade with strangers on a daily basis are only possible because of another solution: third-party enforcement. In particular, this means state-enforced contracts and bills of exchange enforced by banks. These third parties in essence force parties to cooperate and to follow through with their promises.

Besides trade, another example of the problem of cooperation is currency. Currency can be modeled as a multiplayer game of Prisoners’ Dilemma. Traders collectively have an interest in maintaining a stable currency, because it acts as a lubricant to trade. But each trader individually has an interest in debasing the currency, in the sense of paying with fake money (what in blockchain-speak is referred to as double spending). Again the classic solution to this dilemma is third-party enforcement: the state polices metal currencies and punishes counterfeiters, and banks control ledgers and prevent people from spending money they don’t have.
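
The bank’s enforcement role can be reduced to a few lines of code: one authoritative record of balances, and a rule that rejects any spend exceeding them. This is a toy sketch, not any real bank’s system:

```python
class TrustedLedger:
    """A centralized third-party enforcer: a single authoritative record
    of balances makes double spending impossible by construction."""

    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("rejected: insufficient funds (double spend)")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = TrustedLedger()
ledger.deposit("alice", 10)
ledger.transfer("alice", "bob", 10)        # fine
try:
    ledger.transfer("alice", "carol", 10)  # spending the same money twice
except ValueError as error:
    print(error)                           # the enforcer blocks it
```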

So third-party enforcement is the dominant model of economic organization in today’s market economies. But it’s not without its problems. The enforcer is in a powerful position in relation to the enforced: banks could extract exorbitant fees, and states could abuse their power by debasing the currency, illegitimately freezing assets, or enforcing contracts in unfair ways. One classic solution to the problems of third-party enforcement is competition. Bank fees are kept in check by competition: the enforced can switch to another enforcer if the fees get excessive.

But competition is not always a viable solution: there is a very high cost to switching to another state (i.e. becoming a refugee) if your state starts to abuse its power. Another classic solution is accountability: democratic institutions that try to ensure the enforcer acts in the interest of the enforced. For instance, the interbank payment messaging network SWIFT is a cooperative society owned by its member banks. The members elect a Board of Directors that is the highest decision making body in the organization. This way, they attempt to ensure that SWIFT does not try to extract excessive fees from the member banks or abuse its power against them. Still, even accountability is not without its problems, since it comes with the politics of trying to reconcile different members’ diverging interests as best as possible.

Into this picture enters blockchain: a technology where third-party enforcers are replaced with a distributed network that enforces the rules. It can enforce contracts, prevent double spending, and cap the size of the money pool, all without participants having to cede power to any particular third party who might abuse the power. No rent-seeking, no abuses of power, no politics — blockchain technologies can be used to create “math-based money” and “unstoppable” contracts that are enforced with the impartiality of a machine instead of the imperfect and capricious human bureaucracy of a state or a bank. This is why so many people are so excited about blockchain: its supposed ability to change economic organization in a way that transforms dominant relationships of power.

Unfortunately this turns out to be a naive understanding of blockchain, and the reality is inevitably less exciting. Let me explain why. In economic organization, we must distinguish between enforcing rules and making rules. Laws are rules enforced by state bureaucracy and made by a legislature. The SWIFT Protocol is a set of rules enforced by SWIFTNet (a centralized computational system) and made, ultimately, by SWIFT’s Board of Directors. The Bitcoin Protocol is a set of rules enforced by the Bitcoin Network (a distributed network of computers) and made by — whom, exactly? Who makes the rules matters at least as much as who enforces them. Blockchain technology may provide for completely impartial rule-enforcement, but that is of little comfort if the rules themselves are changed. This rule-making is what we refer to as governance.
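
The distinction can be put in software terms: a network mechanically enforces whatever parameters its code contains, but a human still chose those parameters. In this toy sketch the constant echoes Bitcoin’s old 1 MB block size cap, though the code is purely illustrative, not Bitcoin’s actual implementation:

```python
# Rule-enforcement vs rule-making, in miniature.
# The constant below is a *rule*: every node enforces it impartially.
# But a human (or committee) chose its value; that choice is governance.
MAX_BLOCK_SIZE_BYTES = 1_000_000  # illustrative; echoes Bitcoin's old 1 MB cap

def validate_block(block_bytes: bytes) -> bool:
    """Every node runs this check mechanically, with no discretion."""
    return len(block_bytes) <= MAX_BLOCK_SIZE_BYTES

print(validate_block(b"x" * 500_000))    # True: within the rule
print(validate_block(b"x" * 2_000_000))  # False: rejected, impartially
# Changing MAX_BLOCK_SIZE_BYTES is a rule *change*: precisely what the
# "block size debate" discussed below is about. Who gets to edit this line?
```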

Using Bitcoin as an example, the initial versions of the protocol (i.e. the rules) were written by the pseudonymous Satoshi Nakamoto, and later versions are released by a core development team. The development team is not autocratic: a complex set of social and technical entanglements means that other people are also influential in how Bitcoin’s rules are set; in particular, so-called mining pools, headed by a handful of individuals, are very influential. The point here is not to attempt to pick apart Bitcoin’s political order; the point is that Bitcoin has not in any sense eliminated human politics. Humans are still very much in charge of setting the rules that the network enforces.

There is, however, no formal process for how governance works in Bitcoin, because for a very long time these politics were not explicitly recognized, and many people still don’t recognize them, preferring instead the idea that Bitcoin is purely “math-based money” and that all the developers are doing is apolitical plumbing work. But what has started to make this position untenable and Bitcoin’s politics visible is the so-called “block size debate” — a big disagreement between factions of the Bitcoin community over the future direction of the rules. Different stakeholders have different interests in the matter, and in the absence of a robust governance mechanism that could reconcile these interests, this has resulted in open “warfare” between the camps over social media and discussion forums.

Will competition solve the issue? Multiple “forks” of the Bitcoin protocol have emerged, each with slightly different rules. But network economics teaches us that competition does not work well at all in the presence of strong network effects: everyone prefers to be in the network where other people are, even if its rules are not exactly what they would prefer. Network markets tend to tip in favour of the largest network. Every fork/split diminishes the total value of the system, and those on the losing side of a fork may eventually find their assets worthless.
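
A toy utility model illustrates the tipping dynamic. All the numbers are invented, but they show how a large network can dominate even when its rules fit users worse:

```python
def utility(network_size, rule_mismatch):
    """Toy model: the value of joining a network grows with its size
    (network effect) and shrinks with how far its rules are from what
    the user would prefer."""
    return network_size - 50 * rule_mismatch

# A large incumbent with rules I dislike vs. a small fork with rules I like:
incumbent = utility(network_size=1000, rule_mismatch=0.6)  # 970.0
fork = utility(network_size=50, rule_mismatch=0.1)         # 45.0

print(incumbent > fork)  # True: the incumbent still wins, so the market tips
```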

If competition doesn’t work, this leaves us with accountability. There is no obvious path by which Bitcoin could develop accountable governance institutions. But other blockchain projects, especially those that are gaining some kind of commercial or public sector legitimacy, are designed from the ground up with some level of accountable governance. For instance, R3 is a firm that develops blockchain technology for use in the financial services industry. It has enrolled a consortium of banks to guide the effort, and its documents talk about the “mandate” it has from its “member banks”. Its governance model thus sounds a lot like the beginnings of something like SWIFT. Another example is RSCoin, designed by my ATI colleagues George Danezis and Sarah Meiklejohn, which is intended to be governed by a central bank.

Regardless of the model, my point is that blockchain technologies cannot escape the problem of governance. Whether they recognize it or not, they face the same governance issues as conventional third-party enforcers. You can use technologies to potentially enhance the processes of governance (e.g. transparency, online deliberation, e-voting), but you can’t engineer away governance as such. All this leads me to wonder how revolutionary blockchain technologies really are. If you still rely on a Board of Directors or similar body to make it work, how much has economic organization really changed?

And this leads me to my final point, a provocation: once you address the problem of governance, you no longer need blockchain; you can just as well use conventional technology that assumes a trusted central party to enforce the rules, because you’re already trusting somebody (or some organization/process) to make the rules. I call this blockchain’s ‘governance paradox’: once you master it, you no longer need it. Indeed, R3’s design seems to have something called “uniqueness services”, which look a lot like trusted third-party enforcers (though this isn’t clear from the white paper). RSCoin likewise relies entirely on trusted third parties. The differences from conventional technology are no longer that apparent.

Perhaps blockchain technologies can still deliver better technical performance, like better availability and data integrity. But it’s not clear to me what real changes to economic organization and power relations they could bring about. I’m very happy to be challenged on this, if you can point out a place in my reasoning where I’ve made an error. Understanding grows via debate. But for the time being, I can’t help but be very skeptical of the claims that blockchain will fundamentally transform the economy or government.

The governance of DLTs is also examined in this report chapter that I coauthored earlier this year:

Lehdonvirta, V. & Robleh, A. (2016) Governance and Regulation. In: M. Walport (ed.), Distributed Ledger Technology: Beyond Blockchain. London: UK Government Office for Science, pp. 40-45.


Assessing the Ethics and Politics of Policing the Internet for Extremist Material

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*Please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).


Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others.

We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organize, radicalize people, and communicate their messages.

Josh: Absolutely. A large part of our initial interest when writing the report lay in finding out more about the role of the Internet in facilitating the organization, coordination, recruitment and inspiration of violent extremism. The impact of this has been felt very recently in Paris and Beirut, and many other places worldwide. This report pre-dates these most recent developments, but was written in the context of these sorts of events.

Given the Internet is so embedded in our social lives, I think it would have been surprising if political extremist activity hadn’t gone online as well. Of course, the Internet is a very powerful tool and in the wrong hands it can be a very destructive force. But other research, separate from this report, has found that the Internet is not usually people’s first point of contact with extremism: more often than not that actually happens offline through people you know in the wider world. Nonetheless it can definitely serve as an incubator of extremism and inspire further attacks.

Ed: In the report you identify different groups in society that are affected by, and affecting, issues of extremism, privacy, and governance – including civil society, academics, large corporations and governments.

Josh: Yes, in the later stages of the report we do divide society into these groups, and offer some perspectives on what they do, and what they think about counter-extremism. For example, in terms of counter-speech there are different roles for government, civil society, and industry. There is this idea that ISIS are really good at social media, and that that is how they are powering a lot of their support; but one of the people that we spoke to said that it is not the case that ISIS are really good, it is just that governments are really bad!

We shouldn’t ask government to participate in the social network: bureaucracies often struggle to be really flexible and nimble players on social media. In contrast, civil society groups tend to be more engaged with communities and know how to “speak the language” of those who might be vulnerable to radicalization. As such they can enter that dialogue in a much more informed and effective way.

The other tension, or paradigm, that we offer in this report is the distinction between whether people are ‘at risk’ or ‘a risk’. What we try to point to is that people can go from one to the other. They start by being ‘at risk’ of radicalization, but if they do get radicalized and become a violent threat to society, which happens in only a minority of cases, then they become ‘a risk’. Engaging with people who are ‘at risk’ highlights the importance of having respect and dialogue with communities that are often the first to be lambasted when things go wrong, but which seldom get all the help they need, or the credit when they get it right. We argue that civil society is particularly suited to being part of this process.

Ed: It seems like the things that people do or say online can only really be understood in terms of the context. But often we don’t have enough information, and it can be very hard to just look at something and say ‘This is definitely extremist material that is going to incite someone to commit terrorist or violent acts’.

Josh: Yes, I think you’re right. In the report we try to take what is a very complicated concept – extremist material – and divide it into more manageable chunks of meaning. We talk about three hierarchical levels. The degree of legal consensus over whether content should be banned decreases as it gets less extreme. The first level we identified was straight up provocation and hate speech. Hate speech legislation has been part of the law for a long time. You can’t incite racial hatred, you can’t incite people to crimes, and you can’t promote terrorism. Most countries in Europe have laws against these things.

The second level is the glorification and justification of terrorism. This is usually more post-hoc as by definition if you are glorifying something it has already happened. You may well be inspiring future actions, but that relationship between the act of violence and the speech act is different than with provocation. Nevertheless, some countries, such as Spain and France, have pushed hard on criminalising this. The third level is non-violent extremist material. This is the most contentious level, as there is very little consensus about what types of material should be called ‘extremist’ even though they are non-violent. One of the interviewees that we spoke to said that often it is hard to distinguish between someone who is just being friendly and someone who is really trying to persuade or groom someone to go to Syria. It is really hard to put this into a legal framework with the level of clarity that the law demands.

There is a proportionality question here. When should something be considered specifically illegal? And, then, if an illegal act has been committed what should the appropriate response be? This is bound to be very different in different situations.

Ed: Do you think that there are any immediate or practical steps that governments can take to improve the current situation? And do you think that there any ethical concerns which are not being paid sufficient attention?

Josh: In the report we raised a few concerns about existing government responses. There are lots of things besides privacy that could be seen as fundamental human rights and that are being encroached upon. Freedom of association and assembly is a really interesting one. We might not have the same reverence for a Facebook event plan or discussion group as we would for a protest in a town hall, but of course they are fundamentally pretty similar.

The wider danger here is the issue of mission creep. Once you have systems in place that can do potentially very powerful analytical investigatory things then there is a risk that we could just keep extending them. If something can help us fight terrorism then should we use it to fight drug trafficking and violent crime more generally? It feels to me like there is a technical-military-industrial complex mentality in government where if you build the systems then you just want to use them. In the same way that CCTV cameras record you irrespective of whether or not you commit a violent crime or shoplift, we need to ask whether the same panoptical systems of surveillance should be extended to the Internet. Now, to a large extent they are already there. But what should we train the torchlight on next?

This takes us back to the importance of having necessary, proportionate, and independently authorized processes. When you drill down into how rights like privacy should be balanced with security, it gets really complicated. But the basic process-driven things that we identified in the report are far simpler: if we accept that governments have the right to take certain actions in the name of security, then, no matter how important or life-saving those actions are, there are still protocols that governments must follow. We really wanted to infuse these issues into the debate through the report.

Read the full report: Brown, I. and Cowls, J. (2015) Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material. VOX-Pol Publications.


Josh Cowls is a student and researcher based at MIT, working to understand the impact of technology on politics, communication and the media.

Josh Cowls was talking to Blog Editor Bertie Vidgen.

New Voluntary Code: Guidance for Sharing Data Between Organisations

Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data with the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb and the other data mongers who have built services based solely on digital assets.

The use of this new abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation). Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.

Then there are the companies who have invited you to sign away the rights to your data and possibly your privacy too – the ones that ask you to sign the Terms and Conditions for access to a particular service (such as a music or online shopping service) or have asked you for access to your photos. And possibly you are one of the “worried well” who wear or carry a device that collects your health data and sends it back to storage in a faraway country, for analysis.

So unless you live in a lead-lined concrete bunker without any access to internet-connected devices, and you never pass by webcams or sensors, or use public transport or public services, then your data is being collected and shared. And for the majority of the time, you benefit from this enormously. The bus stop tells you exactly when the next bus is coming, you have easy access to services and entertainment fitted very well to your needs, and you can do most of your bank and utility transactions online in the peace and quiet of your own home. Beyond you as an individual, there are organisations “out there” sharing your data to provide you with better healthcare, education, smarter city services and secure and efficient financial services, and generally matching the demand for services with the people needing them.

So we most likely all have data that is being shared, and it is generally in our interest to share it. But how can we trust the organisations responsible for sharing our data? As an organisation, how can I know that my partner and supplier organisations are taking care of my client and product information?

Organisations taking these issues seriously are coming up with their own internal policy and guidelines. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During my time as a visiting associate at the Oxford Internet Institute, University of Oxford, I have developed a pragmatic solution in the form of a Voluntary Code. The Code has been produced using the guidelines for voluntary code development produced by the Office of Community Affairs, Industry Canada. More importantly, the Code is anchored in the developing ISO standards for the Governance of Data (the 38505 series). These standards apply the governance principles and model from the 38500 standard and introduce the concept of a data accountability map, highlighting six focus areas in which a governing body should apply governance. The early-stage standard suggests considering the aspects of Value, Risk and Constraint for each area, to determine what practice and policy should be applied to maximise the value of organisational data, whilst applying constraints as set by legislation and local policy, and minimising risk.

I am Head of the New Zealand delegation to the ISO group developing IT Service Management and IT Governance standards, SC40, and am leading the development of the 38505 series of Governance of Data standards, working with a talented editorial team of industry and standards experts from Australia, China and the Netherlands. I am confident that the robust ISO consensus-led process, involving subject matter experts from around the world, will result in the publication of best practice guidance for the governance of data, presented in a format that will have relevance and acceptance internationally.

In the meantime, however, I see a need to provide urgent advice to organisations struggling with policy for sharing data. I have used my time at Oxford to interview policy, ethics, smart city, open data, health informatics, education, cyber security and social science experts, as well as users, owners and curators of large data sets, and have come up with a “Voluntary Code for Data Sharing”. The Code takes three areas from the data accountability map in the developing ISO standard 38505-1 – namely Collect, Store and Distribute – and applies the aspects of Value, Risk and Constraint to provide seven maxims for sharing data. To assist with adoption and compliance, the Code provides references to best practice and examples. As the ISO standards for the Governance of Data develop, the Code will be updated, and new examples of good practice will be added as they come to light.
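For readers who find a concrete structure helpful, here is a minimal sketch of the area-by-aspect matrix the Code works from. It is purely illustrative: the three areas and three aspects are taken from the developing 38505-1 accountability map as described above, but the review prompts attached to each cell are hypothetical placeholders, not the Code’s actual seven maxims.

```python
# Illustrative sketch only: the Collect/Store/Distribute areas crossed
# with the Value/Risk/Constraint aspects, as described in the text.
# The prompts are hypothetical placeholders, not the Code's maxims.

AREAS = ("Collect", "Store", "Distribute")
ASPECTS = ("Value", "Risk", "Constraint")

# Hypothetical questions a governing body might attach to each cell.
PROMPTS = {
    ("Collect", "Value"): "What benefit justifies collecting this data?",
    ("Collect", "Risk"): "What exposure does collection itself create?",
    ("Collect", "Constraint"): "What legislation or consent limits collection?",
    ("Store", "Value"): "Does continued retention still add value?",
    ("Store", "Risk"): "How is the stored data protected from breaches?",
    ("Store", "Constraint"): "Where may the data lawfully be stored?",
    ("Distribute", "Value"): "What value does sharing return to each party?",
    ("Distribute", "Risk"): "Could recipients misuse or leak the data?",
    ("Distribute", "Constraint"): "What agreements govern onward sharing?",
}

def review(dataset: str) -> None:
    """Print the prompt for every area/aspect cell for one data set."""
    for area in AREAS:
        for aspect in ASPECTS:
            print(f"{dataset} | {area}/{aspect}: {PROMPTS[(area, aspect)]}")

review("customer_records")
```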

[A permanent home for the voluntary code is currently being organised; please email me in the meantime if you are interested in it: Alison.holt@longitude174.com]

The Code is deliberately short and succinct, but it does provide links for those who need to read more to understand the underpinning practices and standards, and for those tasked with implementing organisational data policy and practice. It cannot guarantee good outcomes: with new security threats arising daily, nobody can fully guarantee the safety of your information. However, if you deal with an organisation that is compliant with the Voluntary Code, then you can at least have assurance that the organisation has considered how it is using your data now and how it might want to reuse it in the future, how and where your data will be stored, and finally how your data will be distributed or discarded. And that’s a good start!


Alison Holt was an OII Academic Visitor in late 2015. She is an internationally acclaimed expert in the Governance of Information Technology and Data, heading up the New Zealand delegations to the international standards committees for IT Governance and Service Management (SC40) and Software and Systems Engineering (SC7). The British Computer Society published Alison’s first book on the Governance of IT in 2013.

Controlling the crowd? Government and citizen interaction on emergency-response platforms

There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov’s article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy and Internet 7,3) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down — not just to “mobilize them”.

RUSSIA, NEAR RYAZAN – 8 MAY 2011: Piled up wood in the forest one winter after a terribly huge forest fire in Russia in 2010. Image: Max Mayorov (Flickr).
My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated a volunteer-based response to the wildfires in central Russia. While working on this project, I realized that a crowdsourcing platform can bring citizen participation to a new level and transform sporadic initiatives by individual citizens and groups into large-scale, relatively well coordinated operations. Also important was that both the needs, and the forms of participation required to address them, were defined by the users themselves.

To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1] This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect both the authorities and the citizens to be mostly concerned with the emergency in the case of a natural disaster, the actual situation might be different.

Apparently, the emergence of independent, citizen-based collective action in response to a disaster was considered a threat by the institutional actors. First, it was a threat to the image of these institutions, which didn’t want citizens to be portrayed as the leading responders. Second, any type of citizen-based collective action, even if not purely political, may be a concern in authoritarian countries in particular. Accordingly, one can argue that, while citizens are struggling against a disaster, in some cases the traditional institutions may make substantial efforts to restrain and contain the actions of citizens. In this light, the role of information technologies can include not only enhancing citizen engagement and increasing the efficiency of the response, but also controlling the digital crowd of potential volunteers.

The purpose of this paper was to conceptualize the tension between the role of ICTs in engaging the crowd and its resources, and their role in controlling those resources. The research suggests a theoretical and methodological framework that allows us to explore this tension. The paper focuses on an analysis of specific platforms, presenting empirical data about their structure together with interviews with their developers and administrators. This data is used to identify how tools of engagement are transformed into tools of control, and what the major differences are between platforms that pursue these two goals. That said, obviously any platform can have properties of control and properties of engagement at the same time; however, the proportion of these two types of elements can differ significantly.

One of the core issues for my research is how traditional actors respond to fast, bottom-up innovation by citizens.[2] On the one hand, the authorities try to restrict the empowerment of citizens by the new tools. On the other hand, the institutional actors also seek to innovate and develop new tools that can restore the balance of power that has been challenged by citizen-based innovation. The tension between using digital tools for the engagement of the crowd and for control of the crowd can be considered one aspect of this dynamic.

That doesn’t mean that all state-backed platforms are created solely for the purpose of control. One can argue, however, that the development of digital tools that offer a mechanism of command and control over the resources of the crowd is prevalent among the projects that are supported by the authorities. This can also be approached as a means of using information technologies in order to include the digital crowd within the “vertical of power”, which is a top-down strategy of governance. That is why this paper seeks to conceptualize this phenomenon as “vertical crowdsourcing”.

The question of whether the use of a digital tool as a mechanism of control is intentional is to some extent secondary. What is important is that the analysis of platform structures, relying on activity theory, identifies a number of properties that allow us to argue that these tools are primarily tools of control. The conceptual framework introduced in the paper is used to follow the transformation of tools for the engagement of the crowd into tools of control over the crowd. That said, some of the interviews with the developers and administrators of the platforms suggest that the tools of control may have been developed intentionally, with crowd engagement a secondary concern.

[1] Asmolov, G. “Natural Disasters and Alternative Modes of Governance: The Role of Social Networks and Crowdsourcing Platforms in Russia”, in Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood, edited by Steven Livingston and Gregor Walter-Drop, Oxford University Press, 2013.

[2] Asmolov, G. “Dynamics of Innovation and the Balance of Power in Russia”, in State Power 2.0: Authoritarian Entrenchment and Political Engagement Worldwide, edited by Muzammil M. Hussain and Philip N. Howard, Ashgate, 2013.

Read the full article: Asmolov, G. (2015) Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations. Policy and Internet 7,3: 292–318.


Gregory Asmolov is a PhD student at the LSE, where he is studying crowdsourcing and the emergence of spontaneous order in situations of limited statehood. He is examining the emerging collaborative power of ICT-enabled crowds in crisis situations, and aims to investigate the topic drawing on evolutionary theories concerned with spontaneous action and the sustainability of voluntary networked organizations. He analyzes whether crowdsourcing practices can lead to the development of bottom-up online networked institutions and “peer-to-peer” governance.