OII

With political advertising increasingly taking place online, the question of regulation is becoming inescapable. In their latest paper, published in Policy & Internet, Junyan Zhu explores the definition of online political advertising and identifies key questions regulators must confront when devising effective frameworks.

The rapid surge of online political advertising in recent years has introduced a new dimension to election campaigns. Concerns have arisen regarding the potential consequences of this practice on democracy, including data privacy, voter manipulation, misinformation, and accountability issues. But what exactly is an online political advert? This question is hard to answer: 37 per cent of respondents to the 2021 Eurobarometer Survey couldn’t easily determine whether online content was a political advertisement or not. As of now, only a few platform companies, including Facebook and Google, have defined in their own terms what constitutes this form of content.

To address the conceptual challenges faced by policymakers, in our latest paper we conducted interviews with 19 experts from regulatory bodies, professional advertising associations, and civil society organisations engaged in discussions surrounding online political advertising in both the United Kingdom and the European Union. We delved into the policymakers’ perspectives, seeking to distil their understanding of what constitutes an “advert”, “online” platforms, and “political” content. Instead of crafting new definitions, we pinpointed the alternative factors at play and illustrated them through a sequence of decision trees. Specifically, our work led us to pose three questions that regulators need to confront.

What does it mean for content to be considered an “advert”? When we asked about the criteria for identifying an advert, the consistent key point that emerged was payment. The central question is whether payment is involved in content distribution or creation, and also when the payment occurs. Some interviewees also acknowledged the increasingly blurred boundaries between paid and unpaid content: there are organic ways of spreading material that don’t involve payment, such as an unpaid tweet.
These differences matter as they suggest alternative criteria for determining what should or should not count as an advert. What does it mean for an advert to be “online”? This turned out to be the most challenging question…
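The three questions above can be read as a toy decision procedure. The sketch below is purely illustrative: the factors and category labels paraphrase this summary, not the actual decision trees in the paper, whose criteria may differ.

```python
# Illustrative-only decision procedure over the three questions in the
# text: is it an "advert" (payment), is it "online", is it "political"?
from dataclasses import dataclass

@dataclass
class Content:
    paid_distribution: bool   # was payment involved in distributing it?
    paid_creation: bool       # was payment involved in creating it?
    on_online_platform: bool  # does it appear on an online platform?
    political: bool           # is the content political?

def classify(c: Content) -> str:
    # Question 1: an "advert"? Payment was the consistent criterion.
    if not (c.paid_distribution or c.paid_creation):
        return "not an advert (organic content, e.g. an unpaid tweet)"
    # Question 2: is the advert "online"?
    if not c.on_online_platform:
        return "offline advert"
    # Question 3: is the advert "political"?
    if not c.political:
        return "online commercial advert"
    return "online political advert"

print(classify(Content(True, False, True, True)))
```

The point of the sketch is only that each factor (payment, platform, content) is a separate branching decision, which is why regulators can reach different definitions by choosing different criteria at each branch.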

Drawing on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project.

The Everyday Sexism Project catalogues instances of sexism experienced by women on a day to day basis. We will be using computational techniques to extract the most commonly occurring sexism-related topics.

As Laura Bates, founder of the Everyday Sexism Project, has recently highlighted, “it seems to be increasingly difficult to talk about sexism, equality, and women’s rights” (Everyday Sexism Project, 2015). With many theorists suggesting that we have entered a so-called “post-feminist” era in which gender equality has been achieved (cf. McRobbie, 2008; Modleski, 1991), to complain about sexism not only risks being labelled as “uptight”, “prudish”, or a “militant feminist”, but also exposes those who speak out to sustained, and at times vicious, personal attacks (Everyday Sexism Project, 2015). Despite this, thousands of women are speaking out, through Bates’ project, about their experiences of everyday sexism. Our research seeks to draw on the rich history of gender studies in the social sciences, coupling it with emerging computational methods for topic modelling, to better understand the content of reports to the Everyday Sexism Project and the lived experiences of those who post them. Here, we outline the literature which contextualises our study.

Studies on sexism are far from new. Indeed, particularly amongst feminist theorists and sociologists, the analysis (and deconstruction) of “inequality based on sex or gender categorisation” (Harper, 2008) has formed a central tenet of both academic inquiry and a radical politics of female emancipation for several decades (De Beauvoir, 1949; Friedan, 1963; Rubin, 1975; Millett, 1971). Reflecting its feminist origins, historical research on sexism has broadly focused on defining sexist interactions (cf. Glick and Fiske, 1997) and on highlighting the problematic, biologically rooted ‘gender roles’ that form the foundation of inequality between men and women (Millett, 1971; Renzetti and Curran, 1992; Chodorow, 1995).
More recent studies, particularly in the field of psychology, have shifted the focus away from whether and how sexism exists, towards an examination of the psychological, personal, and social implications that sexist incidents have for the women who experience them. As such, theorists such as Matteson and Moradi (2005), Swim et al. (2001) and Jost and…

Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care.

Online forums are an important means for people living with health conditions to obtain both emotional and informational support from those in a similar situation. Pictured: The Alzheimer Society of B.C. unveiled three life-size ice sculptures depicting important moments in life. The ice sculptures will melt, representing the fading of life memories on the dementia journey. Image: bcgovphotos (Flickr)

Online support groups are being used increasingly by individuals who suffer from a wide range of medical conditions. OII DPhil Student Ulrike Deetjen’s recent article with John Powell, Informational and emotional elements in online support groups: a Bayesian approach to large-scale content analysis, uses machine learning to examine the role of online support groups in the healthcare process. They categorise 40,000 online posts from one of the most well-used forums to show how users with different conditions receive different types of support.

Online support groups are one of the major ways in which the Internet has fundamentally changed how people experience health and health care. They provide a platform for health discussions formerly restricted by time and place, enable individuals to connect with others in similar situations, and facilitate open, anonymous communication. Previous studies have identified that individuals primarily obtain two kinds of support from online support groups: informational (for example, advice on treatments, medication, symptom relief, and diet) and emotional (for example, receiving encouragement, being told they are in others’ prayers, receiving “hugs”, or being told that they are not alone). However, existing research has been limited, as it has often used hand-coded qualitative approaches to contrast both forms of support, thereby only examining relatively few posts (<1,000) for one or two conditions.

In contrast, our research employed a machine-learning approach suitable for uncovering patterns in “big data”. Using this method, a computer (which initially has no knowledge of online support groups) is given examples of informational and emotional posts (2,000 examples in our study).
It then “learns” what words are associated with each category (emotional: prayers, sorry, hugs, glad, thoughts, deal, welcome, thank, god, loved, strength, alone, support, wonderful, sending; informational: effects, started, weight, blood, eating, drink, dose, night, recently, taking, side, using, twice, meal). The computer then uses this knowledge to assess new posts, and decide whether they contain more emotional or informational support. With this approach we were able to determine the emotional or informational content of 40,000…
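As a rough illustration of this kind of supervised learning, the sketch below trains a simple Naive Bayes text classifier (one common Bayesian approach; not necessarily the authors’ actual model or pipeline) on a handful of invented example posts built from the word lists above. The posts and labels are fabricated for illustration, not data from the study.

```python
# Sketch: learn word-category associations from labelled example posts,
# then classify new posts as "emotional" or "informational" support.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, invented training set (the real study used 2,000 labelled posts).
train_posts = [
    "sending hugs and prayers, you are not alone",
    "so sorry to hear that, thinking of you, stay strong",
    "started a lower dose twice daily and the side effects eased",
    "try taking it with a meal and drink plenty of water",
]
train_labels = ["emotional", "emotional", "informational", "informational"]

# Count word occurrences, then fit a Naive Bayes model that learns
# which words are associated with each category.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_posts, train_labels)

# The model then assesses unseen posts using those word associations.
new_posts = [
    "sending you hugs, you are not alone",
    "the side effects eased after a lower dose",
]
print(model.predict(new_posts))  # one predicted label per post
```

At scale, the same `predict` step is what lets the classifier label tens of thousands of posts that would be impractical to hand-code.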

How does the topic modelling algorithm ‘discover’ the topics within the context of everyday sexism?

We recently announced the start of an exciting new research project that will involve the use of topic modelling in understanding the patterns in submitted stories to the Everyday Sexism website. Here, we briefly explain our text analysis approach, “topic modelling”. At its very core, topic modelling is a technique that seeks to automatically discover the topics contained within a group of documents. ‘Documents’ in this context could refer to text items as lengthy as individual books, or as short as sentences within a paragraph. Let’s take the idea of sentences-as-documents as an example:

Document 1: I like to eat kippers for breakfast.
Document 2: I love all animals, but kittens are the cutest.
Document 3: My kitten eats kippers too.

Assuming that each sentence contains a mixture of different topics (and that a ‘topic’ can be understood as a collection of words (of any part of speech) that have different probabilities of appearance in passages discussing the topic), how does the topic modelling algorithm ‘discover’ the topics within these sentences?

The algorithm is initiated by setting the number of topics that it needs to extract. Of course, it is hard to guess this number without insight into the topics, but one can think of it as a resolution tuning parameter: the smaller the number of topics, the more general the bag of words in each topic, and the looser the connections between them. The algorithm loops through all of the words in each document, assigning every word to one of our topics in a temporary and semi-random manner. This initial assignment is arbitrary; over many iterations, different initialisations tend to converge to similar results. Once each word has been assigned a temporary topic, the algorithm then re-iterates through each word in each document to update the topic assignment using two criteria: 1) How prevalent is the word in question across topics? And 2) How prevalent are the…

Homejoy was slated to become the Uber of domestic cleaning services. It was a platform that allowed customers to summon a cleaner as easily as they could hail a ride. Why did it fail?

Homejoy CEO Adora Cheung appears on stage at the 2014 TechCrunch Disrupt Europe/London, at The Old Billingsgate on October 21, 2014 in London, England. Image: TechCruch (Flickr)

Platforms that enable users to come together and buy or sell services with confidence, such as Uber, have become remarkably popular, with the companies often transforming the industries they enter. In this blog post the OII’s Vili Lehdonvirta analyses why the domestic cleaning platform Homejoy failed to achieve such success. He argues that when buyers and sellers enter into repeated transactions they can communicate directly, and as such often abandon the platform.

Homejoy was slated to become the Uber of domestic cleaning services. It was a platform that allowed customers to summon a cleaner as easily as they could hail a ride. Regular cleanups were just as easy to schedule. Ratings from previous clients attested to the skill and trustworthiness of each cleaner. There was no need to go through a cleaning services agency, or scour local classifieds to find a cleaner directly: the platform made it easy for both customers and people working as cleaners to find each other. Homejoy made its money by taking a cut of each transaction. Given how incredibly successful Uber and Airbnb had been in applying the same model to their industries, Homejoy was widely expected to become the next big success story. It was to be the next step in the inexorable uberisation of every industry in the economy.

On 17 July 2015, Homejoy announced that it was shutting down. Usage had grown more slowly than expected, revenues remained poor, technical glitches hurt operations, and the company was being hit with lawsuits over contractor misclassification. Investors’ money and patience had finally run out. Journalists wrote interesting analyses of Homejoy’s demise (Forbes, TechCrunch, Backchannel). The root causes of any major business failure (or indeed success) are complex and hard to pinpoint. However, one of the possible explanations identified in these stories stands out, because it corresponds strongly with what theory on platforms and markets could have predicted.
Homejoy wasn’t growing and making money because clients and cleaners were taking their relationships off-platform:…

Exploring the complexities of policing the web for extremist material, and its implications for security, privacy and human rights.

In terms of counter-speech there are different roles for government, civil society, and industry. Image by Miguel Discart (Flickr).

The Internet serves not only as a breeding ground for extremism, but also offers myriad data streams which potentially hold great value to law enforcement. The report by the OII’s Ian Brown and Josh Cowls for the VOX-Pol project, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material, explores the complexities of policing the web for extremist material, and its implications for security, privacy and human rights. Josh Cowls discusses the report with blog editor Bertie Vidgen.*

*Please note that the views given here do not necessarily reflect the content of the report, or those of the lead author, Ian Brown.

Ed: Josh, could you let us know the purpose of the report, outline some of the key findings, and tell us how you went about researching the topic?

Josh: Sure. In the report we take a step back from the ground-level question of ‘what are the police doing?’ and instead ask, ‘what are the ethical and political boundaries, rationale and justifications for policing the web for these kinds of activity?’ We used an international human rights framework as an ethical and legal basis to understand what is being done. We also tried to further the debate by clarifying a few things: what has already been done by law enforcement, and, really crucially, what the perspectives are of all those involved, including lawmakers, law enforcers, technology companies, academia and many others. We derived the insights in the report from a series of workshops, one of which was held as part of the EU-funded VOX-Pol network. The workshops involved participants who were quite high up in law enforcement, the intelligence agencies, the tech industry, civil society, and academia. We followed these up with interviews with other individuals in similar positions and conducted background policy research.

Ed: You highlight that many extremist groups (such as Isis) are making really significant use of online platforms to organise,…

For data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice.

Many organisations are coming up with their own internal policy and guidelines for data sharing. However, for data sharing between organisations to be straightforward, there needs to be a common understanding of basic policy and practice. During her time as an OII Visiting Associate, Alison Holt developed a pragmatic solution in the form of a Voluntary Code, anchored in the developing ISO standards for the Governance of Data. She discusses the voluntary code, and the need to provide urgent advice to organisations struggling with policy for sharing data.

Collecting, storing and distributing digital data is significantly easier and cheaper now than ever before, in line with predictions from Moore, Kryder and Gilder. Organisations are incentivised to collect large volumes of data in the hope of unleashing new business opportunities or maybe even new businesses. Consider the likes of Uber, Netflix, and Airbnb and the other data mongers who have built services based solely on digital assets. The use of this newly abundant data will continue to disrupt traditional business models for years to come, and there is no doubt that these large data volumes can provide value. However, they also bring associated risks (such as unplanned disclosure and hacks) and they come with constraints (for example in the form of privacy or data protection legislation).

Hardly a week goes by without a data breach hitting the headlines. Even if your telecommunications provider didn’t inadvertently share your bank account and sort code with hackers, and your child wasn’t one of the hundreds of thousands of children whose birthdays, names, and photos were exposed by a smart toy company, you might still be wondering exactly how your data is being looked after by the banks, schools, clinics, utility companies, local authorities and government departments that are so quick to collect your digital details.
Then there are the companies who have invited you to sign away the rights to your data and possibly your…

Government involvement in crowdsourcing efforts can actually be used to control and regulate volunteers from the top down—not just to “mobilise them”.

RUSSIA, NEAR RYAZAN - 8 MAY 2011: Piled up wood in the forest one winter after a terribly huge forest fire in Russia in year 2010. Image: Max Mayorov (Flickr).

There is a great deal of interest in the use of crowdsourcing tools and practices in emergency situations. Gregory Asmolov’s article Vertical Crowdsourcing in Russia: Balancing Governance of Crowds and State–Citizen Partnership in Emergency Situations (Policy and Internet 7,3) examines crowdsourcing of emergency response in Russia in the wake of the devastating forest fires of 2010. Interestingly, he argues that government involvement in these crowdsourcing efforts can actually be used to control and regulate volunteers from the top down—not just to “mobilise them”.

My interest in the role of crowdsourcing tools and practices in emergency situations was triggered by my personal experience. In 2010 I was one of the co-founders of the Russian “Help Map” project, which facilitated volunteer-based response to wildfires in central Russia. When I was working on this project, I realised that a crowdsourcing platform can bring citizen participation to a new level and transform sporadic initiatives by single citizens and groups into large-scale, relatively well coordinated operations. What was also important was that both the needs, and the forms of participation required to address those needs, were defined by the users themselves. To some extent the citizen-based response filled the gap left by the lack of a sufficient response from the traditional institutions.[1]

This suggests that the role of ICTs in disaster response should be examined within the political context of the power relationship between members of the public who use digital tools and the traditional institutions. My experience in 2010 was the first time I was able to see that, while we would expect that in a case of natural disaster both the authorities and the citizens would be mostly concerned about the emergency, the actual situation might be different.
Apparently, the emergence of independent, citizen-based collective action in response to a disaster was considered a threat by the institutional actors. First, it was a threat to…