Articles

The excitement over the potentially transformative effects of the internet in low-income countries is nowhere more evident than in East Africa.

Connecting 21st-century Africa takes more than just railways and roads. Steve Song, CC BY-NC-SA

Reposted from The Conversation. The excitement over the potentially transformative effects of the internet in low-income countries is nowhere more evident than in East Africa—the last major populated region of the world to gain a wired connection to the internet. Before 2009, there wasn’t a single fibre-optic cable connecting the region to the rest of the world. After hundreds of millions of dollars of investment, cables were laid to connect the region to the global network. Prices for internet access went down, speeds went up, and the number of internet users in the region skyrocketed.

Politicians, journalists and academics all argued that better connectivity would lead to a blossoming of economic, social, and political activity—and a lot of influential people in the region made grand statements. For instance, former Kenyan president Mwai Kibaki stated: “I am gratified to be with you today at an event of truly historic proportions. The landing of this fibre-optic undersea cable project in Mombasa is one of the landmark projects in Kenya’s national development story. Indeed some have compared this to the completion of the Kenya-Uganda railway more than a century ago. This comparison is not far-fetched, because while the economies of the last century were driven by railway connections, the economies of today are largely driven by the internet.”

The president of Rwanda, Paul Kagame, also spoke about the revolutionary potential of these changes in connectivity. He claimed: “In Africa, we have missed both the agricultural and industrial revolutions and in Rwanda we are determined to take full advantage of the digital revolution. This revolution is summed up by the fact that it no longer is of utmost importance where you are but rather what you can do – this is of great benefit to traditionally marginalised regions and geographically isolated populations.”

As many who have studied politics have long since noted, proclamations like these can have an important impact: they frame how scarce resources can be…

Despite the hype around MOOCs to date, there are many similarities between MOOC research and the breadth of previous investigations into (online) learning.

Timeline of the development of MOOCs and open education, from: Yuan, Li, and Stephen Powell. MOOCs and Open Education: Implications for Higher Education White Paper. University of Bolton: CETIS, 2013.

Ed: Does research on MOOCs differ in any way from existing research on online learning?

Rebecca: Despite the hype around MOOCs to date, there are many similarities between MOOC research and the breadth of previous investigations into (online) learning. Many of the trends we’ve observed (the prevalence of forum lurking, community formation, etc.) have been studied previously and are supported by earlier findings. That said, the combination of scale, global reach, duration, and “semi-synchronicity” of MOOCs has made them different enough to inspire this work. In particular, the optional nature of participation among a global body of lifelong learners for a short burst of time (e.g. a few weeks) is a relatively new learning environment that, despite theoretical ties to existing educational research, poses a new set of challenges and opportunities.

Ed: The MOOC forum networks you modelled seemed to be less efficient at spreading information than randomly generated networks. Do you think this inefficiency is due to structural constraints of the system (or just because inefficiency is not selected against), or is there something deeper happening here, perhaps saying something about the nature of learning and networked interaction?

Rebecca: First off, it’s important not to confuse the structural “inefficiency” of communication with some inherent learning “inefficiency”. The inefficiency in the sub-forums is a matter of information diffusion—i.e., because there are communities that form in the discussion spaces, these communities tend to “trap” knowledge and information instead of promoting the spread of these ideas to a vast array of learners. This information diffusion inefficiency is not necessarily a bad thing, however. It is a natural human tendency to form communities, and there is much education research suggesting that learning in small groups can be more beneficial and effective than large-scale learning. The important point our work hopes to make is that the existence and nature of these communities seems to be influenced by the types of topics that are being discussed…
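The comparison Rebecca describes can be illustrated with a short sketch: take a graph with strong community structure, rewire it at random while preserving each node’s degree, and compare global efficiency (the mean inverse shortest-path length over node pairs). The toy graph and parameters below are stand-ins for the real forum data, not the study’s actual analysis.

```python
# A minimal sketch of the efficiency comparison described above: a graph
# with strong community structure vs a degree-preserving random rewiring.
# The toy graph stands in for the real MOOC forum network.
import networkx as nx

# Two dense communities joined by a short bridge, mimicking sub-forum clusters.
forum = nx.barbell_graph(10, 2)

# Degree-preserving random baseline: repeatedly swap pairs of edge endpoints.
baseline = forum.copy()
nx.double_edge_swap(baseline, nswap=200, max_tries=2000, seed=42)

# Global efficiency: average inverse shortest-path length over all node pairs.
print(f"community-structured network: {nx.global_efficiency(forum):.3f}")
print(f"degree-matched random graph:  {nx.global_efficiency(baseline):.3f}")
# Lower efficiency in the clustered network reflects ideas getting "trapped"
# inside communities rather than diffusing across the whole forum.
```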

Why are such vast amounts of resources invested in education technology without substantial evidence to suggest that the promises of such technologies are being fulfilled?

Plans were announced last year to place iPads in the hands of all 640,000 students in the Los Angeles Unified School District. Image by flickingerbrad.

In the realm of education and technology, a central question that researchers and policymakers alike have been grappling with is: why do such vast amounts of resources continue to be invested in education technology and related initiatives without substantial evidence to suggest that the promises of those technologies and initiatives are being fulfilled? By adopting a political economy approach, which examines the social, political and economic processes shaping the production, consumption, and distribution of resources including information and communication technologies (Mosco, 2009), we can begin to understand why and how the considerable zeal surrounding education technologies and the sustained investments persist. An exemplary case for this type of analysis, one that gives us a deeper understanding of the structural forces shaping K-12 institutional circuits, is provided by the recent tech-centred incidents troubling the Los Angeles Unified School District.

iPad-for-all and the MiSiS CriSiS

Last month the Los Angeles Unified School District Superintendent, John Deasy, and Chief Technology Officer, Ron Chandler, both resigned due to the $1 billion iPad initiative and what is being called the MiSiS CriSiS. Underpinning these initiatives are idealistic beliefs in the powers of technology and the trend towards the standardisation and corporatisation of US K-12 education. Despite the dire need for classroom upgrades and recovery from the recession-induced mass teacher layoffs and library closures, this past year John Deasy announced a plan to direct the district’s resources toward an initiative that places iPads in the hands of all 640,000 LAUSD students. Perpetuating the idealistic promise that technology acts as a levelling tool in society, Deasy pledged that this initiative would afford equal educational opportunities regardless of students’ race or socioeconomic background, allowing low-income students access to the same technological tools as their middle-class counterparts. Commendable as the effort was, this overly idealised sentiment that technology will ameliorate the deeply rooted systemic inequities facing society…

This mass connectivity has been one crucial ingredient for some significant changes in how work is organised, divided, outsourced, and rewarded.

Ed: You are looking at the structures of ‘virtual production networks’ to understand the economic and social implications of online work. How are you doing this?

Mark: We are studying online freelancing. In other words, this is digital or digitised work for which professional certification or formal training is usually not required. The work is monetised or monetisable, and can be mediated through an online marketplace. Freelancing is a very old form of work. What is new is the fact that we have almost three billion people connected to a global network: many of those people are potential workers in virtual production networks. This mass connectivity has been one crucial ingredient for some significant changes in how work is organised, divided, outsourced, and rewarded. What we plan to do in this project is to better map the contours of some of those changes and understand who wins and who doesn’t in this new world of work.

Ed: Are you able to define what comprises an individual contribution to a ‘virtual production network’—or to find data on it? How do you define and measure value within these global flows and exchanges?

Mark: It is very far from easy. Much of what we are studying is immaterial and digitally mediated work. We can find workers and we can find clients, but the links between them are often opaque and black-boxed. Some of the workers we have spoken to operate under non-disclosure agreements, and many actually haven’t been told what their work is being used for. But that is precisely why we felt the need to embark on this project. With a combination of quantitative transaction data from key platforms and qualitative interviews in which we attempt to piece together parts of the network, we want to understand who is (and isn’t) able to capture and create value within these networks.

Ed: You note that “within virtual production networks, are we seeing a shift…

While a lot is known about the mechanics of group learning in smaller and traditionally organised online classrooms, fewer studies have examined participant interactions when learning “at scale.”

Millions of people worldwide are currently enrolled in courses provided on large-scale learning platforms (aka ‘MOOCs’), typically collaborating in online discussion forums with thousands of peers. Current learning theory emphasises the importance of this group interaction for cognition. However, while a lot is known about the mechanics of group learning in smaller and traditionally organised online classrooms, fewer studies have examined participant interactions when learning “at scale.” Some studies have used clickstream data to trace participant behaviour, even predicting dropouts based on their engagement patterns. However, many questions remain about the characteristics of group interactions in these courses, highlighting the need to understand whether—and how—MOOCs allow for deep and meaningful learning by facilitating significant interactions.

But what constitutes a “significant” learning interaction? In large-scale MOOC forums, with socio-culturally diverse learners who have different motivations for participating, this is a non-trivial problem. MOOCs are best defined as “non-formal” learning spaces, where learners pick and choose how (and if) they interact. This kind of group membership, together with the short-term nature of these courses, means that relatively weak interpersonal relationships are likely. Many of the tens of thousands of interactions in the forum may have little relevance to the learning process. So can we actually define the underlying network of significant interactions? Only once we have done this can we explore, firstly, how information flows through the forums and, secondly, the robustness of those interaction networks: in short, the effectiveness of the platform design for supporting group learning at scale.

To explore these questions, we analysed data from 167,000 students registered on two business MOOCs offered on the Coursera platform. Almost 8,000 students contributed around 30,000 discussion posts over the six weeks of the courses; almost 30,000 students viewed at least one discussion thread, totalling 321,769 discussion thread views. We first modelled these communications as a social network, with nodes representing students who posted in the discussion forums, and edges (i.e. links) indicating…
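As a rough illustration of that first modelling step, the sketch below builds such a network from a toy post log: nodes are students who posted, and an edge links two students who posted in the same thread. Since the excerpt is cut off before the edge definition, co-participation in a thread is an assumption here, and the data is invented for illustration.

```python
# Sketch: modelling forum communications as a social network. Nodes are
# students who posted; an edge links students who posted in the same thread
# (one plausible edge definition; the excerpt above is truncated).
from itertools import combinations
import networkx as nx

# Hypothetical post log: (student_id, thread_id) pairs.
posts = [("s1", "t1"), ("s2", "t1"), ("s3", "t1"),
         ("s2", "t2"), ("s4", "t2"),
         ("s5", "t3")]

# Group posters by thread.
threads: dict[str, set] = {}
for student, thread in posts:
    threads.setdefault(thread, set()).add(student)

g = nx.Graph()
for members in threads.values():
    g.add_nodes_from(members)  # solo posters remain as isolated nodes
    g.add_edges_from(combinations(sorted(members), 2))  # co-participation links

print(g.number_of_nodes(), "students;", g.number_of_edges(), "interaction edges")
```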

The quality of rural internet access in the UK, or lack of it, has long been a bone of contention.

Reposted from The Conversation. The quality of rural internet access in the UK, or lack of it, has long been a bone of contention. The government says “fast, reliable broadband” is essential, but the disparity between urban and rural areas is large and growing, with slow and patchy connections common outside towns and cities. The main reason for this is the difficulty and cost of installing the infrastructure necessary to bring broadband to all parts of the countryside—certainly to remote villages, hamlets, homes and farms, but even to areas not classified as “deep rural” too.

A countryside unplugged

As part of our project Access Denied, we are interviewing people in rural areas, both very remote and less so, to hear their experiences of slow and unreliable internet connections and the effects on their personal and professional lives. What we’ve found so far is that even in areas less than 20 miles away from big cities, the internet connection slows to far below the minimum of 2 Mb/s identified by the government as “adequate”. Whether this is fast enough to navigate today’s data-rich Web 2.0 environment is questionable.

Yes… but where, exactly? Rept0n1x, CC BY-SA

Our interviewees could attain speeds between 0.1 Mb/s and 1.2 Mb/s, with the latter being a positive outlier among the speed tests we performed. Some interviewees also reported that the internet didn’t work in their homes at all, in some cases for 60% of the time. This wasn’t related to the time of day; the dropped connection appeared to be random, and not something they could plan for. The result is that activities that those in cities and towns would see as entirely normal are virtually impossible in the country—online banking, web searches for information, even sending email. One respondent explained that she was unable to pay her workers’ wages for a full week because the internet was too slow and kept cutting out, causing her online banking session to reset.

Linking villages

So poor quality…
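A crude version of the kind of speed test mentioned above takes only a few lines: download a known file, time it, and convert bytes per second into Mb/s for comparison against the 2 Mb/s threshold. The test URL below is a placeholder, not one used in the project.

```python
# Sketch: a crude broadband speed test of the kind described above.
# Downloads up to 1 MB from a test file and reports the rate in Mb/s.
import time
import urllib.request

TEST_URL = "https://example.org/testfile.bin"  # placeholder test file

def measure_mbps(url: str, max_bytes: int = 1_000_000) -> float:
    """Time a download and return the observed throughput in Mb/s."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as response:
        data = response.read(max_bytes)
    elapsed = time.monotonic() - start
    return (len(data) * 8) / (elapsed * 1_000_000)  # bytes -> megabits per second

speed = measure_mbps(TEST_URL)
verdict = "below" if speed < 2 else "at or above"
print(f"{speed:.2f} Mb/s ({verdict} the government's 2 Mb/s 'adequate' minimum)")
```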

People are very often unaware of how much data is gathered about them—let alone the purposes for which it can be used.

MEPs failed to support a Green call to protect Edward Snowden as a whistleblower, in order to allow him to give his testimony to the European Parliament in March. Image by greensefa.

Computers have developed enormously since the Second World War: alongside a rough doubling of computer power every two years, communications bandwidth and storage capacity have grown just as quickly. Computers can now store much more personal data, process it much faster, and rapidly share it across networks. Data is collected about us as we interact with digital technology, directly and via organisations. Many people volunteer data to social networking sites, and sensors—in smartphones, CCTV cameras, and “Internet of Things” objects—are making the physical world as trackable as the virtual. People are very often unaware of how much data is gathered about them—let alone the purposes for which it can be used. Moreover, most privacy risks are highly probabilistic, cumulative, and difficult to calculate. A student sharing a photo today might not be thinking about a future interview panel, or about how the heart-rate data shared from a fitness gadget might affect future decisions by insurance and financial services (Brown 2014).

Rather than organisations waiting for something to go wrong, then spending large amounts of time and money trying (and often failing) to fix privacy problems, computer scientists have been developing methods for designing privacy directly into new technologies and systems (Spiekermann and Cranor 2009). One of the most important principles is data minimisation: limiting the collection of personal data to what is needed to provide a service, rather than storing everything that can be conveniently retrieved. This limits the impact of data losses and breaches, for example by corrupt staff with authorised access to data—a practice that the UK Information Commissioner’s Office (2006) has shown to be widespread.

Privacy by design also protects against function creep (Gürses et al. 2011). When an organisation invests significant resources to collect personal data for one reason, it can be very tempting to use it for other purposes. While this is limited in the EU by data protection law, government agencies are in a…
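To make the data-minimisation principle concrete, a service can enforce an explicit allowlist of fields at the point of collection, so incidental data never reaches storage. The sketch below is a minimal illustration with hypothetical field names, not a reference implementation.

```python
# Sketch: data minimisation at the point of collection. Only the fields
# needed to provide the service are kept; everything else is dropped
# before storage. Field names are hypothetical.
REQUIRED_FIELDS = {"username", "email"}   # needed to run the service
OPTIONAL_FIELDS = {"display_name"}        # kept only if the user supplies it

def minimise(record: dict) -> dict:
    """Strip an incoming record down to the allowlisted fields."""
    allowed = REQUIRED_FIELDS | OPTIONAL_FIELDS
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return {k: v for k, v in record.items() if k in allowed}

# Incidental data (location, device fingerprint) never reaches the database,
# limiting the impact of any later breach or function creep.
signup = {"username": "ada", "email": "ada@example.org",
          "location": "51.75,-1.25", "device_id": "f3c9..."}
print(minimise(signup))  # {'username': 'ada', 'email': 'ada@example.org'}
```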

Editors must now decide not only what to publish and where, but how long it should remain prominent and visible to the audience on the front page of the news website.

Image of the Telegraph's state-of-the-art "hub and spoke" newsroom layout by David Sim.

The political agenda has always been shaped by what the news media decide to publish—through their ability to broadcast to large, loyal audiences in a sustained manner, news editors have the ability to shape ‘political reality’ by deciding what is important to report. Traditionally, journalists pass stories to their editors from a pool of potential stories; editors then choose which stories to publish. However, with the increasing importance of online news, editors must now decide not only what to publish and where, but how long it should remain prominent and visible to the audience on the front page of the news website.

The question of how much influence the audience has in these decisions has always been ambiguous. While in theory we might expect journalists to be attentive to readers, journalism has also been characterised as a profession with a “deliberate…ignorance of audience wants” (Anderson, 2011b). This ‘anti-populism’ is still often portrayed as an important journalistic virtue, in the context of telling people what they need to hear rather than what they want to hear. Recently, however, attention has been turning to the potential impact that online audience metrics are having on journalism’s “deliberate ignorance”. Online publishing provides a huge amount of information to editors about visitor numbers, visit frequency, what visitors choose to read, and how long they spend reading it. Online editors now have detailed information about which articles are popular almost as soon as they are published, with these statistics frequently displayed prominently in the newsroom.

The rise of audience metrics has created concern both within the journalistic profession and academia, as part of a broader set of concerns about the way journalism is changing online. Many have expressed concern about a ‘culture of click’, whereby important but unexciting stories make way for more attention-grabbing pieces, and editorial judgments are overridden by traffic statistics. At a time when media business models are under great strain, the…

The geography of knowledge has always been uneven. Some people and places have always been more visible and had more voices than others.

Reposted from The Conversation. The geography of knowledge has always been uneven. Some people and places have always been more visible and had more voices than others. But the internet seemed to promise something different: a greater diversity of voices, opinions and narratives from more places. Unfortunately, this has not come to pass in quite the manner some expected it to. Many parts of the world remain invisible or under-represented on important websites and services. All of this matters because, as geographic information becomes increasingly integral to our lives, places that are not represented on platforms like Wikipedia will be absent from many of our understandings of, and interactions with, the world.

Mapping the differences

Until now, there has been no large-scale analysis of the factors that explain the uneven geographical spread of online information. This is something we have aimed to address in our research project on the geography of Wikipedia. Our focus areas were the Middle East and North Africa. Using statistical models of geotagged Wikipedia data, we identified the conditions necessary to make countries “visible”. This allowed us to map the countries that fare considerably better or worse than expected. We found that a large part of the variation between countries could be explained by just three factors: population, availability of broadband internet, and the number of edits originating in that country.

Areas of Wikipedia hegemony and uneven geographic coverage. Oxford Internet Institute

While these three variables help to explain the sparse amount of content written about much of sub-Saharan Africa, most of the Middle East and North Africa has much less geographic information than might be expected. For example, despite high levels of wealth and connectivity, Qatar and the United Arab Emirates have far fewer articles than we might expect.

Constraints to creating content

These three factors matter independently, but they will also be subject to other constraints. A country’s population will probably affect the number of activities, places, and practices…
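The kind of country-level model described above can be sketched as a simple regression: article counts as the outcome, with population, broadband availability, and locally originating edits as predictors. The specification and data below are invented for illustration and are not the study’s actual model.

```python
# Sketch: regressing per-country article counts on the three factors named
# above. This is an illustrative model on fabricated numbers, not the
# study's actual specification or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50                                    # hypothetical countries
population = rng.lognormal(16, 1, n)      # people
broadband = rng.uniform(0, 40, n)         # subscriptions per 100 people
local_edits = rng.lognormal(8, 1.5, n)    # edits originating in-country

# A simple log-log specification: coefficients read roughly as elasticities.
articles = np.exp(0.4 * np.log(population) + 0.03 * broadband
                  + 0.5 * np.log(local_edits) + rng.normal(0, 0.5, n))

X = sm.add_constant(np.column_stack([np.log(population), broadband,
                                     np.log(local_edits)]))
model = sm.OLS(np.log(articles), X).fit()
print(model.params)      # fitted coefficients for the three factors
residuals = model.resid  # large residuals flag over/under-represented countries
```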

As geographic content and geospatial information become increasingly integral to our everyday lives, places that are left off the ‘map of knowledge’ will be absent from our understanding of the world.

The geographies of codified knowledge have always been uneven, affording some people and places greater voice and visibility than others. While the rise of the geosocial Web seemed to promise a greater diversity of voices, opinions, and narratives about places, many regions remain largely absent from the websites and services that represent them to the rest of the world. These highly uneven geographies of codified information matter because they shape what is known and what can be known. As geographic content and geospatial information become increasingly integral to our everyday lives, places that are left off the ‘map of knowledge’ will be absent from our understanding of, and interaction with, the world.

We know that Wikipedia is important to the construction of geographical imaginations of place, and that it has immense power to augment our spatial understandings and interactions (Graham et al. 2013). In other words, the presences and absences in Wikipedia matter. If a person’s primary free source of information about the world is the Persian or Arabic or Hebrew Wikipedia, then the world will look fundamentally different from the world presented through the lens of the English Wikipedia. The capacity to represent oneself to outsiders is especially important in those parts of the world that are characterised by highly uneven power relationships: Brunn and Wilson (2013) and Graham and Zook (2013) have already demonstrated the power of geospatial content to reinforce power in a South African township and Jerusalem, respectively.

Until now, there has been no large-scale empirical analysis of the factors that explain information geographies at the global scale; this is something we have aimed to address in this research project on mapping and measuring local knowledge production and representation in the Middle East and North Africa. Using regression models of geolocated Wikipedia data, we have identified what are likely to be the necessary conditions for representation at the country level, and have also identified the outliers,…
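Given residuals from a fitted model of that kind, outliers can be flagged as countries whose observed representation deviates from the prediction by more than the typical residual spread. A minimal sketch follows; the residual values are hypothetical, although the Qatar and UAE entries echo the under-representation noted in the companion piece above.

```python
# Sketch: flagging outlier countries from regression residuals, as described
# above. The residual values here are hypothetical stand-ins.
import statistics

residuals = {"Qatar": -1.9, "United Arab Emirates": -1.6,
             "Kenya": 0.2, "Iran": 1.1}

spread = statistics.stdev(residuals.values())
outliers = {country: r for country, r in residuals.items() if abs(r) > spread}

# Large negative residuals mark countries with far fewer geotagged articles
# than population, connectivity, and local editing would predict.
print(outliers)  # {'Qatar': -1.9, 'United Arab Emirates': -1.6}
```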