Two years after the NYT’s ‘Year of the MOOC’: how much do we actually know about them?

Timeline of the development of MOOCs and open education, from: Yuan, Li, and Stephen Powell. MOOCs and Open Education: Implications for Higher Education White Paper. University of Bolton: CETIS, 2013.

Ed: Does research on MOOCs differ in any way from existing research on online learning?

Rebecca: Despite the hype around MOOCs to date, there are many similarities between MOOC research and the breadth of previous investigations into (online) learning. Many of the trends we’ve observed (the prevalence of forum lurking; community formation; etc.) have been studied previously and are supported by earlier findings. That said, the combination of scale, global reach, duration, and “semi-synchronicity” of MOOCs has made them different enough to inspire this work. In particular, the optional nature of participation among a global body of lifelong learners for a short burst of time (e.g. a few weeks) creates a relatively new learning environment that, despite theoretical ties to existing educational research, poses a new set of challenges and opportunities.

Ed: The MOOC forum networks you modelled seemed to be less efficient at spreading information than randomly generated networks. Do you think this inefficiency is due to structural constraints of the system (or just because inefficiency is not selected against); or is there something deeper happening here, maybe saying something about the nature of learning, and networked interaction?

Rebecca: First off, it’s important to not confuse the structural “inefficiency” of communication with some inherent learning “inefficiency”. The inefficiency in the sub-forums is a matter of information diffusion—i.e., because there are communities that form in the discussion spaces, these communities tend to “trap” knowledge and information instead of promoting the spread of these ideas to a vast array of learners. This information diffusion inefficiency is not necessarily a bad thing, however. It’s a natural human tendency to form communities, and there is much education research that says learning in small groups can be much more beneficial / effective than large-scale learning. The important point that our work hopes to make is that the existence and nature of these communities seems to be influenced by the types of topics that are being discussed (and vice versa)—and that educators may be able to cultivate more isolated or inclusive network dynamics in these course settings by carefully selecting and presenting these different discussion topics to learners.

Ed: Drawing on surveys and learning outcomes you could categorise four ‘learner types’, who tend to behave differently in the network. Could the network be made more efficient by streaming groups by learning objective, or by type of interaction (eg learning / feedback / social)?

Rebecca: Given our network vulnerability analysis, it appears that discussions that focus on problems or issues that are based in real life examples—e.g., those that relate to case studies of real companies and analyses posted by learners of these companies—tend to promote more inclusive engagement and efficient information diffusion. Given that certain types of learners participate in these discussions, one could argue that forming groups around learning preferences and objectives could promote more efficient communications. Still, it’s important to be aware of the potential drawbacks to this, namely, that promoting like-minded/similar people to interact with those they are similar to could further prevent “learning through diverse exposures” that these massive-scale settings can be well-suited to promote.

Ed: In the classroom, the teacher can encourage participation and discussion if it flags: are there mechanisms to trigger or seed interaction if the levels of network activity fall below a certain threshold? How much real-time monitoring tends to occur in these systems?

Rebecca: Yes, it appears that educators may be able to influence or achieve certain types of network patterns. While each MOOC is different (some course staff members tend to be much more engaged than others, learners may have different motivations, etc.), on the whole, there isn’t much real-time monitoring in MOOCs, and MOOC platforms are still in early days where there is little to no automated monitoring or feedback (beyond static analytics dashboards for instructors).

Ed: Does learner participation in these forums improve outcomes? Do the most central users in the interaction network perform better? And do they tend to interact with other very central people?

Rebecca: While we can’t infer causation, we found that when compared to the entire course, a significantly higher percentage of high achievers were also forum participants. The more likely explanation for this is that those who are committed to completing the course and performing well also tend to use the forums—but the plurality of forum participants (44% in one of the courses we analysed) are actually those that “fail” by traditional marks (receive below 50% in the course). Indeed, many central users tend to be those that are simply auditing the course or who are interested in communicating with others without any intention of completing course assignments. These central users tend to communicate with other central users, but also, with those whose participation is much sparser/“on the fringes”.

Ed: Slightly facetiously: you can identify ‘central’ individuals in the network who spark and sustain interaction. Can you also find people who basically cause interaction to die? Who will cause the network to fall apart? And could you start to predict the strength of a network based on the profiles and proportions of the individuals who make it up?

Rebecca: It is certainly possible to explore further how different people influence the network. One way this can be achieved is by exploring the temporal dynamics at play—e.g., by visualising the communication network at any point in time and creating network “snapshots” at every hour or day, or perhaps with every new participant, to observe how the trends and structures evolve. While this method still doesn’t allow us to identify the exact influence of any given individual’s participation (since there are so many other confounding factors, for example how far into the course it is, or people’s schedules/lives outside of the MOOC), it may provide some insight into their roles. We could of course define some quantitative measure(s) of “network strength” based on learner profiles, but it would be essential to guard against overarching or broad claims in doing so, given these confounding forces.

Ed: The majority of my own interactions are mediated by a keyboard: which is actually a pretty inefficient way of communicating, and certainly a terrible way of arguing through a complex point. Is there any sense from MOOCs that text-based communication might be a barrier to some forms of interaction, or learning?

Rebecca: This is an excellent observation. Given the global student body, varying levels of comfort in English (and written language more broadly), differing preferences for communication, etc., there is much reason to believe that a lack of participation could result from a lack of comfort with the keyboard (or written communication more generally). Indeed, in the MOOCs we’ve studied, many learners have attempted to meet up on Google Hangouts or other non-text based media to form and sustain study groups, suggesting that many learners seek to use alternative technologies to interact with others and achieve their learning objectives.

Ed: Based on this data and analysis, are there any obvious design points that might improve interaction efficiency and learning outcomes in these platforms?

Rebecca: As I have mentioned already, open-ended questions that focus on real-life case studies tend to promote the least vulnerable and most “efficient” discussions, which may be of interest to practitioners looking to cultivate these sorts of environments. More broadly, the lack of sustained participation in the forums suggests that there are a number of “forces of disengagement” at play, one of them being that the sheer amount of content being generated in the discussion spaces (one course had over 2,700 threads and 15,600 posts) could be contributing to a sense of “content overload” and helplessness for learners. Designing platforms that help mitigate this problem will be fundamental to the vitality and effectiveness of these learning spaces in the future.

Ed: I suppose there is an inherent tension between making the online environment very smooth and seductive, and the process of learning; which is often difficult and frustrating: the very opposite experience aimed for (eg) by games designers. How do MOOCs deal with this tension? (And how much gamification is common to these systems, if any?)

Rebecca: To date, gamification seems to have been sparse in most MOOCs, although there are some interesting experiments in the works. Indeed, one study (Anderson et al., 2014) used a randomised control trial to add badges (that indicate student engagement levels) next to the names of learners in MOOC discussion spaces in order to determine if and how this affects further engagement. Coursera has also started to publicly display badges next to the names of learners that have signed up for the paid Signature Track of a specific course (presumably, to signal which learners are “more serious” about completing the course than others). As these platforms become more social (and perhaps career advancement-oriented), it’s quite possible that gamification will become more popular. This gamification may not ease the process of learning or make it more comfortable, but rather offer additional opportunities to mitigate the challenges of massive-scale anonymity and lack of information about peers, in order to facilitate more social learning.

Ed: How much of this work is applicable to other online environments that involve thousands of people exploring and interacting together: for example deliberation, crowd production and interactive gaming, which certainly involve quantifiable interactions and a degree of negotiation and learning?

Rebecca: Since MOOCs are so loosely structured and could largely be considered “informal” learning spaces, we believe the engagement dynamics we’ve found could apply to a number of other large-scale informal learning/interactive spaces online. Similar crowd-like structures can be found in a variety of policy and practice settings.

Ed: This project has adopted a mixed methods approach: what have you gained by this, and how common is it in the field?

Rebecca: Combining computational network analysis and machine learning with qualitative content analysis and in-depth interviews has been one of the greatest strengths of this work, and a great learning opportunity for the research team. Often in empirical research, it is important to validate findings across a variety of methods to ensure that they’re robust. Given the complexity of human subjects, we knew computational methods could only go so far in revealing underlying trends; and given the scale of the dataset, we knew there were patterns that qualitative analysis alone would not enable us to detect. A mixed-methods approach enabled us to simultaneously and robustly address these dimensions. MOOC research to date has been quite interdisciplinary, bringing together computer scientists, educationists, psychologists, statisticians, and a number of other areas of expertise into a single domain. The interdisciplinarity of research in this field is arguably one of the most exciting indicators of what the future might hold.

Ed: As well as the network analysis, you also carried out interviews with MOOC participants. What did you learn from them that wasn’t obvious from the digital trace data?

Rebecca: The interviews were essential to this investigation. In addition to confirming the trends revealed by our computational explorations (which revealed the what of the underlying dynamics at play), the interviews revealed much of the why. In particular, we learned people’s motivations for participating in (or disengaging from) the discussion forums, which provided an important backdrop for subsequent quantitative (and qualitative) investigations. We also learned a lot more about people’s experiences of learning, the strategies they employ to support their learning, and issues around power and inequality in MOOCs.

Ed: You handcoded more than 6000 forum posts in one of the MOOCs you investigated. What findings did this yield? How would you characterise the learning and interaction you observed through this content analysis?

Rebecca: The qualitative content analysis of over 6,500 posts revealed several key insights. For one, we confirmed (as the network analysis suggested) that most discussion is insignificant “noise”—people looking to introduce themselves or have short-lived discussions about topics that are beyond the scope of the course. In a few instances, however, we discovered different patterns (and sometimes cycles) of knowledge construction that can occur within a specific discussion thread. In some cases, we found that discussion threads grew so long (with hundreds of posts) that topics were repeated or earlier posts disregarded, because new participants didn’t read and/or consider them before adding their own replies.

Ed: How are you planning to extend this work?

Rebecca: As mentioned already, feelings of helplessness resulting from sheer “content overload” in the discussion forums appear to be a key force of disengagement. To that end, as we now have a preliminary understanding of communication dynamics and learner tendencies within these sorts of learning environments, we now hope to leverage this background knowledge to develop new methods for promoting engagement and the fulfilment of individual learning objectives in these settings—in particular, by trying to mitigate the “content overload” issues in some way. Stay tuned for updates 🙂


Anderson, A., Huttenlocher, D., Kleinberg, J., and Leskovec, J. (2014) Engaging with Massive Open Online Courses. In: WWW ’14: Proceedings of the 23rd International World Wide Web Conference, Seoul, Korea. New York: ACM.

Read the full paper: Gillani, N., Yasseri, T., Eynon, R., and Hjorth, I. (2014) Structural limitations of learning in a crowd – communication vulnerability and information diffusion in MOOCs. Scientific Reports 4.

Rebecca Eynon was talking to blog editor David Sutcliffe.

Rebecca Eynon holds a joint academic post between the Oxford Internet Institute (OII) and the Department of Education at the University of Oxford. Her research focuses on education, learning and inequalities, and she has carried out projects in a range of settings (higher education, schools and the home) and life stages (childhood, adolescence and late adulthood).

What does the recent LA School District “iPads-for-all” debacle tell us about the structural changes gripping the US K-12 educational system?

Plans were announced last year to place iPads in the hands of all 640,000 students in the Los Angeles Unified School District. Image by flickingerbrad.

In the realm of education and technology, a central question that researchers and policymakers alike have been grappling with is: why do vast amounts of resources continue to be invested in education technology and related initiatives without substantial evidence that the promises of these technologies are being fulfilled? By adopting a political economy approach, which examines the social, political and economic processes shaping the production, consumption, and distribution of resources, including information and communication technologies (Mosco, 2009), we can begin to understand why and how the considerable zeal surrounding education technologies, and the sustained investments in them, persist.

An exemplar case for this type of analysis, giving us a deeper understanding of the structural forces shaping K-12 institutional circuits, is the recent series of tech-centred incidents riddling the Los Angeles Unified School District.

iPad-for-all and the MiSiS CriSiS

Last month the Los Angeles Unified School District Superintendent, John Deasy, and Chief Technology Officer, Ron Chandler, both resigned amid the fallout from the $1 billion iPad initiative and what is being called the MiSiS CriSiS. Underpinning these initiatives are idealistic beliefs in the powers of technology and the trend towards the standardisation and corporatisation of US K-12 education.

Despite the dire need for classroom upgrades and recovery from the recession-induced mass teacher layoffs and library closures, this past year John Deasy announced the plan to direct the district’s resources toward an initiative that places iPads in all 640,000 LAUSD students’ hands. Perpetuating the idealistic promise that technology acts as a leveling tool in society, Deasy pledged that this initiative would afford equal educational opportunities across the board regardless of race or socioeconomic background of students. He stated that this would allow low-income students to have access to the same technological tools as their middle class counterparts. Commendable as the effort was, this overly idealised sentiment that technology will ameliorate the deeply rooted systemic inequities facing society is partly responsible for the furthering of misdirected investments and ineffective policies in the education technology realm.

The My Integrated Student Information System (MiSiS) was meant to streamline the course registration process and centralise the storage of all student records. For reasons that haven’t been entirely unearthed yet, the software was pushed by Deasy and launched a couple of months ago despite various warnings from administrators and teachers that it wasn’t ready. Leaving many students without the courses they needed for college, and with the accuracy of senior transcripts for college applications unverified, the MiSiS CriSiS brings to light one of the main concerns with these technological integration projects in schools and the interests involved—accountability. Who is accountable for ensuring there is a backup of records, or for getting students into the classes they need? Who is accountable for ensuring the most reliable software is chosen? While there have been heightened accountability measures in the form of high-stakes testing directed towards teacher effectiveness, there remains little accountability regarding the process of choosing specific technology services, their plans for implementation, and the overall effectiveness of the services or initiatives.

These incidents are direct results of the broader political-economic structure of public education in the United States. This is a structure characterised by interrelationships between the federal, state, and local governments and the private sector. Central to this institutional structure and the proliferation of ICT-related education initiatives are the workings of digital capitalism.

Digital Capitalism and the American Educational-Industrial Complex

Federal, state, and local governments all contribute funding to K-12 public education in the U.S., but according to the U.S. constitution, states are ultimately responsible for their public schools. The bulk of funding for schools comes from local taxes, which creates vast inequalities across the country in terms of educational resources, infrastructure, and teaching. There has been sustained effort at the federal level to ameliorate these inequities, from the Elementary and Secondary Education Act of 1965 and its subsequent amendments to the highly contentious Race to the Top initiative, which builds in incentives for underachieving schools to increase student performance. However, these efforts are exceptionally limited and riddled with problems. This has only been complicated further by the economic downturn in the US, which has led to massive layoffs in public education and an unrelenting, growing dependence on private sector resources.

Accountability, as mentioned above, is a major theme discussed in the context of education and digital capitalism. Again, while initiatives like No Child Left Behind and Race to the Top are meant to hold teachers accountable, there is still a lack of accountability measures for private sector involvement in education. Another force of digital capitalism responsible for the sustained proliferation of ICT-driven initiatives in education is the shift in the perceived goals of education, from ensuring an enlightened citizenry to being valued for its vocational outputs. This shift is fuelled by global competitiveness discourses presented under the veil of “21st century skills” rhetoric intended to keep the U.S. economically competitive, and it has made the circular relationship between government and the corporate technology sector more salient in education. Simply put, the government is putting pressure on the public to ensure new generations are primed with the necessary 21st century skills to participate in the labour market and enhance the position of the US in the global economy; the private sector, meanwhile, pushes this discourse further because it has products to sell towards this elusive goal. This is further exemplified in the new standardised curriculum project rolling out in K-12.

Technology Driven Corporatisation of American Public Education

This past year 45 states adopted the Common Core State Standards, which is meant to standardise curriculum across the country under the assumption that it will place all students regardless of race or class at the same level across all subjects. One major technological implication of this standardisation includes the standardisation of delivery systems, signalled by the iPads with Pearson education software project in LAUSD. This creates a direct line of entry for private companies to become even further entrenched in education. In many ways, these developments are making the privatisation efforts more concrete and foreshadowing the evolving structure of the public K-12 system. Privatisation doesn’t necessarily mean that schools are going to be under direct control of private companies (although this is already happening in certain parts of the country; you can read about it in my forthcoming article) but it does represent the transformation of education as a public good into a profit centre for private interests, as demonstrated in LAUSD.

In the LAUSD situation, it is no coincidence that John Deasy has an extensive background in private industry, and specifically the education technology industry. Deasy came from the Gates Foundation, one of the leading education partners. Former Deputy Superintendent Jaime Aquino came from Pearson, the curriculum developer that was to provide the software for the iPad initiative. What is most significant about these close ties and the iPad-for-all project is that before the bidding went public, Aquino and Deasy had already begun a backdoor deal with both Pearson and Apple to carry out the initiative, illustrating the conflicts of interest between government, industry, and education.


Perhaps the recent incidents riddling the Los Angeles Unified School District will bring more public attention to these issues, and a push for more evidence-based policies will emerge. Nonetheless, the issues arising in LA Unified represent broader structural, political-economic forces that shed light on the answer to the question posed earlier. Attention to these larger structural processes is what drives my own research, which aims to extend digital exclusion scholarship and provide evidence-based suggestions for more sustainable policies that maximise benefits for the populations they seek to serve. In this vein, posing a couple of preliminary policy suggestions might be appropriate.

In a nutshell, it would be naïve to assume that the private sector has nothing to offer the public sector, or that public schools are not in a position to benefit from the resources it brings. However, decision-making power needs to be more balanced among all stakeholders for these benefits to be realised. Several suggestions for policymakers are made in a forthcoming article based on my previous research, but they can similarly be applied here. Overall, there is a need for impact assessment measures that would provide valuable insight into the effectiveness of ICT-driven initiatives. Additionally, more accountability measures are needed at each stage of these projects to ensure that the benefits are being realised and promises fulfilled. Ultimately, while perhaps a bit idealistic at this point, a shift from an economic focus to a social rights-based approach to policymaking in this realm would help create more sustainable policies that maximise the benefits for the groups they are meant to serve.

Admittedly, this is a simplified overview of the forces at play in the current restructuring of K-12 but hopefully it has provided useful insight into how technology’s role in society is not determined by the technology itself but rather by a complex ecosystem of networks and power relations that shape larger social structures. It is, of course, much more complex with many layers of discourses and sociopolitical entanglements but my goal is that this snapshot has highlighted the importance for understanding the social, political, and economic underpinnings in order to grasp the larger picture of technology’s role in society and, more specifically, education.


Mosco, V. (2009) The Political Economy of Communication. London: Sage.

Picciano, A.G. and Spring, J.H. (2013) The Great American Education-Industrial Complex: Ideology, Technology, and Profit. New York: Routledge.

Schiller, D. (1999) Digital Capitalism. Cambridge, MA: The MIT Press.

For more on digital capitalism, see Schiller (1999).

The concept of the education-industrial complex is drawn from Picciano and Spring (2013).

Paige Mustain is a DPhil student at the Oxford Internet Institute. Her research lies at the intersection of education and digital exclusion. More specifically, she focuses on the political economy of information and communication technology (ICT) development initiatives in the realm of education.

What are the limitations of learning at scale? Investigating information diffusion and network vulnerability in MOOCs

Millions of people worldwide are currently enrolled in courses provided on large-scale learning platforms (aka ‘MOOCs’), typically collaborating in online discussion forums with thousands of peers. Current learning theory emphasises the importance of this group interaction for cognition. However, while a lot is known about the mechanics of group learning in smaller and traditionally organised online classrooms, fewer studies have examined participant interactions when learning “at scale.” Some studies have used clickstream data to trace participant behaviour, even predicting dropouts based on their engagement patterns. However, many questions remain about the characteristics of group interactions in these courses, highlighting the need to understand whether—and how—MOOCs allow for deep and meaningful learning by facilitating significant interactions.

But what constitutes a “significant” learning interaction? In large-scale MOOC forums, with socio-culturally diverse learners with different motivations for participating, this is a non-trivial problem. MOOCs are best defined as “non-formal” learning spaces, where learners pick and choose how (and if) they interact. This kind of group membership, together with the short-term nature of these courses, means that relatively weak inter-personal relationships are likely. Many of the tens of thousands of interactions in the forum may have little relevance to the learning process. So can we actually define the underlying network of significant interactions? Only once we have done this can we explore firstly how information flows through the forums, and secondly the robustness of those interaction networks: in short, the effectiveness of the platform design for supporting group learning at scale.

To explore these questions, we analysed data from 167,000 students registered on two business MOOCs offered on the Coursera platform. Almost 8,000 students contributed around 30,000 discussion posts over the six weeks of the courses; almost 30,000 students viewed at least one discussion thread, totalling 321,769 discussion thread views. We first modelled these communications as a social network, with nodes representing students who posted in the discussion forums, and edges (ie links) indicating co-participation in at least one discussion thread. Of course, not all links will be equally important: many exchanges will be trivial (‘hello’, ‘thanks’ etc.). Our task, then, was to derive a “true” network of meaningful student interactions (ie iterative, consistent dialogue) by filtering out those links generated by random encounters (Figure 1; see also full paper for methodology).

Figure 1. Comparison of observed (a; ‘all interactions’) and filtered (b; ‘significant interactions’) communication networks for a MOOC forum. Filtering affects network properties such as modularity score (ie degree of clustering). Colours correspond to the automatically detected interest communities.
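To make this construction concrete, here is a minimal sketch (in Python, using networkx) of how a co-participation network of this kind might be built from raw forum data. The student names, thread IDs, and the simple weight threshold used for filtering are all illustrative assumptions; the paper itself derives the “significant” network by comparing against a null model rather than by thresholding.

```python
import itertools
import networkx as nx

# Hypothetical post records: (student, thread) pairs standing in for real forum data.
posts = [
    ("alice", "t1"), ("bob", "t1"), ("carol", "t1"),
    ("alice", "t2"), ("bob", "t2"),
    ("dave", "t3"), ("erin", "t3"),
]

# Group participants by thread.
threads = {}
for student, thread in posts:
    threads.setdefault(thread, set()).add(student)

# Co-participation network: an edge links two students who posted in at
# least one common thread; the edge weight counts shared threads.
G = nx.Graph()
for participants in threads.values():
    for u, v in itertools.combinations(sorted(participants), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Crude stand-in for the paper's significance filtering: keep only pairs
# who co-participated in more than one thread.
significant = nx.Graph(
    [(u, v, d) for u, v, d in G.edges(data=True) if d["weight"] > 1]
)
```

On this toy data, only the pair who met in two threads survives the filter; single-thread encounters (like brief introductions) are discarded as likely-random co-occurrences.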

One feature of networks that has been studied in many disciplines is their vulnerability to fragmentation when nodes are removed (the Internet, for example, emerged in part from US military research aiming to develop a disruption-resistant network for critical communications). While we aren’t interested in the effect of a missile strike on MOOC exchanges, from an educational perspective it is still useful to ask which “critical set” of learners is mostly responsible for information flow in a communication network—and what would happen to online discussions if these learners were removed. To our knowledge, this is the first time vulnerability of communication networks has been explored in an educational setting.

Network vulnerability is interesting because it indicates how integrated and inclusive the communication flow is. Discussion forums with fleeting participation will have only a very few vocal participants: removing these people from the network will markedly reduce the information flow between the other participants—as the network falls apart, it simply becomes more difficult for information to travel across it via linked nodes. Conversely, forums that encourage repeated engagement and in-depth discussion among participants will have a larger ‘critical set’, with discussion distributed across a wide range of learners.

To understand the structure of group communication in the two courses, we looked at how quickly our modelled communication network fell apart when: (a) the most central nodes were iteratively disconnected (Figure 2; blue), compared with when (b) nodes were removed at random (ie the ‘neutral’ case; green). In the random case, the network degrades evenly, as expected. When we selectively remove the most central nodes, however, we see rapid disintegration: indicating the presence of individuals who are acting as important ‘bridges’ across the network. In other words, the network of student interactions is not random: it has structure.

Figure 2. Rapid network degradation results from removal of central nodes (blue). This indicates the presence of individuals acting as ‘bridges’ between sub-groups. Removing these bridges results in rapid degradation of the overall network. Removal of random nodes (green) results in a more gradual degradation.
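A simple version of this vulnerability analysis can be sketched as follows: repeatedly remove either the most central remaining node or a random one, and track the fraction of nodes left in the giant (largest connected) component. Degree centrality and the toy “barbell” network below are stand-ins, not the paper’s exact centrality measure or data.

```python
import random
import networkx as nx

def giant_component_fraction(G):
    """Fraction of remaining nodes that sit in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(G), key=len)
    return len(largest) / G.number_of_nodes()

def degrade(G, targeted, steps, seed=0):
    """Remove `steps` nodes one at a time and record the giant-component
    fraction after each removal. If `targeted`, remove the highest-degree
    node each time (recomputed after every removal); otherwise remove at random."""
    H = G.copy()
    rng = random.Random(seed)
    fractions = []
    for _ in range(steps):
        if targeted:
            node = max(H.degree, key=lambda nd: nd[1])[0]
        else:
            node = rng.choice(list(H.nodes))
        H.remove_node(node)
        fractions.append(giant_component_fraction(H))
    return fractions

# Toy network: two 6-node cliques joined through a single 'bridge' node.
G = nx.barbell_graph(6, 1)
targeted = degrade(G, targeted=True, steps=3)
randomised = degrade(G, targeted=False, steps=3)
# Targeted removal quickly severs one clique from the rest;
# random removal tends to degrade the network more gently.
```

Plotting `targeted` against `randomised` over many steps reproduces the qualitative shape of Figure 2: a steep blue curve for central-node removal and a gentle green one for random removal.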

Of course, the structure of participant interactions will reflect the purpose and design of the particular forum. We can see from Figure 3 that different forums in the courses have different vulnerability thresholds. Forums with high levels of iterative dialogue and knowledge construction—with learners sharing ideas and insights about weekly questions, strategic analyses, or course outcomes—are the least vulnerable to degradation. A relatively high proportion of nodes have to be removed before the network falls apart (rightmost blue line). Forums where most individuals post once to introduce themselves and then move their discussions to other platforms (such as Facebook) or cease engagement altogether tend to be more vulnerable to degradation (leftmost blue line). The different vulnerability thresholds suggest that different topics (and forum functions) promote different levels of forum engagement. Certainly, asking students open-ended questions tended to encourage significant discussions, leading to greater engagement and knowledge construction as they read analyses posted by their peers and commented with additional insights or critiques.

Figure 3 – Network vulnerabilities of different course forums.

Understanding something about the vulnerability of a communication or interaction network is important, because it will tend to affect how information spreads across it. To investigate this, we simulated an information diffusion model similar to that used to model social contagion. Although simplistic, the SI model (‘susceptible-infected’) is very useful in analysing topological and temporal effects on networked communication systems. While the model doesn’t account for things like decaying interest over time or peer influence, it allows us to compare the efficiency of different network topologies.

We compared our (real-data) network model with a randomised network in order to see how well information would flow if the community structures we observed in Figure 2 did not exist. Figure 4 shows the number of ‘infected’ (or ‘reached’) nodes over time for both the real (solid lines) and randomised networks (dashed lines). In all the forums, we can see that information actually spreads faster in the randomised networks. This is explained by the existence of local community structures in the real-world networks: networks with dense clusters of nodes (i.e. a clumpy network) will result in slower diffusion than a network with a more even distribution of communication, where participants do not tend to favour discussions with a limited cohort of their peers.

Figure 4 (a) shows the percentage of infected nodes vs. simulation time for different networks. The solid lines show the results for the original network and the dashed lines for the random networks. (b) shows the time it took for a simulated “information packet” to come into contact with half the network’s nodes.
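A discrete-time SI simulation of the kind described above is straightforward to sketch. The code below is a minimal illustration under stated assumptions, not the paper’s actual model: the infection probability `beta`, the caveman toy network, and the use of a degree-preserving edge rewiring (`double_edge_swap`) as the ‘randomised’ comparison network are all choices made here for illustration.

```python
# Illustrative SI ('susceptible-infected') diffusion sketch: time for
# information to reach half the network, on a clustered graph vs a
# degree-preserving randomisation of it.
import random
import networkx as nx

def si_half_coverage_time(G, beta=0.3, seed=0, max_steps=10_000):
    """Steps until at least half the nodes are 'infected', starting
    from one random seed node. beta = per-contact infection probability."""
    rng = random.Random(seed)
    infected = {rng.choice(list(G.nodes))}
    target = G.number_of_nodes() / 2
    for step in range(1, max_steps + 1):
        newly = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and rng.random() < beta:
                    newly.add(v)
        infected |= newly
        if len(infected) >= target:
            return step
    return max_steps

# 'Clumpy' network: 10 cliques of 10 nodes, joined by bridge edges
G = nx.connected_caveman_graph(10, 10)
# Randomised comparison: shuffle edges while preserving every node's degree
R = G.copy()
nx.double_edge_swap(R, nswap=5 * R.number_of_edges(), max_tries=10**6, seed=0)
t_real = si_half_coverage_time(G)
t_rand = si_half_coverage_time(R)
# As in Figure 4(b), diffusion typically reaches half the network
# sooner on the rewired (less clumpy) graph.
```

The degree-preserving rewiring keeps each participant’s activity level fixed while destroying community structure, which isolates the effect of ‘clumpiness’ on diffusion speed.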

Overall, these results reveal an important characteristic of student discussion in MOOCs: when it comes to significant communication between learners, there are simply too many discussion topics and too much heterogeneity (i.e. clumpiness) to result in truly global-scale discussion. Instead, most information exchange, and by extension any knowledge construction, in the discussion forums occurs in small, short-lived groups, with information “trapped” within them. This finding is important as it highlights structural limitations that may impact the ability of MOOCs to facilitate communication amongst learners who look to learn “in the crowd”.

These insights into the communication dynamics motivate a number of important questions about how social learning can be better supported, and facilitated, in MOOCs. They certainly suggest the need to leverage intelligent machine learning algorithms to support the needs of crowd-based learners; for example, in detecting different types of discussion and patterns of engagement during the runtime of a course to help students identify and engage in conversations that promote individualised learning. Without such interventions the current structural limitations of social learning in MOOCs may prevent the realisation of a truly global classroom.

The next post addresses qualitative content analysis and how machine-learning community detection schemes can be used to infer latent learner communities from the content of forum posts.

Read the full paper: Gillani, N., Yasseri, T., Eynon, R., and Hjorth, I. (2014) Structural limitations of learning in a crowd – communication vulnerability and information diffusion in MOOCs. Scientific Reports 4.

Rebecca Eynon holds a joint academic post between the Oxford Internet Institute (OII) and the Department of Education at the University of Oxford. Her research focuses on education, learning and inequalities, and she has carried out projects in a range of settings (higher education, schools and the home) and life stages (childhood, adolescence and late adulthood).

UK teenagers without the Internet are ‘educationally disadvantaged’

A major in-depth study examining how teenagers in the UK are using the internet and other mobile devices says the benefits of using such technologies far outweigh any perceived risks. The findings are based on a large-scale study of more than 1,000 randomly selected households in the UK, coupled with regular face-to-face interviews with more than 200 teenagers and their families between 2008 and 2011.

While the study reflects a high level of parental anxiety about the potential of social networking sites to distract their offspring, and shows that some parents despair at their children’s tendency to multitask on mobile devices, the research by Oxford University’s Department of Education and Oxford Internet Institute concludes that there are substantial educational advantages in teenagers being able to access the internet at home.

Teenagers who do not have access to the internet in their home have a strong sense of being ‘educationally disadvantaged’, warns the study. At the time of the study, the researchers estimated that around 10 per cent of the teenagers were without online connectivity at home, with most of this group living in poorer households. While recent figures from the Office for National Statistics suggest this dropped to five per cent in 2012, the researchers say that still leaves around 300,000 children without internet access in their homes.

The researchers’ interviews with teenagers reveal that they felt shut out of their peer group socially and also disadvantaged in their studies, as so much of the college or school work set for them to do at home required online research or preparation. One teenager, whose parents had separated, explained that he would ring his father, who had internet access, and any requested materials were then posted to him.

Researcher Dr Rebecca Eynon commented: ‘While it’s difficult to state a precise figure for teenagers without access to the internet at home, the fact remains that in the UK, there is something like 300,000 young people who do not – and that’s a significant number. Behind the statistics, our qualitative research shows that these disconnected young people are clearly missing out both educationally and socially.’

In an interview with a researcher, one 14-year old boy said: ‘We get coursework now in Year 9 to see what groups we’re going to go in Year 10. And people with internet, they can get higher marks because they can like research on the internet … my friends are probably on it [MSN] all the day every day. And like they talk about it in school, what happened on MSN.’

Another teenager, aged 15, commented: ‘It was bell gone and I have a lot of things that I could write and I was angry that I haven’t got a computer because I might finish it at home when I’ve got lots of time to do it. But because when I’m at school I need to do it very fast.’

Strikingly, this study contradicts claims that others have made about the potential risks of such technologies adversely affecting the ability of teenagers to concentrate on serious study. The researchers, Dr Chris Davies and Dr Rebecca Eynon, found no evidence to support this claim. Furthermore, their study concludes that the internet has opened up far more opportunities for young people to do their learning at home.

Dr Davies said: ‘Parental anxiety about how teenagers might use the very technologies that they have bought their own children at considerable expense is leading some to discourage their children from becoming confident users. The evidence, based on the survey and hundreds of interviews, shows that parents have tended to focus on the negative side—especially the distracting effects of social networking sites—without always seeing the positive use that their children often make of being online.’

Teenagers’ experiences of the social networking site Facebook appear to be mixed, says the study. Although some regarded Facebook as an integral part of their social life, others were concerned about the number of arguments that had escalated due to others wading in as a result of comments and photographs being posted.

The age at which teenagers first used Facebook was found to fall over the three-year period, from around 16 years old in 2008 to 12 or 13 years old by 2011. Interviews reveal that even the very youngest teenagers, including those who were not particularly interested, felt under some peer pressure to join. But the study also suggests that the popularity of Facebook is waning, with teenagers now exploring other forms of social networking.

Dr Davies commented: ‘There is no steady state of teenage technology use—fashions and trends are constantly shifting, and things change very rapidly when they do change.’

The research was part-funded by Becta, the British Educational Communications and Technology Agency, a non-departmental public body formed under the last Labour government. The study findings are contained in a new book entitled Teenagers and Technology, published by Routledge in November 2012.

Understanding low and discontinued Internet use amongst young people in Britain

The Internet has become an important feature of the lives of the majority of young British people, providing them with another avenue to support their learning, inform their life choices about work and life opportunities, make and maintain friendships, and learn about and engage with the world around them. For many it is taken for granted. While the extent to which young people engage with the opportunities of the online world varies considerably, the majority of this age group can be considered to be within the digital mainstream. Indeed, in popular discourse many commentators assume that all young people are digitally included, and notions of the ‘google generation’ or ‘net gen’ continue to flourish.

However, the reality is far more nuanced and complex than this—when we empirically explore how young people really engage with the Internet and related technology we see a significant amount of diversity in how and why they use it, and the influences it has on their lives. We know from nationally representative survey data that around 10% of young people in the UK (aged 17–23) define themselves as people who no longer use the Internet, that is, as ‘lapsed users’. This group is fascinating. Why do these people stop using the Internet given its prevalence and value in the lives of the majority of their peers? What difficulties do they face in being unable to connect properly with the online world?

The widely held and very powerful assumption by government, commercial organisations and the wider public that all young people are frequent and confident users of the Internet is clearly inaccurate. Worryingly, however, this public assumption that the current generation of youth is ‘born digital’ is so powerful that it has informed numerous policies and initiatives that determine young people’s lives. Furthermore, the majority of academic research investigating how young people access, use and experience the Internet actually focuses on those we might consider as belonging to the digital mainstream, with a relatively limited focus on those who do not use the Internet, or who use it in very limited ways. This is primarily because most of the work in this area is based on large-scale surveys (which fail to pick up detail on minority groups), and because existing qualitative studies tend to focus on moderate to high-end Internet users, who are more willing to participate in academic studies and are also far easier to find and recruit.

A recent study undertaken by myself and Anne Geniets for the Nominet Trust examined why these young people are disconnected, how they think and feel about this disconnection, and the extent to which this is due to reasons of exclusion or choice. We also examined the implications for their daily lives and considered how the experiences of these young people could inform the UK’s digital inclusion strategy. Thirty-six in-depth interviews were undertaken with young people (aged 17–23) who considered themselves to be infrequent or lapsed Internet users.

We uncovered a complex set of reasons why these people are not online, including fear of bullying, literacy issues, poverty, lack of skills to use the Internet, and lack of access. However, we also found that the respondents generally recognised the huge importance of the Internet and the tangible benefits it brought to their (online) peers. Some were frustrated that they couldn’t join in; others were resigned about it. Only a few didn’t see the point of the Internet. When so much of today’s world is premised on effective use of the Internet to (for example) drive the economy and employment, this is worrying; particularly given that the gap for these young people is only going to widen in the future. As more and more services both in and outside the public sector go ‘digital by default’ (for example, when supermarkets accept only online applications), the relative disadvantage for this group increases.

It is therefore important to recognise that while the UK Government’s ‘digital by default’ strategy may be successful at encouraging the unwilling (but capable) online—such as many older users—it may not be appropriate for this group, who tend to be high users of government services but who for various reasons—cognitive, psychological, socio-cultural, physical and material—are still offline, despite recognising the Internet’s importance in the world around them. Perhaps most worrying (and something that was mentioned by many of our respondents) was that being young and therefore supposedly ‘digital’—according to today’s societal norms—actually made it much harder for them to seek help. Simply recognising the fact that not all young people are digitally literate or active is important for the people who interact with them: for example, teachers, potential employers, social workers, government employees, and job centre staff. We need to allow for the possibility that young people may need support in using the Internet, enable them to identify problems with their skill sets, and move forward with educational initiatives to ensure that all young people have an opportunity to fully explore the online world and develop the skills needed to support that process while they are still in education.

There were many surprises in this chronically under-researched group, particularly in what they understood by ‘Internet use’. Some defined themselves as non-users but still occasionally used email; others gave their email passwords to a friend to handle their accounts (which included writing emails) for them. This ambiguity surrounding user definitions of ‘use’ and ‘non-use’ should be recognised in future research on this topic; the concept of ‘meaningful use’ of the Internet should also be explored. In some ways this ambiguity offers something of a positive message: while these young people are well aware of their difference in relation to their peers, and do not consider themselves to be ‘proper’ Internet users, they are still sometimes able to access and use the Internet—albeit to a very limited extent. As this group is often willing to try to use the Internet, seeing it as a normal and necessary part of life, we believe that successful intervention is possible.

However, it was clear from the interviews that many of these young people were experiencing difficult situations—including homelessness, unemployment, bullying and increasing isolation. Being excluded from the Internet’s benefits means that these young people are probably even more likely to belong to a social ‘out-group’; and as social psychological research has shown, this can have a significant negative effect on identity development and the perception of self. For young people who are already disadvantaged this is obviously less than ideal. We need to encourage initiatives that develop and extend social capital for these young people, perhaps by facilitating connections between those who used to be outside the digital mainstream and those who are still outside it.

A good start might be to simply acknowledge that this group actually exists, and to adopt a more nuanced understanding of what it means to actually use the Internet in a meaningful way.

Young people in transition are particularly at risk of being both socially and digitally excluded

On 23 March 2012, the Oxford Internet Institute welcomed stakeholders from a variety of backgrounds to our workshop ‘On the Periphery? Low and Discontinued Internet use by Young People in Britain: Drivers, Impacts and Policies’. One of the key themes that emerged over the course of the day was that digital inclusion cannot be addressed without tackling social exclusion, for many of those who are currently not online are also socially excluded.

The Government’s recent digital inclusion campaigns seem at first sight to recognise this need. For example, the UK ICT Strategy paper pledges that “The Government will work to make citizen-focused transactional services ‘digital by default’ where appropriate using Directgov as the single domain for citizens to access public services and government information. For those for whom digital channels are less accessible (for example, some older or disadvantaged people) the Government will enable a network of ‘assisted digital’ service providers, such as Post Offices, UK online centres and other local service providers” (§45, UK ICT Strategy 2011).

‘By default’ strategies are at the core of a concept called ‘libertarian paternalism’, which was initially advanced and popularised by two American academics, Richard Thaler and Cass Sunstein, and has since been adopted by a number of governments around the world. In the UK, it has inspired the creation of the Cabinet Office’s Behavioural Insights Team, commonly known in Whitehall as the ‘Nudge Unit’.

The idea behind the libertarian paternalism concept is that the government gently encourages citizens to act in socially beneficial ways, without infringing their freedom or liberty, and through these nudges it improves economic welfare and well-being for the whole of society. Governments nudge by reorganising the context in which citizens make certain decisions, a strategy also referred to as ‘choice architecture’. To quote a common example, it may not be at the forefront of learner drivers’ minds to sign up for the organ donor register, but by asking them whether they would like to join at the end of their application for a provisional driving licence, many may choose to opt in. In other words, while learner drivers are by default not enrolled as organ donors, they are gently ‘nudged’ by authorities to join the organ donor register and to help tackle the nationwide shortage of organ donations.

To apply libertarian paternalism to issues where citizens have the freedom to make a choice is sensible. Libertarian paternalism, after all, has already proven to be beneficial in a number of aspects of civic life. But when the concept is applied to issues where citizens do not have a choice because of restricted resources, ‘by default’ strategies risk becoming a tool for social exclusion. This poses a democratic problem.

This, our research suggests, is a current threat for young people who are high users of government services but infrequent users of the Internet.

The benefits of moving government services online are clear. Older citizens who do not go online often do not do so due to a range of factors, such as lack of skills, lack of interest or the absence of an Internet connection. While these reasons are complex, there is often, at least to some extent, an element of digital choice. Thus, for many people within this group, digital by default strategies that encourage citizens to use the government’s online services may work well: for example, through the provision of support at UK online centres, and through initiatives such as Go On Give an Hour in the context of the UK Race Online 2012 campaign.

However, for younger citizens, who have used the Internet at school and have grown up with the Internet as a part of normal life, not using the Internet or using the Internet in limited ways is more likely to be linked to issues such as the costs of going online. The majority of this group do not need to be nudged into using the Internet.

Preliminary findings of our ‘Lapsed Use of the Internet Amongst Young People in the UK’ project confirm this hypothesis. They suggest that young people in transition in particular often find it difficult to get access to the Internet. These are young people who have just left school and don’t have Internet access at home; young people who are in transitory homes or homeless; young people who have just arrived in the UK as refugees; and young people who are working only part-time, or are unemployed, and therefore cannot afford to access the Internet.

Sometimes the computers are full, so I go to the British library and can check my email and can see whether I have received something, because at the moment I am looking for jobs. If I am waiting for something important or if I have applied for a job … I have to keep checking my Internet and if I don’t have access to the Internet I really worry. [Alexandra, 20]

They actually cut the funding. And this is why places like the youth club here and Connexions that used to be open are no longer open, and the one-stop shop in L, all got their fundings cut, and they closed down. And, they, I’m surprised this place [youth club] is open, you know. But what can you do?  Nothing, you would have nothing. You would seriously have nothing… [Giorgio, 23]

Young people in transition are particularly at risk of being both socially and digitally excluded. Because of their restricted resources, accessing the Internet is not typically a matter of choice for them. This is why an ICT strategy based on choice architecture is not going to work for the majority of young people who are currently ex-users or non-users of the Internet. Instead, there is a danger that digital by default strategies doubly disadvantage those young people without Internet access, by aggravating and slowing down their enrolment process for government services and job programmes.

Therefore, strategies need to be developed that target young ex- and non-users of the Internet specifically, to ensure that these young people who are already part of an ‘Internet by default generation’ do not slip through the net, both technologically and socially.