This is the big year for computational propaganda — using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits.
Platforms like Twitter and Facebook now provide a structure for our political lives. We’ve always relied on many kinds of sources for our political news and information: family, friends, news organisations, and charismatic politicians all predate the internet. But whereas those are sources of information, social media now provides the structure for political conversation. And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.
First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends. Many of us assume a modicum of truth-in-advertising: we demand it of advertisements for commercial goods and services, but not of politicians and political parties. Occasionally a political actor is punished for betraying the public trust through a misinformation campaign. But in the United States “political speech” is completely free from reasonable public oversight, and in most other countries the media organisations and public offices charged with watching politicians are legally constrained, poorly financed, or themselves untrustworthy. Research demonstrates that during the campaigns for Brexit and the US presidency, large volumes of fake news stories, false factoids, and absurd claims were passed over social media networks, often by highly automated accounts on Twitter and by Facebook’s algorithms.
Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”. When offered the choice of who to spend time with or which organisations to trust, we prefer to strengthen our ties to the people and organisations we already know and like. When offered a choice of news stories, we prefer to read about the issues we already care about, from pundits and news outlets we’ve enjoyed in the past. Random exposure to content is gone from our diets of news and information. The problem is not that we have constructed our own community silos — humans will always do that. The problem is that social media networks take away the random exposure to new, high-quality information.
This is not a technological problem. We are social beings, so we will naturally look for ways to socialise, and we will use technology to socialise with each other. But technology could be part of the solution. A not-so-radical redesign might occasionally expose us to new sources of information, or warn us when our own social networks are getting too bounded.
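One way to picture that “warn us when our networks are getting too bounded” idea: score a user’s recent feed by how concentrated it is among a few sources. This is a hypothetical sketch — the function name, the measure, and the idea of a warning threshold are illustrative assumptions, not any platform’s actual mechanism — but it shows how simple such a signal could be:

```python
from collections import Counter

def feed_homogeneity(story_sources, top_n=3):
    """Fraction of recent stories that came from the user's top-N sources.

    A crude, hypothetical measure of how 'bounded' a news diet is:
    a score near 1.0 means nearly everything comes from a handful of
    outlets, and could trigger a gentle warning in the interface.
    """
    if not story_sources:
        return 0.0
    counts = Counter(story_sources)
    top_share = sum(count for _, count in counts.most_common(top_n))
    return top_share / len(story_sources)

# Example: a diet dominated by two outlets scores high,
# a varied diet scores lower.
bounded = feed_homogeneity(["OutletA"] * 8 + ["OutletB"] * 2)
varied = feed_homogeneity(["A", "B", "C", "D", "E", "F"])
```

A platform could compute something like this over each user’s last few hundred stories and nudge anyone above a chosen threshold toward unfamiliar sources.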
The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations to which we hold journalists and civil society groups.
In most democracies, the public opinion and exit polling systems have been broken for a decade. Many social scientists now find that big data, especially network data, does a better job of revealing public preferences than traditional random-digit-dial systems. So Facebook actually got a moral pass twice this year: their data on public opinion would certainly have informed the Brexit debate, and their data on voter preferences would certainly have informed public conversation during the US election.
Facebook has run several experiments now, published in scholarly journals, demonstrating that they can accurately anticipate and measure social trends. Whereas journalists and social scientists feel an obligation to openly analyse and discuss public preferences, we do not expect this of Facebook. The network effects that pollsters clearly failed to measure were almost certainly observable to Facebook. When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda. The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies: their engineering decisions are effectively editorial decisions, and we should expect more openness about how their algorithms work, and more deliberation about their editorial choices.
There are some ways to fix these problems. Opaque software algorithms shape what people find in their news feeds. We’ve all noticed fake news stories (often called clickbait), and while these can be an entertaining part of using the internet, it is pernicious when they are used to manipulate public opinion. On platforms like Twitter, automated accounts (bots) were used in both the Brexit and US presidential campaigns to aggressively advance the case for leaving Europe and the case for electing Trump. Similar algorithms work behind the scenes on Facebook, where they govern which content from your social networks actually gets your attention.
So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms. Was Hillary Clinton really replaced by an alien in the final weeks of the 2016 campaign? We all need to be able to see who wrote this story, whether or not it is true, and how it was spread. Most important, Facebook should not allow such stories to be presented as news, much less spread. If they take ad revenue for promoting political misinformation, they should face the same regulatory punishments that a broadcaster would face for doing such a public disservice.
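Part of such an audit is mechanical: given a record of share events and some signal of account automation, anyone could tally how much of a story’s spread came from highly automated accounts. The sketch below is hypothetical — the posts-per-day threshold is an illustrative assumption, not an established bot test, and real audits would need platform cooperation to get the data at all:

```python
from collections import Counter

def amplification_report(shares, posts_per_day, bot_threshold=50):
    """For each story, the fraction of shares made by accounts posting
    at or above `bot_threshold` times per day.

    `shares` is a list of (account, story) share events; `posts_per_day`
    maps accounts to their average daily posting rate. The threshold is
    a hypothetical heuristic for 'highly automated', not a standard.
    """
    totals, automated = Counter(), Counter()
    for account, story in shares:
        totals[story] += 1
        if posts_per_day.get(account, 0) >= bot_threshold:
            automated[story] += 1
    return {story: automated[story] / totals[story] for story in totals}
```

An auditor with access to this kind of data could flag stories whose spread is dominated by automated accounts before they are promoted as news.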
The second problem is a social one that can be exacerbated by information technologies. This means it can also be mitigated by technologies. Introducing random news stories and ensuring exposure to high-quality information would be a simple — and healthy — algorithmic adjustment to social media platforms. The third problem could be resolved with moral leadership from within social media firms, but a little public policy oversight from elections officials and media watchdogs would help. Did Facebook see that journalists and pollsters were wrong about public preferences? If so, it should have told us and shared that data.
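The adjustment for the second problem could be as small as mixing a few randomly chosen, editorially vetted stories into an otherwise personalised ranking. A minimal sketch, assuming a pre-vetted pool of high-quality stories exists — the function and its parameters are illustrative, not any platform’s actual interface:

```python
import random

def diversify_feed(ranked_stories, vetted_pool, epsilon=0.1, seed=None):
    """Interleave randomly chosen stories from a vetted high-quality
    pool into a personalised ranking.

    Before each ranked story, with probability `epsilon`, one story is
    drawn at random from `vetted_pool` and inserted, restoring some of
    the random exposure that pure personalisation removes.
    """
    rng = random.Random(seed)
    pool = list(vetted_pool)  # copy so the caller's pool is untouched
    feed = []
    for story in ranked_stories:
        if pool and rng.random() < epsilon:
            feed.append(pool.pop(rng.randrange(len(pool))))
        feed.append(story)
    return feed
```

Setting `epsilon` to zero recovers today’s fully personalised feed; even a value of 0.05 would guarantee occasional exposure to stories outside a user’s silo.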
Social media platforms have provided a structure for spreading fake news; we users tend to trust our friends and family; and we don’t hold media technology firms accountable for degrading our public conversations. The next big thing in technology is the Internet of Things, which will generate massive amounts of data that will further harden these structures. Is social media damaging democracy? Yes, but we can also use social media to save democracy.