There has been a resurgence of interest in recent years in setting policies for digital platforms and in addressing the challenges of platform power. It has been estimated that there are over 120 public inquiries taking place across different nation‐states, as well as within supranational entities such as the United Nations and the European Union. Similarly, the current surge in inquiries, reviews and policy statements concerning artificial intelligence (AI), such as the Biden Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in the U.S., the U.K.'s AI Safety Summit and the EU AI Act, also speaks to the desire to put regulatory frameworks in place to steer the future development of digital technologies.
The push for greater nation‐state regulation of digital platforms has occurred in the context of the platformisation of the internet, and the concentration of control over key functions of the digital economy by a relatively small number of global technology corporations. This concentration of power and control is clearly apparent with artificial intelligence, where what the U.K. House of Commons Science, Innovation and Technology Committee referred to as the access to data challenge is paramount: ‘the most powerful AI needs very large datasets, which are held by few organisations’ (House of Commons Science, Innovation and Technology Committee, 2023, p. 18). As a result, the extent to which the politics of platform governance appears as a direct contest between corporate and governmental power is clearer than was the case in the early years of the open Internet.
In my Policy & Internet paper, “Mediated Trust, the Internet and Artificial Intelligence: Ideas, interests, institutions and futures”, I argue that trust is a central part of communication, and communication is central to trust. Moreover, the nature of that connection has intensified in an age of universal and pervasive digital media networks.
The push towards nation‐state regulation of digital platforms has come from the intersection of two trust vectors: the lack of transparency and accountability associated with automated decision‐making involving digital technologies; and the growth of the economic, political and communications power of the major platform companies. Many now argue that countervailing regulatory power is required to check such untrammelled private power. This has been furthered by the increasingly clear limitations of industry and corporate self‐regulation, the approach that was pervasive in the early years of the Internet’s evolution.
I use the concept of mediated trust to identify the relationship between trust, power and communication technologies. As with other fields of social research, communications scholarship recognises the interconnectedness between three levels of trust: the macro (societal trust), the meso (trust in institutions and organisations), and the micro (interpersonal and intergroup trust). Where communication makes a distinctive contribution arises from the observation that all of these interconnected relationships are founded in discourse. As the philosopher John Dewey observed in 1916, “society exists not only by … communication, but it may fairly be said to exist in … communication”.
Moreover, a key element of a communications approach to trust is to recognise that all of these levels of communication are not only interconnected but are technologically mediated. Using the broad definition of media as any channel or platform (medium) that enables communication, we can see how all levels of societal interaction (micro, meso and macro) have always had an element of technologically mediated communication associated with them, but that the Internet has blurred distinctions between what we used to call ‘the virtual’ and ‘the real’. The concept of mediated trust thus captures both the centrality of communication to trust, and the evolving technological infrastructures through which mediated communication occurs.
The early history of the Internet, then, was shaped by a dominant set of ideas clustered around freedom, openness, the market economy and limited government intervention. This was accompanied by what Tom Streeter termed a romanticism around the relationship between access to information and the creation of a better world, which both preceded and shaped the dominant interests around Internet governance, as well as the institutions and policies that have grown up around it.
By contrast, current debates on AI come at a time when it is very much apparent that AI development will be primarily led by those corporate giants that have access to the largest and most diverse data streams on which to train machine learning models. In contrast to the early Internet, we can thus expect dominant corporate and state interests to drive institutional frameworks, and ideas of technological nationalism to become more important than those of multilateralism and a ‘digital commons’.
Trust is emerging as a significant, if shifting, focus across the various AI reviews. Trust has been described as ‘a central driver for widespread acceptance of AI’ (Australian Government Department of Industry, Science and Resources, 2023, p. 4), while Michael Birtwistle of the Ada Lovelace Institute observed to the U.K. House of Commons inquiry that to realise the economic benefits of AI ‘we need public trust; we need those technologies to be trustworthy, and that is worth investing regulatory capability in’ (House of Commons Science, Innovation and Technology Committee, 2023, p. 29). In proposing a ‘pro‐innovation’ approach to AI regulation in the U.K., the Department for Science, Innovation and Technology and the Office for Artificial Intelligence observed:
Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it. Such reluctance can reduce demand for AI products and hinder innovation (Department for Science, Innovation & Technology and Office for Artificial Intelligence, 2023, p. 33).
There is a need for clarity in these policy discussions about what is meant by ‘trust in AI’. The term often defaults to developing trustworthy AI systems, equated with meeting technical standards or engaging in risk assessment and risk mitigation. These approaches are open to the critique that has been made of AI ethics: that such commitments can become mere box‐ticking exercises, with the underlying principles rendered toothless in the absence of accountability and meaningful sanctions for non‐compliance.
The question of trust in AI will inevitably be bound up with wider debates around trust in institutions and trust in both communicative processes and the corporate and government entities engaged with them. It can thus be seen as a subset of wider debates around the future of mediated trust.
Note: the above draws on the author’s work recently published in Policy & Internet.
All articles posted on this blog give the views of the author(s), and not the position of Policy & Internet, nor the Faculty of Arts and Social Sciences at the University of Sydney.