
By embracing digital transformation, policymakers can create more efficient, transparent, and fair tax systems that benefit both governments and taxpayers.


The digitalisation of tax administration is a hot topic in the EU, with significant implications for VAT collection. Our recent study explores how the level of e-government, measured by the E-Government Development Index (EGDI), affects VAT evasion, specifically the VAT gap, across EU Member States from 2003 to 2020. The findings reveal that higher levels of digitalisation in tax administration significantly reduce the VAT gap, highlighting the importance of digital transformation in public services.

Why is this research important to policymakers? Here are three key elements that resonate with their needs:

Enhanced Efficiency and Transparency: Digitalisation improves the efficiency of tax collection by reducing information asymmetry between taxpayers and tax authorities, which leads to better compliance and less tax evasion. Policymakers can leverage these insights to advocate for more robust digital infrastructure in tax administration, ensuring that tax systems are transparent and efficient.

Tailored Policy Measures: The study shows that the impact of digitalisation varies between original and new EU Member States. For instance, while digitalisation and corruption perception significantly affect the VAT gap in the original Member States, new Member States are more influenced by household consumption and standard VAT rates. This differentiation suggests that policymakers should tailor their digitalisation strategies to the specific needs and contexts of their countries.

Combatting Tax Evasion: The research underscores the role of digital tools in combatting VAT fraud, including carousel fraud. By implementing measures such as electronic invoicing and real-time transaction reporting, policymakers can significantly reduce opportunities for tax evasion. These tools not only enhance revenue collection but also build public trust in the tax system.
The findings suggest that investing in digitalisation is not just a technological upgrade but a strategic move to enhance tax compliance and reduce evasion. Policymakers should focus on:

Promoting digital literacy among taxpayers to ensure they can effectively use e-government services.

Implementing comprehensive digital reporting systems to track transactions and detect fraud.

Customising digitalisation efforts…
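The cross-country relationship the study describes, the VAT gap regressed on EGDI within each Member State over time, can be illustrated with a minimal fixed-effects (within) estimator. This is a hypothetical sketch, not the study's actual code or data: the country labels and numbers below are made up purely to show the mechanics of demeaning by country before fitting a slope.

```python
# Illustrative within-estimator for a country fixed-effects panel
# regression of the VAT gap (%) on EGDI. All data here are invented.
from collections import defaultdict

def fe_slope(panel):
    """panel: list of (country, egdi, vat_gap) observations.
    Demeans x and y within each country, then fits a pooled OLS slope
    on the deviations -- the classic fixed-effects 'within' estimator."""
    by_country = defaultdict(list)
    for country, x, y in panel:
        by_country[country].append((x, y))
    num = den = 0.0
    for obs in by_country.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)  # cross-product of deviations
            den += (x - mx) ** 2        # variance of x deviations
    return num / den

# Made-up example: within each country, higher EGDI pairs with a lower VAT gap.
panel = [
    ("A", 0.50, 20.0), ("A", 0.60, 18.0), ("A", 0.70, 16.0),
    ("B", 0.40, 30.0), ("B", 0.55, 27.0), ("B", 0.65, 25.0),
]
print(fe_slope(panel))  # negative slope: more digitalisation, smaller VAT gap
```

The within transformation absorbs time-invariant country differences (such as baseline tax morale), so the slope reflects only how changes in digitalisation track changes in the VAT gap inside each country.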

How are deepfakes regulated by the AI Act? What are the main shortcomings of the AI Act in regard to regulating deepfakes?


The EU has finally adopted the Artificial Intelligence Act, signalling its commitment to global AI governance. The regulation aims to establish a comprehensive regulatory framework for AI, setting new standards that might serve as a global benchmark in the future. Creating clear and precise rules that would enable efficient safeguards for citizens against the manipulative potential of the technology was not an easy task, and the EU failed to avoid visible shortcomings.

In my study “Deep Fakes and the Artificial Intelligence Act – an Important Signal or a Missed Opportunity?”, I raise legitimate questions about the effectiveness of the solutions the EU proposes for protecting against harmful applications of deepfakes. I concentrated on two primary research questions: How are deepfakes regulated by the AI Act? What are the main shortcomings of the AI Act in regard to regulating deepfakes?

The EU has taken an important step towards regulating deepfakes, but the proposed solutions are, in my opinion, just a transitional phase. They require clarification, standardisation, and, above all, appropriate enforcement. Regulations on deepfakes have not been a priority in the framework crafted by the EU, but experience with synthetic media teaches us that strict provisions are necessary. Deepfakes can be harmful when misused. We have already experienced this, namely in attempts to manipulate electoral processes, discredit politicians, and create non-consensual pornographic content. These are only selected examples from the entire list of malicious applications.

The basis for regulating deepfakes is the protection of citizens against disinformation, with a strong focus on strictly political processes. In my opinion, this is a mistake. Statistics on video deepfakes show that non-consensual pornography is a key application, disproportionately targeting women.
It contributes not only to the victimisation of thousands of women but also to misogyny and deepening gender-based discrimination. Failure to address this issue is, in my opinion, the biggest shortcoming of regulating deepfakes in…

Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it, writes Professor Terry Flew.


There has been a resurgence of interest in recent years in setting policies for digital platforms and addressing the challenges of platform power. It has been estimated that there are over 120 public inquiries taking place across different nation-states, as well as by supranational entities such as the United Nations and the European Union. Similarly, the current surge in inquiries, reviews and policy statements concerning artificial intelligence (AI), such as the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in the U.S., the U.K.’s AI Safety Summit and the EU AI Act, speaks to this desire to put regulatory frameworks in place to steer the future development of digital technologies.

The push for greater nation-state regulation of digital platforms has occurred in the context of the platformisation of the internet, and the concentration of control over key functions of the digital economy by a relatively small number of global technology corporations. This concentration of power and control is clearly apparent with artificial intelligence, where what the U.K. House of Commons Science, Innovation and Technology Committee referred to as the access to data challenge, that ‘the most powerful AI needs very large datasets, which are held by few organisations’, is paramount (House of Commons Science, Innovation and Technology Committee, 2023, p. 18). As a result, the politics of platform governance appears far more clearly as a direct contest between corporate and governmental power than was the case in the early years of the open Internet.

In my Policy & Internet paper, “Mediated Trust, the Internet and Artificial Intelligence: Ideas, interests, institutions and futures”, I argue that trust is a central part of communication, and communication is central to trust. Moreover, the nature of that connection has intensified in an age of universal and pervasive digital media networks.
The push towards nation‐state regulation of digital platforms has come from the intersection of two trust vectors: the…

Can e-participation improve policy processes, or do existing conflicts hinder its potential?


Involving local communities in political decisions is essential for transparent governance. This involvement is especially important in controversial issues, such as the siting of infrastructure, where a balance must be struck between the collective benefits of projects and the personal costs for nearby residents. However, despite efforts to engage communities, participation processes often lead to protests, loss of trust, and project blockades.

This is where e-participation tools can play a significant role. As digital transformation reshapes governance, an increasing number of online platforms are being integrated into traditional participation processes. These platforms aim to make participation more inclusive and transparent by allowing individuals to engage regardless of location or time, and by fostering a space for clear knowledge exchange. Nonetheless, how effectively can communities use these tools, particularly when conflicts are already intense? Can e-participation improve policy processes, or do existing conflicts hinder its potential?

Our recent article published in Policy & Internet, “Digital Citizen Participation in Policy Conflict and Concord: Evaluation of a Web-Based Planning Tool for Railroad Infrastructure” by Ilana Schröder and Nils C. Bandelow, explores these questions. The research examines the performance of e-participation in both low- and high-conflict settings, focusing on a web-based tool that allows citizens to propose alternative railroad routes. Study participants were asked to use the online tool in a hypothetical scenario characterised as either conflictual or consensual. They then assessed the tool’s ability to promote inclusion, transparency, conflict resolution, and efficiency in the decision-making process.
Here are the key findings:

E-Participation Can Enhance Transparency and Mutual Understanding: Participants in both low- and high-conflict scenarios indicated that digital participation tools help enhance transparency in decision-making processes. When used effectively, these tools clarify planning criteria, include local knowledge, and improve mutual understanding among stakeholders. E-participation tools can therefore help reduce conflict escalation and facilitate creative solutions to complex issues.

Digital Tools Aren’t a One-Size-Fits-All Solution: While digital platforms have…

China is perhaps one of the most digitalized societies worldwide. Part of this sweep has been abetted by the rise of large Internet companies that offer key services for everyday social and economic life in the general population.

Digitalization has swept through the global economy, and China is perhaps one of the most digitalized societies worldwide. Part of this sweep has been abetted by the rise of large Internet companies that offer key services for everyday social and economic life in the general population. Such services range from social networking sites that enable digital connectivity across geographical distance and time, to payment infrastructure that facilitates digital transactions and money transfers, to new platforms that expand options for video games and video communication (such as short videos). The prominence of these services has been lucrative for Internet companies. But their success has also made them a ripe target for regulation. My latest work examined the policies that have emerged out of China in response to the growth of Internet companies.

Internet companies in China have leveraged their rich balance sheets to acquire or purchase minority stakes in smaller companies deemed conducive to growth. The most salient of these purchases include Tencent’s acquisition of a minority stake in California-based Snapchat and Alibaba’s stake in Chinese streaming platform MangoTV. The two cases capture the growing lengths to which Internet companies would go in searching for new investment targets and engines of growth. Companies were not only looking to acquire competitors; they were also looking to acquire firms beyond the Internet sector and even beyond national borders. This volley of acquisition activity was one of the major legislative battlegrounds for China’s policy crackdown. New policies introduced stringent reporting guidelines that covered Internet firm activities across national borders, curbed anti-competitive practices, and institutionalized new channels of oversight through a collaboration of government ministries.
If a rich balance sheet were all a company needed to acquire without limit, we would see private interests totalize social and economic life, resulting in greater inequality and the recession of government power (and the public interest). These concerns about the growing influence of Internet companies are not restricted to China.…

In his latest editorial for Policy and Internet, John Hartley argues that a whole-of-humanity effort to meet the challenges of the ‘digital information space’ is impossible, unless we draw from those who have experienced colonialism.

In November 2023, the OECD convened a conference in Paris to ‘identify effective policy responses to the urgent challenges’ member countries face in the ‘information space’. It warned: Today, less than a quarter of citizens say they trust their news media and a majority worry that journalists, governments and political leaders purposely mislead them. In this context, the instantaneous and global spread of information, targeted disinformation campaigns that deceive and confuse the public, and rapidly changing media markets pose a fundamental threat to democracies.

As the OECD recognises, ‘a new governance model is needed to establish a whole-of-society approach to fight mis- and disinformation and preserve freedom of speech.’ However, as I argued in a Policy and Internet editorial, a whole-of-humanity effort to meet these challenges is impossible to achieve through incumbent political arrangements.

This quagmire is the result of the ‘information space’ of the Internet being riven by enmities and conflict. Purposeful opposition to this digital ‘New World’ is treated as criminal gangsterism. Anyone who is not one of ‘us’ must be one of ‘them’: an enemy. As per Ronfeldt and Arquilla, there are plenty: China, Russia, Iran, Wikileaks, criminal cartels (hacking, fraud, ransom), along with religious and nationalist ‘terrorists’ (Palestinians, Kurds, or Kashmiri, but not Israel, Türkiye, or India). Andreessen adds accelerationist activists for libertarian sovereignty, while Marwick and others include far-right populists and populism.

However, an additional challenge impedes universally inclusive efforts: the privileged status of OECD countries and their citizens, which is currently being challenged. According to Frydl, people in OECD countries like to think of themselves as affluent, advanced, and mostly white.
However, I argue, as life becomes increasingly digitalised, these very people are beginning to learn what it feels like to be messed around, their lives harmed and their resources farmed by unaccountable external agents that owe allegiance to no one. That is, citizens in OECD countries are beginning to learn what colonialism is through challenges to sovereignty and security delivered via the digital ‘information…

With political advertising increasingly taking place online, the question of regulation is becoming inescapable. In their latest paper, published in Policy & Internet, Junyan Zhu explores the definition of online political advertising and identifies key questions regulators must confront when devising effective frameworks.

The rapid surge of online political advertising in recent years has introduced a new dimension to election campaigns. Concerns have arisen regarding its potential consequences for democracy, including data privacy, voter manipulation, misinformation, and accountability issues. But what exactly is an online political advert? This question is hard to answer: reports show that 37 per cent of respondents in the 2021 Eurobarometer Survey could not easily determine whether online content was a political advertisement or not. As of now, only a few platform companies, including Facebook and Google, have defined in their own terms what constitutes this form of content.

To address the conceptual challenges faced by policymakers, in our latest paper we conducted interviews with 19 experts from regulatory bodies, professional advertising associations, and civil society organisations engaged in discussions surrounding online political advertising in both the United Kingdom and the European Union. We delved into the policymakers’ perspectives, seeking to distil their understanding of what constitutes an “advert”, an “online” platform, and “political” content. Instead of crafting new definitions, we pinpointed the factors underlying these terms and illustrated them through a sequence of decision trees. Specifically, our work led us to pose three questions that regulators need to confront:

What does it mean for content to be considered an “advert”? When we inquired about the criteria for identifying an advert, the key point that consistently emerged was payment. The central question is whether payment is involved in the distribution or creation of content, and it also depends on the timing of the payment. Some interviewees also acknowledged the increasingly blurred boundaries between paid and unpaid content: there are organic ways of spreading material that do not involve payment, such as an unpaid tweet.
These differences matter, as they suggest alternative criteria for determining what should or should not count as an advert.

What does it mean for an advert to be “online”? This turned out to be the most challenging question…
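The payment criterion described above can be pictured as the first branch of such a decision tree. The following is a hypothetical sketch, not taken from the paper: the field names and the simple creation-or-distribution rule are illustrative assumptions, and the timing-of-payment dimension the interviewees raised is deliberately omitted for brevity.

```python
# Hypothetical first branch of an advert-classification decision tree,
# based on the payment criterion interviewees emphasised.
from dataclasses import dataclass

@dataclass
class Content:
    paid_creation: bool      # was payment involved in producing the content?
    paid_distribution: bool  # was payment involved in spreading it (e.g. a boosted post)?

def is_advert(c: Content) -> bool:
    """Content counts as an advert if payment touched either
    its creation or its distribution."""
    return c.paid_creation or c.paid_distribution

boosted_post = Content(paid_creation=False, paid_distribution=True)
organic_tweet = Content(paid_creation=False, paid_distribution=False)
print(is_advert(boosted_post))   # payment for distribution: counts as an advert
print(is_advert(organic_tweet))  # unpaid organic spread: does not
```

Even this toy version shows why the boundary blurs: an influencer paid to create a post but sharing it organically would flip the first flag only, and regulators must decide whether that alone is enough.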

Today, internet policy must confront issues relating to embedded interests, monopoly power, geopolitics, colonisation, warfare, automation, the environment, misinformation, safety, security and more.

*Submissions for this event have closed. Please refer to the event page for further details*

Policy innovation for inclusive internet governance

Location: Social Sciences Building (A02), Lecture Theatre 200, University of Sydney, Camperdown Campus
Dates and time: 28-29 September 2023, 8:30am – 4:30pm

Call for papers

The task of internet policy making has changed markedly over the past two decades. The ‘move fast, break things’ era, during which a central policy concern was how to manage economic disruption across industry sectors from entertainment to journalism, retail, transport and hospitality, has evolved into a digital era characterised by complex and interconnected social, political and economic global challenges. Today, internet policy must confront issues relating to embedded interests, monopoly power, geopolitics, colonisation, warfare, automation, the environment, misinformation, safety, security and more. As DeNardis (2014) has argued, conflicts within internet governance involve critical negotiations over economic and political power, and how these conflicts are resolved “will determine some of the most important public interest issues of our time”.

In seeking to resolve these conflicts, there is a risk that the dominant economic and geopolitical actors will structure outcomes in their interest. An inclusive approach to internet governance is needed if we are to achieve an equitable distribution of digital resources and opportunities. Inclusive internet governance requires that the voices, interests and values of the marginalised are included in policy making processes, so that dominant ideologies can be challenged and alternative imaginaries realised (Gurumurthy & Chami, 2016).

Novelty and innovation in internet policy are themselves challenging. Typically, policy making is driven by past experiences (Schot and Steinmueller, 2018) and constrained by institutional formalities, hierarchies and procedures (Bauer, 2014).
Innovation, on the other hand, requires space for exploration and experimentation with opportunities “only partially known” (Bauer & Bohlin, 2022). How does policy innovation occur?  This conference seeks to bring together a range of international voices to demonstrate how varying approaches towards internet policy are established, embodied and engaged with by…

The growing interest in crowdsourcing for government and public policy must be understood in the context of the contemporary malaise of politics, which is being felt across the democratic world.

If elections were invented today, they would probably be referred to as “crowdsourcing the government.” First coined in a 2006 issue of Wired magazine (Howe, 2006), the term crowdsourcing has come to be applied loosely to a wide variety of situations where ideas, opinions, labor or something else is “sourced” in from a potentially large group of people. Whilst most commonly applied in business contexts, there is an increasing amount of buzz around applying crowdsourcing techniques in government and policy contexts as well (Brabham, 2013).

Though there is nothing qualitatively new about involving more people in government and policy processes, digital technologies in principle make it possible to increase the quantity of such involvement dramatically, by lowering the costs of participation (Margetts et al., 2015) and making it possible to tap into people’s free time (Shirky, 2010). This difference in quantity is arguably great enough to obtain a quality of its own. We can thus be justified in using the term “crowdsourcing for public policy and government” to refer to new digitally enabled ways of involving people in any aspect of democratic politics and government, not replacing but rather augmenting more traditional participation routes such as elections and referendums.

In this editorial, we will briefly highlight some of the key emerging issues in research on crowdsourcing for public policy and government. Our entry point into the discussion is a collection of research papers first presented at the Internet, Politics & Policy 2014 (IPP2014) conference organised by the Oxford Internet Institute (University of Oxford) and the Policy & Internet journal.
The theme of this very successful conference—our third since the founding of the journal—was “crowdsourcing for politics and policy.” Out of almost 80 papers presented at the conference in September last year, 14 of the best have now been published as peer-reviewed articles in this journal, including five in this issue. A further handful of papers from the conference focusing on labor…

If we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

The range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, but there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. Image by Brad Flickinger

Child Internet safety is a topic that continues to gain a great deal of media coverage and policy attention. Recent UK policy initiatives such as Active Choice Plus, in which major UK broadband providers agreed to provide household-level filtering options, or the industry-led Internet Matters portal, reflect a public concern with the potential risks and harms of children’s Internet use. At the same time, the range of academic literature analysing the risks and opportunities of Internet use for children has grown substantially in the past decade, in large part due to the extensive international studies funded by the European Commission as part of the excellent EU Kids Online network.

Whilst this has greatly helped us understand how children behave online, there’s still surprisingly little empirical evidence on how perceived risks translate into actual harms. This is problematic, first, because risks can only be identified if we understand what types of harms we wish to avoid, and second, because if we only undertake research on the nature or extent of risk, then it’s difficult to learn anything useful about who is harmed, and what this means for their lives.

Of course, the focus on risk rather than harm is understandable from an ethical and methodological perspective. It wouldn’t be ethical, for example, to conduct a trial in which one group of children was deliberately exposed to very violent or sexual content to observe whether any harms resulted. Similarly, surveys can ask respondents to self-report harms experienced online, perhaps through the lens of upsetting images or experiences. But again, there are ethical concerns about adding to children’s distress by questioning them extensively on difficult experiences, and in a survey context it’s also difficult to avoid imposing adult conceptions of ‘harm’ through the wording of the questions.
Despite these difficulties, there are many research projects that aim to measure and understand the relationship between various types of physical, emotional or psychological harm…