How are deepfakes regulated by the AI Act? What are the main shortcomings of the AI Act in regard to regulating deepfakes?

The EU has finally adopted the Artificial Intelligence Act, signaling its commitment to global AI governance. The regulation aims to establish a comprehensive regulatory framework for AI, setting new standards that may serve as a global benchmark in the future. Creating clear and precise rules that would give citizens effective safeguards against the manipulative potential of the technology was not an easy task, and the EU has not avoided visible shortcomings.

In my study “Deep Fakes and the Artificial Intelligence Act – an Important Signal or a Missed Opportunity?” I raise questions about the effectiveness of the solutions the EU has proposed for protecting against harmful applications of deepfakes. I concentrated on two primary research questions: How are deepfakes regulated by the AI Act? And what are the main shortcomings of the AI Act with regard to regulating deepfakes?

The EU has taken an important step towards regulating deepfakes, but the proposed solutions are, in my opinion, only a transitional phase. They require clarification, standardization, and, above all, appropriate enforcement. Deepfakes have not been a priority of the EU’s regulatory framework, but experience with synthetic media teaches us that strict provisions are necessary. Deepfakes can be harmful when misused. We have already seen this in attempts to manipulate electoral processes, discredit politicians, and create non-consensual pornographic content, and these are only selected examples from a much longer list of malicious applications.

The rationale for regulating deepfakes rests on protecting citizens against disinformation, with a strong focus on strictly political processes. In my opinion, this is a mistake. Statistics on video deepfakes show that non-consensual pornography is their key application, one that disproportionately targets women. It contributes not only to the victimization of thousands of women but also to misogyny and the deepening of gender-based discrimination. Failure to address this issue is, in my opinion, the biggest shortcoming of regulating deepfakes in…

Trust is a critical driver for AI adoption. If people do not trust AI, they will be reluctant to use it, writes Professor Terry Flew.

There has been a resurgence of interest in recent years in setting policies for digital platforms and in addressing the challenges of platform power. It has been estimated that over 120 public inquiries are taking place across different nation-states, as well as within supranational entities such as the United Nations and the European Union. Similarly, the current surge in inquiries, reviews and policy statements concerning artificial intelligence (AI), such as the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in the U.S., the U.K.’s AI Safety Summit and the EU AI Act, speaks to this desire to put regulatory frameworks in place to steer the future development of digital technologies.

The push for greater nation-state regulation of digital platforms has occurred in the context of the platformisation of the internet and the concentration of control over key functions of the digital economy in a relatively small number of global technology corporations. This concentration of power and control is clearly apparent with artificial intelligence, where what the U.K. House of Commons Science, Innovation and Technology Committee referred to as the access to data challenge, whereby ‘the most powerful AI needs very large datasets, which are held by few organisations’, is paramount (House of Commons Science, Innovation and Technology Committee, 2023, p. 18). As a result, the extent to which the politics of platform governance appears as a direct contest between corporate and governmental power is clearer than it was in the early years of the open Internet.

In my Policy & Internet paper, “Mediated Trust, the Internet and Artificial Intelligence: Ideas, interests, institutions and futures”, I argue that trust is a central part of communication, and communication is central to trust. Moreover, the nature of that connection has intensified in an age of universal and pervasive digital media networks. The push towards nation-state regulation of digital platforms has come from the intersection of two trust vectors: the…