
Deep Fakes and the AI Act: an Important Signal or a Missed Opportunity?

How are deepfakes regulated by the AI Act? What are the main shortcomings of the AI Act in regard to regulating deepfakes?


The EU has finally adopted the Artificial Intelligence Act, signaling its commitment to global AI governance. The regulation aims to establish a comprehensive regulatory framework for AI, setting standards that may serve as a global benchmark in the future. Creating clear and precise rules that provide efficient safeguards for citizens against the manipulative potential of the technology was no easy task, and the EU did not avoid visible shortcomings.

In my study “Deep Fakes and the Artificial Intelligence Act – an Important Signal or a Missed Opportunity?” I raise questions about the effectiveness of the solutions the EU proposes for protecting citizens against harmful applications of deepfakes. I concentrate on two primary research questions:

How are deepfakes regulated by the AI Act?

What are the main shortcomings of the AI Act in regard to regulating deepfakes?

The EU has taken an important step towards regulating deepfakes, but the proposed solutions are, in my opinion, just a transitional phase. They require clarification, standardization, and, above all, appropriate enforcement.

Deepfakes were not a priority in the regulatory framework crafted by the EU, but experience with synthetic media teaches us that strict provisions are necessary. Deepfakes can be harmful when misused. We have already seen this in attempts to manipulate electoral processes, discredit politicians, and create non-consensual pornographic content, and these are only selected examples from a long list of malicious applications.

The AI Act grounds its regulation of deepfakes in the protection of citizens against disinformation, with a strong focus on strictly political processes. In my opinion, this is a mistake. Statistics on video deepfakes show that non-consensual pornography is their key application, disproportionately targeting women. It contributes not only to the victimization of thousands of women but also to misogyny and deepening gender-based discrimination. The failure to address this issue is, in my opinion, the biggest shortcoming of the AI Act’s treatment of deepfakes. The EU wasted the chance to refer, even symbolically, to women’s tragedies, for example in the recitals added to the regulation. The situation is only partially remedied by the Directive on combating violence against women and domestic violence, which patches this hole in the AI Act by obliging member states to penalize the creation and sharing of so-called deep porn.

The AI Act introduces a clear definition of deepfakes, which should be assessed positively: “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.” The definition will help standardize the common understanding of what deepfakes really are, especially since it covers a variety of subjects and objects of depiction and limits the scope to audio and visual content, excluding text generated by AI.

Unfortunately, the discrepancies between the descriptions of deepfakes in three EU legal acts (twice in the AI Act alone) may be problematic. Each differs slightly from the others, which may cause interpretation problems in the future and does not reflect the best legislative technique.

Deepfakes are classified as “limited risk” AI systems, a category that triggers transparency obligations rather than outright bans. This decision might be seen as controversial, given the significant risks associated with particular misuses, especially deep porn. Critics argue that some deepfakes should be categorized as “high-risk” or as posing “systemic risk” due to their potential to harm health, safety, and fundamental rights.

The AI Act imposes transparency obligations on providers and deployers of AI systems, including the obligation to mark content as AI-generated or manipulated. These measures, however, may not be sufficient to prevent harmful uses by professional disinformation actors, who will not comply with any rules, or in the case of non-consensual pornography, where disclosure alone does not prevent the harm.

The regulation ties in with the Digital Services Act (DSA), which holds digital platforms accountable for identifying and removing illegal content, including deepfakes. This synergy between the AI Act and the DSA is crucial for mitigating the risks posed by deepfakes. Digital platforms will also fill another hole in the AI Act by requiring individual users to clearly label AI-generated creations; the AI Act itself imposes no such obligation for private, non-professional use of AI systems.

The AI Act marks a significant step in regulating deepfakes, setting a foundational legal framework and addressing some of the technology’s complexities. However, it falls short in several areas, particularly in protecting against non-consensual deepfake pornography and in ensuring robust enforcement of transparency obligations. It should be seen as a transitional phase requiring further enhancements, which might be achieved through national legislation and future revisions by the European Commission that adjust the safeguards to technological advancements.

At the moment, the verdict still hangs between an important signal and a missed opportunity. However, I am optimistic. The EU has taken an important step: it has established the foundations of the protection system, and this is where the building of comprehensive countermeasures begins.

Note: the above draws on the author’s work recently published in Policy & Internet.

All articles posted on this blog give the views of the author(s), and not the position of Policy & Internet, nor the Faculty of Arts and Social Sciences at the University of Sydney.