Opinion

Getting to the bottom of the deepfake problem


Over the past few months, easily half of my media and news appearances have been about “deepfakes.”

Derived from “deep learning” and “fake,” deepfakes use advanced AI techniques to manipulate or generate hyper-realistic content that can deceive viewers. While the technology holds significant potential for creative and beneficial applications, it also presents substantial risks, particularly in the realms of disinformation and digital trust.

With the Commission on Elections’ recent declaration of its plan to ban deepfakes in the upcoming elections, it is imperative that we understand the types of deepfakes, their uses, the difficulties in detecting them, and strategies for policing them. This knowledge is essential in navigating a complex and fast-moving landscape.

Types of deepfakes

Deepfake technology manifests in several forms, each with unique characteristics and applications:

•    Face Swapping - This type involves replacing a person's face in a video with another's, creating highly convincing imitations. Popularized by internet memes and video edits, face-swapping technology can seamlessly blend one person's facial expressions and movements with another's visage.

•    Lip Synching - Here, the technology alters the movement of a person’s lips to match a different audio track. This can make it appear as though someone is saying things they never actually said, often with startling realism.

•    AI-Generated Avatars - Entirely synthetic personas created from scratch using AI, these avatars can be designed to look and sound like real people, even though they do not exist in the real world. They are increasingly used in virtual environments and digital customer service applications.

Uses of deepfakes

Deepfakes are not inherently malicious and can serve creative and innovative purposes. In filmmaking, for instance, deepfake technology can be used to create lifelike CGI characters or to bring historical figures back to life on screen.

This technology also holds promise for the gaming industry, where AI-generated avatars and realistic character interactions can enhance player experience. Additionally, deepfakes can be used in marketing and advertising to create engaging and personalized content.

Unfortunately, the darker side of deepfake technology poses serious societal risks. Deepfakes can be weaponized to create false narratives, spread disinformation, and tarnish reputations.

In the realm of politics, deepfakes have been used to create fake speeches or statements by public figures, potentially swaying public opinion and undermining democratic processes. In more personal contexts, deepfakes can be used to produce non-consensual explicit content, leading to harassment and severe emotional distress for victims.

The potential for deepfakes to facilitate fraud, extortion, and other malicious activities is a growing concern.

Detecting and policing deepfakes

One of the most significant challenges posed by deepfakes is how difficult they are to detect. As the technology behind deepfakes continues to advance, distinguishing between real and manipulated content becomes increasingly difficult.

Current detection methods often lag behind the latest deepfake techniques, and the tools available are frequently inaccessible to the general public. This technological arms race between deepfake creators and detectors underscores the need for continuous investment in research and development of robust detection mechanisms.

Policing deepfakes requires a multifaceted approach involving technology, legislation, and public awareness:

•    Technological Solutions: Advancements in AI and machine learning are crucial for developing effective deepfake detection tools. Collaboration between tech companies, academic institutions, and governments can accelerate the creation and deployment of these technologies. Investment in digital forensics and the development of watermarks or digital signatures for authentic content are also promising avenues.

•    Legislation and Regulation: Governments need to enact laws that specifically address the creation and distribution of malicious deepfakes. This includes criminalizing the production and dissemination of deepfakes intended to deceive or harm, while balancing these measures with protections for legitimate creative and journalistic uses.

•    Public Awareness and Education: Educating the public about the existence and risks of deepfakes is vital. Media literacy programs can help individuals develop critical thinking skills to question the authenticity of the content they encounter. Encouraging skepticism and the use of trusted sources for information can mitigate the impact of deepfakes on public perception.

•    Platform Responsibility: Social media and content-sharing platforms must take an active role in identifying and removing deepfake content. Implementing stringent verification processes, improving content moderation, and working with fact-checkers can help reduce the spread of harmful deepfakes.
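
To make the digital-signature idea above concrete, here is a minimal sketch of how a publisher could attest to the authenticity of a piece of media. Real provenance standards such as C2PA use public-key signatures and embedded metadata; this simplified illustration uses an HMAC with a hypothetical publisher key, and all names are illustrative assumptions, not any specific platform’s API.

```python
import hashlib
import hmac

# Hypothetical secret held by a publisher. Real provenance schemes use
# public-key cryptography so anyone can verify without holding a secret.
PUBLISHER_KEY = b"example-publisher-key"

def sign_content(content: bytes) -> str:
    """Produce a signature the publisher could attach to authentic media."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the content still matches the attached signature."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

video_bytes = b"original video frames..."
sig = sign_content(video_bytes)

print(verify_content(video_bytes, sig))         # unaltered content verifies
print(verify_content(b"tampered frames", sig))  # any edit breaks the check
```

The point of such a scheme is that authenticity is established at publication time: a deepfake derived from the original would fail verification because even a one-bit change to the content changes the signature.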

Vigilance is the price of innovation

The deepfake dilemma is a complex and evolving issue that demands proactive and concerted efforts across multiple sectors. By understanding the nuances of deepfake technology and implementing comprehensive strategies to combat its misuse, we can protect the integrity of digital media and safeguard public trust.

Dominic Ligot is the founder, CEO, and CTO of CirroLytix, a social impact AI company, and of Data Ethics PH, an advocacy group examining the misuses of data and AI. He also serves as the head of AI and Research at the IT and Business Process Association of the Philippines (IBPAP), and as the Philippines’ representative to the Expert Advisory Panel on the International Scientific Report on Advanced AI Safety.