Deepfakes: From TikTok to Threatening the Fabric of Society

What are deepfakes? Many of you have probably seen them somewhere on the Internet, whether it be a fake Tom Cruise or Jon Snow’s moving apology for the dismal ending to Game of Thrones. However, the technology behind them is still unclear to most, so this piece will be dedicated to explaining deepfakes, their harmful effects, and some potential ways to tell real videos from fake ones. Let’s begin with a brief overview of how they are made.

At their core, deepfakes are an application of advanced machine learning. Essentially, computers are fed thousands and thousands of images and videos and are trained to recognize the patterns present in that media. Most deepfake systems can be described by an encoder-decoder architecture. As a first step, you run thousands of face shots of two people through an algorithm called an encoder. The encoder finds and learns similarities between the faces and reduces them to their shared common features, compressing the images in the process. A second algorithm, called a decoder, is then taught to recover each person’s face from the compressed representation.
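The architecture above can be sketched in a few lines of code. This is a deliberately minimal toy, assuming linear maps in place of the deep convolutional networks real deepfake systems use, and random weights in place of trained ones; the point is only the shape of the pipeline: one shared encoder, one decoder per person, and a face swap performed by decoding person A’s compressed face with person B’s decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" is a flattened 8x8 grayscale patch (64 values),
# compressed to an 8-dimensional shared feature code. (Hypothetical sizes
# chosen for illustration; real models work on full-resolution frames.)
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder learns features common to both people's faces...
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))

# ...while each person gets a separate decoder that reconstructs that
# person's face from the shared code.
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    return W_enc @ face          # compress: 64 values -> 8

def decode(code, W_dec):
    return W_dec @ code          # reconstruct: 8 values -> 64

# The swap itself: encode a frame of person A, then decode it with
# person B's decoder, yielding B's face with A's expression and pose.
frame_of_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # (64,)
```

In a trained system, each decoder is optimized to reconstruct only its own person’s faces from the shared code, which is what makes the cross-decoding trick produce a convincing swap.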

Deepfakes Can Perpetuate Grave Harm

One of the most insidious impacts of deepfakes, synthetic media, and fake news is the creation of a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood. And once trust is eroded, it becomes easier to cast doubt on specific events.

Last year, Cameroon’s minister of communication dismissed as fake news a video that Amnesty International believes shows Cameroonian soldiers executing civilians. “The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable,” says Prof Lilian Edwards, a leading expert in internet law at Newcastle University, in a report about deepfakes in The Guardian. As the technology becomes more accessible, deepfakes could cause trouble for court cases, where faked events could be entered as evidence. They also pose a personal security risk: deepfakes can mimic biometric data and can potentially trick systems that rely on face, voice, vein, or gait recognition.

What are the potential solutions or ways to spot deepfakes? Unfortunately, it gets more difficult to discern an actual image from a deepfake as the technology improves. In 2018, researchers discovered that deepfake faces don’t blink normally. At first, it seemed like a silver bullet for the detection problem. But no sooner had the research been published than deepfakes appeared with blinking. As soon as a weakness is revealed, it is fixed.
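Blink detection of the kind the 2018 research relied on is typically built on the eye aspect ratio (EAR), a standard heuristic from the computer-vision literature: the ratio of an eye’s vertical landmark distances to its horizontal width, which collapses toward zero during a blink. The sketch below is an illustration of that heuristic, not the researchers’ exact method; the landmark coordinates are made up, and a real pipeline would obtain them per frame from a face-landmark detector.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, with the corners at
    indices 0 and 3, as in the common 68-point face-landmark scheme."""
    p = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark positions (not real detector output):
open_eye   = [(0, 0), (2, 2.0), (4, 2.0), (6, 0), (4, -2.0), (2, -2.0)]
closed_eye = [(0, 0), (2, 0.2), (4, 0.2), (6, 0), (4, -0.2), (2, -0.2)]

print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```

Counting how often the EAR dips below a threshold over a stretch of video gives a blink rate; an unnaturally low rate was the tell. As the article notes, this worked only until deepfake generators learned to blink.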

According to Checkify, technology could be used to identify poor-quality deepfakes. The lip-synching might be glitchy, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are challenging for deepfakes to render well, especially where strands are visible on the fringe. Poorly rendered jewelry and teeth can also be a giveaway, as can strange lighting effects, such as inconsistent illumination and reflections on the iris.

Governments, universities, and tech firms are all funding research to detect deepfakes. In the run-up to the 2020 US election, Facebook banned deepfake videos likely to mislead viewers into thinking someone “said words that they did not actually say.” Twitter and YouTube made similar adjustments to their Terms of Service. Additionally, there have been legislative efforts at both the state and federal levels to combat the impact of deepfakes: eleven state and federal bills were introduced to regulate them. Though the federal proposals have yet to move forward, the state bills have found success. California, Virginia, and Texas have enacted deepfake laws, and legislation is pending in Massachusetts, New York, and Maryland.

For the foreseeable future, deepfakes will continue to pose a grave threat. Their astonishingly broad capacity to affect everything from interpersonal to international relations makes them one of the most critical technologies for governments and institutions to keep working to understand and regulate.
