Yes, your eyes are deceiving you
Deepfakes — where are they now? When the world was first introduced to the concept of “deepfakes” in 2017, every news outlet was quick to warn of the new tech phenomenon, predicting that it would make misinformation even more widespread and harder to debunk. Almost six years on, deepfakes, while never far from the headlines, haven’t exactly triggered the catastrophe that was predicted. So what is the technology being used for now?
First things first: What are deepfakes? Deepfakes use machine learning, AI, and deep learning technology to manipulate visual content, replacing one person’s likeness with another’s or adding content to an existing video. The world was first introduced to deepfakes more than five years ago by a Reddit user posting under that very username, who shared NSFW videos that superimposed the faces of celebrities onto explicit content without their consent, according to AI-focused platform Decoder.
The tech caused an immediate — and justified — backlash: Linked to abuse and misinformation from the start, the spread of deepfake tech has been slowed by concerns from the public, private firms, and policymakers. “Before the alarm was raised on these issues, you had no policies by social media companies to address this … Now you have policies and a variety of actions they take to limit the impact of malicious deepfakes,” AI expert Aviv Ovadya told The Atlantic. Social media platforms have started to look out for deepfakes using content moderation and human and software detection methods.
But the inevitable cannot be stopped, only delayed: Some experts say that despite attempts to control the tech, deepfakes are bound to take off within a few years as AI becomes smarter and more advanced. TikTok, Snapchat, and Instagram filters already integrate some form of deepfake tech to change and enhance images. The website Deepfakes Web allows users to create their own deepfake content. And a number of less advanced applications have popped up that let users animate still images into GIFs.
Deepfakes are big business on TikTok: A Tom Cruise deepfake TikTok account under the username DeepTomCruise has accumulated 3.8 mn followers (the account is aiming for parody rather than trying to pass itself off as Cruise). Cruise not your bag? There are also deepfake accounts for the likes of Keanu Reeves, Harry Styles, and Justin Bieber and his wife Hailey.
But deepfakes still have a way to go before they’re interchangeable with the real thing. Although the whole appeal of deepfakes is that they should be indistinguishable from reality, a study at Cornell University found that people are more likely to detect fake video or audio than they are to identify inaccuracies in written text. Some say that’s because the tech isn’t quite there yet. In one example, viewers of a clip of Ukrainian President Volodymyr Zelensky surrendering to his Russian counterpart Vladimir Putin were able to identify the content as fake thanks to Zelensky’s oversized head and questionable accent.
Where do policymakers stand? The World Intellectual Property Organization believes (pdf) that deepfakes have the potential to violate human rights, privacy rights, and personal data protection rights, as well as raising questions over copyright and attribution. European law allows victims of deepfakes to exercise their “right to be forgotten” if the content shared is inaccurate, under which the content should be erased — though that’s much easier said than done. China, the EU, and several US states have in recent years introduced regulations attempting to clamp down on misinformation spread through deepfakes.
Deepfakes likely can’t be stopped. But improved media literacy could stop us from falling for them. Deepfakes are the latest in a long line of tools for altering or falsifying audio, images, and video, all of which have become more common with the rise of the digital age. That makes it all the more crucial that societies double down on teaching media and digital literacy — which helps people identify the sources they can trust, pick up on inconsistencies and inaccuracies in media, and take everything they read, see, or hear with a grain of salt.