The internet is awash in deepfakes: audio, pictures or video made using artificial intelligence tools in which people appear to do or say things they didn’t, to be somewhere they weren’t, or to have their appearance altered. Some involve nudification, in which photos are altered to depict someone unclothed. Other deepfakes are deployed to scam consumers or to damage the reputations of politicians and other people in the public eye.
Advances in AI mean it takes just a few taps on a keyboard to conjure up a realistic deepfake. Alarmed governments are trying to fight back, but it’s been a losing battle. Fraud attempts using deepfakes have grown more than 20-fold in the past three years, according to data from identity verification company Signicat.
Where have deepfakes been in the news?
On July 20, US President Donald Trump circulated a deepfake video appearing to show former President Barack Obama being arrested inside the Oval Office. The video had appeared on TikTok before being reposted on Trump’s Truth Social account, without any comment.
Imposters have used voice-cloning tools to pose as prominent politicians, with recent cases involving Trump’s predecessor Joe Biden and Secretary of State Marco Rubio. During the 2024 US presidential election, Elon Musk shared a deepfake campaign video featuring Democratic candidate Kamala Harris without labeling it as misleading. The video’s AI-manipulated voice appeared to show her calling Biden senile and declaring that she didn’t “know the first thing about running the country.” The video garnered tens of millions of views. In response, California Governor Gavin Newsom vowed to ban deceptive political deepfakes, and he signed a law doing so in September.
In January 2024, explicit deepfake images of pop star Taylor Swift were widely shared on social media, drawing the ire of her legions of fans. Chinese internet trolls circulated manipulated images of wildfires on the Hawaiian island of Maui in August 2023 to support an assertion that they were caused by a secret “weather weapon” being tested by the US. In May of that year, US stocks dipped briefly after an image spread online appearing to show the Pentagon on fire. Experts said the fake picture had the hallmarks of being generated by AI. That February, a manufactured audio clip emerged with what sounded like Nigerian presidential candidate Atiku Abubakar plotting to rig that month’s vote. In March 2022, a minute-long video published on social media appeared to show Ukrainian President Volodymyr Zelenskiy telling his soldiers to lay down their arms and surrender to Russia.
Some deepfakes are harmless, such as those of soccer star Cristiano Ronaldo singing Arabic poems.
How are deepfake videos made?
They are often crafted using an AI algorithm that’s trained to recognize patterns in real video recordings of a particular person, a process known as deep learning. It’s then possible to swap an element of one video, such as the person’s face, into another piece of content without it looking like a crude montage. The manipulations are most misleading when used with voice-cloning technology, which breaks down an audio clip of someone speaking into half-syllable chunks that can be reassembled into new words that appear to be spoken by the person in the original recording.
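For readers who want a concrete picture, below is a minimal sketch, in PyTorch, of the classic face-swap design described above: one shared encoder learns features common to two people, and a separate decoder reconstructs each person’s face. The layer sizes, training loop and the random tensors standing in for aligned face crops are illustrative assumptions, not any real tool’s design.

```python
import torch
import torch.nn as nn

LATENT = 256
IMG = 64 * 64 * 3  # tiny 64x64 RGB face crops, flattened for simplicity

def make_encoder() -> nn.Module:
    return nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, LATENT))

def make_decoder() -> nn.Module:
    return nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Sigmoid())

encoder = make_encoder()    # shared: learns pose, lighting, expression
decoder_a = make_decoder()  # learns to rebuild person A's face
decoder_b = make_decoder()  # learns to rebuild person B's face

params = [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, IMG)  # stand-ins for real aligned crops of person A
faces_b = torch.rand(32, IMG)  # stand-ins for real aligned crops of person B

for _ in range(100):  # each decoder learns to reconstruct its own person
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode a frame of person A, then decode it as person B.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```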
How did deepfakes take off?
The technology was initially the domain of academics and researchers. However, Motherboard, a Vice publication, reported in 2017 that a Reddit user called “deepfakes” had devised an algorithm for making fake videos using open-source code. Reddit banned the user, but the practice spread. Initially, deepfakes required video that already existed and a real vocal performance, along with savvy editing skills.
Today’s “generative” AI systems allow users to produce convincing images and video from simple written prompts. Ask a computer to create a video putting words into someone’s mouth and it will appear.
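As a rough illustration of how little is now required, the sketch below generates an image from a text prompt using the open-source Hugging Face diffusers library. The checkpoint name and prompt are assumptions for illustration (any publicly available text-to-image model would do), and running it requires a GPU and a large model download.

```python
import torch
from diffusers import StableDiffusionPipeline

# Checkpoint name is illustrative; substitute any text-to-image model
# available on the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a podium in an ornate government office, news photo").images[0]
image.save("generated.png")
```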
How can you tell if it’s a deepfake?
The digital forgeries have become harder to spot as AI companies apply the new tools to the vast body of material available on the web, from YouTube to stock image and video libraries.
Occasionally there are clear telltale signs that an image or video is generated using AI, such as a misplaced limb or a six-fingered hand. There might be inconsistencies between the colors of edited and unedited parts of an image. Deepfake videos sometimes fail to match speech with mouth movements. AI may struggle to render the fine details of elements such as hair, mouths and shadows, and the edges of objects can sometimes be jagged and pixelated.
But all of this may change as the underlying models improve.
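One classic forensic trick that exploits such inconsistencies is error level analysis: recompress a JPEG once and look at where the result differs most from the original, since edited regions often recompress differently from untouched ones. The Pillow-based sketch below is a toy illustration of that idea, not a production detector, and the file names are hypothetical.

```python
import io
from PIL import Image, ImageChops

def error_level(path: str, quality: int = 90) -> Image.Image:
    """Return a difference map; brighter regions recompressed less cleanly."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once, in memory
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)

# Hypothetical usage:
# error_level("suspect.jpg").save("ela_map.png")
```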
What’s the danger here?
The fear is that deepfakes will eventually become so convincing that it will be impossible to distinguish what’s real from what’s fabricated. Imagine fraudsters manipulating stock prices by producing forged videos of chief executives issuing corporate updates, or falsified videos of soldiers committing war crimes. Politicians, business leaders and celebrities are especially at risk, given how many recordings of them are available.
The technology makes so-called “revenge porn” possible even if no actual naked photo or video exists, with women typically targeted. Once a video goes viral on the internet, it’s almost impossible to contain. An additional concern is that spreading awareness about deepfakes will make it easier for people who truly are caught on tape doing or saying objectionable or illegal things to claim that the evidence against them is bogus. Some people are already using a deepfake defense in court.
What’s being done to combat deepfakes?
In May, Trump signed the “Take It Down Act,” which criminalizes AI-generated non-consensual pornography and requires social media companies to remove such explicit imagery upon request.
Last year, the US Federal Communications Commission made it illegal for companies to use AI-generated voices in robocalls. The ban came two days after the FCC issued a cease-and-desist order against the company responsible for an audio deepfake of Biden. New Hampshire residents received a robocall before the state’s presidential primary that sounded like Biden urging them to stay at home and “save your vote for the November election.”
The European Union’s AI Act requires platforms to label deepfakes as such. China implemented similar legislation in 2023. On April 28, the children’s commissioner for England called on the British government to ban nudification apps that are widely available online.
What else can be done to suppress deepfakes?
The kind of machine learning that produces deepfakes can’t easily be reversed to detect them. But a handful of startups, such as Netherlands-based Sensity AI and Estonia-based Sentinel, are developing detection technology, as are many big US tech companies.
Companies including Microsoft Corp. have pledged to embed digital watermarks in images created using their AI tools in order to distinguish them as fake. ChatGPT developer OpenAI has developed AI image detection technology, as well as a way to watermark text — though it hasn’t released the latter in part because it says it’s “trivial” for bad actors to get around.
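To make the watermarking idea concrete, here is a minimal sketch that hides a fixed bit pattern in an image’s least significant bits and checks for it later. This toy scheme is easily destroyed by cropping or recompression, so it only illustrates the concept; the robust watermarks the companies describe work differently, and the marker string and file paths here are illustrative assumptions.

```python
import numpy as np
from PIL import Image

# 12-byte marker expanded to 96 bits; purely illustrative.
MARK = np.unpackbits(np.frombuffer(b"AI-GENERATED", dtype=np.uint8))

def embed(path_in: str, path_out: str) -> None:
    px = np.array(Image.open(path_in).convert("RGB"))
    flat = px.reshape(-1)                                   # view onto pixels
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK   # overwrite LSBs
    Image.fromarray(px).save(path_out, "PNG")               # PNG is lossless

def detect(path: str) -> bool:
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    return bool(np.array_equal(flat[: MARK.size] & 1, MARK))
```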
The Reference Shelf
- How celebrity deepfakes supercharged a Florida health-care scheme.
- How investigators solved the Biden deepfake.
- A past Bloomberg explainer on generative AI, and a glossary of AI terms to know.
- A Bloomberg video about Lyrebird, the AI company that puts words in your mouth.
- Research from University College London suggested humans were unable to detect more than a quarter of deepfake audio recordings.
Written by Nate Lanxon and Omar El Chmouri, with assistance from Mark Bergen.
