Maybe ... but lower-tech 'cheapfakes' are the much bigger problem at the moment
In a viral video clip from 2018 [cw: profanity], Barack Obama appears to warn viewers to be more vigilant about what we trust on the internet…until a split screen reveals it isn’t him speaking at all, but actor Jordan Peele. The video is a ‘deepfake’ created using face swapping technology.
Deepfakes are digitally manipulated videos that use artificial intelligence software to depict a situation that never actually happened. Often, this means placing one person's face onto another's body by superimposing existing images and audio onto source material. The end result is footage of an individual appearing to say something they never actually said.
While the technology required to create deepfakes is becoming more accessible, creating a convincing deepfake currently still takes a fair amount of time, money, and technical expertise.
Deepfakes can be relatively harmless and entertaining — insurance company State Farm, for example, recently used this technology for an ad that appeared to show an ESPN SportsCenter reporter from 1998 making accurate predictions about the year 2020. But concerns are mounting over the possibility of the technology being used for more malicious ends. In 2019, some news outlets worried that deepfakes discrediting political candidates would be a major factor in the lead-up to the 2020 U.S. election.
Deepfakes also raise serious concerns about what information is credible: if we can't believe what we are watching, how do we know what we can trust? The most pressing threat posed by deepfakes may not be their ability to deceive (to date there are no notable examples of a deepfake doing so) but rather their power to make us doubt the truth of any information viewed online.
While these concerns are valid, some argue that deepfakes may not be the most pressing threat. Though the technology is advancing quickly, less sophisticated forms of manipulated content are already frighteningly effective at misleading the public. These aren't deepfakes, but 'cheapfakes.'
Cheapfakes involve only slight modifications to existing content in order to spread disinformation. This could be as simple as selectively editing footage of an event to purposefully mislead viewers, or using basic software to alter a background.
For example, on November 1, 2020, just two days before the U.S. presidential election, a video was posted on social media that allegedly showed Democratic nominee Joe Biden forgetting what state he was in. In the video, Biden addresses a crowd saying, "Hello, Minnesota," but the banner behind him suggests that he is actually in Florida. In reality, Biden was in Minnesota, and someone had simply manipulated the signs in the video. Despite being fact-checked almost immediately, the false video was viewed over a million times in just 24 hours. The video is still circulating on social media, but Twitter has since added a "manipulated media" tag.
Show students an example of a deepfake and a cheapfake.
Explain to students that these are both examples of manipulated videos. Ask students if they can identify the difference between how these videos are manipulated.
When they finish brainstorming, explain the difference between deepfakes and 'cheapfakes' (see definitions in the glossary). Explain that while deepfakes have not yet been widely used to deceive people, cheapfakes are quite widespread, and people do believe and share them. Afterwards, have a full-class discussion about manipulated videos.
Guiding Questions
2. Verifying Videos
Review the manipulated Joe Biden video.
Ask students to imagine that someone shared this video with them. How would they go about verifying the accuracy of the video?
Watch "Skill: Check Other Sources" to review techniques for verifying claims. Afterwards, ask students to practice the skill using the Biden story. A keyword search will bring up a number of fact-checks or reports from reputable news organizations showing that the video was manipulated.
Artificial Intelligence (AI): The ability of machines and computers to perform tasks commonly associated with human intelligence. These tasks include learning, reasoning, and self-correction.
Cheapfake: A low-tech manipulated video of a person used to support a story that they did or said something, or behaved in a way that they did not. Often used to undermine or discredit the subject. Cheapfakes take little effort to produce using existing technologies and simple techniques like basic editing or slowing down the speed of a video.
Deepfake: A technologically complex fabricated video of a person that suggests they did or said something, or behaved in a way that they did not. Often used to undermine or discredit the subject. Currently, these take time, effort, and skill to produce and require the use of artificial intelligence.