Deepfake Technology: Assessing Security Risk


Imagine scrolling through your favorite social media feed when something catches your eye—a short video clip of a familiar face. Businessman turned celebrity Elon Musk is promoting a new cryptocurrency investment. All you need to do is transfer funds to a crypto wallet and the returns will be guaranteed. After all, you’ve heard stories from friends who have made money from Musk’s other endorsements.

This situation occurred recently, and a small number of investors jumped at the opportunity after seeing the interview clip of Elon Musk. Unfortunately for them, the video was not real; it was a deepfake. Deepfakes, fabricated media that imitate the likeness of an individual, can take many forms: an image of a person who does not exist, a video of someone saying or doing something they have never done, or a synthesized copy of a person's voice in an audio file. Although deepfake technology is relatively primitive, bad actors have increasingly used it for malicious purposes. As the technology progresses, people will likely continue to use it for reputation tarnishing, financial gain, and harming state security. Additionally, academics and policymakers show varying levels of concern about how deepfakes could harm society. It remains to be seen whether social media giants and governments will holistically address the misuse of deepfake technology, though some efforts are underway.

Deepfakes are potentially threatening to the individual and to the state. Both types of threats use the same communication vector and the same technology, and they provoke similar societal responses. However, differences appear when thinking through the implications of misuse. Solutions to the deepfake problem will likely differ between the two categories as governments and social media platforms weigh the technology's ultimate impact.

The vast majority of threats to the individual are related to nonconsensual pornography. In fact, the term “deepfake” originated from a Reddit user with the same username. This user introduced the technology to the mainstream through the creation and sharing of fabricated pornographic videos. Usually, these videos contain the false likeness of celebrity women. Although counterfeit, these forged pornographic videos have real consequences. Often, they inflict psychological harm on the victim, reduce employability, and affect relationships. Bad actors have also used this technique to threaten and intimidate journalists, politicians, and other semi-public figures.

Furthermore, cyber criminals use deepfake technology to conduct online fraud. For example, a recent scheme utilized artificially generated audio to match an energy company CEO’s voice. When the fake “CEO” called an employee to wire money, his slight German accent and voice cadence matched perfectly. The employee wired $243,000 to the cybercriminal before realizing his mistake. Whether deepfake fraud presents itself as the Elon Musk video mentioned earlier or the phone call described above, the result is the same. Real people are losing money to deepfake-enabled fraud online.

Threats to national security are less frequent, though in theory they may occur in peacetime or war. Distinguishing a threat to national security comes down to understanding the creator's intentions. For a wartime example, consider the early stages of Russia's invasion of Ukraine. Suspected Russian actors disseminated a deepfake video that showed Ukrainian President Volodymyr Zelensky telling his military to stand down. Social media companies quickly removed the video from circulation; however, its immediate impact is unknown. At the very least, it contributed to the barrage of misinformation spread across Ukraine as Russia invaded the country. Like other forms of misinformation, peacetime deepfake threats to national security could take the form of political deception. Academics and government agencies have asserted that state-sponsored deepfakes could attempt to sway public opinion about a politician, stoke violence, or erode public trust. For example, during the 2020 U.S. election, experts warned of potential deepfake video proliferation on social media. Fortunately, this did not seem to occur.

Academics who study deepfake technology are split regarding its overall impact on society. Those who are more concerned about the technology's potential for misuse study how deepfakes directly impact consumers' actions and attitudes, while those who are less concerned study how the technology contributes to the larger misinformation space.

Academics who are more concerned argue deepfake videos are capable of swaying public opinion when deployed with the right message to the right audience. A recent study highlighted how microtargeted deepfakes (fake videos deployed to reach a specific demographic) could impact groups' political attitudes. The research showed deepfake videos were more apt to sway consumers' political attitudes than other types of online disinformation. An additional study illustrates that those who hold controversial views aligned with the content of a deepfake are more likely to share that content online. The researchers found that a "single brief exposure to a deepfake can influence implicit attitudes, explicit attitudes, and sharing intentions." Overall, these studies show deepfakes have the capability both to change consumer perceptions when shown to a targeted audience and to reinforce existing perceptions. Research is still underway to understand the extent to which deepfakes could cause consumers to change voting habits and potentially disrupt democratic elections.

Conversely, academics who are less concerned about deepfakes argue the technology is no more disruptive than other forms of misinformation online. They claim deepfakes threaten the individual much more often (through pornography) than governments or greater society. Counter to other studies, these scholars have been unable to show deepfakes are more manipulative than other forms of fake news. For example, Murphy and Flynn found no increase in false memories among consumers who viewed deepfakes versus other types of misinformation (like simple text or images). Additional studies found deepfakes are no more effective at tarnishing a politician's reputation than other forms of misinformation.

On a different note, some academics believe deepfakes and other forms of misinformation contribute to the problem of the so-called liar's dividend (if anything can be faked, nothing has to be real). Professor and deepfake expert Hany Farid refers to the liar's dividend as his "biggest concern" when it comes to widespread usage of deepfakes. Additionally, researchers Vaccari and Chadwick presented quantitative evidence that deepfakes sow uncertainty and, in turn, reduce trust in news seen online.

Despite a lack of consensus on how deepfakes impact society, policymakers and social media giants have attempted to quell the technology’s negative repercussions. Technology companies like Facebook and Google are spending resources to detect deepfakes through efforts such as Facebook’s Deep Fake Detection Challenge and Google’s recent deepfake ban. Additionally, some experts believe a form of online content authentication (a way to verify all posted content) could solve problems associated with deepfake dissemination.
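The content-authentication idea mentioned above can be illustrated with a minimal sketch. In this hypothetical scheme, a platform computes a cryptographic tag over a file's bytes at upload time; anyone re-checking the file later can tell whether it has been altered since. Real provenance efforts (such as public-key-signed metadata embedded in media files) are considerably more involved; the HMAC-over-a-shared-key approach and all names here are illustrative assumptions chosen to keep the example dependency-free.

```python
import hashlib
import hmac

# Hypothetical platform signing key (a real system would use a managed,
# secret key or public-key signatures rather than a hardcoded value).
SIGNING_KEY = b"platform-secret-key"

def sign_content(content: bytes) -> str:
    """Produce a tag binding the platform to these exact bytes at upload."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the bytes still match the tag issued at upload time."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)

original = b"video-bytes..."
tag = sign_content(original)

print(verify_content(original, tag))           # unchanged content verifies: True
print(verify_content(b"tampered-bytes", tag))  # altered content fails: False
```

The point of the sketch is the asymmetry it creates: a verifiable tag cannot prove a video is truthful, but its absence or failure flags that the content did not pass through the authenticated pipeline unmodified, which is the property authentication proposals rely on.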

Policymakers are also attempting to reduce the impact of harmful deepfakes. For example, several states have enacted legislation to provide legal recourse for victims of deepfake pornography. The Department of Homeland Security (DHS) has conducted public threat assessments and other forms of research on deepfake technology. Congress also introduced an act which would create a National Deepfake and Digital Provenance Task Force to monitor deepfakes and bring together academia, government, and industry experts. Although promising, the bill has yet to become law or garner serious support.

Overall, deepfake technology is still in its nascent stage. As the technology improves, we will likely see more solutions for addressing its misuse. If additional evidence reveals deepfakes are more manipulative and harmful than other types of misinformation, then government intervention to stop their spread online may be necessary. Ultimately, the policy solutions offered today are unlikely to be effective, given social media's complex and fast-changing environment and the challenges associated with its regulation. Without concrete and repeatable quantification of deepfake technology's impact on society, we are unlikely to see the issue properly addressed.



About the Author: 

Jack Cook is a current graduate student in the School of International Service's Global Governance, Politics, and Security program. Regionally, he is interested in the geopolitics of the Middle East and North Africa. Academically, his interests include cyber operations, counter-terrorism, and 21st century authoritarianism.