
Digital Literacy and Remedying the Deepfake Epidemic

By Jenna Ahart



In a two-minute video posted in April, TikTok user rache.lzh5 sat curled over her camera, hair tattered and speech stuttered by sobs as she addressed leaked deepfake images of her body. Just two days before, she had begun receiving Instagram message requests from anonymous users sending edited versions of images she had posted.


“They had put [the images] through some editing AI program to edit me naked,” Rachel said in her video, picking at her acrylic nails as tracks of mascara painted her cheeks. “It’s already weird to make that on your own time, but it’s even weirder to send it to me.”


A day prior, she had posted a different video addressing the deepfakes but deleted it after receiving degrading comments. “You’re lying for attention,” “You brought this upon yourself,” and “Now you have to post the real ones” were some of the comments she remembered receiving.


The story of Rachel’s exploitation belongs to a recent onslaught of deepfake abuse. According to a recent study by Home Security Heroes, 95,820 deepfake videos currently circulate on the internet, a 550% increase since 2019. Of those videos, 98% are pornographic, and 99% feature women.


This upsurge of deepfake production can be tied to the development of generative adversarial networks (GANs). A GAN is a deep-learning AI model that manufactures output data to replicate real input data—like deepfakes designed to be mistaken for real images.


The GAN process requires two networks: the generator and the discriminator. As the generator creates false images, the discriminator looks for differences between a false image and its real counterparts, identifying errors that make the image less believable, like an inhuman face. The two networks form a feedback loop until they produce the most realistic image possible.
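For readers curious what that feedback loop looks like in practice, below is a minimal, illustrative training sketch in Python using the PyTorch library. The tiny fully connected networks and the random placeholder “real” data are assumptions for demonstration only; genuine deepfake generators are far larger image models trained on real photographs.

```python
# Minimal sketch of a GAN training loop (illustrative only; the tiny
# networks and random "real" data stand in for a genuine image model).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # arbitrary illustrative sizes

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))

# Discriminator: outputs a probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0   # placeholder "real" data
    fake = G(torch.randn(32, latent_dim))    # the generator's forgeries

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: learn to make the discriminator output 1 on fakes.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Each side improves in response to the other: the better the discriminator gets at spotting flaws, the more convincing the generator’s output must become, which is precisely why consumer deepfake tools can produce such believable images.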


GANs first emerged as a concept in a 2014 academic paper. By 2021, they had appeared in almost 700 research publications, and as of 2023, GANs have migrated from the academic sphere to consumer-based apps.


The AI model’s staggering growth means producing deepfakes no longer requires expertise in Photoshop or AI algorithms. With 42 user-friendly deepfake tools on the market, the only skill required to create a false image is the ability to press a button.


And for potential deepfake victims, the only surefire escape from non-consensual image creation is to never be photographed—a tall order for any resident of an increasingly digital world.


According to Silvia Semenzin, a digital sociology researcher and activist, the proliferation of deepfake technology endangers women from all walks of life, including minors.


“The consequence is that women, especially young girls who are often underage, are very exposed to this violence,” she said. “And according to a 2020 study, 60% of deepfake porn victims are ordinary women, a percentage that has been growing worryingly since February 2023.”


In early November, students at Westfield High School in New Jersey created deepfakes of over 30 female classmates. After learning that New Jersey had no laws against obscene deepfakes, Francesca Mani—a 14-year-old victim—wrote a letter to the White House, urging protections against deepfakes targeting underage individuals.


Currently, only a handful of states have created laws against deepfakes. Hawaii, Virginia, Texas, and Wyoming have criminalized obscene deepfakes, and Texas and California have restricted deepfakes concerning political campaigns. But in the rest of the states, deepfake use only seems to grow more rampant each day.


Still, some researchers have hope that the current deepfake epidemic could eventually be remedied. While conducting research within FACETS—a project funded by the European Research Council studying the impacts of digital media on perceptions of the human face—Marco Viola and Cristina Voto found that the distrust created by deepfakes may eventually prove their demise. As deepfakes become more widespread, they may also become less believable to the average consumer.


“There are some grounds for hoping that in five to ten years, deepfakes will become less popular,” Viola said. “It will be harder for the perpetrator to make others think the person depicted in an image is a real person.”


So far, Viola’s hypothesis seems to ring true. In a recent online experiment, he and his colleagues tested whether users reported stronger attraction to images of models depending on whether an image was labeled as real or AI-generated. Ultimately, the believed realness of an image strongly correlated with its ability to elicit arousal. If this trend continues, deepfakes could well lose their appeal as skepticism toward them grows.


Viola and Voto are careful to qualify their optimism, however. In the concluding remarks of their research article, they emphasize that deepfakes’ ability to self-dissolve is not grounds to ignore their immediate dangers: “We cannot exclude that, although the spread of deepfakes will ultimately end up depowering them, during the intermediate steps they may cause a lot of harm.”


Rather, the distrust created by deepfakes could be weaponized to accelerate their decline. By anticipating the impacts of skepticism on deepfake popularity, digital literacy programs can speed up the process of self-destruction.


Already, the MIT Center for Advanced Virtuality has created the free online course Digital Literacy in the Age of Deepfakes, which teaches students to analyze misinformation in the context of deepfake videos. And the University of Maryland, Baltimore County has recently implemented a program to train students in discerning audio deepfakes.


Still, educational efforts may not be enough to eradicate the deepfake epidemic. Viola and Voto caution that digital literacy is not a cure-all solution, especially given varying levels of digital literacy across age demographics.


Another deepfake remedy may already lie in the hands of the technology’s creators. Semenzin thinks that tech giants have the power to prevent the dangers deepfakes pose, but that CEOs deflect liability with the narrative that AI regulation is futile.


“Not only is this myth false, but it is also useful to keep the decisional power in the hands of those who today create, distribute, and control technology,” she said. “Instead, reclaiming responsibility and change is essential. Only by understanding that the current digital capitalist profits from gender-based violence, will we be able to stop discrimination.”


With digital, legal, and educational caveats littering the deepfake discussion, its solution is far from clear-cut. But digital literacy and increased regulation could still be vital first steps. Even if they aren’t a panacea, they might leave fewer victims with the helplessness Rachel voiced at the end of her video:


“The only reason you would want these pictures of me is because I don’t want them out there. The only reason that you want these pictures of me is because you like that it’s nonconsensual.”





Jenna Ahart is a senior studying journalism and astrophysics. This summer, she interned at NASA's Goddard Space Flight Center, where she wrote news and web features about gamma-ray astronomy research. She plans to continue studying science writing upon graduation.
