The New Arms Race: Deep Fakes and Their Impact on Information

By: Rebecca Hill

On May 23, 2019, House Speaker Nancy Pelosi was shown in a video slurring and stumbling over her words. Donald Trump retweeted it that evening; then Rudy Giuliani shared it too, asking, “What is wrong with Nancy Pelosi?” Fox Business News also picked it up and aired it in one of its short news segments. The speed at which it coursed through the Internet was incredible.

According to USA Today, the video was retweeted and uploaded to Facebook; by May 24 it had 2.4 million views and had been shared more than 47,000 times. YouTube removed the video. Facebook did not. The footage, experts pointed out, had been manipulated, and the term “deep fakes” once again began to circulate among the news networks.

What are deep fakes?

The term “deep fakes” has long been shrouded in dire warnings from legislators and technologists because of the increasing sophistication and malicious intent behind these manipulated works. The US House of Representatives met on June 13, 2019, to discuss deep fakes ahead of the 2020 election, though it was not the first time Congress had addressed the issue. Thus far, legislation such as the Malicious Deep Fake Prohibition Act of 2018 has failed to gain traction, although some states have taken action on their own. On June 28, 2019, U.S. Representatives introduced legislation that would require the Secretary of Homeland Security to publish an annual report on the state of digital content forgery. Similar legislation has also been introduced in the Senate.

The Early Days 

Early on, the term deep fake applied only to a manipulated or doctored video, audio clip, or image: a face-swapping manipulation. Deep fakes began in the world of pornography, where a celebrity’s face was swapped onto a porn star’s body and the video uploaded to the Internet. These early videos were easier to detect.

Hollywood, too, has been doing this for years through computer-generated imagery, or CGI. But the difference between legitimate CGI work and deep fakes is the malicious intent that often cloaks these videos. In today’s roller-coaster technological world, hackers have grown more sophisticated, using open-source software and machine-learning algorithms to raise the bar on how destructive they can be. That is why Congress is so nervous about the upcoming 2020 elections. Imagine the impact on an election of seeing a video of Donald Trump shooting someone in the street, something he claimed he could do without consequence during the 2016 campaign.

Who, if anyone, is doing anything about it?

Three years ago, DARPA, the Defense Advanced Research Projects Agency, created the Media Forensics program, or MediFor, to develop technological tools that automatically detect what is real and what is not in deep fakes. Currently, approximately ten research teams participate in the MediFor program, with each group taking a forensic approach, a more proactive approach, or both. Edward Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering at Purdue University, leads one of those teams. Nasir Memon, Professor of Computer Science and Engineering at New York University’s Tandon School of Engineering, leads another.

Delp’s Purdue team is currently using deep learning neural networks to detect inconsistencies across multiple frames in video sequences. Using this forensic approach, they have been able to detect even the subtlest differences, some as small as a few pixels.
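Delp’s actual system isn’t spelled out in this article, but the core intuition, that a tampered frame disturbs the statistics of the frames around it, can be sketched in a few lines of Python. The toy example below is a deliberately simplified stand-in, using raw pixel differences instead of a trained deep learning network; the frame size, threshold, and simulated “manipulation” are all illustrative assumptions, not the MediFor method.

```python
import numpy as np

def temporal_inconsistency_scores(frames):
    """Score each frame-to-frame transition by how far it deviates from
    the video's typical rate of change. Genuine footage tends to change
    smoothly; a tampered frame often shows up as a statistical outlier."""
    diffs = np.array([
        np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
        for i in range(len(frames) - 1)
    ])
    # Standardize each transition against the video's own baseline
    return (diffs - diffs.mean()) / (diffs.std() + 1e-8)

# Toy footage: 20 near-identical noisy frames, with frame 10 subtly altered
rng = np.random.default_rng(0)
frames = [128 + rng.integers(0, 2, (64, 64)).astype(np.uint8) for _ in range(20)]
frames[10] = frames[10].copy()
frames[10][30:33, 30:33] += 60          # a small, localized manipulation

z = temporal_inconsistency_scores(frames)
print("Suspect transitions (frame i -> i+1):", np.argwhere(z > 2.0).ravel())
# Flags transitions 9 and 10, the two that border the altered frame 10.
```

A production detector would replace the pixel-difference score with features learned by a neural network, but the pattern is the same: model what normal footage looks like, then flag frames that break the model.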

“We have designed a system where we collect video examples and then use them to train our AI algorithms to get better, even as the work of deep fake manipulators gets more sophisticated,” says Delp. “We can learn from that technique and incorporate their techniques into our system. Our system will then learn these techniques.” In some cases, the team can detect where the manipulation occurred, says Delp. Delp likes to think that they are the good guys, looking at all kinds of potential techniques to detect manipulation. But the problem is that it will continue to be a cat-and-mouse game. Eventually, says Delp, content providers and the social media platforms that host this content will have to join the fight.

Memon’s NYU team takes another approach, one that will eventually need digital video camera vendors to get on board. Memon likens the approach to a $100 bill. To protect the $100 bill from counterfeiting, several security features were added: a color-shifting numeral, a 3-D security ribbon, and a bell in the inkwell that changes color. Some of these features can be detected by holding the bill up to the light; others are simply hard to replicate.

In Memon’s work, difficult-to-reproduce watermarks would be inserted into the camera’s imaging pipeline. “These are patterns that we put inside that are not visible to the human eye but can be detected by a computer algorithm, and they are based on a secret so others cannot put them in. They are very verifiable,” said Memon.

What matters, says Memon, is the placement of these watermarks. “I think that the watermarks should be put inside right at the time of capture of the images,” said Memon. Why? Because a watermark applied early in the pipeline is harder to alter than one applied later. The problem, however, is that the earlier you place the watermark, the more cooperation you need from camera vendors, says Memon.
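The article doesn’t detail Memon’s actual scheme, but a classic way to build a secret-keyed watermark that is invisible to the eye yet verifiable by an algorithm is a spread-spectrum pattern: derive a pseudo-random pattern from a secret key, add it faintly at capture time, and verify it later by correlation. The Python sketch below illustrates only that general idea; the key, embedding strength, and decision threshold are invented for illustration.

```python
import numpy as np

def keyed_pattern(key, shape):
    """Derive a pseudo-random +/-1 pattern from a secret key. Without
    the key, a forger cannot regenerate (or re-insert) the pattern."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed_watermark(image, key, strength=2.0):
    """Add the keyed pattern faintly enough to be invisible to the eye,
    as if applied inside the camera's imaging pipeline at capture time."""
    return np.clip(image + strength * keyed_pattern(key, image.shape), 0, 255)

def verify_watermark(image, key, threshold=1.0):
    """Correlate the image against the keyed pattern: genuine captures
    score near the embedding strength; synthesized pixels score near 0."""
    score = np.mean((image - image.mean()) * keyed_pattern(key, image.shape))
    return score, bool(score > threshold)

SECRET_KEY = 42                                   # held by the camera vendor
scene = np.random.default_rng(1).uniform(0, 255, (256, 256))
captured = embed_watermark(scene, SECRET_KEY)

print(verify_watermark(captured, SECRET_KEY))     # score ~2.0 -> verifiable
forged = np.random.default_rng(2).uniform(0, 255, (256, 256))
print(verify_watermark(forged, SECRET_KEY))       # score ~0.0 -> unverifiable
```

A real deployment would verify block by block, so that an edited region fails the check while untouched regions still pass, which is one reason Memon stresses embedding at the moment of capture.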

Memon acknowledges that camera vendors may have no incentive to adopt this kind of technology until deep fakes become even more threatening. Still, he believes that camera vendors could offer it to the select group of users who rely on video, images, and audio in their work: journalists, physicians, news agencies, scientists, and insurance companies. And while the MediFor teams pursue these technological fixes, Memon and Delp both acknowledge that technical fixes alone won’t be enough to counter the threat. “People have to be careful how they consume content,” said Delp, “because there are other implications other than the fake videos.”

Other Implications

Among the many challenges that keep technologists lying awake at night is the impact deep fakes could have on our ability to discern the truth. Think back to the impact of video images you have seen. Remember the February 1968 photograph of a Viet Cong prisoner being shot in the head by a South Vietnamese general. Or the footage of First Lady Jackie Kennedy climbing over the back seat to protect her husband after he was shot. Or think of the image we all saw most recently: a man and his daughter lying dead in the shallows of the Rio Grande.

All these images created emotion, a response, and maybe even societal change. But what if you learned that they had been manipulated? How would you feel about the idea that “seeing is believing”? Would you question, from then on, the veracity of every video, audio clip, or image you see?

This is what keeps Memon and Delp up at night, worrying about what they call an “arms race.” But they are not alone; most Americans are concerned too. A recent Pew Research Center study found that 63% of U.S. adults believe altered video and images create a great deal of confusion about the facts of current events. The majority of those surveyed believed that steps should be taken to restrict deep fakes that are intended to mislead. The problem, however, is that even if we do not like them, we still consume this content without thinking about the consequences.

One concerning side effect of deep fakes is that they cast doubt on the veracity of all videos and images, whether they are fake or not. As Renee DiResta, Director of Research at New Knowledge, put it in the article “He Predicted the 2016 Fake News Crisis. Now He’s Worried About an Information Apocalypse”: “You don’t need to create a fake video for this technology to have a serious impact. You just point to the fact that the technology exists, and you can impugn the integrity of the stuff that’s real.” One example is Donald Trump’s recent attempt to cast doubt on the Access Hollywood tape by wrongly asserting that it was digitally faked. (Keep in mind, however, that when the tape was released during the 2016 campaign, he acknowledged it as “locker room talk.”)

The result? We won’t know what to believe. We might stop following the news or paying attention altogether, something experts call reality apathy. Or we might use a video to confirm our own biases and share it with our online community, an instance of what experts call “confirmation bias.” Either way, the impact on our society and our democracy could be significant.

“Sometimes late at night when I cannot sleep, I can think of some pretty bad scenarios for the future,” said Delp. “The problem is that we are so media-oriented. Part of the problem is that a lot of people think that it is just a technical fix, but we must also find ways to educate the public about their responsibility.” 

Educating the Public

Educating the public has always been the role of libraries in our society, says Delp. So this role, what we now call digital literacy, falls to librarians, who can help with the issue. Librarians, says Memon, can create awareness and help the public understand that what they are seeing may not be accurate and that they shouldn’t jump to conclusions. “Librarians can run information campaigns about deep fakes,” said Memon.

They can also validate videos against independent sources and share that information with their patrons. Imagine a placard at the reference desk that says, “Today’s Fake Video is….”

Librarians can also teach the public how to check videos and warn them about the risks of not verifying what they see. Most of all, they can teach their patrons to be critical assessors of all information, media or otherwise. Education about media manipulation should start in K-12, says Delp.

As hackers keep improving deep fake technology, detection and other countermeasures will follow close behind. But unless digital literacy is prioritized, we will always be a step behind the hackers. It’s an arms race, the new Cold War. Technological fixes won’t be enough, so librarians will have to step in, preparing patrons for the information wars to come.


Rebecca Hill

Rebecca Hill is a freelance writer who writes about libraries, literacy, science education, and other topics for a variety of online and national magazines. Currently she writes a science education column for VOYA magazine. She holds an MLS from Indiana University-Purdue University and a JD from Valparaiso University. Her interest in intellectual freedom has been piqued by the rise of technologies such as artificial intelligence and social media. She serves on the Indiana Library Federation Board of Directors and the Purdue University Libraries Dean’s Council, and is also on the Board of Trustees for her local library. A longtime advocate of libraries, reading, and writing, she counts all things words as her passion.
