(Deep)Fake News

The internet has made it easy to find answers to life’s most pressing questions in the blink of an eye. At the same time, it makes it just as easy for individuals to access material that is false. Because of this, the Information Age could also be referred to as the Disinformation Age.

Just in the past few days, a story began circulating on social media claiming that oregano oil could be used as a treatment for the coronavirus. Similarly, after news broke that NBA legend Kobe Bryant had tragically died in a helicopter crash, social media was flooded with rumors that former NBA player Rick Fox and all of Bryant’s daughters were on the plane. In actuality, Bryant, his daughter Gianna, and seven other individuals perished in the accident (more information about the victims can be read here).

These examples help highlight the pitfalls of the Disinformation Age. In a world where site traffic can be the key to both clout and money, people are often more focused on being the first individuals to report on a story than they are on making sure they tell that story truthfully and compassionately.

Luckily, fake news is not impossible to combat. Sites like PolitiFact and Snopes exist to help people fact-check information, and sometimes our own logical reasoning skills can help us determine whether something is plausible.

Unfortunately, it may not be long before even the best of us struggle to differentiate between fact and fiction. Advancements in artificial intelligence have given rise to a controversial form of media known as deepfakes. Grace Shao describes deepfakes as “manipulated videos, or other digital representations produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear to be real.” Shao then explains that to create a deepfake, a form of artificial intelligence known as a deep-learning system acquires various images and videos of a subject and eventually learns to mimic the way that subject speaks and moves. In short, deepfakes can make it look like people are saying or doing things that they aren’t.
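To make Shao’s description a bit more concrete: the classic face-swap setup pairs one shared encoder with a separate decoder per identity, so that encoding person A’s face and decoding it with person B’s decoder yields B’s likeness driven by A’s expression and pose. The toy NumPy sketch below is purely illustrative — the weights are random and untrained, the sizes are arbitrary, and a real deepfake pipeline involves far more machinery — but it shows the data flow the article describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: 100 flattened 8x8 grayscale images per identity.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

# One shared encoder, one decoder per identity (random, untrained weights;
# a real system would learn these from many images of each subject).
W_enc = rng.standard_normal((64, 16)) * 0.1
W_dec_a = rng.standard_normal((16, 64)) * 0.1
W_dec_b = rng.standard_normal((16, 64)) * 0.1

def encode(x):
    # Shared latent representation of expression/pose
    return np.tanh(x @ W_enc)

def decode(z, W_dec):
    # Identity-specific reconstruction from the shared latent space
    return z @ W_dec

# Training would minimize each identity's reconstruction error:
recon_a = decode(encode(faces_a), W_dec_a)
loss_a = np.mean((recon_a - faces_a) ** 2)

# The "swap": encode identity A, decode with identity B's decoder,
# producing B-styled faces driven by A's inputs.
swapped = decode(encode(faces_a), W_dec_b)
```

Because both identities pass through the same encoder, the latent space captures pose and expression rather than identity, which is what makes the swap trick work once the decoders are trained.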

One viral example of a deepfake, posted by YouTuber “Ctrl Shift Face,” appears to show comedian Bill Hader’s face seamlessly transform into Tom Cruise’s as Hader does an impression of him. While the video demonstrates how far this technology has come, along with its more meme-able applications, it should also serve as a warning of what artificial intelligence can do.

Whereas today we are watching Bill Hader shapeshift into Tom Cruise, tomorrow we may see doctored political ads in which a politician makes disparaging remarks that never actually came out of their mouth, or law enforcement officials using edited surveillance footage to frame someone for a crime.

Beyond that, some of the potential harms of deepfake media have already been realized. Reports show that 96% of deepfakes currently circulating depict people engaging in pornographic acts. Presumably, these videos are being made without the knowledge or consent of those featured in them.

As AI becomes more advanced, this problem will only get worse. It will become harder and harder to differentiate real videos from doctored ones. People may find themselves believing false information given to them by a deepfake, whereas others may find themselves being lambasted for something a deepfake of them did. Moreover, if people already struggle to have faith in the media in today’s era of fake news, just imagine what could happen if deepfakes were to go mainstream. In light of this, it is important that policymakers find a way to address this issue before they themselves become victims of this new media form.

Though various governmental and corporate powers are concerned about the potential damage that deepfakes can do, the current responses leave much to be desired. Last year, the Senate passed a bipartisan bill that would require the Department of Homeland Security to release a yearly report about the potential national security risks that deepfakes pose, showing that the government has an interest in stopping the spread of deepfakes, but is not yet committed to aggressive action. Similarly, Twitter has recently drafted a policy that would notify users when a tweet contained a deepfake, but would not remove the tweet unless it threatened someone’s safety.

Deepfakes have the potential to do unprecedented damage, both to those who view this material and to those who find themselves its subjects. Moving forward, policymakers and social networking companies will have to demonstrate a commitment to containing their spread before it is too late. As funny as it may be to watch Bill Hader transform into Tom Cruise, deepfakes are no laughing matter.

-Brandon James


Referenced Articles:

http://www.dailymail.co.uk/sciencetech/article-7937873/Tech-giants-battle-infection-coronavirus-fake-news-conspiracy-theories.html

http://www.fox61.com/2020/01/26/verify-fact-checking-rumors-that-spread-after-kobe-bryants-death-in-helicopter-crash/

http://www.latimes.com/california/story/2020-01-27/kobe-bryant-helicopter-crash-victims

http://www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html

http://youtu.be/VWrhRBb-1Ig

http://www.deeptracelabs.com/mapping-the-deepfake-landscape/

http://www.thehill.com/policy/cybersecurity/467462-senate-passes-legislation-to-combat-deepfake-videos

http://www.techcrunch.com/2019/11/11/twitter-drafts-a-deepfake-policy-that-would-label-and-warn-but-not-remove-manipulated-media/

7 thoughts on “(Deep)Fake News”

  1. This piece is super well written and I really enjoyed reading it. I have never heard of deepfakes and find it seriously terrifying. The first part of your post reminded me of the short story “Shooting the Apocalypse” that we read. Just like Timo and his reporter friend, people in real life are decreasingly concerned with the truth or ethics of their stories and more concerned with getting clicks and making money. I find this especially compelling in the recent case of Kobe Bryant because when the news of his death first surfaced, most people did not believe it. While this is certainly partially attributed to the fact that he was a young and impactful figure, it also shows the extent to which people are already aware of the prominence of fake news. If news sources were required to confirm and prove their stories before publishing them, people might have been more inclined to believe the story of Bryant’s death as soon as it came out. However, because false advertising and news for clicks is so readily available these days, it took even longer for most people to realize that what was happening was real.


  2. Like Currie, I did not know these deepfakes existed and they open a realm of horrifying possibility. The video you included is an excellent, uncanny example that made my skin crawl. Even though this technology presents the interesting option of using a talented impressionist in place of the real actor for entertainment purposes, if unregulated, it could also put people at risk. As you mentioned, 96% of deepfakes are in pornography, which could ruin reputations, get people fired from their jobs, and even implicate people for sexual crimes in which they were never truly involved. These videos could also be used to blackmail people into performing actual crimes. It is essential to make the public aware of the possibility that a video they are seeing could be a convincing deepfake, because if not, the concept of using video to represent anyone, especially public figures, would be compromised.


  3. This is terrifying. I have heard of deepfakes before and remember seeing a deepfake of Keanu Reeves stopping an armed robbery. However, I just shrugged it off and didn’t consider the negative applications it could have. The fact that 96% of deepfakes are sexual is quite concerning for me in numerous ways when it comes to sexual assault allegations. For one thing, it could be used to falsely accuse people or tarnish reputations. It could also be easy for people convicted of sexual assault to claim that it was just a deepfake intended to hurt them and that they actually didn’t commit the crime. There are so many ethical questions that need to be asked when it comes to the creation and use of AI, and the very existence of deepfakes seems to cross many ethical boundaries.


  4. I had seen the Bill Hader video before, and it was so seamlessly made that I had not even realized that it was a deepfake until it was pointed out to me. In a way it is very cool that we have this technological capability, but I am glad you bring up the necessity of policy action surrounding this issue before it becomes more widespread. I think it’s hard to come up with policy because the technology moves faster than the policy can be drafted, discussed, and passed.

    On a similar note, one of my friends who does research with AI showed me that machines can learn to generate nonexistent images, such as pictures of people who do not exist in real life, if they have enough data and patterns stored. For example, a model can generate pictures of nonexistent humans when given a stick figure. This technology can also be used for deepfakes of a malicious nature.


  5. Hey Brandon, I love the article and I think it’s very well written. Many people don’t understand the extent of deepfake technology, and I think this article gives a great general description of its developments and potential applications within our society. I think it would be interesting to expand on the ethical debate between self-regulation by corporations, particularly social media conglomerates, and broader, more democratic governmental regulation. Combined with the rapid developments in A.I. that you mentioned, this ethical debate is becoming more and more pressing. Overall, I really enjoyed this article, and it prompted me to consider many broader ethical implications behind media structures and technology developments.


  6. Thank you for writing this piece; I’ve thought that deepfakes have the potential to be used for revenge porn before, but I’d never heard of the 96% statistic, which is truly more terrifying than I thought. In addition to blackmail, deepfakes can be used for fraud, especially audio deepfakes. As a matter of fact, in 2019, a CEO almost got scammed into transferring €220,000 into a random bank account after a scammer used audio deepfake tech to impersonate an executive.

    When it comes to film and TV, deepfake technology seems to be a lot more innocuous, as it has been used for stunt doubles and flashbacks. However, what are the ethics of using deepfake technology to replicate an actor who has already passed away (such as using Carrie Fisher’s face for a younger Princess Leia in Rogue One)? Furthermore, what if some production companies decided to continue recycling images of established actors through deepfake technology, rather than opening up the casting pool and hiring unknown actors? Such a practice would exacerbate Hollywood’s inclusion and representation issue.


  7. Brandon,

    This piece was equally informative and frightening. The Bill Hader, Tom Cruise video was terrifying yet at the same time very interesting to me. I was unaware of the use of AI in this way, and unfortunately, I feel as if the more I learn, the more afraid I become of where technology is headed.

    This post reminded me of a podcast I once heard about the ways the Russians interfered with the last election by using Facebook meme pages to push anti-Hillary and pro-Trump propaganda. Have you, or anyone else in the class, ever heard of this alarming news? It seems as if nothing is off-limits, including honesty, when people with the right technology have an end goal in mind. It worries me for vulnerable populations in our country, the elderly and immigrants being two that come to mind immediately, who may more easily be taken advantage of with such technology. Quite the thought-provoking and entertaining piece to read!

    – Shannon

