The creation of the internet has made it easy to find the answers to life’s most pressing questions in the blink of an eye. At the same time, the internet makes it just as easy for individuals to access material that is false. Because of this, the Information Age could also be referred to as the Disinformation Age.
Just in the past few days, a story began to circulate on social media that oregano oil could be used as a treatment against the coronavirus. Similarly, after news broke that NBA legend Kobe Bryant had tragically died in a helicopter crash, social media was flooded with rumors that former NBA player Rick Fox and all of Bryant’s daughters were on board. In actuality, Bryant, his daughter Gianna, and seven other individuals perished in the accident (more information about the victims can be read here).
These examples highlight the pitfalls of the Disinformation Age. In a world where site traffic can be the key to both clout and money, people are often more focused on being first to report a story than on telling that story truthfully and compassionately.
Luckily, fake news is not impossible to combat. Sites like PolitiFact and Snopes exist to help people fact-check information, and sometimes our own logical reasoning skills can help us determine whether something is plausible.
Unfortunately, it may not be long before even the best of us struggle to differentiate between fact and fiction. Advancements in artificial intelligence have given rise to a controversial form of media known as deepfakes. Grace Shao describes deepfakes as “manipulated videos, or other digital representations produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear to be real.” Shao then explains that to create a deepfake, a form of artificial intelligence known as a deep-learning system acquires various images and videos of a subject, and eventually learns to copy the way that person speaks and moves. In short, deepfakes can make it look like people are saying or doing things that they aren’t.
One viral example of a deepfake, posted by YouTuber “Ctrl Shift Face,” appears to show comedian Bill Hader’s face seamlessly transform into Tom Cruise’s as Hader does an impression of him. While the video demonstrates how far this technology has come, as well as its more meme-able applications, it should also serve as a warning of what artificial intelligence can do.
Whereas today we are watching Bill Hader shapeshift into Tom Cruise, tomorrow we may see doctored political ads in which a politician makes disparaging remarks that never actually came out of their mouth, or watch law enforcement officials use edited surveillance camera footage to frame someone for a crime.
Beyond that, some of the potential harms of deepfake media have already been realized. Reports show that 96% of deepfakes currently in circulation are pornographic. Presumably, these videos are being made without the knowledge or consent of the people featured in them.
As AI becomes more advanced, this problem will only get worse. It will become harder and harder to differentiate real videos from doctored ones. Some people may find themselves believing false information fed to them by a deepfake, while others may find themselves lambasted for something a deepfake of them did. Moreover, if people already struggle to have faith in the media in today’s era of fake news, just imagine what could happen if deepfakes were to go mainstream. In light of this, it is important that policymakers find a way to address this issue before they themselves become victims of this new form of media.
Though various governmental and corporate powers are concerned about the potential damage that deepfakes can do, the current responses leave much to be desired. Last year, the Senate passed a bipartisan bill that would require the Department of Homeland Security to release a yearly report on the national security risks that deepfakes pose, showing that the government has an interest in stopping the spread of deepfakes but is not yet committed to aggressive action. Similarly, Twitter recently drafted a policy that would notify users when a tweet contains a deepfake, but would not remove the tweet unless it threatened someone’s safety.
Deepfakes have the potential to do unprecedented damage, both to those who view this material and to those who find themselves its subjects. Moving forward, policymakers and social networking companies will have to demonstrate a commitment to containing their spread before it is too late. As funny as it may be to watch Bill Hader transform into Tom Cruise, deepfakes are no laughing matter.