Fake News: A Threat to Democratic Systems
In this review I focus on fake news with political implications, with special attention to the 2016 US presidential election, defining fake news as a form of propaganda that consists of deliberate misinformation (Sadiku, Eze and Musa 2018): false information disseminated with the intent to mislead and to damage an agency, person or rival, politically motivated to promote the author's agenda (Sadiku, Eze and Musa 2018). The roots of the fake news phenomenon lie in the loss of confidence in traditional media sources and in low levels of critical thinking and media literacy. Fake news resembles credible journalism and is presented and accepted as accurate without any factual basis. It gains attention primarily on social media platforms, which provide an open space for like-minded individuals to gather around an idea, theory or conspiracy and to support a set of shared beliefs and ideas (Mihailidis and Viotty 2017). With the increasing popularity of social media, more and more people consume news from social media instead of traditional news media; social media has thus proved to be a powerful vehicle for fake news propagation (Sadiku, Eze and Musa 2018).
The internet provides users with access to information from around the world, greatly expanding the information available to citizens and their choice of news outlets (Flaxman, Goel and Rao 2016), offering a diverse and wide-ranging assortment of political views. However, scholars warn that the more choice individuals have when seeking political news, the more likely they are to exclude opinions with which they disagree (Garrett 2009). The data analytics that drive the internet provide us not simply with more information but with more of the information we want. Search engines and social media offer personalised content through machine-learning models, creating 'filter bubbles' in which algorithms inadvertently amplify ideological segregation by automatically recommending the content an individual is likely to agree with (Flaxman, Goel and Rao 2016). Filter bubbles exploit what is known as confirmation bias, the label psychologists use for people's tendency to search for and interpret information in a way that corresponds with their pre-existing beliefs, preferences and attitudes (McNair 2017).
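The recommendation dynamic described above can be sketched in a few lines. The example below is a hypothetical toy, not any platform's actual algorithm: it ranks articles purely by predicted agreement with a user's past engagement (represented here by invented "stance" scores), which is the mechanism by which a personalised feed can narrow into a filter bubble.

```python
# Hypothetical sketch of filter-bubble dynamics: a recommender that ranks
# articles solely by how closely they match the user's past stances.
# "stance" is a toy score in [-1, 1]; nothing here reflects a real platform.

def recommend(articles, user_history, top_k=3):
    """Rank articles by closeness to the user's average past stance."""
    avg_stance = sum(a["stance"] for a in user_history) / len(user_history)
    # Smaller distance from the user's average stance = higher rank,
    # so dissenting viewpoints are systematically pushed out of the feed.
    ranked = sorted(articles, key=lambda a: abs(a["stance"] - avg_stance))
    return ranked[:top_k]

articles = [
    {"title": "Strongly partisan A", "stance": -0.9},
    {"title": "Moderately partisan A", "stance": -0.4},
    {"title": "Neutral report", "stance": 0.0},
    {"title": "Moderately partisan B", "stance": 0.5},
    {"title": "Strongly partisan B", "stance": 0.9},
]
history = [{"stance": -0.8}, {"stance": -0.6}]  # past reading leans one way

for article in recommend(articles, history):
    print(article["title"])
```

Run repeatedly, with each recommended item fed back into the history, the average stance drifts further from zero and the feed converges on a single viewpoint, which is the segregation effect Flaxman, Goel and Rao describe.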
When people entertain only a single explanation, they preclude the possibility of interpreting data that supports any alternative theory; this model highlights an inability to consider both possibilities simultaneously. By contrast, research carried out at Ohio State University shows that people do not seek to exclude other perspectives entirely: the data gathered suggest that people may wish to maintain awareness of diverse political views while ensuring that their own beliefs are well supported (Garrett 2009). Ontological security theory suggests that people are motivated to maintain a consistent sense of their own identity and of the environment around them (McNair 2017). Quite simply, they are more inclined to believe a story, and to share it with others, when its content corresponds with their opinions (McNair 2017).
Fake news is appealing because it delivers a moral narrative or confirms sentiments that people already hold (Jardine 2019), while satisfying the need for opinion reinforcement. Consequently, research suggests that online communities are increasingly becoming ideological 'echo chambers': environments in which individuals are largely exposed to conforming opinions and in which, because the environment is filtered, information does not circulate widely and freely (Flaxman, Goel and Rao 2016; Jardine 2019). Information segregation is a serious concern, as it has long been thought that democracies depend critically on voters who are exposed to and understand a variety of political views (Flaxman, Goel and Rao 2016). Because our online life is personalised, partly by algorithms and partly by our own biases and interests, everything from the ads we see to the news we read is tailored to our individual preferences, limiting our exposure to opposing viewpoints.
The theoretical framework outlined above suggests several reasons why social media may be especially conducive to fake news. During the 2016 presidential election, there was very little transparency about how extensively campaigns used citizens' data and how ethical such methods were. Research has since identified various tools used to influence public opinion: fake news, social-media bots (automated accounts), and propaganda from inside and outside the United States (Persily 2017). Such tools took advantage of sophisticated data capture, segmentation and micro-targeting techniques to influence the audience (Morgan 2018). Political campaigns increasingly resemble consumer marketing as they adapt existing techniques of digital advertising, whose commercial aim of giving people what they want in order to encourage consumption of content tends to play out badly in the political space (Morgan 2018; Jardine 2019).
The US presidential election in 2016 saw many ethically questionable tactics, the widespread dissemination of fake news being one of the most effective. Allcott and Gentzkow (2017) reveal in their research that the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories, and that the most discussed fake news stories tended to favour Donald Trump over Hillary Clinton.
We confirm that fake news was both widely shared and heavily tilted in favour of Donald Trump. Our database contains 115 pro-Trump fake stories that were shared on Facebook a total of 30 million times, and 41 pro-Clinton fake stories shared a total of 7.6 million times (212).
The decline of trust in the mainstream media among Republicans could have increased their relative demand for news from non-traditional sources, as could a perception that the mainstream media tended to favour Clinton (224). People look for and share things that match their beliefs and disparage anything that does not (McNair 2017), which explains why Trump supporters turned to alternative media sources to support their beliefs and ideas.
Allcott and Gentzkow's (2017) study established that news providers reportedly found higher demand for pro-Trump (or anti-Clinton) fake news and responded by providing more of it. One of the most popular pro-Trump stories reported that Pope Francis had endorsed Donald Trump and was shared more than one million times on Facebook. Similarly, an anti-Clinton story claiming that Hillary Clinton would be indicted over her email server received more than 140,000 shares, reactions and comments on Facebook (Persily 2017). On various occasions micro-targeting was used to suppress voter turnout by targeting Clinton supporters, especially "white liberals, young women and African Americans," with communications designed to reduce turnout among those groups (Morgan 2018; Persily 2017). Days before the election, messages circulated on social media claiming that Hillary Clinton had died. The fake stories about Hillary Clinton's health during the 2016 election share a common foundation: they propagate "alternative" information and present a moral narrative that people holding similar views can latch on to (Morgan 2018; Jardine 2019).
More striking still, the official campaigns legitimized fake news stories by sharing them; Donald Trump retweeted one suggesting that his support among blue-collar workers was the highest for any candidate since Franklin Delano Roosevelt (Persily 2017). This created a so-called information cascade, whereby people pass along information shared by others without bothering to check whether it is true, making it appear more credible in the process (Stehouwer, Dang, Liu, Liu and Jain 2019). The most salient danger associated with fake news is that it devalues and delegitimizes voices of expertise. Trust in the news produced by reputable media outlets such as The New York Times hit its lowest point a day before the election (Jardine 2019). Fake news undermines trust in serious media coverage and makes it more difficult for genuine journalism to reach audiences (Sadiku, Eze and Musa 2018).
The prevalence of false stories online erects barriers to educated political decision-making and makes it less likely that voters will choose on the basis of genuine information rather than lies or misleading 'spin' (Persily 2017), implying that democratic outcomes can be influenced by malevolent producers of fabricated content (McNair 2017), as seen in the 2016 presidential election, which was affected by fake news to some unmeasurable degree (Kurtzleben 2017). Misinformation amplified by new technological means in the internet age poses a threat to open societies worldwide (Sadiku, Eze and Musa 2018). The growing sophistication of artificial intelligence and machine-learning algorithms leads us to consider the implications of emerging technologies such as deepfakes and their potential to further corrode the trust we place in media and democratic institutions.
Chesney and Citron (2019) define deepfakes as the product of recent advances in a form of artificial intelligence known as deep learning, in which sets of algorithms learn to infer rules and replicate patterns by scanning through large data sets. Deepfakes emerge from a specific type of deep learning in which a pair of algorithms is pitted against each other: because the pair is constantly training against itself, the system can improve rapidly, allowing the production of highly realistic yet fake content. Technologists expect that with advances in AI it may soon be difficult, if not impossible, to tell the difference between a real video and a fake one (Stehouwer, Dang, Liu, Liu and Jain 2019). A picture may be worth a thousand words, but nothing persuades quite like audio or video (Chesney and Citron 2019). People generally trust the written word somewhat less than they do audio and, in particular, video media; in time deepfakes may become so sophisticated that not only might we believe the fakery, we might start disbelieving the truth (Jardine 2019; Stehouwer, Dang, Liu, Liu and Jain 2019).
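The adversarial training loop that Chesney and Citron describe can be illustrated with a toy sketch. The code below is purely illustrative: real deepfake systems train deep neural networks against each other, whereas here the "generator" and "discriminator" are invented one-parameter stand-ins, so only the back-and-forth dynamic is visible.

```python
# Toy illustration (not a real GAN) of the adversarial dynamic behind
# deepfakes: a generator proposes fakes, a discriminator scores how "real"
# they look, and each round the generator keeps whatever fooled the
# discriminator best, steadily improving its forgeries.

import random

REAL_MEAN = 5.0  # stands in for the "real data" the generator imitates

def discriminator(sample, real_mean=REAL_MEAN):
    """Score in (0, 1]: closer to 1 means the sample looks more 'real'."""
    return 1.0 / (1.0 + abs(sample - real_mean))

def train_generator(rounds=200, step=0.5, seed=0):
    rng = random.Random(seed)
    guess = 0.0  # the generator's current fake, far from realistic at first
    for _ in range(rounds):
        # Propose perturbed fakes alongside the current one...
        candidates = [guess] + [guess + rng.uniform(-step, step) for _ in range(2)]
        # ...and keep whichever the discriminator rates as most real.
        guess = max(candidates, key=discriminator)
    return guess

fake = train_generator()
print(round(fake, 2))  # converges near the real mean of 5.0
```

Because each round keeps only the most convincing candidate, the generator's output drifts steadily toward the real data; in an actual GAN the discriminator improves at the same time, which is what drives the rapid gains in realism noted above.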
Chesney and Citron (2019) indicate that the most frightening applications of deepfake technology may well be in the realms of politics and international affairs, where deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections. The 2016 election interference already demonstrated how easily misinformation can spread, and the deepfakes of tomorrow will be more vivid and realistic, and consequently more shareable, than the fake news of 2016. Jardine (2019) explains that a news story might claim Hillary Clinton is ill, but the story would appear more believable if Clinton were to say so herself, or at least if she were to seem to say so. Artificial intelligence can now be leveraged to create fake videos of a person delivering fake news, making it more believable to the audience.
Kurtzleben (2017) describes how Trump brands all unfavourable news coverage as fake news; in one tweet he went so far as to say that "any negative polls are fake news". The term fake news is now used to cast doubt on legitimate media coverage from opposing political standpoints. Technology such as deepfakes may not only be used to create social and ideological division; it can also affect democracy less directly. Chesney and Citron (2019) present the theory of the 'liar's dividend': the risk that liars will invoke deepfakes to escape accountability for their wrongdoing, much as Trump dismisses any negative news coverage of himself.
The speed with which misinformation circulates on social media makes debunking it an uphill battle; by the time content is flagged as fake, the damage may already be done (Chesney and Citron 2019). Fake news violates a core principle of liberal democracy: the need for free and independent media, for objective and reliable journalism to support the electoral process and to provide effective critical scrutiny of political elites (McNair 2017). This entire assemblage is threatened by carefully crafted influence operations and will only grow worse as new deepfake technologies come into play (Jardine 2019). More broadly, as the public becomes sensitized to the threat of deepfakes, it may also become less inclined to trust news in general (Chesney and Citron 2019).
The theoretical frameworks outlined in this review provide a starting point for understanding how behaviour on social media facilitates the widespread dissemination of misinformation online. The 2016 presidential election demonstrates a sustained effort to manipulate the public by exploiting the vulnerabilities identified in that framework. Moreover, emerging technologies such as deepfakes, and growing sophistication in the production and dissemination of fake news, raise questions about potential interference in the 2020 presidential election and hence provide topics for further study.
Bibliography
Allcott, H. and Gentzkow, M., 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives [online], 31(2), 211-236.
Chesney, R. and Citron, D., 2019. Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics. Foreign Affairs [online], 98, 147-155.
Flaxman, S., Goel, S. and Rao, J.M., 2016. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly [online], 80(1), 295-299.
Garrett, R.K., 2009. Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication [online], 14(2), 265-285.
Howard, P.N., Ganesh, B., Liotsiou, D., Kelly, J. and François, C., 2018. The IRA, social media and political polarization in the United States, 2012-2018. University of Oxford.
Jardine, E., 2019. Beware fake news. Governing Cyberspace during a Crisis in Trust essay series. CIGI [online]. Available at: www.cigionline.org/articles/beware-fake-news
Kurtzleben, D., 2017. With 'fake news,' Trump moves from alternative facts to alternative language. National Public Radio [online]. Available at: http://drwho.virtadpt.net/files/2017-02/alternative-facts-to-alternative-language.pdf
McNair, B., 2017. Fake News: Falsehood, Fabrication and Fantasy in Journalism [online]. Routledge.
Mihailidis, P. and Viotty, S., 2017. Spreadable Spectacle in Digital Culture: Civic Expression, Fake News, and the Role of Media Literacies in “Post-Fact” Society. American Behavioral Scientist [online], 61(4), 441-454.
Morgan, S., 2018. Fake news, disinformation, manipulation and online tactics to undermine democracy. Journal of Cyber Policy [online], 3(1), 39-43.
Persily, N., 2017. The 2016 US election: Can democracy survive the internet? Journal of Democracy [online], 28(2), 63-76.
Sadiku, M., Eze, T. and Musa, S., 2018. Fake news and misinformation. International Journal of Advances in Scientific Research and Engineering [online]. Available at: http://ijasre.net/uploads/1/3629_pdf.pdf
Stehouwer, J., Dang, H., Liu, F., Liu, X. and Jain, A., 2019. On the detection of digital face manipulation. arXiv [online]. Available at: https://arxiv.org/pdf/1910.01717.pdf