ABSTRACT: Research indicates that the reach of fake news websites is limited to small parts of the population. On the other hand, data demonstrate that large proportions of the public know about notable fake news stories and believe them. These findings imply the possibility that most people hear about fake news stories not from fake news websites but through their coverage in mainstream news outlets. Thus far, only limited attention has been directed to the role of mainstream media in the dissemination of disinformation. To remedy this, this article synthesizes the literature pertaining to the role mainstream media play in the dissemination of fake news, the reasons for such coverage, and its influence on audiences.
KEYWORDS: Fake news, disinformation, fact checking
4. Implications: audience processing of news coverage of fake news
Thus far, our analysis suggests that an important part of the dissemination of fake news takes place through mainstream news media, and that journalistic role perceptions, news values, social validation, and the fact that news media institutions have the infrastructure both to detect and to correct fake news stories explain why mainstream news media pay attention to fake news. In this section we address the third research question: what are the potential influences of mainstream news coverage of fake news on audiences?
It is important to remember that, for the most part, when mainstream media cover fake news stories, they do so with an explicit intention to correct the false information. Are journalists successful in their attempts to set the record straight? Is it possible that parts of the audience retain the wrong information even though the news media covered it as fake news? While no empirical research to date has examined precisely how people respond to media reports about fake news, a substantial body of research in psychology and adjacent fields has addressed the question of misinformation correction. Although this research relates to misinformation in general (without distinguishing between misinformation and disinformation3) and the current article relates to a specific type of disinformation, we argue that it is possible to import the findings to the context of mainstream news correction of fake news. First, the studies we cite below were mostly experimental and used invented scenarios and (dis)information. Second, in the psychological studies on misinformation correction, participants were typically not presented with information regarding the motivation of the disseminator, and thus did not know whether the disseminator knowingly and intentionally deceived. Similarly, information about the disseminator's motivation or intentionality is not always available to audiences reading news media reports about fake news outside the lab.
While findings on the effects of the correction of misinformation are not fully consistent (Walter & Murphy, 2018), social psychological research has demonstrated time and again that retractions often fail to completely eliminate the influence of misinformation (Lewandowsky et al., 2012, p. 112). For example, a recent meta-analysis aggregating 32 studies (Walter & Tukachinsky, 2019) found evidence for the continued influence of misinformation, meaning that mis- or disinformation continued to shape people’s beliefs in the face of correction (r = -.05, p = .045). The findings suggest that after people are exposed to misinformation, corrective messages cannot fully revert people’s beliefs to baseline.
Theoretically, the empirical observation that exposure to a correction does not always result in correct attitudes is argued to be the result of different information-processing mechanisms. In order to comprehend a statement, people must at least temporarily accept it as true (Gilbert et al., 1993). From this perspective, believing even false information is part of processing it. The cognitive explanation holds that when we encounter a report that tries to correct wrong information, for example a report stating that the claim that ‘Hillary Clinton is related to a pedophile ring’ is false, we first create a mental model that connects the nodes for ‘Hillary Clinton’ and ‘pedophile,’ and this mental model persists, especially in the absence of a complete and coherent alternative mental model. If the fake news item is reported with an explanation of why it might be true, or if the disinformation is consistent with an explanation that is already stored in other mental models (e.g. connecting ‘Hillary Clinton’ with ‘crooked’), the psychological literature on misinformation suggests that eliminating the mental model will be particularly difficult (Anderson et al., 1980).
An extension of this cognitive explanation for the resilience of misinformation to correction concerns retrieval failures due to negation (Lewandowsky et al., 2012, p. 115). According to this explanation, ‘people encode negative memories by creating a positive memory with a negative tag’ (Walter & Tukachinsky, 2019, p. 8). That is, the statement ‘Hillary Clinton is not connected to a pedophile ring’ is encoded as ‘Hillary Clinton, pedophile – not.’ The negation tag on the connection (‘not’) can, however, be forgotten or otherwise lost, especially among audiences with impaired strategic memory or in situations of high cognitive load (Lewandowsky et al., 2012, p. 115). One ramification is that, for example, older people consuming news reports about fake news may retain the false information in spite of its negation (Wilson & Park, 2008), given the higher likelihood of strategic memory impairments in old age. In addition, if negations are lost in the retrieval process and thus end up confirming the misinformation, it stands to reason that corrective messages will be more effective if they attempt to debunk misinformation without explicitly negating it.
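To make the node-and-tag account above concrete, the following toy sketch represents a mental model as a set of links, stores a correction as a link carrying a negation tag, and shows how losing that tag at retrieval reproduces the original false claim. The sketch is purely illustrative: the class, the link structure, and the tag-loss probabilities are our own assumptions, not parameters drawn from the cited studies.

```python
import random

# Toy illustration of the "positive memory with a negative tag" account.
# All names and probabilities are invented for illustration; nothing here is
# taken from Lewandowsky et al. (2012) or Walter & Tukachinsky (2019).

class MentalModel:
    def __init__(self):
        # Each link is stored as (node_a, node_b) -> negated? (True = "not" tag)
        self.links = {}

    def encode(self, a, b, negated=False):
        # Even a correction first lays down the link between the two nodes;
        # the denial is only a tag attached to that link.
        self.links[(a, b)] = negated

    def retrieve(self, a, b, tag_loss_prob=0.0):
        # Under high cognitive load or impaired strategic memory, the
        # negation tag may be lost, leaving only the affirmative link.
        negated = self.links.get((a, b))
        if negated is None:
            return "no link stored"
        if negated and random.random() < tag_loss_prob:
            negated = False  # the "not" tag is forgotten
        return "claim rejected" if negated else "claim accepted"

model = MentalModel()
# Reading a debunking report ("the claim that Clinton is connected to a
# pedophile ring is FALSE") is encoded as the link plus a negation tag.
model.encode("Hillary Clinton", "pedophile ring", negated=True)

# Low-load retrieval usually keeps the tag; high-load retrieval often loses it.
print(model.retrieve("Hillary Clinton", "pedophile ring", tag_loss_prob=0.1))
print(model.retrieve("Hillary Clinton", "pedophile ring", tag_loss_prob=0.8))
```

The point of the sketch is simply that the correction and the false claim share the same underlying link, so forgetting the tag leaves the affirmative claim intact.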
One advantage of news reports about fake news stories over the more thoroughly studied ‘fact checks’ or ‘adwatches’ is, however, that for the most part there is no time lag between the delivery of the misinformation and its correction. This is important, as the literature points out that corrections are less effective when time passes between exposure to misinformation and its negation (Walter & Tukachinsky, 2019, p. 15). With some notable exceptions, in which media organizations mistakenly fall prey to the fake news and report it as real news, the false ‘fake news’ information is reported by mainstream media together with a retraction. This explains why only a minority of respondents, around 30% in the U.S. (Frankovic, 2016; regarding Pizzagate) and 20% in Israel (regarding the wife of opposition leader Benny Gantz being a member of a pro-Palestinian group), believed widely reported fake news stories. However, it is also important to note that only 29% in the U.S. and 22% in Israel were sure that these were fake news stories, which leaves roughly 40% of respondents in the U.S. and close to 60% in Israel neither believing the story nor confidently rejecting it. The modal audience response, in both cases, was thus uncertainty. This suggests that despite media refutations, sizeable shares of the audience conclude that there is a chance that the ‘fake’ information might be right. Thus, doubt (instead of outright rejection) may be the undesirable consequence of mainstream news coverage of fake news.
Despite the advantage of delivering the false information simultaneously with its correction, a major problem with news coverage of fake news is that, in order to report about fake news stories, mainstream journalists have to repeat the false information. This is problematic, as repetition is known to be a major obstacle in attempts to correct disinformation (Lewandowsky et al., 2012; Walter & Tukachinsky, 2019). This can be explained by the ‘mere exposure’ effect (Zajonc, 2001) and the ‘truth effect’ (Dechene et al., 2010), according to which mere exposure to and repetition of statements increase the likelihood that the statements are perceived as true. One key explanation is that repetition breeds familiarity, and people tend to perceive familiar information as correct and trustworthy, given the sense of ease and processing fluency that accompanies familiar information (Schwarz et al., 2007; Dechene et al., 2010). As succinctly argued by Schwarz et al. (2016), ‘when thoughts flow smoothly, people nod along’ (p. 85). Repeating false information, even as part of a retraction or a correction, enhances its familiarity, and thus retractions can backfire. Remarkably, studies have also found that people infer the accuracy and consensus of an opinion from the number of times it has been repeated, even when the repeated expression is associated with only one person (Weaver et al., 2007; see also Dechene et al., 2010).
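The familiarity account can be caricatured in a few lines of code. The sketch below is a deliberately crude toy, not a model from the cited studies: it counts every mention of a claim, whether affirming or correcting, and maps that count onto an invented perceived-truth proxy.

```python
from collections import Counter

# Toy caricature of the familiarity account of the "truth effect".
# The linear familiarity-to-belief mapping is an invented simplification,
# not a model taken from Dechene et al. (2010) or Schwarz et al. (2007).

exposures = Counter()

def mention(claim):
    """Every mention, including one embedded in a correction, adds familiarity."""
    exposures[claim] += 1

def perceived_truth(claim, weight=0.1):
    """Illustrative proxy: accumulated familiarity nudges perceived truth upward."""
    return min(1.0, 0.5 + weight * exposures[claim])

claim = "Hillary Clinton is connected to a pedophile ring"

mention(claim)   # the original fake news story
mention(claim)   # a news report debunking it still repeats the claim
mention(claim)   # a second debunking report repeats it again

print(round(perceived_truth(claim), 2))  # familiarity has risen despite the debunks
```

The design choice that matters is that the counter cannot distinguish endorsements from debunking repetitions, which is precisely why repeating a claim inside a correction can increase its familiarity.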
In addition to increasing the familiarity of wrong information, there are other reasons to believe that some audiences will retain the disinformation despite the fact that news media report about it as ‘fake news.’ Research on the psychology of truth assessment has, for example, found that people tend to believe not only familiar statements but also simple and coherent ones (Lewandowsky et al., 2012). Media refutations of fake news stories may thus be less effective when the fake news story is clear and coherent, and the refutation complicated and detailed. In line with this logic, Walter et al. (2019) found that the lexical complexity (calculated using such indicators as noun and verb variation) of fact-checking messages was negatively associated with correction of misinformation. Simply put, the more complex the correction, the less fluent the processing will be, and the less likely the correction is to be effective.
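As an illustration of how an indicator of this kind might be computed, the sketch below derives a simple noun- and verb-variation score (a type-token ratio restricted to nouns and verbs) from pre-tagged text. This is only an approximation of the sort of measure described in Walter et al. (2019); their exact operationalization and tagging pipeline are not reproduced here, and the hand-tagged example sentence is invented.

```python
# Simplified lexical-complexity proxy: noun and verb variation, computed as
# the ratio of unique noun/verb types to noun/verb tokens. Illustrative only.

def variation(tagged_tokens, pos_prefix):
    """Type-token ratio restricted to one part of speech (e.g. 'NN' or 'VB')."""
    tokens = [w.lower() for w, pos in tagged_tokens if pos.startswith(pos_prefix)]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def lexical_complexity(tagged_tokens):
    """Average of noun variation and verb variation for a fact-checking message."""
    return (variation(tagged_tokens, "NN") + variation(tagged_tokens, "VB")) / 2

# Invented, hand-tagged example of a short correction (Penn Treebank-style tags).
correction = [
    ("the", "DT"), ("claim", "NN"), ("is", "VBZ"), ("false", "JJ"),
    ("investigators", "NNS"), ("examined", "VBD"), ("the", "DT"),
    ("claim", "NN"), ("and", "CC"), ("found", "VBD"), ("no", "DT"),
    ("evidence", "NN"), ("supporting", "VBG"), ("the", "DT"), ("story", "NN"),
]

print(round(lexical_complexity(correction), 2))
```

On this simplified measure, a correction that keeps repeating the same few nouns and verbs scores as less lexically complex than one that introduces many distinct terms, which is the intuition behind the fluency argument above.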
The literature also suggests that the richness of the incorrect mental model makes it harder to substitute it with a correct model (Walter & Tukachinsky, 2019). For example, a model connecting ‘Hillary’ with ‘pedophile’ and ‘pizzeria’ is richer than a model connecting just ‘Hillary’ and ‘pedophile.’ Creating an alternative mental model will necessitate a coherent explanation for the pizzeria aspect of the story, which is a more mentally difficult task. Research also points out that simple refutations leave remnants of the misinformation untouched (Cappella et al., 2015). To fully remove the false linkages, more ‘enhanced’ refutations are needed, which engage emotions or offer causal linkages. While some news stories about fake news do offer refutations that include narratives providing a fuller account of the misinformation, such coverage is typically found in magazines and other in-depth genres, which do not reach the full audience of mainstream news media.4
Congruence between the misinformation and audiences’ prior attitudes, beliefs, and opinions also shapes audience retention of the misinformation from media reports about fake news, given findings demonstrating that the ability to correct misinformation is attenuated by audiences’ preexisting beliefs (Walter & Tukachinsky, 2019). Unsurprisingly, audiences who already view Hillary Clinton negatively will be more likely to accept a false story demonstrating her negative behavior, as well as more likely to resist corrective information that highlights her positive actions. These findings can be understood as an extension of the motivated reasoning approach, whereby people can process information either with accuracy goals (i.e. trying to reach the most accurate conclusion) or with directional goals (i.e. trying to reach a conclusion that is consistent with their broader worldview; see Kunda, 1990). When it comes to value-laden beliefs and polarizing issues, research suggests that information is processed with directional rather than accuracy goals in mind.
Also important in this context is that news stories reporting about fake news in an attempt to correct misinformation are not necessarily perceived as more credible than the fake news they try to expose or correct. Audience trust in the mainstream media is low in many countries, and in about half of the countries studied in the World Values Surveys and European Values Surveys it is decreasing (Hanitzsch et al., 2018; for a recent overview of research on media trust, see Strömbäck et al., 2020). Even in societies where media trust is high, such as the Philippines or Japan (Hanitzsch et al., 2018, Table 1), rather large segments of society (25% of adults or more) distrust the media. This is important because research has demonstrated that trusting audiences have a stronger tendency to accept media facts and narratives than audiences scoring low on media trust (Ladd, 2012). In a situation in which news outlets try to refute an ideologically congruent claim (and label it ‘fake news’), and the audience member does not trust the media, it is thus possible that the retraction will fail or even backfire among large segments of the public, given that it comes from a source they deem non-credible (the news media).