Misinformation on Misinformation: Conceptual and Methodological Challenges. Sacha Altay, Manon Berriche, Alberto Acerbi. Social Media + Society, January 28, 2023. https://doi.org/10.1177/20563051221150412
Abstract: Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated. Drawing on research examining the use of big data in social science and reception studies, we identify six misconceptions about misinformation and highlight the conceptual and methodological challenges they raise. The first set of misconceptions concerns the prevalence and circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news, but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set of misconceptions concerns the impact and the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people’s behavior is overblown as misinformation often “preaches to the choir.” To appropriately understand and fight misinformation, future research needs to address these challenges.
MISCONCEPTIONS
A Large Number of People Are Misinformed
Headlines about the ubiquity of misbeliefs are rampant in the media and are most often based on surveys. But how well do surveys measure misbeliefs? Luskin and colleagues (2018) analyzed the design of 180 media surveys with closed-ended questions measuring belief in misinformation. They found that more than 90% of these surveys lacked an explicit "Don't know" or "Not sure" option and used formulations encouraging guessing, such as "As far as you know . . ." or "Would you say that . . ." Participants often answer such questions by guessing and end up reporting beliefs they did not hold before taking the survey (Graham, 2021). Not providing, or not encouraging, "Don't know" answers is known to increase guessing even further (Luskin & Bullock, 2011). Guessing would not be a major issue if it only added noise to the data. To find out whether that is the case, Luskin and colleagues (2018) tested the impact of withholding "Don't know" options and encouraging guessing on the measured prevalence of misbeliefs. They found that this design inflates the proportion of incorrect answers by nine percentage points (25% vs. 16%) and, when only respondents who report being confident in their misperception are counted as misinformed, by 20 percentage points (25% vs. 5%). In short, survey items measuring misinformation overestimate the extent to which people are misinformed, eclipsing the share of those who are simply uninformed.
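The mechanism is easy to illustrate with a minimal simulation (ours, not part of Luskin and colleagues' analysis; all parameters are hypothetical): when respondents who simply do not know are forced to choose between a correct and an incorrect option, roughly half of them end up being recorded as misinformed.

```python
import random

random.seed(42)

N = 10_000                 # hypothetical respondents
TRULY_MISINFORMED = 0.05   # assumed share who genuinely hold the misbelief
TRULY_INFORMED = 0.45      # assumed share who know the correct answer
# the remaining 50% are simply uninformed

def answer(forced_choice: bool) -> str:
    """Simulate one response to a binary misinformation item."""
    r = random.random()
    if r < TRULY_MISINFORMED:
        return "incorrect"
    if r < TRULY_MISINFORMED + TRULY_INFORMED:
        return "correct"
    # uninformed respondent: guesses if no "Don't know" option is offered
    if forced_choice:
        return random.choice(["correct", "incorrect"])
    return "don't know"

for forced in (True, False):
    responses = [answer(forced) for _ in range(N)]
    share_incorrect = responses.count("incorrect") / N
    label = "no 'Don't know' option" if forced else "'Don't know' offered"
    print(f"{label}: measured misinformed = {share_incorrect:.1%}")
```

With these illustrative numbers, the forced-choice design records roughly 30% of respondents as misinformed even though only 5% genuinely hold the misbelief; offering a "Don't know" option brings the estimate back close to 5%.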
In the same vein, conspiratorial beliefs are notoriously difficult to measure, and surveys tend to exaggerate their prevalence (Clifford et al., 2019). For instance, participants in survey experiments display a preference for positive response options ("yes" over "no," "agree" over "disagree"), which inflates agreement with statements, including conspiracy theories, by up to 50% (Hill & Roberts, 2021; Krosnick, 2018). Moreover, the absence of "Don't know" options, together with the impossibility of expressing a preference for conventional over conspiratorial explanations, greatly inflates estimates of the prevalence of conspiratorial beliefs (Clifford et al., 2019). These methodological problems have contributed to unsupported alarmist narratives about the prevalence of conspiracy theories, such as the claim that QAnon has gone mainstream (Uscinski et al., 2022a).
Moreover, the misperceptions that surveys measure are skewed toward politically controversial and polarizing ones, which are not representative of the misperceptions people actually hold (Nyhan, 2020). This could fuel affective polarization, by emphasizing differences between groups rather than similarities, and inflate the apparent prevalence of misbeliefs. When misperceptions become group markers, participants use them to signal group membership, whether or not they truly believe them (Bullock et al., 2013). Responses to factual questions in survey experiments are known to be vulnerable to "partisan cheerleading" (Bullock et al., 2013; Prior et al., 2015), in which participants give politically congenial responses instead of stating their true beliefs. Quite famously, a large share of Americans reported that Donald Trump's inauguration in 2017 was more crowded than Barack Obama's in 2009, despite being shown visual evidence to the contrary. Partisanship did not directly distort people's perceptions: misperceptions about the size of the crowds were largely driven by expressive responding and guessing, as respondents who supported President Trump would "intentionally provide misinformation" to reaffirm their partisan identity (Schaffner & Luks, 2018, p. 136). The extent to which expressive responding contributes to the overestimation of other political misbeliefs is debated (Nyhan, 2020), but it is probably significant.
Solutions have been proposed to overcome these flaws and measure misbeliefs more accurately, such as including confidence-in-knowledge measures (Graham, 2020) and classifying as misinformed only those participants who firmly and confidently endorse misinformation items (Luskin et al., 2018). Yet, even when people report confidently holding misbeliefs, these misbeliefs are highly unstable across time, much more so than accurate beliefs (Graham, 2021). For instance, the responses of people saying they are 100% certain that climate change is not occurring have the same measurement properties as the responses of people saying they are 100% certain that the continents are not moving or that the sun goes around the Earth (Graham, 2021). A participant's response at time T does not predict their answer at time T + 1. In other words, flipping a coin would give a similar response pattern.
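The coin-flip analogy can be made concrete with a small simulation (ours, for illustration only; the panel size and stability rate are arbitrary): if confident-sounding answers are essentially random, the agreement between a respondent's answers at T and T + 1 hovers around chance, whereas stably held beliefs are repeated almost every time.

```python
import random

random.seed(0)
N = 10_000  # hypothetical panel answering the same item in two survey waves

def agreement(wave1, wave2):
    """Share of respondents giving the same answer in both waves."""
    return sum(a == b for a, b in zip(wave1, wave2)) / len(wave1)

# Unstable "misbelief": each wave is an independent coin flip
unstable_t1 = [random.choice([True, False]) for _ in range(N)]
unstable_t2 = [random.choice([True, False]) for _ in range(N)]

# Stable belief: 95% of respondents repeat their wave-1 answer
stable_t1 = [random.choice([True, False]) for _ in range(N)]
stable_t2 = [a if random.random() < 0.95 else (not a) for a in stable_t1]

print(f"coin-flip responses, wave-to-wave agreement: {agreement(unstable_t1, unstable_t2):.2f}")  # ~0.50
print(f"stable responses, wave-to-wave agreement:    {agreement(stable_t1, stable_t2):.2f}")      # ~0.95
```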
So far, we have seen that even well-designed surveys overestimate the prevalence of misbeliefs. A further issue is that surveys are unreliable at measuring exposure to misinformation, and in particular rare events such as exposure to fake news. People report being exposed to a substantial amount of misinformation and recall having been exposed to particular fake headlines (Allcott & Gentzkow, 2017). To estimate the reliability of these measures, Allcott and Gentzkow (2017) showed participants the 14 most popular fake news stories of the American election campaign, together with 14 made-up "placebo fake news" stories. Fifteen percent of participants declared having been exposed to one of the 14 real fake news stories, but 14% also declared having been exposed to one of the 14 placebos.
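The logic of the placebo items can be spelled out in a short back-of-the-envelope calculation (our illustration of the design, using the percentages reported above): the recall rate for made-up headlines estimates how often people "remember" stories they could not have seen, and subtracting it from the rate for real fake news leaves the share attributable to genuine exposure.

```python
# Figures reported by Allcott and Gentzkow (2017), as cited above
recalled_real_fake_news = 0.15   # declared exposure to one of the 14 real fake news stories
recalled_placebos = 0.14         # declared exposure to one of the 14 made-up placebos

# The placebo rate serves as a baseline for false recall,
# so the adjusted estimate of genuine exposure is the difference.
adjusted_exposure = recalled_real_fake_news - recalled_placebos
print(f"Recall attributable to genuine exposure: ~{adjusted_exposure:.0%}")  # ~1%
```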
During the pandemic, many people supposedly engaged in extremely dangerous hygiene practices to fight COVID-19 because of misinformation encountered on social media, such as drinking diluted bleach (Islam et al., 2020). This led to headlines such as "COVID-19 disinformation killed thousands of people, according to a study" (Paris Match Belgique, 2020). Yet, the study is silent regarding causality and cannot be taken as evidence that misinformation had a causal impact on people's behavior (France info, 2020). For instance, 39% of Americans reported having engaged in at least one cleaning practice not recommended by the CDC, 4% reported drinking or gargling a household disinfectant, and another 4% reported drinking or gargling diluted bleach (Gharpure et al., 2020). These percentages should not be taken at face value. A replication of the survey found that these worrying responses are entirely attributable to problematic respondents, who also reported "recently having had a fatal heart attack" or "eating concrete for its iron content" at a rate similar to that of ingesting household cleaners (Litman et al., 2020; reminiscent of the "lizardman's constant" described by Alexander, 2013). The authors conclude that "Once inattentive, mischievous, and careless respondents are taken out of the analytic sample we find no evidence that people ingest cleansers to prevent Covid-19 infection" (Litman et al., 2020, p. 1). This is not to say that COVID-19 misinformation had no harmful effects (such as creating confusion or eroding trust in reliable information), but rather that surveys using self-reported measures of rare and dangerous behaviors should be interpreted with caution.
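A minimal sketch of the kind of screening Litman and colleagues describe (the records, field names, and screening items below are hypothetical, not taken from their data): flag respondents who affirm impossible "sanity check" items, then recompute the prevalence of the dangerous behavior with and without them.

```python
# Illustrative respondent records; field names and values are made up for this sketch.
respondents = [
    {"drank_bleach": False, "fatal_heart_attack": False, "eats_concrete": False},
    {"drank_bleach": True,  "fatal_heart_attack": True,  "eats_concrete": True},   # problematic
    {"drank_bleach": False, "fatal_heart_attack": False, "eats_concrete": False},
    {"drank_bleach": True,  "fatal_heart_attack": False, "eats_concrete": True},   # problematic
    {"drank_bleach": False, "fatal_heart_attack": False, "eats_concrete": False},
]

def prevalence(rows, key):
    """Share of respondents answering 'yes' to the given item."""
    return sum(r[key] for r in rows) / len(rows)

def is_problematic(r):
    """Flag respondents who affirm impossible screening items."""
    return r["fatal_heart_attack"] or r["eats_concrete"]

cleaned = [r for r in respondents if not is_problematic(r)]

print(f"raw prevalence of bleach drinking:     {prevalence(respondents, 'drank_bleach'):.0%}")
print(f"cleaned prevalence of bleach drinking: {prevalence(cleaned, 'drank_bleach'):.0%}")
```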
Misinformation Has a Strong Influence on People’s Behavior
Sometimes, people believe what they see on the internet, and engagement metrics do translate into belief. Yet even when misinformation is believed, this does not necessarily mean it has changed anyone's mind or behavior. First, people largely consume politically congenial misinformation (Guess et al., 2019, 2021). That is, they consume misinformation they already agree with, or are predisposed to accept. Congenial misinformation "preaches to the choir" and is unlikely to have drastic effects beyond reinforcing previously held beliefs. Second, even when misinformation changes people's minds and leads to the formation of new (mis)beliefs, it is not clear whether these (mis)beliefs ever translate into behaviors. Attitudes are only weak predictors of behaviors; this problem is well known in public policy as the value-action gap (Kollmuss & Agyeman, 2002). Most notoriously, people report being increasingly concerned about the environment without adjusting their behaviors accordingly (Landry et al., 2018).
Common misbeliefs, such as conspiracy theories, are likely to be cognitively held in a way that limits their influence on behavior (Mercier, 2020; Mercier & Altay, 2022). For instance, the behaviors of people who endorse common misbeliefs are often at odds with what we would expect if they genuinely believed them. As Jonathan Kay (2011, p. 185) noted, "one of the great ironies of the Truth movement is that its activists typically hold their meetings in large, unsecured locations such as college auditoriums—even as they insist that government agents will stop at nothing to protect their conspiracy for world domination from discovery." These misbeliefs are often post hoc rationalizations of pre-existing attitudes, such as distrust of institutions.
In the real world, it is difficult to measure how much attitude change misinformation causes, and it is a daunting task to assess its impact on people's behavior. Surveys relying on correlational data tell us little about causation. For example, belief in conspiracy theories is associated with many costly behaviors, such as COVID-19 vaccine refusal (Uscinski et al., 2022b). Does this mean that vaccine hesitancy is caused by conspiracy theories? Not necessarily: both vaccine hesitancy and belief in conspiracy theories could be caused by other factors, such as low trust in institutions (Mercier & Altay, 2022; Uscinski et al., 2022b). A few ingenious studies have allowed some causal inferences to be drawn. For instance, Kim and Kim (2019) used a longitudinal survey to capture people's beliefs and behaviors both before and after the diffusion of the "Obama is a Muslim" rumor. They found that after the rumor spread, people were more likely to believe that Obama was a Muslim. Yet this effect was "driven almost entirely by those predisposed to dislike Obama" (p. 307), and the diffusion of the rumor had no measurable effect on people's intention to vote for Obama. This should not come as a surprise, considering that even political campaigns and political advertising have only weak and indirect effects on voters (Kalla & Broockman, 2018). As David Karpf (2019) writes, "Generating social media interactions is easy; mobilizing activists and persuading voters is hard."
The idea that exposure to misinformation (or information) has a strong and direct influence on people's attitudes and behaviors comes from a misleading analogy for social influence, according to which ideas infect human minds the way viruses infect human bodies. Americans did not vote for Trump in 2016 because they were brainwashed; there is no such thing as "brainwashing" (Mercier, 2020). Information is not passed from brain to brain the way a virus is passed from body to body. When humans communicate, they constantly reinterpret the messages they receive and modify the ones they send (Claidière et al., 2014). The same tweet will create very different mental representations in each mind that reads it, and the public representations people leave behind them, in the form of digital traces, are only an imperfect proxy for their private mental representations. The virus metaphor, all too popular during the COVID-19 pandemic (think of the "infodemic" label), is misleading (Simon & Camargo, 2021). It is reminiscent of outdated models of communication, such as the "hypodermic needle" model, which assumed that audiences were passive and easily swayed by almost everything they heard or read (Lasswell, 1927). As Anderson (2021) notes, "we might see the role of Facebook and other social media platforms as returning us to a pre-Katz and Lazarsfeld era, with fears that Facebook is 'radicalizing the world' and that Russian bots are injecting disinformation directly in the bloodstream of the polity." These premises are at odds with what we know about human psychology and clash with decades of data from communication studies.