Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand. 2019. “Understanding and Reducing the Spread of Misinformation Online.” PsyArXiv. November 13. doi:10.31234/osf.io/3n9u8.
Abstract: The spread of false and misleading news on social media is of great societal concern. Why do people share such content, and what can be done about it? In a first survey experiment (N=1,015), we demonstrate a dissociation between accuracy judgments and sharing intentions: even though true headlines are rated as much more accurate than false headlines, headline veracity has little impact on sharing. We argue against a “post-truth” interpretation, whereby people deliberately share false content because it furthers their political agenda. Instead, we propose that the problem is simply distraction: most people do not want to spread misinformation, but are distracted from accuracy by other salient motives when choosing what to share. Indeed, when directly asked, most participants say it is important to only share accurate news. Accordingly, across three survey experiments (total N=2,775) and an experiment on Twitter in which we messaged N=5,482 users who had previously shared news from misleading websites, we find that subtly inducing people to think about the concept of accuracy decreases their sharing of false and misleading news relative to accurate news. Together, these results challenge the popular post-truth narrative. Instead, they suggest that many people are capable of detecting low-quality news content, but nonetheless share such content online because social media is not conducive to thinking analytically about truth and accuracy. Furthermore, our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms.
---
Given the greater complexity of the experimental design and tweet data, there are numerous reasonable ways to analyze the data. For simplicity, we focus on an analysis in which both primary tweets and retweets are included, data from one day on which a technical issue led to a randomization failure are excluded, and the simplest admissible model structure is used (wave fixed effects, with p-values calculated in the standard fashion using linear regression with robust standard errors clustered on user); we then assess robustness to variations in this specification.
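As a rough illustration of what this baseline specification looks like in practice, the sketch below fits a linear model with wave fixed effects and user-clustered robust standard errors using Python's statsmodels. The data frame and column names are hypothetical placeholders (synthetic data standing in for the user-day panel), not the authors' actual variables or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the user-day panel; all column names are placeholders.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "user_id": rng.integers(0, 100, n),   # cluster identifier
    "wave": rng.integers(0, 7, n),        # stepped-wedge wave
    "treated": rng.integers(0, 2, n),     # 1 = after the accuracy message
})
df["avg_quality"] = 0.5 + 0.02 * df["treated"] + rng.normal(0, 0.2, n)

# Wave fixed effects; p-values from a linear regression with robust
# standard errors clustered on user.
model = smf.ols("avg_quality ~ treated + C(wave)", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["user_id"]})
print(fit.summary().tables[1])
```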
Consistent with our survey experiments, we find clear evidence that the accuracy message made users more discerning in their subsequent sharing decisions. Relative to baseline, the accuracy message increased the average quality of the news sources shared, t(5481)=2.61, p=0.009, and the total quality of shared sources summed over all posts, t(5481)=2.68, p=0.007, by 1.9% and 3.5%, respectively. Furthermore, the treatment roughly doubled the level of sharing discernment (0.05 more mainstream than misinformation links shared per user-day pre-treatment; 0.10 more mainstream than misinformation links shared per user-day post-treatment; interaction between post-treatment dummy and link type, t(5481)=2.75, p=0.006). Conversely, we found no significant treatment effect on the number of posts without links to any of the 60 rated news sites, t(5481)=0.31, p=0.76, which is consistent with the specificity of the treatment.
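The discernment result in particular rests on the interaction between the post-treatment dummy and link type. A minimal sketch of that test, again using synthetic data and placeholder column names rather than the paper's actual variables, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format stand-in: one row per user-day and link type
# (placeholder column names, not the authors' variables).
rng = np.random.default_rng(1)
n = 1000
long_df = pd.DataFrame({
    "user_id": rng.integers(0, 100, n),
    "wave": rng.integers(0, 7, n),
    "treated": rng.integers(0, 2, n),      # post-treatment dummy
    "mainstream": rng.integers(0, 2, n),   # 1 = mainstream link, 0 = misinformation link
})
# Simulated link counts in which the treatment boosts mainstream sharing only.
long_df["n_links"] = rng.poisson(
    0.05 + 0.05 * long_df["mainstream"] * (1 + long_df["treated"])
)

# Sharing discernment = interaction of the post-treatment dummy with link type.
disc = smf.ols("n_links ~ treated * mainstream + C(wave)", data=long_df)
disc_fit = disc.fit(cov_type="cluster", cov_kwds={"groups": long_df["user_id"]})
print(disc_fit.params["treated:mainstream"], disc_fit.pvalues["treated:mainstream"])
```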
This pattern of results is not unique to one particular set of analytic choices. Figure 3C shows the distribution of p-values observed in 96 different analyses assessing the treatment effect on average quality, summed quality, or discernment under a variety of analytic choices. Of these analyses, 80.2% indicate a significant positive treatment effect (and none of 32 analyses of posts without links to a rated site find a significant treatment effect). For statistical details, see SI.
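Conceptually, this robustness check is a small specification-curve ("multiverse") analysis: rerun the same treatment test under every combination of analytic choices and inspect the resulting distribution of p-values. The loop below sketches that idea; the choice grid is illustrative rather than the paper's exact 96 specifications, and prepare_data and raw_data are hypothetical placeholders for code that applies each choice to the data.

```python
import itertools
import statsmodels.formula.api as smf

# Illustrative grid of analytic choices; not the paper's exact 96-specification grid.
choices = {
    "include_retweets": [True, False],
    "drop_glitch_day": [True, False],
    "outcome": ["avg_quality", "sum_quality", "discernment"],
}

results = []
for combo in itertools.product(*choices.values()):
    spec = dict(zip(choices, combo))
    # `prepare_data` and `raw_data` are hypothetical: the helper would apply the
    # chosen filters and compute the chosen outcome column for a user-day frame.
    d = prepare_data(raw_data, spec)
    fit = smf.ols("outcome ~ treated + C(wave)", data=d).fit(
        cov_type="cluster", cov_kwds={"groups": d["user_id"]}
    )
    results.append((fit.params["treated"], fit.pvalues["treated"]))

# Share of specifications showing a significant positive treatment effect.
share_sig_positive = sum(coef > 0 and p < 0.05 for coef, p in results) / len(results)
```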
Finally, we examine the data at the level of the domain (Figure 3D). We see that the treatment effect is driven by an increase in the fraction of rated-site posts with links to mainstream news sites with strong editorial standards, such as the New York Times, and a decrease in the fraction of rated-site posts with links to relatively untrustworthy hyperpartisan sites, such as the Daily Caller. Indeed, a domain-level pairwise correlation between fact-checker rating and change in sharing due to the intervention shows a very strong positive relationship (domains weighted by number of pre-treatment posts; r=0.70). In sum, our accuracy message successfully induced Twitter users who regularly shared misinformation to increase the quality of the news they shared.
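The reported r=0.70 is a weighted Pearson correlation, with each domain weighted by its number of pre-treatment posts. A small self-contained helper for that quantity could look like the sketch below; the column names in the commented usage line are hypothetical.

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Pearson correlation of x and y, weighting observations by w."""
    x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

# Hypothetical domain-level inputs (one value per news domain):
#   rating      - fact-checker trust rating
#   delta_share - change in the domain's share of rated-site posts, post minus pre
#   n_pre_posts - number of pre-treatment posts, used as weights
# r = weighted_pearson(domains["rating"], domains["delta_share"], domains["n_pre_posts"])
```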
Together, these studies shed new light on why people share misinformation, and introduce a new class of interventions aimed at reducing its spread. Our results suggest that, at least for many people, the misinformation problem is not driven by a basic inability to tell which content is inaccurate, or by a desire to purposefully share inaccurate content. Instead, our findings implicate inattention on the part of people who are able to determine the accuracy of content (if they put their mind to it) and who are motivated to avoid sharing inaccurate content (if they realize it is inaccurate). It seems as though people are often distracted from considering the content’s accuracy by other motives when deciding what to share on social media – and therefore, drawing attention to the concept of accuracy can nudge people toward reducing their sharing of misinformation.
These findings have important implications for theories of partisan bias, political psychology, and motivated reasoning. First, at a general level, the dissociation we observed between accuracy judgments and sharing intentions suggests that just because someone shares a piece of news on social media does not necessarily mean that they believe it – and thus, that the widespread sharing of false or misleading partisan content should not necessarily be taken as an indication of the widespread adoption of false beliefs or explicit agreement with hyperpartisan narratives. Furthermore, our results sound a rather optimistic note in an arena that is typically much more pessimistic: rather than partisan bias blinding our participants to the veracity of claims (Kahan, 2017; Kahan et al., 2017), or making them knowing disseminators of ideologically confirming misinformation (Hochschild & Einstein, 2016; McIntyre, 2018; Petersen et al., 2018), our results suggest that many people mistakenly choose to share misinformation because they are merely distracted from considering the content’s accuracy.
Identifying which particular motives are most active when people are on social media – and thus are most important for distracting them from accuracy – is an important direction for future work. Another issue for future work is more precisely identifying people’s state of belief when not reflecting on accuracy: Is it that people hold no particular belief one way or the other, or that they tend to assume content is true by default (31)? Although our results do not differentiate between these possibilities, prior work suggesting that intuitive processes support belief in false headlines (10, 32) lends some credence to the latter possibility. Similarly, future work should investigate why most people think it is important to only share accurate content (33) – differentiating, for example, between an internalized desire for accuracy versus reputation-based concerns. Finally, future work should examine how these results generalize across different subsets of the American population, and – even more importantly – cross-culturally, given that misinformation is a major problem in areas of the world that have very different cultures and histories from the United States.
From an applied perspective, our results highlight an often overlooked avenue by which social media fosters the spread of misinformation. Rather than (or in addition to) the phenomenon of echo chambers and filter bubbles (34, 35), social media platforms may actually discourage people from reflecting on accuracy (36). These platforms are designed to encourage users to rapidly scroll and spontaneously engage with feeds of content, and mix serious news content with emotionally engaging content where accuracy is not a relevant feature (e.g., photos of babies, videos of cats knocking objects off tables for no good reason). Social media platforms also provide immediate quantified social feedback (e.g., number of likes and shares) on users’ posts and are a space to which users come to relax rather than to engage in critical thinking. These factors imply that social media platforms may, by design, tilt users away from considering accuracy when making sharing decisions.
But this need not be the case. Our treatment translates easily into interventions that social media platforms could employ to increase users' focus on accuracy. For example, platforms could periodically ask users to rate the accuracy of randomly selected headlines (e.g., “to help inform algorithms”) – thus reminding them about accuracy in a subtle way that should avoid reactance. The platforms also have the resources to optimize the presentation and details of the messaging, likely leading to effect sizes much larger than what we observed here in the proof of concept offered by Study 5. This optimization should include investigations of which messaging and sample headlines lead to the largest effects for which subgroups, how the effect decays over time (our stepped-wedge design did not provide sufficient statistical power to look beyond a 24-hour window), how to minimize adaptation to repeated exposure to the intervention (e.g., by regularly changing the form and content of the messages), and whether adding a normative component to our primarily cognitive intervention can increase its effectiveness. Approaches such as the one we propose could potentially increase the quality of news circulating online without relying on a centralized institution to certify truth and censor falsehood.