The Worst-Motive Fallacy: A Negativity Bias in Motive Attribution. Joel Walmsley, Cathal O’Madagain. Psychological Science, October 21, 2020. https://doi.org/10.1177/0956797620954492
Rolf Degen's take: https://twitter.com/DegenRolf/status/1319146646610022401
Abstract: In this article, we describe a hitherto undocumented fallacy—in the sense of a mistake in reasoning—constituted by a negativity bias in the way that people attribute motives to others. We call this the “worst-motive fallacy,” and we conducted two experiments to investigate it. In Experiment 1 (N = 323), participants expected protagonists in a variety of fictional vignettes to pursue courses of action that satisfy the protagonists’ worst motive, and furthermore, participants expected the protagonists to pursue significantly worse courses of action than they would prefer themselves. Experiment 2 (N = 967) was a preregistered attempted replication of Experiment 1 with a larger range of vignettes; the first effect was not replicated for the new vignettes tested but was replicated for the original set. Also, we once again found that participants expected protagonists to be more likely than they were themselves to pursue courses of action that they considered morally bad. We discuss the worst-motive fallacy’s relation to other well-known biases as well as its possible evolutionary origins and its ethical (and meta-ethical) consequences.
Keywords: cognitive bias, motives, attribution, meta-ethics, experimental philosophy, moral intuitions, moral judgment, open data, open materials, preregistered
General Discussion
The results of our experiments suggest that the tendency against which Hanlon’s razor warns is, in fact, a real tendency in our judgments of other people’s motives. Across a range of contexts, we found evidence that people are inclined to expect that agents are motivated primarily by the worst of the reasons they have for a given action and that people expect others to be motivated by worse reasons than they are themselves. Although we were unable to replicate this effect for a broader range of vignettes than we considered in Experiment 1 (the first four from Experiment 1 plus an additional eight), it was replicated for the original vignettes when these were analyzed separately. Moreover, there was a clear difference in the new vignettes that could explain this failure: Participants rated the good motives as more extreme than the bad motives in the new vignettes. This difference may have counteracted any bias toward expecting the agent to act primarily on the worse motive. We also found that, in all cases, participants expected the agents in the story to be more likely to act on the motives that they found more extreme, as we anticipated when we designed the study. Finally, the second effect we reported, in which participants expected other people to be more likely than they themselves would be to pursue actions that they consider bad, was replicated for all of the vignettes in Experiment 2.
The worst-motive fallacy fits naturally within the family of general negativity biases mentioned in the Background section, because it suggests, in effect, that we are also negatively biased in our moral evaluation of other people’s motives. Plausibly, we take the worst reasons for actions to be the main motives because of our more general tendency to focus on negative rather than positive stimuli, coupled with a more pronounced tendency to evaluate other people’s characters more negatively than our own.
We think that the worst-motive fallacy may arise because of the adaptive advantages gained from paying more attention to the negative rather than the positive aspects of other people’s behavior, a strategy whose wisdom is captured by another common folk aphorism: “Hope for the best but prepare for the worst.” A cognitive bias may be selected for when the errors it produces are less costly than errors in the opposite direction (Haselton & Nettle, 2006). In general, the evolutionary story goes, it is more advantageous to pay attention to negative aspects of our environment and thereby avoid harm, even if that means failing to notice positive aspects and thereby missing good opportunities. In the context of the worst-motive fallacy, although being overly suspicious of other people’s motives may incur the cost of failing to take up cooperative opportunities, the cost of naively entering cooperative partnerships with malicious actors may be higher. It will therefore be more advantageous to err on the side of falsely believing that other people have bad motives than to risk falsely believing that they have good motives.
Given the evolutionary account proposed here, several follow-up studies using similar methods could explore the boundary conditions of the effect we have identified. For example, one might expect an in-group/out-group effect that leads to more benign interpretations of other people’s motives when they are relatives or when more is known about the protagonist’s history, prior behavior, or decision-making context. If we are generally predisposed to treat other people with suspicion, then it seems plausible that this predisposition would manifest as an increased negativity bias in attributing motives to out-group members. Such an effect would likely run along depressingly familiar prejudicial lines of gender, race, nationality, class, and so on.
What about those philosophical theories, considered at the outset, that appeal to an agent’s motives in the assessment of the morality of actions? The present study suggests that we should be cautious about relying on our assessment of other people’s motives to judge the morality of their actions. The negativity bias we have uncovered casts doubt on the practicality of any meta-ethical theory that recommends rooting our moral evaluation of other people’s actions in our assessment of their motives. Similarly, the robust effect of extremeness (i.e., the fact that people are more likely to attribute a motive the further it is from morally neutral, in either direction) gives us further reason to be wary. Given two competing motives that we have reason to believe an agent has in mind, we are likely to treat not only the worse motive but also the more morally extreme motive as the main one. Although such meta-ethical theories could still be correct that, objectively, actors’ motives play an essential role in the goodness of their actions, they should nonetheless carry a user warning, as it were, that our subjective assessment of those motives may be far less reliable than is generally supposed.
These are matters for future investigation beyond the scope of this article. Our present focus has been to formally identify this previously unnoticed fallacy and to demonstrate for the first time that people actually tend to commit it. Of course, the reader might suspect that our main motive in writing the present article was something else again: to publish in a top-ranking, peer-reviewed journal for the purposes of fame, glory, and career advancement. We suggest, however, that to suppose this would be to commit a fallacy whose cause is a demonstrably commonplace cognitive bias.