5. General discussion
Dilemmas about what to believe based on the evidence and based on morality are commonplace. However, we know little about how people evaluate beliefs in these contexts. One line of reasoning, consistent with past work documenting both an objectivity bias (Ross & Ward, 1996) and an aversion to discrepant and inaccurate beliefs in others (Golman et al., 2016), predicts that people will demand that others set aside moral concerns and form beliefs impartially, solely on the basis of the evidence. In contrast, we hypothesized that people will sometimes reject a normative commitment to evidence-based reasoning and treat moral considerations as legitimate influences on belief. This would entail that people sometimes prescribe motivated reasoning to others. We then articulated two ways that people could integrate moral and evidential value into their evaluations of belief. First, they could treat moral considerations as shifting the evidential decision criterion for a belief, which we called the “evidence criterion shifting” hypothesis. Alternatively, they could treat a belief's moral quality as an alternative justification for belief, to be weighed against its evidential quality, which we called the “alternative justification” hypothesis. Our studies were capable of detecting whether people prescribe motivated reasoning to others and, further, whether they do so in line with the evidence criterion shifting hypothesis or the alternative justification hypothesis.
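To make the contrast between these hypotheses concrete, they can be given a schematic formalization (our expository gloss; no such model was fitted in the studies). Let $e$ denote the strength of a believer's evidence for a belief and $m$ the moral benefit of holding it. Under evidence criterion shifting, the belief is evaluated as evidentially acceptable just in case

$$e \geq \theta(m), \quad \text{with } \theta \text{ decreasing in } m,$$

so moral benefit lowers the evidential threshold itself. Under alternative justification, overall justification is instead a weighted combination such as

$$J = w_e\,e + w_m\,m,$$

so moral benefit can offset an acknowledged evidential shortfall without changing what counts as evidentially sufficient. Here $e$, $m$, $\theta$, $J$, $w_e$, and $w_m$ are illustrative placeholders rather than measured quantities.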
Across all studies, participants routinely indicated that what a believer ought to believe, or was justified in believing, should be affected by what would be morally beneficial to believe. In Study 1, participants on average reported that what someone ought to believe should be more optimistic (in favor of what is morally beneficial to believe) than what is objectively most accurate for that person to believe based on their evidence. The extent to which participants prescribed these optimistic beliefs was strongly associated with the amount of moral benefit they thought an optimistic belief would confer, as measured by abstract statements such as, “All else being equal, it is morally good to give your friend the benefit of the doubt.” In Studies 2 and 3, participants reported that someone who would gain a moral benefit by being optimistic was more justified in adopting an overly optimistic belief than someone else with the same information who lacked a moral justification (and so adopted the overly optimistic belief on the basis of a strong preference). Moreover, when both people adopted an evidence-based belief, the believer who had to disregard a moral benefit in order to do so was judged less justified than the one who merely had to give up a preference. Finally, in Study 3, participants reported that, even though a spouse and a friend held the same evidence about the objective chances of the spouse's divorce, the spouse had a stronger obligation to remain optimistic about the marriage than the friend did. Taken together, these results provide strong evidence against the idea that people always demand that others form beliefs based on an impartial and objective evaluation of the evidence.
Consistent with the evidence criterion shifting hypothesis, participants also applied evidential double standards when evaluating others' beliefs. In Study 1, participants reported that, relative to an impartial observer with the same information, someone with a moral reason to be optimistic had a wider range of beliefs that could be considered “consistent with” and “based on” the evidence. Critically, however, this broadened range of evidence-consistent beliefs included only beliefs that were more morally desirable. Morally undesirable beliefs were not similarly rated as more consistent with the evidence for the main character compared to the impartial observer. Studies 2 and 3 provided converging evidence using different measures of perceived evidential quality. In these studies, participants judged that overly optimistic beliefs were more likely to pass the threshold of “sufficient evidence” when the believer had a good moral reason to adopt those beliefs, compared to a believer who adopted the same beliefs based on a mere preference. Likewise, participants judged that beliefs that disregarded a good moral reason were less likely to have sufficient evidence than beliefs that disregarded a preference. Importantly, these differences in evidential quality arose even though the two beliefs were backed by the same objective information. Finally, Study 2 (though not Study 3) documented evidence criterion shifting using an indirect measure of evidence quality, namely, attributions of knowledge. In sum, these findings show that one reason an observer may prescribe a biased belief is that moral considerations change how much evidence the observer deems necessary for holding the belief in an evidentially satisfactory way.
Finally, these studies were capable of detecting whether people thought that moral considerations could justify holding a belief beyond what is supported by the evidence – that is, whether moral reasons constitute an “alternative justification” for belief. Study 1 documented morality playing this role in two of the six vignettes that we examined. When prescribing beliefs to a newlywed who is trying to judge his chance of divorce, about half of the participants who prescribed a motivated belief reported that the newlywed should believe something that was, by their own lights, inconsistent with his evidence. Studies 2 and 3 revealed a more subtle way in which moral considerations directly affect belief evaluation. Across all the vignettes, the moral quality of the belief – such as how helpful or loyal the belief was – predicted participants' evaluations of how justified and permissible the belief was to hold, even after accounting for the evidential quality of the belief. Thus, people will sometimes prescribe a belief to someone knowing that the belief is unsupported by that person's evidence, because the belief confers a moral benefit.
Though the focus of the current investigation was to determine whether people prescribe motivated reasoning in certain common situations, it is important to note that these studies also documented substantial evidence that people think beliefs ought to be constrained by the evidence. In Study 1, participants prescribed beliefs that were close to what they thought was best supported by the evidence (Table 3). Indeed, on average, participants prescribed beliefs that fell closer to the evidence-supported (pessimistic) belief than to the morally preferable (optimistic) belief (Fig. 3). Additionally, Studies 2 and 3 documented a strong association between the perceived evidential quality of a belief and judgments of the belief's permissibility and justifiability: The less sufficient the believer's evidence, the less justifiable and permissible it was for them to hold the belief. Thus, while these findings show that people will integrate moral considerations into their belief evaluations, they also show that people think others should balance these considerations against the evidence that they have.
5.1. Alternative explanations
Two alternative explanations for our findings stem from the observation that we manipulated moral obligation by changing the social distance between two believers. Rather than social distance affecting the moral norms that apply to one's belief, as we hypothesize, it could instead be that being close to the person about whom one is forming a belief either (1) makes one's belief more likely to be self-fulfilling, or (2) creates a reason to be more diligent and therefore more withholding of belief in general. We address each of these concerns below.
5.1.1. Self-fulfilling beliefs
Adopting a belief can make certain outcomes more likely. For instance, adopting an optimistic belief could cause one to feel happier, try harder at some task, or bring about a beneficial outcome. We hypothesized that people sometimes treat these effects as constituting moral reasons to adopt a specific belief. For instance, if adopting an optimistic belief about a spouse's prognosis could improve that prognosis, then this benefit may constitute a morally good reason to adopt the optimistic belief. However, when the outcome in question is also what the belief is about, as in this example, the belief is potentially “self-fulfilling.” Self-fulfilling beliefs could confound moral reasons to adopt a belief with evidential reasons to adopt it. This can happen if participants attribute to the believer the further belief that their belief is self-fulfilling; it would then follow that this person has more evidence (in the form of the very belief that they hold) in favor of the outcome in question. For instance, participants may infer that, if the newlywed husband in Study 3 makes it more likely that he will not get divorced by adopting the belief that he has a 0% chance of divorce, then the observation that he has formed this belief may constitute additional evidence that he has a 0% chance of divorce. His friend who adopts the same belief would not have access to this additional evidence, because the friend's belief does not affect the husband's outcome. If participants reason about beliefs in this way, then the cases in which people seem to be endorsing non-evidential grounds for belief may really be cases in which participants are inferring the presence of new evidence stemming from the self-fulfilling belief.
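Stated schematically (our illustration of the structure of this confound, not a model from the studies): if the husband's optimistic belief $B$ is self-fulfilling, then

$$P(\text{no divorce} \mid \text{husband holds } B) > P(\text{no divorce} \mid \text{husband does not hold } B).$$

An observer who credits the husband with knowledge of this dependency can therefore treat the very fact that he holds $B$ as extra evidence for the believed outcome. No parallel inference is licensed for the friend, whose holding of $B$ has no causal influence on the husband's marriage.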
Several findings from the studies above speak against this skeptical proposal. First, if the socially close character's beliefs are treated as self-fulfilling, and therefore as evidentially self-supporting, then this feature of their beliefs ought to apply to pessimistic beliefs just as it does to optimistic ones. However, as we observed in Studies 2 and 3, when the husband adopted the pessimistic belief, participants judged his belief more harshly than the same pessimistic belief held by the friend, directly contradicting this prediction. Put another way, a self-fulfilling account predicts that close others will always be judged as better evidentially situated than distant others; the statistical interaction between believer and belief documented in Studies 2 and 3 rules out this interpretation. Second, in Studies 1 and 2, prescribed motivated reasoning, evidence criterion shifting, and alternative justification were all supported in the Friend scenario. In this scenario, the relevant belief concerned something that occurred in the past, namely, whether the cocaine that had been discovered belonged to the friend. Because the belief concerns something in the past, neither an optimistic nor a pessimistic belief could affect its likelihood of being true. Thus, while it is possible that, in some vignettes, participants treated self-fulfilling beliefs as evidentially self-supporting, this potential confound cannot fully explain our results.
5.1.2. Norms of due diligence
Norms of “due diligence” could explain, in principle, why two people with the same evidence should hold different beliefs. For example, if there is nothing in your car, then you may be justified in assuming that you left the windows down, based on your knowledge that you usually do. But if you left a child in the car, then you have a reason to double-check before deciding whether you did or did not – even if you have the same habit-based reasons to think that you left them down. Prior work shows that people believe one's diligence in belief formation should vary according to the risk imposed by a false belief (McAllister et al., 1979; Pinillos, 2012), and people in these situations actually do engage in more thorough reasoning when the risks are high (Fiske & Neuberg, 1990; Kunda, 1990; Mayseless & Kruglanski, 1987; Newell & Bröder, 2008; Payne, Bettman, & Johnson, 1993). Thus, perhaps the main characters in Study 1 have a wider range of beliefs consistent with the evidence than the AI because they need to be more diligent in their reasoning than the AI does, and therefore require more evidence before becoming confident. Likewise, in Studies 2 and 3, perhaps participants think that people should reason more diligently about those to whom they are close than about those to whom they are distant, and this norm explains why believers were evaluated negatively for adopting pessimistic beliefs. In sum, perhaps changes in social distance affect how diligent one must be when reasoning, rather than whether one ought to reason in a motivated or biased way.
However, norms of diligence also fail to fully explain the results of these studies. In Study 1, a due diligence explanation would predict a symmetrically wider range of beliefs consistent with the evidence, such that a wider range of morally undesirable beliefs would also be permitted for the characters but not for the AI. However, we observed evidence criterion shifting only for more morally desirable beliefs, not for more morally undesirable beliefs, inconsistent with predictions based on due diligence. Similarly, in Studies 2 and 3, a due diligence explanation would predict that, based on the same amount of information, the socially close character would be less justified in adopting any belief, whether optimistic or pessimistic. Yet the statistical interaction we observed rules this out: Whereas socially close believers were judged poorly for adopting the morally undesirable (but evidentially better) belief, these differences were either attenuated or reversed for the morally desirable, optimistic belief. Thus, our data suggest that being a person rather than an AI, or having a close relationship rather than a distant one, does not impose a demand to be careful in one's beliefs, but instead imposes a demand to be partial.
5.2. Implications for motivated reasoning
Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making, rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment not to rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential, criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work stopped short of demonstrating that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief and, specifically, that they will prescribe motivated reasoning to others.
Our results therefore warrant a fresh look at old explanations for irrationality. Most relevant are the overconfidence and optimism biases documented in the domains of close relationships (e.g., Baker & Emery, 1993; Srivastava, McGonigal, Richards, Butler, & Gross, 2006) and health (e.g., Thompson, Sobolew-Shubin, Galbraith, Schwankovsky, & Cruzen, 1993). Past work has suggested that the ultimate explanation for motivated reasoning could derive from its downstream benefits for believers (Baumeister, 1989; Murray & Holmes, 1997; Taylor & Brown, 1988; Srivastava et al., 2006; but see Neff & Geers, 2013, and Tenney et al., 2015). Our findings suggest a more proximate explanation for these biases: that lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally beneficial optimistic beliefs than for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.
Beyond this general observation about why motivated reasoning may arise or persist, our results also hint at a possible mechanism for how moral norms for belief facilitate motivated reasoning. Specifically, people could acknowledge that one of their beliefs is supported by less total evidence than their other beliefs, but judge that the belief nevertheless satisfies the demand for sufficient evidence because the standards for evidence are lower in light of the belief's moral quality. As a result, they may not judge it necessary to pursue further evidence, or to revise the belief in light of modest counter-evidence. For example, people could recognize that a belief in God, or a belief in karma, is supported by little objective evidence, but at the same time believe that this little evidence nevertheless constitutes sufficient evidence in light of the moral benefit that the belief confers (see McPhetres & Zuckerman, 2017, and Lobato, Tabatabaeian, Fleming, Sulzmann, & Holbrook, 2019, for some preliminary findings consistent with this proposal; see James, 1937, and Pace, 2011, for fuller discussion of how to evaluate evidence for morally beneficial beliefs). Although this proposal is speculative, it follows naturally from the findings presented here and suggests a valuable direction for future research.
5.3. Moderating prescribed motivated reasoning
Though we have demonstrated that people prescribe motivated reasoning to others under some conditions, we have not offered a comprehensive treatment of the conditions under which this occurs. Indeed, we did not observe prescribed motivated reasoning in the Bully vignette in Study 2, or in the Sex vignette in Study 1, despite their similarity to the other vignettes. One straightforward explanation is that the relevant moral norms in those vignettes did not outweigh demands to be accurate. Indeed, participants reported the least moral concern, on average, for the Sex vignette, raising the possibility that a putative demand to favor a particular belief in that scenario was not strong enough to override the norm to be objective.9 Likewise, in the Bully vignette, there could have been strong reasons to be diligent and accurate that directly competed with reasons to be partial, but which we had not foreseen when constructing our materials. For instance, participants may have believed that the teacher had a moral responsibility to be clear-eyed about the bully in order to protect the other students. This explanation is speculative, but it is consistent with prior work documenting that people temper their recommendations for over-optimism when the risks outweigh the potential benefits. For instance, Tenney et al. (2015) found that people were less likely to prescribe optimism to others who were in the process of making a decision than to others who had already made the decision, presumably because deciding on the basis of wrong information is unnecessarily risky in a way that over-optimism after a decision is not. In general, these considerations suggest that, just as people are sensitive to the benefits of accuracy and bias when setting their own reasoning goals (cf. Kruglanski, 2004), they likely weigh the comparative advantages of accuracy and bias when prescribing beliefs to others.
Though we tested a wide variety of scenarios in the current studies, the range of morally beneficial beliefs was still relatively limited. Specifically, many of the scenarios we tested invoked moral obligations that stem from one's close personal relationships. However, it is possible that people will sometimes endorse moral demands that extend to distant others and that outweigh the normal demands to be partial towards one's friends and family. For instance, if someone's friend has been accused of sexual assault, it is possible that observers will no longer prescribe giving that friend the benefit of the doubt. Instead, one's moral obligations to the potential victims may demand either being perfectly objective or perhaps even weighing the alleged victim's testimony more heavily than the friend's. As this example highlights, the moral reasons that sometimes justify motivated beliefs in our studies may be outweighed by reasons that confer different kinds of moral benefits (beyond the possible benefit of accurate reasoning discussed above).
Importantly, which moral norms are salient to observers, and indeed whether observers moralize mental states at all, differs across individuals, religious communities, and cultures (Graham et al., 2013). For instance, Christians may be more likely than Jews to demand that others form respectful beliefs about their parents (irrespective of the evidence), because Christians are more likely than Jews to judge disrespectful attitudes as morally wrong and as under the believer's control (Cohen & Rozin, 2001). Likewise, conservatives may be more likely to demand partial beliefs about friends or authority figures in light of their tendency to attach greater value to these moral norms (Graham, Haidt, & Nosek, 2009). Future work would benefit from rigorously documenting which beliefs people moralize, and in which situations people believe motivated reasoning will be beneficial.
A final moderating factor that we did not explore in our studies concerns the extent to which epistemic rationality is valued differently across individuals. Some prior work has suggested that people vary in their intuitive commitment to objective, logical, and evidence-based reasoning (Pennycook et al., 2020; Ståhl et al., 2016). If these individual differences reflect the degree to which individuals intrinsically value epistemic rationality, then individuals who strongly value epistemic rationality should, on average, be less sensitive to changes in the moral benefits of motivated reasoning. However, prior work measuring commitment to rationality has not investigated why certain individuals tend to value epistemic rationality more than others. This omission is important because there are potentially many reasons why someone may categorically reject bias – morally motivated or otherwise (Chignell, 2018). In our view, this remains an underexplored, but valuable, domain of research.
To summarize, whether people prescribe motivated reasoning to others likely reflects a complex integration of (i) situational demands to be accurate, (ii) situational demands to adopt a morally beneficial belief (where more than one moral norm may come into play, and where such norms are likely to vary across cultures), and (iii) individual differences in the extent to which people value accuracy and objectivity over other qualities of belief. Our results suggest that a large proportion of people feel the tug of the moral benefits of belief in at least some common social scenarios, but much work remains to be done.
5.4. Prescribing motivated reasoning for moral or non-moral reasons
The studies above provide strong support for the claim that, in the lay ethics of belief, morality can justify motivated reasoning. This raises the question of whether moral value is the only kind of non-evidential consideration that people explicitly endorse in belief formation, or whether people also think others should adopt beliefs that are merely useful (but not morally beneficial). We found that moral considerations were treated as a better justification for motivated reasoning than mere preferences (Studies 2–3), but these studies do not definitively rule out the possibility that a large personal benefit could also justify motivated reasoning in the eyes of observers. Some philosophers have famously argued in favor of this possibility, as when Pascal (1852) concluded that, despite a paucity of evidence, he ought to believe that God exists or else risk incalculable suffering after death. Whether people judge that these kinds of benefits can justify motivated belief warrants further investigation.