The Negativity Bias, Revisited: Evidence from Neuroscience Measures and an Individual Differences Approach. Catherine J. Norris. Social Neuroscience, Nov 21 2019. https://doi.org/10.1080/17470919.2019.1696225
Abstract: Past research has provided support for the existence of a negativity bias, the tendency for negativity to have a stronger impact than positivity. Theoretically, the negativity bias provides an evolutionary advantage, as it is more critical for survival to avoid a harmful stimulus than to pursue a potentially helpful one. The current paper reviews the theoretical grounding of the negativity bias in the Evaluative Space Model, and presents recent findings using a multilevel approach that further elucidate the mechanisms underlying the negativity bias and underscore the importance of the negativity bias for human functioning.
Keywords: ERPs, fMRI, neuroticism, personality, gender, age
Friday, November 22, 2019
We find that affective arousal increases the amount & the severity of self-disclosure, and that self-disclosure is also increased by physiological arousal; often-thought-about thoughts are more likely to be disclosed
Arousal increases self-disclosure. Brent Coker, Ann L. McGill. Journal of Experimental Social Psychology, Volume 87, March 2020, 103928. https://doi.org/10.1016/j.jesp.2019.103928
Abstract: This research tests the hypothesis that arousal increases self-disclosure. We find that affective arousal increases the amount (study 1) and the severity (study 2) of self-disclosure, and that self-disclosure is also increased by physiological arousal (study 3). We further explore the moderating effect of thought frequency on the arousal-disclosure relationship, finding that often-thought-about thoughts are more likely to be disclosed than less thought-about thoughts. This research has practical importance in terms of understanding when and why people self-disclose personal information, and enriches our understanding of the theoretical relationship between arousal and information sharing.
In 2010, Fanelli reported a positive result rate of 91.5% for psychology papers; these authors found only 42.65% positive results papers in the Registered Reports they reviewed
Scheel, A. M. (2019, March 12). Positive result rates in psychology: Registered Reports compared to the conventional literature. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2390
Abstract
Background: Several studies have found the scientific literature in psychology to be characterised by an exceptionally high rate of publications that report 'positive' results (supporting their main research hypothesis) on the one hand, and notoriously low statistical power on the other (Sterling, 1959; Fanelli, 2010; Maxwell, 2004). These findings are at odds with each other and likely reflect a tendency to under-report negative results, through mechanisms such as file-drawering, publication bias, and 'questionable research practices' like p-hacking and HARKing. A strong bias against negative results can lead to an inflated false positive rate and inflated effect sizes in the literature, making it difficult for researchers to build on previous work and increasing the risk of ineffective or harmful 'evidence-based' applications and policies. In 2013, Registered Reports (RRs) were developed as a new publication format to reduce under-reporting of negative results by mitigating file-drawering, publication bias, and questionable research practices: Before collecting and analysing their data, authors submit a protocol containing their hypotheses and methods to a journal, where it gets reviewed and, if successful, receives 'in-principle acceptance' which guarantees publication once the results are in, regardless of the outcome. Given their bias-reducing safeguards, we should expect a lower positive result rate in RRs compared to the non-RR literature, but to date no structured comparison of RRs and non-RRs has been offered.
Objectives: Fanelli (2010) presented a simple method to assess the positive result rate in a large sample of publications. We used his method to replicate his results for the (non-RR) psychology literature since 2013 and compare it to all published RRs in psychology.
Hypothesis: Using Fanelli's method, we tested the hypothesis that published RRs in psychology have a lower positive result rate than non-RRs in psychology published in the same time range (2013-2018). We would reject this hypothesis if the difference between RRs and non-RRs were found to be significantly smaller than 6%.
Method: To obtain the non-RR sample, we applied Fanelli's (2010) sampling strategy: We searched all journals listed in the 'Psychiatry/Psychology' category of the Essential Science Indicators database for the phrase 'test* the hypothes*' and picked a random sample of 150 publications of all search results, deviating from Fanelli only in restricting the year of publication to 2013-2018. To obtain the RR sample, we relied on a list of published RRs curated by the Center for Open Science (https://www.zotero.org/groups/479248/osf/items/collectionKey/KEJP68G9?) which at the time had 152 entries, and excluded all publications that were not in psychology or not certainly RRs, leaving 81 publications. The positive result rate was determined by identifying the first hypothesis mentioned in the abstract or full text and coding whether it was (fully or partially) supported or not supported, and then for each group calculating the proportion of papers that reported support. Methods and analyses were preregistered at https://osf.io/s8e97/.
Results: Eight non-RRs and 13 RRs were excluded because they either did not test a hypothesis or could not be coded for other reasons, leaving 142 non-RRs and 68 RRs. The positive result rate was 95.77% for non-RRs and 42.65% for RRs. The proportion difference was significantly different from zero (one-sided Fisher's exact test, alpha = .05), p < .0001, and not significantly smaller than our smallest effect size of interest of 6% in an equivalence test, Z = -7.564, p > .999. For an exploratory analysis we also coded whether or not a paper contained a replication of previous work and found that none of the non-RRs, but two thirds (42/68) of the RRs did. The positive result rate for replication RRs was slightly lower (35.71%) than for original RRs (53.85%), but this difference was not significant, p = .112.
Conclusions and implications: In 2010, Fanelli reported a positive result rate of 91.5% for the field of psychology. Using the same method, we found a rate of 95.77% for the time between 2013 and 2018, suggesting that the rate has not gone down in recent years. In contrast, with only 42.65% the new population of Registered Reports shows a strikingly lower positive result rate than the non-RR literature. This difference may be somewhat smaller when focussing only on original work, but the RR population is currently too small to draw strong conclusions about any differences between replication and original studies. Our conclusions are limited by the different sampling procedures for RRs and non-RRs and by the observational nature of our study, which did not allow us to account for potential confounding factors. Nonetheless, our results are in line with the assumption that RRs reduce under-reporting of negative results and provide a first estimate for the difference between this new population of studies and the conventional literature.
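For readers who want to see the headline comparison mechanically, below is a minimal Python sketch (not the authors' analysis code) that reconstructs the two-by-two table behind the reported rates and runs the one-sided Fisher's exact test; the counts 136/142 and 29/68 are assumptions back-calculated from the reported 95.77% and 42.65%.

```python
# Minimal sketch (not the authors' code) of the headline comparison between
# non-Registered Reports and Registered Reports. The counts are assumptions
# back-calculated from the reported rates: 136/142 = 95.77%, 29/68 = 42.65%.
from scipy.stats import fisher_exact

non_rr_pos, non_rr_n = 136, 142   # non-RRs reporting support for hypothesis 1
rr_pos, rr_n = 29, 68             # RRs reporting support for hypothesis 1

# 2x2 table: rows = publication type, columns = (positive, negative) results
table = [[non_rr_pos, non_rr_n - non_rr_pos],
         [rr_pos, rr_n - rr_pos]]

# One-sided test: are non-RRs more likely than RRs to report positive results?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"non-RR rate = {non_rr_pos / non_rr_n:.2%}, "
      f"RR rate = {rr_pos / rr_n:.2%}, p = {p_value:.2g}")
```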
Credibility, communication, and climate change: Lifestyle inconsistency and do-gooder derogation
Credibility, communication, and climate change: How lifestyle inconsistency and do-gooder derogation impact decarbonization advocacy. Gregg Sparkman, Shahzeen Z. Attari. Energy Research & Social Science, Volume 59, January 2020, 101290. https://doi.org/10.1016/j.erss.2019.101290
Abstract: The present research examines two distinct pitfalls for advocates aiming to motivate others to use renewable energy and reduce their carbon footprint. Recent research has found that science communicators and advocates may be judged for inconsistency between their behavior and advocacy—where information that an advocate's lifestyle has a large carbon footprint can undermine their appeals to live more sustainably or support policies to address climate change. Conversely, in other advocacy domains, research on do-gooder derogation has found that exemplary behavior among advocates can lead people to feel defensive about their own shortcomings and reject the exemplar and their cause. Do environmental advocates have to worry about both do-gooder derogation and behavior-advocacy inconsistency? Further, do different types of advocates have to worry about these pitfalls equally? To answer these questions, we use an online survey in the United States (N = 2362) to contrast the effectiveness of advocacy from peers and from experts across three levels of sustainable lifestyles: not sustainable, somewhat sustainable, and highly sustainable. We find strong evidence for the negative effects of behavior-advocacy inconsistency for both neighbors and experts, albeit much larger impacts for experts. Further, we also find partial evidence for do-gooder derogation for neighbors and experts: highly sustainable advocates were not more influential than somewhat sustainable ones—instead they were marginally worse. Overall, these results suggest that advocates, especially experts, are most credible and influential when they adopt many sustainable behaviors in their day-to-day lives, so long as they are not seen as too extreme.
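To make the design concrete, the sketch below shows one plausible way to model interest ratings from such a 2 (advocate: neighbor vs. expert) × 3 (lifestyle: unsustainable, somewhat, highly sustainable) between-subjects survey; the simulated data frame and column names are illustrative assumptions, not the authors' dataset or analysis script.

```python
# Hypothetical sketch of analyzing a 2 (advocate type) x 3 (lifestyle)
# between-subjects design like the one described above. The data are
# simulated placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for advocate in ("neighbor", "expert"):
    for lifestyle in ("unsustainable", "somewhat", "highly"):
        # Placeholder means: consistency helps, slightly more so for experts.
        mean = 4.0 + (0.4 if lifestyle != "unsustainable" else 0.0) \
                   + (0.2 if advocate == "expert" and lifestyle != "unsustainable" else 0.0)
        rows.append(pd.DataFrame({
            "advocate": advocate,
            "lifestyle": lifestyle,
            "interest": rng.normal(mean, 1.0, 100),  # simulated interest ratings
        }))
df = pd.concat(rows, ignore_index=True)

# Two-way model with interaction: does lifestyle consistency matter more
# for experts than for neighbors?
fit = smf.ols("interest ~ C(advocate) * C(lifestyle)", data=df).fit()
print(fit.summary())
```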
4. Discussion
These results show that both experts and neighbors suffered from behavior-advocacy inconsistency effects: when advocates lived unsustainable lifestyles, they were less successful at encouraging others to sign up for a residential renewable energy program. However, behavior-advocacy inconsistency effects were significantly worse for experts than for neighbors. It appears that people are more forgiving of neighbors' unsustainable lifestyles than of experts' shortcomings—perhaps because we hold experts to higher standards for behavior-advocacy consistency than we hold peers. This also appears to be true for perceptions of advocates' credibility.
Further, these data show that living a highly sustainable lifestyle (buying renewable energy, having an extremely efficient home, completely avoiding flying, and eating no meat or cheese) does not make advocates more effective than living a somewhat sustainable lifestyle (buying renewable energy, having a fairly energy efficient home, and making substantial efforts to curb meat eating and flying). In fact, disclosing one's highly sustainable lifestyle while appealing to others to change may risk provoking do-gooder derogation, where advocates' exemplary lifestyles make others feel defensive about their own shortcomings, leading them to dislike the advocate and their cause. Consistent with this, we found that highly sustainable advocates were marginally less effective at increasing interest in the renewable energy program and no more credible than somewhat sustainable ones. Those who were somewhat sustainable fared well and do not appear to have suffered from concerns about behavior-advocacy inconsistency or do-gooder derogation. It is also possible that participants saw less of a contrast between themselves and the somewhat sustainable advocate: participants may have believed they were more sustainable than unsustainable advocates, and less sustainable than the highly sustainable advocate. If true, somewhat sustainable advocates may also benefit from perceptions of greater similarity, and therefore serve more easily as a social model [14]. Indeed, in a post hoc analysis we find that somewhat sustainable advocates are perceived to be slightly less socially distant than highly sustainable advocates (d = 0.11, see the Supplemental Material).
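As a concrete illustration of that post hoc effect size, here is a hedged Python sketch (not the authors' code) of computing Cohen's d for perceived social distance between the somewhat and highly sustainable advocate conditions; the ratings are simulated placeholders, since the actual data are reported in the paper's Supplemental Material.

```python
# Hypothetical sketch of the post hoc Cohen's d for perceived social
# distance; the ratings below are simulated placeholders, not the study
# data (the reported effect was d = 0.11).
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
highly = rng.normal(4.1, 1.0, 400)    # placeholder social-distance ratings
somewhat = rng.normal(4.0, 1.0, 400)  # placeholder social-distance ratings
print(round(cohens_d(highly, somewhat), 2))  # a small effect, roughly d ≈ 0.1
```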
Experts appear to be judged more harshly, as their efforts suffer more from behavior-advocacy inconsistency. This is unfortunate given that experts, with their wealth of knowledge and dedication to the topic, hold an irreplaceable role in increasing understanding by disseminating science and in advocating for action on climate change. Notably, advocacy itself may not be problematic for climate change experts. Scientists, academics, and others can advocate for climate-related policies and solutions in a number of ways [43], and are able to do so without hurting their credibility with the public [44] or their colleagues [45]. Research suggests that experts may be able to make substantial reductions to their footprint, such as reducing flying, without adversely affecting their academic success [46]. However, if experts involved in advocacy are unwilling to live somewhat sustainable lives, they may have trouble avoiding the negative effects of behavior-advocacy inconsistency. By comparison, neighbors experienced much weaker behavior-advocacy inconsistency effects. In fact, for neighbors there was no significant difference between being highly sustainable and being unsustainable in participants' interest in adopting renewable energy. This may present a silver lining to these findings: non-professionals, no matter their lifestyle, can still be fairly effective advocates for decarbonization.
4.1. Limitations & future directions
In the present research we examined behavior-advocacy inconsistency effects and do-gooder derogation effects in the context of someone self-disclosing their personal actions. While self-disclosure is not uncommon for advocates of sustainability [47], [48], in other contexts the targets of advocacy may come to learn about an advocate's sustainable practices through their own inquiry, a third party, or some other indirect means. For example, after the release of “An Inconvenient Truth”, Al Gore came under attack for his household energy consumption in a series of news articles attempting to impugn his reputation and the sincerity of his cause [49]. It is possible that our results would differ if the information about the advocate's lifestyle were learned through some other means. Future research is needed to assess whether the form and source of disclosure about an advocate's lifestyle affect the results found here.
The operationalization of do-gooders used in the present work required the advocate to state both the criteria for living sustainably (home energy, diet, and flying) and their excellent performance on those criteria. However, participants may lack personal knowledge about how these behaviors correspond to sustainability. For instance, participants may have been unaware that dietary choices have a substantial impact on the environment. If participants felt great uncertainty about whether these actions were actually important to sustainability, they may not have experienced any negative social comparison to do-gooders. Therefore, one possibility is that do-gooder derogation is more prominent in cases where people already understand the importance of, or care about, the domain and behavior on which their performance is being compared.
The study design used here relies on asking all participants to envision highly similar vignettes in order to control for all aspects beyond those we sought to manipulate. This ensures strong internal validity, but raises questions regarding external validity and generalizability. In particular, our approach does not assess actual behavior change and instead assesses self-reported interest in the vignettes, which may differ from real-world behavior. Further, a survey experiment is limited in terms of providing realistic experiences with advocates. In particular, a fictional peer may not adequately convey the vivid information people would have in real life about their neighbors. Therefore, it is possible that the rich social interactions that come with real social ties would produce different, and potentially stronger, results than those found here. Similarly, envisioning attending a talk may differ from actually attending a presentation in ways that meaningfully change the results observed here. The present research lays the groundwork for studies seeking to assess such phenomena in the field, which can provide greater confidence in how they generalize to real-world experiences.
While the present research examined an important outcome, interest in a residential renewable energy program, it is possible that results may differ for other sustainable behaviors. For instance, past research on eliminating meat consumption has found stronger evidence for do-gooder derogation than we found in the present context [24]. Therefore, the relative strength of behavior-advocacy inconsistency and do-gooder derogation may vary across different domains of sustainability. Further research is needed to explore how behavior-advocacy inconsistency effects and do-gooder derogation may differ depending on the behavior in question.
We also need to better understand how to overcome behavior-advocacy inconsistency concerns and do-gooder derogation. Recent research on advocacy for decarbonization policies finds that when advocates indicate that they have reduced their carbon footprint from a previously high footprint, credibility is restored [10]; that is, advocates are judged on their current carbon footprint rather than their past footprint. More generally, information about others changing has been shown to be inspirational [50] and to help resolve a variety of psychological barriers that prevent personal change [51]. In the advocacy context, it may also be helpful to address threats to one's self-image from comparisons to do-gooders. Specifically, if advocates indicate that they have changed and had to improve over time, they may present themselves not as perfect exemplars, but as people who have not always acted ideally, much like the audience they are addressing. Exploring the consequences of advocates disclosing that they changed may thus be a fruitful direction for future research.