Thursday, February 10, 2022
30-Country Study Prior to and During the Initial COVID-19 Wave: Fewer respondents reported physical or sexual partner violence during COVID-19 measures (7.0%) than in the period before COVID-19 measures (9.3%)
Social Desirability In Surveys: In 1966, 7% of subjects initially admitted to same-sex sexual experiences, but many later changed their answers (reaching 22%) when told that they would be given a polygraph test to detect false answers
The Influence of Social Desirability on Sexual Behavior Surveys: A Review. Bruce M. King. Archives of Sexual Behavior, Feb 10 2022. https://link.springer.com/article/10.1007/s10508-021-02197-0
Abstract: Research in fields for which self-reported behaviors can be compared with factual data reveals that misreporting is pervasive and often extreme. The degree of misreporting is correlated with the level of social desirability, i.e., the need to respond in a culturally appropriate manner. People who are influenced by social desirability tend to over-report culturally desired behaviors and under-report undesired behaviors. This paper reviews socially desirable responding in sexual behavior research. Given the very private nature of sexual activity, sex researchers generally lack a gold standard against which to compare self-reported sexual behaviors and have relied on the anonymity of participants as the methodology to assure honest answers on sexual behavior surveys. However, indirect evidence indicates that under-reporting (e.g., of number of sexual partners, receptive anal intercourse, condom use) is common. Among the general population, several studies have now reported that even with anonymous responding, there are significant correlations between a variety of self-reported sexual behaviors (e.g., use of condoms, sexual fantasies, exposure to pornography, penis size) and social desirability, with evidence that extreme under- or over-reporting is as common as is found in other fields. When asking highly sensitive questions, sex researchers should always include a measure of social desirability and take that into account when analyzing their results.
Social Desirability Responding in Sex Research
In a 1966 study using the personal interview technique, researchers found that 7% of participants initially admitted to same-sex sexual experiences, but many more later changed their answers (raising the total to 22%) when they were told that they would be given a polygraph test to detect false answers (Clark & Tifft, 1966). Same-sex sexual relations were a highly stigmatized behavior in 1966 (see Editorial, 1966).
In another early study, researchers asked women in several repeated personal interviews if they had ever engaged in anal intercourse (Bolling, 1976; Bolling & Voeller, 1987). Very few admitted to doing so in the first interview, but after repeated interviews with the same researcher (and the “development of strong trust”) nearly three-fourths admitted to having tried it at least once.
Conclusions
In a recent review, Schmitt (2017) concluded, “In the end, ample research suggests responses to sexuality surveys are … mostly truthful” (concluding paragraph). This author disagrees. For example, the CDC’s Youth Risk Behavior Survey (YRBS) is a national school-based survey of a large variety of self-reported risky behaviors among U.S. adolescents. Many researchers, including this author, have cited the results from the sexual behaviors portion of the survey. The 2015 survey has been cited over 1,420 times (Kann et al., 2016) and the 2017 survey over 1,400 times (Kann et al., 2018). However, in a study of the validity of their findings, the CDC found that students over-reported their height by an average of 2.7 inches. The misreporting was not random: only 4% of the participants under-reported their height, while 39.5% over-reported by 3 inches or more (Brener et al., 2003). Mischievous responding was evident as well: one high school student over-reported height by 16.7 inches. With many of the same students under-reporting their body weight, 12.7% under-reported their body mass index by 5 kg/m2 or more.
There is no rational reason to believe that answers on the sexual behaviors portion of the YRBS, or any other survey of self-reported sexual behaviors, are any more truthful than the YRBS’ self-reports of height. In one of the few studies in which self-reported sexual behavior was compared to the gold standard of factual information, adolescents were asked if they had experienced a sexually transmitted infection in the previous 6 months to 1 year (Clark et al., 1997). Fifty-one percent denied having had an STI, but hospital records confirmed that they had. Another 9% admitted to having had one STI during that time period, but medical records revealed multiple STIs. The results of many studies now indicate that social desirability responding in studies of self-reported sexual behaviors is as pervasive and often as extreme as is found in other research areas.
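The review's closing recommendation, to pair sensitive questions with a social desirability measure and take it into account during analysis, is straightforward to operationalize. Below is a minimal Python sketch of that kind of check on simulated data; the sample size, effect sizes, and the simple residualization step are illustrative assumptions, not the author's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey: a latent true behavior count, a standardized
# social desirability (SD) trait score, and a self-report biased
# downward for high-SD respondents (under-reporting of a stigmatized
# behavior). All quantities are simulated for illustration.
n = 500
true_count = rng.poisson(4, n)
sd_score = rng.normal(0, 1, n)                        # e.g., a Marlowe-Crowne-style score
shave = np.clip(np.round(sd_score), 0, None).astype(int)
self_report = np.clip(true_count - shave, 0, None)

# The diagnostic the review recommends: does the self-report correlate
# with social desirability? A reliable nonzero r flags biased responding.
r, p = stats.pearsonr(self_report, sd_score)
print(f"r(self-report, SD) = {r:.2f}, p = {p:.3g}")   # negative here => under-reporting

# One simple way to "take SD into account": residualize the self-report
# on the SD score before relating it to other variables.
slope, intercept = np.polyfit(sd_score, self_report, 1)
sd_adjusted = self_report - (intercept + slope * sd_score)
```

Reporting results both with and without such an adjustment makes any social desirability bias visible to readers rather than leaving it buried in the raw self-reports.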
Enjoyable experiences go stale in three distinct temporal profiles, and the patterns of "hedonic decline" are stable across time and stimuli, within an individual
Identifying the temporal profiles of hedonic decline. Jeff Galak, Jinwoo Kim, Joseph P. Redden. Organizational Behavior and Human Decision Processes, Volume 169, March 2022, 104128. https://doi.org/10.1016/j.obhdp.2022.104128
Highlights
• Hedonic decline unfolds in three distinct temporal profiles (shapes): Flat, Steady Decline, Rapid Onset Decline.
• Hedonic decline temporal profiles are stable across time and stimuli, within an individual.
• Hedonic decline temporal profiles can be explained by variation in Need for Cognition.
• Hedonic decline temporal profiles have significant downstream consequences on future consumption choice and timing.
• Understanding how hedonic decline temporal profiles unfold can be of great benefit to individuals and to organizations.
Abstract: The unfortunate reality of the human condition is that enjoyable experiences become less enjoyable with time and repetition. This hedonic decline has been well documented across a variety of stimuli and experiences. However, previous work has largely ignored the possibility that the temporal profile of hedonic decline varies at the individual level. In the present work, we first identify three temporal profiles of hedonic decline: flat, steady decline, and rapid onset decline. We next demonstrate that these temporal profiles of hedonic decline are relatively stable across both stimuli and time for any given individual. That is, a temporal profile observed for one stimulus can be used to predict the temporal profile of hedonic decline for a novel stimulus or the same stimulus at a future date. We further explore the psychological underpinnings of these differences and note that Need for Cognition, a stable personality trait, partially explains which individuals will be more likely to experience different temporal profiles. Finally, we demonstrate two important downstream consequences to these three different temporal profiles of hedonic decline: re-consumption choice and re-consumption timing. This work provides a first look into the various ways in which hedonic decline operates at an individual level and documents predictable heterogeneity in such tendencies, an important departure from previous research looking at hedonic decline in aggregate.
7. General discussion
Across five studies we demonstrate that hedonic decline tends to follow one of three distinct patterns: Rapid Onset Decline, Steady Decline, or Flat. Rapid Onset Decline is characterized by a fast initial decline in enjoyment that tapers off over time. Steady Decline is characterized by the opposite in that there is little hedonic decline at first, but then hedonic decline accelerates once a threshold is seemingly reached. Finally, Flat is characterized by little to no hedonic decline at all. These three temporal profiles consistently emerged across stimuli including food, music, art, and videos. Critically, ex ante, it is not obvious that these are the three temporal profiles that must emerge from such an investigation. Indeed, increases in enjoyment (of various temporal profiles), linear decreases in enjoyment, irregular and/or cyclical changes in enjoyment, or simply no clustering at all were all plausible as common profiles.
Instead, we consistently observed the same three temporal profiles of hedonic decline regardless of the stimuli. Critically, not only do these temporal profiles consistently appear across studies, they appear to be stable for an individual both across time and across stimuli. That is, if an individual is classified into one of these three temporal profiles, they will likely experience hedonic decline the same way both for the same stimulus sometime in the future and for a novel stimulus. This type of consistency has yet to be documented in any capacity in the literature on hedonic decline. Indeed, previous work has treated hedonic decline either as a monolith where all people experience hedonic decline the same way, or has allowed for individual variation primarily as a nuisance to statistically control for when modeling more general effects. The present work moves well beyond these prior findings by showing that people experience hedonic decline with predictable heterogeneity that is stable across time and stimuli.
This suggests that these three temporal profiles are fundamental to understanding how one experiences hedonically relevant stimuli over time. As far as we know, no fundamental theory in psychology argues that a given person should experience hedonic decline in generally the same way across stimuli. However, there is ample evidence for stable individual traits that could help account for this. Here, we explore just one such well-known trait, need for cognition (NFC; Studies 2 and 3). Of course, we expect that a multitude of other individual differences likely contribute to why a person experiences a particular type of hedonic decline. Potential candidates for future research here could include mindfulness (Bishop et al., 2004), optimal stimulation (Raju, 1980), variety seeking (van Trijp & Steenkamp, 1992), and self-control (Tangney et al., 2004), as well as many others.
In fact, in a post-hoc analysis of our study demographics, we found that older participants were more likely to exhibit a Flat pattern than a Rapid Onset Decline pattern (see Supplemental Materials for details), yet no differences emerged for gender. This is largely consistent with the notion that older individuals tend to have a lower need for stimulation, a possible correlate of hedonic decline (Kish & Busse, 1968; Raju, 1980). There are likely many other demographic and psychological differences that help explain cluster membership, and we expect future work will uncover such differences to further our understanding of the antecedents of hedonic decline. In the present work, we limit ourselves to NFC, first to document a novel antecedent to hedonic decline, and second to demonstrate that the three clusters we observe are not just random artifacts of our analytical approach. Rather, these groupings can be predicted, in part, by theory.
Finally, aside from documenting the existence and partial psychological underpinnings of three hedonic decline clusters, we show two critical downstream consequences: re-consumption behavior (Study 4a), and future consumption timing (Study 4b). For people with rapid hedonic decline (Rapid Onset Decline), the choice to re-consume a once enjoyable stimulus is decreased and delayed significantly after just a few exposures. The same is not true for those with little hedonic decline (Flat), as they are more willing to immediately re-consume a stimulus even after repeated exposure to it. In other words, in order to predict either re-consumption behavior or preference for future consumption timing for any given individual, it is not enough to know their initial enjoyment with a stimulus, nor even the number of previous consumption episodes of that stimulus. Rather, to predict re-consumption and preference for future consumption timing, one must also know which hedonic decline trajectory that person is likely to experience.
This research has important implications for our understanding of psychology in that it contributes to our growing understanding of how heterogeneity in experiences can help predict behavior at the individual level (Bolger et al., 2019). In the field of psychology, in particular, there has been limited work devoted to including heterogeneity of human experiences in theory and model development. This work demonstrates the clear importance of doing so, and is meant to be a steppingstone for those working to develop a larger theory of hedonic consumption. Critically, this work does much more than simply claim that people are different (which is largely self-evident); rather, it also identifies specific groups of people in terms of how they respond to hedonic stimuli. Our behavioral results (Studies 4a and 4b) also suggest that some people may naturally show less hedonic decline, making it easier for these Flat decliners to maintain their focus when listening to a speaker, performing a work task, or building expertise. On the other hand, these Flat decliners likely also find it more difficult to exhibit self-control at other times, such as when eating an indulgent food, playing a video game, or spending money on a shopping spree.
This research has implications for practitioners as well. A firm that can identify the hedonic decline type for a person can then use it to predict future preferences. For instance, if a music streaming service sees a person drop a particular song from a playlist after a few plays, this may indicate a Rapid Onset Decliner who needs lots of variety in the future. Alternatively, if a person has been identified as a Flat type, then they are likely to keep using a product more in the long term (suggesting a firm should invest more to acquire and keep them). Given these benefits, we expect managers will find creative ways to identify one’s hedonic decline type. Possibilities include ongoing satisfaction surveys like many retailers and fast food companies offer on the back of receipts, the ongoing ratings of episodes as one watches a streaming series, or the length of time one spends on a media site before losing interest. Likewise, profiles may be built for individuals using other general measures, such as need for cognition, age, etc.
There is also the possibility of our work informing how future researchers should approach the study of hedonic decline more generally. As noted above, most research studying hedonic responses over time assumes that all people follow a similar (linear) trajectory of hedonic decline. To the extent that our work shows this to be far from the case, there is a simple and specific prescription that all researchers should follow: ascertain whether the research question of interest varies as a function of cluster membership. That is, much work in this space uses experimental manipulations to demonstrate a shift in overall hedonic decline. A simple addition to that research approach would be to first conduct a cluster analysis as done in this paper, and then test whether any experimental manipulation varies as a function of cluster membership. In Supplemental Materials Study S5 we found that disrupting an experience slowed the rate of hedonic decline across all clusters, but this did not need to be the case. It was equally plausible that the disruption would only influence, say, the individuals in the Rapid Onset Decline cluster. For future work, we would encourage all researchers to determine whether their interventions are universally applicable, or rather apply to only some subset of individuals. At a minimum, researchers should explore modeling results with individual-level random effects for the intercept, linear, and quadratic terms, and examine the histograms of the individual estimates (see the sketch below). Doing so will yield greater insight into the underlying psychology of whatever is being studied.
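As a concrete reading of that prescription, here is a minimal Python sketch on simulated ratings: fit each person's intercept, linear, and quadratic terms, then cluster the coefficients with k-means (k = 3) and plot the histograms of the individual estimates. This two-stage version is a stand-in under stated assumptions, not the paper's actual modeling pipeline, and the simulated trajectory shapes are only loose mimics of the three profiles.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical data: an (n_people x n_trials) matrix of enjoyment
# ratings over repeated exposures, simulated here with three shapes
# loosely mimicking Flat, Steady Decline, and Rapid Onset Decline.
n_people, n_trials = 300, 15
t = np.arange(n_trials)
shapes = {
    0: np.zeros(n_trials),               # Flat: little to no decline
    1: -0.03 * t**2,                     # Steady Decline: accelerates late
    2: -3.0 * (1 - np.exp(-0.5 * t)),    # Rapid Onset: fast drop, then tapers
}
kind = rng.integers(0, 3, n_people)
ratings = 7 + np.array([shapes[k] for k in kind]) \
            + rng.normal(0, 0.5, (n_people, n_trials))

# Stage 1: per-person intercept, linear, and quadratic coefficients,
# a simple two-stage stand-in for individual-level random effects.
coefs = np.array([np.polyfit(t, y, deg=2) for y in ratings])  # [quad, lin, const]

# Stage 2: cluster the trajectory coefficients (k = 3, as in the paper)
# and examine the histograms of the individual estimates.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coefs[:, :2])
print("cluster sizes:", np.bincount(labels))
for name, col in [("quadratic", 0), ("linear", 1)]:
    plt.hist(coefs[:, col], bins=30, alpha=0.6, label=name)
plt.xlabel("individual coefficient estimate")
plt.legend()
plt.show()
```

An experimental manipulation can then be tested within each cluster (e.g., an interaction of condition with cluster membership) rather than only on the pooled sample.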
There are, of course, still some unanswered questions on which the present manuscript can only speculate. For instance, do these same hedonic decline clusters emerge for all stimuli? By design, all of our studies employed repetition of a discrete stimulus (e.g., a single song repeated, or a single type of food repeatedly consumed) to induce hedonic decline, but would we observe the same clusters for stimuli that are structurally different? For instance, videos provide a dynamic stimulus that unfolds in new ways over time. To explore this question, we ran a study in which participants watched a 13-minute nature documentary and rated their enjoyment every 30 s (without stopping consumption, via an in-experience measure). Consistent with our other studies, we again found the same three clusters of hedonic decline, even for this longer continuous experience (Supplemental Materials Study S6). Beyond continuous versus discrete, another structural difference could be the duration of the experience. In all of our studies, the experiences were relatively short-lived, lasting just minutes in totality. In contrast, other work has looked at longer, and perhaps more complex, stimuli, such as full-length movies or visits to museums (O'Brien, 2019). Indeed, such work has found little hedonic decline with repetition, which may reflect a shift in the mix of our three cluster types for longer and more complex experiences. We leave this and other related questions to future research.
There is also the question of how such clustering would unfold for negative or aversive stimuli. For instance, Nelson & Meyvis (2008) found similar results of the influence of breaks on hedonic decline for both negative and positive stimuli. This seems to suggest that people fundamentally experience similar diminishing hedonic responses to all types of stimuli, be they positive or negative. And yet, some recent work suggests that hedonic responses are not symmetric, at least in some domains. For instance, aversive experiences are much more sensitive to hedonic contrasts than positive experiences (Voichek & Novemsky, 2021). Might this mean that people experience negative stimuli fundamentally differently from positive stimuli, and thus group into different clusters than what we observed here? Or might there be less stability in clustering when considering clusters observed with a positive stimulus and then projected onto expected experiences with a negative stimulus? This too is an important question that we hope future researchers will tackle.
Going beyond consumption of stimuli, the field of hedonic adaptation has often focused on major life events. The most typical finding is that even after major events like the loss of a child or a change in employment, people’s overall hedonic experience (i.e., their well-being) returns to a set point after enough time has passed (Brickman et al., 1978; Lucas et al., 2003; Lucas et al., 2004). Our research, though robust to a variety of stimuli, is largely silent on whether similar clusters of hedonic decline will emerge for such larger-scale, longer-term experiences focused on overall well-being. That is, following the loss of a job, people tend to initially experience an extreme negative response, which then returns to their pre-job loss levels with time. But does that hedonic adaptation occur uniformly for all individuals, or rather, as in the present research, do some people experience little recovery, some recover rapidly, while others’ recovery occurs only after a prolonged period of extreme negativity? If future work documents such clusters for major life events, that would potentially allow for a stronger understanding of which types of individuals require more intense interventions following major negative life experiences to help them return to their pre-negative experience set points. After all, if some individuals experience Rapid Onset Decline (recovery, in this case), they may be less in need of clinical help than those who experience Flat or Steady Decline. Of course, for now, we can only speculate and hope that such questions will be answered with future research.
In sum, hedonic decline, though ubiquitous, is not quite as singularly determined as once believed. While some work has explored why some individuals could systematically differ in their hedonic decline (Chugani et al., 2015; Nelson & Redden, 2017; Redden & Haws, 2013), this work is limited in scope and the question remains generally understudied. Further, none of this work considered how there might be systematic patterns across all people and all domains, which is exactly what more general theories of enjoyment would require. Moreover, the responses we document are similar across a variety of stimuli, and include both repetitive and continuous consumption. We hope that our present work spurs future research both in the area of hedonic decline, as well as more broadly in the area of predictable heterogeneous psychological responses to all forms of stimuli for all types of people.
Ever wondered how grandiose narcissism is related to vulnerable narcissism in the general population? Hint: At very high levels of grandiosity you also see lots of vulnerability
The nonlinear association between grandiose and vulnerable narcissism: An individual data meta-analysis. Emanuel Jauk, Lisa Ulbrich, Paul Jorschick, Michael Höfler, Scott Barry Kaufman, Philipp Kanske. Journal of Personality, December 3 2021. https://doi.org/10.1111/jopy.12692
Abstract
Objective: Narcissism can manifest in grandiose and vulnerable patterns of experience and behavior. While the two are largely unrelated in the general population, individuals with clinically relevant narcissism are thought to display both. Our previous studies showed that trait measures of grandiosity and vulnerability were unrelated at low-to-moderate levels of grandiose narcissism, but related at high levels.
Method: We replicate and extend these findings in a preregistered individual data meta-analysis (“mega-analysis”) using data from the Narcissistic Personality Inventory (NPI)/Hypersensitive Narcissism Scale (HSNS; N = 10,519, k = 28) and the Five-Factor Narcissism Inventory (FFNI; N = 7,738, k = 17).
Results: There was strong evidence for the hypothesis in the FFNI (β[Grandiose < 1 SD] = .08, β[Grandiose > 1 SD] = .36, β[Grandiose > 2 SD] = .53), and weaker evidence in the NPI/HSNS (β[Grandiose < 1 SD] = .00, β[Grandiose > 1 SD] = .12, β[Grandiose > 2 SD] = .32). Nonlinearity increased with age but was invariant across other moderators. Higher vulnerability was predicted by elevated antagonistic and low agentic narcissism at the subfactor level.
Conclusion: Narcissistic vulnerability increases at high levels of grandiosity. Interpreted along Whole Trait Theory, the effects are thought to reflect state changes echoing in trait measures and can help to link personality and clinical models.
DISCUSSION
This study tested the nonlinearity hypothesis on the relation of narcissistic grandiosity and vulnerability using a preregistered individual data meta-analysis (mega-analysis). We observed clear evidence (moderate to large effects) for the hypothesis in the FFNI and weaker evidence (small to moderate effects) in the NPI/HSNS. Specifically, findings for the FFNI showed that there is a sizeable difference in slope (Δβ = .28) between grandiosity and vulnerability at lower versus higher levels (+1 SD) of grandiosity, and this difference becomes stronger as grandiosity further increases (Δβ = .43 at +2 SD). Complementary empirical breakpoint detection yielded an estimate in between those two criteria (+1.35 SD). The effect was not dependent upon moderators such as country of assessment, questionnaire version, or participants' sex, but was moderated by participants' age, which we elaborate on below. For the NPI/HSNS, we observed a small effect (Δβ = .12) for the hypothesized relation when comparing segments below and above +1 SD, and a moderate effect when applying a stricter criterion (Δβ = .31 at +2 SD). The empirical breakpoint estimate at +1.98 SD aligned with this latter criterion. There was no indication of heterogeneity across samples or a moderation effect, though the interaction seemed to depend on age (as for the FFNI). Taken together, these results show that there is evidence for an increase of narcissistic vulnerability at high levels of grandiosity as assessed by trait self-report scales. The differences are subtle, and their detection requires a nuanced and reliable assessment.
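For readers unfamiliar with the segmented-regression logic behind these slope contrasts and the empirical breakpoint estimates, the sketch below fits a hinge (piecewise-linear) model and grid-searches the breakpoint on simulated z-scores. It illustrates the method only; the simulated slopes and breakpoint are assumptions, not the meta-analytic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical standardized scores: vulnerability (v) is flat in
# grandiosity (g) below a breakpoint, then rises above it. The true
# breakpoint and slope are illustrative, not the paper's estimates.
n = 5000
g = rng.normal(0.0, 1.0, n)                        # grandiosity, z-scored
v = 0.4 * np.clip(g - 1.35, 0.0, None) + rng.normal(0.0, 1.0, n)

def fit_hinge(bp):
    """Fit v ~ b0 + b1*g + b2*max(g - bp, 0); return (SSE, coefficients)."""
    X = np.column_stack([np.ones(n), g, np.clip(g - bp, 0.0, None)])
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return np.sum((v - X @ beta) ** 2), beta

# Empirical breakpoint detection: grid search over candidate breakpoints,
# keeping the one with the smallest residual sum of squares.
grid = np.linspace(-1.0, 2.5, 71)
bp_hat = grid[np.argmin([fit_hinge(bp)[0] for bp in grid])]
_, beta = fit_hinge(bp_hat)
print(f"breakpoint ~ {bp_hat:+.2f} SD; "
      f"slope below = {beta[1]:.2f}, slope above = {beta[1] + beta[2]:.2f}")
```

The slope contrast (beta[2] here) plays the role of the Δβ values reported above: the change in the grandiosity-vulnerability slope once grandiosity crosses the breakpoint.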
4.1 Personality and clinical perspectives on narcissism—paradox lost?
Given the near-orthogonality of grandiose and vulnerable narcissism measures in the general population (Jauk & Kaufman, 2018; Jauk et al., 2017; Krizan & Herlache, 2018; Miller et al., 2011; Wink, 1991), personality models tend to view these two expressions of narcissism as mostly distinct traits. Conversely, clinical perspectives are more inclined to see a common ground for both (cf. Wright & Edershile, 2018), and emphasize that individuals with pathological narcissism can fluctuate between grandiose and vulnerable states (Pincus & Lukowitsky, 2010; Ronningstam, 2009). Higher state variability has also been confirmed in systematic research using different methods (Edershile & Wright, 2020; Gore & Widiger, 2016; Kanske et al., 2017; Oltmanns & Widiger, 2018). Our findings show that personality and clinical perspectives hold true for different subpopulations. While grandiose and vulnerable narcissism reflect largely orthogonal traits at low-to-moderate levels of grandiosity, they become more intertwined at higher levels (+1 SD, or top 15.9%), and substantially related at very high levels (+2 SD, or top 2.6%). This latter criterion lies within the prevalence estimates of NPD (American Psychiatric Association, 2013; Ronningstam, 2009), a personality disorder characterized by extreme grandiosity (Miller et al., 2014).
What mechanisms might drive the increasing correlation of trait measures of grandiosity and vulnerability at high levels of grandiose narcissism? Based on accumulating evidence for variation in grandiose and vulnerable states, particularly at high levels of grandiose narcissism (Edershile & Wright, 2020; Gore & Widiger, 2016; Oltmanns & Widiger, 2018), we assume that increases in trait questionnaires of vulnerability likely reflect increases of such vulnerable states or episodes in those with high levels of grandiosity. That is, to some extent, the experience of vulnerable states likely echoes in trait measures. We base this interpretation on WTT, which assumes that traits can be understood as density distributions of states (Fleeson & Jayawickreme, 2015; Jayawickreme et al., 2019), and trait scales, therefore, indicate the central tendency of intraindividual variation in experience and behavior. The highly grandiose individual might thus experience more frequent and/or more pronounced vulnerable states, which, to some extent, manifests in global self-ratings.
The nonlinear effect is specific for grandiosity and cannot be inversed (see FFNI segmented regression models). Highly vulnerable persons do not show increased grandiosity, which is in line with our previous study (Jauk & Kaufman, 2018) and research demonstrating with other methods that highly grandiose individuals show episodes of vulnerability, but not the other way around (Edershile & Wright, 2020; Gore & Widiger, 2016). However, unexpectedly, the results pattern for the NPI/HSNS deviated, in this regard, from that of the FFNI, as a positive change in slope was also observed along the HSNS distribution. While we have no clear interpretation for this result at this point, tentatively speaking, it might be that the HSNS, which has formerly also been considered a measure of “covert” narcissism (Wink, 1991), draws to some extent on hidden grandiose aspects (“I am secretly ‘put out’ or annoyed when other people come to me with their troubles, asking me for my time and sympathy”; Hendin & Cheek, 1997, p. 592). Higher scale scores might thus be accompanied by higher breakthroughs of grandiosity, so to speak. However, this speculation must remain subject to future studies, and as a whole, the results observed for the FFNI are in greater accordance with studies using different methods (Edershile & Wright, 2020; Gore & Widiger, 2016).
4.2 The nonlinear relationship through the lens of the three-factor model
Factor- and facet-level analyses for the NPI and FFNI showed that with increasing grandiose narcissism, grandiosity becomes less saturated with agentic aspects, and vulnerability becomes more saturated with antagonistic aspects. This is largely in accordance with our previous results (Jauk & Kaufman, 2018) and shows that, on the one hand, adaptive aspects of grandiosity, which could potentially counteract negative consequences (e.g., Kaufman et al., 2020), become less relevant as grandiosity increases. On the other, it shows that vulnerability is tied more strongly to antagonistic aspects, making the common core of grandiose and vulnerable aspects stronger at high levels of grandiosity (though a higher saturation of grandiosity with antagonism, as in our previous study [Jauk & Kaufman, 2018], was not evident).
To further study the interplay of different narcissism aspects directly at the three-factor level, we conducted exploratory response surface analyses, which allow us to investigate nonlinear and interactive effects of agentic and antagonistic aspects. For both the NPI and the FFNI, these showed that it is neither agentic nor antagonistic aspects alone that increase vulnerable/neurotic aspects, but a combination of the two. Specifically, agentic aspects—at least up to a certain point—seem to buffer antagonistic aspects when it comes to vulnerable/neurotic narcissism. This pattern was more clearly evident in the NPI/HSNS, where, at low levels of agentic narcissism, even mild increases in antagonistic narcissism are accompanied by increases in neurotic narcissism, whereas at high levels of agentic narcissism, antagonistic narcissism must rise further before neurotic narcissism increases. Agentic narcissism, however, retains this “protective” effect only up to an above-average level, where the relationship levels off. The FFNI results pointed in a similar direction, in that a combination of low agentic and elevated antagonistic narcissism is accompanied by higher neurotic narcissism. Here, however, we observed stronger quadratic effects, which indicate that high scores on either dimension decrease neurotic narcissism again.
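A compact illustration of what such a response surface analysis computes: regress the outcome on both predictors, their squares, and their product, then summarize the fitted surface along the congruence and incongruence lines. The sketch below uses simulated data with an assumed buffering interaction; the a1 to a4 summaries follow the standard response-surface parameterization, and nothing here reproduces the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized scores: agentic (x) and antagonistic (y)
# narcissism predicting neurotic narcissism (z). The simulated
# "buffering" interaction is an assumption for illustration only.
n = 2000
x = rng.normal(0.0, 1.0, n)
y = rng.normal(0.0, 1.0, n)
z = 0.4 * y - 0.2 * x * y - 0.1 * x + rng.normal(0.0, 1.0, n)

# Second-degree polynomial (response surface) model:
#   z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
X = np.column_stack([np.ones(n), x, y, x**2, x * y, y**2])
b, *_ = np.linalg.lstsq(X, z, rcond=None)

# Conventional surface summaries: slope and curvature along the line
# of congruence (x = y) and the line of incongruence (x = -y).
a1, a2 = b[1] + b[2], b[3] + b[4] + b[5]
a3, a4 = b[1] - b[2], b[3] - b[4] + b[5]
print(f"congruence:   slope {a1:+.2f}, curvature {a2:+.2f}")
print(f"incongruence: slope {a3:+.2f}, curvature {a4:+.2f}")
```

A negative interaction term (b4) is what a buffering pattern looks like in this framing: the effect of antagonism on the outcome weakens as agentic scores rise.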
Considering the evidence from factor correlation and response surface analyses together, we conclude that antagonistic narcissism does play a key role in explaining vulnerable/neurotic narcissism, but the absence of agentic aspects might be at least as important. Particularly those individuals who have an antagonistic interpersonal style, yet little “positive” and potentially stabilizing (even if self-aggrandizing) experiences linked to agentic narcissism, might display vulnerable/neurotic aspects of narcissism such as shame (which displayed the strongest increase in correlation with overall grandiose narcissism). Similar findings were obtained, for instance, for the absence of positive affect in the development of depression (Wood & Joseph, 2010). More generally, recent research suggested that personality disorders can be understood as emergent interpersonal syndromes (i.e., unlikely and socially problematic trait configurations; Lilienfeld et al., 2019), and the results observed here might be seen as supporting such an account to narcissism.
4.3 Normal and pathological narcissism
The results could further be seen as supporting, to some degree, the distinction between adaptive and maladaptive, or normal and pathological, expressions of narcissism. Research has long strived to delineate self-report scales of narcissism with respect to the extent to which they assess adaptive or maladaptive aspects (e.g., Ackerman et al., 2011; Pincus et al., 2009). These efforts commonly center around the identification of nomological networks as evident in validity measures, assuming linear effects of the respective scales. While these linear effects do certainly capture the most relevant general trends, it might well be the case that increasing narcissism levels are accompanied by qualitative shifts in the nomological networks. For instance, a person who behaves arrogantly in some situations, but not in others, might be quite successful in the social realm, not display signs of psychological maladjustment, and might be considered an example of adaptive/normal narcissism. In contrast, a person who behaves arrogantly in almost every situation—including those where others will certainly not tolerate it—will almost inevitably face social problems, which might unveil narcissistic vulnerability. Crucially, both of these persons can be placed on the same narcissism dimension (here: antagonistic narcissism), but in different segments of it. It is thus not necessary to assume qualitative shifts in the narcissism dimension (antagonism) itself, but different (potentially socially mediated) effects of it might manifest in differential relations with other variables, particularly narcissistic vulnerability. These might further be amplified by simultaneous changes in other aspects, most notably the absence of agentic aspects.
It is interesting to note that our findings align well with those from a large-scale study of nonlinear effects of narcissism in the workplace: Grijalva and colleagues (2015) investigated leadership qualities related to narcissism and found narcissism to be positively associated with (supervisor-rated) leadership effectiveness at moderate levels, but negatively related at high levels. As the authors stated, “increasing narcissism in the low range of the trait will lead to more adaptive manifestations of narcissism” whereas “increasing narcissism in the high range of the trait will produce maladaptive manifestations” (p. 26). The effects were not attributable to agentic aspects, but presumably more related to antagonistic aspects (though these were not directly studied), which is in line with the effects observed here.
We thus argue that the adaptiveness or maladaptiveness of inventories such as the NPI or FFNI might not only depend upon their coverage of different construct aspects, but also on the investigated range within the respective dimensions, and potentially interactions with other dimensions. Which form of narcissism might be considered normal or pathological might, from an empirical point of view, well depend upon the level of narcissism, and changes in the nomological network associated with it. We note that the correlation between grandiosity and vulnerability observed here for high levels of FFNI grandiose narcissism is well in line with the intrinsic correlation of grandiose and vulnerable subscales in the PNI—a scale designed to assess maladaptive forms of narcissism, in which the co-occurrence of grandiose and vulnerable aspects is considered vital (Pincus et al., 2009; Wright et al., 2010).
While the idea of qualitative shifts within the same dimension might conflict to some extent with our understanding of desirable psychometric characteristics and necessitate more complex analysis techniques, we believe considering this complexity may better depict the reality of individual differences. Though not very popular in personality psychology, dose–response relationships are common phenomena in science (for instance pharmacology; Tallarida & Jacob, 1979) and also everyday life (considering just the many instances where we say that we “overdid” something). They can be understood as systemic changes within self-organizing systems (e.g., Hayes et al., 2007), which seems a fruitful perspective for the study of personality (Richardson et al., 2013), and specifically personality pathology (Hopwood et al., 2015). Though we used discrete breakpoints here, we do not understand these as isomorphic representations of the empirical relations, but as probabilistic guesses of distribution points around which qualitative shifts are most likely to occur. The results are thus not meant to reflect cutoffs for maladaptive/pathological narcissism, yet, they may provide best guesses for distribution ranges where systemic changes are likely to take place.
4.4 Implications for research and practice
We wish to address three aspects that might be of relevance to narcissism research: first, the difference in slope for the FFNI depended on age to a sizeable degree, as the interaction was stronger for older individuals (though vulnerability was, on average, lower in older individuals). This might be the case because narcissistic vulnerability—even if seeded early in life (Huxley et al., 2021; Kernberg, 1975)—takes time to unfold, or to be unveiled. Someone in their early twenties—at the peak of intellectual and physical capacities, yet in many aspects still protected from the pitfalls of adult life—might, on average, not have experienced a significant amount or intensity of adverse events such as job loss or divorce, or ego-threatening developmental changes such as declines in physical performance or attractiveness. Research has confirmed that such factors do shape our personality (e.g., Specht, 2017), and they might serve as triggers of narcissistic vulnerability particularly after midlife (e.g., Goldstein, 1995). This seems even more important given that grandiose narcissism itself has been found to show longitudinal selection effects, such that those high in grandiosity have a higher likelihood of experiencing adversity (Orth & Luciano, 2015). However, cohort effects might also be at play, and future longitudinal studies will be needed to unveil the complex associations. In any case, this result underlines the necessity of studying samples that vary substantially in demographic characteristics such as age, as vulnerable aspects accompanying high grandiosity might otherwise be underestimated.
Second, the results show that considering the absolute level of grandiosity might be important when designing and interpreting studies, particularly those using select populations or extreme groups. Qualitative shifts between lower and higher grandiosity samples could at least partially explain experimentally unveiled signs of vulnerability in highly grandiose individuals, as evident for instance in neuroscience research (Jauk & Kanske, 2021). This can be effectively addressed by, on the one hand, considering the level of narcissistic grandiosity, and, on the other, by complementing designs with measures of narcissistic vulnerability (ibid). For research that aims to test threshold effects, we recommend using the empirically obtained breakpoint estimates as a priori parameters in large and diverse samples.
Third, future studies could assess mediating variables which might explain increases in vulnerability at higher levels of grandiose narcissism. From a clinical perspective, personality functioning, in terms of general self- and other-related emotional competencies, might be a prime candidate, as personality disorders in general (American Psychiatric Association, 2013), and narcissistic pathology specifically (Kernberg, 1975), are conceptualized as constellations where extreme trait expressions meet reduced functioning. Of note, self-regulatory functions (including stabilization of self-esteem) are regarded as central elements of personality functioning (American Psychiatric Association, 2013; OPD Task Force, 2008), and these might be directly relevant for explaining transitions between grandiose and vulnerable states. While personality functioning is not frequently assessed in nonclinical personality research, emotional intelligence might be used as a proxy for it (Jauk & Ehrenthal, 2021). Also, the general factor of psychopathology—closely linked to personality pathology (Oltmanns et al., 2018)—might be studied as a moderator.
For psychological practice, the findings reported here imply that clinicians working with patients who present as highly grandiose should be particularly attentive to signs of narcissistic vulnerability. While the DSM acknowledges that vulnerability can accompany grandiosity (American Psychiatric Association, 2013), the present meta-analysis of large samples from the general population provides quantitative evidence that vulnerable aspects are indeed more likely to accompany high grandiosity. Correctly identifying narcissistic vulnerability as such is important, as it is associated with a wide range of negative consequences, including suicidal ideation and behavior (e.g., Jaksic et al., 2017). However, since highly grandiose individuals tend to hide or deny vulnerable aspects (cf. Pincus et al., 2014), and, beyond that, also evoke negative reactions in their therapists (Tanzilli et al., 2015), doing so can be challenging. Seeing vulnerability in those who present as highly grandiose might be even more difficult for those without professional training, as laypeople attribute grandiose behavior to similarly grandiose motives (Koepernik et al., 2021). For an integrated understanding of narcissism, it thus seems important to raise awareness of the interplay of grandiose and vulnerable aspects in highly grandiose individuals, which we hope this study can contribute to.
Wednesday, February 9, 2022
Organic food labels: Participants showed an organic halo effect, considering an organic cookie as healthy as a conventional one even though it contained 14% more of the daily reference intake for sugar and 30% more for fat
Organic food labels bias food healthiness perceptions: Estimating healthiness equivalence using a Discrete Choice Experiment. Juliette Richetin et al. Appetite, February 9 2022, 105970. https://doi.org/10.1016/j.appet.2022.105970
Abstract: Individuals perceive organic food as being healthier and containing fewer calories than conventional foods. We provide an alternative way to investigate this organic halo effect using a method mirroring Choice Experiments, applied to healthiness judgments. In an experimental study (N = 415), we examined whether healthiness judgments toward a 200 g cookie box are affected by the organic label, nutrition information (fat and sugar levels), and price, and determined the relative importance of these attributes. In particular, we assessed whether food with an organic label could contain more fat or sugar and yet be judged to be of equivalent healthiness to food without this label, and we estimated the magnitude of any such effect. Moreover, we explored whether these effects were obtained when including a widely used system for labeling food healthiness, the Traffic Light System. Although participants' healthiness choices were mainly driven by the reported fat and sugar content, the organic label also influenced healthiness judgments. Participants showed an organic halo effect, considering the organic cookie as healthy as a conventional one despite its containing more fat and sugar. Specifically, they considered the organic cookie equivalent in healthiness to a conventional one even though it contained 14% more of the daily reference intake for sugar and 30% more for fat. These effects did not change when including the Traffic Light System. This effect of the organic label could have implications for fat and sugar intake and consequent impacts on health outcomes.
Keywords: Organic food label; Perceived healthiness; Fat intake; Sugar intake
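The equivalence figures in the abstract (14% more sugar, 30% more fat) follow from a choice model: the ratio of the organic-label coefficient to the sugar (or fat) coefficient gives the extra intake at which the label's halo is exactly offset. The sketch below illustrates that logic with a plain binary logit on attribute differences rather than the authors' full Discrete Choice Experiment specification; all data and coefficients are simulated and merely tuned so the ratios land near the reported values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical paired choices: each trial shows two cookie boxes that
# differ in organic label (0/1) and in sugar and fat content (% of the
# daily reference intake, DRI); the respondent picks the box judged
# healthier. The "true" coefficients are illustrative assumptions.
n = 4000
d_organic = rng.integers(0, 2, n) - rng.integers(0, 2, n)   # difference in label
d_sugar = rng.uniform(-30.0, 30.0, n)                       # difference in %DRI sugar
d_fat = rng.uniform(-30.0, 30.0, n)                         # difference in %DRI fat
X = np.column_stack([d_organic, d_sugar, d_fat])
true_b = np.array([0.70, -0.050, -0.023])
p_choose = 1.0 / (1.0 + np.exp(-(X @ true_b)))
choice = (rng.random(n) < p_choose).astype(float)

def neg_loglik(b):
    """Negative log-likelihood of a binary logit on attribute differences."""
    u = X @ b
    return -np.sum(choice * u - np.logaddexp(0.0, u))

b_hat = minimize(neg_loglik, np.zeros(3)).x

# Healthiness equivalence: the extra %DRI of sugar (or fat) that the
# organic label offsets before the boxes are judged equally healthy.
print(f"organic label offsets ~{-b_hat[0] / b_hat[1]:.0f} %DRI sugar "
      f"and ~{-b_hat[0] / b_hat[2]:.0f} %DRI fat")
```

The same ratio logic underlies willingness-to-pay estimates in conventional choice experiments; here the "currency" is nutrient content rather than price.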
In contrast to substance use or gambling, excessive behaviors (compulsive shopping, sex) are transient for most people, and their comparatively lower levels of chronicity call their designation as ‘addictions’ into question
Addiction chronicity: are all addictions the same? Nolan B. Gooding, Jennifer N. Williams, Robert J. Williams. Addiction Research & Theory, Feb 8 2022. https://doi.org/10.1080/16066359.2022.2035370
Abstract
Background: All addictions have a recurring nature, but their comparative chronicity has never been directly investigated. The purpose of this study is to undertake this investigation.
Method: A secondary analysis was conducted on two large scale 5-year Canadian adult cohort studies. A subset of 1,088 individuals were assessed as having either substance use disorder, gambling disorder, excessive behaviors (e.g. shopping, sex/pornography), or two or more of these designations (‘multiple addictions’) during the course of these studies. Within each dataset comparisons were made between these four groups concerning the number of waves they had their condition; likelihood of having their condition in two or more consecutive waves; and likelihood of relapse following remission.
Results: Multiple addictions had significantly greater chronicity on all measures compared to single addictions. People with an excessive behavior designation had significantly lower chronicity compared to people with gambling disorder and a tendency toward lower chronicity compared to substance use disorder. Gambling disorder had equivalent chronicity to substance use disorder in one dataset but greater chronicity in the other. However, this latter difference is likely an artifact of the different time frames utilized.
Conclusions: Having multiple addictions represents a more pervasive condition that is persistent for most individuals. Substance use disorder and gambling disorder have intermediate and roughly equivalent levels of chronicity, but with considerable individual variability: transient for some, more chronic for others. In contrast, excessive behaviors such as compulsive shopping are transient for most, and their comparatively lower levels of chronicity call their designation as ‘addictions’ into question.
Keywords: Addiction; chronicity; longitudinal; cohort; gambling; substance
By contrast to other tastes, sour taste does not appear to have been lost in any major vertebrate taxa; but for most species, sour taste is aversive: Animals, including humans, that enjoy the sour taste triggered by acidic foods are exceptional
The evolution of sour taste. Hannah E. R. Frank, Katie Amato, Michelle Trautwein, Paula Maia, Emily R. Liman, Lauren M. Nichols, Kurt Schwenk, Paul A. S. Breslin and Robert R. Dunn. Proceedings of the Royal Society B: Biological Sciences, February 9 2022. https://doi.org/10.1098/rspb.2021.1918
Abstract: The evolutionary history of sour taste has been little studied. Through a combination of literature review and trait mapping on the vertebrate phylogenetic tree, we consider the origin of sour taste, potential cases of the loss of sour taste, and those factors that might have favoured changes in the valence of sour taste—from aversive to appealing. We reconstruct sour taste as having evolved in ancient fish. By contrast to other tastes, sour taste does not appear to have been lost in any major vertebrate taxa. For most species, sour taste is aversive. Animals, including humans, that enjoy the sour taste triggered by acidic foods are exceptional. We conclude by considering why sour taste evolved, why it might have persisted as vertebrates made the transition to land and what factors might have favoured the preference for sour-tasting, acidic foods, particularly in hominins, such as humans.
(e) Consequences of sour taste preferences for hominins
Regardless of whether rotting fruits played a role in the shift of the acid preference curve in hominins, we hypothesize that the existence of acid taste preference may have strongly influenced the later relationship between hominins and rotten fruits and other rotten foods. Based on studies in the laboratory, three groups of microorganisms compete during the rot of fruits [78]: single-celled budding yeasts (most of which are from the Saccharomycetales clade of fungi), filamentous fungi (such as Penicillium), and lactic acid bacteria. While all of these organisms produce short-chain fatty acids when they ferment fruit, yeasts also tend to produce alcohol, and lactic acid bacteria produce lactic acid. Rotten fruits that become dominated by filamentous fungi can be dangerous [79]. However, rotten fruits that become dominated by yeasts and lactic acid bacteria are often ‘improved’ from the perspective of consumers. Rot due to lactic acid bacteria and yeasts often increases the caloric, free amino acid, and vitamin content of food and hence improves digestibility by breaking down fibre and plant toxins [80–84]. Therefore, in challenging nutritional environments, fruits rotted by yeasts or lactic acid bacteria likely represented a valuable food source that could increase chances of survival [4]. If the acid preference of the most recent common ancestor (MRCA), whenever acquired, allowed it to more readily consume heavily fermented fruit, or at least the subset of that fruit rotted by lactic acid bacteria, it might have been able to take advantage of a novel source of safe calories.
There is molecular evidence that the last common ancestor of gorillas, chimpanzees, and humans consumed fermented fruits. For example, a single amino acid replacement in the ADH4 gene in the lineage shared by humans and African apes resulted in a 40-fold improvement in ethanol oxidation [85]. This change would have allowed the MRCA to consume yeast-fermented fruits on the ground with higher concentrations of both ethanol and acids [85] without concomitant neurological toxicity (or drunkenness; [53]). This ability may have allowed the MRCA to survive and reproduce more effectively in nutritionally challenging, seasonal environments, particularly as climate change resulted in more fragmented and open habitats. At about the same time, the MRCA acquired a third copy of the HCA3 gene encoding G protein-coupled receptors for hydroxycarboxylic acids, such as lactic acid, produced by the fermentation of dietary carbohydrates by lactic acid bacteria [86]. While this gene is found in all great apes, it is most strongly activated in chimpanzees, gorillas, and humans, with humans exhibiting the strongest effects, suggesting that, in some form, acid-producing bacteria (and the detection of their products) played a larger role in apes than in other primates, and in humans than in non-human apes. As has been considered elsewhere, a fondness for acidic foods, particularly when combined with preferences for umami tastes, may have predisposed ancestral humans to eventual intentional control of rotting to yield more favourable outcomes, which is to say, fermentation [4,87].