Approaches to Measuring Creativity: A Systematic Literature Review. Sameh Said-Metwaly, Eva Kyndt, Wim van den Noortgate. Creativity, Vol. 4, Issue 2, 2017. https://www.degruyter.com/downloadpdf/j/ctra.2017.4.issue-2/ctra-2017-0013/ctra-2017-0013.pdf
Abstract: This paper presents a review of the literature on the measurement of creativity. Creativity definitions are discussed as a starting point for understanding the nature of this construct. The four major approaches to measuring creativity (process, person, product and press) are reviewed, pointing out commonly used instruments as well as the advantages and weaknesses of each approach. This review reveals that the measurement of creativity is an unsettled issue, and that the existing instruments purporting to measure creativity suffer from serious conceptual and psychometric shortcomings. Research gaps and suggestions for future research are discussed.
Results
From the 2,064 papers identified by the search process, 221 were selected based on screening of titles and abstracts. Among these, 152 papers met the inclusion criteria; these addressed the measurement of creativity and significant issues related to it. Four distinct approaches to measuring creativity (process, person, product and press) were identified, together with the most commonly used instruments within each approach. In the following, we first discuss definitions of creativity, pointing to their different categories. Then, we describe the approaches to measuring creativity and the advantages and weaknesses of each, with an emphasis on the psychometric properties of the most common instruments used in each approach.
Defining creativity
Creativity has proven, over the years, to be difficult to define and measure due to its complex and multidimensional nature (Barbot et al., 2011; Batey & Furnham, 2006; Cropley, 2000; Runco, 2004, 2007; Treffinger et al., 2002). Treffinger (1996) reviewed the creativity literature and presented more than 100 different definitions for this concept. Despite these different definitions, the majority of creativity studies tend to employ only a few of these definitions, whereas other studies avoid providing a definition of this construct at all (Kaufman, Plucker, & Russell, 2012; Plucker & Makel, 2010). Furthermore, researchers and educators may use the term creativity to refer to entirely different aspects, including cognitive processes, personal characteristics and past experiences (Treffinger et al., 2002). In addition, researchers sometimes use terms such as innovation, invention, imagination, talent, giftedness and intelligence interchangeably with creativity.
In general, definitions of creativity typically reflect at least one of four different perspectives: cognitive processes associated with creativity (later in this paper referred to as ‘process’), personal characteristics of creative individuals (‘person’), creative products or outcomes (‘product’) and the interaction between the creative individual and the context or environment (‘press’) (Couger, Higgins, & McIntyre, 1993; Horn & Salvendy, 2006; Rhodes, 1961; Thompson & Lordan, 1999; Zeng et al., 2011).
With regard to the process perspective, Torrance (1977), as a pioneer in creativity research, defined creativity as the process of perceiving problems or gaps in knowledge, developing hypotheses or propositions, testing and validating hypotheses and finally sharing the results. Similarly, Mednick (1962) proposed that creativity involves the process of bringing associative elements together into new combinations to meet the task requirements. Guilford (1950) suggested several factors for interpreting variations in creativity, including sensitivity to problems, fluency, flexibility, originality, synthesizing, analyzing, reorganizing or redefining, complexity and evaluating. In his Structure-of-Intellect (SOI) Model, Guilford (1975) considered creativity a form of problem solving and distinguished between two types of cognitive operations: divergent production and convergent production. Divergent production is a broad search used in open-ended problems to generate multiple logical answers or alternatives, whereas convergent production is a focused search that generates the single logically required answer to a problem for which a particular answer is needed. Guilford (1975) considered the divergent production process to be more relevant to successful creative thinking.
Focusing on the person perspective, a wide array of personal characteristics and traits have been suggested as being associated with creativity, including attraction to complexity, high energy, behavioural flexibility, intuition, emotional variability, self-esteem, risk taking, perseverance, independence, introversion, social poise and tolerance of ambiguity (Barron & Harrington, 1981; Feist, 1998; James & Asmus, 2000-2001; Runco, 2007). However, having such traits does not in itself guarantee creative achievement; intrinsic motivation also matters (Amabile, 1983). In other words, personality may be seen as related to the motivation to be creative rather than to creativity itself, with both of these being necessary for creative achievement (James & Asmus, 2000-2001). Task motivation is one of three key components in Amabile’s (1983, 1988, 1996) componential model of creativity that are necessary for creative performance, together with domain-relevant skills (including knowledge about the domain, technical skills and domain-related talent) and creativity-relevant skills (including personality characteristics and cognitive styles).
By turning the focus of defining creativity towards creative products, Khatena and Torrance (1973) defined creativity as constructing or organizing ideas, thoughts and feelings into unusual and associative bonds using the power of imagination. Gardner (1993) stated that creative individuals are able to solve problems, fashion products, or define new questions in a novel but acceptable way in a particular cultural context. Creativity is also seen as the ability to produce or design something that is original, adaptive with regard to task constraints, of high quality (Kaufman & Sternberg, 2007; Lubart & Guignard, 2004; Sternberg & Lubart, 1999), useful, beautiful and novel (Feist, 1998; Mumford, 2003; Ursyn, 2014).
Finally, regarding the press perspective, that is, the interaction between the creative person and the environment or climate, McLaren (1993) stated that creativity could not be fully understood through human endeavour without taking into account its socio-moral context and intent (James, Clark, & Cropanzano, 1999). Investigating the environment for creativity therefore requires that all the factors that promote or inhibit creativity should be taken into consideration (Thompson & Lordan, 1999). In the componential model of organizational innovation and creativity, Amabile (1988) proposed three broad environmental factors related to creativity: organizational motivation or orientation to innovate, available resources and management practices. Geis (1988) identified five factors to ensure a creative environment: a secure environment with minimum administrative or financial intervention, an organizational culture that makes it easy for people to create and discover independently, rewards for performance to support intrinsic motivation, managerial willingness to take risks in the targeted areas of creativity and providing training to enhance creativity. Several studies have indicated the impact of climate or environment variables on creative achievement (e.g. Couger et al., 1993; Paramithaa & Indarti, 2014), particularly with respect to the initial exploratory stages of creative endeavours in which individuals’ need for approval and support plays an important role in motivating their further efforts (Abbey & Dickson, 1983).
Despite these different perspectives in defining creativity, some aspects are shared by many researchers. Researchers generally agree that creativity involves the production of novel and useful responses (Batey, 2012; Mayer, 1999; Mumford, 2003; Runco & Jaeger, 2012). These two characteristics, novelty and usefulness, are widely mentioned in most definitions of creativity (Zeng, Proctor, & Salvendy, 2009), although there is still some debate about the definitions of these two terms (Batey, 2012; Batey & Furnham, 2006; Runco & Jaeger, 2012). Another area of consensus is that creativity is regarded as a multifaceted phenomenon that involves cognitive, personality and environmental components (Batey & Furnham, 2006; Lemons, 2011; Runco, 2004). As Harrington (1990, p.150) asserted “Creativity does not “reside” in any single cognitive or personality process, does not occur at any single point in time, does not “happen” at any particular place, and is not the product of any single individual”.
Women, but not men, are seen as more attractive with longer eyelashes; perceptions of health and femininity also increase with eyelash length; older women, rather than younger women, benefit the most from enhanced eyelashes
Adam, A. (2020). Beauty is in the eye of the beautiful: Enhanced eyelashes increase perceived health and attractiveness. Evolutionary Behavioral Sciences, Jan 2020. https://doi.org/10.1037/ebs0000192
Abstract: Although some aspects of physical attractiveness are specific to time and culture, other characteristics act as external cues to youth, health, and fertility. Like head hair, eyelashes change with age, and as such, they may also serve as external mating cues. In three experiments, I manipulated eyelash length in photographs of men and women and had participants rate them on attractiveness (Studies 1–3), perceived age (Studies 1–3), perceived health (Studies 2 and 3), and femininity (Study 3). The results indicate that women, but not men, are seen as more attractive with longer eyelashes; that perceptions of health and femininity also increase with eyelash length; and that older women, rather than younger women, benefit the most from enhanced eyelashes—but that longer eyelashes did not reduce perceptions of age.
How much common genetic factors account for the association between general risk-taking preferences and domain-specific risk-taking preferences, and between general risk-taking preferences and risk-taking choices in financial investments, stock market participation and business formation
Common genetic effects on risk-taking preferences and choices. Nicos Nicolaou & Scott Shane. Journal of Risk and Uncertainty, Jan 9 2020. https://link.springer.com/article/10.1007/s11166-019-09316-2
Abstract: Although prior research has shown that risk-taking preferences and choices are correlated across many domains, there is a dearth of research investigating whether these correlations are primarily the result of genetic or environmental factors. We examine the extent to which common genetic factors account for the association between general risk-taking preferences and domain-specific risk-taking preferences, and between general risk-taking preferences and risk taking choices in financial investments, stock market participation and business formation. Using data from 1898 monozygotic (MZ) and 1344 same-sex dizygotic (DZ) twins, we find that general risk-taking shares a common genetic component with domain-specific risk-taking preferences and risk-taking choices.
Discussion
Although prior research has shown that general risk preferences, domain-specific risk preferences and choices that involve risk are correlated, very little work has investigated whether these correlations were primarily the result of genetic or environmental factors. This study showed that the correlations between general risk preferences, domain-specific risk preferences, financial investment choices, stock market participation, and business formation choices are partially the result of genetic factors.
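Conceptually, the kind of decomposition behind this claim can be illustrated with a Falconer-style comparison of identical (MZ) and fraternal (DZ) twin correlations. The sketch below is a minimal illustration only: the correlations are hypothetical, not the paper's estimates, and the study itself presumably relies on formal biometric modeling rather than this shortcut.

```python
# Illustrative Falconer-style ACE decomposition from twin correlations.
# Correlations below are made up for illustration, not taken from the paper.

def falconer_ace(r_mz, r_dz):
    """Approximate A (additive genetic), C (shared env), E (non-shared env)."""
    a2 = 2 * (r_mz - r_dz)   # heritability: twice the MZ-DZ gap
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return a2, c2, e2

# Hypothetical within-trait twin correlations for a risk-taking measure
a2, c2, e2 = falconer_ace(r_mz=0.45, r_dz=0.22)
print(f"A={a2:.2f}, C={c2:.2f}, E={e2:.2f}")  # A=0.46, C=-0.01, E=0.55

# A shared-environment (C) estimate near zero is what "all environmental
# influences were non-shared" looks like in this framework.
```

The same logic extends to cross-trait (bivariate) correlations, where comparing cross-twin cross-trait correlations in MZ versus DZ pairs indicates how much of the phenotypic correlation is genetically mediated.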
Human beings may have evolved into different types: Those whose genetic composition predisposes them to high-risk-high-return choices and those whose genetic composition predisposes them to low-risk-low-return choices. Just as our ancestors chose between hunting and gathering in part because they had innate predispositions toward risk tolerance or risk aversion, today’s humans might choose between low-risk-low-return and high-risk-high-return occupations and investment strategies.
We posit that the common genetic component to these preferences leads to correlated behaviors among people. Genetic factors account for part of the covariance between general risk preferences and domain-specific risk preferences, between general risk preferences and financial investment choices, between general risk preferences and stock market participation, and between general risk preferences and the choice of entrepreneurship as an occupation. People who are more risk tolerant are more likely to invest in stocks, make riskier financial choices and choose risky occupations, in part, because of the biological processes underlying their behavior.
Our study contributes to a biosocial perspective on risk taking. Domain-specific risk preferences have a non-trivial genetic component. In addition, financial investment choices, the choice to become an entrepreneur, and stock market participation have a sizeable genetic component. These patterns suggest that cross-sectional differences in the preference for risk and risk-taking behavior emerge naturally in a society as a function of the distribution of genetic composition (Karlsson Linnèr et al. 2019).
These results have interesting implications for those who examine risk taking. Parent-child similarity in risk taking, a commonly found correlation, may not result from cultural transmission as much as from genetic factors. While our findings do not negate the significance of environmental factors, they show that genetic influences cannot be ignored.
In addition, our results show that all of the environmental influences in risk taking were of the non-shared variety. This suggests that differential experiences outside the family, such as work environment and work colleagues, are more important for risk taking than shared environmental factors such as parental education and shared family rules and upbringing.
Our analysis suggests that a non-trivial fraction of the correlation between risk-taking behaviors results from innate factors. Because the ways to enhance those behaviors vary depending on the levels of genetic and environmental correlations, our results suggest that researchers need to think more carefully about how interventions might be used to increase risk-taking behavior. Even if two variables display a phenotypic correlation, interventions that increase one are unlikely to increase the other unless the correlation is largely environmental. Our results showed that a greater fraction of the correlation between general risk-taking preference and stock market participation was genetic than of the correlation between general risk-taking preference and the tendency to be an entrepreneur. Because the entrepreneurship correlation is therefore relatively more environmental, efforts to increase entry into entrepreneurship by changing risk preferences through education may prove more effective than efforts to increase stock market participation through educationally induced shifts in general risk preferences.
In addition, our study has implications for molecular genetics research in risk-taking behavior. Because common genetic factors account for a sizeable portion of the correlation between risk-taking preferences and choices in different domains, genes associated with those preferences and choices in one domain are plausible candidate genes for molecular genetics studies of risk-taking preferences and choices in other domains. These genes may also be influential for identifying gene-environment interactions in risk-taking.
It is crucial to stress that our study does not contend that genes determine risk taking behavior. As Johnson et al. (2009) argue, “even highly heritable traits can be strongly manipulated by the environment, so heritability has little if anything to do with controllability” (p. 218). Genes may only predispose some people and not others to develop risk taking preferences and choices. Thus, it is imperative for future research to understand the role that genes play in concert with contextual and environmental factors.
Our study has several limitations. Approximately 92% of the sample is female, hindering our ability to generalize our results to males. If women are less risk-taking than men, the range of our findings might be restricted when applied to males. While we have no reason to believe that genetic factors would only influence the correlation between risk-taking behavior in women and not men, we cannot show the generalizability of our findings across gender either.
Moreover, as in all twin studies, our analysis assumes that there is no assortative mating. Assortative mating, in which individuals have children with partners who are genetically similar to them, increases the probability that children of similar parents receive more similar gene variants for some attributes than children of “non-similar” parents. Because assortative mating increases the genetic similarity between fraternal twins but not between identical twins (Guo 2005), it leads twin studies to underestimate heritability (Plomin et al. 2008). Because we do not know whether there is parental assortative mating with respect to risk-taking preferences and choices, we must caution that our findings could be biased downward and underrepresent the common genetic component of risk-taking.
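To see the direction of this bias, consider a toy Falconer-style calculation in which mate similarity inflates only the DZ correlation while leaving the MZ correlation unchanged; all values are hypothetical.

```python
# Hypothetical illustration of the assortative-mating bias discussed above:
# if the DZ correlation is inflated but the MZ correlation is not, the
# Falconer estimate 2 * (rMZ - rDZ) shrinks, i.e. heritability is understated.

def falconer_h2(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

print(round(falconer_h2(0.45, 0.22), 2))  # 0.46  no assortative mating
print(round(falconer_h2(0.45, 0.28), 2))  # 0.34  DZ correlation inflated -> h2 biased downward
```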
Furthermore, any violation of the equal environments assumption (EEA) may also affect the robustness of our findings. If environmental factors behave towards identical twins more similarly than towards fraternal twins with respect to risk-taking preferences or choices, the validity of the EEA would be challenged. While we have no reason to believe that this would be the case, we do not have the evidence to empirically verify the validity of the EEA in our study.
In addition, our results may be affected by measurement error. Beauchamp et al. (2017) found that measurement-error-adjusted estimates of heritability were considerably higher than the non-adjusted estimates. They conjecture that “once measurement error is controlled for, the heritability of most economic attitudes will approach that of the ‘Big Five’ in personality research” (Beauchamp et al. 2017: 231). This suggests that the heritability estimates for our risk taking variables may be conservative.
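A small, purely illustrative calculation shows why measurement error makes twin-based heritability estimates conservative; the reliability figure and correlations below are assumptions for the example, not values from the paper.

```python
# Classical attenuation: unreliable measures shrink observed twin correlations,
# and the shrinkage propagates into the heritability estimate.
reliability = 0.70                 # assumed reliability of a risk-preference measure
true_r_mz, true_r_dz = 0.50, 0.25  # hypothetical error-free twin correlations

obs_r_mz = reliability * true_r_mz  # observed correlations are attenuated
obs_r_dz = reliability * true_r_dz

h2_true = 2 * (true_r_mz - true_r_dz)       # 0.50
h2_observed = 2 * (obs_r_mz - obs_r_dz)     # 0.35, i.e. too low
h2_corrected = h2_observed / reliability    # disattenuation recovers 0.50
print(round(h2_true, 2), round(h2_observed, 2), round(h2_corrected, 2))
```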
Finally, our analysis says nothing about the specific genetic mechanism involved in risk taking preferences and choices. Our results are consistent with the proposition that people with different genotypes select into different environments for risk-taking, as well as the proposition that genes themselves have a proximal effect on risk-taking preferences and choices. Moreover, we cannot know from these results what genes are involved in risk-taking preferences and choices or how many genes influence the observed outcomes.
We conclude by strongly encouraging additional research on the genetics of risk-taking. Considering the complementary role that biology plays in accounting for risk-taking is important lest we limit our ability to explain this important phenomenon. While most social scientists are comfortable exploring the role of environmental factors, they are less comfortable looking at the part that genetics plays. But, as Song, Li, & Wang (2015) have stressed, the need to account for more of the variance in work-related behaviour suggests that the role of genetics should be more carefully considered.
Women: Socioeconomic status negatively correlated with subjective orgasm experience
Factors Associated with Subjective Orgasm Experience in Heterosexual Relationships. Ana Isabel Arcos-Romero & Juan Carlos Sierra. Journal of Sex & Marital Therapy, Jan 9 2020. https://doi.org/10.1080/0092623X.2019.1711273
Abstract: The main objective of this study was to determine the predictive capacity of different variables, organized based on Ecological theory (i.e., personal, interpersonal, social, and ideological), in the intensity of the subjective orgasm experience within the context of heterosexual relationships. The sample was composed of 1,300 adults (547 men, 753 women). The proposed model for men showed that more intense subjective orgasm experience was predicted by age, sexual sensations seeking, sexual satisfaction, and partner-focused sexual desire. The model for women showed that more intense subjective orgasm experience was predicted by age, erotophilia, sexual sensation seeking, partner-focused sexual desire, and sexual satisfaction.
No differences in cognitive or structural MRI measures among those who reported, on average, 5.4 hours, 6.2 hours, 7.0 hours, and 7.9 hours of sleep per night over 5 timepoints spanning 28 years
Sleep duration over 28 years, cognition, gray matter volume, and white matter microstructure: a prospective cohort study. Jennifer Zitser et al. Sleep, zsz290, January 6 2020. https://doi.org/10.1093/sleep/zsz290
Abstract
Study Objectives: To examine the association between sleep duration trajectories over 28 years and measures of cognition, gray matter volume, and white matter microstructure. We hypothesize that consistently meeting sleep guidelines that recommend at least 7 hours of sleep per night will be associated with better cognition, greater gray matter volumes, higher fractional anisotropy, and lower radial diffusivity values.
Methods: We studied 613 participants (age 42.3 ± 5.03 years at baseline) who self-reported sleep duration at five time points between 1985 and 2013, and who had cognitive testing and magnetic resonance imaging administered at a single timepoint between 2012 and 2016. We applied latent class growth analysis to estimate membership into trajectory groups based on self-reported sleep duration over time. Analysis of gray matter volumes was carried out using FSL Voxel-Based-Morphometry and white matter microstructure using Tract Based Spatial Statistics. We assessed group differences in cognitive and MRI outcomes using nonparametric permutation testing.
Results: Latent class growth analysis identified four trajectory groups, with an average sleep duration of 5.4 ± 0.2 hours (5%, N = 29), 6.2 ± 0.3 hours (37%, N = 228), 7.0 ± 0.2 hours (45%, N = 278), and 7.9 ± 0.3 hours (13%, N = 78). No differences in cognition, gray matter, and white matter measures were detected between groups.
Conclusions: Our null findings suggest that current sleep guidelines that recommend at least 7 hours of sleep per night may not be supported in relation to an association between sleep patterns and cognitive function or brain structure.
Keywords: aging, cognition, gray matter, sleep, white matter
Statement of Significance: Up to a third of adults report between 6 and 7 hours of sleep per night, and thus fail to meet sleep guidelines which recommend at least 7 hours of sleep per night. Although extreme short sleep (e.g. ≤5 hours per night) has repeatedly been associated with poor cognitive health, it is currently unclear if such relationships subsist for more moderate short-sleep durations. We found no differences in cognitive or structural MRI measures between groups that reported, on average, 5.4 hours, 6.2 hours, 7.0 hours, and 7.9 hours sleep per night over 5 timepoints spanning 28 years. If replicated with longitudinal markers of cognitive health, such null results could challenge the suitability of current sleep guidelines on cognitive outcomes.
Discussion
The aim of this study was to examine sleep duration trajectories over a 28-year period and their relationship with measures of cognition, gray matter volume, and white matter microstructure. We hypothesized that consistently meeting recommendations for sleep duration (i.e. self-reporting a minimum of 7 hours sleep per night) would be favorably associated with cognition, gray matter volume and white matter microstructure, compared with consistently not meeting the guidelines or transitioning in and out of the guidelines over time. In contrast to our hypotheses, our results did not show any differences in cognitive measures, gray matter volume, or white matter microstructure between different sleep trajectory groups.
To the best of our knowledge, only one study has previously applied latent class growth modeling to examine trajectories of self-reported sleep duration over time within an adult population. In a study of 8,673 Canadian adults, Gilmour et al. identified four sleep trajectory groups with intercepts of 5.57 hours (11% of participants), 6.68 hours (49%), 7.65 hours (37%), and 8.34 hours (2%) [36]. We also identified four trajectory groups, with intercepts of 5.54 hours (5% of participants), 6.57 hours (37%), 6.95 hours (45%), and 7.85 hours (13%). With regard to shape, the trajectories identified in both studies displayed limited meaningful change over time. For example, average sleep duration differed by less than an hour between timepoints in all groups—indicating that extreme increases or decreases in sleep duration over time are limited in prevalence. Given that our studies differ both in terms of demographics and methods (e.g. sleep duration was assessed over an 8-year time period in Gilmour et al. [36], compared with over 28 years in our study), it is encouraging that our results are broadly complementary.
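As a rough illustration of what grouping sleep trajectories involves, the sketch below clusters simulated 5-timepoint sleep-duration trajectories into four classes. Latent class growth analysis proper is usually fit with specialized software (e.g. Mplus or the R package lcmm); k-means on raw trajectories is only a crude stand-in, and the data are simulated, not the cohort's.

```python
# Crude stand-in for latent class growth analysis: cluster simulated
# self-reported sleep-duration trajectories (5 timepoints per person).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, timepoints = 613, 5
# Simulate 4 groups with mean durations roughly like the paper's classes
means = np.array([5.4, 6.2, 7.0, 7.9])
labels_true = rng.choice(4, size=n, p=[0.05, 0.37, 0.45, 0.13])
sleep = means[labels_true, None] + rng.normal(0, 0.3, size=(n, timepoints))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sleep)
for k in range(4):
    members = km.labels_ == k
    print(f"class {k}: n={members.sum()}, mean sleep = {sleep[members].mean():.1f} h")
```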
Of particular note within our study is that 37% of participants were included in a group with an average sleep duration of 6.2 hours. Guidelines published by the American Academy of Sleep Medicine and the Sleep Research Society state that “adults should sleep 7 or more hours per night on a regular basis to promote optimal health” [4]. In addition, the National Sleep Foundation’s guidelines posit that 7–9 hours sleep per night is “recommended” for health and well-being, with less than 6 hours sleep “not recommended” [3]. In these guidelines, 6 hours sleep per night falls in a somewhat gray area between these two categories, and is classified as “may be appropriate.” Relevant to the revision of such guidelines, our study found no evidence to suggest that consistently reporting approximately 6 hours sleep per night is associated with adverse cognitive and MRI outcomes.
Such null findings, however, do not necessarily indicate that sleep duration is not important to cognitive health. Rather, our null findings may reflect the limited number of participants reporting extremes in sleep duration within our sample. At each phase, between 92% and 96% of participants reported 6, 7, or 8 hours sleep per night (Supplementary Material Table S5). The group with the shortest sleep duration in our study contained just 5% of participants and had a mean sleep duration of 5.4 hours. The group with the longest sleep duration, which contained 13% of participants, had a mean sleep duration of 7.87 hours—a value that falls within guidelines for recommended sleep durations. Change in sleep duration was also limited within our sample—between 89% and 94% of participants reported change in sleep duration of 0–1 hour between the baseline and subsequent data waves (Supplementary Material Tables S6–S7), and sleep trajectories remained relatively stable over time overall. Significant group differences for cognitive and MRI measures may only become apparent with larger samples of more extreme sleep durations, groups that have often been the focus of cognitive studies to date. For example, in a meta-analysis that reported significant associations between sleep duration and cognitive outcomes, the most common category for short-sleep duration was 5 hours or less (ranging from <4 to ≤6.5 hours), and the most common category for long-sleep duration was 9 hours or more (ranging from ≥8 to ≥11 hours) [5]. Furthermore, although few studies have examined the change in sleep duration over time, Devore et al. reported that women whose sleep duration changed by 2 hours or more in any direction had worse cognitive outcomes compared with women with no change in sleep duration [8]. In addition, previous studies based on the entire Whitehall II cohort found that adverse changes in sleep duration are associated with poorer cognitive function [6]. There are many reasons that our findings may diverge, despite overlapping samples. These include differences in sample size (5,431 vs 613; thereby impacting on power to assess extremes in sleep duration and change), characteristics (see Supplementary Material Table S8 for comparison), number of assessments of sleep duration (2 vs 5 times), and cognitive test battery administered. Therefore, our study does not contradict the hypothesis that extreme short sleep, extreme long sleep, or extreme changes in sleep duration are associated with adverse outcomes; rather, it indicates that such groups may not be well represented in small population-based samples.
An alternative explanation for our null results is that it is not sleep duration alone that is associated with cognitive health in aging, but rather a combination of sleep quality and quantity. Indeed, in an overlapping sample, we have previously published that poor sleep quality is associated with reduced FA and increased RD within fronto-subcortical regions [15]. In an exploratory post hoc analysis, we divided the 6-hour and 7-hour groups into poor and good sleep quality groups dependent upon their PSQI score at the time of the MRI scan (due to limited sample size, we did not include 5-hour and 8-hour groups in this analysis) (Supplementary Material: Text S2, Table S9). F tests showed significant group differences for global FA and voxel-wise RD. The 6-hour good sleep quality group displayed higher global FA and reduced RD in widespread tracts, compared with both the 6-hour poor sleep quality group and 7-hour poor sleep quality group (Figure S2). The 6-hour good sleep quality group also displayed reduced RD compared with the 7-hour good sleep quality group in the corpus callosum (Figure S2). These results indicate that the combination of sleep quality and quantity may be more sensitively related to measures of cognitive health in aging than duration alone. However, it is critical to stress that our measures of sleep quality and quantity are not directly comparable (e.g. quality was measured using a 17-item questionnaire at a single timepoint, duration was measured using a single-item questionnaire at five timepoints). Therefore, these post hoc results should be considered exploratory and require independent replication.
Our study has a number of strengths, including the availability of sleep duration data at five points spanning 28 years prior to cognitive and MRI assessment, which allowed us to examine sleep trajectories over time. A major limitation of our analysis was the reliance on a single-item self-report of sleep duration, in which participants could only report their sleep durations in discrete categories (i.e. “5 hours or less,” “6 hours,” “7 hours,” “8 hours,” or “9 hours or more”). Sleep duration may be more sensitively measured if sleep were measured in hours and minutes and if there were no lower or upper thresholds for sleep duration. A further limitation is that participants were asked to report their sleep duration only on an “average week night.” The discrepancy between weeknight and weekend sleep duration is common in working-age populations and there is debate regarding whether long weekend sleep can compensate for short weeknight sleep for health outcomes such as mortality [37], weight, and insulin sensitivity [38]. In the Whitehall II study, the agreement between self-reported and accelerometer-measured total sleep duration was slightly higher on weekdays (kappa = 0.37, 95% CI 0.34–0.40) than on weekend days (kappa = 0.33, 95% CI 0.31–0.36) [39]. Further research is needed to examine the long-term effects of weekend recovery sleep on cognition. Sleep duration is also often overestimated in self-report compared to objective measures [40–42]. For example, within the Sleep Heart Health Study of 2,113 adults at a mean age of 67 years, morning self-estimated sleep time and total sleep time measured by polysomnography were estimated as 379 and 363 minutes, respectively, with a weak correlation of r = 0.16 between the measures [41]. Self-reported total sleep duration and sleep duration assessed using a wrist-worn accelerometer were moderately related in the Whitehall II study of 4,094 adults aged 60–83 (kappa 0.39, 95% CI 0.36–0.42) [39]. Importantly, differences between measurements may impact upon observed relationships with cognitive outcomes. For example, the Sleep Study of the National Social Life, Health, and Aging Project (NSHAP), a nationally representative cohort of older US adults (2010–2015), found that actigraphic measures of sleep disruption were associated with worse cognition and higher odds of 5-year cognitive decline, but there was no association for self-reported sleep [43]. As self-reported measures of sleep duration correlate well with daily sleep diaries [44], are the mainstay of population-based cohort studies, and are the focus of sleep guidelines, further studies using both objective and self-report measures of sleep duration to examine cognitive and MRI outcomes are needed. Furthermore, our findings have limited generalizability, given that the Whitehall participants have relatively high educational attainment, which might contribute to increased cognitive reserve. As a result, we may have underestimated the long-term neurocognitive effects associated with unfavorable sleep patterns in subpopulations of low educational attainment, who may be more sensitive to the detrimental health effects of sleep disturbance. We were also unable to rule out potential selection bias; it is plausible that those with the greatest cognitive decline and/or most extreme sleep durations were less likely to return for a follow-up assessment.
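For readers unfamiliar with the agreement statistic cited above, the snippet below computes Cohen's kappa for two raters of categorical sleep durations. The category labels and ratings are invented for illustration; they are not the Whitehall II data.

```python
# Minimal example of Cohen's kappa between self-reported and
# accelerometer-derived sleep-duration categories (toy data).
from sklearn.metrics import cohen_kappa_score

self_report   = ["<6h", "6-7h", "7-8h", "7-8h", "6-7h", ">8h"]
accelerometer = ["<6h", "<6h",  "7-8h", "6-7h", "6-7h", "7-8h"]

# Kappa corrects raw agreement for agreement expected by chance.
print(round(cohen_kappa_score(self_report, accelerometer), 2))  # ~0.31 for these toy labels
```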
Targets feel less close to communicators who hide their successes (targets infer paternalistic motives when communicators hide their success, which leads targets to feel insulted); sharing success increases closeness, despite also triggering envy
Roberts, Annabelle, Emma Levine, and Ovul Sezer. 2020. “Hiding Success.” PsyArXiv. January 9. doi:10.31234/osf.io/6g3ez
Abstract: Self-promotion is common in everyday life. Yet, across seven studies (N = 1,672) examining a broad range of personal and professional successes, we find that individuals often hide their successes from others and that such hiding has harmful relational consequences. We document these effects among close relational partners, strangers, and within hypothetical relationships. In Study 1, we find that targets feel less close to and more insulted by communicators who hide rather than share their successes. Conversely, sharing success increases closeness, despite also triggering envy. In Study 2, we find that hiding is more costly than sharing success, even when the target does not learn about the act of hiding. That is, hiding success harms relationships both when the success is eventually discovered and when it is not. In Studies 3 and 4, we explore the mechanism underlying these interpersonal costs: Targets infer that communicators have paternalistic motives when they hide their success, which leads them to feel insulted. Studies 5 and 6 explore this mechanism in greater detail by documenting the contextual cues that elicit inferences of paternalistic motives. While a large body of existing research highlights the negative consequences of sharing one’s accomplishments with others, our research demonstrates that sharing is often superior to hiding. In doing so, we shed new light on the consequences of paternalism and the relational costs of hiding information in everyday communication.
Decreasing human body temperature in the United States since the industrial revolution
Decreasing human body temperature in the United States since the industrial revolution. Myroslava Protsiv, Catherine Ley, Joanna Lankester, Trevor Hastie, Julie Parsonnet.
eLife 2020;9:e49555, Jan 7, 2020, doi: 10.7554/eLife.49555
Abstract: In the US, the normal, oral temperature of adults is, on average, lower than the canonical 37°C established in the 19th century. We postulated that body temperature has decreased over time. Using measurements from three cohorts--the Union Army Veterans of the Civil War (N = 23,710; measurement years 1860–1940), the National Health and Nutrition Examination Survey I (N = 15,301; 1971–1975), and the Stanford Translational Research Integrated Database Environment (N = 150,280; 2007–2017)--we determined that mean body temperature in men and women, after adjusting for age, height, weight and, in some models date and time of day, has decreased monotonically by 0.03°C per birth decade. A similar decline within the Union Army cohort as between cohorts, makes measurement error an unlikely explanation. This substantive and continuing shift in body temperature—a marker for metabolic rate—provides a framework for understanding changes in human health and longevity over 157 years.
Discussion (links and full text at the original paper above)
In this study, we analyzed 677,423 human body temperature measurements from three different cohort populations spanning 157 years of measurement and 197 birth years. We found that men born in the early 19th century had temperatures 0.59°C higher than men today, with a monotonic decrease of 0.03°C per birth decade. Temperature has also decreased in women, by 0.32°C since the 1890s, at a similar rate of decline (0.029°C per birth decade). Although one might posit that the differences among cohorts reflect systematic measurement bias due to the varied thermometers and methods used to obtain temperatures, we believe this explanation to be unlikely. We observed a similar temporal change within the UAVCW cohort—in which measurements were presumably obtained irrespective of the subject's birth decade—as we did between cohorts. Additionally, we saw a comparable magnitude of difference in temperature between the two modern cohorts, which used thermometers that would be expected to be similarly calibrated. Moreover, biases introduced by the method of thermometry (axillary presumed in a subset of UAVCW vs. oral for the other cohorts) would tend to underestimate change over time, since axillary values typically average one degree Celsius lower than oral temperatures (Sund-Levander et al., 2002; Niven et al., 2015). Thus, we believe the observed drop in temperature reflects physiologic differences rather than measurement bias. Other findings in our study—for example, increased temperature at younger ages, in women, with increased body mass and with later time of day—are consistent with a wealth of other studies dating back to the time of Wunderlich (Wunderlich and Seguin, 1871; Waalen and Buxbaum, 2011).
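As a quick back-of-the-envelope check of these figures (our own arithmetic, not the paper's model), a linear decline of about 0.03°C per birth decade accumulated over the roughly 197 birth years spanned by the cohorts comes to approximately the 0.59°C difference reported for men:

# Consistency check (illustration only): per-decade trend times span in decades.
rate_per_decade = 0.03        # reported trend, degrees C per birth decade
birth_year_span = 197         # birth years spanned by the three cohorts
expected_drop = rate_per_decade * (birth_year_span / 10)
print(f"expected cumulative decline ~ {expected_drop:.2f} C")   # ~0.59 C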
Resting metabolic rate is the largest component of a typical modern human’s energy expenditure, comprising around 65% of daily energy expenditure for a sedentary individual (Heymsfield et al., 2006). Heat is a byproduct of metabolic processes, which is the reason nearly all warm-blooded animals maintain temperatures within a narrow range despite drastic differences in environmental conditions. Over several decades, studies examining whether metabolism is related to body surface area or body weight (Du Bois, 1936; Kleiber, 1972) ultimately converged on weight-dependent models (Mifflin et al., 1990; Schofield, 1985; Nelson et al., 1992). Since US residents have increased in mass since the mid-19th century, we would correspondingly have expected body temperature to increase. Thus, we interpret our finding of a decrease in body temperature as indicative of a decrease in metabolic rate independent of changes in anthropometrics. A decline in metabolic rate in recent years is supported in the literature comparing modern experimental data to those from 1919 (Frankenfield et al., 2005).
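To make the notion of a weight-dependent model concrete, the sketch below shows the Mifflin-St Jeor equation (Mifflin et al., 1990), one of the models cited above; the input values are illustrative only, and the point is simply that predicted resting metabolic rate rises with body mass, which is why heavier modern bodies would, on their own, have predicted higher rather than lower temperatures.

def mifflin_st_jeor_rmr(weight_kg, height_cm, age_years, sex):
    # Resting metabolic rate in kcal/day (Mifflin et al., 1990).
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + (5 if sex == "male" else -161)

# Illustrative values: a 20 kg increase in weight raises predicted RMR by 200 kcal/day.
print(mifflin_st_jeor_rmr(70, 175, 40, "male"))   # ~1599 kcal/day
print(mifflin_st_jeor_rmr(90, 175, 40, "male"))   # ~1799 kcal/day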
Although there are many factors that influence resting metabolic rate, a change in population-level inflammation seems the most plausible explanation for the observed decrease in temperature over time. Economic development, improved standards of living and sanitation, decreased chronic infections from war injuries, improved dental hygiene, the waning of tuberculosis and malaria infections, and the dawn of the antibiotic age together are likely to have decreased chronic inflammation since the 19th century. For example, in the mid-19th century, 2–3% of the population would have been living with active tuberculosis (Tiemersma et al., 2011). This figure is consistent with the UAVCW Surgeons' Certificates, which reported 737 cases of active tuberculosis among 23,757 subjects (3.1%). That UAVCW veterans who reported either current tuberculosis or pneumonia had a higher temperature (0.19°C and 0.03°C, respectively) than those without infectious conditions supports this theory (Supplementary file 1). Although we would have liked to compare our modern results to those from a location with a continued high risk of chronic infection, we could identify no such database that included temperature measurements. However, a small study of healthy volunteers from Pakistan—a country with a continued high incidence of tuberculosis and other chronic infections—found temperatures more closely approximating the values reported by Wunderlich (mean, median and mode, respectively, of 36.89°C, 36.94°C, and 37°C) (Adhi et al., 2008).
Reduction in inflammation may also explain the continued drop in temperature observed between the two more modern cohorts: NHANES and STRIDE. Although many chronic infections had been conquered before the NHANES study, some—periodontitis as one example (Capilouto and Douglass, 1988)—continued to decrease over this short period. Moreover, the use of anti-inflammatory drugs, including aspirin (Luepker et al., 2015), statins (Salami et al., 2017) and non-steroidal anti-inflammatory drugs (NSAIDs) (Lamont and Dias, 2008), increased over this interval, potentially reducing inflammation. NSAIDs have been specifically linked to blunting of body temperature, even in normal volunteers (Murphy et al., 1996). In support of declining inflammation in the modern era, a study of NHANES participants demonstrated a 5% decrease in abnormal C-reactive protein levels between 1999 and 2010 (Ong et al., 2013).
Changes in ambient temperature may also explain some of the observed change in body temperature over time. Maintaining constant body temperature despite fluctuations in ambient temperature consumes up to 50–70% of daily energy intake (Levine, 2007). Resting metabolic rate (RMR), for which body temperature is a crude proxy, increases when the ambient temperature decreases below or rises above the thermoneutral zone, that is, the environmental temperature at which humans can maintain normal body temperature with minimum energy expenditure (Erikson et al., 1956). In the 19th century, homes in the US were irregularly and inconsistently heated and never cooled. By the 1920s, however, heating systems reached a broad segment of the population, with mean night-time temperature continuing to increase even in the modern era (Mavrogianni et al., 2013). Air conditioning is now found in more than 85% of US homes (US Energy Information Administration, 2011). Thus, the amount of time the population has spent within the thermoneutral zone has markedly increased, potentially causing a decrease in RMR and, by analogy, in body temperature.
Some factors known to influence body temperature were not included in our final model because of missing data (ambient temperature and time of day) or a complete lack of information (dew point) (Obermeyer et al., 2017). Adjusting for ambient temperature, however, would likely have amplified the changes over time, given the lack of heating and cooling in the earlier cohorts. The time of day at which measurements were conducted had a more substantial effect on temperature (Figure 1—figure supplement 4). Based on the distribution of measurement times available to us in STRIDE and NHANES, we estimate that even in the worst-case scenario, that is, if the UAVCW measurements were all obtained late in the afternoon, adjustment for time of day would have only a small influence (<0.05°C) on the −0.59°C change over time.
In summary, normal body temperature is assumed by many, including a great preponderance of physicians, to be 37°C. Those who have shown this value to be too high have concluded that Wunderlich’s 19th century measurements were simply flawed (Mackowiak, 1997; Sund-Levander et al., 2002). Our investigation indicates that humans in high-income countries have changed physiologically over the last 200 birth years, with a mean body temperature 1.6% lower than in the pre-industrial era. The role that this physiologic ‘evolution’ plays in human anthropometrics and longevity is unknown.
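Reading the 1.6% figure as the roughly 0.59°C decline expressed as a fraction of the canonical 37°C (our interpretation; the paper does not spell out the arithmetic):

# Rough check of the reported 1.6% figure under the assumption above.
decline_c = 0.59      # approximate decline in mean body temperature, degrees C
reference_c = 37.0    # canonical 19th-century value, degrees C
print(f"{decline_c / reference_c:.1%}")   # ~1.6%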