Approaches to Measuring Creativity: A Systematic Literature Review. Sameh Said-Metwaly, Eva Kyndt, Wim van den Noortgate. Creativity, Vol. 4, Issue 2, 2017. https://www.degruyter.com/downloadpdf/j/ctra.2017.4.issue-2/ctra-2017-0013/ctra-2017-0013.pdf
Abstract: This paper presents a review of the literature on the measurement of creativity. Creativity definitions are discussed as a starting point for understanding the nature of this construct. The four major approaches to measuring creativity (process, person, product and press) are reviewed, pointing out commonly used instruments as well as the advantages and weaknesses of each approach. This review reveals that the measurement of creativity is an unsettled issue, and that the existing instruments purporting to measure creativity suffer from serious conceptual and psychometric shortcomings. Research gaps and suggestions for future research are discussed.
Results
From the 2,064 papers identified by the search process, 221 papers were selected based on screening of titles and abstracts. Among these, 152 papers met the inclusion criteria; these addressed the measurement of creativity and significant issues related to this measurement. Four distinct approaches to measuring creativity (process, person, product and press) were identified, along with the most commonly used instruments within each approach. In the following, we first discuss definitions of creativity, pointing to the different categories of these definitions. Then, we describe the approaches to measuring creativity and the advantages and weaknesses of each, with an emphasis on the psychometric properties of the most common instruments used in each approach.
Defining creativity
Creativity has proven, over the years, to be difficult to define and measure due to its complex and multidimensional nature (Barbot et al., 2011; Batey & Furnham, 2006; Cropley, 2000; Runco, 2004, 2007; Treffinger et al., 2002). Treffinger (1996) reviewed the creativity literature and presented more than 100 different definitions for this concept. Despite these different definitions, the majority of creativity studies tend to employ only a few of these definitions, whereas other studies avoid providing a definition of this construct at all (Kaufman, Plucker, & Russell, 2012; Plucker & Makel, 2010). Furthermore, researchers and educators may use the term creativity to refer to entirely different aspects, including cognitive processes, personal characteristics and past experiences (Treffinger et al., 2002). In addition, researchers sometimes use terms such as innovation, invention, imagination, talent, giftedness and intelligence interchangeably with creativity.
In general, definitions of creativity typically reflect at least one of four different perspectives: cognitive processes associated with creativity (later in this paper referred to as ‘process’), personal characteristics of creative individuals (‘person’), creative products or outcomes (‘product’) and the interaction between the creative individual and the context or environment (‘press’) (Couger, Higgins, & McIntyre, 1993; Horn & Salvendy, 2006; Rhodes, 1961; Thompson & Lordan, 1999; Zeng et al., 2011).
With regard to the process perspective, Torrance (1977), as a pioneer in creativity research, defined creativity as the process of perceiving problems or gaps in knowledge, developing hypotheses or propositions, testing and validating hypotheses and finally sharing the results. Similarly, Mednick (1962) proposed that creativity involves the process of bringing associative elements together into new combinations to meet the task requirements. Guilford (1950) suggested several factors for interpreting variation in creativity, including sensitivity to problems, fluency, flexibility, originality, synthesizing, analyzing, reorganizing or redefining, complexity and evaluating. In his Structure-of-Intellect (SOI) Model, Guilford (1975) considered creativity as a form of problem solving and distinguished between two types of cognitive operations: divergent production and convergent production. Divergent production is a broad search used in open problems to generate logical answers or alternatives, whereas convergent production is a focused search that leads to the generation of a specific logical imperative for a problem in which a particular answer is required. Guilford (1975) considered the divergent production process to be more relevant to successful creative thinking.
Focusing on the person perspective, a wide array of personal characteristics and traits have been suggested as being associated with creativity, including attraction to complexity, high energy, behavioural flexibility, intuition, emotional variability, self-esteem, risk taking, perseverance, independence, introversion, social poise and tolerance of ambiguity (Barron & Harrington, 1981; Feist, 1998; James & Asmus, 2000-2001; Runco, 2007). However, having such traits does not in itself guarantee creative achievement; intrinsic motivation also remains essential (Amabile, 1983). In other words, personality may be seen as related to the motivation to be creative rather than to creativity itself, with both of these being necessary for creative achievement (James & Asmus, 2000-2001). Task motivation is one of three key components in Amabile’s (1983, 1988, 1996) componential model of creativity that are necessary for creative performance, together with domain-relevant skills (including knowledge about the domain, technical skills and domain-related talent) and creativity-relevant skills (including personality characteristics and cognitive styles).
By turning the focus of defining creativity towards creative products, Khatena and Torrance (1973) defined creativity as constructing or organizing ideas, thoughts and feelings into unusual and associative bonds through the power of imagination. Gardner (1993) stated that creative individuals are able to solve problems, fashion products, or define new questions in a novel but acceptable way in a particular cultural context. Creativity is also seen as the ability to produce or design something that is original, adaptive with regard to task constraints, of high quality (Kaufman & Sternberg, 2007; Lubart & Guignard, 2004; Sternberg & Lubart, 1999), useful, beautiful and novel (Feist, 1998; Mumford, 2003; Ursyn, 2014).
Finally, regarding the press perspective, that is, the interaction between the creative person and the environment or climate, McLaren (1993) stated that creativity could not be fully understood through human endeavour without taking into account its socio-moral context and intent (James, Clark, & Cropanzano, 1999). Investigating the environment for creativity therefore requires that all the factors that promote or inhibit creativity should be taken into consideration (Thompson & Lordan, 1999). In the componential model of organizational innovation and creativity, Amabile (1988) proposed three broad environmental factors related to creativity: organizational motivation or orientation to innovate, available resources and management practices. Geis (1988) identified five factors to ensure a creative environment: a secure environment with minimum administrative or financial intervention, an organizational culture that makes it easy for people to create and discover independently, rewards for performance to support intrinsic motivation, managerial willingness to take risks in the targeted areas of creativity and providing training to enhance creativity. Several studies have indicated the impact of climate or environment variables on creative achievement (e.g. Couger et al., 1993; Paramithaa & Indarti, 2014), particularly with respect to the initial exploratory stages of creative endeavours in which individuals’ need for approval and support plays an important role in motivating their further efforts (Abbey & Dickson, 1983).
Despite these different perspectives in defining creativity, some aspects are shared by many researchers. Researchers generally agree that creativity involves the production of novel and useful responses (Batey, 2012; Mayer, 1999; Mumford, 2003; Runco & Jaeger, 2012). These two characteristics, novelty and usefulness, are widely mentioned in most definitions of creativity (Zeng, Proctor, & Salvendy, 2009), although there is still some debate about the definitions of these two terms (Batey, 2012; Batey & Furnham, 2006; Runco & Jaeger, 2012). Another area of consensus is that creativity is regarded as a multifaceted phenomenon that involves cognitive, personality and environmental components (Batey & Furnham, 2006; Lemons, 2011; Runco, 2004). As Harrington (1990, p.150) asserted “Creativity does not “reside” in any single cognitive or personality process, does not occur at any single point in time, does not “happen” at any particular place, and is not the product of any single individual”.
Friday, January 10, 2020
Women, but not men, are seen as more attractive with longer eyelashes; perceptions of health and femininity also increase with eyelash length; older women, rather than younger women, benefit the most from enhanced eyelashes
Adam, A. (2020). Beauty is in the eye of the beautiful: Enhanced eyelashes increase perceived health and attractiveness. Evolutionary Behavioral Sciences, Jan 2020. https://doi.org/10.1037/ebs0000192
Abstract: Although some aspects of physical attractiveness are specific to time and culture, other characteristics act as external cues to youth, health, and fertility. Like head hair, eyelashes change with age, and as such, they may also serve as external mating cues. In three experiments, I manipulated eyelash length in photographs of men and women and had participants rate them on attractiveness (Studies 1–3), perceived age (Studies 1–3), perceived health (Studies 2 and 3), and femininity (Study 3). The results indicate that women, but not men, are seen as more attractive with longer eyelashes; that perceptions of health and femininity also increase with eyelash length; and that older women, rather than younger women, benefit the most from enhanced eyelashes—but that longer eyelashes did not reduce perceptions of age.
How much do common genetic factors account for the association between general and domain-specific risk-taking preferences, and for risk-taking choices in financial investments, stock market participation and business formation?
Common genetic effects on risk-taking preferences and choices. Nicos Nicolaou & Scott Shane. Journal of Risk and Uncertainty, Jan 9 2020. https://link.springer.com/article/10.1007/s11166-019-09316-2
Abstract: Although prior research has shown that risk-taking preferences and choices are correlated across many domains, there is a dearth of research investigating whether these correlations are primarily the result of genetic or environmental factors. We examine the extent to which common genetic factors account for the association between general risk-taking preferences and domain-specific risk-taking preferences, and between general risk-taking preferences and risk taking choices in financial investments, stock market participation and business formation. Using data from 1898 monozygotic (MZ) and 1344 same-sex dizygotic (DZ) twins, we find that general risk-taking shares a common genetic component with domain-specific risk-taking preferences and risk-taking choices.
Discussion
Although prior research has shown that general risk preferences, domain-specific risk preferences and choices that involve risk are correlated, very little work has investigated whether these correlations are primarily the result of genetic or environmental factors. This study showed that the correlations between general risk preferences, domain-specific risk preferences, financial investment choices, stock market participation, and business formation choices are partially the result of genetic factors.
Human beings may have evolved into different types: Those whose genetic composition predisposes them to high-risk-high-return choices and those whose genetic composition predisposes them to low-risk-low-reward choices. Just as our ancestors chose between hunting and gathering in part because they had innate predispositions toward risk tolerance or risk aversion, today’s humans might choose between low-risk-low-return and high-risk-high-return occupations and investment strategies.
We posit that the common genetic component to these preferences leads to correlated behaviors among people. Genetic factors account for part of the covariance between general risk preferences and domain-specific risk preferences, between general risk preferences and financial investment choices, between general risk preferences and stock market participation, and between general risk preferences and the choice of entrepreneurship as an occupation. People who are more risk tolerant are more likely to invest in stocks, make riskier financial choices and choose risky occupations, in part, because of the biological processes underlying their behavior.
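As a rough illustration of the twin logic behind this covariance decomposition, the Python sketch below applies classical Falconer-style difference formulas to made-up MZ/DZ correlations. The function names and numbers are hypothetical, and the authors fit formal biometric (ACE-type) models rather than these back-of-envelope estimators.

```python
# Falconer-style sketch of the twin decomposition (illustrative only).
# The paper fits formal ACE models; these difference formulas are the
# classic back-of-envelope version of the same logic.

def ace_univariate(r_mz, r_dz):
    """Split trait variance using MZ and DZ twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (A)
    c2 = 2 * r_dz - r_mz     # shared environment (C)
    e2 = 1 - r_mz            # non-shared environment + error (E)
    return a2, c2, e2

def genetic_share_of_correlation(ctct_mz, ctct_dz, r_pheno):
    """Cross-twin cross-trait (CTCT) correlations: twin 1's general risk
    preference with twin 2's domain-specific choice. Doubling the MZ-DZ
    gap estimates the genetic portion of the phenotypic correlation."""
    bivariate_a = 2 * (ctct_mz - ctct_dz)
    return bivariate_a / r_pheno

# Made-up numbers, chosen only to show the mechanics:
print(ace_univariate(0.50, 0.25))                      # (0.5, 0.0, 0.5)
print(genetic_share_of_correlation(0.20, 0.10, 0.30))  # ~0.67 of r is genetic
```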
Our study contributes to a biosocial perspective on risk taking. Domain-specific risk preferences have a non-trivial genetic component. In addition, financial investment choices, the choice to become an entrepreneur, and stock market participation have a sizeable genetic component. These patterns suggest that cross-sectional differences in the preference for risk and risk-taking behavior emerge naturally in a society as a function of the distribution of genetic composition (Karlsson Linnér et al. 2019).
These results have interesting implications for those who examine risk taking. Parent-child similarity in risk taking, a commonly found correlation, may not result from cultural transmission as much as from genetic factors. While our findings do not negate the significance of environmental factors, they show that genetic influences cannot be ignored.
In addition, our results show that all of the environmental influences in risk taking were of the non-shared variety. This suggests that differential experiences outside the family, such as work environment and work colleagues, are more important for risk taking than shared environmental factors such as parental education and shared family rules and upbringing.
Our analysis suggests that a non-trivial fraction of the correlation between risk-taking behaviors results from innate factors. Because the ways to enhance those behaviors vary depending on the levels of genetic and environmental correlations, our results suggest that researchers need to think more carefully about the ways in which interventions might be used to increase the level of risk-taking behavior. Even if variables display a phenotypic correlation, interventions to increase one variable will not be likely to increase the other unless the correlations are largely environmental. Our results showed that a greater fraction of the correlation between general risk-taking preference and stock market participation was genetic than the fraction of the correlation between general risk taking preference and the tendency to be an entrepreneur. Therefore, efforts to increase entry into entrepreneurship by changing risk preferences through education may prove more effective than efforts to increase stock market participation through educationally-induced shifts in general risk preferences.
In addition, our study has implications for molecular genetics research in risk-taking behavior. Because common genetic factors account for a sizeable portion of the correlation between risk-taking preferences and choices in different domains, genes associated with those preferences and choices in one domain are plausible candidate genes for molecular genetics studies of risk-taking preferences and choices in other domains. These genes may also be influential for identifying gene-environment interactions in risk-taking.
It is crucial to stress that our study does not contend that genes determine risk taking behavior. As Johnson et al. (2009) argue, “even highly heritable traits can be strongly manipulated by the environment, so heritability has little if anything to do with controllability” (p. 218). Genes may only predispose some people and not others to develop risk taking preferences and choices. Thus, it is imperative for future research to understand the role that genes play in concert with contextual and environmental factors.
Our study has several limitations. Approximately 92% of the sample is female, hindering our ability to generalize our results to males. If women are less risk-taking than men, the range of our findings might be restricted when applied to males. While we have no reason to believe that genetic factors would only influence the correlation between risk-taking behavior in women and not men, we cannot show the generalizability of our findings across gender either.
Moreover, as in all twin studies, our analysis assumes that there is no assortative mating. Assortative mating, which occurs when individuals have children with partners who are genetically similar to them, increases the probability that children of similar parents receive more similar gene variants for some attributes than children of “non-similar” parents. Because assortative mating increases the genetic similarity between fraternal twins, but not between identical twins (Guo 2005), it biases the results of twin studies by underestimating heritability (Plomin et al. 2008). Because we do not know if there is parental assortative mating with respect to risk-taking preferences and choices, we must caution that our findings could be biased downward and underrepresent the common genetic component to risk-taking.
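To see the direction of this bias concretely: under the Falconer estimate h² = 2(rMZ − rDZ), inflating the DZ correlation while leaving the MZ correlation untouched shrinks the estimate. A small sketch with assumed correlations:

```python
# Hypothetical numbers showing how assortative mating depresses twin-based
# heritability estimates: it inflates the DZ correlation (DZ twins share
# more than the expected half of segregating variants) but cannot change
# MZ twins, who are already genetically identical.

def falconer_h2(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

r_mz = 0.50
print(falconer_h2(r_mz, r_dz=0.25))  # 0.50 under random mating
print(falconer_h2(r_mz, r_dz=0.30))  # 0.40 once assortative mating inflates r_dz
```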
Furthermore, any violation of the equal environments assumption (EEA) may also affect the robustness of our findings. If environmental factors behave towards identical twins more similarly than towards fraternal twins with respect to risk-taking preferences or choices, the validity of the EEA would be challenged. While we have no reason to believe that this would be the case, we do not have the evidence to empirically verify the validity of the EEA in our study.
In addition, our results may be affected by measurement error. Beauchamp et al. (2017) found that measurement-error-adjusted estimates of heritability were considerably higher than the non-adjusted estimates. They conjecture that “once measurement error is controlled for, the heritability of most economic attitudes will approach that of the ‘Big Five’ in personality research” (Beauchamp et al. 2017: 231). This suggests that the heritability estimates for our risk taking variables may be conservative.
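The mechanics behind this attenuation point are Spearman's classic disattenuation correction, r_true = r_observed / sqrt(rel_x · rel_y). A minimal sketch, with reliabilities assumed for illustration rather than taken from Beauchamp et al. (2017):

```python
# Spearman's correction for attenuation (illustrative reliabilities only).

def disattenuate(r_observed, reliability_x, reliability_y):
    """Correct an observed correlation for unreliability in both measures."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# Single-item risk measures often have modest reliability (~0.6 assumed here),
# so an observed correlation of 0.30 would imply a true correlation near 0.50:
print(round(disattenuate(0.30, 0.6, 0.6), 2))  # 0.5
```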
Finally, our analysis says nothing about the specific genetic mechanism involved in risk taking preferences and choices. Our results are consistent with the proposition that people with different genotypes select into different environments for risk-taking, as well as the proposition that genes themselves have a proximal effect on risk-taking preferences and choices. Moreover, we cannot know from these results what genes are involved in risk-taking preferences and choices or how many genes influence the observed outcomes.
We conclude by strongly encouraging additional research on the genetics of risk-taking. Considering the complementary role that biology plays in accounting for risk-taking is important lest we limit our ability to explain this important phenomenon. While most social scientists are comfortable exploring the role of environmental factors, they are less comfortable looking at the part that genetics plays. But, as Song, Li, & Wang (2015) have stressed, the need to account for more of the variance in work-related behaviour suggests that the role of genetics should be more carefully considered.
Women: Socioeconomic status negatively correlated with subjective orgasm experience
Factors Associated with Subjective Orgasm Experience in Heterosexual Relationships. Ana Isabel Arcos-Romero & Juan Carlos Sierra. Journal of Sex & Marital Therapy, Jan 9 2020. https://doi.org/10.1080/0092623X.2019.1711273
Abstract: The main objective of this study was to determine the predictive capacity of different variables, organized based on Ecological theory (i.e., personal, interpersonal, social, and ideological), in the intensity of the subjective orgasm experience within the context of heterosexual relationships. The sample was composed of 1,300 adults (547 men, 753 women). The proposed model for men showed that more intense subjective orgasm experience was predicted by age, sexual sensation seeking, sexual satisfaction, and partner-focused sexual desire. The model for women showed that more intense subjective orgasm experience was predicted by age, erotophilia, sexual sensation seeking, partner-focused sexual desire, and sexual satisfaction.
No differences in cognitive or structural MRI measures in those who reported, on average, 5.4, 6.2, 7.0, or 7.9 hours of sleep per night over 5 timepoints spanning 28 years
Sleep duration over 28 years, cognition, gray matter volume, and white matter microstructure: a prospective cohort study. Jennifer Zitser et al. Sleep, zsz290, January 6 2020. https://doi.org/10.1093/sleep/zsz290
Abstract
Study Objectives: To examine the association between sleep duration trajectories over 28 years and measures of cognition, gray matter volume, and white matter microstructure. We hypothesize that consistently meeting sleep guidelines that recommend at least 7 hours of sleep per night will be associated with better cognition, greater gray matter volumes, higher fractional anisotropy, and lower radial diffusivity values.
Methods: We studied 613 participants (age 42.3 ± 5.03 years at baseline) who self-reported sleep duration at five time points between 1985 and 2013, and who had cognitive testing and magnetic resonance imaging administered at a single timepoint between 2012 and 2016. We applied latent class growth analysis to estimate membership into trajectory groups based on self-reported sleep duration over time. Analysis of gray matter volumes was carried out using FSL Voxel-Based-Morphometry and white matter microstructure using Tract Based Spatial Statistics. We assessed group differences in cognitive and MRI outcomes using nonparametric permutation testing.
Results: Latent class growth analysis identified four trajectory groups, with an average sleep duration of 5.4 ± 0.2 hours (5%, N = 29), 6.2 ± 0.3 hours (37%, N = 228), 7.0 ± 0.2 hours (45%, N = 278), and 7.9 ± 0.3 hours (13%, N = 78). No differences in cognition, gray matter, and white matter measures were detected between groups.
Conclusions: Our null findings suggest that current sleep guidelines that recommend at least 7 hours of sleep per night may not be supported in relation to an association between sleep patterns and cognitive function or brain structure.
Keywords: aging, cognition, gray matter, sleep, white matter
Statement of Significance: Up to a third of adults report between 6 and 7 hours of sleep per night, and thus fail to meet sleep guidelines which recommend at least 7 hours of sleep per night. Although extreme short sleep (e.g. ≤5 hours per night) has repeatedly been associated with poor cognitive health, it is currently unclear if such relationships subsist for more moderate short-sleep durations. We found no differences in cognitive or structural MRI measures between groups that reported, on average, 5.4 hours, 6.2 hours, 7.0 hours, and 7.9 hours sleep per night over 5 timepoints spanning 28 years. If replicated with longitudinal markers of cognitive health, such null results could challenge the suitability of current sleep guidelines on cognitive outcomes.
Discussion
The aim of this study was to examine sleep duration trajectories over a 28-year period and their relationship with measures of cognition, gray matter volume, and white matter microstructure. We hypothesized that consistently meeting recommendations for sleep duration (i.e. self-reporting a minimum of 7 hours sleep per night) would be favorably associated with cognition, gray matter volume and white matter microstructure, compared with consistently not meeting the guidelines or transitioning in and out of the guidelines over time. In contrast to our hypotheses, our results did not show any differences in cognitive measures, gray matter volume, or white matter microstructure between different sleep trajectory groups.

To the best of our knowledge, only one study has previously applied latent class growth modeling to examine trajectories of self-reported sleep duration over time within an adult population. In a study of 8,673 Canadian adults, Gilmour et al. identified four sleep trajectory groups with intercepts of 5.57 hours (11% of participants), 6.68 hours (49%), 7.65 hours (37%), and 8.34 hours (2%) [36]. We also identified four trajectory groups, with intercepts of 5.54 hours (5% of participants), 6.57 hours (37%), 6.95 hours (45%), and 7.85 hours (13%). With regard to shape, the trajectories identified in both studies displayed limited meaningful change over time. For example, average sleep duration differed by less than an hour between timepoints in all groups, indicating that extreme increases or decreases in sleep duration over time are limited in prevalence. Given that our studies differ both in terms of demographics and methods (e.g. sleep duration was assessed over an 8-year period in Gilmour et al. [36], compared with 28 years in our study), it is encouraging that our results are broadly complementary.
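For readers unfamiliar with trajectory-group methods, the sketch below mimics what latent class growth analysis does here, recovering flat trajectory groups from repeated sleep measures, using k-means on simulated data as a crude stand-in. The group means and sizes echo the paper's reported results, but the data are simulated and the authors' actual LCGA model is more principled.

```python
# Crude k-means stand-in for the latent class growth analysis above.
# Simulated data only; the reported intercepts and group sizes are used
# to generate plausible 5-timepoint trajectories.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_means = [5.4, 6.2, 7.0, 7.9]   # trajectory intercepts reported in the paper
group_sizes = [29, 228, 278, 78]     # group Ns reported in the paper

# Flat trajectories plus noise (the paper found little change over time):
trajectories = np.vstack([
    mean + rng.normal(0, 0.3, size=(n, 5))   # 5 self-report timepoints each
    for mean, n in zip(group_means, group_sizes)
])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(trajectories)
for k in range(4):
    members = trajectories[labels == k]
    print(f"group {k}: mean {members.mean():.1f} h, n = {len(members)}")
```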
Of particular note within our study is that 37% of participants were included in a group with an average sleep duration of 6.2 hours. Guidelines published by the American Academy of Sleep Medicine and the Sleep Research Society state that “adults should sleep 7 or more hours per night on a regular basis to promote optimal health” [4]. In addition, the National Sleep Foundation’s guidelines posit that 7–9 hours sleep per night is “recommended” for health and well-being, with less than 6 hours sleep “not recommended” [3]. In these guidelines, 6 hours sleep per night falls in a somewhat gray area between these two categories, and is classified as “may be appropriate.” Relevant to the revision of such guidelines, our study found no evidence that consistently reporting approximately 6 hours sleep per night is associated with adverse cognitive and MRI outcomes.
Such null findings, however, do not necessarily indicate that sleep duration is unimportant to cognitive health. Rather, they may reflect the limited number of participants reporting extremes in sleep duration within our sample. At each phase, between 92% and 96% of participants reported 6, 7, or 8 hours sleep per night (Supplementary Material Table S5). The group with the shortest sleep duration in our study contained just 5% of participants and had a mean sleep duration of 5.4 hours. The group with the longest sleep duration, which contained 13% of participants, had a mean sleep duration of 7.87 hours, a value that falls within guidelines for recommended sleep durations. Change in sleep duration was also limited within our sample: between 89% and 94% of participants reported a change in sleep duration of 0–1 hour between the baseline and subsequent data waves (Supplementary Material Tables S6–S7), and sleep trajectories remained relatively stable over time overall. Significant group differences for cognitive and MRI measures may only become apparent with larger samples of more extreme sleep durations, groups that have often been the focus of cognitive studies to date. For example, in a meta-analysis that reported significant associations between sleep duration and cognitive outcomes, the most common category for short-sleep duration was 5 hours or less (ranging from <4 to ≤6.5 hours), and the most common category for long-sleep duration was 9 hours or more (ranging from ≥8 to ≥11 hours) [5]. Furthermore, although few studies have examined change in sleep duration over time, Devore et al. reported that women whose sleep duration changed by 2 hours or more in any direction had worse cognitive outcomes compared with women with no change in sleep duration [8]. In addition, previous studies based on the entire Whitehall II cohort found that adverse changes in sleep duration are associated with poorer cognitive function [6]. There are many reasons that our findings may diverge, despite overlapping samples. These include differences in sample size (5,431 vs 613, thereby impacting power to assess extremes in sleep duration and change), sample characteristics (see Supplementary Material Table S8 for comparison), number of assessments of sleep duration (2 vs 5), and the cognitive test battery administered. Therefore, our study does not contradict the hypothesis that extreme short sleep, extreme long sleep, or extreme changes in sleep duration are associated with adverse outcomes; instead, it indicates that such groups may not be well represented in small population-based samples.
An alternative explanation for our null results is that it is not sleep duration alone that is associated with cognitive health in aging, but rather the combination of sleep quality and quantity. Indeed, in an overlapping sample, we have previously reported that poor sleep quality is associated with reduced FA and increased RD within fronto-subcortical regions [15]. In an exploratory post hoc analysis, we divided the 6-hour and 7-hour groups into poor and good sleep quality groups dependent upon their PSQI score at the time of the MRI scan (due to limited sample size, we did not include the 5-hour and 8-hour groups in this analysis) (Supplementary Material: Text S2, Table S9). F tests showed significant group differences for global FA and voxel-wise RD. The 6-hour good sleep quality group displayed higher global FA and reduced RD in widespread tracts, compared with both the 6-hour poor sleep quality group and the 7-hour poor sleep quality group (Figure S2). The 6-hour good sleep quality group also displayed reduced RD compared with the 7-hour good sleep quality group in the corpus callosum (Figure S2). These results indicate that the combination of sleep quality and quantity may be a more sensitive correlate of cognitive health in aging than duration alone. However, it is critical to stress that our measures of sleep quality and quantity are not directly comparable (e.g. quality was measured using a 17-item questionnaire at a single timepoint, duration was measured using a single-item questionnaire at five timepoints). Therefore, these post hoc results should be considered exploratory and require independent replication.
Our study has a number of strengths, including the availability of sleep duration data at five points spanning 28 years prior to cognitive and MRI assessment, which allowed us to examine sleep trajectories over time. A major limitation of our analysis was the reliance on a single-item self-report of sleep duration, in which participants could only report their sleep durations in discrete categories (i.e. “5 hours or less,” “6 hours,” “7 hours,” “8 hours,” or “9 hours or more”). Sleep duration may be more sensitively measured if sleep was measured in hours and minutes and if there were no lower or upper thresholds for sleep duration. A further limitation is that participants were asked to report their sleep duration only on an “average week night.” The discrepancy between weeknight and weekend sleep duration is common in working-age populations and there is debate regarding whether long weekend sleep can compensate for short weeknight sleep for health outcomes such as mortality [37], weight, and insulin sensitivity [38]. In the Whitehall II study, the agreement between self-reported and accelerometer-measured total sleep duration was slightly higher in weekdays (kappa = 0.37, 95% CI 0.34–0.40) than weekend days (kappa = 0.33, 95% CI 0.31–0.36) [39]. Further research is needed to examine the long-term effects of weekend recovery sleep on cognition. Sleep duration is also often overestimated in self-reported compared to objective studies [40–42]. For example, within the Sleep Heart Health Study of 2,113 adults at a mean age of 67 years, morning self-estimated sleep time and total sleep time measured by polysomnography were estimated as 379 and 363 minutes, respectively, with a weak correlation of r = 0.16 between the measures [41]. Self-reported total sleep duration and sleep duration assessed using a wrist-worn accelerometer were moderately related in the Whitehall II study of 4,094 adults aged 60–83 (kappa 0.39, 95% CI 0.36–0.42) [39]. Importantly, differences between measurements may impact upon observed relationships with cognitive outcomes. For example, the Sleep Study of the National Social Life, Health, and Aging Project (NSHAP), a nationally representative cohort of older US adults (2010–2015), found that actigraphic measures of sleep disruption were associated with worse cognition and higher odds of 5-year cognitive decline but there was no association for self-reported sleep [43]. As self-reported measures of sleep duration correlate well with daily sleep diaries [44], are the mainstay of population-based cohort studies, and are the focus of sleep guidelines, further studies using both objective and self-report measures of sleep duration to examine cognitive and MRI outcomes are needed. Furthermore, our findings have limited generalizability, given that the Whitehall participants have relatively high educational attainment which might contribute to increased cognitive reserve. As a result, we may have underestimated the long-term neurocognitive effects associated with unfavorable sleep patterns in subpopulations of low educational attainment, who may be more sensitive to the detrimental health effects of sleep disturbance. We were also unable to rule out potential selection bias; it is plausible that those with the greatest cognitive decline and/or most extreme sleep durations were less likely to return for a follow-up assessment.
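The agreement statistics quoted above (kappa ≈ 0.33–0.39) are Cohen's kappa, chance-corrected agreement between two categorical ratings. For concreteness, a minimal sketch on invented categorized sleep reports:

```python
# Cohen's kappa on hypothetical categorized sleep durations: self-report
# vs accelerometer, binned the way cohort studies typically bin them.

from sklearn.metrics import cohen_kappa_score

self_report   = ["6-7h", "<6h", ">7h", "6-7h", "6-7h", "<6h"]
accelerometer = ["<6h",  "<6h", ">7h", "6-7h", "<6h",  "<6h"]

# 4/6 observed agreement vs 1/3 expected by chance -> kappa = 0.5
print(cohen_kappa_score(self_report, accelerometer))  # 0.5
```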
Targets feel less close to communicators who hide their successes (targets infer paternalistic motives behind the hiding, which leads them to feel insulted); sharing success increases closeness, despite also triggering envy
Roberts, Annabelle, Emma Levine, and Ovul Sezer. 2020. “Hiding Success.” PsyArXiv. January 9. doi:10.31234/osf.io/6g3ez
Abstract: Self-promotion is common in everyday life. Yet, across seven studies (N = 1,672) examining a broad range of personal and professional successes, we find that individuals often hide their successes from others and that such hiding has harmful relational consequences. We document these effects among close relational partners, strangers, and within hypothetical relationships. In Study 1, we find that targets feel less close to and more insulted by communicators who hide rather than share their successes. Conversely, sharing success increases closeness, despite also triggering envy. In Study 2, we find that hiding is more costly than sharing success, even when the target does not learn about the act of hiding. That is, hiding success harms relationships both when the success is eventually discovered and when it is not. In Studies 3 and 4, we explore the mechanism underlying these interpersonal costs: Targets infer that communicators have paternalistic motives when they hide their success, which leads them to feel insulted. Studies 5 and 6 explore this mechanism in greater detail by documenting the contextual cues that elicit inferences of paternalistic motives. While a large body of existing research highlights the negative consequences of sharing one’s accomplishments with others, our research demonstrates that sharing is often superior to hiding. In doing so, we shed new light on the consequences of paternalism and the relational costs of hiding information in everyday communication.
Decreasing human body temperature in the United States since the industrial revolution
Decreasing human body temperature in the United States since the industrial revolution. Myroslava Protsiv, Catherine Ley, Joanna Lankester, Trevor Hastie, Julie Parsonnet.
eLife 2020;9:e49555, Jan 7, 2020, doi: 10.7554/eLife.49555
Abstract: In the US, the normal, oral temperature of adults is, on average, lower than the canonical 37°C established in the 19th century. We postulated that body temperature has decreased over time. Using measurements from three cohorts--the Union Army Veterans of the Civil War (N = 23,710; measurement years 1860–1940), the National Health and Nutrition Examination Survey I (N = 15,301; 1971–1975), and the Stanford Translational Research Integrated Database Environment (N = 150,280; 2007–2017)--we determined that mean body temperature in men and women, after adjusting for age, height, weight and, in some models, date and time of day, has decreased monotonically by 0.03°C per birth decade. A similar decline within the Union Army cohort as between cohorts makes measurement error an unlikely explanation. This substantive and continuing shift in body temperature—a marker for metabolic rate—provides a framework for understanding changes in human health and longevity over 157 years.
Discussion (links and full text at the original paper above)
In this study, we analyzed 677,423 human body temperature measurements from three different cohort populations spanning 157 years of measurement and 197 birth years. We found that men born in the early 19th century had temperatures 0.59°C higher than men today, with a monotonic decrease of −0.03°C per birth decade. Temperature has also decreased in women by −0.32°C since the 1890s, with a similar rate of decline (−0.029°C per birth decade). Although one might posit that the differences among cohorts reflect systematic measurement bias due to the varied thermometers and methods used to obtain temperatures, we believe this explanation to be unlikely. We observed a similar temporal change within the UAVCW cohort—in which measurements were presumably obtained irrespective of the subject's birth decade—as we did between cohorts. Additionally, we saw a comparable magnitude of difference in temperature between two modern cohorts using thermometers that would be expected to be similarly calibrated. Moreover, biases introduced by the method of thermometry (axillary presumed in a subset of UAVCW vs. oral for other cohorts) would tend to underestimate change over time, since axillary values typically average one degree Celsius lower than oral temperatures (Sund-Levander et al., 2002; Niven et al., 2015). Thus, we believe the observed drop in temperature reflects physiologic differences rather than measurement bias. Other findings in our study—for example, increased temperature at younger ages, in women, with increased body mass and with later time of day—support a wealth of other studies dating back to the time of Wunderlich (Wunderlich and Sequin, 1871; Waalen and Buxbaum, 2011).
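The headline numbers are internally consistent, as a quick back-of-envelope check (ours, not the authors') shows:

# Decline of 0.03 degC per birth decade over the 197 birth years covered:
decline_per_decade = 0.03
total_decline = decline_per_decade * 197 / 10
print(total_decline)       # ~0.59 degC, matching the male cohort difference
print(total_decline / 37)  # ~0.016, i.e. the ~1.6% relative change cited in the summary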
Resting metabolic rate is the largest component of a typical modern human’s energy expenditure, comprising around 65% of daily energy expenditure for a sedentary individual (Heymsfield et al., 2006). Heat is a byproduct of metabolic processes, which is why nearly all warm-blooded animals have temperatures within a narrow range despite drastic differences in environmental conditions. Over several decades, studies examining whether metabolism is related to body surface area or body weight (Du Bois, 1936; Kleiber, 1972) ultimately converged on weight-dependent models (Mifflin et al., 1990; Schofield, 1985; Nelson et al., 1992). Since US residents have increased in mass since the mid-19th century, we would correspondingly have expected increased body temperature. Thus, we interpret our finding of a decrease in body temperature as indicative of a decrease in metabolic rate independent of changes in anthropometrics. A decline in metabolic rate in recent years is supported in the literature comparing modern experimental data to those from 1919 (Frankenfield et al., 2005).
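To make "weight-dependent models" concrete: the Mifflin et al. (1990) equation cited above predicts resting metabolic rate from weight, height, age and sex. A sketch, with illustrative input values:

def mifflin_st_jeor(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor resting metabolic rate, in kcal/day."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (5 if male else -161)

print(mifflin_st_jeor(70, 175, 40, male=True))  # ~1599 kcal/day for this example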
Although many factors influence resting metabolic rate, a population-level change in inflammation seems the most plausible explanation for the observed decrease in temperature over time. Economic development, improved standards of living and sanitation, decreased chronic infections from war injuries, improved dental hygiene, the waning of tuberculosis and malaria infections, and the dawn of the antibiotic age together are likely to have decreased chronic inflammation since the 19th century. For example, in the mid-19th century, 2–3% of the population would have been living with active tuberculosis (Tiemersma et al., 2011). This figure is consistent with the UAVCW Surgeons' Certificates, which reported 737 cases of active tuberculosis among 23,757 subjects (3.1%). That UAVCW veterans who reported either current tuberculosis or pneumonia had a higher temperature (0.19°C and 0.03°C, respectively) than those without infectious conditions supports this theory (Supplementary file 1). Although we would have liked to compare our modern results to those from a location with a continued high risk of chronic infection, we could identify no such database that included temperature measurements. However, a small study of healthy volunteers from Pakistan—a country with a continued high incidence of tuberculosis and other chronic infections—confirms temperatures more closely approximating the values reported by Wunderlich (mean, median and mode, respectively, of 36.89°C, 36.94°C, and 37°C) (Adhi et al., 2008).
Reduction in inflammation may also explain the continued drop in temperature observed between the two more modern cohorts, NHANES and STRIDE. Although many chronic infections had been conquered before the NHANES study, some—periodontitis as one example (Capilouto and Douglass, 1988)—continued to decrease over this short period. Moreover, the use of anti-inflammatory drugs including aspirin (Luepker et al., 2015), statins (Salami et al., 2017) and non-steroidal anti-inflammatory drugs (NSAIDs) (Lamont and Dias, 2008) increased over this interval, potentially reducing inflammation. NSAIDs have been specifically linked to blunting of body temperature, even in normal volunteers (Murphy et al., 1996). In support of declining inflammation in the modern era, a study of NHANES participants demonstrated a 5% decrease in abnormal C-reactive protein levels between 1999 and 2010 (Ong et al., 2013).
Changes in ambient temperature may also explain some of the observed change in body temperature over time. Maintaining constant body temperature despite fluctuations in ambient temperature consumes up to 50–70% of daily energy intake (Levine, 2007). Resting metabolic rate (RMR), for which body temperature is a crude proxy, increases when the ambient temperature decreases below or rises above the thermoneutral zone, that is, the temperature of the environment at which humans can maintain normal temperature with minimum energy expenditure (Erikson et al., 1956). In the 19th century, homes in the US were irregularly and inconsistently heated and never cooled. By the 1920s, however, heating systems reached a broad segment of the population, with mean night-time temperature continuing to increase even in the modern era (Mavrogianni et al., 2013). Air conditioning is now found in more than 85% of US homes (US Energy Information Administration, 2011). Thus, the amount of time the population has spent at thermoneutral zones has markedly increased, potentially causing a decrease in RMR and, by analogy, body temperature.
Some factors known to influence body temperature were not included in our final model due to missing data (ambient temperature and time of day) or a complete lack of information (dew point) (Obermeyer et al., 2017). Adjusting for ambient temperature, however, would likely have amplified the changes over time, given the lack of heating and cooling in the earlier cohorts. The time of day at which measurement was conducted had a more significant effect on temperature (Figure 1—figure supplement 4). Based on the distribution of times of day for temperature measurement available to us in STRIDE and NHANES, we estimate that even in the worst-case scenario—that is, if the UAVCW measurements were all obtained late in the afternoon—adjustment for time of day would have only a small influence (<0.05°C) on the −0.59°C change over time.
In summary, normal body temperature is assumed by many, including a great preponderance of physicians, to be 37°C. Those who have shown this value to be too high have concluded that Wunderlich’s 19th-century measurements were simply flawed (Mackowiak, 1997; Sund-Levander et al., 2002). Our investigation indicates that humans in high-income countries have changed physiologically over the last 200 birth years, with a mean body temperature 1.6% lower than in the pre-industrial era. The role that this physiologic ‘evolution’ plays in human anthropometrics and longevity is unknown.
Democrats & Republicans equally dislike & dehumanize each other, but their estimates of how they are seen by members of the other party are more than twice the levels actually reported
Moore-Berg, Samantha, Lee-Or A. Karlinsky, Boaz Hameiri, and Emile Bruneau. 2020. “The Partisan Penumbra: Political Partisans’ Exaggerated Meta-perceptions Predict Intergroup Hostility.” PsyArXiv. January 9. doi:10.31234/osf.io/d6bpe
Abstract: People’s actions towards a competitive outgroup can be motivated not only by their perceptions of the outgroup, but also by how they think the outgroup perceives the ingroup (i.e., meta-perceptions). Here we examine the prevalence, accuracy, and consequences of meta-perceptions among American political partisans. Using representative samples (N=1053) and a longitudinal convenience sample (N=2707), we find that Democrats and Republicans equally dislike and dehumanize each other, but estimate that the levels of prejudice and dehumanization expressed by the outgroup party are more than twice the levels actually reported by representative samples of Democrats and Republicans. Finally, we show that meta-prejudice and meta-dehumanization are independently associated with outgroup hostility through their effects on prejudice and dehumanization. This research demonstrates that partisan meta-perceptions are subject to a strong negativity bias, with Democrats and Republicans agreeing that the shadow of partisanship is much larger than it actually is, which fosters mutual intergroup hostility.
Having a goal to change one’s level of Extraversion, Neuroticism, Agreeableness, & Conscientiousness does not lead to any subsequent change in trait levels over the course of 12 months
Personality change goals and plans as predictors of longitudinal trait change in young adults: A replication with an Iranian sample. Samaneh Asadi, Hamideh Mohammadi Dehaja, Oliver Robinson. Journal of Research in Personality, January 9 2020, 103912, https://doi.org/10.1016/j.jrp.2020.103912
Highlights
• Having a goal to change one’s level of Openness to Experience predicts becoming higher on this trait over the course of 12 months, in a sample of Iranian students.
• Having a goal to change one’s level of the other traits within the Big Five model (Extraversion, Neuroticism, Agreeableness, Conscientiousness) does not lead to any subsequent change in trait levels over the course of 12 months.
• 74% of the sample have a goal to reduce their current level of Neuroticism, and 61% have a goal to increase Conscientiousness.
Abstract: Goals and plans for changing one’s personality traits have been found to be commonly held, particularly in young adults. Evidence for whether such goals and plans can predict actual trait change is mixed. The current study replicated and extended the methodology of a previous study to investigate whether trait change goals and plans predict change over a year in an Iranian sample of students. It was found that goals and plans before and after the 12-month period predicted longitudinal change in Openness to Experience, but no association was found for other traits. To explore whether this relationship between goals and change in Openness to Experience is replicable, further research with samples of differing ages and cultures is needed.
Keywords: Personality change goals, trait change, plans, longitudinal
Check also In large part, the wish to change personality did not predict actual change in the desired direction; & desired increases in Extraversion, Agreeableness & Conscientiousness corresponded with decreases:
From Desire to Development? A Multi-Sample, Idiographic Examination of Volitional Personality Change. Erica Baranski et al. Journal of Research in Personality, December 26 2019, 103910. https://www.bipartisanalliance.com/2019/12/in-large-part-wish-to-change.html
Acquisition announcement returns and post-merger operating performance are significantly higher when the acquirer and the target have more similar political attitudes
Duchin, Ran and Farroukh, Abed El Karim and Harford, Jarrad and Patel, Tarun, Political Attitudes, Partisanship, and Merger Activity (Nov 30, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3497907
Abstract: This paper provides novel evidence that similarity in employees’ political attitudes plays a role in mergers and acquisitions. Using detailed data on individual campaign contributions to Democrats and Republicans, our estimates show that firms are considerably more likely to announce a merger, complete a merger, and have a shorter time-to-completion when their political attitudes are closer. Furthermore, acquisition announcement returns and post-merger operating performance are significantly higher when the acquirer and the target have more similar political attitudes. The effects of political partisanship on mergers are stronger in more recent years, when the political polarization in the U.S. is greater. Overall, we provide estimates that political attitudes and polarization have real effects on the allocation of assets in the economy.
Keywords: campaign contributions, mergers and acquisitions, politics, polarization
JEL Classification: G34, D72
When female speakers increased their pitch they were judged as more attractive; unexpectedly, male speakers also tended to rate other males who shifted their voice up in pitch as more attractive
Vocal attractiveness and voluntarily pitch-shifted voices. Yi Zheng et al. Evolution and Human Behavior, January 9 2020. https://doi.org/10.1016/j.evolhumbehav.2020.01.002
Abstract: Previous studies have found that using software to pitch shift people's voices can boost their perceived attractiveness to opposite-sex adults: men prefer women's voices when pitch-shifted up, and women prefer men's voices when pitch-shifted down. In this study, we sought to determine whether speakers could affect their perceived vocal attractiveness by voluntarily shifting their own voices to reach specific target pitches (+20 Hz or −20 Hz, a pitch increment that is based on prior research). Two sets of Chinese college students participated in the research: 115 who served as speakers whose voices were recorded, and 167 who served as raters who evaluated the speakers' voices. We found that when female speakers increased their pitch they were judged as more attractive to both opposite-sex and same-sex raters. An additional unexpected finding was that male speakers tended to rate other males who shifted their voice up in pitch as more attractive. These findings suggest that voluntary pitch shifts can affect attractiveness, but that they do not fully match the patterns observed when pitch shifting is done digitally.
Keywords: Vocal attractiveness, pitch shifting
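For readers wondering how the ±20 Hz targets relate to the semitone-based shifting used in the digital pitch-shifting studies mentioned in the abstract: a fixed Hz shift corresponds to different musical intervals depending on baseline pitch. A sketch, with illustrative baseline f0 values (not the study's measurements):

import math

def hz_shift_to_semitones(f0_hz, shift_hz):
    # A fixed Hz shift maps to semitones via the logarithmic frequency scale.
    return 12 * math.log2((f0_hz + shift_hz) / f0_hz)

print(hz_shift_to_semitones(210, 20))   # ~ +1.6 semitones at a female-typical f0
print(hz_shift_to_semitones(120, -20))  # ~ -3.2 semitones at a male-typical f0

# With a library such as librosa, the equivalent digital shift would be, e.g.:
# import librosa
# y, sr = librosa.load("voice.wav", sr=None)
# y_up = librosa.effects.pitch_shift(y, sr=sr, n_steps=hz_shift_to_semitones(210, 20))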
Humans, macaque monkeys, cats, horses, sheep, owls, falcons, & toads have stereopsis; in cuttlefish, stereopsis works differently than in vertebrates (they can extract stereopsis cues from anticorrelated stimuli)
Cuttlefish use stereopsis to strike at prey. R. C. Feord et al. Science Advances Jan 08 2020: Vol. 6, no. 2, eaay6036. DOI: 10.1126/sciadv.aay6036
Abstract: The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architecture, are also shared. To investigate stereopsis in a cephalopod species, we affixed “anaglyph” glasses to cuttlefish and used a three-dimensional perception paradigm. We show that (i) cuttlefish have also evolved stereopsis (i.e., the ability to extract depth information from the disparity between left and right visual fields); (ii) when stereopsis information is intact, the time and distance covered before striking at a target are shorter; (iii) stereopsis in cuttlefish works differently to vertebrates, as cuttlefish can extract stereopsis cues from anticorrelated stimuli. These findings demonstrate that although there is convergent evolution in depth computation, cuttlefish stereopsis is likely afforded by a different algorithm than in humans, and not just a different implementation.
DISCUSSION
To ensure that cuttlefish hit their prey successfully, they must acquire information about its location before the strike. Here, we show that cuttlefish use the disparity between their left and right eyes to perceive depth (Fig. 1). Cuttlefish use this information to aid in prey capture, as animals with intact binocular vision take less time to strike at prey and do so from farther away (Fig. 2). In animals tested with quasi-monocular stimuli, the significant difference in latency, travel distance, and strike location during the positioning phase is consistent with Messenger’s study (7), as he found that attack success in unilaterally blinded animals decreased to 56% (versus 91% in binocular animals). Nonetheless, binocular cues cannot be the only depth perception mechanism used by the cuttlefish, as many quasi-monocularly and binocularly stimulated animals behaved equally well, both in Messenger’s study and ours. The absence of pictorial cues in our stimulus (the shrimp silhouette lacks any shadowing, shading, or occlusion) leads us to conclude that for monocular but not binocular depth perception, cuttlefish may rely on motion cues such as parallax (13) and/or motion in depth (19).
Before our investigation, cuttlefish were not known to use stereopsis (i.e., calculate depth from disparity between left and right eye views). They had been shown only to have a variable range of binocular overlap (7). Using anaglyph glasses and this 3D perception assay, we provide strong support that cuttlefish have and use stereopsis during the positioning to prey seizure phases of the hunt. However, as suggested by Messenger (7), other depth estimation strategies, such as oculomotor proprioceptive cues provided by the vergence of the two eyes (20, 21), could be at play. Accommodation cues, as used by chameleons to judge distance (22), could also provide an additional explanation as lens movements have been observed in cuttlefish (23). However, if proprioceptive or accommodation cues were being used by cuttlefish for depth estimation, it should not fail as it did when presented with a completely uncorrelated stimulus, i.e., each eye should still fixate and converge on the moving target without requiring correspondence between the images (Fig. 3). We observed that cuttlefish consistently engaged and reached the positioning phase when presented with uncorrelated stimuli, but they quickly aborted and never advanced to the striking phase of the hunt (movie S4). Because they could not solve the uncorrelated stimuli test, we conclude that cuttlefish rely on interocular correspondence to integrate binocular cues and not simply use binocular optomotor cues (vergence) or accommodation to estimate depth. This also indicates that cuttlefish stereopsis is different from praying mantis (also known as mantids) stereopsis, because mantids can resolve targets based on “kinetic disparity” (the difference in the location of moving object between both eyes) (16). Mantids can do this in the absence of “static disparity” provided by the surrounding visual scene, something which humans are unable to do (16).
To see how binocular overlap may play a role in stereopsis, we investigated eye convergence. A disparity difference of up to 10° between the left and right eye angular positions at the moment when they strike may seem large (Fig. 4). However, cuttlefish have a relatively low-resolution retina, 2.5° to 0.57° per photoreceptor (24). Thus, cuttlefish image disparity relative to their eye resolution is comparable to the relative magnitudes observed for these measures in vertebrates. Cuttlefish’s lower spatial resolution makes it plausible that they may also have neurons that encode disparity across a larger array of visual angles, as known to be the case in mice (25). To coordinate the relative positions of the left and right receptive fields for object tracking, cuttlefish may have evolved similar circuits as those used by chameleons for synchronous and disconjugate saccades (26, 27) and by rats for a greater overhead binocular field (28).
The evidence presented here establishes that cuttlefish make use of stereopsis when hunting and that this improves hunting performance by reducing the distance traveled, the time taken to strike at prey, and allowing it to strike from farther away. Further investigation is required to uncover the neural mechanisms underlying the computation of stereopsis in these animals.
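To put the 10° disparity quoted above in perspective against the cited retinal resolution (2.5° to 0.57° per photoreceptor), a quick conversion (ours, for illustration) expresses the disparity in photoreceptor units:

disparity_deg = 10.0
for deg_per_photoreceptor in (2.5, 0.57):
    # Disparity relative to resolution, in photoreceptor widths.
    print(disparity_deg / deg_per_photoreceptor)  # ~4.0 and ~17.5 photoreceptors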
During 2018, approximately 12 million (4.7%) U.S. residents aged ≥16 years reported driving under the influence of marijuana, and 2.3 million (0.9%) reported driving under the influence of illicit drugs other than marijuana
Azofeifa A, Rexach-Guzmán BD, Hagemeyer AN, Rudd RA, Sauber-Schatz EK. Driving Under the Influence of Marijuana and Illicit Drugs Among Persons Aged ≥16 Years — United States, 2018. MMWR Morb Mortal Wkly Rep 2019;68:1153–1157. https://www.cdc.gov/mmwr/volumes/68/wr/mm6850a1.htm
Summary
What is already known about this topic? The use and co-use of alcohol and drugs has been associated with impairment of psychomotor and cognitive functions while driving.
What is added by this report? During 2018, approximately 12 million (4.7%) U.S. residents aged ≥16 years reported driving under the influence of marijuana, and 2.3 million (0.9%) reported driving under the influence of illicit drugs other than marijuana during the past 12 months.
What are the implications for public health practice? Development, evaluation, and further implementation of strategies to prevent alcohol-, drug-, and polysubstance-impaired driving coupled with standardized testing of impaired drivers and drivers involved in fatal crashes could advance understanding of drug- and polysubstance-impaired driving and assist states and communities with prevention efforts.
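As a rough consistency check on the reported counts and percentages (ours; it ignores survey weighting and rounding in the MMWR report):

reported_count = 12e6  # persons reporting driving under the influence of marijuana
prevalence = 0.047     # the reported 4.7%
# Implied denominator: U.S. residents aged >=16 years.
print(reported_count / prevalence / 1e6)  # ~255 million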
Discussion
Although 4.7% of the U.S. population aged ≥16 years reported driving under the influence of marijuana and 0.9% reported driving under the influence of illicit drugs other than marijuana, these estimates are lower than the 8.0% (20.5 million) who reported driving under the influence of alcohol in 2018 (NSDUH, unpublished data, 2019). The highest prevalence of driving under the influence of marijuana was among persons aged 21–25 years. The second highest was among the youngest drivers (those aged 16–20 years), who already have a heightened crash risk because of inexperience; thus, their substance use is of special concern. In a study of injured drivers aged 16–20 years evaluated at level 1 trauma centers in Arizona during 2008–2014 (3), 10% of tested drivers were simultaneously positive for both alcohol and tetrahydrocannabinol, the main psychoactive component of marijuana. Data from the 2018 NSDUH indicate a high prevalence (34.8%) of past-year marijuana use among young adults aged 18–25 years (4). Studies have reported that marijuana use among teenagers and young adults might alter perception, judgement, short-term memory, and cognitive abilities (5). Given these findings, states could consider developing, implementing, and evaluating targeted strategies to reduce marijuana use and potential subsequent impaired driving, especially among teenagers and young adults. Research has determined that co-use of marijuana or illicit drugs with alcohol increases the risk for driving impairment (5,6). The use of these substances has been associated with impairment of psychomotor and cognitive functions while driving (6,7). In addition, previous research has demonstrated evidence of a statistical association between marijuana use and increased risk for motor vehicle crashes; however, methodologic limitations of studies limit inference of causation (8). Scientific studies have been unable to link blood tetrahydrocannabinol levels to driving impairment (8), and the effects of marijuana in drivers likely vary by dose, potency of the product consumed, means of consumption (e.g., smoking, eating, or vaping), length of use, and co-use of other substances, including alcohol. Additional data are needed to clarify the contribution of drug and polysubstance use to impaired driving prevalence and the resulting crashes, injuries, and deaths.
A national roadside survey using biochemical specimens among drivers aged ≥16 years found that during 2013–2014, the percentages of weekend nighttime drivers who tested positive for alcohol, marijuana (i.e., tetrahydrocannabinol) and illicit drugs were 8.3%, 12.6%, and 15.1%, respectively (9), although a positive test does not necessarily imply impairment. Collecting and testing the biologic specimens (e.g., blood or oral fluids) currently required to test for drugs has challenges, including, in some circumstances, the need for a judge to order collection and testing (which can delay roadside testing, thus allowing drug levels to drop with time); variation in substances tested and methodology used by different toxicology laboratories; and the current state of development of oral fluid testing. The increased use of marijuana and some illicit drugs in the United States (4), along with the results of this report, points to the need for rapid and sensitive assessment tools to ascertain the presence of and impairment by marijuana and other illicit drugs. In addition, adoption and application of standards for toxicology testing and support for laboratories to implement recommendations are needed to improve understanding of the prevalence of drug- and polysubstance-impaired driving (10).
The findings in this report are subject to at least five limitations. First, because NSDUH data are self-reported, they are subject to recall and social desirability biases. Second, variations in laws and regulations among states and counties regarding marijuana could have resulted in negative responses to the NSDUH substance use survey questions for fear of legal consequences, leading to an underestimation of the prevalence of the use and driving under the influence in some jurisdictions. Third, the NSDUH questions are not limited to driving under the influence of marijuana only or each illegal substance only; therefore, persons might be driving under the influence of more than one substance at a given time. Fourth, self-reported data are subject to the respondents’ interpretations of being under the influence of a drug. Finally, NSDUH does not assess whether all respondents drive; therefore, reported percentages of impaired drivers might be underestimated.
Impaired driving is a serious public health concern that needs to be addressed to safeguard the health and safety of all who use the road, including drivers, passengers, pedestrians, bicyclists, and motorcyclists. Collaboration among public health, transportation safety, law enforcement, and federal and state officials is needed for the development, evaluation, and further implementation of strategies to prevent alcohol-, drug-, and polysubstance-impaired driving (2). In addition, standardized testing for alcohol and drugs among impaired drivers and drivers involved in fatal crashes could advance understanding of drug- and polysubstance-impaired driving and assist states and communities with targeted prevention efforts.
Subscription-based websites such as PollenTree.com and Modamily match would-be parents who want to share custody of a child without any romantic expectations; it’s a lot like a divorce, without the wedding or the arguments
A new kind of online service matches people who want to have children, but not necessarily romance. Julie Jargon. The Wall Street Journal, Jan 7 2020. https://www.wsj.com/articles/co-parenting-sites-skip-love-and-marriage-go-right-to-the-baby-carriage-11578393000
When Jenica Andersen felt the tug for a second child at age 37, the single mom weighed her options: wait until she meets Mr. Right or choose a sperm donor and go it alone.
The first option didn’t look promising. The idea of a sperm donor wasn’t appealing, either, because she wanted her child to have an active father, just like her 4-year-old son has. After doing some research, Ms. Andersen discovered another option: subscription-based websites such as PollenTree.com and Modamily that match would-be parents who want to share custody of a child without any romantic expectations. It’s a lot like a divorce, without the wedding or the arguments.
Believing transgender status is biological is correlated with increased support for transgender rights; political conservatives are less likely to believe in biological attribution, but when they do, the effect on support for rights is larger than among liberals
What Drives Support for Transgender Rights? Assessing the Effects of Biological Attribution on U.S. Public Opinion of Transgender Rights. Melanie M. Bowers, Cameron T. Whitley. Sex Roles, January 9 2020. https://link.springer.com/article/10.1007/s11199-019-01118-9
Abstract: Scholars have a limited understanding of what drives opinion on transgender rights. The present study begins to fill this gap by applying attribution theory to data from a national quota-based (U.S. Census approximation) online survey of 1000 U.S. citizens to evaluate how individuals’ beliefs about the biological origin of a person’s transgender status influence support for transgender rights, including employment, housing, healthcare, and bathroom protections. Across all models, we find that believing transgender status is biological is correlated with increased support for transgender rights. Importantly, our results suggest that although political conservatives appear to be less likely to believe in biological attribution, when they do, the belief has a more dramatic impact on support for rights than it does among liberals. Our analysis builds on existing research demonstrating the importance of biological attribution for support of lesbian, gay, and bisexual (LGB) rights and extends our understanding of public opinion on transgender rights. Our findings have important implications for policy experts interested in approaches to addressing transgender rights as well as scholars and practitioners interested in better understanding opinion formation regarding transgender rights because they suggest that providing a biological basis for transgender status may be a way to increase support for protections, particularly among more conservative individuals.
Keywords: Transgender (attitudes toward), explicit attitudes, public opinion, transgender