The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Rishi Rajalingham, Kohitij Kar, Sachi Sanghavi, Stanislas Dehaene & James J. DiCarlo. Nature Communications volume 11, Article number: 3886. Aug 4 2020. https://www.nature.com/articles/s41467-020-17714-3
Abstract: The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.
Discussion
A key goal of human cognitive neuroscience is to understand how the human brain supports the ability to learn to recognize written letters and words. This question has been investigated for several decades using human neuroimaging techniques, yielding putative brain regions that may uniquely underlie orthographic abilities7,8,9. In the work presented here, we sought to investigate this issue in the primate ventral visual stream of naïve rhesus macaque monkeys. Non-human primates such as rhesus macaque monkeys have been essential to study the neuronal mechanisms underlying human visual processing, especially in the domain of object recognition where monkeys and humans exhibit remarkably similar behavior and underlying brain mechanisms, both neuroanatomical and functional13,14,15,16,39,40. Given this strong homology, and the relative recency of reading abilities in the human species, we reasoned that the high-level visual representations in the primate ventral visual stream could serve as a precursor that is recycled by developmental experience for human orthographic processing abilities. In other words, we hypothesized that the neural representations that directly underlie human orthographic processing abilities are strongly constrained by the prior evolution of the primate visual cortex, such that representations present in naïve, illiterate, non-human primates could be minimally adapted to support orthographic processing. Here, we observed that orthographic information was explicitly encoded in sampled populations of spatially distributed IT neurons in naïve, illiterate, non-human primates. Our results are consistent with the hypothesis that the population of IT neurons in each subject forms an explicit (i.e., linearly separable, as per ref. 21) representation of orthographic objects, and could serve as a common substrate for learning many visual discrimination tasks, including ones in the domain of orthographic processing.
We tested a battery of 30 orthographic tests, focusing on a word classification task (separating English words from pseudowords). This task is referred to as “lexical decision” when tested on literate subjects recognizing previously learned words (i.e., when referencing a learned lexicon). For nonliterate subjects (e.g., baboons or untrained IT decoders), word classification is the ability to identify orthographic features that distinguish between words and pseudowords and generalize to novel strings. This generalization must rely on specific visual features whose distribution differs between words and pseudowords; previous work suggests that such features may correspond to specific bigrams17, position-specific letter combinations41, or distributed visual features42. While this battery of tasks is not an exhaustive characterization of orthographic processing, we found that it has the power to distinguish between alternative hypotheses. Indeed, these tasks could not be accurately performed by linear readout decoders of the predominant input visual representation to IT (area V4) or by approximations of lower levels of the ventral visual stream, unlike many other coarse discrimination tasks (e.g., contrasting orthographic and nonorthographic stimuli). We note that the successful classifications from IT-based decoders do not necessarily imply that the brain exclusively uses IT or the same coding schemes and algorithms that we have used for decoding. Rather, the existence of this sufficient code in untrained and illiterate non-human primates suggests that the primate ventral visual stream could be minimally adapted through experience-dependent plasticity to support orthographic processing behaviors.
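As a hedged illustration of what "simple linear read-outs of IT population responses" means in practice, the sketch below trains a cross-validated linear decoder on simulated population firing rates for a word/pseudoword classification. The array shapes, simulated responses, and scikit-learn pipeline are assumptions for illustration only, not the authors' recording or decoding procedure.

```python
# Minimal sketch (not the authors' pipeline): a linear decoder over a
# simulated "IT-like" population response matrix. Rows are stimuli
# (letter strings), columns are neural sites, entries are trial-averaged
# firing rates; labels mark word (1) vs. pseudoword (0).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli, n_sites = 400, 300
labels = rng.integers(0, 2, n_stimuli)                 # hypothetical word/pseudoword labels
signal = rng.normal(0, 1, (2, n_sites))                # class-dependent population pattern
rates = signal[labels] + rng.normal(0, 3, (n_stimuli, n_sites))  # noisy per-stimulus responses

# "Explicit" (linearly separable) information = high cross-validated
# accuracy of a simple linear readout of the population vector.
decoder = LogisticRegression(C=0.1, max_iter=1000)
acc = cross_val_score(decoder, rates, labels, cv=5).mean()
print(f"cross-validated word classification accuracy: {acc:.2f}")
```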
These results are consistent with a variant of the “neuronal recycling” theory, which posits that the features that support visual object recognition may have been coopted for written word recognition5,6,24. Specifically, this variant of the theory is that humans have inherited a pre-existing brain system (here, the ventral visual stream) from recent evolutionary ancestors, and they either inherited or evolved learning mechanisms that enable individuals to adapt the outputs of that system during their lifespan for word recognition and other core aspects of orthographic processing. Consistent with this, our results suggest that prereading children likely have a neural population representation that can readily be reused to learn invariant word recognition. Relatedly, it has been previously proposed that the initial properties of this system may explain the child’s early competence and errors in letter recognition, e.g., explaining why children tend to make left-right inversion errors by the finding that IT neurons tend to respond similarly to horizontal mirror images of objects36,37,43. Consistent with this, we here found that the representation of IT-based decoders exhibited a similar signature of left-right mirror symmetry. According to this proposal, this neural representation would become progressively shaped to support written word recognition in a specific script over the course of reading acquisition, and may also explain why all human writing systems throughout the world rely on a universal repertoire of basic shapes24. As shown in the present work, those visual features are already well encoded in the ventral visual pathway of illiterate primates, and may bias cultural evolution by determining which scripts are more easily recognizable and learnable.
A similar “neuronal recycling hypothesis” has been proposed for the number system: all primates may have inherited a pre-existing brain system (in the intraparietal sulcus) in which approximate number and other quantitative information is well encoded44,45. It has been suggested that these existing representations of numerosity may be adapted to support exact, symbolic arithmetic, and may bias the cultural evolution of numerical symbols6,46. Likewise, such representations have been found to spontaneously emerge in neural network models optimized for other visual functions47. Critically, the term “recycling,” in the narrow sense in which it was introduced, refers to such adaptations of neural mechanisms evolved for evolutionary older functions to support newer cultural functions, where the original function is not entirely lost and the underlying neural functionality constrains what the brain can most easily learn. It remains to be seen whether all instances of developmental plasticity meet this definition, or whether learning may also simply replace unused functions without recycling them48.
In addition to testing a prediction of this neuronal recycling hypothesis, we also explored the question of how orthographic stimuli are encoded in IT neurons. Decades of research have shown that IT neurons exhibit selectivity for complex visual features with remarkable tolerance to changes in viewing conditions (e.g., position, scale, and pose)19,22,23. More recent work demonstrates that the encoding properties of IT neurons, in both humans and monkeys, are best explained by the distributed complex invariant visual features of hierarchical convolutional neural network models30,49,50. Consistent with this prior work, we here found that the firing rate responses of individual neural sites in macaque IT were modulated by, but did not exhibit strong selectivity to, orthographic properties such as letters and letter positions. In other words, we did not observe precise tuning as postulated by “letter detector” neurons, but instead coarse tuning for both letter identity and position. It is possible that, over the course of learning to read, experience-dependent plasticity could fine-tune the representation of IT to reflect the statistics of printed words (e.g., single-neuron tuning for individual letters or bigrams). Moreover, such experience could alter the topographic organization to exhibit millimeter-scale spatial clusters that preferentially respond to orthographic stimuli, as has been shown in juvenile animals in the context of symbol and face recognition behaviors18,51. Together, such putative representational and topographic changes could induce a reorientation of cortical maps towards letters at the expense of other visual object categories, eventually resulting in the specialization observed in the human visual word form area (VWFA). However, our results demonstrate that, even prior to such putative changes, the initial state of IT in untrained monkeys has the capacity to support many learned orthographic discriminations.
In summary, we found that the neural population representation in IT cortex in untrained macaque monkeys is largely able, with some supervised instruction, to extract explicit representations of written letters and words. This did not have to be so—the visual representations that underlie orthographic processing could instead be largely determined over postnatal development by the experience of learning to read. In that case, the IT representation measured in untrained monkeys (or even in illiterate humans) would likely not exhibit the ability to act as a precursor of orthographic processing. Likewise, orthographic processing abilities could have been critically dependent on other brain regions, such as speech and linguistic representations, or putative flexible domain-general learning systems, that evolved well after the evolutionary divergence of humans and Old-World monkeys. Instead, we here report evidence for a precursor of orthographic processing in untrained monkeys. This finding is consistent with the hypothesis that learning rests on pre-existing neural representations which it only partially reshapes.
Wednesday, August 5, 2020
Rolf Degen summarizing... Among children from highly educated parents, genetic factors had a far greater effect on educational attainment than among those whose parents had low levels of education, accounting for 75% of the individual differences
How Social and Genetic Influences Contribute to Differences in Educational Success within the Family. Tina Baier. Doctoral thesis, Faculty of Sociology at Bielefeld University, Germany. July 9, 2019. https://pub.uni-bielefeld.de/download/2940317/2940667/Dissertation_Baier.pdf
4.4 Conclusion and Discussion
This study extended previous research on gene–environment interactions for education in two crucial ways. First, we acknowledged that not only the proximate family but also the broader institutional environment can shape genetic effects on education. Second, we extended previous research that focused originally on IQ to indicators of educational success, namely educational achievement measured in school grades and educational attainment measured in years of education. Specifically, we addressed the following research questions: Do genetic effects on educational success vary across countries, and are there differences in the social stratification of genetic effects on educational success among these countries?
We selected three advanced industrialized societies for our study: Germany, Sweden, and the United States. These countries largely differ in the setup of their educational systems and represent prototypically three different types of welfare regimes, which are often used in internationally comparative social inequality research. We hypothesized that genetic influences on educational success are overall weaker in Germany and the United States than in Sweden. Furthermore, we expected that the association between parents’ socioeconomic standing and genetic effects on educational success is stronger in Germany and in the United States than in Sweden. For Germany, our hypothesis was rooted in the early tracking system and for the United States in the less extensive welfare regime.
Our study yielded three important findings: First, we found that genetic effects on years of education are smaller than genetic effects on school grades, independent of country. Hence, genes are more important for educational achievement than for educational attainment. In addition, shared environmental influences on educational attainment were stronger in Germany and the United States. This supports the notion of socially stratified schooling decisions that operate over and above educational achievement (Boudon 1974; Breen and Goldthorpe 1997; Erikson and Jonsson 1996). However, we did not find effects of shared environmental influences on educational attainment in Sweden, which diverges from previous findings based on an international meta-analysis (Branigan, Mccallum, and Freese 2013).
There are three reasons that could account for the conflicting results. First, our results are based on more recent birth cohorts (i.e., we studied birth cohorts from 1975–1982, while the meta-analysis examined birth cohorts from 1926–1958), and previous research shows that genetic influences on education have increased among birth cohorts born in the second half of the twentieth century (Branigan, Mccallum, and Freese 2013; Heath et al. 1985). Second, the samples used in the meta-analysis were not all population based, including the sample for Sweden, which was drawn from the Swedish Twin Registry. Third, the meta-analysis did not account for assortative mating. Without such an adjustment, genetic influences tend to be underestimated, while shared environmental influences are overestimated (Freese and Jao 2017). That shared environmental influences were absent for educational attainment in Sweden indicates that educational choices are more closely related to educational achievement, which could be explained by the less selective, comprehensive schooling system.
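For readers unfamiliar with how twin designs apportion variance, the snippet below applies the classical Falconer decomposition to hypothetical MZ/DZ twin correlations and shows why ignoring assortative mating (which inflates the DZ correlation) shifts apparent variance from genes to the shared environment. The numbers are purely illustrative, not estimates from this study.

```python
# Illustrative Falconer decomposition from twin correlations (toy numbers,
# not estimates from this study): A = 2(rMZ - rDZ), C = 2*rDZ - rMZ, E = 1 - rMZ.
def ace(r_mz, r_dz):
    a = 2 * (r_mz - r_dz)   # additive genetic variance share
    c = 2 * r_dz - r_mz     # shared environmental share
    e = 1 - r_mz            # non-shared environment + measurement error
    return a, c, e

print(ace(0.70, 0.40))  # random mating assumed: A=0.60, C=0.10, E=0.30
# Assortative mating raises the DZ correlation; treating the data as if mating
# were random makes the same trait look less "genetic" and more "shared-environmental".
print(ace(0.70, 0.45))  # -> A=0.50, C=0.20, E=0.30
```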
Second, we identified cross-country differences in genetic effects on educational success. Genetic effects on educational success were least pronounced in Germany and most pronounced in Sweden. Our hypothesis on cross-country differences was therefore supported for Germany, since genetic effects were comparatively small for both indicators of educational success. For the United States, our hypothesis was only partly supported, since genetic effects on educational attainment were comparatively small, while genetic effects on educational achievement were at least as large as in Sweden. Together, these findings supported our expectation that more egalitarian educational systems have a positive effect on the development of genetic potential for educational success and that early tracking might be an important factor in the suppression of related genetic effects. Future research should build upon our findings and focus in more detail on the impact of the tracking system. For instance, the educational systems in the Nordic countries changed from tracked to comprehensive schooling (see Gustafsson (2018) for an overview of the educational reforms in Denmark, Finland, Norway, and Sweden). If tracking lowers genetic effects on education, genetic effects on education should increase after comprehensive schools were introduced. Systematic cross-country comparisons using a culturally homogeneous set of countries (a “most similar cases” design; Lijphart 1971) would increase the generalizability of the results.
Third, we found indications of a social stratification of genetic effects in line with the Scarr–Rowe hypothesis for educational success in Germany and the United States. We did not find any evidence for a gene–environment interaction in line with the Scarr–Rowe hypothesis in Sweden. If anything, this underlines the positive impact of more egalitarian educational systems on the development of genetic effects relevant to education. However, differences between countries are too small and not robust enough to clearly support our hypothesis. Yet, the evidence for an interaction in line with the Scarr–Rowe hypothesis for Germany is weaker than previously found using a more fine-grained measure of years of education (Baier and Lang 2019). Thus, differences in the results for Germany between this and the previous study are likely driven by the harmonized measure of education, which comes at the cost of precision. For the international comparison, however, it is crucial to investigate the same measure of education in each country; otherwise, results on genetic and environmental influences can be differently affected by the way educational attainment is measured and, thus, cannot be meaningfully interpreted across countries.
It is important to note that twins’ zygosity was unknown for our sample from Sweden. In line with previous research, we adjusted for the missing information based on the assumption that same-sex and opposite-sex dizygotic twin births are equally likely (Figlio et al. 2017). This assumption is fairly reasonable. In addition, there is no reason to believe that the distribution of same-sex and opposite-sex dizygotic twin births varies by parents’ social background, which would have affected our results with regard to the Scarr–Rowe hypothesis. Nonetheless, future research is needed to obtain more precise estimates of genetic influences on educational success. Since some twin pairs tend to be misclassified, our adjustment can lead to an underestimation of genetic differences between monozygotic and dizygotic twins. Therefore, our results represent lower bounds of genetic influence on educational success, and the overall conclusions we draw from our cross-country comparison should not be affected by this adjustment. If anything, we underestimated the role of genes in Sweden.
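The equal-likelihood assumption for same-sex and opposite-sex dizygotic births can be turned into concrete arithmetic: because opposite-sex pairs are dizygotic by definition, the expected share of monozygotic pairs among same-sex twins follows from the pair counts alone. The snippet below works through that logic with made-up counts; it is a sketch of the general idea, not the exact procedure of Figlio et al. (2017).

```python
# Toy illustration of inferring the MZ share among same-sex twin pairs when
# zygosity is unobserved. Assumption (as stated in the text): same-sex and
# opposite-sex dizygotic births are equally likely, so the number of same-sex
# DZ pairs is expected to equal the number of opposite-sex pairs.
n_same_sex_pairs = 1200      # hypothetical count, zygosity unknown
n_opposite_sex_pairs = 700   # hypothetical count, all dizygotic by definition

expected_dz_same_sex = n_opposite_sex_pairs
expected_mz_same_sex = n_same_sex_pairs - expected_dz_same_sex
p_mz_given_same_sex = expected_mz_same_sex / n_same_sex_pairs
print(f"expected P(MZ | same-sex pair) = {p_mz_given_same_sex:.2f}")  # 0.42 here
```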
For the United States, our sample sizes were comparatively small, and analyses for parents’ EGP class were based on broad categorizations (i.e., EGP classes I and II versus EGP III–VII, including the non-employed). However, the Add Health data are currently the only nationally representative dataset that includes twins. Since the quality of educational institutions varies considerably among federal states, the representativeness across states is crucial for our study purposes. Nonetheless, more research for the United States is needed to test in a more fine-grained way for the social stratification of genetic influences on educational success.
In sum, our study is the first to examine cross-country differences in genetic effects on educational success. We found substantial differences in genetic effects on educational success among Germany, Sweden, and the United States. An important factor behind these cross-country differences may be the stratification of educational systems, specifically the strictness and timing of tracking.
Male crabs benefit from grooming because females prefer males with clean claws over dirty claws, but the time spent grooming detracts from the amount of time available for courting females
Cost of an elaborate trait: a trade-off between attracting females and maintaining a clean ornament. Erin L McCullough, Chun-Chia Chou, Patricia R Y Backwell. Behavioral Ecology, araa072, August 4 2020. https://doi.org/10.1093/beheco/araa072
Abstract: Many sexually selected ornaments and weapons are elaborations of an animal’s outer body surface, including long feathers, colorful skin, and rigid outgrowths. The time and energy required to keep these traits clean, attractive, and in good condition for signaling may represent an important but understudied cost of bearing a sexually selected trait. Male fiddler crabs possess an enlarged and brightly colored claw that is used both as a weapon to fight with rival males and also as an ornament to court females. Here, we demonstrate that males benefit from grooming because females prefer males with clean claws over dirty claws but also that the time spent grooming detracts from the amount of time available for courting females. Males, therefore, face a temporal trade-off between attracting the attention of females and maintaining a clean claw. Our study provides rare evidence of the importance of grooming for mediating sexual interactions in an invertebrate, indicating that sexual selection has likely shaped the evolution of self-maintenance behaviors across a broad range of taxa.
Understanding Personality through Patterns of Daily Socializing: Neuroticism is shown to be associated with shorter periods of extended conversation (periods of at least 12 minutes)
Understanding Personality through Patterns of Daily Socializing: Applying Recurrence Quantification Analysis to Naturalistically Observed Intensive Longitudinal Social Interaction Data. Alexander F. Danvers, David A. Sbarra, Matthias R. Mehl. European Journal of Personality, August 3 2020. https://doi.org/10.1002/per.2282
Abstract: Ambulatory assessment methods provide a rich approach for studying daily behaviour. Too often, however, these data are analysed in terms of averages, neglecting patterning of this behaviour over time. This paper describes recurrence quantification analysis (RQA), a non‐linear time series technique for analysing dynamic systems, as a method for analysing patterns of categorical, intensive longitudinal ambulatory assessment data. We apply RQA to objectively assessed social behaviour (e.g. talking to another person) coded from the Electronically Activated Recorder. Conceptual interpretations of RQA parameters, and an analysis of Electronically Activated Recorder data in adults going through a marital separation, are provided. Using machine learning techniques to avoid model overfitting, we find that adding RQA parameters to models that include just average amount of time spent talking (a static measure) improves prediction of four Big Five personality traits: extraversion, neuroticism, conscientiousness, and openness. Our strongest results suggest that a combination of average amount of time spent talking and four RQA parameters yields an R² = .09 for neuroticism. Neuroticism is shown to be associated with shorter periods of extended conversation (periods of at least 12 minutes), demonstrating the utility of RQA to identify new relationships between personality and patterns of daily behaviour.
This article earned Open Data badge through Open Practices Disclosure from the Center for Open Science: https://osf.io/tvyxz/wiki. The materials are permanently and openly accessible at https://osf.io/5nkr9/
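To make the RQA parameters concrete, the sketch below computes two standard measures (recurrence rate and determinism) from a simulated binary talking/not-talking series. The binning, categorical recurrence criterion, and minimum line length are illustrative assumptions, not the settings used in the paper.

```python
# Minimal RQA sketch for a categorical (binary) time series: 1 = talking,
# 0 = not talking, one value per time bin. Not the authors' exact analysis.
import numpy as np

def rqa_measures(x, lmin=2):
    x = np.asarray(x)
    n = len(x)
    R = (x[:, None] == x[None, :]).astype(int)   # recurrence matrix: same state at times i and j
    off_diag = ~np.eye(n, dtype=bool)
    rr = R[off_diag].mean()                      # recurrence rate

    # Determinism: share of recurrent points lying on diagonal lines of length >= lmin
    # (upper triangle only; the matrix is symmetric, so the ratio is unchanged).
    lengths = []
    for k in range(1, n):
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    lengths = np.array(lengths)
    det = lengths[lengths >= lmin].sum() / lengths.sum() if lengths.sum() else np.nan
    return rr, det

rng = np.random.default_rng(0)
talk = (rng.random(200) < 0.3).astype(int)       # toy series: talking ~30% of bins
print(rqa_measures(talk))
```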
Mobile carrying devices: The fundamental relationship between mobile containers & foresight is easily overlooked, resulting in their significance in the study of human cognitive development being largely unrecognized
Mobile containers in human cognitive evolution studies: Understudied and underrepresented. Michelle C. Langley, Thomas Suddendorf. Evolutionary Anthropology: Issues, News, and Reviews, August 3 2020. https://doi.org/10.1002/evan.21857
Abstract: Mobile carrying devices—slings, bags, boxes, containers, etc.—are a ubiquitous tool form among recent human communities. So ingrained are they to our present lifeways that the fundamental relationship between mobile containers and foresight is easily overlooked, resulting in their significance in the study of human cognitive development being largely unrecognized. Exactly when this game‐changing innovation appeared and became an essential component of the human toolkit is currently unknown. Taphonomic processes are obviously a significant factor in this situation; however, we argue that these devices have also not received the attention that they deserve from human evolution researchers. Here we discuss what the current archeological evidence is for Pleistocene‐aged mobile containers and outline the various lines of evidence that they provide for the origins and development of human cognitive and cultural behavior.
Tuesday, August 4, 2020
Persistence of gross domestic product below precrisis trends remains puzzling; transitory events, especially extreme ones, generate persistent changes in beliefs and macro outcomes
The Tail That Wags the Economy: Beliefs and Persistent Stagnation. Julian Kozlowski, Laura Veldkamp, Venky Venkateswaran. Journal of Political Economy, Volume 128, Number 8, August 2020 (June 10, 2020). https://www.journals.uchicago.edu/doi/abs/10.1086/707735
Abstract: The Great Recession was a deep downturn with long-lasting effects on credit, employment, and output. While narratives about its causes abound, the persistence of gross domestic product below precrisis trends remains puzzling. We propose a simple persistence mechanism that can be quantified and combined with existing models. Our key premise is that agents do not know the true distribution of shocks but use data to estimate it nonparametrically. Then, transitory events, especially extreme ones, generate persistent changes in beliefs and macro outcomes. Embedding this mechanism in a neoclassical model, we find that it endogenously generates persistent drops in economic activity after tail events.
5 Conclusion
Economists typically assume that agents in their models know the distribution of shocks. In this paper, we showed that relaxing this assumption introduces persistent economic responses to tail events. The agents in our model behave like classical econometricians, re-estimating distributions as new data arrive. Under these conditions, observing a tail event like the 2008–09 Great Recession in the US causes agents to assign larger weights to similar events in the future, depressing investment and output. Crucially, these effects last for a long time, even when the underlying shocks are transitory. The rarer the event that is observed, the larger and more persistent the revision in beliefs. The effects on economic activity are amplified when investments are financed with debt. This is because debt payoffs (and therefore borrowing costs) are particularly sensitive to the probability of extreme negative outcomes. When this mechanism is quantified using data for the US economy, the predictions of the model resemble observed macro and asset market outcomes in the wake of the Great Recession, suggesting that the persistent nature of the recent stagnation is due, at least partly, to the fact that the events of 2008–09 changed the way market participants think about tail risk.
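A minimal sketch of the belief-revision mechanism, under assumed parameters rather than the paper's calibration: if agents estimate the shock distribution nonparametrically from observed history, a single tail observation permanently raises the estimated probability of similar events, because that observation never leaves the estimation sample.

```python
# Toy version of nonparametric belief updating about tail risk
# (illustrative shock parameters, not the paper's calibration).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
history = rng.normal(0.02, 0.02, 60)        # 60 "normal" years of shocks
crisis = np.append(history, -0.10)          # the same history plus one tail realization

def tail_prob(sample, threshold=-0.05):
    """Estimated probability of a shock below `threshold` under a kernel density fit."""
    return gaussian_kde(sample).integrate_box_1d(-np.inf, threshold)

print(f"perceived tail risk before the tail event: {tail_prob(history):.4f}")
print(f"perceived tail risk after the tail event:  {tail_prob(crisis):.4f}")
# The extra observation stays in the sample forever, so estimated tail risk
# remains elevated even if every subsequent shock is benign: the belief
# revision, not the shock itself, is what persists.
```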
Younger adults report more distress and less well‐being: A cross‐cultural study of event centrality, depression, post‐traumatic stress disorder and life satisfaction
Younger adults report more distress and less well‐being: A cross‐cultural study of event centrality, depression, post‐traumatic stress disorder and life satisfaction. Alejandra Zaragoza, Sinué Salgado, Zhifang Shao, Dorthe Berntsen. Applied Cognitive Psychology, June 10 2020. https://doi.org/10.1002/acp.3707
Summary: The extent to which highly emotional autobiographical memories become central to one's identity and life story influences mental health. Young adults report higher distress and lower well‐being, compared with middle‐aged and/or older adults; whether this replicates across cultures is still unclear. First, we provide a review of the literature that examines age‐differences in depression, post‐traumatic stress disorder (PTSD), and life satisfaction in adulthood across cultures. Second, we report findings from a cross‐cultural study that examined event centrality of highly positive and negative autobiographical memories along with symptoms of depression and PTSD, and levels of life satisfaction in approximately 1000 young and middle‐aged adults from Mexico, Greenland, China and Denmark. Both age groups provided higher centrality ratings to the positive life event; however, the relative difference between the ratings for the positive and negative event was smaller in the young adults. Young adults reported significantly more distress and less well‐being across cultures.
In financial cities we observe that on average online porn viewing decreases as financial stress increases; evidence suggests the causal channel to be altered mood
Sex and “the City”: Financial stress and online pornography consumption. Michael Donadelli, Marie Lalanne. Journal of Behavioral and Experimental Finance, August 3 2020, 100379. https://doi.org/10.1016/j.jbef.2020.100379
Abstract: We examine the effects of financial stress on online pornography consumption. We use novel data on daily accesses to one of the most popular porn websites (xHamster) for 43 different cities belonging to 10 countries for the year 2016. In financial cities, in which people are more likely to be affected by financial stress, we observe that on average online porn viewing decreases as financial stress increases. We present some evidence suggesting the causal channel to be altered mood.
Keywords: Financial stress; Uncertainty; Online pornography
4. Concluding remarks
This paper employs a unique and novel dataset providing accesses to one of the most visited porn websites for several cities around the world in order to examine the relationship between financial stress and online pornography consumption. Regression results suggest that on stressful days people are less prone to go for online pornography. The effect seems to be driven by financial cities: financial stress does indeed affect pornography demand, as people living in the proximity of financial districts are more likely to be impacted by stock market fluctuations.
Unfortunately, our data are not granular enough to (strictly) capture the behavior of people working in the financial industry (e.g., financial analysts, traders, market makers, private bankers, etc.) or living (strictly) in cities’ financial districts (e.g., Canary Wharf in London or FiDi in New York). Moreover, we rely on accesses to xHamster only. In this respect, future research could make an effort to retrieve richer data in order to get more insights into the relationship between online pornography consumption and financial market dynamics. We believe that examining the effects of rising financial stress on people’s daily standard entertainment activity (including online pornography viewing) represents an interesting avenue for future research.
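As an illustration of the kind of panel design described above (daily city-level accesses regressed on a financial-stress measure, with the effect allowed to differ for financial cities), the sketch below estimates a two-way fixed-effects OLS on simulated data. The variable names, stress index, and specification are assumptions for illustration, not the authors' exact model.

```python
# Hedged sketch of a two-way fixed-effects regression in the spirit of the
# design described above (simulated data; not the authors' specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
cities = [f"city{i}" for i in range(6)]
n_days = 250
stress = rng.normal(size=n_days)                     # one global daily financial-stress index
df = pd.DataFrame([(c, d) for c in cities for d in range(n_days)], columns=["city", "day"])
df["stress"] = stress[df["day"].to_numpy()]
df["financial"] = df["city"].isin(["city0", "city1"]).astype(int)   # hypothetical financial cities
# Simulated outcome: stress lowers accesses only in financial cities.
df["log_access"] = 5.0 - 0.10 * df["stress"] * df["financial"] + rng.normal(0, 1, len(df))

# Day fixed effects absorb the common stress level; the interaction captures
# the differential response of financial cities, as in the paper's argument.
fit = smf.ols("log_access ~ stress:financial + C(city) + C(day)", data=df).fit()
print(fit.params["stress:financial"])   # should be close to -0.10
```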
Does Using Social Media Jeopardize Well-Being? The conclusions about the causal impact of social media on rising mental health problems in the population might be premature
Does Using Social Media Jeopardize Well-Being? The Importance of Separating Within- From Between-Person Effects. Olga Stavrova, Jaap Denissen. Social Psychological and Personality Science, August 3, 2020. https://doi.org/10.1177/1948550620944304
Abstract: Social networking sites (SNS) are frequently criticized as a driving force behind rising depression rates. Yet empirical studies exploring the associations between SNS use and well-being have been predominantly cross-sectional, while the few existing longitudinal studies provided mixed results. We examined prospective associations between SNS use and multiple indicators of well-being in a nationally representative sample of Dutch adults (N ∼ 10,000), comprising six waves of annual measures of SNS use and well-being. We used an analytic method that estimated prospective effects of SNS use and well-being while also estimating time-invariant between-person associations between these variables. Between individuals, SNS use was associated with lower well-being. However, within individuals, year-to-year changes in SNS use were not prospectively associated with changes in well-being (or vice versa). Overall, our analyses suggest that the conclusions about the causal impact of social media on rising mental health problems in the population might be premature.
Keywords: social media, social networking sites, life satisfaction, emotions, loneliness, self-esteem, longitudinal methods, between- and within-person effects
Discussion
Social media are often criticized as a driving force behind the current depression epidemics (Twenge et al., 2018). Yet the empirical evidence supporting the harmful effect of social media use on individuals has been based on predominantly cross-sectional data, while the few existing longitudinal studies provided mixed results. Herein, we used a large nationally representative panel of Dutch adults who contributed to a maximum of six yearly assessments of both SNS use and various indicators of well-being. Importantly, in contrast to many previous longitudinal studies, we relied on advanced statistical methods that are able to disentangle between- from within-person effects. Given policy makers’ recent interest in interventions aimed at curbing the suspected harmful consequences of social media use (UK Commons Select Committees, 2019), assessing whether SNS use is indeed associated with poorer well-being over time at a within-person level is particularly important.
Our results showed that, on average, heavier SNS users indeed tended to consistently report slightly lower well-being—even though, consistent with recent large-scale cross-sectional studies (Orben & Przybylski, 2019a, 2019b), these effects were small. Importantly, despite the presence of between-person associations, within-individual changes in SNS use were not associated with within-individual changes in well-being (and vice versa). Moreover, our sample size would have allowed us to detect even tiny effects at an α level of .05 (N = 10,000 gives 99% power to detect a correlation of .04), suggesting that these null effects are unlikely to be explained by a lack of power.
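A quick way to sanity-check that power claim is the Fisher z approximation for a correlation test: under the stated assumptions (r = .04, N = 10,000, two-sided α = .05) it gives roughly 98% power, in line with the ~99% reported (the small gap plausibly reflects a different approximation or test).

```python
# Approximate power for detecting a correlation via the Fisher z transform
# (a back-of-the-envelope check, not the authors' own power analysis).
import numpy as np
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    z = np.arctanh(r)                    # Fisher z of the true correlation
    se = 1.0 / np.sqrt(n - 3)            # standard error of the estimated z
    crit = norm.ppf(1 - alpha / 2)       # two-sided critical value
    return norm.cdf(z / se - crit) + norm.cdf(-z / se - crit)

print(f"power to detect r = .04 at N = 10,000: {power_correlation(0.04, 10_000):.3f}")
```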
How can we reconcile the presence of negative associations between SNS use and well-being at the between-person level with the absence of the prospective effects in either direction? One rather mundane explanation is that between-person associations might be driven by confounding with some third variables. For example, emotionally unstable and introverted individuals might be more likely to use social media (Liu & Campbell, 2017) and to report lower well-being (Diener et al., 2003). As a result, interindividual differences in personality traits, such as neuroticism or introversion, might be responsible for both higher SNS use and lower well-being. Relatedly, the negative between-person associations between SNS use and well-being could be (at least partially) driven by common method variance (Orben & Lakens, 2019). Future research should investigate these possibilities.
Alternatively, SNS use and well-being might affect each other, but on a shorter timescale, such as hours, days, or weeks (rather than years). Hence, assessing SNS use and well-being with shorter time intervals, for example, using daily diary or experience sampling methods would shed some light on this question. Nevertheless, it is important to note that even if SNS use affects daily fluctuations in well-being, the fact that these short-term associations do not translate into longer term effects, as indicated by our results, is worth further investigations.
The presence of between-person associations combined with the lack of within-person prospective effects in our findings might have implications that go beyond the field of social media effects. Specifically, it adds to the literature on the importance of separating effects at different levels of analysis more generally (Curran et al., 2014). The associations between the variables at one level of analysis (e.g., individuals) do not necessarily mirror the associations between these variables at another level (e.g., groups), and using the relations at one level to make inferences about the relations at another level represents an error of inference (ecological fallacy; Robinson, 1950). This has been common knowledge in other social science disciplines, such as sociology or education research, for decades (Raudenbush & Willms, 1995; Robinson, 1950). As psychologists have recently been showing increasing interest in exploring psychological phenomena across different levels of analysis too (e.g., within-person vs. between-person), using methods that allow for a proper differentiation of between- from within-person effects is essential (Usami et al., 2019).
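The core of that separation is simple to state in code: person-mean centering splits a predictor into a stable between-person component and wave-to-wave within-person deviations, which can then receive separate coefficients in a model. The toy sketch below shows only the decomposition step, with made-up values; it is not the full random-intercept cross-lagged model used in studies like this one.

```python
# Toy person-mean centering: split SNS use into a between-person component
# (each person's own average) and within-person deviations from it.
import pandas as pd

panel = pd.DataFrame({
    "id":  [1, 1, 1, 2, 2, 2],            # two hypothetical respondents, three waves each
    "sns": [2.0, 3.0, 4.0, 6.0, 5.0, 7.0],
    "wb":  [7.0, 6.5, 6.8, 5.0, 5.5, 4.8],
})
panel["sns_between"] = panel.groupby("id")["sns"].transform("mean")
panel["sns_within"] = panel["sns"] - panel["sns_between"]
print(panel)
# "sns_between" carries the (negative) between-person association with well-being;
# "sns_within" is what a prospective within-person effect would have to work through.
```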
It is important to note this study’s limitations. While the data set we used allowed us to include a broad range of well-being indicators, it did not offer a differentiated selection of SNS use measures. Specifically, the available variables mainly reflected a quantitative aspect of use, such as frequency and intensity. However, the mere number of hours spent on SNS might matter less than the content one is exposed to and the type of activities one is engaged in. For example, researchers have recently started differentiating between passive (browsing other people’s profiles) and active (posting messages and status updates) SNS use, showing that only the former (but not the latter) was associated with lower well-being (Verduyn et al., 2015). In addition, SNS use might have different consequences depending on what motives individuals pursue, with using social media for making new friends (vs. for social skills compensation) having positive (vs. negative) correlates (Teppers et al., 2014). Ultimately, while this study used self-report measures of SNS use, we hope that future studies will rely on objective measures, such as those obtained from smartphone screen time applications (Ellis et al., 2018). In addition, our attempt to include as many diverse measures of well-being as possible resulted in varying time lags between SNS use and different measures of well-being. Although our additional analyses (see Supplementary Materials) showed that the length of the time lag had no consistent effect on the associations between SNS use and well-being, we hope that data sets will become available with even more regular and fine-grained assessments than LISS.
Does Disgust Increase Unethical Behavior? A Replication of Winterich, Mittal, and Morales (2014) Fails to Reproduce Results
Does Disgust Increase Unethical Behavior? A Replication of Winterich, Mittal, and Morales (2014). Tamar Kugler, Charles N. Noussair, Denton Hatch. Social Psychological and Personality Science, August 3, 2020. https://doi.org/10.1177/1948550620944083
Abstract: We consider the relationship between disgust and ethical behavior. Winterich, Mittal, and Morales report several experiments finding that disgust increases unethical behavior. We replicated three of their studies, using high-powered designs with a total of 1,239 participants, three different procedures to induce disgust, and three different measures of unethical behavior. We observe no effect of disgust on unethical behavior in any of the studies, supporting the contention that disgust has no effect on ethical decision making.
Keywords: unethical behavior, self-interested behavior, disgust, emotion, replication
Meta-data from 3,143,270 users: Compared to pre-pandemic levels, found increases in the number of meetings per person (+12.9%) and the number of attendees per meeting (+13.5%), but decreases in the average duration (-20.1%)
Collaborating During Coronavirus: The Impact of COVID-19 on the Nature of Work. Evan DeFilippis, Stephen Michael Impink, Madison Singell, Jeffrey T. Polzer, Raffaella Sadun. NBER Working Paper No. 27612, July 2020. https://www.nber.org/papers/w27612
Abstract: We explore the impact of COVID-19 on employee's digital communication patterns through an event study of lockdowns in 16 large metropolitan areas in North America, Europe and the Middle East. Using de-identified, aggregated meeting and email meta-data from 3,143,270 users, we find, compared to pre-pandemic levels, increases in the number of meetings per person (+12.9 percent) and the number of attendees per meeting (+13.5 percent), but decreases in the average length of meetings (-20.1 percent). Collectively, the net effect is that people spent less time in meetings per day (-11.5 percent) in the post-lockdown period. We also find significant and durable increases in length of the average workday (+8.2 percent, or +48.5 minutes), along with short-term increases in email activity. These findings provide insight from a novel dataset into how the nature of work has changed for a large sample of knowledge workers. We discuss these changes in light of the ongoing challenges faced by organizations and workers struggling to adapt and perform in the face of a global pandemic.
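As a rough consistency check on these figures (an illustrative back-of-the-envelope calculation of ours, not part of the paper's analysis), per-person daily meeting time should move approximately with the product of the change in meetings per person and the change in average meeting length:

# Back-of-the-envelope check (illustrative only, not the paper's method):
# daily meeting time per person ~ meetings per person x average meeting length.
meetings_per_person = 1 + 0.129   # +12.9%
avg_meeting_length  = 1 - 0.201   # -20.1%

implied_time_change = meetings_per_person * avg_meeting_length - 1
print(f"implied change in meeting time per day: {implied_time_change:.1%}")
# ~ -9.8%, the same direction and rough magnitude as the reported -11.5%; the
# gap is expected because the reported figures are averages over heterogeneous
# users and meetings rather than a single aggregate product.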
Highly feminist women who desire sexist men experienced more cognitive dissonance (operationalized as negative affect) than women lower in feminist attitudes
Feminism and mate preference: A study on relational cognitive dissonance. Aslı Yurtsever, Arın Korkmaz, Zeynep Cemalcilar. Personality and Individual Differences, Volume 168, January 1 2021, 110297. https://doi.org/10.1016/j.paid.2020.110297
Abstract: Evolution proposes differences in mate preferences between the two sexes. Females prefer mates who can invest in them and their offspring. In the contemporary era, gender ideologies are not always in line with these premises, but desires still could be. The conflict between ideology and desire could trigger cognitive dissonance in contemporary feminist women. We recruited 246 women online to investigate the occurrence of dissonance based on feminist attitudes, and whether dissonance reduction strategies (i.e., behavior change, cognition change) differed based on their preference for consistency. Results showed that highly feminist women who desire sexist men experienced more cognitive dissonance (operationalized as negative affect) than women lower in feminist attitudes. Preference for consistency moderated cognitive dissonance's association with behavior, but not cognition change.
Keywords: Cognitive dissonance, Mate preference, Feminism, Preference for consistency
4. Discussion
The current study showed that desire for evolutionarily preferable mate behaviors conflicted with feminist attitudes, creating cognitive dissonance. We predicted that, with attraction held constant, such behaviors would trigger cognitive dissonance in heterosexual feminist women, who would deem them sexist. Indeed, our pilot study supported this prediction and its association with high negative affect (NA), which we treated as indicative of cognitive dissonance. In the experiment, in line with our hypothesis, feminist women attributed to the vignette protagonist similar dissonance regardless of the type of sexist behavior, be it overt or subtle. We also found support for findings on within-sex variation in mate preferences: desire for resource display was challenged in women with a strong endorsement of feminism, and their desire toward any behavior they deemed sexist proved problematic. Hughes and Aung (2017) found several individual differences that moderated women's mate preferences. We expanded their list with feminism; feminist women were put off by resource display, unlike their non-feminist counterparts. Less feminist women did not experience dissonance in the subtle condition because the man's manner of displaying resources was deemed attractive (as evolutionary trends and traditional gender roles suggest) and was not misaligned with any prior attitudes. They still experienced higher NA in the overt condition compared to the control. Although Harmon-Jones (2000) found that NA measures dissonance irrespective of aversive situations, this could still be due to the overall unpleasantness of the interaction, and not a reflection of feminist attitudes.
Our findings showed that once women experienced cognitive dissonance, they employed dissonance reduction strategies to relieve the emerging negative arousal. This supports previous research on the validity of assessing affect as an indicator of cognitive dissonance. Overall, we found that women who felt high negative affect were more likely to use behavior change (i.e., to terminate the interaction). Furthermore, individuals' preference for consistency (PFC) moderated this effect. Previous research had examined preference for consistency as an individual difference that predicted cognitive dissonance (Nolan & Nail, 2014). We, in turn, investigated the moderating role of preference for consistency on dissonance reduction strategies. We showed, contrary to our hypothesis, that the association of negative affect with behavior change was stronger for women who were low (vs. high) on preference for consistency. This may be because high-PFC participants sought consistency with the “going on the date” decision, and not with their feminist attitudes. That is, once people engage in attitude-deviating behaviors, they seek consistency with the deviation rather than with the attitudes, demonstrating the foot-in-the-door effect (Guadagno & Cialdini, 2010).
Interestingly, there was no systematic explanation for employing cognition change after experiencing dissonance. All participants employed it for overt and subtle sexist men, independent of their negative affect and level of feminism. However, in the control condition, only women low on feminism used it. We are not surprised, as without attitude violation, there is no need for cognition change. Less feminist women, conversely, may have needed to change cognitions to adapt to the feminism-aligned treatment of the control man. As for the unexpected findings on NA and PFC, Vaidis and Bran (2018) differentiated between “inconsistency resolution” and “arousal reduction” in dissonance reduction processes. Following that, we argue that our model was based on negative arousal; therefore, PFC was not a suitable variable in explaining cognitions that seek to resolve the inconsistency.
In the current study, it is evident that various cognitive dissonance reduction strategies, such as behavior and/or cognition change, can be employed depending on the individual's dispositions and the context. This study allowed participants to choose several strategies, thus better approximating real life and showing that strategies are not necessarily used together. People may change cognitions while they terminate the relationship, to modify their narrative about the date and feel better, or they may use cognition change to keep dating without feeling dissonance. McGrath (2017) argued that which strategy is used depends on its likelihood of success and its effortfulness. In a dating context, termination is a conclusive strategy to end dissonance, whereas cognition change requires effortful restructuring and has the potential to recur. Accordingly, the data revealed higher use of behavior change overall, even though both strategies were used.
4.1. Limitations and suggestions for further research
We treated behavior change and cognition change as concurrent, independent dimensions; further research should explore reduction strategies with forced-choice paradigms. Our vignettes were written from the point of view of a fictional protagonist, inducing vicarious dissonance, to avoid the resistance to holding attraction constant that we encountered in our pilot study. However, making termination decisions for another person might be more straightforward, and this might be why our model did not explain the process of cognition change. We manipulated various behaviors tied to male gender roles and found an additive effect; different contexts and behaviors should be examined to parse this effect. Measures other than negative affect could be implemented to assess dissonance. Additionally, recruiting participants online resulted in a self-selected sample, which might limit the generalizability of our findings. Future research could investigate how cognition change strategies influence long-term attitudes and behaviors. Finally, our findings should be interpreted in light of the cultural characteristics of the Turkish population. In a collectivist culture enforcing traditional gender roles that reinforce evolutionary mate preferences, the dissonance feminist participants felt might reflect tension between their ideologies and cultural values rather than between their ideologies and sexual desire. This finding should be replicated in cultures with higher gender equality and less traditional values.
Overall, their results demonstrate that increased contact opportunities with forced migrants can contribute to increases in prejudice
The dynamic relationship between contact opportunities, positive and negative intergroup contact, and prejudice: A longitudinal investigation. Patrick Kotzur, Ulrich Wagner. Journal of Personality and Social Psychology, Jul 2020. DOI: 10.1037/pspi0000258
Abstract: We investigated the dynamics of naturally increasing contact opportunities, frequencies of positive and negative intergroup contact experiences, and prejudice toward forced migrants, in 2 three-wave longitudinal studies (Study 1, N = 183, adult community sample; Study 2, N = 758, nation-wide adult probability sample) in Germany using latent growth curve and parallel process analyses. We examined (research question 1) whether prejudice increases or decreases with increased contact opportunities; (research question 2) whether the rate of change in prejudice is related to the rate of change of positive/negative contact; (research question 3) whether the trajectories of change in prejudice shift as a function of the histories of prior positive/negative contact; and (research question 4) whether the rate of change in positive/negative contact frequencies depends on prior prejudice levels. Across both studies, prejudice increased with increased contact opportunities, as did positive and negative contact frequencies (ad research question 1). Whereas changes in negative contact were significantly related to changes in prejudice in both studies, no such relationships emerged as significant for positive contact (ad research question 2). We did not find any supportive evidence for our research questions 3 and 4. Overall, our results demonstrate that increased contact opportunities can contribute to increases in prejudice. Moreover, they indicate that the trajectories of negative contact and prejudice may be more substantially intertwined than the trajectories of positive contact and prejudice.
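For readers unfamiliar with the method, the sketch below is a minimal mixed-model analogue of a latent growth curve, fit to simulated data in Python; the variable names (prejudice, wave) are hypothetical and the code is ours, not the authors'.

# Minimal sketch of a random-intercept, random-slope growth model, the
# mixed-model analogue of a latent growth curve. Simulated data; variable
# names are hypothetical and not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_waves = 300, 3
ids = np.repeat(np.arange(n_people), n_waves)
wave = np.tile(np.arange(n_waves), n_people)

intercepts = rng.normal(0.0, 1.0, n_people)[ids]    # person-specific starting levels
slopes = 0.2 + rng.normal(0.0, 0.1, n_people)[ids]  # person-specific rates of change
prejudice = intercepts + slopes * wave + rng.normal(0.0, 0.5, ids.size)

df = pd.DataFrame({"id": ids, "wave": wave, "prejudice": prejudice})

# Random intercept and random slope for wave, grouped by person; the fixed
# "wave" coefficient estimates the average trajectory of change.
model = smf.mixedlm("prejudice ~ wave", df, groups=df["id"], re_formula="~wave")
print(model.fit().summary())

A parallel process model additionally allows the person-specific slopes of two such trajectories (e.g., negative contact and prejudice) to correlate, which is how questions about related rates of change are addressed.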
Monday, August 3, 2020
In India, Don’t Hate the Matchmaker: A Netflix hit about arranged marriages reflects Indian society a lot more than critics want to admit
In India, Don’t Hate the Matchmaker: A Netflix hit about arranged marriages reflects Indian society a lot more than critics want to admit. Shruti Rajagopalan. Bloomberg, August 2, 2020. https://www.bloomberg.com/opinion/articles/2020-08-02/netflix-s-indian-matchmaking-is-only-too-accurate
Even as the Netflix show “Indian Matchmaking” has grown into a global hit, it’s incensed many Indians. The issue isn’t that most couples don’t go for goat yoga on their first date. Critics accuse the show of stereotyping and commodifying women, lacking diversity and promoting a backwards vision of marriage where astrologers and meddling parents are more influential than the preferences of brides and grooms.
They complain that the series, which follows matchmaker Sima Taparia as she jets between Mumbai and the U.S. to arrange marriages, perpetuates an outdated, offensive and regressive marriage market. In fact, the real problem may be their discomfort with the way marriage works in India, with social stability prized over individual happiness.
It’s true that India’s 1.35 billion citizens occupy different centuries simultaneously. A small fraction still practices child marriage, with some communities holding betrothal ceremonies as soon as a girl is born. At the other end of the spectrum, there is growing acceptance of queer relationships, divorce and even avoiding marriage altogether.
But most Indian marriages are still arranged. That’s because, for the most part, the purpose of marriage in Indian society is not love but family, children and social stability expressed by confining marriage within caste boundaries. According to the 2011-12 India Human Development Survey, only about 5% of Indians marry outside their caste. The share has remained remarkably stable over the decades since independence, even though India’s economy and society have progressed in many other ways. 1
Studies show that the education levels of the prospective bride or groom don’t make marriages across castes more likely. Even college-educated, urban, middle-class Indians show a strong preference to marry within caste.
This isn’t only a matter for Hindus either. Muslims in South Asia marry within their biradari or jaat — a stand-in for Hindu caste. Indian Christians differentiate between those who converted and those who came to India centuries ago; they marry based on whatever one’s caste was before conversion.
The reason Guyanese-born Nadia faces a limited set of options in the show is not because of her South American birth, but because Indians who were shipped as indentured laborers to the New World were mostly lower castes, or so perceived. American-born Indians are almost always upper caste and are highly valued in the Indian marriage market, despite, or maybe because of, their “foreign” status.
The fact that “Indian Matchmaking” packages women as slim, tall, fair, presentable, likable, flexible and so on is, once again, a consequence of using marriage to preserve caste lines. When the purpose of marriage is to find love, companionship and compatibility, then the focus is on the characteristics of the individual. The marriage market is akin to a matching market, similar to Tinder or Uber.
But, in a world where marriage exists to maintain caste lines, the nature of the marriage market more closely resembles a commodity market, where goods are graded into batches. Within every batch, the commodity is substitutable — as in wheat or coffee exchanges.
This is why reading matrimonial ads or listening to Sima going over biodatas — a kind of matrimonial resume — is triggering for many Indian women. Once caste, family, economic strata, looks, height, etc., are graded, all women within a particular grade are considered substitutes for one another, primarily to continue the family line.
[...]
1 To understand the consequences for the few who dare go against their caste or family’s choice, a more representative film on Netflix is Nagraj Manjule’s “Sairat.”
The sociable weaver bird creates nests that can weigh 1 ton & house 200 birds in individual chambers; its cooperative behaviors include chick rearing & defense against snakes & falcons
Not even scientists can tell these birds apart. But now, computers can. Erik Stokstad. Science Magazine, Jul 28 2020. https://www.sciencemag.org/news/2020/07/not-even-scientists-can-tell-these-birds-apart-now-computers-can
Project: Cooperation and population dynamics in the Sociable Weaver. Cape Town University, 2019. http://www.fitzpatrick.uct.ac.za/fitz/research/programmes/understanding/sociable_weavers
The aptly named Sociable Weaver Philetairus socius is a highly social species that is endemic to the Kalahari region of southern Africa. As the common name suggests, these weavers work together to accomplish diverse tasks, from building their highly distinctive thatched nests to helping raise the chicks and defending the nest and colony mates from predators. Their fascinating social structure and different types of cooperative behaviour make them an ideal study model to investigate the benefits and costs of sociality and the evolutionary mechanisms that allow cooperation to evolve and be maintained.
Cooperation represents an evolutionary puzzle because natural selection is thought to favour selfish individuals over co-operators. However, theory and studies in humans suggest that co-operators are preferred as social and sexual partners. Partner choice may therefore provide a powerful explanation for the evolution and stability of cooperation, alongside kin selection and self-serving benefits, but we lack an understanding of its importance in natural systems.
These results suggest that rather than being simply covert partisans, nonpartisans process the world differently from their partisan counterparts, engaging different brain areas
Neural nonpartisans. Darren Schreiber, Greg Fronzo, Alan Simmons, Chris Dawes, Taru Flagan & Martin Paulus. Journal of Elections, Public Opinion and Parties, Aug 3 2020. https://doi.org/10.1080/17457289.2020.1801695
ABSTRACT: While affective conflict between partisans is driving much of modern politics, it is also driving increasing numbers to eschew partisan labels. A dominant theory is that these self-proclaimed independents are merely covert partisans. In the largest functional brain imaging study of neuropolitics to date, we find differences between partisans and nonpartisans in the right medial temporal pole, orbitofrontal/medial prefrontal cortex, and right ventrolateral prefrontal cortex, three regions often engaged during social cognition. These results suggest that rather than being simply covert partisans, nonpartisans process the world in a way different from their partisan counterparts.
The Correlation-Causation Taboo: Making explicit causal inference taboo does not stop people from doing it; they just do it in a less transparent, regulated, sophisticated & informed way
The Correlation-Causation Taboo. Neuroskeptic. Discover Magazine, July 31, 2020. https://www.discovermagazine.com/the-sciences/the-correlation-causation-taboo
Should psychologists be more comfortable discussing causality?
"Correlation does not imply causation" is a basic motto of science. Every scientist knows that observing a correlation between two things doesn't necessarily mean that one of them causes the other.
But according to a provocative new paper, many researchers in psychology are drawing the wrong lessons from this motto. The paper is called The Taboo Against Explicit Causal Inference in Nonexperimental Psychology and it comes from Michael P. Grosz et al. The article makes a lot of points, but to me the main insight of the piece was this: Many studies in psychology are implicitly about causality, without openly saying as much.
Consider, for example, this highly cited 2011 study which showed that children with better self-control have better health and social outcomes years later as adults.
This 2011 paper never claimed to have shown causality. It was, after all, an observational, correlational design, and correlation is not causation. But Grosz et al. say that the study only makes sense in the context of an implicit belief that self-control does (or probably does) causally influence outcomes.
The title of the 2011 paper suggests that it was a study about predicting the outcomes. Prediction can be an important goal, but Grosz et al. point out that if the study had really been about prediction, it would make sense to consider a whole range of possible predictors. A purely predictive study wouldn't focus on a single factor. The paper also probably wouldn't be so highly cited, if readers really thought it said nothing about causality.
Grosz et al. analyze three other influential "observational" psychology papers and in all cases, they find evidence of unstated causal claims and assumptions, swept under a correlational rug.
As they put it, "Similar to when sex or drugs are made taboo, making explicit causal inference taboo does not stop people from doing it; they just do it in a less transparent, regulated, sophisticated and informed way."
The authors go on to argue that there's actually nothing wrong with talking about causality in the context of observational research — but the causal assumptions and claims need to be made explicit, so that they can be critically evaluated.
To be clear, the authors are not saying that correlation implies causation. They argue that it is sometimes possible to draw inferences about causation from correlational evidence, if we have enough evidence to rule out non-causal alternative explanations. This kind of inference is "very difficult. However, this is not a good reason to render explicit causal inference taboo."
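To illustrate the logic (a simulation of ours, not an example from Grosz et al.), the Python sketch below shows how a confounder can create a correlation with no causal effect, and how adjusting for it recovers the true null, but only because the causal structure is known by construction:

# Illustrative simulation (not from the paper): a confounder produces a
# correlation between X and Y even though X has no causal effect on Y.
# Adjusting for the confounder removes the spurious association -- but only
# because we assumed (and here, know) the true causal structure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 100_000
confounder = rng.normal(size=n)
x = confounder + rng.normal(size=n)          # X caused by the confounder, not by Y
y = 2.0 * confounder + rng.normal(size=n)    # Y caused by the confounder, not by X

# Naive regression of Y on X: a sizable "effect" appears.
naive = sm.OLS(y, sm.add_constant(x)).fit()
print("naive slope:", round(naive.params[1], 3))              # ~1.0, purely spurious

# Adjusted regression: conditioning on the confounder recovers the true null.
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, confounder]))).fit()
print("adjusted slope for X:", round(adjusted.params[1], 3))  # ~0.0

In observational psychology, whether the adjustment set is adequate is exactly the kind of causal assumption the authors argue should be stated explicitly rather than left implicit.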
Sunday, August 2, 2020
Dates of birth and death for more than 1,600 CEOs of large, publicly listed U.S. firms: We estimate that CEOs' lifespan increases by around two years when insulated from market discipline via anti-takeover laws
Borgschulte, Mark and Guenzel, Marius and Liu, Canyao and Malmendier, Ulrike, CEO Stress, Aging, and Death (June 1, 2020). CEPR Discussion Paper No. DP14933, Available at SSRN: https://ssrn.com/abstract=3638037
Abstract: We show that increased job demands due to takeover threats and industry crises have significant adverse consequences for managers' long-term health. Using hand-collected data on the dates of birth and death for more than 1,600 CEOs of large, publicly listed U.S. firms, we estimate that CEOs' lifespan increases by around two years when insulated from market discipline via anti-takeover laws. CEOs also stay on the job longer, with no evidence of a compensating differential in the form of lower pay. In a second analysis, we find diminished longevity arising from increases in job demands caused by industry-wide downturns during a CEO's tenure. Finally, we utilize machine-learning age-estimation methods to detect visible signs of aging in pictures of CEOs. We estimate that exposure to a distress shock during the Great Recession increases CEOs' apparent age by roughly one year over the next decade.
Rolf Degen summarizing... Contrary to an influential psychological finding, most laughs in everyday conversations were responses to something comical, and not just instances of social smoothing
What's your laughter doing there? A taxonomy of the pragmatic functions of laughter. Chiara Mazzocconi, Ye Tian, Jonathan Ginzburg. IEEE Transactions on Affective Computing, May 2020. https://ieeexplore.ieee.org/abstract/document/9093177
Abstract: Laughter is a crucial signal for communication and managing interactions. Until now no consensual approach has emerged for classifying laughter. We propose a new framework for laughter analysis and classification, based on the pivotal assumption that laughter has propositional content. We propose an annotation scheme to classify the pragmatic functions of laughter taking into account the form, the laughable, the social, situational, and linguistic context. We apply the framework and taxonomy proposed in a multilingual corpus study (French, Mandarin Chinese and English), involving a variety of situational contexts. Our results give rise to novel generalizations about the range of meanings laughter exhibits, the placement of the laughable, and how placement and arousal relate to the functions of laughter. We have tested and refuted the validity of the commonly accepted assumption that laughter directly follows its laughable. In the concluding section, we discuss the implications our work has for spoken dialogue systems. We stress that laughter integration in spoken dialogue systems is not only crucial for emotional and affective computing aspects, but also for aspects related to natural language understanding and pragmatic reasoning. We formulate the emergent computational challenges for incorporating laughter in spoken dialogue systems.
The Great Stagnation (the fifty-year decline in growth for the U.S. and other advanced economies) – Causes and Cures
Carr, Douglas, The Great Stagnation – Causes and Cures (July 28, 2020). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3662638
Abstract
This paper addresses the fifty-year decline in growth for the U.S. and other advanced economies.
The paper develops a growth model based upon an economy’s capital accounts and illustrates how customary growth factors such as labor and total factor productivity are embedded within investment ratios, permitting estimation of investment, which largely determines the growth rate as well as the natural rate of interest (the capital factor share of growth). The model explains declines in these measures and finds convergence among natural interest, total factor productivity, and labor growth.
The paper identifies two investment regimes which crossed paths in the U.S. in the early 1970’s, one based upon depreciation and the other determined by the capital factor share of the private market sector. Constrictions on the private market sector from growing government spending limit the potential for higher levels of private investment necessary to offset greater depreciation from rapid obsolescence of increasingly high-tech investments.
Present trends worsen stagnation, but lifting constriction of private investment would allow full realization of benefits from technology investment’s high productivity, boosting U.S. growth to over 7% annually, and would benefit other advanced economies as well.
Keywords: Growth model, economic growth, investment, interest rate, natural rate of interest
JEL Classification: E22, E43, E44, F43, O16, O41, O47
Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries
Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries. Samantha J. Heintzelman et al. Journal of Research in Personality, August 1 2020, 104003. https://doi.org/10.1016/j.jrp.2020.104003
Highlights
• Meaning in life was perceived to be both created and discovered, and to be common.
• Beliefs about meaning related to experiences of meaning in life.
• Technology was perceived as both providing supports and challenges to meaning.
• There were national differences in perceptions and experiences of meaning.
• Relationships and happiness were rated as top sources of meaning across 8 nations.
Abstract: We examined how lay beliefs about meaning in life relate to experiences of personal meaning. In Study 1 (N=406) meaning in life was perceived to be a common experience, but one that requires effort to attain, and these beliefs related to levels of meaning in life. Participants viewed their own lives as more meaningful than the average person’s, and technology as both creating challenges and providing supports for meaning. Study 2 (N=1,719) showed cross-country variation in levels of and beliefs about meaning across eight countries. However, social relationships and happiness were identified as the strongest sources of meaning in life consistently across countries. We discuss the value of lay beliefs for understanding meaning in life both within and across cultures.
Keywords: meaning in life, psychological well-being, lay beliefs, cross-cultural
Check also Meaning and Evolution: Why Nature Selected Human Minds to Use Meaning. Roy F. Baumeister and William von Hippel. Evolutionary Studies in Imaginative Culture, Vol. 4, No. 1, Symposium on Meaning and Evolution (Spring 2020), pp. 1-18. https://www.bipartisanalliance.com/2020/05/the-scientific-worldview-suggested-that.html
Also Happiness, Meaning, and Psychological Richness. Shigehiro Oishi, Hyewon Choi, Minkyung Koo, Iolanda Galinha, Keiko Ishii, Asuka Komiya, Maike Luhmann, Christie Scollon, Ji-eun Shin, Hwaryung Lee, Eunkook M. Suh, Joar Vittersø, Samantha J. Heintzelman, Kostadin Kushlev, Erin C. Westgate, Nicholas Buttrick, Jane Tucker, Charles R. Ebersole, Jordan Axt, Elizabeth Gilbert, Brandon W. Ng, Jaime Kurtz & Lorraine L. Besser. Affective Science volume 1, pages 107–115, Jun 23 2020. https://www.bipartisanalliance.com/2020/06/investigating-whether-some-people.html
Saturday, August 1, 2020
Driverless dilemmas (the need for autonomous vehicles to make high-stakes ethical decisions): the arguments are too contrived to be of practical use and are an inappropriate method for making decisions on issues of safety
Doubting Driverless Dilemmas. Julian De Freitas et al. Perspectives on Psychological Science, July 31, 2020. https://doi.org/10.1177/1745691620922201
Abstract: The alarm has been raised on so-called driverless dilemmas, in which autonomous vehicles will need to make high-stakes ethical decisions on the road. We argue that these arguments are too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy.
Keywords: moral judgment, autonomous vehicles, driverless policy
Trolley dilemmas are incredibly unlikely to occur on real roads
The point of the two-alternative forced-choice in the thought experiments is to simplify real-world complexity and expose people’s intuitions clearly. But such situations are vanishingly unlikely on real roads. This is because they require that the vehicle will certainly kill one individual or another, with no other location to steer the vehicle, no way to buy more time, and no steering maneuver other than driving head-on to a death. Some variants of the dilemmas also assume that AVs can gather information about the social characteristics of people, e.g., whether they are criminals or contributors to society. Yet many of these social characteristics are inherently unobservable. You can’t ethically choose whom to kill if you don’t know whom you are choosing between.
Lacking in these discussions are realistic examples or evidence of situations where human drivers have had to make such choices. This makes it premature to consider them as part of any practical engineering endeavor (Dewitt, Fischhoff et al., 2019). The authors of these papers acknowledge this point, saying, for example, that “it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations” yet they nevertheless say, “Regardless of how rare these cases are, we need to agree beforehand how they should be solved” (p. 59) (Awad et al., 2018). We disagree. Without evidence that (i) such situations occur, and (ii) the social alternatives in the thought experiments can be identified in reality, it is unhelpful to consider them when making AV policies or regulations.
Trolley dilemmas cannot be reliably detected by any real-world perception system
For the purposes of a thought experiment, it is simplifying to assume that one is already in a trolley dilemma. But on real roads, the AV would have to detect this fact, which means that it would first need to be trained how to do this perfectly. After all, since the overwhelming majority of driving is not a trolley dilemma, a driver should only choose to hit someone if they’re definitely in a trolley dilemma. The problem is that it is nearly impossible for a driver to robustly differentiate when they are in a true dilemma that forces them to choose between whom to hit (and possibly kill), versus an ordinary emergency that doesn’t require such a drastic action. Accurately detecting this distinction would require unrealistic capabilities for technology in the present or near future, including (i) knowing all relevant physical details about the environment that could influence whether less deadly options are viable, e.g., the speed of each car’s braking system, and slipperiness of the road, (ii) accurately simulating all the ways the world could unfold, so as to confirm that one is in a true dilemma no matter what happens next, and (iii) anticipating the reactions and actions of pedestrians and drivers, so that their choices can be taken into account. Trying to teach AVs to solve trolley dilemmas is thus a risky safety strategy, because the AV must optimize toward solving a dilemma whose very existence is incredibly challenging to detect. Finally, if we take a learning approach to this problem, then these algorithms need to be exposed to a large number of dilemmas. Yet the conspicuous absence of such dilemmas from real roads means that they would need to be simulated and multiplied within any dataset, potentially introducing unnatural behavioral biases when AVs are deployed on real roads, e.g., ‘hallucinating’ dilemmas where there aren’t any.
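The detection problem can be made quantitative with a simple base-rate calculation (our illustration with made-up numbers, not taken from the paper): if true dilemmas are vanishingly rare, even a near-perfect detector will fire almost exclusively on ordinary situations.

# Base-rate sketch with made-up numbers (not from the paper): how often would a
# near-perfect "trolley dilemma" detector be right when it fires?
prevalence  = 1e-8    # assumed fraction of driving situations that are true dilemmas
sensitivity = 0.99    # assumed P(detector fires | true dilemma)
specificity = 0.9999  # assumed P(detector silent | no true dilemma)

false_positive_rate = 1 - specificity
p_fire = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_true_dilemma_given_fire = sensitivity * prevalence / p_fire
print(f"P(true dilemma | detector fires) = {p_true_dilemma_given_fire:.6f}")
# ~0.0001: under these assumptions, virtually every detection would be a false
# alarm, so behavior optimized for detected "dilemmas" would mostly apply to
# ordinary emergencies.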
Trolley dilemmas cannot be reliably acted upon by any real-world control system
Driverless dilemmas also assume a fundamental paradox: An AV has the freedom to make a considered decision about which of two people to harm, yet does not have enough control to instead take some simple action, like swerving or slowing down, to avoid harming anyone altogether (Himmelreich, 2018). In reality, if an AV is in such a bad emergency that it only has two options left, it’s unlikely that these options will neatly map onto two options that require a moral rule to arbitrate between. Similarly, even if an AV does have a particular moral choice planned, the more constrained its options are, the less likely it is to have the control to successfully execute a choice, and if it can’t execute a choice, then there’s no real dilemma.
Friday, July 31, 2020
Does indoctrination of youngsters work? Teaching the ethics of eating meat shows robust decreases in meat consumption
Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat. Eric Schwitzgebel, Bradford Cokelet, Peter Singer. Cognition, Volume 203, October 2020, 104397. https://doi.org/10.1016/j.cognition.2020.104397
Abstract: Do university ethics classes influence students' real-world moral choices? We aimed to conduct the first controlled study of the effects of ordinary philosophical ethics classes on real-world moral choices, using non-self-report, non-laboratory behavior as the dependent measure. We assigned 1332 students in four large philosophy classes to either an experimental group on the ethics of eating meat or a control group on the ethics of charitable giving. Students in each group read a philosophy article on their assigned topic and optionally viewed a related video, then met with teaching assistants for 50-minute group discussion sections. They expressed their opinions about meat ethics and charitable giving in a follow-up questionnaire (1032 respondents after exclusions). We obtained 13,642 food purchase receipts from campus restaurants for 495 of the students, before and after the intervention. Purchase of meat products declined in the experimental group (52% of purchases of at least $4.99 contained meat before the intervention, compared to 45% after) but remained the same in the control group (52% both before and after). Ethical opinion also differed, with 43% of students in the experimental group agreeing that eating the meat of factory farmed animals is unethical compared to 29% in the control group. We also attempted to measure food choice using vouchers, but voucher redemption rates were low and no effect was statistically detectable. It remains unclear what aspect of instruction influenced behavior.
Keywords: Consumer choice, Ethics instruction, Experimental philosophy, Moral psychology, Moral reasoning, Vegetarianism
Check also Chapter 15. The Behavior of Ethicists. Eric Schwitzgebel and Joshua Rust. In A Companion to Experimental Philosophy, edited by Justin Sytsma and Wesley Buckwalter. Aug 17 2017. https://www.bipartisanalliance.com/2017/08/the-behavior-of-ethicists-ch-15-of.html
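As a rough sanity check on the headline numbers in the abstract above (52% of qualifying purchases contained meat before the intervention vs. 45% after in the experimental group), the sketch below runs a simple chi-square comparison of the two proportions. The per-period receipt counts are hypothetical placeholders, since the abstract reports only the overall total of 13,642 receipts.

```python
# Hypothetical receipt counts; the abstract gives proportions but not per-period totals.
from scipy.stats import chi2_contingency

n_before, n_after = 3400, 3400                 # placeholder counts of qualifying purchases per period
meat_before = round(0.52 * n_before)           # 52% contained meat before the intervention
meat_after = round(0.45 * n_after)             # 45% contained meat after

table = [
    [meat_before, n_before - meat_before],
    [meat_after, n_after - meat_after],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")
```

With per-period counts in that range, a seven-point drop is comfortably detectable; with much smaller counts the same proportions would not be.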
Scientists shocked! Rainfall, drought, flooding, and extreme storms modeling is poor, “It could mean we’re not getting future climate projections right.”
Missed wind patterns are throwing off climate forecasts of rain and storms. Paul Voosen. Science Magazine, Jul 29, 2020, doi:10.1126/science.abe0713
Climate scientists can confidently tie global warming to impacts such as sea-level rise and extreme heat. But ask how rising temperatures will affect rainfall and storms, and the answers get a lot shakier. For a long time, researchers chalked the problem up to natural variability in wind patterns—the inherently unpredictable fluctuations of a chaotic atmosphere.
Now, however, a new analysis has found that the problem is not with the climate, it’s with the massive computer models designed to forecast its behavior. “The climate is much more predictable than we previously thought,” says Doug Smith, a climate scientist at the United Kingdom’s Met Office who led the 39-person effort published this week in Nature. But models don’t capture that predictability, which means they are unlikely to correctly predict the long-term changes that are most influenced by large-scale wind patterns: rainfall, drought, flooding, and extreme storms. “Obviously we need to solve it,” Smith says.
The study, which includes authors from several leading modeling centers, casts doubt on many forecasts of regional climate change, which are crucial for policymaking. It also means efforts to attribute specific weather events to global warming, now much in vogue, are rife with errors. “The whole thing is concerning,” says Isla Simpson, an atmospheric dynamicist and modeler at the National Center for Atmospheric Research, who was not involved in the study. “It could mean we’re not getting future climate projections right.”
The study does not cast doubt on forecasts of overall global warming, which is driven by human emissions of greenhouse gases. And it has a hopeful side: If models could be refined to capture the newfound predictability of winds and rains, they could be a boon for farming, flood management, and much else, says Laura Baker, a meteorologist at the University of Reading who was not involved in the study. “If you have reliable seasonal forecasts, that could make a big difference.”
The study stems from efforts at the Met Office to predict changes in the North Atlantic Oscillation (NAO), a large-scale wind pattern driven by the air pressure difference between Iceland and the Azores. The pressure difference reverses every few years, shunting the jet stream north or south; a more northerly jet stream drives warm, wet winters in northern Europe while drying out the continent’s south, and vice versa. In previous attempts to project the pattern decades into the future, a single model might yield opposite forecasts in different runs. The uncertainty seemed “huge and irreducible,” Smith says.
At first, the Met Office model did no better. But when the team ran the same model multiple times, with slightly different initial conditions, to forecast the NAO a season or a year into the future, a weak signal appeared in the ensemble average. Although it did not match the strength of the real NAO, it did match the overall pattern of its gyrations. But on individual model runs, the signal was drowning in noise.
The new work uses an ensemble of 169 model runs to find the same weak but predictable NAO pattern persisting for up to a decade. For each year since 1960, the team forecasted the NAO pattern 2 to 9 years into the future. When compared with weather records, the ensemble results showed the same pattern, ultimately explaining four-fifths of the NAO’s behavior. The massive computational effort suggests changes in the NAO are an order of magnitude more predictable than models capture, Smith says. It also suggests individual models aren’t properly accounting for the ocean or atmospheric forces shaping the NAO.
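The ensemble logic described above can be illustrated with a toy simulation (Python/NumPy): each model run carries the common, NAO-like signal only weakly and buried in internal noise, so single runs correlate poorly with the "observations" while the 169-member mean tracks them closely, yet with too little amplitude. The amplitudes below are invented for illustration and are not taken from the Nature study.

```python
# Toy signal-to-noise demonstration for ensemble forecasting (all amplitudes hypothetical).
import numpy as np

rng = np.random.default_rng(42)
n_years, n_runs = 60, 169                     # yearly forecasts since 1960; 169-member ensemble

nao_obs = rng.normal(size=n_years)            # stand-in for the observed NAO index
signal_amp, noise_amp = 0.3, 1.0              # models capture the common signal only weakly
runs = signal_amp * nao_obs + noise_amp * rng.normal(size=(n_runs, n_years))

corr_single = np.corrcoef(runs[0], nao_obs)[0, 1]
ensemble_mean = runs.mean(axis=0)
corr_ensemble = np.corrcoef(ensemble_mean, nao_obs)[0, 1]
amplitude_ratio = ensemble_mean.std() / nao_obs.std()

print(f"single run vs. observations:    r = {corr_single:.2f}")
print(f"169-run ensemble mean vs. obs.: r = {corr_ensemble:.2f}")
print(f"ensemble-mean amplitude / observed amplitude = {amplitude_ratio:.2f}")
```

In this toy setup the ensemble mean matches the pattern of the observed index far better than any single run, while its amplitude is much too small, mirroring the weak-but-real signal described above.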
The missed predictability appears to be universal. “This is being pursued everywhere,” says Yochanan Kushnir, a climate scientist at Columbia University, whose team reported last week in Scientific Reports that rainfall in the Sahel zone is more predictable than models indicate. In forthcoming work, a group led by Benjamin Kirtman, an atmospheric scientist and model developer at the University of Miami, will flag similar missed predictability in wind patterns above many of the world’s oceans.
Kirtman thinks something fundamental is wrong with the models’ code. For the time being, he says, “You’re probably making pretty profound mistakes in your climate change assessment” by relying on regional forecasts. For example, models predicted that the Horn of Africa, which is heavily influenced by Indian Ocean winds, would get wetter with climate change. But since the early 1990s, rains have plummeted and the region has dried.
The missing predictability also undermines so-called event attribution, which attempts to link extreme weather to climate change by using models to predict how sea surface warming is altering wind patterns. The changes in winds, in turn, affect the odds of extreme weather events, like hurricanes or floods. But the new work suggests “the probabilities they derive will probably not be correct,” Smith says.
What’s not clear yet is why climate models get circulation changes so wrong. One leading hypothesis is that the models fail to capture feedbacks into overall wind patterns from individual weather systems, called eddies. “Part of that eddy spectrum may simply be missing,” Smith says. Models do try to approximate the effects of eddies, but at just kilometers across, they are too small to simulate directly. The problem could also reflect poor rendering of the stratosphere, or of interactions between the ocean and atmosphere. “It’s fascinating,” says Jennifer Kay, a climate scientist at the University of Colorado, Boulder. “But there’s also a lot left unanswered.”
While researchers around the globe hunt down the missing predictability, Smith and his colleagues will take advantage of the weak NAO signal they have in hand. The Met Office and its partners announced this month they will produce temperature and precipitation forecasts looking 5 years ahead, and will use the NAO signal to help calibrate regional climate forecasts for Europe and elsewhere.
But until modelers figure out how to confidently forecast changes in the winds, Smith says, “We can’t take the models at face value.”
Population studies suggest that increased availability of pornography is associated with reduced sexual aggression at the population level
Pornography and Sexual Aggression: Can Meta-Analysis Find a Link? Christopher J. Ferguson, Richard D. Hartley. Trauma, Violence, & Abuse, July 21, 2020. https://doi.org/10.1177/1524838020942754
Abstract: Whether pornography contributes to sexual aggression in real life has been the subject of dozens of studies over multiple decades. Nevertheless, scholars have not come to a consensus about whether effects are real. The current meta-analysis examined experimental, correlational, and population studies of the pornography/sexual aggression link dating back from the 1970s to the current time. Methodological weaknesses were very common in this field of research. Nonetheless, evidence did not suggest that nonviolent pornography was associated with sexual aggression. Evidence was particularly weak for longitudinal studies, suggesting an absence of long-term effects. Violent pornography was weakly correlated with sexual aggression, although the current evidence was unable to distinguish between a selection effect as compared to a socialization effect. Studies that employed more best practices tended to provide less evidence for relationships whereas studies with citation bias, an indication of researcher expectancy effects, tended to have higher effect sizes. Population studies suggested that increased availability of pornography is associated with reduced sexual aggression at the population level. More studies with improved practices and preregistration would be welcome.
Keywords: pornography, sexual aggression, rape, domestic violence
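For readers unfamiliar with the machinery behind a meta-analysis like this one, the sketch below implements the standard DerSimonian–Laird random-effects pooling of correlation-scale effect sizes. The study-level correlations and sample sizes are hypothetical placeholders, not values extracted from Ferguson & Hartley (2020).

```python
# Minimal DerSimonian-Laird random-effects pooling (hypothetical study-level inputs).
import numpy as np

r = np.array([0.02, 0.08, -0.03, 0.10, 0.05])   # hypothetical study-level correlations
n = np.array([250, 400, 180, 600, 320])         # hypothetical sample sizes

z = np.arctanh(r)                 # Fisher z-transform of each correlation
v = 1.0 / (n - 3)                 # sampling variance of Fisher z

w = 1.0 / v                                   # fixed-effect weights
z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)            # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(z) - 1)) / C)       # between-study variance estimate

w_re = 1.0 / (v + tau2)                       # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.tanh(z_re - 1.96 * se_re), np.tanh(z_re + 1.96 * se_re)

print(f"pooled r = {np.tanh(z_re):.3f}, 95% CI [{lo:.3f}, {hi:.3f}], tau^2 = {tau2:.4f}")
```

A moderator analysis of the kind reported above (e.g., whether "best practices" or citation bias predicts effect size) works the same way, with the moderator entered as a predictor of the study-level effects in a meta-regression.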
It seems that the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present doesn't require episodic memory
Getting Better Without Memory. Julia G Halilova, Donna Rose Addis, R Shayna Rosenbaum. Social Cognitive and Affective Neuroscience, nsaa105, July 30 2020. https://doi.org/10.1093/scan/nsaa105
Abstract: Does the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present require episodic memory? A developmental amnesic person with impaired episodic memory (H.C.) was compared with two groups of age-matched controls on tasks assessing the Big Five personality traits and social competence in relation to the past, present, and future. Consistent with previous research, controls believed that their personality had changed more in the past five years than it will change in the next five years (i.e. the end-of-history illusion), and rated their present and future selves as more socially competent than their past selves (i.e. social improvement illusion), although this was moderated by self-esteem. Despite her lifelong episodic memory impairment, H.C. also showed these biases of temporal self-appraisal. Together, these findings do not support the theory that the temporal extension of the self-concept requires the ability to recollect richly detailed memories of the self in the past and future.
Keywords: episodic memory, self-appraisal, developmental amnesia, case study, end-of-history illusion, social improvement illusion
Effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy: Those who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’
van Allen, Zack, Deanna Walker, Tamir Streiner, and John M. Zelenski. 2020. “Enacted Extraversion as a Well-being Enhancing Strategy in Everyday Life.” PsyArXiv. July 30. doi:10.31234/osf.io/349yh
Abstract: Lab-based experiments and observational data have consistently shown that extraverted behavior is associated with elevated levels of positive affect. This association typically holds regardless of one’s dispositional level of trait extraversion, and individuals who enact extraverted behaviors in laboratory settings do not demonstrate costs associated with acting counter-dispositionally. Inspired by these findings, we sought to test the efficacy of week-long ‘enacted extraversion’ interventions. In three studies, participants engaged in fifteen minutes of assigned behaviors in their daily life for five consecutive days. Studies 1 and 2 compared the effect of adding more introverted or extraverted behavior (or a control task). Study 3 compared the effect of adding social extraverted behavior or non-social extraverted behavior (or a control task). We assessed positive affect and several indicators of well-being during pretest (day 1) and post-test (day 7), as well as ‘in-the-moment’ (days 2-6). Participants who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’ when compared to introverted and control behaviors. We did not observe strong evidence to suggest that this effect was more pronounced for dispositional extraverts. The current research explores the effects of extraverted behavior on other indicators of well-being and examines the effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy.
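As a simplified illustration of the core contrast in the abstract above (not the authors’ repeated-measures analysis), the sketch below compares simulated ‘in-the-moment’ positive affect across the three instruction conditions with a one-way ANOVA. The group means, scale, and sample sizes are assumed for illustration only.

```python
# Simplified between-group comparison of 'in-the-moment' positive affect (simulated data).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
extraverted = rng.normal(loc=3.6, scale=0.8, size=100)   # hypothetical 1-5 affect ratings
introverted = rng.normal(loc=3.2, scale=0.8, size=100)
control = rng.normal(loc=3.3, scale=0.8, size=100)

F, p = f_oneway(extraverted, introverted, control)
print(f"F = {F:.2f}, p = {p:.4f}")
print(f"means: extraverted {extraverted.mean():.2f}, "
      f"introverted {introverted.mean():.2f}, control {control.mean():.2f}")
```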
Women report feeling bad about themselves after breakup sex, possibly reflecting women’s sexual regret after one-time sexual encounters
The psychology of breakup sex: Exploring the motivational factors and affective consequences of post-breakup sexual activity. James B. Moran, T. Joel Wade, Damian R. Murray. Evolutionary Psychology, July 30, 2020. https://doi.org/10.1177/1474704920936916
Abstract: Popular culture has recently publicized a seemingly new postbreakup behavior called breakup sex. While the media expresses the benefits of participating in breakup sex, there is no research to support these claimed benefits. The current research was designed to begin to better understand this postbreakup behavior. In the first study, we examined how past breakup sex experiences made the individuals feel and how people predict they would feel in the future (n = 212). Results suggested that men are more likely than women to have felt better about themselves, while women tend to state they felt better about the relationship after breakup sex. The second study (n = 585) investigated why men and women engage in breakup sex. Results revealed that most breakup sex appears to be motivated by three factors: relationship maintenance, hedonism, and ambivalence. Men tended to support hedonistic and ambivalent reasons for having breakup sex more often than women. The two studies revealed that breakup sex may be differentially motivated (and may have different psychological consequences) for men and women and may not be as beneficial as the media suggests.
Keywords: breakup sex, sexual strategy theory, fiery limbo, postbreakup behavior, ex-sex, gender differences
Study 1: Discussion
Study 1 was conducted to understand how individuals feel when they have engaged in breakup sex and how they might feel about it in the future. The 11 items were also used to assess gender differences. Results revealed that men, more than women, reported greater receptivity to breakup sex regardless of extraneous factors in the relationship (e.g., differences in mate value, who initiated the breakup).
There was no gender difference regarding whether individuals would have breakup sex if they loved their partner. However, unexpectedly, men more than women reported that they would participate in sexual behaviors they normally would not engage in. This engagement in atypical/less frequent sexual behavior may reflect a mate retention tactic since research indicates that men perform oral sex as a benefit-provisioning mate retention tactic (Pham & Shackelford, 2013). Thus, performing sexual behaviors they normally would not do could be an indicator of mate retentive behaviors.
The hypothesis that women would report feeling bad about themselves was supported. This finding could be due to women’s sexual regret when participating in a one-time sexual encounter (Eshbaugh & Gute, 2008; Galperin et al., 2013). These findings run contrary to the popular media idea that breakup sex is good for both men and women. These results suggest that, between men and women, men feel best after breakup sex and would have breakup sex for somewhat different reasons than women would.
Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Relationships: Those who identified as male or non-binary reported more such fantasies than those who identified as female
Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Romantic Relationships. Justin J. Lehmiller. Archives of Sexual Behavior,Jul 29 2020. https://rd.springer.com/article/10.1007/s10508-020-01788-7
Abstract: The present research explored fantasies about consensual nonmonogamous relationships (CNMRs) and the factors that predict such fantasies in a large and diverse online sample (N = 822) of persons currently involved in monogamous relationships. Nearly one-third (32.6%) of participants reported that being in some type of sexually open relationship was part of their favorite sexual fantasy of all time, of whom most (80.0%) said that they want to act on this fantasy in the future. Those who had shared and/or acted on CNMR fantasies previously generally reported positive outcomes (i.e., meeting or exceeding their expectations and improving their relationships). In addition, a majority of participants reported having fantasized about being in a CNMR at least once before, with open relationships being the most popular variety. Those who identified as male or non-binary reported more CNMR fantasies than those who identified as female. CNMR fantasies were also more common among persons who identified as anything other than heterosexual and among older adults. Erotophilia and sociosexual orientation were uniquely and positively associated with CNMR fantasies of all types; however, other individual difference factors (e.g., Big Five personality traits, attachment style) had less consistent associations. Unique predictors of infidelity fantasies differed from CNMR fantasies, suggesting that they are propelled by different psychological factors. Overall, these results suggest that CNMRs are a popular fantasy and desire among persons in monogamous romantic relationships. Clinical implications and implications for sexual fantasy research more broadly are discussed.
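A small illustration of what “uniquely and positively associated” means in the abstract above: in a multiple regression, a predictor’s coefficient stays positive after controlling for the other predictors. The simulated data below are placeholders and do not reproduce the study’s measures or results.

```python
# Multiple regression with correlated predictors (simulated placeholder data).
import numpy as np

rng = np.random.default_rng(7)
n = 822                                                    # sample size mirrors the abstract; values are simulated
erotophilia = rng.normal(size=n)
sociosexuality = 0.5 * erotophilia + rng.normal(size=n)    # correlated predictors
cnm_fantasy = 0.4 * erotophilia + 0.3 * sociosexuality + rng.normal(size=n)

X = np.column_stack([np.ones(n), erotophilia, sociosexuality])
beta, *_ = np.linalg.lstsq(X, cnm_fantasy, rcond=None)
print(f"intercept = {beta[0]:.2f}, erotophilia = {beta[1]:.2f}, sociosexuality = {beta[2]:.2f}")
# Both slopes remain positive while each controls for the other: a "unique" association.
```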