Borgschulte, Mark and Guenzel, Marius and Liu, Canyao and Malmendier, Ulrike, CEO Stress, Aging, and Death (June 1, 2020). CEPR Discussion Paper No. DP14933, Available at SSRN: https://ssrn.com/abstract=3638037
Abstract: We show that increased job demands due to takeover threats and industry crises have significant adverse consequences for managers' long-term health. Using hand-collected data on the dates of birth and death for more than 1,600 CEOs of large, publicly listed U.S. firms, we estimate that CEOs' lifespan increases by around two years when insulated from market discipline via anti-takeover laws. CEOs also stay on the job longer, with no evidence of a compensating differential in the form of lower pay. In a second analysis, we find diminished longevity arising from increases in job demands caused by industry-wide downturns during a CEO's tenure. Finally, we utilize machine-learning age-estimation methods to detect visible signs of aging in pictures of CEOs. We estimate that exposure to a distress shock during the Great Recession increases CEOs' apparent age by roughly one year over the next decade.
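A rough sketch of the paper's third analysis, in Python with synthetic data. The pretrained photo-to-age model is reduced to simulated output (the authors' actual machine-learning pipeline is not reproduced here), and the distress-shock effect is read off as the difference in apparent-minus-actual age between exposed and unexposed CEOs:

import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # synthetic CEOs; half exposed to a distress shock
true_age = rng.uniform(50, 70, n)
shock = np.arange(n) < n // 2

# Stand-in for a machine-learning age estimator applied to photographs:
# we simulate its output as true age plus an aging effect plus noise.
aging_effect = 1.0                        # extra apparent years we want to recover
apparent_age = true_age + shock * aging_effect + rng.normal(0, 3, n)

# Mean apparent-minus-actual age, shocked vs. non-shocked, estimates
# the extra visible aging attributable to the shock.
gap = apparent_age - true_age
estimate = gap[shock].mean() - gap[~shock].mean()
print(f"estimated extra apparent aging: {estimate:.2f} years")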
Sunday, August 2, 2020
Rolf Degen summarizing... Contrary to an influential psychological finding, most laughs in everyday conversations were responses to something comical, and not just instances of social smoothing
What's your laughter doing there? A taxonomy of the pragmatic functions of laughter. Chiara Mazzocconi, Ye Tian, Jonathan Ginzburg. IEEE Transactions on Affective Computing, May 2020. https://ieeexplore.ieee.org/abstract/document/9093177
Abstract: Laughter is a crucial signal for communication and managing interactions. Until now no consensual approach has emerged for classifying laughter. We propose a new framework for laughter analysis and classification, based on the pivotal assumption that laughter has propositional content. We propose an annotation scheme to classify the pragmatic functions of laughter taking into account the form, the laughable, the social, situational, and linguistic context. We apply the framework and taxonomy proposed in a multilingual corpus study (French, Mandarin Chinese and English), involving a variety of situational contexts. Our results give rise to novel generalizations about the range of meanings laughter exhibits, the placement of the laughable, and how placement and arousal relate to the functions of laughter. We have tested and refuted the validity of the commonly accepted assumption that laughter directly follows its laughable. In the concluding section, we discuss the implications our work has for spoken dialogue systems. We stress that laughter integration in spoken dialogue systems is not only crucial for emotional and affective computing aspects, but also for aspects related to natural language understanding and pragmatic reasoning. We formulate the emergent computational challenges for incorporating laughter in spoken dialogue systems.
The Great Stagnation (the fifty-year decline in growth for the U.S. and other advanced economies) – Causes and Cures
Carr, Douglas, The Great Stagnation – Causes and Cures (July 28, 2020). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3662638
Abstract
This paper addresses the fifty-year decline in growth for the U.S. and other advanced economies.
The paper develops a growth model based upon an economy’s capital accounts and illustrates how customary growth factors such as labor and total factor productivity are embedded within investment ratios, permitting estimation of investment, which largely determines the growth rate as well as the natural rate of interest, the latter being the capital factor share of growth. The model explains the declines of these measures and finds convergence among the natural rate of interest, total factor productivity, and labor growth.
The paper identifies two investment regimes which crossed paths in the U.S. in the early 1970s, one based upon depreciation and the other determined by the capital factor share of the private market sector. Constrictions on the private market sector from growing government spending limit the potential for the higher levels of private investment necessary to offset greater depreciation from the rapid obsolescence of increasingly high-tech investments.
Present trends worsen stagnation, but lifting constriction of private investment would allow full realization of benefits from technology investment’s high productivity, boosting U.S. growth to over 7% annually, and would benefit other advanced economies as well.
Keywords: Growth model, economic growth, investment, interest rate, natural rate of interest
JEL Classification: E22, E43, E44, F43, O16, O41, O47
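One minimal algebraic reading of this verbal model, offered here as an assumption since the paper's own equations are not reproduced: with Solow-style capital accumulation and a roughly stable capital-output ratio, the investment ratio pins down growth, and "the natural rate of interest is the capital factor share of growth" becomes r* = αg:

\dot{K} = I - \delta K \quad\Rightarrow\quad g_K = \frac{I}{K} - \delta, \qquad g \approx g_K \ \ (\text{stable } K/Y), \qquad r^{*} = \alpha\, g

where I/K is the investment ratio, \delta the depreciation rate, and \alpha the capital factor share. On this reading, faster obsolescence (higher \delta) requires a higher investment ratio merely to hold growth constant, which is the mechanism the abstract's second paragraph describes.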
Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries
Lay Beliefs about Meaning in Life: Examinations Across Targets, Time, and Countries. Samantha J. Heintzelman et al. Journal of Research in Personality, August 1, 2020, 104003. https://doi.org/10.1016/j.jrp.2020.104003
Highlights
• Meaning in life was perceived to be both created and discovered, and to be common.
• Beliefs about meaning related to experiences of meaning in life.
• Technology was perceived as both providing supports and challenges to meaning.
• There were national differences in perceptions and experiences of meaning.
• Relationships and happiness were rated as top sources of meaning across 8 nations.
Abstract: We examined how lay beliefs about meaning in life relate to experiences of personal meaning. In Study 1 (N=406) meaning in life was perceived to be a common experience, but one that requires effort to attain, and these beliefs related to levels of meaning in life. Participants viewed their own lives as more meaningful than the average person’s, and technology as both creating challenges and providing supports for meaning. Study 2 (N=1,719) showed cross-country variation in levels of and beliefs about meaning across eight countries. However, social relationships and happiness were identified as the strongest sources of meaning in life consistently across countries. We discuss the value of lay beliefs for understanding meaning in life both within and across cultures.
Keywords: meaning in life; psychological well-being; lay beliefs; cross-cultural
Check also Meaning and Evolution: Why Nature Selected Human Minds to Use Meaning. Roy F. Baumeister and William von Hippel. Evolutionary Studies in Imaginative Culture, Vol. 4, No. 1, Symposium on Meaning and Evolution (Spring 2020), pp. 1-18. https://www.bipartisanalliance.com/2020/05/the-scientific-worldview-suggested-that.html
Also Happiness, Meaning, and Psychological Richness. Shigehiro Oishi, Hyewon Choi, Minkyung Koo, Iolanda Galinha, Keiko Ishii, Asuka Komiya, Maike Luhmann, Christie Scollon, Ji-eun Shin, Hwaryung Lee, Eunkook M. Suh, Joar Vittersø, Samantha J. Heintzelman, Kostadin Kushlev, Erin C. Westgate, Nicholas Buttrick, Jane Tucker, Charles R. Ebersole, Jordan Axt, Elizabeth Gilbert, Brandon W. Ng, Jaime Kurtz & Lorraine L. Besser. Affective Science, volume 1, pages 107–115, Jun 23, 2020. https://www.bipartisanalliance.com/2020/06/investigating-whether-some-people.html
Saturday, August 1, 2020
Driverless dilemmas (the need for autonomous vehicles to make high-stakes ethical decisions): these arguments are too contrived to be of practical use and are an inappropriate method for making decisions on issues of safety
Doubting Driverless Dilemmas. Julian De Freitas et al. Perspectives on Psychological Science, July 31, 2020. https://doi.org/10.1177/1745691620922201
Abstract: The alarm has been raised on so-called driverless dilemmas, in which autonomous vehicles will need to make high-stakes ethical decisions on the road. We argue that these arguments are too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy.
Keywords: moral judgment, autonomous vehicles, driverless policy
Trolley dilemmas are incredibly unlikely to occur on real roads
The point of the two-alternative forced-choice in the thought experiments is to simplify real-world complexity and expose people’s intuitions clearly. But such situations are vanishingly unlikely on real roads. This is because they require that the vehicle will certainly kill one individual or another, with no other location to steer the vehicle, no way to buy more time, and no steering maneuver other than driving head-on to a death. Some variants of the dilemmas also assume that AVs can gather information about the social characteristics of people, e.g., whether they are criminals or contributors to society. Yet many of these social characteristics are inherently unobservable. You can’t ethically choose whom to kill if you don’t know whom you are choosing between.
Lacking in these discussions are realistic examples or evidence of situations where human drivers have had to make such choices. This makes it premature to consider them as part of any practical engineering endeavor (Dewitt, Fischhoff et al., 2019). The authors of these papers acknowledge this point, saying, for example, that “it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations,” yet they nevertheless say, “Regardless of how rare these cases are, we need to agree beforehand how they should be solved” (p. 59; Awad et al., 2018). We disagree. Without evidence that (i) such situations occur, and (ii) the social alternatives in the thought experiments can be identified in reality, it is unhelpful to consider them when making AV policies or regulations.
Trolley dilemmas cannot be reliably detected by any real-world perception system
For the purposes of a thought experiment, it is simplifying to assume that one is already in a trolley dilemma. But on real roads, the AV would have to detect this fact, which means that it would first need to be trained to do this perfectly. After all, since the overwhelming majority of driving is not a trolley dilemma, a driver should only choose to hit someone if they’re definitely in a trolley dilemma. The problem is that it is nearly impossible for a driver to robustly differentiate when they are in a true dilemma that forces them to choose whom to hit (and possibly kill), versus an ordinary emergency that doesn’t require such a drastic action. Accurately detecting this distinction would require unrealistic capabilities for technology in the present or near future, including (i) knowing all relevant physical details about the environment that could influence whether less deadly options are viable, e.g., the speed of each car’s braking system and the slipperiness of the road, (ii) accurately simulating all the ways the world could unfold, so as to confirm that one is in a true dilemma no matter what happens next, and (iii) anticipating the reactions and actions of pedestrians and drivers, so that their choices can be taken into account. Trying to teach AVs to solve trolley dilemmas is thus a risky safety strategy, because the AV must optimize toward solving a dilemma whose very existence is incredibly challenging to detect. Finally, if we take a learning approach to this problem, then these algorithms need to be exposed to a large number of dilemmas. Yet the conspicuous absence of such dilemmas from real roads means that they would need to be simulated and multiplied within any dataset, potentially introducing unnatural behavioral biases when AVs are deployed on real roads, e.g., ‘hallucinating’ dilemmas where there aren’t any.
Trolley dilemmas cannot be reliably acted upon by any real-world control system
Driverless dilemmas also assume a fundamental paradox: An AV has the freedom to make a considered decision about which of two people to harm, yet does not have enough control to instead take some simple action, like swerving or slowing down, to avoid harming anyone altogether (Himmelreich, 2018). In reality, if a vehicle is in such a bad emergency that it has only two options left, it’s unlikely that these options will neatly map onto two options that require a moral rule to arbitrate between. Similarly, even if an AV does have a particular moral choice planned, the more constrained its options are the less likely it is to have the control to successfully execute a choice, and if it can’t execute a choice, then there’s no real dilemma.
Friday, July 31, 2020
Does indoctrination of youngsters work? Teaching the ethics of eating meat shows robust decreases in meat consumption
Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat. Eric Schwitzgebel, Bradford Cokelet, Peter Singer. Cognition, Volume 203, October 2020, 104397. https://doi.org/10.1016/j.cognition.2020.104397
Abstract: Do university ethics classes influence students' real-world moral choices? We aimed to conduct the first controlled study of the effects of ordinary philosophical ethics classes on real-world moral choices, using non-self-report, non-laboratory behavior as the dependent measure. We assigned 1332 students in four large philosophy classes to either an experimental group on the ethics of eating meat or a control group on the ethics of charitable giving. Students in each group read a philosophy article on their assigned topic and optionally viewed a related video, then met with teaching assistants for 50-minute group discussion sections. They expressed their opinions about meat ethics and charitable giving in a follow-up questionnaire (1032 respondents after exclusions). We obtained 13,642 food purchase receipts from campus restaurants for 495 of the students, before and after the intervention. Purchase of meat products declined in the experimental group (52% of purchases of at least $4.99 contained meat before the intervention, compared to 45% after) but remained the same in the control group (52% both before and after). Ethical opinion also differed, with 43% of students in the experimental group agreeing that eating the meat of factory farmed animals is unethical compared to 29% in the control group. We also attempted to measure food choice using vouchers, but voucher redemption rates were low and no effect was statistically detectable. It remains unclear what aspect of instruction influenced behavior.
Keywords: Consumer choice; Ethics instruction; Experimental philosophy; Moral psychology; Moral reasoning; Vegetarianism
Check also Chapter 15. The Behavior of Ethicists. Eric Schwitzgebel and Joshua Rust. In A Companion to Experimental Philosophy, edited by Justin Sytsma and Wesley Buckwalter. Aug 17 2017. https://www.bipartisanalliance.com/2017/08/the-behavior-of-ethicists-ch-15-of.html
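To make the headline comparison concrete: the abstract reports meat purchases falling from 52% to 45% of qualifying receipts in the experimental group. A minimal two-proportion z-test in Python, using hypothetical cell counts (the per-cell receipt counts are not given in the abstract):

from statistics import NormalDist

def two_prop_z(x1, n1, x2, n2):
    # Two-sided z-test for a difference between two proportions.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts consistent with 52% before and 45% after.
z, p = two_prop_z(1560, 3000, 1350, 3000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")

The paper's own analysis will differ (receipts are clustered within students, which inflates the effective variance), so this is a back-of-the-envelope check, not the authors' method.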
Scientists shocked! Modeling of rainfall, drought, flooding, and extreme storms is poor: “It could mean we’re not getting future climate projections right.”
Missed wind patterns are throwing off climate forecasts of rain and storms. Paul Voosen. Science Magazine, Jul 29, 2020, doi:10.1126/science.abe0713
Climate scientists can confidently tie global warming to impacts such as sea-level rise and extreme heat. But ask how rising temperatures will affect rainfall and storms, and the answers get a lot shakier. For a long time, researchers chalked the problem up to natural variability in wind patterns—the inherently unpredictable fluctuations of a chaotic atmosphere.
Now, however, a new analysis has found that the problem is not with the climate, it’s with the massive computer models designed to forecast its behavior. “The climate is much more predictable than we previously thought,” says Doug Smith, a climate scientist at the United Kingdom’s Met Office who led the 39-person effort published this week in Nature. But models don’t capture that predictability, which means they are unlikely to correctly predict the long-term changes that are most influenced by large-scale wind patterns: rainfall, drought, flooding, and extreme storms. “Obviously we need to solve it,” Smith says.
The study, which includes authors from several leading modeling centers, casts doubt on many forecasts of regional climate change, which are crucial for policymaking. It also means efforts to attribute specific weather events to global warming, now much in vogue, are rife with errors. “The whole thing is concerning,” says Isla Simpson, an atmospheric dynamicist and modeler at the National Center for Atmospheric Research, who was not involved in the study. “It could mean we’re not getting future climate projections right.”
The study does not cast doubt on forecasts of overall global warming, which is driven by human emissions of greenhouse gases. And it has a hopeful side: If models could be refined to capture the newfound predictability of winds and rains, they could be a boon for farming, flood management, and much else, says Laura Baker, a meteorologist at the University of Reading who was not involved in the study. “If you have reliable seasonal forecasts, that could make a big difference.”
The study stems from efforts at the Met Office to predict changes in the North Atlantic Oscillation (NAO), a large-scale wind pattern driven by the air pressure difference between Iceland and the Azores. The pressure difference reverses every few years, shunting the jet stream north or south; a more northerly jet stream drives warm, wet winters in northern Europe while drying out the continent’s south, and vice versa. In previous attempts to project the pattern decades into the future, a single model might yield opposite forecasts in different runs. The uncertainty seemed “huge and irreducible,” Smith says.
At first, the Met Office model did no better. But when the team ran the same model multiple times, with slightly different initial conditions, to forecast the NAO a season or a year into the future, a weak signal appeared in the ensemble average. Although it did not match the strength of the real NAO, it did match the overall pattern of its gyrations. But on individual model runs, the signal was drowning in noise.
The new work uses an ensemble of 169 model runs to find the same weak but predictable NAO pattern persisting for up to a decade. For each year since 1960, the team forecast the NAO pattern 2 to 9 years in the future. When compared with weather records, the ensemble results showed the same pattern, ultimately explaining four-fifths of the NAO’s behavior. The massive computational effort suggests changes in the NAO are an order of magnitude more predictable than models capture, Smith says. It also suggests individual models aren’t properly accounting for the ocean or atmospheric forces shaping the NAO.
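A toy version of the ensemble-averaging idea in Python: each model run contains a weak common signal buried in run-specific chaotic noise, so any single run correlates poorly with the signal while the mean of many runs recovers it. The numbers are illustrative, not the paper's:

import numpy as np

rng = np.random.default_rng(1)
years, runs = 60, 169                     # cf. the 169-run ensemble since 1960
signal = rng.normal(0, 1, years)          # predictable NAO component
noise = rng.normal(0, 3, (runs, years))   # run-specific chaotic variability
ensemble = 0.5 * signal + noise           # each run sees only a weak signal

single_r = np.corrcoef(ensemble[0], signal)[0, 1]
pooled_r = np.corrcoef(ensemble.mean(axis=0), signal)[0, 1]
print(f"one run vs. signal:       r = {single_r:.2f}")
print(f"ensemble mean vs. signal: r = {pooled_r:.2f}")

Averaging shrinks the independent noise by roughly the square root of the ensemble size while leaving the common signal intact, which is why the weak NAO signal emerges only in the ensemble mean.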
The missed predictability appears to be universal. “This is being pursued everywhere,” says Yochanan Kushnir, a climate scientist at Columbia University, whose team reported last week in Scientific Reports that rainfall in the Sahel zone is more predictable than models indicate. In forthcoming work, a group led by Benjamin Kirtman, an atmospheric scientist and model developer at the University of Miami, will flag similar missed predictability in wind patterns above many of the world’s oceans.
Kirtman thinks something fundamental is wrong with the models’ code. For the time being, he says, “You’re probably making pretty profound mistakes in your climate change assessment” by relying on regional forecasts. For example, models predicted that the Horn of Africa, which is heavily influenced by Indian Ocean winds, would get wetter with climate change. But since the early 1990s, rains have plummeted and the region has dried.
The missing predictability also undermines so-called event attribution, which attempts to link extreme weather to climate change by using models to predict how sea surface warming is altering wind patterns. The changes in winds, in turn, affect the odds of extreme weather events, like hurricanes or floods. But the new work suggests “the probabilities they derive will probably not be correct,” Smith says.
What’s not clear yet is why climate models get circulation changes so wrong. One leading hypothesis is that the models fail to capture feedbacks into overall wind patterns from individual weather systems, called eddies. “Part of that eddy spectrum may simply be missing,” Smith says. Models do try to approximate the effects of eddies, but at just kilometers across, they are too small to simulate directly. The problem could also reflect poor rendering of the stratosphere, or of interactions between the ocean and atmosphere. “It’s fascinating,” says Jennifer Kay, a climate scientist at the University of Colorado, Boulder. “But there’s also a lot left unanswered.”
While researchers around the globe hunt down the missing predictability, Smith and his colleagues will take advantage of the weak NAO signal they have in hand. The Met Office and its partners announced this month they will produce temperature and precipitation forecasts looking 5 years ahead, and will use the NAO signal to help calibrate regional climate forecasts for Europe and elsewhere.
But until modelers figure out how to confidently forecast changes in the winds, Smith says, “We can’t take the models at face value.”
Population studies suggest that increased availability of pornography is associated with reduced sexual aggression at the population level
Pornography and Sexual Aggression: Can Meta-Analysis Find a Link? Christopher J. Ferguson, Richard D. Hartley. Trauma, Violence, & Abuse, July 21, 2020. https://doi.org/10.1177/1524838020942754
Abstract: Whether pornography contributes to sexual aggression in real life has been the subject of dozens of studies over multiple decades. Nevertheless, scholars have not come to a consensus about whether effects are real. The current meta-analysis examined experimental, correlational, and population studies of the pornography/sexual aggression link dating back from the 1970s to the current time. Methodological weaknesses were very common in this field of research. Nonetheless, evidence did not suggest that nonviolent pornography was associated with sexual aggression. Evidence was particularly weak for longitudinal studies, suggesting an absence of long-term effects. Violent pornography was weakly correlated with sexual aggression, although the current evidence was unable to distinguish between a selection effect as compared to a socialization effect. Studies that employed more best practices tended to provide less evidence for relationships whereas studies with citation bias, an indication of researcher expectancy effects, tended to have higher effect sizes. Population studies suggested that increased availability of pornography is associated with reduced sexual aggression at the population level. More studies with improved practices and preregistration would be welcome.
Keywords: pornography, sexual aggression, rape, domestic violence
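For readers unfamiliar with the machinery behind such studies, a minimal random-effects meta-analysis (DerSimonian-Laird) in Python. The effect sizes and variances below are made up for illustration and are not the paper's data:

import numpy as np

def dersimonian_laird(y, v):
    # Pool per-study effects y with within-study variances v,
    # allowing for between-study heterogeneity (tau^2).
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

pooled, se, tau2 = dersimonian_laird(
    [0.05, 0.12, -0.02, 0.08],                 # illustrative effect sizes
    [0.004, 0.009, 0.006, 0.012])              # illustrative variances
print(f"pooled = {pooled:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.4f}")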
It seems that the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present doesn't require episodic memory
Getting Better Without Memory. Julia G Halilova, Donna Rose Addis, R Shayna Rosenbaum. Social Cognitive and Affective Neuroscience, nsaa105, July 30 2020. https://doi.org/10.1093/scan/nsaa105
Abstract: Does the tendency to adjust appraisals of ourselves in the past and future in order to maintain a favourable view of ourselves in the present require episodic memory? A developmental amnesic person with impaired episodic memory (H.C.) was compared with two groups of age-matched controls on tasks assessing the Big Five personality traits and social competence in relation to the past, present, and future. Consistent with previous research, controls believed that their personality had changed more in the past five years than it will change in the next five years (i.e. the end-of-history illusion), and rated their present and future selves as more socially competent than their past selves (i.e. social improvement illusion), although this was moderated by self-esteem. Despite her lifelong episodic memory impairment, H.C. also showed these biases of temporal self-appraisal. Together, these findings do not support the theory that the temporal extension of the self-concept requires the ability to recollect richly detailed memories of the self in the past and future.
Keywords: episodic memory, self-appraisal, developmental amnesia, case study, end-of-history illusion, social improvement illusion
Effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy: Those who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’
van Allen, Zack, Deanna Walker, Tamir Streiner, and John M. Zelenski. 2020. “Enacted Extraversion as a Well-being Enhancing Strategy in Everyday Life.” PsyArXiv. July 30. doi:10.31234/osf.io/349yh
Abstract: Lab-based experiments and observational data have consistently shown that extraverted behavior is associated with elevated levels of positive affect. This association typically holds regardless of one’s dispositional level of trait extraversion, and individuals who enact extraverted behaviors in laboratory settings do not demonstrate costs associated with acting counter-dispositionally. Inspired by these findings, we sought to test the efficacy of week-long ‘enacted extraversion’ interventions. In three studies, participants engaged in fifteen minutes of assigned behaviors in their daily life for five consecutive days. Studies 1 and 2 compared the effect of adding more introverted or extraverted behavior (or a control task). Study 3 compared the effect of adding social extraverted behavior or non-social extraverted behavior (or a control task). We assessed positive affect and several indicators of well-being during pretest (day 1) and post-test (day 7), as well as ‘in-the-moment’ (days 2-6). Participants who engaged in extraverted behavior reported greater levels of positive affect ‘in-the-moment’ when compared to introverted and control behaviors. We did not observe strong evidence to suggest that this effect was more pronounced for dispositional extraverts. The current research explores the effects of extraverted behavior on other indicators of well-being and examines the effectiveness of acting extraverted (both socially and non-socially) as a well-being strategy.
Women report feeling bad about themselves after breakup sex, possibly due to women’s sexual regret when participating in a one-time sexual encounter
The psychology of breakup sex: Exploring the motivational factors and affective consequences of post-breakup sexual activity. James B. Moran, T. Joel Wade, Damian R. Murray. Evolutionary Psychology, July 30, 2020. https://doi.org/10.1177/1474704920936916
Abstract: Popular culture has recently publicized a seemingly new postbreakup behavior called breakup sex. While the media expresses the benefits of participating in breakup sex, there is no research to support these claimed benefits. The current research was designed to begin to better understand this postbreakup behavior. In the first study, we examined how past breakup sex experiences made the individuals feel and how people predict they would feel in the future (n = 212). Results suggested that men are more likely than women to have felt better about themselves, while women tend to state they felt better about the relationship after breakup sex. The second study (n = 585) investigated why men and women engage in breakup sex. Results revealed that most breakup sex appears to be motivated by three factors: relationship maintenance, hedonism, and ambivalence. Men tended to support hedonistic and ambivalent reasons for having breakup sex more often than women. The two studies revealed that breakup sex may be differentially motivated (and may have different psychological consequences) for men and women and may not be as beneficial as the media suggests.
Keywords: breakup sex, sexual strategy theory, fiery limbo, postbreakup behavior, ex-sex, gender differences
Study 1: Discussion
Study 1 was conducted to understand how individuals feel when they have engaged in breakup sex and how they might feel about it in the future. The 11 items were further used to assess whether there were gender differences. Results revealed that men, more than women, reported greater receptivity to breakup sex regardless of the extraneous factors in the relationship (e.g., differences in mate value, who initiated the breakup).
There was no gender difference regarding whether individuals would have breakup sex if they loved their partner. However, unexpectedly, men more than women reported that they would participate in sexual behaviors they normally would not engage in. This engagement in atypical/less frequent sexual behavior may reflect a mate retention tactic since research indicates that men perform oral sex as a benefit-provisioning mate retention tactic (Pham & Shackelford, 2013). Thus, performing sexual behaviors they normally would not do could be an indicator of mate retentive behaviors.
The hypothesis that women would report feeling bad about themselves was supported. This finding could be due to women’s sexual regret when participating in a one-time sexual encounter (Eshbaugh & Gute, 2008; Galperin et al., 2013). These findings are contrary to the popular media idea that breakup sex is good for both men and women. These results suggest that men feel better than women after breakup sex and have breakup sex for somewhat different reasons than women do.
Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Relationships: Those who identified as male or non-binary reported more such fantasies than those who identified as female
Fantasies About Consensual Nonmonogamy Among Persons in Monogamous Romantic Relationships. Justin J. Lehmiller. Archives of Sexual Behavior, Jul 29, 2020. https://rd.springer.com/article/10.1007/s10508-020-01788-7
Abstract: The present research explored fantasies about consensual nonmonogamous relationships (CNMRs) and the factors that predict such fantasies in a large and diverse online sample (N = 822) of persons currently involved in monogamous relationships. Nearly one-third (32.6%) of participants reported that being in some type of sexually open relationship was part of their favorite sexual fantasy of all time, of whom most (80.0%) said that they want to act on this fantasy in the future. Those who had shared and/or acted on CNMR fantasies previously generally reported positive outcomes (i.e., meeting or exceeding their expectations and improving their relationships). In addition, a majority of participants reported having fantasized about being in a CNMR at least once before, with open relationships being the most popular variety. Those who identified as male or non-binary reported more CNMR fantasies than those who identified as female. CNMR fantasies were also more common among persons who identified as anything other than heterosexual and among older adults. Erotophilia and sociosexual orientation were uniquely and positively associated with CNMR fantasies of all types; however, other individual difference factors (e.g., Big Five personality traits, attachment style) had less consistent associations. Unique predictors of infidelity fantasies differed from CNMR fantasies, suggesting that they are propelled by different psychological factors. Overall, these results suggest that CNMRs are a popular fantasy and desire among persons in monogamous romantic relationships. Clinical implications and implications for sexual fantasy research more broadly are discussed.
Thursday, July 30, 2020
Strongly unified belief in the linear non-threshold model among panel members and their refusal to acknowledge that a low dose of radiation could exhibit a threshold, & an excessive degree of self-interest
The Muller-Neel dispute and the fate of cancer risk assessment. Edward J. Calabrese. Environmental Research, July 23 2020, 109961. https://www.sciencedirect.com/science/article/abs/pii/S0013935120308562
ABSTRACT: The National Academy of Sciences (NAS) Atomic Bomb Casualty Commission (ABCC) human genetic study (i.e., The Neel and Schull, 1956a report) showed an absence of genetic damage in offspring of atomic bomb survivors in support of a threshold model, but was not considered for evaluation by the NAS Biological Effects of Atomic Radiation (BEAR) I Genetics Panel. The study therefore could not impact the Panel's decision to recommend the linear non-threshold (LNT) dose-response model for risk assessment.1 Summaries and transcripts of the Panel meetings failed to reveal an evaluation of this study, despite its human relevance and ready availability, relying instead on data from Drosophila and mice. This paper explores correspondence among and between BEAR Genetics Panel members, including James Néel, the study director, and other contemporaries to assess why the Panel failed to use these data and how the decision to recommend the LNT model affected future cancer risk assessment policies and practices. This failure of the Genetics Panel was due to: (1) a strongly unified belief in the LNT model among panel members and their refusal to acknowledge that a low dose of radiation could exhibit a threshold, a conclusion that the Néel/Schull atomic bomb study could support, and (2) an excessive degree of self-interest among panel members who experimented with animal models, such as Hermann J. Muller, and feared that human genetic studies would expose the limitations of extrapolating from animal (especially Drosophila) to human responses and would strongly shift research investments/academic grants from animal to human studies. Thus, the failure to consider the Néel/Schull atomic bomb study served both the purposes of preserving the LNT policy goal and ensuring the continued dominance of Muller and his similarly research-oriented colleagues.
6. Conclusion
Human genetic data from over 25 years of the ABCC study (i.e., 1946–1972) demonstrated support for a threshold model for radiation-induced genetic damage in humans, but that information was both ignored and then rejected by the BEAR I and BEIR II Genetics Committees, respectively. The findings, now nearly 50 years later (Grant et al., 2015), have consistently continued to contradict a linear dose response, supporting a threshold response for a complex array of endpoints of genetic damage in humans. Furthermore, the decision to base the LNT recommendation on the male mouse data of Russell is now seen as flawed (Calabrese, 2017a,b), providing no support for the BEIR (1972) decision in favor of LNT.
The failure to assess the human genetic study of Neel and Schull (1956a) at this most crucial time in risk-assessment history represents a profound abrogation of responsibility by the NAS leadership and the BEAR Genetics Panels. This affirmative “failure of responsibility” appears to have been a goal of Muller as it would ensure the adoption of LNT and the continued professional dominance of Muller and his like-thinking and similar research-oriented colleagues. The adoption of LNT occurred during a “perfect storm” consisting of: heightened societal fear of nuclear confrontation; continuing nuclear fallout from atmospheric testing; ideologically based policy and scientific leadership of the Rockefeller Foundation and the US NAS; and a handpicked, highly LNT-biased Genetics Panel that was dominated by an even more-determined Hermann Muller to ensure adoption of the LNT. This history should represent a profound embarrassment to the US NAS, regulatory agencies worldwide, and especially the US EPA, and the risk-assessment community whose founding principles were so ideologically determined and accepted with little if any critical reflection.
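The substance of the dispute is two competing dose-response shapes. A toy sketch of the contrast; the slope and threshold values below are invented purely for illustration, not estimates from the literature:

```python
# Toy contrast between the linear non-threshold (LNT) and threshold
# dose-response models at the center of the dispute. beta and d0 are
# made-up illustration values, not real risk coefficients.

def lnt_risk(dose, beta=0.05):
    """LNT: any dose, however small, adds excess risk proportionally."""
    return beta * dose

def threshold_risk(dose, beta=0.05, d0=10.0):
    """Threshold: doses below d0 add no excess risk at all."""
    return beta * max(0.0, dose - d0)

for d in [0, 5, 10, 50, 100]:
    print(f"dose={d:>3}: LNT={lnt_risk(d):.2f}  threshold={threshold_risk(d):.2f}")
```

The policy stakes follow directly: under LNT every marginal unit of exposure is assigned a cost, whereas under a threshold model low doses are treated as harmless.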
Novel psychological construct characterised by high empathy and dark traits, the Dark Empath, is identified and described relative to personality, aggression, dark triad (DT) facets and wellbeing
The Dark Empath: Characterising dark traits in the presence of empathy. Nadja Heym et al. Personality and Individual Differences, July 29 2020, 110172. https://doi.org/10.1016/j.paid.2020.110172
Highlights
• Latent profile analysis identifies 4 groups based on empathy and dark traits.
• Dark empath (DE, high empathy, dark traits) partly maintains an antagonistic core.
• DE and DT (low empathy, dark traits) are similar in vulnerable dark triad facets.
• DE and DT differ in extraversion, agreeableness, indirect aggression & wellbeing.
• Outside of the dark triad (empaths, typicals), empathy is unrelated to aggression.
Abstract: A novel psychological construct characterised by high empathy and dark traits: the Dark Empath (DE) is identified and described relative to personality, aggression, dark triad (DT) facets and wellbeing. Participants (n = 991) were assessed for narcissism, Machiavellianism, psychopathy, cognitive empathy and affective empathy. Sub-cohorts also completed measures of (i) personality (BIG5), indirect interpersonal aggression (n = 301); (ii) DT facets of vulnerable and grandiose narcissism, primary and secondary psychopathy and Machiavellianism (n = 285); and (iii) wellbeing (depression, anxiety, stress, anhedonia, self-compassion; n = 240). Latent profile analysis identified a four-class solution comprising the traditional DT (n = 128; high DT, low empathy), DE (n = 175; high DT, high empathy), Empaths (n = 357; low DT, high empathy) and Typicals (n = 331; low DT, average empathy). DT and DE were higher in aggression and DT facets, and lower in agreeableness than Typicals and Empaths. DE had higher extraversion and agreeableness, and lower aggression than DT. DE and DT did not differ in grandiose and vulnerable DT facets, but DT showed lower wellbeing. The DE is less aggressive and shows better wellbeing than DT, but partially maintains an antagonistic core, despite having high extraversion. The presence of empathy did not increase risk of vulnerability in the DE.
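For readers unfamiliar with latent profile analysis: it is typically run in Mplus or in R (e.g., tidyLPA), but the idea can be sketched with a Gaussian mixture model, a close relative. A minimal sketch with placeholder data; the five trait scores and the four-class solution follow the paper, everything else is simulated:

```python
# LPA sketch via a Gaussian mixture: k latent groups, each with its own
# mean profile across the measured traits. Data below are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder matrix: columns = [narcissism, Machiavellianism, psychopathy,
# cognitive empathy, affective empathy], rows = participants (n = 991).
X = rng.normal(size=(991, 5))

gm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
profiles = gm.fit_predict(X)              # class assignment per person
print(np.bincount(profiles))              # class sizes (cf. 128/175/357/331)
print(gm.means_.round(2))                 # mean trait profile per class
```

In practice one fits k = 1…6 and compares fit indices such as BIC (`gm.bic(X)`) before settling on the number of profiles.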
Music training is ineffective regardless of outcome measure (verbal, non-verbal, speed-related, etc.), participants’ age, & duration of training; & has no impact on people’s non-music cognitive skills or academic achievement
Cognitive and academic benefits of music training with children: A multilevel meta-analysis. Giovanni Sala & Fernand Gobet. Memory & Cognition, Jul 29 2020. https://rd.springer.com/article/10.3758/s13421-020-01060-2
Abstract: Music training has repeatedly been claimed to positively impact children’s cognitive skills and academic achievement (literacy and mathematics). This claim relies on the assumption that engaging in intellectually demanding activities fosters particular domain-general cognitive skills, or even general intelligence. The present meta-analytic review (N = 6,984, k = 254, m = 54) shows that this belief is incorrect. Once the quality of study design is controlled for, the overall effect of music training programs is null (ḡ ≈ 0) and highly consistent across studies (τ² ≈ 0). Results of Bayesian analyses employing distributional assumptions (informative priors) derived from previous research in cognitive training corroborate these conclusions. Small statistically significant overall effects are obtained only in those studies implementing no random allocation of participants and employing non-active controls (ḡ ≈ 0.200, p < .001). Interestingly, music training is ineffective regardless of the type of outcome measure (e.g., verbal, non-verbal, speed-related, etc.), participants’ age, and duration of training. Furthermore, we note that, beyond meta-analysis of experimental studies, a considerable amount of cross-sectional evidence indicates that engagement in music has no impact on people’s non-music cognitive skills or academic achievement. We conclude that researchers’ optimism about the benefits of music training is empirically unjustified and stems from misinterpretation of the empirical data and, possibly, confirmation bias.
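For reference, ḡ is the pooled standardized mean difference and τ² the between-study variance. The paper fits multilevel and Bayesian models; a minimal DerSimonian-Laird random-effects sketch on made-up effect sizes shows what "ḡ ≈ 0 and τ² ≈ 0" means computationally:

```python
# DerSimonian-Laird random-effects pooling. The per-study effects and
# variances below are fabricated for illustration.
import numpy as np

g = np.array([0.05, -0.02, 0.10, 0.00, -0.04])   # per-study Hedges' g (fake)
v = np.array([0.02, 0.03, 0.02, 0.04, 0.03])     # per-study sampling variances

w = 1.0 / v                                       # fixed-effect weights
g_fe = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fe) ** 2)                   # heterogeneity statistic
df = len(g) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)                     # between-study variance

w_re = 1.0 / (v + tau2)                           # random-effects weights
g_re = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"g_bar = {g_re:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```

τ² ≈ 0 is the striking part: the near-null effect is not an average over wildly heterogeneous studies but is consistent across them.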
Wednesday, July 29, 2020
Political affiliation of prospective partners: Those in the political out-group are seen as less attractive, less dateable, and less worthy of matchmaking efforts; these effects are modest in size
The Democracy of Dating: How Political Affiliations Shape Relationship Formation. Matthew J. Easton and John B. Holbein. Journal of Experimental Political Science, Jul 29 2020. https://doi.org/10.1017/XPS.2020.21
Abstract: How much does politics affect relationship building? Previous experimental studies have come to vastly different conclusions – ranging from null to truly transformative effects. To explore these differences, this study replicates and extends previous research by conducting five survey experiments meant to expand our understanding of how politics does/does not shape the formation of romantic relationships. We find that people, indeed, are influenced by the politics of prospective partners; respondents evaluate those in the political out-group as being less attractive, less dateable, and less worthy of matchmaking efforts. However, these effects are modest in size – falling almost exactly in between previous study estimates. Our results shine light on a literature that has, up until this point, produced a chasm in study results – a vital task given concerns over growing levels of partisan animus in the USA and the rapidly expanding body of research on affective polarization.
Dementia Incidence Among US Adults Born 1893-1949: Incidence is lower for those born after the mid-1920s, & this lower incidence is not associated with early-life environment as measured in this study
Association of Demographic and Early-Life Socioeconomic Factors by Birth Cohort With Dementia Incidence Among US Adults Born Between 1893 and 1949. Sarah E. Tom et al. JAMA Netw Open. 2020;3(7):e2011094, July 27 2020, doi:10.1001/jamanetworkopen.2020.11094
Key Points
Question Are dementia incidence trends by birth cohort associated with early-life environment?
Findings In this cohort study of 4277 participants in the Adult Changes in Thought study who were born between 1893 and 1949 and were followed up for up to 20 years (1994-2015), the age- and sex-adjusted dementia incidence was lower among those born during the Great Depression (1929-1939) and the period during World War II and postwar (1940-1949) compared with those born in the period before the Great Depression (1921-1928). The association between birth cohort and dementia incidence remained when accounting for early-life socioeconomic environment, educational level, and late-life vascular risk factors.
Meaning The study’s findings indicate that dementia incidence is lower for individuals born after the mid-1920s compared with those born earlier, and this lower incidence is not associated with early-life environment as measured in this study.
Abstract
Importance Early-life factors may be important for later dementia risk. The association between a more advantaged early-life environment, as reflected through an individual’s height and socioeconomic status indicators, and decreases in dementia incidence by birth cohort is unknown.
Objectives To examine the association of birth cohort and early-life environment with dementia incidence among participants in the Adult Changes in Thought study from 1994 to 2015.
Design, Setting, and Participants This prospective cohort study included 4277 participants from the Adult Changes in Thought study, an ongoing longitudinal population-based study of incident dementia in a random sample of adults 65 years and older who were born between 1893 and 1949 and are members of Kaiser Permanente Washington in the Seattle region. Participants in the present analysis were followed up from 1994 to 2015. At enrollment, all participants were dementia-free and completed a baseline evaluation. Subsequent study visits were held every 2 years until a diagnosis of dementia, death, or withdrawal from the study. Participants were categorized by birth period (defined by historically meaningful events) into 5 cohorts: pre–World War I (1893-1913), World War I and Spanish influenza (1914-1920), pre–Great Depression (1921-1928), Great Depression (1929-1939), and World War II and postwar (1940-1949). Participants’ height, educational level, childhood financial stability, and childhood household density were examined as indicators of early-life environment, and later-life vascular risk factors for dementia were assessed. Cox proportional hazards regression models, adjusted for competing survival risk, were used to analyze data. Data were analyzed from June 1, 2018, to April 29, 2020.
Main Outcomes and Measures Participants completed the Cognitive Abilities Screening Instrument every 2 years to assess global cognition. Those with scores indicative of cognitive impairment completed an evaluation for dementia, with dementia diagnoses determined during consensus conferences using criteria from the Diagnostic and Statistical Manual of Mental Disorders, 4th edition.
Results Among 4277 participants, the mean (SD) age was 74.5 (6.4) years, and 2519 participants (58.9%) were women. The median follow-up was 8 years (interquartile range, 4-12 years), with 730 participants developing dementia over 24 378 person-years. The age-specific dementia incidence was lower for those born in 1929 and later compared with those born earlier. Compared with participants born in the pre–Great Depression years (1921-1928), the age- and sex-adjusted hazard ratio was 0.67 (95% CI, 0.53-0.85) for those born in the Great Depression period (1929-1939) and 0.62 (95% CI, 0.29-1.31) for those born in the World War II and postwar period (1940-1949). Although indicators of a more advantaged early-life environment and higher educational level (college or higher) were associated with a lower incidence of dementia, these variables did not explain the association between birth cohort and dementia incidence, which remained when vascular risk factors were included and were similar by sex.
Conclusions and Relevance Age-specific dementia incidence was lower in participants born after the mid-1920s compared with those born earlier. In this population, the decrease in dementia incidence may reflect societal-level changes or individual differences over the life course rather than early-life environment, as reflected through recalled childhood socioeconomic status and measured height, educational level, and later-life vascular risk.
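The reported hazard ratios come from Cox proportional hazards models. A minimal sketch with the lifelines library, on simulated data built to mimic the headline estimate (HR ≈ 0.67 for the 1929-1939 cohort); note the paper additionally adjusts for the competing risk of death, which this sketch omits:

```python
# Cox proportional hazards sketch on simulated data. Column names and the
# data-generating values are invented; only the ~0.67 target HR follows
# the paper's headline result.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
cohort = rng.integers(0, 2, n)                 # 1 = born 1929-1939
female = rng.integers(0, 2, n)
# Exponential event times; dividing by 0.67 for the later cohort scales
# its hazard down by a factor of 0.67.
t = rng.exponential(10.0, n) / np.where(cohort == 1, 0.67, 1.0)

df = pd.DataFrame({
    "T": np.minimum(t, 20.0),                  # administrative censoring at 20y
    "E": (t < 20.0).astype(int),               # 1 = dementia observed
    "cohort_1929_39": cohort,
    "female": female,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()    # exp(coef) for cohort_1929_39 should land near 0.67
```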
Discussion
Among those born at the turn of the 20th century through the mid-20th century who participated in the ACT study, the age-specific dementia incidence was lower for participants born in 1929 and later compared with those born earlier. This trend was not explained by recalled childhood socioeconomic status and measured height, which reflect early-life environment, nor was it explained by educational level and vascular risk as an older adult. The literature on secular dementia trends reports a decrease in dementia incidence starting in the 1990s.1-5 This timing is consistent with participants in the 1929 to 1939 birth cohorts who are entering the eighth decade of life, when dementia risk increases.2,4,31 Political and economic changes during the first half of the 20th century may have had different implications for dementia risk based on the participant’s age during those experiences.32 Analysis by birth cohort captures this intersection of age and calendar time. Our results suggest that societal-level changes in the first half of the 20th century that were not captured by the individual early-life measures or the educational levels used in this study may have been associated with decreases in dementia incidence.
The 40% decrease in the US mortality rate from 1900 to 1940 was likely owing to the decrease in infectious diseases,33 which disproportionately occur in the young. The decrease in dementia incidence observed in the ACT study began with birth cohorts who were born in the middle of this period. These early-life health gains may be factors in the decreased dementia incidence. Although we accounted for family-level socioeconomic status variables and height, these variables may not have captured all changes, such as economic innovation13 and nutritional improvement,12 that may have been associated with decreases in mortality. In addition, variables included in this study may not have captured public health improvements during this period.33 It is possible that unmeasured differences were more important for assessing dementia risk by birth cohort than the socioeconomic factors we measured.
Across birth cohorts, participants with lower financial status and greater household density in childhood had a lower risk of developing dementia, which is inconsistent with our hypothesis and the results of previous studies.34,35 While the Great Depression was a time of financial hardship, those in the pre–Great Depression and the World War I and Spanish influenza birth cohorts were the least likely to report the ability to afford both basic needs and small luxuries, and they had the smallest proportion of participants reporting the most stable childhood financial quartile. This pattern may reflect problems with measurement or sample selection. Participant responses may reflect experiences in later childhood and early adolescence, as recall of early-life experiences may be difficult. In contrast, parental educational levels, which were constant throughout childhood and adolescence for most of the birth cohorts, were higher for the World War I and Spanish influenza cohort and the pre–Great Depression cohort compared with cohorts born earlier. This pattern suggests a higher early-life standard of living in the more recent birth cohorts. Another possibility is that because these 2 birth cohorts were the oldest, those who survived to participate in the study were able to compensate for adverse early-life environments or had less accurate recall than younger participants.
Our study considered death as a competing risk, while a previous case-control study did not.34,35 Most ACT participants were members of Kaiser Permanente Washington (formerly Group Health) when they were younger than 65 years, during which time they primarily received health insurance through large employers. It is likely that those with lower financial status and higher household density during childhood survived adverse experiences to be able to participate in the sample.
Together with height, an individual’s parental educational level, childhood financial stability, and childhood household density are likely to reflect their early-life environment. These variables did not explain the decrease in dementia incidence among the more recent birth cohorts. In a minimally adjusted model, the decrease in dementia incidence began with the Great Depression birth cohort, suggesting that societal-level experiences during later childhood to adolescence may have been more important than those during the in-utero through early childhood phase. If this earliest stage of life were important for dementia incidence, we would expect those born in the Great Depression cohort to have the greatest dementia risk. The largest difference in college completion was found between the pre–Great Depression and Great Depression birth cohorts. This disruption to economic opportunity for those born in the pre–Great Depression years may have had implications for dementia risk. The inclusion of late-life vascular risk factors did not appreciably alter the association between a more recent birth cohort and a lower incidence of dementia, which is consistent with analyses of the Einstein Aging Cohort10 and the Framingham Heart Study, which considered the cohort of study entry.1
We found similar associations between birth cohort and decreased dementia incidence in 2 previous studies. An analysis of the English Longitudinal Study of Aging examined 2 birth cohorts based on birth-year median (1902-1925 and 1926-1943),9 and an analysis of the Einstein Aging Study used a data-focused approach to detect a changing point in continuous birth years.10 Our birth cohort categories were based on historically meaningful events. Because the ACT study is larger than the Einstein Aging Study, we were able to separate participants born after 1928 into 2 groups. In the ACT study, the most recent birth cohort (1940–1949) had higher educational levels and childhood financial stability compared with cohorts born earlier. Such categorization also allowed for the separation of worldwide economic disruption from family-level financial stability.
Our analysis may not have captured differences in adult social experiences. Educational level is associated with subsequent occupation and employment patterns. However, birth cohort may reflect experience of events during the 20th century that had broad implications, regardless of educational level. For example, men born in the first 2 decades of the 20th century are likely to have served in the armed forces during World War II and to have benefitted from the GI bill. Men and women from those birth cohorts would also have benefitted from the postwar economic expansion. Our analysis did not capture such adult experiences.
Limitations
Our study has several limitations. Participants in older cohorts necessarily had to survive longer to be included in the study. Because the greatest risk factor for dementia is age, the survival required of the pre–World War I and the World War I and Spanish influenza birth cohorts to enter the ACT study may create differences in dementia risk that are difficult to detect in these groups. Our results suggest that the most recent birth cohorts may continue to experience lower age-specific dementia incidence. However, the follow-up period is shorter in these birth cohorts. The ACT study participants are from 1 health system in the Pacific Northwest, and their educational level is high. The cohort is a random sample of age-eligible members of Kaiser Permanente Washington; results therefore reflect this specific population but may not be generalizable to the US population. Our results are consistent with a sample from the Bronx, New York,10 and a nationally representative sample from the United Kingdom,9 suggesting that the decrease in dementia incidence by birth cohort may be a widespread phenomenon. Because ACT study participants may be socioeconomically advantaged, the measures of early-life environment included in this study may not be sensitive enough to detect meaningful differences that have implications for dementia incidence trends by birth cohort.
The study did not include key health variables from later in the life course that are associated with dementia risk, notably midlife hypertension, hearing loss, late-life depression, diabetes, physical inactivity, and social isolation.6 As the ACT study is currently collecting data on most of these variables, future studies will be able to more fully capture life-course dementia risk factors. Because the ACT study is long-standing, its follow-up included substantial age overlap across multiple birth cohorts, something that had been lacking in previous studies.9 Dementia diagnosis procedures have been consistent throughout the study. The large size of the ACT study and the theoretical basis of the cohort groups allowed for the inclusion of 2 cohort groups born after 1928 that aligned with historically meaningful events, whereas previous studies have considered only 1 group born after the mid-1920s.9,10
Dementia incidence has decreased in more recent birth cohorts. Our measures of early-life socioeconomic status and educational level do not account for these differences in this study population. Birth cohort may reflect other historical and social changes that occurred during childhood or adulthood.
Self-control is associated with numerous positive outcomes, such as well-being; we argue that hedonic goal pursuit is equally important, & conflicting long-term goals can undermine it in the form of intrusive thoughts
Beyond Self-Control: Mechanisms of Hedonic Goal Pursuit and Its Relevance for Well-Being. Katharina Bernecker, Daniela Becker. Personality and Social Psychology Bulletin, July 26, 2020. https://doi.org/10.1177/0146167220941998
Abstract: Self-control helps to align behavior with long-term goals (e.g., exercising to stay fit) and shield it from conflicting hedonic goals (e.g., relaxing). Decades of research have shown that self-control is associated with numerous positive outcomes, such as well-being. In the present article, we argue that hedonic goal pursuit is equally important for well-being, and that conflicting long-term goals can undermine it in the form of intrusive thoughts. In Study 1, we developed a measure of trait hedonic capacity, which captures people’s success in hedonic goal pursuit and the occurrence of intrusive thoughts. In Studies 2A and 2B, people’s trait hedonic capacity relates positively to well-being. Study 3 confirms intrusive thoughts as a major impeding mechanism of hedonic success. Studies 4 and 5 demonstrate that trait hedonic capacity predicts successful hedonic goal pursuit in everyday life. We conclude that hedonic goal pursuit represents a largely neglected but adaptive aspect of self-regulation.
Keywords: hedonic goals, self-control, self-regulation, well-being
Popular version: Hedonism Leads to Happiness. Zurich Univ. Press Release, Jul 27 2020. https://www.media.uzh.ch/en/Press-Releases/2020/Hedonism.html
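Study 1's scale development and the well-being link in Studies 2A/2B follow a standard pattern: check the new scale's internal consistency, then correlate the trait score with the outcome. A minimal sketch with simulated item responses; all names and values are placeholders, not the paper's data:

```python
# Scale-development sketch: Cronbach's alpha for a new trait measure,
# then a simple trait-outcome correlation. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
latent = rng.normal(size=300)                              # true trait level
items = latent[:, None] + rng.normal(scale=0.8, size=(300, 6))  # 6 items

def cronbach_alpha(m):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = m.shape[1]
    return k / (k - 1) * (1 - m.var(axis=0, ddof=1).sum()
                          / m.sum(axis=1).var(ddof=1))

score = items.mean(axis=1)
wellbeing = 0.3 * latent + rng.normal(size=300)            # built-in link
r, p = pearsonr(score, wellbeing)
print(f"alpha = {cronbach_alpha(items):.2f}, r = {r:.2f}, p = {p:.4f}")
```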
The inherent difficulty in accurately appreciating the engaging aspect of thinking activity could explain why people prefer keeping themselves busy, rather than taking a moment for reflection & imagination
Hatano, Aya, Cansu Ogulmus, Hiroaki Shigemasu, and Kou Murayama. 2020. “Thinking About Thinking: People Underestimate Intrinsically Motivating Experiences of Waiting.” PsyArXiv. July 29. doi:10.31234/osf.io/n2ctk
Abstract: The ability to engage in internal thoughts without external stimulation is one of the hallmark characteristics of humans. The current research tested the hypothesis that people metacognitively underestimate their capability to positively engage in just thinking. Participants were asked to sit and wait in a quiet room without doing anything for a certain amount of time (e.g., 20 min). Before the waiting task, they made a prediction about how intrinsically motivating the task would be at the end of the task; they also rated their experienced intrinsic motivation after the task. Across six experiments we consistently found that participants’ predicted intrinsic motivation for the waiting task was significantly lower than their experienced intrinsic motivation. This underestimation effect was robustly observed regardless of the independence of predictive rating, the amount of sensory input, duration of the waiting task, timing of assessment, and cultural contexts of participants. This underappreciation of just thinking also led participants to proactively avoid the waiting task when there was an alternative task (i.e., internet news checking), even though their experienced intrinsic motivation was not statistically different between tasks. These results suggest an inherent difficulty in accurately appreciating how engaging just thinking can be, which could explain why people prefer keeping themselves busy rather than taking a moment for reflection and imagination in daily life.
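The core comparison is within-subject: each participant's predicted rating against that same participant's experienced rating. A minimal sketch of that paired test, with fabricated ratings built to show the reported underestimation pattern:

```python
# Paired (within-subject) test of predicted vs. experienced intrinsic
# motivation. Ratings are fabricated to illustrate the analysis only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 60
experienced = rng.normal(4.0, 1.0, n)                   # e.g., a 1-7 scale
predicted = experienced - 0.8 + rng.normal(0, 0.7, n)   # systematically lower

t, p = ttest_rel(predicted, experienced)
diff = predicted - experienced
d = diff.mean() / diff.std(ddof=1)                      # paired Cohen's d
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```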
Gender differences in the trade-off between objective equality and efficiency: The results show that females prefer objective equality over efficiency to a greater extent than males do
Gender differences in the trade-off between objective equality and efficiency. Valerio Capraro. Judgment and Decision Making, Vol. 15, No. 4, July 2020, pp. 534–544. http://journal.sjdm.org/19/190510/jdm190510.pdf
Abstract: Generations of social scientists have explored whether males and females act differently in domains involving competition, risk taking, cooperation, altruism, honesty, as well as many others. Yet, little is known about gender differences in the trade-off between objective equality (i.e., equality of outcomes) and efficiency. It has been suggested that females are more egalitarian than males, but the empirical evidence is relatively weak. This gap is particularly important, because people with the power to redistribute resources often face a conflict between equality and efficiency. The recently introduced Trade-Off Game (TOG) – in which a decision-maker has to unilaterally choose between being equal or being efficient – offers a unique opportunity to fill this gap. To this end, I analyse gender differences on a large dataset including N=6,955 TOG decisions. The results show that females prefer objective equality over efficiency to a greater extent than males do. The effect turns out to be particularly strong when the TOG available options are “morally” framed in such a way as to suggest that choosing the equal option is the right thing to do.
Keywords: trade-off game, gender, equality, efficiency
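The headline question, whether females choose the equal option more often and whether moral framing amplifies this, maps naturally onto a logistic regression with an interaction term. A sketch with simulated choices; the variable names and effect sizes are invented, not the paper's:

```python
# Logistic regression with a gender x framing interaction, on simulated
# binary TOG choices. Coefficients below are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
female = rng.integers(0, 2, n)
moral_frame = rng.integers(0, 2, n)
linpred = -0.2 + 0.3 * female + 0.2 * moral_frame + 0.4 * female * moral_frame
chose_equal = rng.random(n) < 1 / (1 + np.exp(-linpred))

df = pd.DataFrame({"chose_equal": chose_equal.astype(int),
                   "female": female, "moral_frame": moral_frame})
model = smf.logit("chose_equal ~ female * moral_frame", data=df).fit()
print(model.summary())   # the interaction term tests the framing amplification
```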
Some charities are much more cost-effective than others, which means that they can do more with the same amount of money; yet most donations do not go to the most effective charities. Why is that?
Donors vastly underestimate differences in charities’ effectiveness. Lucius Caviola et al. Judgment and Decision Making, Vol. 15, No. 4, July 2020, pp. 509–516. http://journal.sjdm.org/20/200504/jdm200504.pdf
Abstract: Some charities are much more cost-effective than other charities, which means that they can save many more lives with the same amount of money. Yet most donations do not go to the most effective charities. Why is that? We hypothesized that part of the reason is that people underestimate how much more effective the most effective charities are compared with the average charity. Thus, they do not know how much more good they could do if they donated to the most effective charities. We studied this hypothesis using samples of the general population, students, experts, and effective altruists in six studies. We found that lay people estimated that among charities helping the global poor, the most effective charities are 1.5 times more effective than the average charity (Studies 1 and 2). Effective altruists, in contrast, estimated the difference to be a factor of 30 (Study 3), and experts estimated the factor to be 100 (Study 4). We found that participants donated more to the most effective charity, and less to an average charity, when informed about the large difference in cost-effectiveness (Study 5). In conclusion, misconceptions about the difference in effectiveness between charities are thus likely one reason, among many, why people donate ineffectively.
Keywords: cost-effectiveness, charitable giving, effective altruism, prosocial behavior, helping
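The arithmetic behind the stakes is simple: if the best charity is k times as cost-effective as the average one, a fixed donation to it does as much good as k times that donation to the average charity. A quick illustration using the three estimated multipliers from the abstract:

```python
# What each group's effectiveness multiplier implies for a $1,000 donation.
donation = 1_000
for who, k in [("lay estimate", 1.5),
               ("effective altruists", 30),
               ("experts", 100)]:
    print(f"{who:>20}: $1,000 to the best charity ~ "
          f"${donation * k:,.0f} given to the average one")
```

If the expert estimate is right, the lay estimate understates the value of choosing well by nearly two orders of magnitude, which is exactly the misconception Study 5 corrects.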
Action and inaction are perceived and evaluated differently; these asymmetries have been shown to have real impact on choice behavior in both personal & interpersonal contexts
Omission and commission in judgment and decision making: Understanding and linking action‐inaction effects using the concept of normality. Gilad Feldman Lucas Kutscher Tijen Yay. Social and Personality Psychology Compass, July 27 2020. https://doi.org/10.1111/spc3.12557
Abstract: Research on action and inaction in judgment and decision making now spans over 35 years, with ever‐growing interest. Accumulating evidence suggests that action and inaction are perceived and evaluated differently, affecting a wide array of psychological factors from emotions to morality. These asymmetries have been shown to have real impact on choice behavior in both personal and interpersonal contexts, with implications for individuals and society. We review impactful action‐inaction related phenomena, with a summary and comparison of key findings and insights, reinterpreting these effects and mapping links between effects using norm theory's (Kahneman & Miller, 1986) concept of normality. Together, these aim to contribute towards an integrated understanding of the human psyche regarding action and inaction.
Tuesday, July 28, 2020
Why Do Leaders Express Humility and How Does This Matter: A Rational Choice Perspective
Why Do Leaders Express Humility and How Does This Matter: A Rational Choice Perspective. JianChun Yang, Wei Zhang and Xiao Chen. Front. Psychol., August 21 2019. https://doi.org/10.3389/fpsyg.2019.01925
Abstract: The utility of leader humility expressing behavior has been examined by several studies across multiple levels. However, our knowledge about why leaders express humility continues to be sparse. Drawing on rational choice theory, this paper proposes a model examining whether followers’ capability triggers leader’s humility expressing behavior and how followers’ interpretations of it influence its effectiveness. Results from 278 leader-follower dyads from a time-lagged research design showed that followers’ capability as perceived by the leader is positively related to leader-expressed humility and, in turn, this behavior would conditionally enhance follower trust, that is, followers will trust the humble leader less when they attribute leader’s expressed humility more to serving impression management motives. Several theoretical and practical implications of this observation are discussed in this study.
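The conditional effect, humility building less trust when followers read it as impression management, is the interaction term in a moderated regression. A minimal sketch with simulated dyad-level data; the variable names are invented, and the paper's time-lagged design is not modeled here:

```python
# Moderated regression sketch: does attributed impression-management
# motive weaken the humility -> trust link? Simulated dyad-level data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 278                                        # matches the sample size only
humility = rng.normal(size=n)                  # leader-expressed humility
im_motive = rng.normal(size=n)                 # attributed impression mgmt
trust = 0.5 * humility - 0.3 * humility * im_motive + rng.normal(size=n)

df = pd.DataFrame({"trust": trust, "humility": humility,
                   "im_motive": im_motive})
model = smf.ols("trust ~ humility * im_motive", data=df).fit()
print(model.params.round(2))   # negative humility:im_motive = weakened effect
```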
Discussion
Drawing on rational choice theory, the present study has investigated why leaders often express humility and how this matters to followers. We have found that when a leader perceives his or her followers to be highly capable, the leader is more likely to express humility. We have also found that leader humility can promote followers' trust in the leader. Finally, we have traced the full process underpinning these dyad-level leader-follower interactions. By making their abilities more visible to their leader, followers can elicit more leader-expressed humility, and, in turn, through leaders' humility expressions, followers can develop greater trust in their leaders. This interaction hinges on followers' positive inferences about the motives behind the leader's expressions of humility; when followers interpret leader humility as serving impression management motives, such behavior is less likely to increase follower trust.
However, we did not find that inferred performance enhancement motives moderated the relationship between leader humility and follower trust. Although initially unexpected, this result is reasonable. Drawing from rational choice theory (Lewicki et al., 2006), the attribution of behavior motives has more to do with identifying a mismatch between behavior and intention. Such a matching process helps individuals avoid trusting the wrong person (Elangovan et al., 2007). However, the actual source of an increase or decrease in trust is usually more related to the characteristics of the trustee (Mayer et al., 1995). Thus, compared to leaders' humility characteristics, followers' attribution of leader humility motives may have less impact on followers' trust in the leader. Additionally, individuals are more sensitive to negative information and events ("negativity bias", Rozin and Royzman, 2001), which may also explain the unsupported hypothesis. We strongly encourage future research to dig further into this issue.
Theoretical Implications
The present research has contributed to leadership and leader humility literature in several ways. Firstly, our study is the first to examine situational predictors of leader humility. By treating humility expression targets as possible antecedents of leader humility, the present study has provided a novel understanding of why leaders express humility. Most previous studies on leader humility have focused on its positive outcomes (Owens et al., 2013; Ou et al., 2014b, 2017; Owens and Hekman, 2016); few examined the antecedents of leader humility. Moreover, the few research scholars who had evaluated individual differences such as personal traits or life experience as the antecedents of humility have treated humble leadership as a trait-relevant leadership style (Morris et al., 2005; Owens, 2009). Although many scholars have proposed that contextual factors such as safe climate could trigger greater leader-expressed humility (Owens, 2009), empirical research examining the situational predictors of leader humility is scarce. Drawing from rational choice theory, this research has found that follower capability can trigger leaders to express humility. Thus, this research has been able to explain leaders' expressions of humility.
Secondly, our study contributes to the leadership literature by emphasizing the follower-centric view, which treats followers as a critical factor that can shape leaders' behavior and influence leadership effectiveness. Most previous leadership studies have endorsed a leader-centric view, in which followers are considered only as recipients or moderators of leadership, while neglecting the follower-centric view, in which followers act as "constructors" of leadership (Howell and Shamir, 2005; Uhl-Bien et al., 2014). The present study is the first to empirically test followers' role in the processes underlying humble leadership. First, followers help shape the leader's expressions of humility; specifically, followers can elicit more leader humility when their abilities become salient to their leaders. Second, followers' interpretations affect the outcomes of humble leadership. As with other positive leadership styles, such as transformational leadership (Dasborough and Ashkanasy, 2002), positive outcomes are not guaranteed if followers interpret the leader's behavior as insincere. Accordingly, we found that leader humility does not lead to greater follower trust when it is interpreted as serving impression-management motives.
Thirdly, the present study furthers the understanding of leader humility by integrating rational choice theory with leadership theory. As the Confucian proverb says, "haughtiness invites loss while humility brings benefits," and humility has indeed been credited with intrapersonal and interpersonal benefits (Cai et al., 2010). Consistent with rational choice theory, we found that a leader's perception of followers' capabilities positively influences the leader's humility-expressing behavior, since the leader stands to benefit more from being humble with capable followers. This accords with the view that leaders exhibit more positive behaviors toward outstanding followers (Uhl-Bien et al., 2014). Furthermore, we proposed and found that followers' rational attributions about leader humility condition the relationship between leader humility and follower trust. By integrating rational choice theory with the construct of leader humility, we thus obtain a deeper understanding of the interaction between humble leaders and their followers.
Practical Implications
For managerial practice, we hope both leaders and followers can draw insights for their daily workplace interactions. Followers should recognize that leader-expressed humility is malleable, which suggests two pieces of advice for promoting better leader-follower interactions. First, followers have a role in stimulating positive behavior on the part of the leader: by performing better at their jobs, followers can earn the appreciation of others in the workplace, including their leaders. Second, followers should recognize that leaders may exhibit certain behaviors on the basis of instrumental calculation, and should therefore inform themselves thoroughly before arriving at a final evaluation of their leader (Tepper et al., 2011).
As for leaders, although they may expect positive outcomes from consistently expressing humility toward their followers, they should reflect on the sincerity of their own humility expressions in case the outcomes fall short of expectations. Leaders should appreciate the importance of genuinely serving their followers and the group rather than themselves. If leaders merely put on a humble act while looking after their own interests, their true intentions will sooner or later become apparent to their followers and erode existing trust. By demonstrating that their intentions and behavior are consistent (Simons, 2002), leaders can protect themselves from being perceived as hypocritical.
One limitation worth noting is that we found small effect sizes of follower capability on leader humility expression, which raises the question of whether these effects have meaningful implications for management practice. We believe the answer is yes. Firstly, our effect sizes are comparable to those reported in previous studies of leader humility (Qian et al., 2018). Secondly, despite their small size, these effects hold after ruling out the influence of leaders' trait humility. The results are practically meaningful because they indicate that humble leadership, that is, humility-expressing behavior, can be cultivated by shaping situational factors. Unlike previous studies that considered only individual differences as antecedents of leader humility, our study suggests that organizations can create environments that elicit humble leader behaviors, rather than relying largely on selecting leaders who are already high in trait humility.
Limitations and Future Research
Firstly, although we used matched leader-follower data and multi-wave data collection to test our hypotheses, this sampling design does not support causal inference; we recommend longitudinal studies to evaluate the causal relationships. Secondly, our study was conducted in China, where acting humbly is a cultural norm and individuals are discouraged from showing off (Hwang, 1982; Kurman and Sriram, 1997). People in such settings may act humbly against their true will and feelings, and may also interpret others' humility in varying ways. It is therefore unclear to what extent our results generalize to Western contexts. We advocate further research on whether and how cultural differences influence the proposed model, for example, whether leaders with a more interdependent self-construal, who place greater value on harmonious interpersonal relationships, express more humble behavior. Thirdly, the present study leaves implicit a hypothesis underlying the empirical analyses, namely that leaders perceive followers with higher capability as having higher utility. As other researchers have argued, however, leaders facing highly capable followers may sense both utility and personal threat at the same time (Khan et al., 2018). Finally, we acknowledge that rational consideration is only one lens through which to understand the situational predictors of leaders' humility expression; leaders' less rational, emotional perceptions may also serve as situational predictors. These limitations point to promising directions for future research on humble leadership.