Are Humans Prepared to Detect, Fear, and Avoid Snakes? The Mismatch between Laboratory and Ecological Evidence. Carlos M. Coelho et al. Front. Psychol. Aug 28 2019, doi: 10.3389/fpsyg.2019.02094
Abstract: Since Seligman's 1971 statement that the vast majority of phobias concern objects essential to the survival of the species, a multitude of laboratory studies have followed, supporting the finding that humans learn to fear and detect snakes (and other animals) faster than other stimuli. Most of these studies used schematic drawings, images, or pictures of snakes, and only a small amount of fieldwork in naturalistic environments has been done. We address fear-preparedness theories and automatic fast-detection findings from mainstream laboratory research and compare them with ethobehavioural information on snakes, predator-prey interaction, and the kinematics of snakes' defensive strikes, in order to analyse how well they match. Four main findings arose from this analysis: (1) snakebites occur when people are very close to the snake and are unaware of it or unable to escape the bite; (2) human visual detection and escape responses are slow compared with the speed of snake strikes; (3) in natural environments, even snake experts are often unable to see snakes nearby; (4) animate objects in general capture more attention than other stimuli, and objects that are dangerous but evolutionarily recent can also be detected fast. These issues pose several challenges to evolutionary-psychology theories that expect to find special-purpose neural modules. The older selective habituation hypothesis (Schleidt, 1961), that prey animals start with a rather general predator image from which specific harmless cues are removed by habituation, might deserve reconsideration.
Keywords: General feature detection, modular theory, snake bite kinematics, selective habituation hypothesis, evolutionary psychology
A difficult conversation: Delivering news of a fatal illness to one's own wife
A Difficult Conversation. Martin B. Wice. August 27, 2019. JAMA. 2019;322(8):727-728. doi:10.1001/jama.2019.11757
[Full text, references, etc., at the journal above]
In her late 50s, Cathy developed fatigue. Over the course of a few months, she began to take afternoon naps and fall asleep earlier than normal. Then, over a 2-week period, she gained 15 pounds. Overnight, her belly swelled full of fluid, and she appeared 7 months pregnant. The next day, Cathy's primary care physician sent her to the hospital emergency department for a CT scan of her abdomen and pelvis. As a physician myself seeing patients at another nearby hospital, I logged into the network computer and, with Cathy’s permission, reviewed her scan results with the emergency department physician over the phone. Cathy's imaging and subsequent peritoneal fluid were consistent with advanced ovarian cancer—to me, a death sentence. I had delivered poor prognoses many times during my career, so I took it upon myself to give Cathy the bad news and ensure that I would be with her during this difficult conversation. This was especially challenging because Cathy was no ordinary patient—she was my wife of 31 years.
I finished my rounds and quickly drove to the emergency department where Cathy was waiting. On the drive, questions rushed through my head. How do you tell the woman you love that this disease can be only temporarily contained with surgery and chemotherapy? How do you tell her that she will never see her children graduate from their respective professional schools, will never watch her children get married, will never meet her grandchildren, and will never enjoy her golden years? How do you tell your children that their mother will be gone in a few short years? How do I as a husband cope with the stress of caring for a wife severely compromised not only from her cancer but from the cancer treatments? How do I cope knowing that soon I will need to learn to live without my life partner, the love of my life, the person whom I and others consistently relied on for support?
Cathy would have known the answers to these questions. She was the one to whom friends, children of friends, and even her own children's classmates came for unbiased and compassionate advice. As a wife, she always provided support and direction. If I ever questioned her, she reminded me that agreeing with her demonstrated good judgment.
As a mother of twin boys, Cathy guided our children on their journey to becoming kind, giving, and productive adults. She was also their advocate. One of our sons struggled in school with a “central processing” hearing disorder because of which he could hear the sounds of speech but could not distinguish their meaning. Cathy supported him through a grueling therapy program and installed sound amplification systems in each of his classrooms so that he could hear his teachers' voices over the background noise. Cathy then went on to advocate for better diagnosis and treatment of all hearing impaired children in our community and had great success advocating for proper acoustics in each K-12 classroom in our area.
Cathy also supported my parents as if she were their own daughter. When my father developed Alzheimer disease, she helped care for him until his death. With our local Alzheimer Association, she created a handbook for hospitals on how to interact with patients with dementia, an endeavor that earned her the volunteer of the year award. When my mother had a stroke, Cathy cared for her in our home. Now, it was our turn to support, advocate for, and care for Cathy.
Once I arrived at the emergency department, I pulled myself together to have the most difficult conversation of my life. I called upon my training as a physician to deliver bad news: you assess the patient's medical, functional, emotional, and spiritual needs, as well as the patient’s family's needs. After weighing the pros and cons of each option, you determine the best approach to address these needs. You then sit down with the patient and family members and have a private, nonjudgmental, and supportive conversation. Speaking at eye level, you discuss the situation as partners. You pause and allow the information to sink in. You have a box of tissues at the ready. You offer a calming touch as appropriate. If and when the patient and family members are able to continue the discussion, you outline the various scenarios and the best outcomes. You pause again. You allow the patient and family to respond further. You address their concerns and questions. I had done this many times but never as both the doctor guiding the conversation and the family taking it in.
I sat at Cathy's side and held her hand. I called upon my extensive training to navigate this conversation, and then delivered the horrible news. “Your abdomen is filled with abnormal cells, cells that are from advanced ovarian cancer,” I said, still overwhelmed and trying to contain my own shock. I paused to give both her and me time to process the information. As husband and wife, we shed tears together. I promised to always advocate for her, always seek out the best possible care for her, and always be there for her. The gynecologic oncologist arrived, and we discussed the treatment plan. I vacillated between 2 worlds: at times, a physician; at times, a worried husband and now caregiver. This would be a twilight zone I would never leave. We shared the initial shock of her diagnosis the remainder of the night. Neither one of us had much sleep. We did what we always did in a crisis: we held each other tight for mutual support.
The next day, we celebrated our wedding anniversary with Cathy undergoing a bowel prep for the imminent tumor debulking procedure. Being board-certified in both internal medicine and physical medicine and rehabilitation, and with official orders from her oncologist, I organized a customized cancer rehabilitation program for Cathy. And I kept my promise. I supported Cathy through her initial surgery and then through her multiple cycles of chemotherapy and its debilitating consequences: the fatigue, low blood counts and multiple transfusions, pneumonias and urinary tract infections, recurrent nausea and vomiting, hair loss, painful peripheral neuropathy, edema, and “chemo brain.” She lost strength, balance, hearing, renal function, mobility, and she lost her way of life.
As a husband, I provided her emotional support to counteract her frustrations, fears, and depression. I provided physical support when necessary, helping her bathe, dress, stand, and walk. When I could, I took her to chemotherapy. When others took her, I would visit her in the outpatient cancer center. When she was hospitalized, I would spend the night with her. Later, when she lost all bowel function, I connected her to intravenous fluids each morning and night. When her liver stopped working altogether, I took her home to die in a familiar setting, surrounded by her immediate family. Cathy passed away 2 days later, 3 and a half years after her initial diagnosis, with her sons and me at her bedside.
In this time of challenge, I found inspiration in these words by philosopher John O’Donohue:
When the reverberations of shock subside in you,
May grace come to restore you to balance.
May it shape a new space in your heart
To embrace this illness as a teacher
Who has come to open your life to new worlds.
I have grown a great deal in my “new world.” I rebalanced my life during Cathy’s illness and after her death. She and I spent our remaining time together to the fullest, and I have since continued living each day as if it is my last. I developed a better appreciation for relationships, the wonders of nature, and my connection to the rest of the world. There is new space in my heart that has allowed me to deepen established and new relationships. My love and support for our children increased, as I did my best to compensate for the love and support that Cathy had so generously given in life. In her final days, Cathy told me to find someone new to share my life, and I did. Cathy’s illness taught me that the challenges of disease can enhance people’s lives, and I am grateful for this lesson in my own.
On a professional level, Cathy’s illness strengthened my empathy for and commitment to patients and their families at intense times of need. Each time I give bad news, I am transported back to the emergency department with Cathy. I not only relive the delivery of Cathy’s cancer diagnosis; I relive the shock of just having received it. I will never forget being caught in the world of a physician, a loving husband, and a caregiver. My new tripartite identity may not be a perspective I anticipated but is one that has enhanced the quality of care I provide.
With her diagnosis, Cathy and I were forced to acknowledge that life is limited, which gave our lives and the lives around us so much more meaning. This is a legacy Cathy gave to me. No difficult conversation can ever replace this.
...
Additional Information: I thank Elizabeth Mueller, BS, Elise Alspach, PhD, and Kathleen Schoch, PhD, for editorial assistance and feedback in association with InPrint: A Scientific Editing Network at Washington University in St Louis. None were compensated beyond their usual salary. I also thank my son for allowing me to share this story.
Proximity (Mis)perception: Public Awareness of Nuclear, Refinery, and Fracking Sites
Proximity (Mis)perception: Public Awareness of Nuclear, Refinery, and Fracking Sites. Benjamin A. Lyons, Heather Akin, Natalie Jomini Stroud. Risk Analysis, August 27 2019. https://doi.org/10.1111/risa.13387
Abstract: Whether on grounds of perceived safety, aesthetics, or overall quality of life, residents may wish to be aware of nearby energy sites such as nuclear reactors, refineries, and fracking wells. Yet people are not always accurate in their impressions of proximity. Indeed, our data show that only 54% of Americans living within 25 miles of a nuclear site say they do, and even fewer fracking‐proximal (30%) and refinery‐proximal (24%) residents respond accurately. In this article, we analyze factors that could either help people form more accurate perceptions or distort their impressions of proximity. We evaluate these hypotheses using a large national survey sample and corresponding geographic information system (GIS) data. Results show that among those living in close proximity to energy sites, those who perceive greater risk are less likely to report living nearby. Conversely, social contact with employees of these industries increases perceived proximity regardless of actual distance. These relationships are consistent across each site type we examine. Other potential factors—such as local news use—may play a role in proximity perception on a case‐by‐case basis. Our findings are an important step toward a more generalizable understanding of how the public forms perceptions of proximity to risk sites, showing multiple potential mechanisms of bias.
1 INTRODUCTION
Living near sites such as nuclear reactors, refineries, and fracking wells can cause anxiety. Sites like these can pose high‐magnitude risks to human health, although the likelihood is low (e.g., Bertazzi, Pesatori, Zocchetti, & Latocca, 1989; Mitka, 2012; Vesely & Rasmuson, 1984). The proximity of such sites to one's residence can factor into important life decisions like home ownership or beginning a family (Boyle & Kiel, 2001; Doyle et al., 2000). The not‐in‐my‐backyard (NIMBYism) phenomenon, in which locals oppose new development, is a manifestation of such concerns (Lima, 2004; Lima & Marques, 2005). In addition, living near these sites may be undesirable to some solely on aesthetic grounds (Kiel & McClain, 1995). There also are desirable consequences from knowing that one lives near a particular site. For instance, this knowledge can lead residents to develop plans of action in case of complications or emergencies (Cuite, Schwom, & Hallman, 2016; Perko, Železnik, Turcanu, & Thijssen, 2012; Zeigler, Brunn, & Johnson, 1981).
However, people are not always correct in their impressions of whether they live near energy sites. Indeed, our data show that only 54% of Americans living within 25 miles of a nuclear site say they do, and even fewer fracking‐proximal (30%) and refinery‐proximal (24%) residents respond accurately. There is ample evidence that factors beyond reality affect beliefs about one's surroundings, and of proximity, in particular (Cesario & Navarrete, 2014; Craun, 2010; Giordano, Anderson, & He, 2010; Howe, 1988). In this article, we analyze what factors correlate with perceived proximity to three distinct types of sites: nuclear sites, refineries, and fracking wells. We model how orientations toward information (risk perception, general science knowledge) and access to sources of information (news consumption, social contact) relate with perceptions of proximity.
As outlined shortly, each of these factors can lead to correct beliefs about one's proximity to energy sites. At the same time, they also can have a distorting effect, making people believe that they live closer (or farther) than they do in actuality. Watching local news, for instance, could yield a better understanding of where these sites exist, or could correlate with the belief that these sites are more proximate than they are in reality. We evaluate perceived proximity using a large national survey sample and corresponding GIS data that allow us to know exactly how proximate each respondent is from one of these sites.
Examining perceived proximity across three different types of sites allows us to move research on proximity perception forward. We find that risk perception and social contact are consistently associated with proximity misperception. However, our results show that it is not the case that these factors solely promote correct or incorrect beliefs. Rather, context—in this case, actual distance—is key. Dependent on actual distance, factors like risk perception and social contact can increase the probability that one's reported proximity is accurate for some, but increase the probability that one inaccurately reports that one lives nearby for others. Ultimately, our findings illuminate barriers to successful information campaigns, and potential ways to overcome them.
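The study's GIS step hinges on knowing each respondent's true distance to the nearest site, against which self-reported proximity (the 25-mile "nearby" judgment from the abstract) can be scored. As a rough illustration only, not the authors' pipeline, great-circle distance between a respondent's coordinates and a site's coordinates can be sketched with the haversine formula; the function names and sample coordinates below are hypothetical:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points in degrees."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def lives_within(resident, site, threshold_miles=25.0):
    """True if the resident's coordinates fall within the threshold of the site."""
    return haversine_miles(*resident, *site) <= threshold_miles

# One degree of latitude is roughly 69 miles, so these two points are "nearby"
# under the 25-mile criterion only if the threshold is relaxed.
print(lives_within((40.0, -75.0), (41.0, -75.0)))  # False: ~69 miles apart
```

Comparing such a computed flag with a respondent's yes/no answer is what yields accuracy figures like the 54%, 30%, and 24% reported above.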
Science teams' impact is predicted more by the lower-citation rather than the higher-citation members; teams tend to assemble among individuals with similar citation impact in all fields of science and patenting
Decoding team and individual impact in science and invention. Mohammad Ahmadpoor and Benjamin F. Jones. Proceedings of the National Academy of Sciences, July 9, 2019 116 (28) 13885-13890. https://doi.org/10.1073/pnas.1812341116
Significance: Scientists and inventors increasingly work in teams. We track millions of individuals across their collaboration networks to help inform fundamental features of team science and invention and help solve the challenge of assessing individuals in the team production era. We find that in all fields of science and patenting, team impact is weighted toward the lower-impact rather than higher-impact team members, with implications for the output of specific teams and team assembly. In assessing individuals, our index substantially outperforms existing measures, including the h index, when predicting paper and patent outcomes or when characterizing eminent careers. The findings provide guidance to research institutions, science funders, and scientists themselves in predicting team output, forming teams, and evaluating individual impact.
Abstract: Scientists and inventors increasingly work in teams, raising fundamental questions about the nature of team production and making individual assessment increasingly difficult. Here we present a method for describing individual and team citation impact that both is computationally feasible and can be applied in standard, wide-scale databases. We track individuals across collaboration networks to define an individual citation index and examine outcomes when each individual works alone or in teams. Studying 24 million research articles and 3.9 million US patents, we find a substantial impact advantage of teamwork over solo work. However, this advantage declines as differences between the team members’ individual citation indices grow. Team impact is predicted more by the lower-citation rather than the higher-citation team members, typically centering near the harmonic average of the individual citation indices. Consistent with this finding, teams tend to assemble among individuals with similar citation impact in all fields of science and patenting. In assessing individuals, our index, which accounts for each coauthor, is shown to have substantial advantages over existing measures. First, it more accurately predicts out-of-sample paper and patent outcomes. Second, it more accurately characterizes which scholars are elected to the National Academy of Sciences. Overall, the methodology uncovers universal regularities that inform team organization while also providing a tool for individual evaluation in the team production era.
Keywords: team science collaboration prediction team organization
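The harmonic-average claim in the abstract is easy to see with a toy calculation (a sketch with invented citation indices, not the authors' data or code). The harmonic mean is pulled strongly toward the smallest value, which is why team impact tracks the lower-citation members:

```python
# Hypothetical individual citation indices for a two-person team.
# The harmonic mean weights low values heavily, so the weaker member
# dominates the predicted team impact.
from statistics import harmonic_mean

indices = [2.0, 8.0]
print(harmonic_mean(indices))  # 3.2 -- much closer to 2.0 than the arithmetic mean of 5.0
```

As the gap between members grows, the harmonic mean falls further below the arithmetic mean, mirroring the reported decline in the teamwork advantage.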
Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t
Does Media Literacy Help Identification of Fake News? Information Literacy Helps, but Other Literacies Don’t. S. Mo Jones-Jang, Tara Mortensen, Jingjing Liu. American Behavioral Scientist, August 28, 2019. https://doi.org/10.1177/0002764219869406
Abstract: Concerns over fake news have triggered a renewed interest in various forms of media literacy. Prevailing expectations posit that literacy interventions help audiences to be “inoculated” against any harmful effects of misleading information. This study empirically investigates such assumptions by assessing whether individuals with greater literacy (media, information, news, and digital literacies) are better at recognizing fake news, and which of these literacies are most relevant. The results reveal that information literacy—but not other literacies—significantly increases the likelihood of identifying fake news stories. Interpreting the results, we provide both conceptual and methodological explanations. Particularly, we raise questions about the self-reported competencies that are commonly used in literacy scales.
Keywords: fake news, media literacy, information literacy, digital literacy, news literacy, misinformation, disinformation
Gangestad et al. (this issue) recently published alternative analyses of our open data & state that women show ovulatory shifts in preferences for men’s bodies; we think the results are not robust
Penke, Lars, Julia Stern, Ruben C. Arslan, and Tanja M. Gerlach. 2019. "No Robust Evidence for Cycle Shifts in Preferences for Men's Bodies in a Multiverse Analysis: A Response to Gangestad et al. (2019)." PsyArXiv. August 28. doi:10.31234/osf.io/pdsuy
Abstract: Gangestad et al. (this issue) recently published alternative analyses of our open data to investigate whether women show ovulatory shifts in preferences for men’s bodies. They argue that a significant three-way interaction between log-transformed hormones, a muscularity component, and women’s relationship status provides evidence for the ovulatory shift hypothesis. Their conclusion is opposite to the one we previously reported (Jünger et al., 2018). Here, we provide evidence that Gangestad et al.’s differing conclusions are contaminated by overfitting, clarify reasons for deviating from our preregistration in some aspects, discuss the implications of data-dependent re-analysis, and report a multiverse analysis which provides evidence that their reported results are not robust. Further, we use the current debate to contrast the risk of prematurely concluding a null effect against the risk of shielding hypotheses from falsification. Finally, we discuss the benefits and challenges of open scientific practices, as contested by Gangestad et al., and conclude with implications for future studies.
No Robust Evidence for Cycle Shifts in Preferences for Men's Bodies in a Multiverse Analysis
Results bolster the small body of literature showing that flashbulb memories are subject to reconstructive processes & suggest that memories decay further between 3 & 5 months after events
Krackow, E., Deming, E., Longo, A., & DiSciullo, V. (2019). Memories of learning the news of the 2016 U.S. presidential election results. Psychology of Consciousness: Theory, Research, and Practice. http://dx.doi.org/10.1037/cns0000201
Abstract: The current study examined the consistency of flashbulb memories for the 2016 U.S. presidential election outcome by comparing a Time 1 memory that was obtained after the expected point of memory consolidation to a Time 2 memory obtained within a delay in which memories were expected to remain consistent based on the majority of literature. Despite expected consistency, narrative reports showed substantial change and several specific question responses showed a substantial range of change from Time 1 to Time 2. Changes in response to specific questions correlated significantly with the tendency to provide new information in the Time 2 narrative. Emotional determinants (feelings about the election result outcome) and emotion regulation abilities did not predict consistency of memories. Participant stress ratings showed a small but significant negative correlation with change in memory (greater stress = greater omission of information). These results bolster the small body of literature showing that flashbulb memories are subject to reconstructive processes and combined with other results (Krackow, Lynn, & Payne, 2005), they suggest that memories may decay further between 3 and 5 months following an event.
Tuesday, August 27, 2019
Understanding hostility in online political discussions: Non-hostile people opt out of the discussions, individuals prone to hostility could be more likely to participate in online than offline discussions
Why so angry? Understanding hostility in online political discussions. Alexander Bor & Michael Bang Petersen. Online disinformation: an integrated view conference 2019. Aarhus University. https://nordis.research.it.uu.se/wp-content/uploads/2019/04/Online-Disinformation-Conference-Abstracts.pdf
Most US citizens consider online political discussions to be uncivil, aggressive and hostile. Although political discussions can get heated by their very nature, the available data suggests that online discussions are seen as much worse than offline discussions. Yet, while a range of studies has documented the existence of widespread perceptions of online political hostility, our understanding of the causes of these perceptions is limited. The aim of the present paper is to provide the first comprehensive review and empirical test of the potential mechanisms that could cause widespread online political hostility.
As our theoretical starting point, we distinguish between two broad factors: the messages sent and the messages received. Two potential explanations relate to the first factor. The first explanation proposes negative behavioral changes: people are more likely to send hostile messages online than offline. Anonymity is a frequently blamed culprit of online hostility, but online environments have several other unique features too; for example, people are often distracted or tired while crafting their messages. Such characteristics may undermine people’s emotion regulation mechanisms and may lead to increased hostility on online platforms. The second explanation proposes that online environments attract particular individuals and, thereby, change the composition of people participating in political discussions. Through such sorting, individuals prone to hostility could be more likely to participate in online than offline discussions. It is possible that whereas social defense mechanisms effectively guard offline discussions against hostile intruders, it is more difficult to exclude hostile parties from online discussions.
It is also important to consider whether the perceived hostility is exacerbated on the receivers’ end too. A third explanation suggests that the density of communication networks in online environments may also contribute to higher perceived hostility, even if the same messages are being sent by the same people. Whereas offline political discussions typically involve only a handful of people and rarely more than a few dozen, in online discussions hundreds can participate. In other words, any hostile message is likely to be received by many more people in an online discussion. Computer algorithms prioritizing comments with attention-grabbing or controversial content likely intensify this effect. Finally, the perception effect proposes that even if none of the explanations above were true, it is possible that the same messages are perceived as more hostile online than in face-to-face interactions. People may have a harder time judging the hostility of messages they receive and may make more false-positive mistakes online, where they lack non-verbal cues, lack reputational information about the sender, and often see only fragments of a conversation thread. We test observable implications for these explanations relying on an original online survey of US citizens (N = 1500), conducted by YouGov on an approximately representative sample. The survey includes measures of political participation and hostility online and offline, perceptions of these environments as well as a wide array of personality measures.
Against common predictions, we find little evidence for behavior being corrupted by online environments or for sorting effects revealing the entry of new, hostile parties to online discussions. Our data shows very high correlations between online and offline behaviors including hostility (rs > 0.8). Moreover, personality measures that indicate a general tendency for hostile political behavior (such as need for chaos, trait aggression, status-driven risk-seeking and difficulties in emotion regulation) correlate highly with both online and offline hostility (0.4 < rs < 0.6). We do, however, find firm evidence for sorting caused by hostile individuals being more active online than non-hostile individuals. According to our data, people who self-report sending hostile messages 1) spend more time on social media (but less time on other parts of the internet) and 2) discuss politics more than non-hostile individuals. These differences are particularly large when it comes to discussing politics with strangers.
We find suggestive evidence that perception and network effects also increase perceived hostility. Discussions with strangers correlate with negative impressions of online discussions. Importantly, this is not true for offline discussions: the more people discuss politics with strangers face-to-face, the more positive their evaluations get. Furthermore, due to the density of online communication networks, discussions with strangers are more frequent both in absolute and relative terms online than offline. The paper concludes by discussing the implications of these findings.
Lifting short-sale constraints leads to a decrease in stock price crash risk; effect is more pronounced for firms whose managers are more likely to hoard bad news & obfuscate financial information; & for those with more severe overinvestment
Short-sale constraints and stock price crash risk: Causal evidence from a natural experiment. Xiaohu Deng, Lei Gao, Jeong-Bon Kim. Journal of Corporate Finance, August 20 2019, 101498. https://doi.org/10.1016/j.jcorpfin.2019.101498
Highlights
• We study the causal relation between short-sale constraints and stock price crash risk.
• Lifting short-sale constraints leads to a decrease in stock price crash risk.
• The effect is more pronounced for firms whose managers are more likely to hoard bad news and obfuscate financial information.
• The effect is more pronounced for firms with more severe overinvestment problems.
• Short sellers play important roles in monitoring managerial disclosure strategies and real investment decisions.
Abstract: We examine the relation between short-sale constraints and stock price crash risk. To establish causality, we take advantage of a regulatory change from the Securities and Exchange Commission (SEC)’s Regulation SHO pilot program, which temporarily lifted short-sale constraints for randomly designated stocks. Using Regulation SHO as a natural experiment setting in which to apply a difference-in-differences research design, we find that the lifting of short-sale constraints leads to a significant decrease in stock price crash risk. We further investigate the possible underlying mechanisms through which short-sale constraints affect stock price crash risk. We provide evidence suggesting that lifting of short-sale constraints reduces crash risk by constraining managerial bad news hoarding and improving corporate investment efficiency. The results of our study shed new light on the cause of stock price crash risk as well as the roles that short sellers play in monitoring managerial disclosure strategies and real investment decisions.
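The difference-in-differences logic the abstract relies on can be shown with a toy calculation (invented numbers for illustration; the "pilot" group stands in for the Regulation SHO stocks whose short-sale constraints were lifted, and the outcome stands in for a crash-risk measure):

```python
# Hypothetical crash-risk measures before and after the pilot program.
pre = {"pilot": 0.30, "control": 0.32}   # pre-period averages
post = {"pilot": 0.22, "control": 0.31}  # post-period averages

# DiD estimate: change for treated stocks minus change for control stocks,
# which nets out common time trends affecting both groups.
did = (post["pilot"] - pre["pilot"]) - (post["control"] - pre["control"])
print(round(did, 2))  # -0.07: crash risk fell more for pilot stocks
```

The random designation of pilot stocks is what lets the authors read the differential change as causal rather than as a selection effect.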
Blocking or unfriending others for political reasons is related to more expressive participation (discussing & posting about politics), & more demonstrative forms of participation (donating money & volunteering time)
Robertson, Craig and Fernandez, Laleah and Shillair, Ruth, The Political Outcomes of Unfriending: Social Network Curation, Network Agreeability, and Political Participation (July 24, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3426216
Abstract: Research has noted a link between social media use and political participation. Scholars have also identified a need to explain this link. The present study is a theoretical and empirical probe into the political outcomes of unfriending people on social media. Drawing on privacy management theory and the social identity perspective, it explores the relationship between social network curation (blocking or unfriending others on social media for political reasons), perceived social network agreeability (how often people agree with the political opinions or political content of friends on social media), and forms of political participation. Using data from a survey of US adults (N=2,018) and a structural equation modelling approach, study results indicate a relational path from social network curation, through expressive participation (e.g. discussing politics and posting about politics on social media), to more demonstrative forms of participation (e.g. donating money and volunteering time). The study contributes to our understanding of the link between social media use and political outcomes by focusing on a unique explanatory mechanism. Policy implications pertain to the role that social media use plays in fostering political involvement. Specifically, if cutting disagreeable friends out of one’s social network is associated with political participation, this raises normative concerns regarding engagement which is underpinned by political polarization and intolerance.
Keywords: social media, political participation, unfriending
Out of all the participating users who post comments in a particular shaming event, the majority of them are likely to shame the victim; shamers' follower counts increase faster than that of the nonshamers in Twitter
Online Public Shaming on Twitter: Detection, Analysis, and Mitigation. Rajesh Basak; Shamik Sural; Niloy Ganguly; Soumya K. Ghosh. IEEE Transactions on Computational Social Systems, Volume 6, Issue 2, April 2019, pp 208 - 220, DOI: 10.1109/TCSS.2019.2895734
Abstract: Public shaming in online social networks and related online public forums like Twitter has been increasing in recent years. These events are known to have a devastating impact on the victim's social, political, and financial life. Notwithstanding its known ill effects, little has been done in popular online social media to remedy this, often with the excuse of the large volume and diversity of such comments and, therefore, the unfeasibly large number of human moderators required to achieve the task. In this paper, we automate the task of public shaming detection on Twitter from the perspective of victims and explore primarily two aspects, namely, events and shamers. Shaming tweets are categorized into six types: abusive, comparison, passing judgment, religious/ethnic, sarcasm/joke, and whataboutery, and each tweet is classified into one of these types or as nonshaming. It is observed that out of all the participating users who post comments in a particular shaming event, the majority are likely to shame the victim. Interestingly, it is also the shamers whose follower counts increase faster than those of the nonshamers on Twitter. Finally, based on the categorization and classification of shaming tweets, a web application called BlockShame has been designed and deployed for on-the-fly muting/blocking of shamers attacking a victim on Twitter.
Did parasite manipulation influence human neurological evolution? About Marco del Giudice's Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation
Did parasite manipulation influence human neurological evolution? Christopher Packham. Phys.org. Aug 26 2019. https://phys.org/news/2019-08-parasite-human-neurological-evolution.html
It seems so obvious that someone should have thought of it decades ago: Since parasites have plagued eukaryotic life for millions of years, their prevalence likely affected evolution. Psychologist Marco Del Giudice of the University of New Mexico is not the first researcher to suggest that the evolution of the human brain could have been influenced by parasites that manipulate host behavior. But tired of waiting for neurologists to pick up the ball and run with it, he has published a paper in the Quarterly Review of Biology that suggests four categories of adaptive host countermeasures against brain-manipulating parasites and the likely evolutionary responses of the parasites themselves. The idea has implications across a host of fields, and may help explain aspects of human psychology, functional brain network structure, and the frustratingly variable effects of psychopharmaceuticals.
Detailed and gruesomely readable, the paper is a work of theory intended to provide a roadmap for deeper study that is likely to be agonizingly complex, and which will eventually require the involvement of neurologists, evolutionary biologists, psychologists, parasitologists and many others.
Manipulating host behavior
Many parasites manipulate host behavior in order to increase reproductive success and to spread across wider areas. Dr. Del Giudice cites such examples as Toxoplasma gondii, which hitches a ride in a rat and induces epigenetic changes in the rodent's amygdala. These changes diminish its predator aversion around cats, the protozoan's intended destination, and the only animal in which it can reproduce. (As a side effect, it can infect humans—people are a reproductive dead end for T. gondii, but it is also believed to alter human behavior.)
Del Giudice also cites rabies, which increases production of infectious saliva and induces the host's aversion to water, which further concentrates the saliva, and then engenders violent aggression to increase the likelihood of biting, a transmission route. And many sexually transmitted pathogens are known to manipulate host sexual behavior.
The point is that parasites are really bad for hosts, and it therefore stands to reason that the evolution of modern humans includes protective countermeasures that were selected for success and likely shaped the stupefyingly complex central nervous system.
The paper is organized by four countermeasures hosts have evolved against manipulative parasites: restricting access to the brain; increasing the costs of manipulation; increasing the complexity of signaling; and increasing robustness. Within each category, Del Giudice suggests evolutionary responses by parasites to these countermeasures.
Restricting access to the brain
For aspiring higher organisms, keeping parasites out of the central nervous system is like Immunology 101; as Del Giudice points out, the adaptive benefits of restricting access to the brain also apply to non-parasitic pathogens. So the blood-brain barrier comprises the first line of defense as a layer of physical and chemical security.
Parasites have evolved other options to manipulate behavior from outside of the brain: Some produce behavior-altering substances like dopamine and release them into the blood; some manipulate the secretion of hormones; others activate specific immune responses in order to manipulate the host. Del Giudice also cites a number of parasites that evolved methods of passing through the blood-brain barrier in order to reach the brain physically.
Increasing the costs of manipulation
Some parasites release certain neurochemicals to alter host behavior. As a countermeasure, hosts could adapt by increasing the amount of particular neurochemicals required to induce such responses, greatly increasing the metabolic cost to the parasites. Since hosts are generally much larger, this increased cost could be completely negligible to the host while overwhelming the parasite's ability to produce enough of the neuroactive substance.
Del Giudice adds, "Since present-day instances of manipulation are mostly of the indirect kind, selection to increase the costs of signaling would have peaked a long time ago, possibly in the early stages of brain evolution… Paradoxically, if those countermeasures were so effective that they forced most parasites to adopt indirect strategies, they would have rendered themselves obsolete, eventually becoming a net cost without any prevailing benefits. If so, they may have been selected out owing to the relentless pressure for efficiency."
Increasing the complexity of signals
The central nervous system uses neuroactive substances as internal signals between neurons, brain networks and between the brain and other organs. Parasites can hijack these pathways to alter behavior by producing overriding signals or, as Del Giudice points out, corrupting existing ones. This entails breaking the host's internal signaling code.
Thus, a more complex signaling code is more difficult for a parasite to break. Instances of such a complexity increase include the requirement of joint action of different neurochemicals, or releasing neuroactive substances in specifically timed pulses. Expanding the set of transmission molecules and their binding receptors also increases complexity. More elaborate internal signals increase the time required to break them. From an adaptive standpoint, this can close off the parasite's options, forcing it to develop other means of manipulation.
However, rising complexity raises the metabolic costs for the hosts, though these costs are disproportionately more expensive for parasites. And Del Giudice points out that increasing the complexity of a system "tends to create new points of fragility," which may be exploited by adapting parasites.
Increasing robustness
Increasing the robustness of a system basically amounts to damage control. Higher organisms tend to evolve in such a way that they can maintain normal behavior functionality, even during attack by a parasite. Del Giudice discusses a number of passive, reactive and proactive robustness host strategies, including redundancy and modularity of systems; so-called bow-tie network architectures; feedback-regulated systems that detect perturbations of the system and make corrective adjustments; and the monitoring of nonspecific cues such as immune system activities that indicate the presence of a parasitic pathogen.
Largely, robustness adaptations are likely to exclude fixed physiological adjustments, and instead favor the development of "plastic responses triggered by cues of infection." The reason is that if brain physiology and behavior are adapted to function best in the presence of a pathogen, then its absence would lead to non-optimal behaviors and reduced survival.
Del Giudice includes in the paper a discussion of the constraints on the evolution of countermeasures by hosts. These include metabolic and computational constraints such as energy availability and small body size—animals with larger brains can more easily evolve higher levels of protective complexity. This is one reason that behavior-altering parasites are more commonly observed in insects, which have provided fundamental examples of parasite strategies and host countermeasures.
Psychopharmacology
Finally, the author includes a fascinating discussion of the implications of such adaptations for psychopharmacology. "Using psychoactive drugs to treat psychiatric symptoms is an attempt to alter behavior by pharmacological means. This is also what manipulative parasites do—even though, in the case of psychiatric treatment, the goal is to benefit the patient," Del Giudice writes.
Thus, adaptive responses to attacks by parasites could explain why antidepressants tend to induce tolerance in some patients—like parasites, the drugs seek to alter the organism's behavior, with the possibility that robust neural systems rebalance behavior pathways that have been altered by the drug. "It is worth considering the possibility that at least some of these reactive mechanisms may be specifically designed to detect and respond to parasite intrusions," Del Giudice writes. "If so, standard pharmacological treatments may unwittingly mimic a parasite attack and trigger specialized defensive responses." He adds that certain undesirable side effects of drugs could be metabolically expensive but useful adaptive features during a parasite infection, but detrimental to psychiatric treatment.
The paper is a theoretical exploration of the ideas surrounding parasitism as an evolutionary pressure, and as such, usefully illuminates how complex and difficult the question will be for researchers tackling the already challenging fields of neurophysiology and brain networks.
>>>> Marco del Giudice. Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation, The Quarterly Review of Biology (2019). DOI: 10.1086/705038
It seems so obvious that someone should have thought of it decades ago: Since parasites have plagued eukaryotic life for millions of years, their prevalence likely affected evolution. Psychologist Marco Del Giudice of the University of New Mexico is not the first researcher to suggest that the evolution of the human brain could have been influenced by parasites that manipulate host behavior. But tired of waiting for neurologists to pick up the ball and run with it, he has published a paper in the Quarterly Review of Biology that suggests four categories of adaptive host countermeasures against brain-manipulating parasites and the likely evolutionary responses of the parasites themselves. The idea has implications across a host of fields, and may explain human psychology, functional brain network structure, and the frustratingly variable effects of psychopharmaceuticals.
Detailed and gruesomely readable, the paper is a work of theory intended to provide a roadmap for deeper study that is likely to be agonizingly complex, and which will eventually require the involvement of neurologists, evolutionary biologists, psychologists, parasitologists and many others.
Manipulating host behavior
Many parasites manipulate host behavior in order to increase reproductive success and to spread across wider areas. Dr. Del Giudice cites such examples as Toxoplasma gondii, which hitches a ride in a rat and induces epigenetic changes in the rodent's amygdala. These changes diminish its predator aversion around cats, the protozoan's intended destination, and the only animal in which it can reproduce. (As a side effect, it can infect humans—people are a reproductive dead end for T. gondii, but it is also believed to alter human behavior.)
Del Giudice also cites rabies, which increases production of infectious saliva and induces the host's aversion to water, which further concentrates the saliva, and then engenders violent aggression to increase the likelihood of biting, a transmission route. And many sexually transmitted pathogens are known to manipulate host sexual behavior.
The point is that parasites are really bad for hosts, and it therefore stands to reason that the evolution of modern humans includes protective countermeasures that were selected for success and likely shaped the stupefyingly complex central nervous system.
The paper is organized by four countermeasures hosts have evolved against manipulative parasites: restricting access to the brain; increasing the costs of manipulation; increasing the complexity of signaling; and increasing robustness. Within each category, Del Giudice suggests evolutionary responses by parasites to these countermeasures.
Restricting access to the brain
For aspiring higher organisms, keeping parasites out of the central nervous system is like Immunology 101; as Del Giudice points out, the adaptive benefits of restricting access to the brain also apply to non-parasitic pathogens. The blood-brain barrier, a layer of physical and chemical security, thus constitutes the first line of defense.
Parasites have evolved other options to manipulate behavior from outside of the brain: Some produce behavior-altering substances like dopamine and release them into the blood; some manipulate the secretion of hormones; others activate specific immune responses in order to manipulate the host. Del Giudice also cites a number of parasites that evolved methods of passing through the blood-brain barrier in order to reach the brain physically.
Increasing the costs of manipulation
Some parasites release certain neurochemicals to alter host behavior. As a countermeasure, hosts could adapt by increasing the amount of particular neurochemicals required to induce such responses, greatly increasing the metabolic cost to the parasites. Since hosts are generally much larger, this increased cost could be completely negligible to the host while overwhelming the parasite's ability to produce enough of the neuroactive substance.
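The asymmetry described above can be made concrete with a toy calculation (my own illustration, not a model from the paper; the masses and costs are invented placeholder values):

```python
def relative_burden(dose_mg: float, body_mass_g: float,
                    cost_per_mg: float = 1.0) -> float:
    """Metabolic cost of producing a dose, expressed per gram of body mass."""
    return dose_mg * cost_per_mg / body_mass_g

host_mass_g = 300.0      # e.g., a rat (illustrative)
parasite_mass_g = 0.005  # a tiny parasite (illustrative)

# Suppose a host countermeasure doubles the dose needed to trigger a response:
# the absolute cost doubles for both parties, but relative to body mass the
# burden on the parasite is enormous while the host barely notices.
for dose_mg in (1.0, 2.0):
    host = relative_burden(dose_mg, host_mass_g)
    parasite = relative_burden(dose_mg, parasite_mass_g)
    print(f"dose {dose_mg} mg -> host burden {host:.4f}/g, "
          f"parasite burden {parasite:.0f}/g ({parasite / host:,.0f}x)")
```

With these (made-up) numbers, the same dose costs the parasite 60,000 times more per gram of body mass than it costs the host, which is the sense in which raising the required dose is nearly free for the host but potentially ruinous for the parasite.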
Del Giudice adds, "Since present-day instances of manipulation are mostly of the indirect kind, selection to increase the costs of signaling would have peaked a long time ago, possibly in the early stages of brain evolution… Paradoxically, if those countermeasures were so effective that they forced most parasites to adopt indirect strategies, they would have rendered themselves obsolete, eventually becoming a net cost without any prevailing benefits. If so, they may have been selected out owing to the relentless pressure for efficiency."
Increasing the complexity of signals
The central nervous system uses neuroactive substances as internal signals between neurons, brain networks and between the brain and other organs. Parasites can hijack these pathways to alter behavior by producing overriding signals or, as Del Giudice points out, corrupting existing ones. This entails breaking the host's internal signaling code.
Thus, a more complex signaling code is more difficult for a parasite to break. Examples of such increased complexity include requiring the joint action of different neurochemicals, or releasing neuroactive substances in specifically timed pulses. Expanding the set of transmission molecules and their binding receptors also increases complexity. More elaborate internal signals increase the time a parasite needs to break the code. From an adaptive standpoint, this can close off the parasite's options, forcing it to develop other means of manipulation.
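As a rough illustration of why this matters (my own combinatorial sketch, not an analysis from the paper; the transmitter counts and timing-slot parameter are invented), the number of candidate "codes" a parasite would have to search grows multiplicatively with each added layer of signal structure:

```python
from math import comb

def candidate_codes(n_transmitters: int, k_joint: int, n_timing_slots: int) -> int:
    """Number of distinct signal codes: choose which k transmitters must act
    jointly, times the ways to assign each one of the possible timing slots."""
    return comb(n_transmitters, k_joint) * n_timing_slots ** k_joint

# A single transmitter with no timing structure is easy to search exhaustively:
print(candidate_codes(10, 1, 1))   # 10 candidate codes
# Joint action of three transmitters plus timed pulses explodes the search space:
print(candidate_codes(50, 3, 4))   # 1254400 candidate codes
```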
However, rising complexity raises the metabolic costs for the hosts, though these costs are disproportionately more expensive for parasites. And Del Giudice points out that increasing the complexity of a system "tends to create new points of fragility," which may be exploited by adapting parasites.
Increasing robustness
Increasing the robustness of a system basically amounts to damage control. Higher organisms tend to evolve in such a way that they can maintain normal behavioral function even while under attack by a parasite. Del Giudice discusses a number of passive, reactive, and proactive host robustness strategies, including redundancy and modularity of systems; so-called bow-tie network architectures; feedback-regulated systems that detect perturbations and make corrective adjustments; and the monitoring of nonspecific cues, such as immune system activity, that indicate the presence of a parasitic pathogen.
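A feedback-regulated system of the kind mentioned above can be sketched in a few lines (a minimal toy simulation of my own construction, not the paper's model; the set point, gain, and injection values are arbitrary):

```python
def simulate(setpoint: float = 1.0, gain: float = 0.5,
             injection: float = 0.4, steps: int = 50) -> float:
    """Level of a neurochemical under a sustained parasite 'injection',
    with a feedback controller correcting toward the set point each step."""
    level = setpoint
    for _ in range(steps):
        level += injection                  # perturbation by the parasite
        level -= gain * (level - setpoint)  # corrective adjustment by the host
    return level

# Without feedback the perturbation accumulates without bound; with feedback
# the level settles near setpoint + injection * (1 - gain) / gain.
print(f"without feedback: {1.0 + 0.4 * 50:.2f}")  # 21.00
print(f"with feedback:    {simulate():.2f}")      # 1.40
```

The residual offset above the set point is the "damage control" idea in miniature: the perturbation is not eliminated, but it is contained within a bounded, survivable range.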
Largely, robustness adaptations are likely to exclude fixed physiological adjustments, and instead favor the development of "plastic responses triggered by cues of infection." The reason is that if brain physiology and behavior are adapted to function best in the presence of a pathogen, then its absence would lead to non-optimal behaviors and reduced survival.
Del Giudice includes in the paper a discussion of the constraints on the evolution of countermeasures by hosts. These include metabolic and computational constraints such as energy availability and small body size—animals with larger brains can more easily evolve higher levels of protective complexity. This is one reason that behavior-altering parasites are more commonly observed in insects, which have provided fundamental examples of parasite strategies and host countermeasures.
Psychopharmacology
Finally, the author includes a fascinating discussion of the implications of such adaptations for psychopharmacology. "Using psychoactive drugs to treat psychiatric symptoms is an attempt to alter behavior by pharmacological means. This is also what manipulative parasites do—even though, in the case of psychiatric treatment, the goal is to benefit the patient," Del Giudice writes.
Thus, adaptive responses to attacks by parasites could explain why antidepressants tend to induce tolerance in some patients—like parasites, the drugs seek to alter the organism's behavior, with the possibility that robust neural systems rebalance behavior pathways that have been altered by the drug. "It is worth considering the possibility that at least some of these reactive mechanisms may be specifically designed to detect and respond to parasite intrusions," Del Giudice writes. "If so, standard pharmacological treatments may unwittingly mimic a parasite attack and trigger specialized defensive responses." He adds that certain undesirable side effects of drugs could be metabolically expensive but useful adaptive features during a parasite infection, but detrimental to psychiatric treatment.
The paper is a theoretical exploration of the ideas surrounding parasitism as an evolutionary pressure, and as such, usefully illuminates how complex and difficult the question will be for researchers tackling the already challenging fields of neurophysiology and brain networks.
>>>> Marco del Giudice. Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation, The Quarterly Review of Biology (2019). DOI: 10.1086/705038
An additional marijuana dispensary leads to a reduction of 17 crimes per month per 10,000 residents (19 pct decline); crime reductions are highly localized, no evidence of spillover benefits to adjacent neighborhoods
Not in my backyard? Not so fast. The effect of marijuana legalization on neighborhood crime. Jeffrey Brinkman, David Mok-Lamme. Regional Science and Urban Economics, August 24 2019, 103460, https://doi.org/10.1016/j.regsciurbeco.2019.103460
Abstract: This paper studies the effects of marijuana legalization on neighborhood crime and documents the patterns in retail dispensary locations over time using detailed micro-level data from Denver, Colorado. To account for endogenous retail dispensary locations, we use a novel identification strategy that exploits exogenous changes in demand across different locations arising from the increased importance of external markets after the legalization of recreational marijuana sales. The results imply that an additional dispensary in a neighborhood leads to a reduction of 17 crimes per month per 10,000 residents, which corresponds to roughly a 19 percent decline relative to the average crime rate over the sample period. Reductions in crime are highly localized, with no evidence of spillover benefits to adjacent neighborhoods. Analysis of detailed crime categories provides insights into the mechanisms underlying the reductions.
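A quick back-of-the-envelope check of the abstract's headline numbers (my arithmetic, not the authors'): if 17 fewer crimes per month per 10,000 residents corresponds to roughly a 19 percent decline, the implied average baseline crime rate follows directly:

```python
reduction = 17       # fewer crimes per month per 10,000 residents
pct_decline = 0.19   # reported as roughly a 19 percent decline

baseline = reduction / pct_decline
print(f"implied baseline: ~{baseline:.0f} crimes per month per 10,000 residents")  # ~89
```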
Openness, low disgust sensitivity, and cognitive ability - traits and individual differences historically associated with less prejudice - may in fact also show evidence of worldview conflict
Brandt, M. J., & Crawford, J. T. (Accepted/In press). Worldview conflict and prejudice. Advances in Experimental Social Psychology, 1-99. Aug 27 2019. https://pure.uvt.nl/ws/portalfiles/portal/30699390/2019.BrandtCrawford.Worldviewconflictprejudice.Advances.pdf
Abstract: People are motivated to protect their worldviews. One way to protect one's worldviews is through prejudice towards worldview-dissimilar groups and individuals. The traditional hypothesis predicts that people with more traditional and conservative worldviews will be more likely to protect their worldviews with prejudice than people with more liberal and progressive worldviews, whereas the worldview conflict hypothesis predicts that people with both traditional and liberal worldviews will protect their worldviews through prejudice. We review evidence across both political and religious domains, as well as evidence using disgust sensitivity, Big Five personality traits, and cognitive ability as measures of individual differences historically associated with prejudice. We discuss four core findings that are consistent with the worldview conflict hypothesis: (1) The link between worldview conflict and prejudice is consistent across worldviews. (2) The link between worldview conflict and prejudice is found across various expressions of prejudice. (3) The link between worldview conflict and prejudice is found in multiple countries. (4) Openness, low disgust sensitivity, and cognitive ability - traits and individual differences historically associated with less prejudice - may in fact also show evidence of worldview conflict. We discuss how worldview conflict may be rooted in value dissimilarity, identity, and uncertainty management, as well as potential routes for reducing worldview conflict.
Do we find it desirable when people who agree with us nonetheless seek out views that we oppose? Observers strongly prefer individuals who seek out political views that the observer opposes
Seek and ye shall be fine: attitudes towards political perspective-seekers. Gordon Heltzel. Masters Thesis, University of British Columbia, Psychology. Aug 22 2019. http://hdl.handle.net/2429/71391
Description: Over the past two decades, growing political polarization has led to increasing calls for people to seek out and try to understand opposing political views. Although seeking out opposing views is objectively desirable behavior, do we find it socially desirable when people who agree with us nonetheless seek out views that we oppose? We find that observers strongly prefer individuals who seek out, rather than avoid, political views that the observer opposes. Across nine online studies we find a large preference for these political perspective-seekers, and in a lab study, 73% of participants chose to interact with a perspective-seeking confederate. This preference is weakly moderated by the direction of participants’ ideology and the strength of their beliefs. Moreover, it is robust regardless of why the individual seeks or avoids opposing views, and emerges even when the perspective-seeker is undecided and not already committed to participants’ own views. However, the preference disappears when a perspective-seeker attends only to the perspective that observers disagree with, disregarding the observer’s side. These findings suggest that, despite growing polarization, people still think it is important to understand and tolerate political opponents. This work also informs future interventions, which could leverage social pressures to promote political perspective-seeking and combat selective-exposure, thus improving political relations.
The neural profiles of overly positive self-evaluations that are driven by the desire to defend self-esteem are predictable & distinguishable from those arising from limited cognitive engagement
The advantages and disadvantages of self-insight: New psychological and neural perspectives. Jennifer S. Beer, Michelle A. Harris. Advances in Experimental Social Psychology, May 24 2019. https://doi.org/10.1016/bs.aesp.2019.04.003
Abstract: People quickly assess the honesty of others but how honest are they with themselves and does it matter for their success and happiness? Our understanding of the advantages and disadvantages of self-insight is currently clouded by a number of measurement issues in the existing literature on self-evaluation and related constructs such as authenticity. Most studies do not use independent sources to evaluate self-concept, objectivity of the self-concept, and the consequence of their discrepancy. Additionally, daily diary studies indicate that research on related constructs such as authenticity may fail to capture accurate self-insight as intended. Furthermore, research drawing on neuropsychological populations, fMRI with healthy populations, and computational modeling has shown that not all self-insight failures are alike even if they appear the same at the level of behavioral measurement. For example, the neural profiles of overly positive self-evaluations that are driven by the desire to defend self-esteem are predictable and distinguishable from neural profiles of positive self-evaluation arising from limited cognitive engagement. Therefore, future research must aim for more rigorous measurement to understand the underlying cause of self-insight failure (rather than just identifying a shortcoming) to meaningfully understand the benefits and costs of different types of self-insight failure. The current research does not allow us to confidently conclude that self-insight has advantages over some types of self-insight failure (or vice versa) and we conclude by calling for more systematic investigation of why, when, where, and for whom self-insight is costly or beneficial.
Keywords: Authenticity, Brain, Computational modeling, Frontal lobe, fMRI, Lesion, Self, Emotion, Motivation, Well-being
Monday, August 26, 2019
Electroencephalography & The Time-course of Moral Perception: Moral content might be prioritized in conscious awareness after an initial perceptual encoding but before subsequent memory processing or action preparation
Gantman, Ana P., Sayeed Devraj-Kizuk, Peter Mende-Siedlecki, Jay J. Van Bavel, and Kyle E. Mathewson. 2019. “The Time-course of Moral Perception: An Electroencephalography Investigation.” PsyArXiv. August 26. doi:10.31234/osf.io/72dxa
Abstract: Humans are highly attuned to perceptual cues about their values. A growing body of evidence suggests that people selectively attend to moral stimuli. However, it is unknown whether morality is prioritized early in perception or much later in cognitive processing. We use a combination of behavioral methods and electroencephalography to investigate how early in perception moral words are prioritized relative to non-moral words. The behavioral data replicate previous research indicating that people are more likely to correctly identify moral than non-moral words in a modified lexical decision task. The electroencephalography data reveal that words are distinguished from non-words as early as 200 milliseconds after onset over frontal brain areas, and moral words are distinguished from non-moral words 100 milliseconds later over left-posterior cortex. Further analyses reveal that differences in brain activity to moral vs. non-moral words cannot be explained by differences in arousal associated with the words. These results suggest that moral content might be prioritized in conscious awareness after an initial perceptual encoding but before subsequent memory processing or action preparation. This work offers a more precise theoretical framework for understanding how morality impacts vision and behavior.
Gender Differences in Life Satisfaction Among Children and Adolescents: almost no differences
Gender Differences in Life Satisfaction Among Children and Adolescents: A Meta-analysis. Xinjie Chen et al. Journal of Happiness Studies, August 26 2019. https://link.springer.com/article/10.1007/s10902-019-00169-9
Abstract: Gender differences in life satisfaction (LS) have been studied for a long time, and the first meta-analysis on this issue was conducted almost 40 years ago. Since then, the social status of females has changed considerably across different nations and cultures. The individual studies in this area continued to show inconsistent results concerning gender group differences in their respective perception of LS. In this study, 46 empirical studies from 1980 to 2017 (with a cumulated total N = 11,772) were meta-analyzed to examine potential gender differences in LS among children and adolescents, and to explore if some study features could be moderators that could account for the observed inconsistencies in the findings across studies. The findings revealed that LS remains invariant across gender groups, but with a slight difference in favor of male children and adolescents. Our results further suggested that four study features were shown to contribute to the variations of the reported gender difference in LS across individual studies: geographical region, population type, age, and domain specific LS measurements. Such different features across the individual studies could have led to the observed inconsistency of the findings. Understanding how gender differences in LS vary by these study features could allow us to consider more targeted support to increase LS of children and adolescents in different situations.
Keywords: Life satisfaction, Cognitive well-being, Children and adolescents, Gender difference, Meta-analysis
Morningness–Eveningness and Sociosexuality from a Life History Perspective
Chapter 4: Morningness–Eveningness and Sociosexuality from a Life History Perspective. James Marvel-Coen, Coltan Scrivner & Dario Maestripieri. In: The SAGE Handbook of Personality and Individual Differences: Volume II: Origins of Personality and Individual Differences. Edited by: Virgil Zeigler-Hill & Todd K. Shackelford. May 2018. http://dx.doi.org/10.4135/9781526451200.n4
Basic Aspects of Morningness–Eveningness
Many human biological processes are regulated by circadian rhythms, sometimes referred to as "internal clocks". These circadian rhythms apply to hormone concentrations, brain activity, heart rate, and body temperature. In humans and many other animals, a "master clock" is attuned to a 24-hour cycle, and corresponds to sleep and wakefulness. The master clock in humans operates through the action of the suprachiasmatic nucleus (SCN) in the hypothalamus (Herzog et al., 1998). Although our circadian rhythms have been selected for based on a general pattern of light and dark, environmental factors can influence circadian rhythms, and rhythms can vary between people.
Morningness-eveningness, or chronotype, refers to the notion that individuals vary from one another in preferences for the timing of waking up and falling asleep, as well as for diurnal peaks in activity and performance, such that some individuals tend to be more active, both cognitively and physiologically, in the morning, whereas others tend to be more active in the evening (Randler et al., 2016). Variation in morningness-eveningness tends to occur along a continuum, and the individuals at the two extremes of this continuum are often denoted as morning-types and evening-types, or "early birds" and "night owls". Research has shown that approximately 40% of individuals are either morning- or evening-types, with the other 60% falling into a more neutral category (Adan et al., 2012). Propensities for being a morning- or an evening-type are significantly heritable (e.g., Hur, 2007; Hur et al., 1998; Vink et al., 2001), but age, sex, and environment are important as well.
Children are typically morning-oriented, but evening orientation tends to increase in both males and females throughout adolescence (Randler, 2011; Roenneberg et al., 2004). Sex differences in morningness-eveningness also begin to appear in adolescence, with more males being represented in the evening-type category than females (Randler, 2007). However, these sex differences disappear after women reach menopause, suggesting that they may be functionally linked to reproduction and be regulated by reproductive physiology, at least in women (Adan et al., 2012). Early experience and environment can influence variation in morningness-eveningness. For example, individuals who spend their first few months of life in a short photoperiod (i.e., autumn and winter) tend to be morning-types, whereas those who spend their first few months in a long photoperiod (i.e., spring and summer) tend to be evening-types (Mongrain et al., 2006; Natale and Di Milia, 2011). Latitude has also been shown to have a strong effect on chronotype, with people at northern latitudes having significantly later midpoints of sleep (Natale et al., 2009). This effect is moderated by residency type, however, with larger towns being less affected by latitude (Borisenkov et al., 2012). Thus, it is probable that sunlight, and potentially artificial light as well, plays a role in the development and shaping of chronotype. However, this effect is not entirely clear, as evening-types tend to have been exposed to more sunlight post-birth, but less during life.
Associations of dairy product consumption with mortality in the European Prospective Investigation into Cancer and Nutrition (EPIC)–Italy cohort
Associations of dairy product consumption with mortality in the European
Prospective Investigation into Cancer and Nutrition (EPIC)–Italy
cohort. Valeria Pala et al. The American Journal of Clinical Nutrition,
nqz183, August 21 2019, https://doi.org/10.1093/ajcn/nqz183
ABSTRACT
Background: The relation of dairy product consumption to health and mortality is controversial.
Objectives: We investigated associations of consumption of various dairy products with mortality in the Italian cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC)–Italy study.
Methods: Dairy product consumption was assessed by validated semiquantitative FFQs. Multivariable Cox models stratified by center, age, and sex and adjusted for confounders estimated associations of milk (total, full fat, and reduced fat), yogurt, cheese, butter, and dairy calcium consumption with mortality for cancer, cardiovascular disease, and all causes. Nonlinearity was tested by restricted cubic spline regression.
Results: After a median follow-up of 14.9 y, 2468 deaths were identified in 45,009 participants: 59% from cancer and 19% from cardiovascular disease. No significant association of consumption of any dairy product with mortality was found in the fully adjusted models. A 25% reduction in risk of all-cause mortality was found for milk intake from 120 to 160 g/d (HR: 0.75; 95% CI: 0.61, 0.91) but not for the highest (>200 g/d) category of intake (HR: 0.95; 95% CI: 0.84, 1.08) compared with nonconsumption. Associations of full-fat and reduced-fat milk consumption with all-cause and cause-specific mortality were similar to those for milk as a whole.
Conclusions: In this Italian cohort characterized by low to average milk consumption, we found no evidence of a dose–response association between milk consumption and mortality and also no association of consumption of other dairy products investigated with mortality.
Keywords: dairy product consumption, mortality, EPIC-Italy, cancer, cardiovascular disease
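Hazard ratios reported with confidence intervals, like the milk estimate above, can be sanity-checked by hand: the standard error of the log HR is recoverable from the 95% CI. A minimal Python sketch, using the reported HR = 0.75 (95% CI: 0.61, 0.91) purely for illustration:

```python
import math

def hr_ci_to_stats(hr, lo, hi):
    """Recover the log-HR standard error, z statistic, and a two-sided
    p-value from a hazard ratio and its 95% confidence interval."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width on log scale
    z = math.log(hr) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return se, z, p

se, z, p = hr_ci_to_stats(0.75, 0.61, 0.91)
print(f"SE(log HR) = {se:.3f}, z = {z:.2f}, p = {p:.4f}")
```

This assumes the CI was computed on the log scale with normal critical values, which is standard for Cox model output.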
Cato review: That 'Vaping-Linked Lung Disease' Might Not Really Be Linked to Vaping
That 'Vaping-Linked Lung Disease' Might Not Really Be Linked to Vaping. Elizabeth Nolan Brown. Cato Roundup, Aug 23 2019. https://reason.com/2019/08/23/that-vaping-linked-lung-disease-might-not-really-be-linked-to-vaping/
There's a bit of panic brewing in the press over lung problems that could be linked to vape products. The Centers for Disease Control and Prevention (CDC) "reports more than 150 cases of possible vaping-linked lung disease," says The Hill. Others make even bolder claims.
"More than 100 vapers have contracted a severe lung disease," The Verge reports. "Vaping lung disease: CDC reports 153 cases," says USA Today. Ars Technica warns that "vaping-linked lung disease cases" have jumped "from 94 to 153 in 5 days."
But read closely, and it becomes apparent that nobody actually knows if vaping is causing this mystery disease or not. Nobody even knows if there is a disease, or how many people actually have it. That's what the CDC is just beginning to investigate.
For now, all officials know is that states keep reporting people with cases of mysterious lung and chest problems. "Many states have alerted CDC to possible (not confirmed) cases and investigations into these cases are ongoing," says the CDC. Symptoms include shortness of breath, chest pain, and coughing—all common issues that can stem from a range of causes and ailments.
"The CDC and impacted states haven't identified a cause," notes The Verge. Nor has it actually verified suspected cases.
Those reporting the problems all say they have used vape products—albeit not what sort. Which leaves us with another possibility: that some particular faulty product or line of products is indeed causing trouble, but that this is not an issue with vaping at large.
We know that some patients in potential cases used THC-containing vape products, not nicotine-containing e-cigarettes. The Vapor Technology Association told The Hill that no nicotine e-cigarettes have been linked to the lung issues:
The e-cigarette makers' trade group called for public health officials to "refrain from assigning unsubstantiated blame until the facts are known," and said traditional nicotine-containing e-cigarettes are being wrongly conflated with THC-containing products.
In actuality, we don't know at all what folks with many of the suspected cases were smoking, nor what other habits they may have shared, such as any history of regular cigarette or marijuana smoking. We don't—and this is pretty damn crucial—even know if all of these patients suffer from the same affliction at all.
The fact that cases have spiked dramatically in the brief time since news of this "vaping lung disease" started spreading suggests we may have a different sort of contagion on our hands. Perhaps people who vape have been starting to freak out upon hearing the "lung disease" news and either suddenly noticed new symptoms (which also sound a lot like symptoms of a panic attack) or began interpreting ongoing symptoms in a new way.
Or maybe vaping is going to kill us! That's certainly possible. The point is that right now, anything is possible. And until we know more, it's irresponsible for folks to spread panic about products that have been helping many people leave more dangerous habits behind.
"Eating with familiar others has a powerful effect of increasing food intakes." Except when women eat in the company of men.
A systematic review and meta-analysis of the social facilitation of eating. Helen K Ruddock, Jeffrey M Brunstrom, Lenny R Vartanian, Suzanne Higgs. The American Journal of Clinical Nutrition, nqz155, August 21 2019, https://doi.org/10.1093/ajcn/nqz155
ABSTRACT
Background: Research suggests that people tend to eat more when eating with other people, compared with when they eat alone, and this is known as the social facilitation of eating. However, little is known about when and why this phenomenon occurs.
Objectives: This review aimed to quantify the evidence for social facilitation of eating and identify moderating factors and underlying mechanisms.
Methods: We systematically reviewed studies that used experimental and nonexperimental approaches to examine food intake/food choice as a function of the number of co-eaters. The following databases were searched during April 2019: PsychInfo, Embase, Medline, and Social Sciences Citation Index. Studies that used naturalistic techniques were narratively synthesized, and meta-analyses were conducted to synthesize results from experimental studies.
Results: We reviewed 42 studies. We found strong evidence that people select and eat more when eating with friends, compared with when they eat alone [Z = 5.32; P < 0.001; standardized mean difference (SMD) = 0.76; 95% CI: 0.48, 1.03]. The meta-analysis revealed no evidence for social facilitation across studies that had examined food intake when participants ate alone or with strangers/acquaintances (Z = 1.32; P = 0.19; SMD = 0.21, 95% CI: −0.10, 0.51). There was some evidence that the social facilitation of eating is moderated by gender, weight status, and food type. However, this evidence was limited by a lack of experimental research examining the moderating effect of these factors on the social facilitation of eating among friends. In 2 studies, there was evidence that the effect of the social context on eating may be partly mediated by longer meal durations and the perceived appropriateness of eating.
Conclusions: Findings suggest that eating with others increases food intake relative to eating alone, and this is moderated by the familiarity of co-eaters. The review identifies potential mechanisms for the social facilitation of eating and highlights the need for further research to establish mediating factors. Finally, we propose a new theoretical framework in which we suggest that the social facilitation of eating has evolved as an efficient evolutionary adaptation.
Keywords: social facilitation, social influences, food intake, food choice, meta-analysis
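The pooled SMDs reported above come from random-effects meta-analysis. A minimal sketch of DerSimonian–Laird pooling in Python, with hypothetical study-level effects rather than the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effects (e.g., standardized mean differences)
    with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]  # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # re-weight including the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    est = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return est, se, tau2

# Hypothetical SMDs and sampling variances, not the review's studies
effects = [0.9, 0.5, 0.7, 1.1]
variances = [0.04, 0.05, 0.03, 0.06]
est, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled SMD = {est:.2f}, "
      f"95% CI = ({est - 1.96 * se:.2f}, {est + 1.96 * se:.2f})")
```

With tau2 > 0 the random-effects CI is wider than a fixed-effect one, which is why reviews with heterogeneous studies, like this one, report random-effects intervals.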
Study of telomere length in older persons: Little evidence for associations was found with parental lifespan, centenarian status of parents, cognitive function, grip strength, sarcopenia, or falls
Telomere length and aging-related outcomes in humans: A Mendelian randomization study in 261,000 older participants. Chia-Ling Kuo, Luke C. Pilling, George A. Kuchel, Luigi Ferrucci, David Melzer. Aging Cell, August 24 2019. https://doi.org/10.1111/acel.13017
Abstract: Inherited genetic variation influencing leukocyte telomere length provides a natural experiment for testing associations with health outcomes, more robust to confounding and reverse causation than observational studies. We tested associations between genetically determined telomere length and aging-related health outcomes in a large European ancestry older cohort. Data were from n = 379,758 UK Biobank participants aged 40–70, followed up for a mean of 7.5 years (n = 261,837 participants aged 60 and older by end of follow-up). Thirteen variants strongly associated with longer telomere length in peripheral white blood cells were analyzed using Mendelian randomization methods with Egger plots to assess pleiotropy. Variants in TERC, TERT, NAF1, OBFC1, and RTEL1 were included, and estimates were per 250 base pairs increase in telomere length, approximately equivalent to the average change over a decade in the general white population. We highlighted associations with false discovery rate-adjusted p-values smaller than .05. Genetically determined longer telomere length was associated with lowered risk of coronary heart disease (CHD; OR = 0.95, 95% CI: 0.92–0.98) but raised risk of cancer (OR = 1.11, 95% CI: 1.06–1.16). Little evidence for associations was found with parental lifespan, centenarian status of parents, cognitive function, grip strength, sarcopenia, or falls. Results in younger participants and in all participants were similar to those for participants aged 60 and older. Genetically determined telomere length was associated with increased risk of cancer and reduced risk of CHD but little change in other age-related health outcomes. Telomere lengthening may offer little gain in later-life health status while carrying increased cancer risk.
1 INTRODUCTION
Telomeres are end fragments of chromosomes consisting of thousands of repeats of the noncoding sequence TTAGGG. Telomeres protect chromosome ends against genomic instability. Telomeres shorten with each cell cycle and contribute to replicative senescence when they reach the Hayflick limit (Hayflick & Moorhead, 1961). Telomerase is a ribonucleoprotein complex that replenishes telomere loss during replication. Telomerase is active at early developmental stages but almost completely inactive in somatic tissues of adults (Collins & Mitchell, 2002). Telomerase activation may treat aging-related diseases and prolong human lifespan (de Jesus & Blasco, 2013). Previous studies on adult or old mice have shown success in improving physical function and lifespan without increasing the incidence of cancer, but whether these results translate from mice to humans is unknown (de Jesus & Blasco, 2013).
Telomere length is often approximated using leukocyte telomere length, which is easy to extract from blood and highly correlated with telomere length in other tissues (Daniali et al., 2013). Measured telomere length has been associated with mortality and aging-related outcomes in humans (Mather, Jorm, Parslow, & Christensen, 2011; Sanders & Newman, 2013; Brown, Zhang, Mitchel, & Ailshire, 2018), including cancer (Zhang et al., 2017), cardiovascular disease (Haycock et al., 2014), cognitive function, physical performance such as grip strength, sarcopenia, and frailty (Lorenzi et al., 2018; Zhou et al., 2018), plus biomarkers of lung function, blood pressure, bone mineral density, cholesterol, interleukin 6, and C-reactive protein. Observational associations have not been consistently replicated, likely owing to differences in study populations, measurement methods, and statistical modelling (Sanders & Newman, 2013). In addition, a number of factors may confound observational associations, such as sex and race/ethnicity, paternal age at birth, smoking, psychological stress, and other psychosocial, environmental, and behavioral factors (Blackburn, Epel, & Lin, 2015; Starkweather et al., 2014).
Telomere length has a strong inherited genetic component in humans, with heritability estimates ranging from 34% to 82% (Broer, Codd, & Nyholt, 2013). Mendelian randomization (MR) is a powerful statistical method to evaluate the causal relationship between an exposure and an outcome, under certain assumptions (Davey Smith & Hemani, 2014). Analogous to randomized clinical trials, MR creates groups determined by genotypes, which are inherited at random and are independent of confounding factors. In theory, if the groups are associated with the outcome, the association is independent of confounders and operates via the exposure, assuming no pleiotropy is present. MR studies are more robust than observational studies to confounding effects, measurement errors or bias, and reverse causation (i.e., free of downstream effects appearing to be causes).
By applying MR, we were able to study the effect of telomere length on aging with robustness to confounding effects. To date, 16 inherited genetic variants from genome-wide association studies (GWAS) have been shown to be strongly associated with human leukocyte telomere length in European-descent population samples (Haycock et al., 2017). Many of these loci harbor telomerase and telomere-protective protein genes, including TERC, TERT, NAF1, OBFC1, and RTEL1 (Codd et al., 2013; Haycock et al., 2017). These variants have been used to perform MR, but the focus was on diseases (Haycock et al., 2017; Zhan et al., 2015). Additionally, previous studies tended to be underpowered because sample sizes were insufficient given the small percentage of variance (2%–3%) explained by the genetic variants (Haycock et al., 2017). The small percentage of variance affects the power but not the validity of the causal inference, provided the genetic variants meet the Mendelian randomization assumptions: (a) associated with telomere length, (b) independent of all confounders of the association between telomere length and the outcome, and (c) independent of the outcome conditional on telomere length and all the confounders (Haycock et al., 2017).
In this study, we investigated causal relationships between telomere length and aging‐related outcomes with the focus on common measures of human aging such as grip strength, frailty, and cognitive function. We analyzed European‐descent participants from UK Biobank, with a wealth of genetic and phenotypic data. This study was not designed to analyze every aging trait in UK Biobank. Instead, we selected traits to cover different aspects of aging, using inputs from senior investigators in the team. Cancer, coronary heart disease, hypertension, and pneumonia were selected as they were common in older adults, but we did not attempt to include every individual disease. Disease‐specific MR associations were reported elsewhere (Haycock et al., 2017). Our project is focused on aging traits and is not powered for diseases that require a longer time to accumulate sufficient cases.
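Multi-variant MR analyses like the one described above typically combine per-variant ratio estimates with inverse-variance weighting. A minimal sketch of the IVW estimator, with hypothetical variant-level coefficients (not the UK Biobank estimates):

```python
import math

def ivw_mr(beta_x, beta_y, se_y):
    """Inverse-variance-weighted Mendelian randomization estimate:
    combine per-variant ratio estimates beta_y / beta_x, weighting
    each by the precision of the variant-outcome association."""
    w = [bx ** 2 / s ** 2 for bx, s in zip(beta_x, se_y)]
    ratios = [by / bx for bx, by in zip(beta_x, beta_y)]
    est = sum(wi * r for wi, r in zip(w, ratios)) / sum(w)
    se = math.sqrt(1.0 / sum(w))  # fixed-effect IVW standard error
    return est, se

# Hypothetical variant-telomere (beta_x) and variant-outcome (beta_y)
# associations with outcome standard errors; illustrative only
beta_x = [0.12, 0.08, 0.10]     # SD telomere length per allele
beta_y = [0.024, 0.018, 0.016]  # log-odds of outcome per allele
se_y = [0.01, 0.012, 0.009]
est, se = ivw_mr(beta_x, beta_y, se_y)
print(f"IVW estimate = {est:.2f} log-odds per SD telomere length")
```

The estimate is a weighted average of the per-variant ratios, which is why pleiotropy checks (e.g., the Egger plots mentioned in the abstract) matter: a single pleiotropic variant can pull the average.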
Abstract: Inherited genetic variation influencing leukocyte telomere length provides a natural experiment for testing associations with health outcomes, more robust to confounding and reverse causation than observational studies. We tested associations between genetically determined telomere length and aging‐related health outcomes in a large European ancestry older cohort. Data were from n = 379,758 UK Biobank participants aged 40–70, followed up for mean of 7.5 years (n = 261,837 participants aged 60 and older by end of follow‐up). Thirteen variants strongly associated with longer telomere length in peripheral white blood cells were analyzed using Mendelian randomization methods with Egger plots to assess pleiotropy. Variants in TERC, TERT, NAF1, OBFC1, and RTEL1 were included, and estimates were per 250 base pairs increase in telomere length, approximately equivalent to the average change over a decade in the general white population. We highlighted associations with false discovery rate‐adjusted p‐values smaller than .05. Genetically determined longer telomere length was associated with lowered risk of coronary heart disease (CHD; OR = 0.95, 95% CI: 0.92–0.98) but raised risk of cancer (OR = 1.11, 95% CI: 1.06–1.16). Little evidence for associations were found with parental lifespan, centenarian status of parents, cognitive function, grip strength, sarcopenia, or falls. The results for those aged 60 and older were similar in younger or all participants. Genetically determined telomere length was associated with increased risk of cancer and reduced risk of CHD but little change in other age‐related health outcomes. Telomere lengthening may offer little gain in later‐life health status and face increasing cancer risks.
1 INTRODUCTION
Telomeres are end fragments of chromosomes consisting of thousands of repeats of the noncoding sequence TTAGGG. Telomeres function to protect chromosome ends against genomic instability. Telomeres shorten with each cell cycle and contribute to replicative senescence when reaching the Hayflick limit (Hayflick & Moorhead, 1961). Telomerase is a ribonucleoprotein complex, which replenishes telomere loss during replication. Telomerase is active at early developmental stages but almost completely inactive in somatic tissues of adults (Collins and Mitchell, 2002). Telomerase activation may treat aging‐related diseases and prolong human lifespan (de Jesus & Blasco, 2013). Previous studies on adult or old mice have shown successes from improving physical function and lifespan without increasing incidence of cancer, but the translation from mice to humans is unknown (de Jesus & Blasco, 2013).
Telomere length is often approximated using leukocyte telomere length, which is easy to extract from blood and highly correlated with telomere length in other tissues (Daniali et al., (2013)). Measured telomere length has been associated with mortality and aging‐related outcomes in humans (Mather, Jorm, Parslow, & Christensen 2011; Sanders & Newman, 2013; Brown, Zhang, Mitchel, & Ailshire, 2018), including cancer (Zhang et al., 2017), cardiovascular disease (Haycock et al., 2014), cognitive function, physical performance such as grip strength, sarcopenia, and frailty (Lorenzi et al., 2018; Zhou et al., 2018), plus biomarkers of lung function, blood pressure, bone mineral density, cholesterol, interleukin 6, and C‐reactive protein. Observational associations cannot be consistently replicated likely due to study populations, measurement methods, and statistical modelling (Sanders & Newman, 2013). In addition, a number of factors may confound observational associations such as sex and race/ethnicity, paternal age at birth, smoking, psychological stress, and other psychosocial, environmental, and behavioral factors (Blackburn, Epel, & Lin, 2015; Starkweather et al., 2014).
Telomere length has a strong inherited genetic component in humans, with heritability estimates ranging from 34% to 82% (Broer, Codd, & Nyholt, 2013). Mendelian randomization (MR) is a powerful statistical method to evaluate the causal relationship between an exposure and an outcome, under certain assumptions (Davey Smith & Hemani, 2014). Analogous to randomized clinical trials, MR creates groups determined by genotypes, which are inherited at random and are independent of confounding factors. In theory, if the genotype groups are associated with the outcome, the association is independent of confounders and operates via the exposure, assuming no pleiotropy is present. MR studies are therefore more robust than observational studies to confounding, measurement error or bias, and reverse causation (i.e., downstream effects appearing to be causes).
By applying MR, we were able to study the effect of telomere length on aging with robustness to confounding. To date, 16 inherited genetic variants from genome‐wide association studies (GWAS) have been shown to be strongly associated with human leukocyte telomere length in European‐descent population samples (Haycock et al., 2017). Many of these loci harbor telomerase and telomere‐protective protein genes, including TERC, TERT, NAF1, OBFC1, and RTEL1 (Codd et al., 2013; Haycock et al., 2017). These variants have been used to perform MR, but the focus was on diseases (Haycock et al., 2017; Zhan et al., 2015). Additionally, previous studies tended to be underpowered because the genetic variants explain only a small percentage of variance in telomere length (2%–3%), so very large samples are required (Haycock et al., 2017). The small percentage of variance explained affects the power but not the validity of the causal inference, provided the genetic variants meet the Mendelian randomization assumptions: (a) they are associated with telomere length, (b) they are independent of all confounders of the association between telomere length and the outcome, and (c) they are independent of the outcome conditional on telomere length and all confounders (Haycock et al., 2017).
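The MR logic described above can be sketched numerically. The following is a minimal illustration with made-up summary statistics (not values from this study): each variant's Wald ratio divides its SNP-outcome association by its SNP-exposure (telomere length) association, and an inverse-variance-weighted (IVW) average combines the instruments into one causal estimate.

```python
import numpy as np

# Hypothetical per-variant GWAS summary statistics (illustrative only):
beta_exposure = np.array([0.10, 0.08, 0.12, 0.05, 0.09])   # SNP -> telomere length
beta_outcome  = np.array([0.020, 0.018, 0.025, 0.009, 0.017])  # SNP -> aging trait
se_outcome    = np.array([0.005, 0.006, 0.004, 0.007, 0.005])  # SE of SNP-outcome

# Wald ratio: causal effect implied by each variant on its own
wald = beta_outcome / beta_exposure

# IVW estimate: precision-weighted average across instruments,
# with weight = 1 / se(wald)^2 = (beta_exposure / se_outcome)^2
weights = (beta_exposure / se_outcome) ** 2
ivw = np.sum(wald * weights) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))
```

With these toy numbers the IVW estimate is about 0.20 per unit of genetically predicted telomere length; the validity of such an estimate rests on assumptions (a)–(c) above.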
In this study, we investigated causal relationships between telomere length and aging‐related outcomes, focusing on common measures of human aging such as grip strength, frailty, and cognitive function. We analyzed European‐descent participants from UK Biobank, which offers a wealth of genetic and phenotypic data. This study was not designed to analyze every aging trait in UK Biobank; instead, we selected traits to cover different aspects of aging, using input from senior investigators on the team. Cancer, coronary heart disease, hypertension, and pneumonia were selected because they are common in older adults, but we did not attempt to include every individual disease. Disease‐specific MR associations have been reported elsewhere (Haycock et al., 2017). Our project focuses on aging traits and is not powered for diseases that require a longer time to accumulate sufficient cases.
Sunday, August 25, 2019
Gender differences in the behavioral and subjective effects of methamphetamine in healthy humans
Gender differences in the behavioral and subjective effects of methamphetamine in healthy humans. Leah M. Mayo et al. Psychopharmacology, August 2019, Volume 236, Issue 8, pp 2413–2423. https://link.springer.com/article/10.1007/s00213-019-05276-2
Abstract
Rationale: Methamphetamine (MA) use is steadily increasing and thus constitutes a major public health concern. Women seem to be particularly vulnerable to developing MA use disorder, as they initiate use at a younger age and transition more quickly to problematic use. Initial drug responses may predict subsequent use, but little information exists on potential gender differences in the acute effects of MA prior to dependence.
Objective: We examined gender differences in the acute effects of MA on subjective mood and reward-related behavior in healthy, non-dependent humans.
Methods: Men (n = 44) and women (n = 29) completed 4 sessions in which they received placebo or MA under double-blind conditions twice each. During peak drug effect, participants completed the monetary incentive delay task to assess reaction times to cues signaling potential monetary losses or gains, in an effort to determine if MA would potentiate reward-motivated behavior. Cardiovascular and subjective drug effects were assessed throughout sessions.
Results: Overall, participants responded more quickly to cues predicting incentivized trials, particularly large-magnitude incentives, than to cues predicting no incentive. MA produced faster reaction times in women, but not in men. MA produced typical stimulant-like subjective and cardiovascular effects in all participants, but subjective ratings of vigor and (reduced) sedation were greater in women than in men.
Conclusions: Women appear to be more sensitive to the psychomotor-related behavioral and subjective effects of MA. These findings provide initial insight into gender differences in acute effects of MA that may contribute to gender differences in problematic MA use.
Keywords: Methamphetamine, Monetary incentive delay, Gender differences, Sex differences, Subjective effects, Psychomotor activation
From 2018... If all acts of love and pleasure are Her rituals, what about BDSM? Feminist culture wars in contemporary Paganism
From 2018... If all acts of love and pleasure are Her rituals, what about BDSM? Feminist culture wars in contemporary Paganism. Michelle Mueller. Theology & Sexuality, Volume 24, 2018, Issue 1, Jun 20 2017, http://www.tandfonline.com/10.1080/13558358.2017.1339930
Abstract: Since the 1970s, some religious practitioners of the contemporary Pagan movement (a.k.a. Neo-Paganism) have embraced spiritual BDSM, or “sacred kink,” as a spiritual discipline relating to their tradition. The “sex wars,” debates around pornography, prostitution, and sadomasochism, have appeared in the history of Wicca and contemporary Paganism. Pagan feminists have brought theological questions to the same debates. They have focused on the Wiccan Rede (“harm none”) and the affirmation of pleasure in Doreen Valiente’s Charge of the Goddess that states that, “All acts of pleasure are [the Goddess’s] rituals.” While support for BDSM has become the dominant public perspective in twenty-first century Paganism, the movement’s late twentieth-century history includes instances of anguish as individuals wrestled with their personal sexual desire and their feminist principles.
Keywords: Sadomasochism, Pagan, witchcraft, Wicca, BDSM, alternative sexuality
Appetite for destruction: Counterintuitive effects of attractive faces on people's food choices
Appetite for destruction: Counterintuitive effects of attractive faces on people's food choices. Tobias Otterbring. Psychology & Marketing, August 24 2019. https://doi.org/10.1002/mar.21257
Abstract: Faces in general and attractive faces, in particular, are frequently used in marketing, advertising, and packaging design. However, few studies have examined the effects of attractive faces on people's choice behavior. The present research examines whether attractive (vs. unattractive) faces increase individuals’ inclination to choose either healthy or unhealthy foods. In contrast to the beliefs held by most marketing professors, but consistent with visceral state theories, exposure to attractive (vs. unattractive) opposite‐sex faces increased choice likelihood of unhealthy foods. This effect was moderated by self‐view‐relevant attributes and exerted a particularly powerful influence on individuals who were single (vs. in a relationship) and individuals rating themselves as unattractive (vs. attractive). Furthermore, the effect was mediated by arousal, was stronger for men than for women, but did not generalize after exposure to attractive (vs. unattractive) same‐sex faces. As pictorial exposure is sufficient for the effect to occur, these findings have important implications for marketing, advertising, and public health.
Those with tattoos, especially visible ones, are more short-sighted in time preferences & more impulsive than the non-tattooed; almost nothing mitigates these results
Tat will tell: Tattoos and time preferences. Bradley J. Ruffle, Anne E. Wilson. Journal of Economic Behavior & Organization, August 24 2019. https://doi.org/10.1016/j.jebo.2019.08.001
Abstract: Survey and experimental evidence documents discrimination against tattooed individuals in the labor market and in commercial transactions. Thus, individuals’ decision to get tattooed may reflect short-sighted time preferences. We show that, according to numerous measures, those with tattoos, especially visible ones, are more short-sighted and impulsive than the non-tattooed. Almost nothing mitigates these results, neither the motive for the tattoo, the time contemplated before getting tattooed nor the time elapsed since the last tattoo. Even the expressed intention to get a(nother) tattoo predicts increased short-sightedness and helps establish the direction of causality between tattoos and short-sightedness.
Keywords: Experimental economics, Tattoo, Time preferences, Impulsivity
Lifespans are becoming more equal, with octogenarians & nonagenarians accounting for most deaths; extrapolation of the trend indicates that most children born this millennium will reach 100.
Vaupel, James W., Francisco Villavicencio, and Marie-Pier B. Boucher. 2019. “Demographic Perspectives on the Rise of Longevity.” SocArXiv. August 25. doi:10.31235/osf.io/gdjtv
Abstract
Background: This article reviews findings about the rise of life expectancy, current levels of life expectancy in countries with high life expectancies, and possible future trends in life expectancy. Maximum lifespans and the equality of lifespans are also considered.
Methods: Demographic data on age-specific mortality are used to estimate life expectancy. Validated data on exceptional lifespans are used to study the maximum length of life. Findings of the most significant publications are critically summarized.
Results: In the countries doing best, life expectancy started to increase around 1840 at a pace of almost 2.5 years per decade. This trend has continued until the present. Contrary to classical evolutionary theories of senescence and contrary to the predictions of many experts, the frontier of survival is advancing to higher ages. Furthermore, lifespans are becoming more equal, with octogenarians and nonagenarians accounting for most deaths in countries with high life expectancy. Extrapolation of the trend indicates that most children born this millennium will celebrate their 100th birthdays. Considerable uncertainty, however, clouds forecasts of life expectancy and maximum lifespans: life expectancy and maximum lifespan might increase very little if at all or longevity might rise much faster than in the past.
Conclusions: Substantial progress has been made over the past three decades in deepening understanding of how long humans have lived and how long they might live. The social, economic, health, cultural and political consequences of further increases in longevity are so significant that the development of more powerful methods of forecasting is a priority.
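The extrapolation in the Results can be illustrated with a back-of-envelope calculation (not the authors' method). Assume, for illustration, a period life expectancy at birth of about 80 years in 2000 for a leading country, and suppose the roughly 2.5-years-per-decade trend continues. Because mortality keeps improving while a cohort ages, the cohort's eventual lifespan L roughly satisfies the fixed point L = 80 + 0.25·L:

```python
# Illustrative fixed-point sketch; 80.0 is an assumed baseline and 2.5 is
# the historical trend cited in the abstract, not figures computed here.
e0_at_birth = 80.0       # assumed period life expectancy at birth, year 2000
rate_per_decade = 2.5    # ~2.5 years of gain per decade since ~1840

L = e0_at_birth
for _ in range(100):     # iterate L = e0 + (rate / 10) * L to convergence
    L = e0_at_birth + rate_per_decade / 10.0 * L

# L converges to 80 / (1 - 0.25) ~ 106.7 years, consistent with the claim
# that most children born this millennium will reach their 100th birthdays.
```

This is only a linear-extrapolation sketch; as the abstract stresses, forecasts of life expectancy and maximum lifespan remain highly uncertain in both directions.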
Saturday, August 24, 2019
People who were bitten by a snake (either venomous or not) scored lower in fear of snakes; people could become fear immunized even against highly biologically prepared fearful stimuli
Coelho, Carlos M., Panrapee Suttiwan, and Andras N. Zsido. 2019. “Fear Inoculation Among Snake Experts.” PsyArXiv. August 24. doi:10.31234/osf.io/ph5ug
Abstract: Prepared phobias are often seen as rapidly acquired, broadly generalized, and resistant to extinction. Nonetheless, given the opportunity for innocuous contact with certain kinds of stimuli, people tend to show less fear than others who never or rarely deal with the same stimuli. This study finds that people who were bitten by a snake (either venomous or not) scored lower in fear of snakes, as measured by the SNAQ-12 and SPQ surveys. These fearless people also have more experience with snakes than those who were not bitten. The results suggest that people can become immunized even against highly biologically prepared fearful stimuli, such as snakes, after a certain amount of previous benign exposure. We stress that lack of fear might lead people to become unworried and face extreme risk of snakebite envenomation, which was recently (2017) classified by the World Health Organization as a category A neglected tropical disease.
Sexual narcissism was positively associated with sexual functioning in both women and men; in the women's sample, sexual narcissism was related to a positive genital self-image
Sexual narcissism and its association with sexual and well-being outcomes. Verena Klein et al. Personality and Individual Differences, Volume 152, 1 January 2020, 109557. https://doi.org/10.1016/j.paid.2019.109557
Abstract: Theories on narcissism are traditionally closely related to sexuality. Most research on the association between narcissism and sexual behavior, however, has focused on harmful/maladaptive outcomes. The aim of the present two studies was to examine the possible health-promoting influence of both global and sexual narcissism on sexual function and genital self-image. In Study 1, sexual narcissism was positively associated with sexual functioning in both women and men (N = 505, online-recruited German participants). In the women's sample, sexual narcissism was related to a positive genital self-image. The facet sexual skill was identified as the most important predictor for sexual function and positive genital self-image in both women and men. Study 2 replicated and extended this association in an online sample of US Americans (N = 588) by including quality of life measures. Underscoring the benign nature of the sexual skill facet, this subscale was associated with quality of life in both women and men. The idea of a possible beneficial influence of sexual narcissism on sexuality related outcomes is discussed.
Keywords: Sexual narcissism, Narcissism, Genital self-image, Sexual function, Sexual health
Sexual experience has no effect on male mating or reproductive success in house mice
Sexual experience has no effect on male mating or reproductive success in house mice. Kerstin E. Thonhauser, Alexandra Raffetzeder & Dustin J. Penn. Scientific Reports, August 21 2019, volume 9, article 12145. https://www.nature.com/articles/s41598-019-48392-x
Abstract: The ability to learn from experience can improve Darwinian fitness, but few studies have tested whether sexual experience enhances reproductive success. We conducted a study with wild-derived house mice (Mus musculus musculus) in which we manipulated male sexual experience and allowed females to choose between (1) a sexually experienced versus a virgin male, (2) two sexually experienced males, or (3) two virgin males (n = 60 females and 120 males). This design allowed us to test whether females are more likely to mate multiply when they encounter more virgin males, which are known to be infanticidal. We recorded females’ preference and mating behaviours, and conducted genetic paternity analyses to determine male reproductive success. We found no evidence that sexual experience influenced male mating or reproductive success, and no evidence that the number of virgin males influenced female multiple mating. Females always copulated with both males and 58% of the litters were multiple-sired. Females’ initial attraction to a male correlated with their social preferences, but neither of these preference behaviours predicted male reproductive success – raising caveats for using mating preferences as surrogates for mate choice. Male reproductive success was predicted by mating order, but unexpectedly, males that copulated first sired fewer offspring.
“Daddies,” “Cougars”, & Relationship and Sexual Well-Being among Older Adults & Young Lovers: Women partnered to older men had less sex and more issues related to sexual satisfaction
“Daddies,” “Cougars,” and Their Partners Past Midlife: Gender Attitudes and Relationship and Sexual Well-Being among Older Adults in Age-Heterogenous Partnerships. Tony Silva. Socius: Sociological Research for a Dynamic World, August 19, 2019. https://doi.org/10.1177/2378023119869452
Abstract: Discussion of “daddies” has exploded in popular discourse, yet there is little sociological research on age-heterogenous partnerships. This paper uses data from the 2013 Midlife in the United States survey and the 2015–2016 National Social Life, Health, and Aging Project to examine age-heterogenous partnerships at older ages (63 was the approximate average age of each sample). On most measures of life satisfaction and relationship well-being, individuals in age-heterogenous partnerships—regardless of age or gender—were not very different from their counterparts in age-homogenous relationships. Some differences did emerge, however, especially related to sexual well-being. Women partnered to older men had less sex and more issues related to sexual satisfaction than their counterparts in age-homogenous relationships. Latent class analyses suggest that these differences were driven by around 40 percent of younger women partnered to older men, a minority of whom were deeply dissatisfied. This research helps address the underrepresentation of sexuality research at older ages and the sociological research gap about age-heterogenous partnerships.
Keywords: sexuality, gender, age-heterogenous partnership, daddy, cougar
Absolute and relative estimates of genetic and environmental variance in brain structure volumes
Absolute and relative estimates of genetic and environmental variance in brain structure volumes. Lachlan T. Strike et al. Brain Structure and Function, August 19 2019. https://link.springer.com/article/10.1007/s00429-019-01931-8
Abstract: Comparing estimates of the amount of genetic and environmental variance for different brain structures may elucidate differences in the genetic architecture or developmental constraints of individual brain structures. However, most studies compare estimates of relative genetic (heritability) and environmental variance in brain structure, which do not reflect differences in absolute variance between brain regions. Here we used a population sample of young adult twins and singleton siblings of twins (n = 791; M = 23 years, Queensland Twin IMaging study) to estimate the absolute genetic and environmental variance, standardised by the phenotypic mean, in the size of cortical, subcortical, and ventricular brain structures. Mean-standardised genetic variance differed widely across structures [23.5-fold range: 0.52% (hippocampus) to 12.28% (lateral ventricles)], but the range of estimates within cortical, subcortical, or ventricular structures was more moderate (two- to fivefold range). There was no association between mean-standardised and relative measures of genetic variance (i.e., heritability) in brain structure volumes. We found similar results in an independent sample (n = 1075, M = 29 years, Human Connectome Project). These findings open important new lines of enquiry: namely, understanding the bases of these variance patterns, and their implications regarding the genetic architecture, evolution, and development of the human brain.
Keywords: Volume, Genetics, Magnetic resonance imaging, Twins
Radical lifespan disparities exist in the animal kingdom: Revamping the Evolutionary Theories of Aging
Revamping the Evolutionary Theories of Aging. Adiv A. Johnson, Maxim N. Shokhirev, Boris Shoshitaishvili. Ageing Research Reviews, August 23 2019, 100947. https://doi.org/10.1016/j.arr.2019.100947
Highlights
• Extrinsic mortality is one of the most important drivers in the evolution of aging.
• Classical predictions expect higher extrinsic mortality to shorten evolved lifespan.
• The bulk of published data conform to the classical evolutionary theories of aging.
• Increased extrinsic mortality can sometimes select for longer evolved lifespans.
• Immortal animals that experience extrinsic mortality challenge classical theories.
• The aging response to extrinsic mortality involves multiple interacting factors.
Abstract: Radical lifespan disparities exist in the animal kingdom. While the ocean quahog can survive for half a millennium, the mayfly survives for less than 48 hours. The evolutionary theories of aging seek to explain why such stark longevity differences exist and why a deleterious process like aging evolved. The classical mutation accumulation, antagonistic pleiotropy, and disposable soma theories predict that increased extrinsic mortality should select for the evolution of shorter lifespans and vice versa. Most experimental and comparative field studies conform to this prediction. Indeed, animals with extreme longevity (e.g., Greenland shark, bowhead whale, giant tortoise, vestimentiferan tubeworms) typically experience minimal predation. However, data from guppies, nematodes, and computational models show that increased extrinsic mortality can sometimes lead to longer evolved lifespans. The existence of theoretically immortal animals that experience extrinsic mortality – like planarian flatworms, panther worms, and hydra – further challenges classical assumptions. Octopuses pose another puzzle by exhibiting short lifespans and an uncanny intelligence, the latter of which is often associated with longevity and reduced extrinsic mortality. The evolutionary response to extrinsic mortality is likely dependent on multiple interacting factors in the organism, population, and ecology, including food availability, population density, reproductive cost, age-mortality interactions, and the mortality source.
Keywords: evolution of aging, mutation accumulation, antagonistic pleiotropy, disposable soma, lifespan, extrinsic mortality
However, since radical people represent only 20% of the population and there was no effect for Facebook or blogs, the overall effect of the Internet was moderation, not polarization
Tatsuo Tanaka, 2019. "Does the Internet cause polarization? -Panel survey in Japan-," Keio-IES Discussion Paper Series 2019-015, Institute for Economics Studies, Keio University. https://ideas.repec.org/p/keo/dpaper/2019-015.html
Abstract: There is concern that the Internet causes ideological polarization through selective exposure and the echo chamber effect. This paper examines the effect of social media on polarization by applying a difference-in-differences approach to panel data of 50 thousand respondents in Japan. Japan is a good case for this research because other factors affecting polarization, such as a huge wealth gap and massive immigration, are not serious issues there; it thus offers a quasi-natural experimental setting for testing the effect of the Internet. The results show that people who started using social media during the research period (targets) were no more polarized than people who did not (controls). There was a tendency for younger and politically moderate people to be less polarized. The only case in which the Internet increased polarization was for already radical people who started using Twitter. However, since radical people represent only 20% of the population and there was no effect for Facebook or blogs, the overall effect of the Internet was moderation, not polarization.
Check also... Explaining the Spread of Misinformation on Social Media: Evidence from the 2016 U.S. Presidential Election. Pablo Barbera. Note prepared for the APSA Comparative Politics Newsletter, Fall 2018. https://www.bipartisanalliance.com/2019/06/ironically-it-may-not-be-much-trumpeted.html
And Testing popular news discourse on the “echo chamber” effect: Does political polarisation occur among those relying on social media as their primary politics news source? An Nguyen, Hong Tien Vu. First Monday, Volume 24, Number 6 - 3 June 2019.
https://www.bipartisanalliance.com/2019/06/does-political-polarisation-occur-among.html
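The difference-in-differences logic behind Tanaka's design can be illustrated with a minimal sketch: the treatment effect is the change in the outcome among people who adopted social media, minus the change among those who did not. The numbers below are hypothetical, for illustration only, and do not come from the study.

```python
# Minimal difference-in-differences sketch (illustrative; data are hypothetical,
# not taken from Tanaka 2019).

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD effect = (change among targets) - (change among controls)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean polarization scores (e.g., ideological-extremity index) for
# people who started using social media (targets) vs. those who did not (controls).
effect = did_estimate(treat_pre=4.0, treat_post=4.1,
                      control_pre=4.2, control_post=4.4)
print(round(effect, 2))  # -0.1: targets became relatively *less* polarized
```

A negative estimate, as in this toy example, is the pattern the paper reports for most groups: adopters polarized no more, and often less, than non-adopters.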
Growing body of evidence suggests that inconsistent hand preference is indicative of an increased disposition to update one’s beliefs upon exposure to novel information; this seems wrong
M. L., Vilsmeier, J. K., Voracek, M., & Tran, U. S. (2019). No Evidence That Lateral Preferences Predict Individual Differences in the Tendency to Update Mental Representations: A Replication-Extension Study. Collabra: Psychology, 5(1), 38. DOI: http://doi.org/10.1525/collabra.227
Abstract: A growing body of evidence suggests that inconsistent hand preference is indicative of an increased disposition to update one’s beliefs upon exposure to novel information. This is attributed to a facilitated exchange of information between the two brain hemispheres among inconsistent handers, compared to consistent handers. Currently available studies provide only indirect evidence for such an effect, were mostly based on small sample sizes, and did not provide measures of effect size. Small sample size is a major factor contributing to low replicability of research findings and false-positive results. We thus attempted to replicate Experiment 1 of Westfall, Corser and Jasper (2014), which appears to be representative of research on degree of handedness and belief updating in terms of the employed methods. We utilized data from a sample more than 10 times the size (N = 1243) of the original study and contrasted the commonly applied median-split technique to classify inconsistent and consistent handers with an empirically grounded classification scheme. Following a replication-extension approach, besides handedness, footedness was also explored. Only one out of 12 chi-squared tests reached significance and supported the original hypothesis that inconsistent handers stay with, or switch more often from, the status quo than consistent handers, depending on the valence of novel information. A small-telescopes analysis suggested that the original study had too low analytic power to detect its reported effect reliably. These results cast doubt on the assumption that inconsistent and consistent handers differ in the tendency to update mental representations. We discuss the use of the median-split technique in handedness research, available neuroscientific evidence on interhemispheric interaction and inconsistent handedness, and avenues of future research.
Keywords: Handedness, Degree-of-handedness, Footedness, Status Quo Bias, Lateral Preference Classification