Faux-nobo: “Naked Bonobo” demolishes myth of sexy, egalitarian bonobos. Edward Clint • Oct 9, 2017. https://www.skepticink.com/incredulous/2017/10/09/bonobo-myth-demolished
Just a selection of a very interesting post reviewing Lynn Saxon's book:
"
That said, bonobo society is far from peaceful and bonobos are far from gentle.
Kano (1992) found a majority of one group’s individuals had “abnormalities” of the limbs, digits, ears, eyeballs, genitalia, and other parts. 28 counts of total loss of a finger or toe, 96 counts of partial loss. Only one of the 22 adult males had intact fingers and toes. 32 counts of ear lacerations which almost always result from fighting. (p. 117-8)
At Apenheul Zoo in the Netherlands, five female bonobos attacked a male and were seen gnawing on his toes; the flesh could be seen between their teeth as they chewed away. (p. 119)
At least two zookeepers have lost parts of digits (p. 119)
While releasing bonobos back into the wild after rehabilitation at a sanctuary, three trackers were attacked and mutilated. They lost noses, bits of fingers and one lost an ear. One of the men spent a month in the hospital and two required a year of reconstructive facial surgery. (p. 122)
Overall, male-male aggression rates are similar in chimpanzees and bonobos (p. 126)
Female aggression toward infants and other females
The alpha female at Twycross Zoo took the infant of the lowest ranking female, even though she was still nursing her own infant. After weeks of rough treatment at the alpha’s hands, she lost interest and the infant had to be removed for human rearing as it showed signs of “weakness and dehydration”. (p. 120-121)
There are at least 8 cases at the Plankendael and Stuttgart zoos of infants being abducted or subjected to aggression; the mothers of the stolen infants behaved nervously and showed signs of distress. While trying to get their infants back, some of the females would present for genito-genital rubbing. (p. 121)
Is there anything sexier than bartering a sex act for your baby’s life?
In one Lomako group, aggression between females was about 7 times higher when two or more oestrous females were in the party than when there was only one. (p. 124)
I thought that females were keeping the peace with sex, not causing fights over sex?
Next, are bonobos egalitarian? That is, are various goods and resources (food, sex, social support) more evenly distributed among individuals? Chimpanzees are thought to have the opposite arrangement, a more rigid rank system that greatly privileges the chimp elites. Saxon writes that bonobos are probably somewhat more egalitarian than chimpanzees, but much less so than is commonly supposed. Sometimes, it is not clear that there is a difference at all.
A female-biased sex ratio is taken to be evidence of male intrasexual competition. Competition is a physiologically taxing and risky enterprise that leads to early death for many males. The problem here is that, overall, the sex ratio of chimpanzees and bonobos is very similar.
Both females and males had higher mating rates when they were aggressors compared to when they were targets of aggression. (p. 124)
Bonobo society, including its females, rewards rank and aggression:
Females strongly prefer high ranking males. When in oestrus, this preference intensifies.
High ranking males are more aggressive, and actively block other males from access to fertile females.
Ranking female “wingmoms” aid their sons (but not daughters) in bullying and picking fights to advance their status. (p. 115)
Feeding is highly segmented by rank. Low ranking individuals may be charged or attacked if attempting to line-jump.
Female bonobos disperse to other groups around adolescence. When accepted into their new group, they solicit sex acts from higher ranking males at food sites. This behavior, and their total libido, drops substantially as they gain rank. In other words, they’re forced to barter sex for food, not because they’re eager for sexual contact. When they don’t have to, they don’t. Few consider coerced prostitution a sign of gender equality.
...
4. Should humans envy bonobos?
Perhaps the most important idea in this book is that the “great” parts of the faux-nobo lifestyle its wannabes want us to emulate become, properly understood, disturbing if not outright nightmarish:
“Sexy” bonobos have the most frequent sex as sub-adults and sexual contact between juveniles, infants, and adults is quite common and normal.
“Tension-reducing” sex among bonobos is most often not about fun. It is not about enthusiastic consent, but coercion on threat of aggression. We know that bonobos, like humans, have preferred sexual partners. But the social friction most likely occurs between two individuals who are not close or friendly, so bonobos are required to offer sexual contact to individuals they do not like. Faced with an agitated, aggressive male you do not care for, how would you women feel about offering copulation to calm him? And men, how would you feel about offering a penis to rub or your rump for the same? This is the bonobo way.
The overwhelming majority of sexual actions involving genitals do not involve ejaculation or (so far as can be told) orgasm in bonobos. This includes masturbation and intercourse.
"
Thursday, October 19, 2017
Hedge Fund Managers With Psychopathic Tendencies Make for Worse Investors
Hedge Fund Managers With Psychopathic Tendencies Make for Worse Investors. Leanne ten Brinke, Aimee Kish, Dacher Keltner. Personality and Social Psychology Bulletin. First published date: October-19-2017. https://doi.org/10.1177/0146167217733080
Abstract: It is widely assumed that psychopathic personality traits promote success in high-powered, competitive contexts such as financial investment. By contrast, empirical studies find that psychopathic leaders can be charming and persuasive, but poor performers who mismanage, bully, and engage in unethical behavior. By coding nonverbal behaviors displayed in semistructured interviews, we identified the psychopathic, Machiavellian, and narcissistic tendencies in 101 hedge fund managers, and examined whether these traits were associated with financial performance over the course of 10 diverse years of economic volatility (2005-2015). Managers with greater psychopathic tendencies produced lower absolute returns than their less psychopathic peers, and managers with greater narcissistic traits produced decreased risk-adjusted returns. The discussion focuses on the costs of Dark Triad traits in financial investment, and organizational leadership more generally.
Vindeby, The World’s First Offshore Wind Farm Retires: A Post-Mortem
World’s First Offshore Wind Farm Retires: A Post-Mortem. M J Kelly, Department of Engineering, University of Cambridge, United Kingdom. Date: 18/10/17
https://www.thegwpf.com/worlds-first-offshore-wind-farm-retires-a-post-mortem/
The first-ever offshore wind farm, Vindeby, in Danish waters, is being decommissioned after twenty-five years, DONG Energy has announced.[1] By its nature it was an experiment, and we can now see whether or not it has been a successful alternative to fossil or nuclear-fuelled electricity.
It consisted of eleven turbines, each with a capacity of 0.45 MW, giving a total export capacity for the wind farm of 5 MW. The hub height of each turbine was 37.5 m and blade height 17 m, small by today’s standards. Because of its date of construction, it would have been all but totally reliant on conventional energy for its manufacture and installation. The original stated project cost was £7.16 million in 1991, which is equivalent to approximately £10 million today.[2]
During its lifetime, it delivered 243 GWh to the Danish electricity grid. This means that the actual amount of electricity generated was 22% of that which would have been generated if it had delivered 5 MW all the time for 25 years. In technical terms, it had a load factor of 0.22. From the same source we see the initial expectation was that 3506 houses would be powered annually, with a saving of 7085 tonnes of carbon dioxide per annum.[3] There was no clear indication of Vindeby’s expected lifetime. Since the average household’s annual use of energy in Denmark[4] is 5000 kWh, we can calculate that the windfarm’s anticipated energy output was 438 GWh over its 25-year lifetime. The actual total of 243 GWh was therefore only 55% of that expectation.
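The load-factor and expectation arithmetic above can be checked in a few lines. This is only a sanity check of the article's own figures (5 MW capacity, 25 years, 243 GWh delivered, 3506 houses at 5000 kWh/year); nothing here is new data.

```python
# Vindeby: actual output vs. theoretical maximum and vs. stated expectation.
capacity_mw = 5.0               # total export capacity
years = 25
hours = years * 365 * 24        # lifetime in hours (ignoring leap days)

delivered_gwh = 243.0
max_possible_gwh = capacity_mw * hours / 1000.0      # MWh -> GWh

load_factor = delivered_gwh / max_possible_gwh
print(f"load factor: {load_factor:.2f}")             # ~0.22

# Expectation implied by "3506 houses at 5000 kWh/year for 25 years":
expected_gwh = 3506 * 5000 * years / 1e6             # kWh -> GWh
print(f"expected: {expected_gwh:.0f} GWh")           # ~438 GWh
print(f"actual/expected: {delivered_gwh / expected_gwh:.0%}")  # ~55%
```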
The (annual average) spot price for electricity from both the European Energy Exchange and Nordpool quoted over the period 2006–2014 dropped approximately linearly from €50–55/MWh in 2006 to €32–37/MWh in 2014.[5] If we assume that this trend was constant over 1991–2017, we can see that the average wholesale price paid for the Vindeby electricity was of order of €50/MWh. On this basis the revenue of the windfarm was approximately €12 million: perhaps €15 million at today’s prices. This means that the windmill spent 75% of its life paying off the £10 million cost of its construction, and most of the rest paying for maintenance. In terms of effective energy revenue, the return on input cost was close to 1:1. The individual project may have been just profitable, but the project is insufficiently productive as will be seen below.
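The revenue estimate follows directly from the assumed average price. A minimal sketch, using the article's assumption of ~€50/MWh over the whole 1991–2017 period:

```python
# Vindeby lifetime revenue at the assumed average wholesale price.
delivered_mwh = 243_000            # 243 GWh
avg_price_eur_per_mwh = 50         # article's linear-trend assumption
revenue_eur = delivered_mwh * avg_price_eur_per_mwh
print(f"revenue: EUR {revenue_eur / 1e6:.1f} million")   # ~EUR 12 million
```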
Other windfarms have performed even worse. Lely, a smaller farm sited off the Netherlands coast, was decommissioned last year.[6] It consisted of four turbines of 0.5 MW capacity, and cost £4.4 million in 1992. One nacelle and its blades failed in 2014 because of metal fatigue.[7] It produced 3500 MWh per year, implying a load factor of 20%. At the same €50/MWh as above, it would have earned €4.2 million over its lifetime, less than the initial project cost by any reckoning, let alone the additional cost of any maintenance.[8]
The reader should note that the analysis above assumes that the scrap value of the wind turbines will pay for the decommissioning process, and so does not degrade the ratio any further: presumably the bases will remain in the sea. This assumption has been made explicit for the Cowley Ridge wind farm in Alberta, Canada, for which the actual electricity energy delivered into the Canadian grid is not in the public domain, so this similar exercise cannot be repeated.[9]
For a typical fossil-fuel plant, effective energy revenue return on input cost is of the order of 50:1 if one considers the plant alone and about 15:1 when one includes the cost of the fuel. For a nuclear plant the ratio is more like 70:1, and the fuel is a negligible part of the overall cost. The energy generation and distribution sector makes up approximately 9% of the whole world economy, suggesting that the global energy sector has an energy return ratio of roughly 11:1 (the reciprocal of that 9% share).[10] It is this high average ratio, buoyed by much higher ratios in certain areas (e.g. 15:1 in Europe), that allows our present world economy to function.
The lesson learned from the considerations discussed above is that wind farms like these early examples could not power a modern economy unless assisted by substantial fossil-fuelled energy.
Interestingly, DONG Energy, which built Vindeby, is proposing the much newer and bigger Hornsea Project One in the North Sea. This wind farm will have 174 turbines, each with a hub height of 113 m, 75 m blades and a nameplate capacity of 7 MW. It is due to be commissioned in 2020.[11] The project capacity is 1218 MW, and it has a current cost estimate of €3.36 billion. No clear statement of expected lifetime has been provided, but DONG has stated that 862,655 homes will be powered annually. Assuming the average per-household electricity use in the UK[12] to be 4000 kWh, this implies a constant generation of 394 MW over the year, which is 32% of capacity, which is probably realistic.
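The 32% figure can be reproduced from the homes-powered claim. A quick check, using the article's assumed 4000 kWh per UK household per year:

```python
# Hornsea Project One: implied constant output from "862,655 homes powered".
homes = 862_655
kwh_per_home_per_year = 4000
nameplate_mw = 1218
hours_per_year = 365 * 24

annual_kwh = homes * kwh_per_home_per_year
constant_mw = annual_kwh / hours_per_year / 1000     # kW -> MW
print(f"implied constant output: {constant_mw:.0f} MW")   # ~394 MW
print(f"implied load factor: {constant_mw / nameplate_mw:.0%}")  # ~32%
```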
The agreed wholesale price of the Hornsea energy over the next twenty-five years is £140/MWh. Even assuming a very generous load factor of 50%, Hornsea’s lifetime revenue would be about £20 billion, suggesting a ratio of revenue to cost of 6:1 (reduced further by any maintenance costs), still barely half the average value that prevails in the global economy, which is more than 85% fossil-fuel based.
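The lifetime-revenue figure can be reconstructed from those inputs. A sketch under the article's own assumptions (50% load factor, 25 years, £140/MWh strike price):

```python
# Hornsea lifetime revenue under a generous 50% load factor.
capacity_mw = 1218
load_factor = 0.50
years = 25
price_gbp_per_mwh = 140

lifetime_mwh = capacity_mw * load_factor * years * 365 * 24
revenue_gbp = lifetime_mwh * price_gbp_per_mwh
# ~GBP 19 billion, consistent with the "about GBP 20 billion" in the text
print(f"lifetime revenue: GBP {revenue_gbp / 1e9:.1f} billion")
```

Note that the quoted 6:1 ratio compares sterling revenue with a euro cost estimate; at recent exchange rates the conclusion is unchanged.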
The secret of fossil fuel success in the world economy is the high calorific value of the fuel. A ton of coal costing £42.50 produces of the order of 2000 kWh of electricity in a new coal-fired power plant (up 30% from older plants). This sells for £400 wholesale, an energy return on energy invested (EROEI) of about 10:1. A therm of natural gas costs £0.40 and produces 30 kWh of electricity, which sells for £6, representing an EROEI of 15:1. Fuel-less technologies do not have this advantage.
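The coal and gas ratios follow from the fuel cost and the implied wholesale price of £0.20/kWh (£400 for 2000 kWh). A quick check of the arithmetic:

```python
# Revenue-to-fuel-cost ratios for coal and gas, from the figures in the text.
wholesale_gbp_per_kwh = 0.20          # GBP 400 / 2000 kWh

coal_cost_gbp, coal_kwh = 42.50, 2000
coal_ratio = coal_kwh * wholesale_gbp_per_kwh / coal_cost_gbp
print(f"coal: {coal_ratio:.0f}:1")    # ~9:1, quoted in the text as 10:1

gas_cost_gbp, gas_kwh = 0.40, 30
gas_ratio = gas_kwh * wholesale_gbp_per_kwh / gas_cost_gbp
print(f"gas: {gas_ratio:.0f}:1")      # 15:1
```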
The disappointing results from Vindeby, and the likely results from Hornsea and similar projects, must be seen in the context of the increasing wealth of a growing world population. If all the world’s 10.3 billion people alive in 2055 were to lead a European (as opposed to American) style of life, we would need 2.5 times the primary energy used today. If, say, half of the energy were suddenly produced with an energy return on investment of 5.5:1 (i.e. half the present world average), then for the same investment we would get only 75% of the energy and would need to cut energy consumption: the first 10% reduction could come by curtailing higher education, international air travel, the internet, advanced medicine and high culture. We could invest proportionately more of our economy in energy production than we do now, but that would still mean a step backward against the trend of the last 200 years of reducing the proportion of the total economy taken by the energy sector.[13] To avoid this undesirable scenario we would need new forms of energy to match the fossil/nuclear fuel performance.
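The 75% figure in that scenario comes from averaging the two halves of the investment. A minimal sketch of the reasoning:

```python
# If half of investment keeps today's energy return and half yields only
# 50% as much energy per unit invested, total energy for the same
# investment falls to the average of the two halves.
relative_yield_unchanged = 1.0     # half at ~11:1 (today's average)
relative_yield_halved = 0.5        # half at ~5.5:1
total = 0.5 * relative_yield_unchanged + 0.5 * relative_yield_halved
print(f"energy for the same investment: {total:.0%}")   # 75%
```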
In this context it is useful to remember that global economic growth is very sensitive to the cost of energy. The energy cost spikes in the mid-1970s and in 2010 form the boundaries between the 5% growth rate of the global economy from 1950–1975, the 3% from 1980–2008, and the 2.5% since 2012. There is a lot at stake in the choice between cheap fossil fuels and expensive renewables.
Notes
[1] http://www.dongenergy.com/en/media/newsroom/news/articles/the-worlds-first-offshore-wind-farm-is-retiring
[2] http://www.4coffshore.com/windfarms/vindeby-denmark-dk06.html
[3] http://www.dongenergy.com/en/media/newsroom/news/articles/the-worlds-first-offshore-wind-farm-is-retiring
[4] http://orbit.dtu.dk/files/4203486/ICT_ECEEE_2009_paper.pdf
[5] http://www.pfbach.dk/firma_pfb/pfb_decreasing_wholesale_prices_2015_11_11.pdf
[6] http://www.4coffshore.com/windfarms/lely-netherlands-nl27.html
[7] http://www.offshorewind.biz/2016/12/07/lely-wind-farm-fully-decommissioned-video/
[8] http://www.caddet-re.org/assets/no61.pdf
[9] http://www.power-eng.com/articles/2016/05/transalta-shuts-cowley-ridge-wind-farm-in-alberta-may-repower-it.html
[10] http://www.thegwpf.com/michael-kelly-a-challenge-for-renewable-energies/
[11] http://www.4coffshore.com/windfarms/hornsea-project-one-united-kingdom-uk81.html
[12] https://www.statista.com/statistics/517845/average-electricity-consumption-uk/
[13] Until the industrial revolution, the UK economy operated on an energy return investment of 2:1: see C. W. King, J P Maxwell and A Donovan, ‘Comparing world economic and net energy metrics, Part I: Single Technology and Commercial Perspective’ Energies 2015: 12949-74. The 2:1 ratio applies in some parts of Africa today: when half the economy is spent providing food and fuel, it leaves little over for other activities.
Changes in the Public Acceptance of Immigrants and Refugees in Germany in the Course of Europe’s ‘Immigration Crisis’
Christian S Czymara, Alexander W Schmidt-Catran; Refugees Unwelcome? Changes in the Public Acceptance of Immigrants and Refugees in Germany in the Course of Europe’s ‘Immigration Crisis’, European Sociological Review, , jcx071, https://doi.org/10.1093/esr/jcx071
Abstract: Based on an innovative design, combining a multi-factorial survey experiment with a longitudinal perspective, we examine changes in the public acceptance of immigrants in Germany from the beginning of the so-called ‘migration crisis’ to after the sexual assaults of New Year’s Eve (NYE) 2015/2016. In contrast to previous studies investigating similar research questions, our approach allows to differentiate changes along various immigrant characteristics. Derived from discussions making up the German immigration discourse during this time, we expect reduced acceptance especially of those immigrants who were explicitly connected to the salient events, like Muslims and the offenders of NYE. Most strikingly, we find that refugees were generally highly accepted and even more so in the second wave, whereas the acceptance of immigrants from Arab or African countries further decreased. Moreover, female respondents’ initial preference for male immigrants disappeared. Contrary to our expectations, we find no changes in the acceptance of Muslims. We conclude that (i) public opinion research is well advised to match the particular political and social context under investigation to a fitting outcome variable to adequately capture the dynamics of anti-immigrant sentiment and that (ii) the vividly discussed upper limits for refugees seem to be contrary to public demands according to our data.
Birth order: no meaningful effects on life satisfaction, locus of control, interpersonal trust, reciprocity, risk taking, patience, impulsivity, or political orientation
Probing Birth-Order Effects on Narrow Traits Using Specification-Curve Analysis. Julia M. Rohrer, Boris Egloff, and Stefan C. Schmukle. Psychological Science. First published date: October-17-2017. DOI 10.1177/0956797617723726
Abstract: The idea that birth-order position has a lasting impact on personality has been discussed for the past 100 years. Recent large-scale studies have indicated that birth-order effects on the Big Five personality traits are negligible. In the current study, we examined a variety of more narrow personality traits in a large representative sample (n = 6,500–10,500 in between-family analyses; n = 900–1,200 in within-family analyses). We used specification-curve analysis to assess evidence for birth-order effects across a range of models implementing defensible yet arbitrary analytical decisions (e.g., whether to control for age effects or to exclude participants on the basis of sibling spacing). Although specification-curve analysis clearly confirmed the previously reported birth-order effect on intellect, we found no meaningful effects on life satisfaction, locus of control, interpersonal trust, reciprocity, risk taking, patience, impulsivity, or political orientation. The lack of meaningful birth-order effects on self-reports of personality was not limited to broad traits but also held for more narrowly defined characteristics.
Drinkers are not aware of the improvement, but alcohol consumption significantly improves observer ratings of Dutch language skills, specifically pronunciation
Dutch courage? Effects of acute alcohol consumption on self-ratings and observer ratings of foreign language skills. Fritz Renner, Inge Kersbergen, Matt Field, and Jessica Werthmann. Journal of Psychopharmacology. First published date: October-18-2017. DOI 10.1177/0269881117735687
Abstract
Aims: A popular belief is that alcohol improves the ability to speak in a foreign language. The effect of acute alcohol consumption on perceived foreign language performance and actual foreign language performance in foreign language learners has not been investigated. The aim of the current study was to test the effects of acute alcohol consumption on self-rated and observer-rated verbal foreign language performance in participants who have recently learned this language.
Methods: Fifty native German speakers who had recently learned Dutch were randomized to receive either a low dose of alcohol or a control beverage that contained no alcohol. Following the experimental manipulation, participants took part in a standardized discussion in Dutch with a blinded experimenter. The discussion was audio-recorded and foreign language skills were subsequently rated by two native Dutch speakers who were blind to the experimental condition (observer-rating). Participants also rated their own individual Dutch language skills during the discussion (self-rating).
Results: Participants who consumed alcohol had significantly better observer-ratings for their Dutch language, specifically better pronunciation, compared with those who did not consume alcohol. However, alcohol had no effect on self-ratings of Dutch language skills.
Conclusions: Acute alcohol consumption may have beneficial effects on the pronunciation of a foreign language in people who have recently learned that language.
Infants React with Increased Arousal to Spiders and Snakes
Itsy Bitsy Spider…: Infants React with Increased Arousal to Spiders and Snakes. Stefanie Hoehl, Kahl Hellmer, Maria Johansson and Gustaf Gredebäck. Front. Psychol., October 18 2017, https://doi.org/10.3389/fpsyg.2017.01710
Abstract: Attention biases have been reported for ancestral threats like spiders and snakes in infants, children, and adults. However, it is currently unclear whether these stimuli induce increased physiological arousal in infants. Here, 6-month-old infants were presented with pictures of spiders and flowers (Study 1, within-subjects), or snakes and fish (Study 1, within-subjects; Study 2, between-subjects). Infants’ pupillary responses linked to activation of the noradrenergic system were measured. Infants reacted with increased pupillary dilation indicating arousal to spiders and snakes compared with flowers and fish. Results support the notion of an evolved preparedness for developing fear of these ancestral threats.
Wednesday, October 18, 2017
Giving multivitamin and iodine supplements to deficient children, and learning to play a musical instrument, raise IQ
Raising IQ among school-aged children: Five meta-analyses and a review of randomized controlled trials. John Protzko. Developmental Review, Volume 46, December 2017, Pages 81-101. https://doi.org/10.1016/j.dr.2017.05.001
Highlights
• There have been 36 RCTs attempting to raise IQ in school-aged children.
• Nutrient supplementation includes multivitamins, iron, iodine, and zinc.
• Training includes EF and reasoning training, and learning a musical instrument.
• We meta-analyze this literature to provide a best-evidence summary to date.
• Multivitamin & iodine supplementation, and learning a musical instrument, raise IQ.
Abstract: In this paper, we examine nearly every available randomized controlled trial that attempts to raise IQ in children from once they begin kindergarten until pre-adolescence. We use meta-analytic procedures when there are more than three studies employing similar methods, reviewing individual interventions when too few replications are available for a quantitative analysis. All studies included in this synthesis are on non-clinical populations. This yields five fixed-effects meta-analyses on the roles of dietary supplementation with multivitamins, iron, and iodine, as well as executive function training, and learning to play a musical instrument. We find that supplementing a deficient child with multivitamins raises their IQ, supplementing a deficient child with iodine raises their IQ, and learning to play a musical instrument raises a child’s IQ. The role of iron, and executive function training are unreliable in their estimates. We also subject each meta-analytic result to a series of robustness checks. In each meta-analysis, we discuss probable causal mechanisms for how each of these procedures raises intelligence. Though each meta-analysis includes a moderate to small number of studies (< 19 effect sizes), our purpose is to highlight the best available evidence and encourage the continued experimentation in each of these fields.
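For reference, the "fixed-effects meta-analyses" the abstract mentions pool study-level effect sizes by inverse-variance weighting: each study is weighted by 1/SE², so precise studies count more. A minimal sketch with illustrative, made-up effect sizes (not the paper's data):

```python
import math

# Hypothetical standardized effect sizes and standard errors from four trials.
effects = [0.30, 0.15, 0.45, 0.20]
ses     = [0.10, 0.08, 0.15, 0.12]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for se in ses]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

The pooled standard error is always smaller than any single study's, which is why even a "moderate to small number of studies" can yield a reasonably tight estimate when the fixed-effect assumption holds.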
Deciding for oneself, we are averse to loss; deciding for one other, aversion is significantly reduced
Decision making for others: The case of loss aversion. Sascha C. Füllbrunn and Wolfgang J. Luhan. Economics Letters, https://doi.org/10.1016/j.econlet.2017.09.037
Highlights
• We test whether loss aversion plays a role in risky decision making for others.
• Deciding for oneself, we find loss aversion levels similar to the literature.
• Deciding for one other only, we find loss aversion to be significantly reduced.
• Deciding for oneself and one other at the same time, we find no difference.
Abstract: Risky decisions are at the core of economic theory. While many of these decisions are taken on behalf of others rather than for oneself, the existing literature finds mixed results on whether people take more or less risk for others than for themselves. Recent studies suggest that taking decisions for others reduces loss aversion, thereby increasing risk taking on behalf of others. To test this, we elicit loss aversion in three treatments: making risky decisions for oneself, for one other subject, or for the decision maker and another person combined. We find a clear treatment effect when making decisions for others but not when making decisions for both.
JEL classification: C9; D3; D8
Keywords: Decision making for others; Risk taking; Loss aversion; Experiment
Acquiescence: People can explicitly recognize that their intuitive judgment is wrong but, nevertheless, stick with it
Risen JL. Acquiescing to intuition: Believing what we know isn't so. Soc Personal Psychol Compass. 2017;e12358. https://doi.org/10.1111/spc3.12358
Abstract: When people identify an error in their initial judgment, they typically try to correct it. But, in some cases, they choose not to—even when they know, in the moment, that they are being irrational or making a mistake. A baseball fan may know that he cannot affect the pitcher from his living room but still be reluctant to say “no-hitter.” A person may learn that flying in an airplane is statistically safer than driving a car and still refuse to fly. Dual-process models of judgment and decision making often implicitly assume that if an error is detected, it will be corrected. Recent work suggests, however, that models should decouple error detection and correction. Indeed, people can explicitly recognize that their intuitive judgment is wrong but, nevertheless, stick with it, a phenomenon known as acquiescence. My goals are to offer criteria for identifying acquiescence, consider why people acquiesce even when it incurs a cost, discuss how lessons that are learned in cases when acquiescence is clearly identified can be exported to cases when acquiescence may be harder to establish, and, more broadly, describe the implications of a model that decouples error detection and error correction.
Why museum visitors touch the exhibits when they do not have permission to do so
Rehabilitating unauthorised touch or why museum visitors touch the exhibits. Fiona Candlin. The Senses and Society, Volume 12, 2017 - Issue 3, Pages 251-266. http://dx.doi.org/10.1080/17458927.2017.1367485
Abstract: In 2014 The Senses and Society published a special issue on “Sensory Museology.” Registering the emergence of this new multi-disciplinary field, the editor usefully observed that “its most salient trend has been the rehabilitation of touch.” Arguably, however, touch has only been rehabilitated as an area of study insofar as it is authorised by the museum. Scholars have rarely considered the propensity of visitors to touch museum exhibits when they do not have permission to do so. In this article I suggest that the academic emphasis on authorised forms of contact privileges the institution’s aims and perspective. Conversely, researching unauthorised touch places a higher degree of emphasis on the visitors’ motivations and responses, and has the capacity to bring dominant characterisations of the museum into question. I substantiate and work through these claims by drawing on interview-based research conducted at the British Museum, and by investigating why visitors touch the exhibits without permission, what they touch, and what experiences that encounter enables.
Keywords: Sensory museology, touch, museums, exhibits, visitors, vandalism
---
Thus, the visitors touched the objects on display to establish that they were real and not replicas, to find out about the material qualities of an exhibit and the processes by which it was made, and to get a grasp on the skill involved in its manufacture. They also touched to make contact with the past. It is possible that a consciousness of being connected to past eras and peoples is what prompts visitors to touch, but judging from the interviews it seems that this experience is predicated on actual contact. Visitors needed to put their hands into the places that their predecessors touched, or to use their bodies to mimic the shapes of the initial makers and users in order to conceive of, or to bridge the enormous geographical and historical distances that lie between them and the objects’ contexts of production.
[...]
For the visitors, touching the sculptures of animals and humans had a markedly different dynamic to that of touching architectural exhibits such as columns or sarcophagi. It did not provide a connection with the past, rather the representational character of the sculptures outweighed the consideration of who made the carvings, when, and under what conditions. These sculptures were not primarily conceived as products of human endeavor, but as quasi-men, women, and animals.
[...]
A similar logic applied to the way that visitors touched other figures, both clothed and unclothed, and animals. Visitors behaved in ways that were appropriate to the real-life version of that thing, for example, stroking a horse’s nose, but precisely because it is a carving, they were free to push the boundaries of what is acceptable or safe.
A Replication of Thomas Piketty's Data on the Concentration of Wealth in the US. By Richard Sutch
The One Percent across Two Centuries: A Replication of Thomas Piketty's Data on the Concentration of Wealth in the United States. Richard Sutch. Social Science History, Volume 41, Issue 4, Winter 2017 , pp. 587-613. https://doi.org/10.1017/ssh.2017.27
Abstract: This exercise reproduces and assesses the historical time series on the top shares of the wealth distribution for the United States presented by Thomas Piketty in Capital in the Twenty-First Century. Piketty's best-selling book has gained as much attention for its extensive presentation of detailed historical statistics on inequality as for its bold and provocative predictions about a continuing rise in inequality in the twenty-first century. Here I examine Piketty's US data for the period 1810 to 2010 for the top 10 percent and the top 1 percent of the wealth distribution. I conclude that Piketty's data for the wealth share of the top 10 percent for the period 1870 to 1970 are unreliable. The values he reported are manufactured from the observations for the top 1 percent inflated by a constant 36 percentage points. Piketty's data for the top 1 percent of the distribution for the nineteenth century (1810–1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only tenuous information about the trend in inequality during the Gilded Age. The values Piketty reported for the twentieth century (1910–2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. This article offers an alternative picture of the trend in inequality based on newly available data and a reanalysis of the 1870 Census of Wealth. This article does not question Piketty's integrity.
---
Very little of value can be salvaged from Piketty’s treatment of data from the nineteenth century. The user is provided with no reliable information on the antebellum trends in the wealth share and is even left uncertain about the trend for the top 10 percent during the Gilded Age (1870–1916). This is noteworthy because Piketty spends the bulk of his attention devoted to America discussing the nineteenth-century trends (Piketty 2014: 347–50).
The heavily manipulated twentieth-century data for the top 1 percent share, the lack of empirical support for the top 10 percent share, the lack of clarity about the procedures used to harmonize and average the data, the insufficient documentation, and the spreadsheet errors are more than annoying. Together they create a misleading picture of the dynamics of wealth inequality. They obliterate the intradecade movements essential to an understanding of the impact of political and financial-market shocks on inequality. Piketty’s estimates offer no help to those who wish to understand the impact of inequality on “the way economic, social, and political actors view what is just and what is not” (Piketty 2014: 20).
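Sutch's central reconstruction claim — that Piketty's 1870–1970 top-10% wealth shares are simply the top-1% shares shifted up by a constant 36 percentage points — is easy to state in code. A toy sketch with invented top-1% values (the real series is in Sutch's replication files):

```python
# Illustrative (made-up) top-1% wealth shares by year; the point is the
# construction Sutch describes, not the actual data.
top1 = {1870: 0.32, 1910: 0.45, 1950: 0.28, 1970: 0.27}

# Sutch's claim: the published top-10% series was manufactured by adding
# a constant 36 percentage points to each top-1% observation.
top10_reconstructed = {year: share + 0.36 for year, share in top1.items()}
```

A constant offset like this means the top-10% series contains no independent information: every movement in it is, by construction, a movement in the top-1% series.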
Hunting Strategies with Cultivated Plants as Bait and the Prey Pathway to Animal Domestication
Hunting Strategies with Cultivated Plants as Bait and the Prey Pathway to Animal Domestication. Serge Svizzero. International Journal of Research in Sociology and Anthropology, Volume 2, Issue 2, 2016, PP 53-68. http://dx.doi.org/10.20431/2454
Abstract: For various reasons related to human diet, social prestige or cosmology, hunting - especially of large prey - has always been central in foragers' societies. When pre-Neolithic foragers gave up their nomadic way of life, they faced a sink-source problem of game procurement in the resource-catchment area around their settlements. Baiting, by means of the cultivation of wild plants in food plots, may have helped them to attract herbivores, thus improving the return of hunting activities. These foragers were also motivated by the capture of wild animals alive, in order to keep fresh meat for a while, to translocate these animals or for milk exploitation. For this capture, the use of a passive form of drive hunting seems best suited. The cultivation of food plots within the funnel and the corral might have been used to attract wild herbivores into the drive. Baiting was therefore designed either to increase the hunt or to improve the capture of large wild herbivores such as the Near-Eastern wild caprines that were later domesticated. Therefore baiting should be viewed as a hunting strategy as well as an unconscious selection mechanism, since it inadvertently contributed to the prey pathway to animal domestication.
Keywords: Neolithic revolutions, hunter-gatherers, sedentism, animal domestication, unconscious selection, large herbivores, drive hunting, Near East.
Home sharing driving up rents. Evidence from Airbnb in Boston
Is home sharing driving up rents? Evidence from Airbnb in Boston. Keren Horn & Mark Merante. Journal of Housing Economics, Volume 38, December 2017, Pages 14-24. https://doi.org/10.1016/j.jhe.2017.08.002
Abstract: The growth of the sharing economy has received increasing attention from economists. Some researchers have examined how these new business models shape market mechanisms and, in the case of home sharing, economists have examined how the sharing economy affects the hotel industry. There is currently limited evidence on whether home sharing affects the housing market, despite the obvious overlap between these two markets. As a result, policy makers grappling with the effects of the rapid growth of home sharing have inadequate information on which to make reasoned policy decisions. In this paper, we add to the small but growing body of knowledge on how the sharing economy is shaping the housing market by focusing on the short-term effects of the growth of Airbnb in Boston neighborhoods on the rental market, relying on individual rental listings. We examine whether the increasing presence of Airbnb raises asking rents and whether the change in rents may be driven by a decline in the supply of housing offered for rent. We show that a one standard deviation increase in Airbnb listings is associated with an increase in asking rents of 0.4%.
---
Ultimately, our analysis supports the contention that home sharing is increasing rents by decreasing the supply of units available to potential residents
For political candidates, increased audience diversity is associated with reduced ‘Liking’
Trumped by context collapse: Examination of ‘Liking’ political candidates in the presence of audience diversity. Ben Marder. Computers in Human Behavior, https://doi.org/10.1016/j.chb.2017.10.025
Highlights
• Examination of the effect of audience diversity on ‘Liking’ political candidates.
• Survey of 1027 potential voters prior to the 2016 US presidential election.
• Increased audience diversity is associated with a reduced ‘Liking’.
• Increased audience diversity predicts greater social anxiety.
• ‘Non-Likers’ have a more diverse audience than ‘Likers’.
Abstract: Harnessing social media such as Facebook is now considered critical for electoral success. Although Facebook is widely used by the electorate, few have ‘Liked’ the Facebook pages of the political candidates for whom they vote. To provide understanding of this discrepancy, the present paper offers the first investigation on the role of audience diversity on ‘Liking’ behavior, as well as its association with varying degrees of social anxiety that may arise from ‘Liking’ political candidates. A survey of potential voters who used Facebook preceding the 2016 Presidential Election was conducted (N = 1027). Using the lens of Self-Presentation Theory, results found that for those who had not already ‘Liked’ Hillary Clinton or Donald Trump, their intention to do so before the election was negatively associated with the diversity of their Facebook audience. This relationship was mediated by their expected degree of social anxiety from ‘Liking’ the candidate. A comparison of audience diversity of participants who had ‘Liked’ a candidate vs. those who had not ‘Liked’ a candidate also showed that increased audience diversity hinders ‘Liking’. This paper contributes to the knowledge of engagement with politicians through social media as well as the study of audience diversity more generally. Implications for managers are provided.
Keywords: Social media; Politics; Context collapse; Social anxiety; Facebook; Audience diversity
Psychedelic use associated with reduced odds of larceny/theft, assault, arrest for a property crime & for a violent crime
The relationships of classic psychedelic use with criminal behavior in the United States adult population. Peter S Hendricks et al. Journal of Psychopharmacology, https://doi.org/10.1177/0269881117735685
Abstract: Criminal behavior exacts a large toll on society and is resistant to intervention. Some evidence suggests classic psychedelics may inhibit criminal behavior, but the extent of these effects has not been comprehensively explored. In this study, we tested the relationships of classic psychedelic use and psilocybin use per se with criminal behavior among over 480,000 United States adult respondents pooled from the last 13 available years of the National Survey on Drug Use and Health (2002 through 2014) while controlling for numerous covariates. Lifetime classic psychedelic use was associated with a reduced odds of past year larceny/theft (aOR = 0.73 (0.65–0.83)), past year assault (aOR = 0.88 (0.80–0.97)), past year arrest for a property crime (aOR = 0.78 (0.65–0.95)), and past year arrest for a violent crime (aOR = 0.82 (0.70–0.97)). In contrast, lifetime illicit use of other drugs was, by and large, associated with an increased odds of these outcomes. Lifetime classic psychedelic use, like lifetime illicit use of almost all other substances, was associated with an increased odds of past year drug distribution. Results were consistent with a protective effect of psilocybin for antisocial criminal behavior. These findings contribute to a compelling rationale for the initiation of clinical research with classic psychedelics, including psilocybin, in forensic settings.
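The adjusted odds ratios quoted above can be read as percent changes in odds. A minimal helper (hypothetical, not from the paper) makes the conversion explicit:

```python
# Convert an adjusted odds ratio (aOR) and its 95% CI into a percent
# change in odds, using the point estimates quoted in the abstract.
def pct_change_in_odds(aor, lo, hi):
    """Return (point, low, high) percent change in odds implied by an aOR."""
    to_pct = lambda x: round((x - 1.0) * 100.0, 1)
    return to_pct(aor), to_pct(lo), to_pct(hi)

# Larceny/theft: aOR = 0.73 (0.65-0.83), i.e. roughly a 27% reduction in odds
print(pct_change_in_odds(0.73, 0.65, 0.83))  # (-27.0, -35.0, -17.0)
```

Note that a reduction in *odds* is not the same as a reduction in *probability*; for rare outcomes the two are close, but the abstract reports odds ratios only.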
Tuesday, October 17, 2017
How Will China's Industrial Modernization Plan affect Workers? By Boy Luethje
How Will China's Industrial Modernization Plan Affect Workers? By Boy Luethje. East-West Center, Oct 17, 2017. www.eastwestcenter.org
HONOLULU (Oct. 17, 2017) -- Today’s discussions about the future of manufacturing are awash with visions of revolutionary change. Digital technologies are expected to create a “fourth industrial revolution”—a world of seamlessly interconnected “smart factories” driven by artificial intelligence, cloud computing and big data applications.
In line with this thinking, China has developed a master plan to transform its vast manufacturing base from low-cost export production to highly automated advanced manufacturing aimed primarily at the domestic market. The plan was drafted by the Ministry of Industry and Information Technology (MIIT) and was outlined in 2015 in a comprehensive government document titled “Made in China 2025.”
Made in China 2025 gives a strong role to China’s new multinationals in areas such as solar systems, wind turbines, LED, household appliances and, most prominently, telecommunications and advanced Internet services. The plan thus reflects the increased importance of large non-state-owned enterprises as drivers of innovation and marks a substantial change in economic power relations in China.
Serious questions remain, however, for China’s large, low-wage labor force, particularly related to labor markets, the transformation of work and industrial relations. Reforms are needed in areas such as vocational training, human-resource management, wage and incentive systems, appraisal of skills, and workplace safety and privacy. Yet the Ministry of Labor and Social Security, the Ministry of Education, the All China Federation of Trade Unions and other mass organizations have been mostly absent from the drafting and execution of the program.
Some relevant labor laws have been extended in recent years and offer improved protections for workers related to mass layoffs, workplace safety and employment of temporary workers. Current discussions are dominated, however, by demands to discontinue key provisions of the 2008 Labor Contract Law in order to facilitate the massive job reductions underway in state-owned heavy industries and coal mining.
Meanwhile, the Chinese government and research institutions have not provided any valid assessment of the potential labor-market effects of Made in China 2025. The relevant statistics are scattered among various government agencies, making it difficult to assess the labor-market, social-security, training and other implications of the program. Ongoing research on current automation projects and policies clearly indicates that massive job cuts lie ahead. The effect will vary by industry and region:
• In predominantly state-owned manufacturing, such as the automotive industry, the job impacts of digitalization appear to be relatively minor. Many factories are already characterized by high levels of automation, and digital technologies can be introduced gradually.
• Among private Chinese and multinational manufacturers with large, low-wage labor forces, the effects of transformation from labor-intensive to automated manufacturing are potentially much greater. In some model “Internet factories” of home-appliance makers, more than 50 percent of the manufacturing workforce has already been cut.
• Job reductions are potentially highest among labor-intensive small and medium enterprises. Here, relatively simple automation equipment can replace large numbers of semi- and low-skilled workers. A recent study in the city of Dongguan in central Guangdong Province found job reductions of 67 to 85 percent in such companies, often affecting the workers with the best skills and bargaining positions.
The situation in Guangdong Province illustrates the negative effects of top-down industrial policies. Ambitious to become China’s leading region in factory automation, the provincial government has promoted Made in China 2025 with the slogan “Robot-Replaces-Man.” City governments have picked up the message and make the replacement of workers a top criterion in their plans to subsidize the procurement of robots. The issue of job cuts and retraining is mostly ignored because many of the workers who lose their jobs are migrants from other regions.
The Dongguan city government reported that in 2015, the first year of its “Robot-Replaces-Man” program, 1,262 participating companies cut 71,000 jobs. With a working population of more than five million, the local labor market may absorb these job losses for the time being. In the long term, however, serious problems may occur.
Overall, digital technologies have significant potential to change the structure of manufacturing, improve cooperation within production networks and relocate production closer to end markets. For China, digital manufacturing could ease pressures for large-scale urbanization and related problems of labor migration.
Instead of the present top-down approach, industrial policies “from below” could integrate technological upgrading with strategies to develop a skilled workforce and rebalance labor markets. Industrial cities in the Pearl River Delta, for example, could support industrial upgrading by making subsidies for automation equipment conditional upon improvements in working conditions and training of workers. Long-term development of a skilled industrial workforce could be supported by granting permanent residence (called hukou) to migrant workers who graduate from vocational training programs. Last but not least, the provincial and local trade unions could enforce standards of decent work and accelerate the implementation of collective bargaining in privately owned enterprises.
Such approaches exist, but the innovative potential of digital manufacturing to improve conditions for China’s huge workforce remains unexplored due to pressure for short-term profits and the absence of institutional reform. There will most likely be job losses, but the key challenge is to find the right mix of automation and a higher-skilled labor force for long-term growth.
Dr. Boy Luethje is a Professor and Volkswagen Endowed Chair of Industrial Relations and Social Development at Sun Yat-sen University’s School of Government in Guangzhou, China. In January and February 2017, he was a Visiting Scholar at the East-West Center. Dr. Luethje recently published a chapter with co-author Florian Butollo, ‘“Made in China 2025”: Intelligent manufacturing and work,’ in the book The New Digital Workplace: How New Technologies Revolutionise Work, published by Palgrave-Macmillan.
After seeing an agent attain two goals equally often at varying costs, infants expected the agent to prefer the goal it attained through costlier actions
Liu, Shari, Tomer D. Ullman, Josh Tenenbaum, and Elizabeth Spelke. 2017. “Ten-month-old Infants Infer the Value of Goals from the Costs of Actions”. PsyArXiv. October 17. psyarxiv.com/78qd4
Abstract: Infants understand that people pursue goals, but how do they learn which goals people prefer? Here, we test whether infants solve this problem by inverting a mental model of action planning, trading off the costs of acting against the rewards actions bring. After seeing an agent attain two goals equally often at varying costs, infants expected the agent to prefer the goal it attained through costlier actions. These expectations held across three experiments conveying cost through different physical path features (jump height and width; incline angle), suggesting that an abstract variable, such as ‘force’, ‘work’ or ‘effort’, supported infants’ inferences. We model infants' expectations as Bayesian inferences over utility-theoretic calculations, providing a bridge to recent quantitative accounts of action understanding in older children and adults.
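The abstract’s “Bayesian inferences over utility-theoretic calculations” can be illustrated with a toy inverse-planning sketch. This is an illustrative stand-in, not the authors’ model: it assumes a Boltzmann (sigmoid) choice rule and a uniform grid prior over rewards, and shows why observing an agent pay a higher cost implies a higher inferred reward for the goal.

```python
import math

# Toy inverse-planning sketch (illustrative; not the authors' model).
# An agent pays a cost c to reach a goal with unknown reward r. Assume
# P(act) = sigmoid(beta * (r - c)): acting is likelier when r exceeds c.
# Observing the agent act at a higher cost shifts the posterior over r upward.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def posterior_mean_reward(cost_paid, rewards=range(0, 11), beta=1.0):
    """Posterior mean of r (uniform prior on a grid), given the agent acted."""
    likelihoods = [sigmoid(beta * (r - cost_paid)) for r in rewards]
    z = sum(likelihoods)
    return sum(r * l for r, l in zip(rewards, likelihoods)) / z

high = posterior_mean_reward(cost_paid=4.0)  # goal attained via costly action
low = posterior_mean_reward(cost_paid=1.0)   # goal attained via cheap action
print(high > low)  # True: the costlier goal is inferred to be more valuable
```

The same qualitative prediction holds however cost is physically realized (jump height, barrier width, incline angle), which is the abstraction the experiments probe.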
Dramatic pretend play games uniquely improve emotional control in young children
Goldstein TR, Lerner MD. Dramatic pretend play games uniquely improve emotional control in young children. Dev Sci. 2017;e12603. https://doi.org/10.1111/desc.12603
Abstract: Pretense is a naturally occurring, apparently universal activity for typically developing children. Yet its function and effects remain unclear. One theorized possibility is that pretense activities, such as dramatic pretend play games, are a possible causal path to improve children's emotional development. Social and emotional skills, particularly emotional control, are critically important for social development, as well as academic performance and later life success. However, the study of such approaches has been criticized for potential bias and lack of rigor, precluding the ability to make strong causal claims. We conducted a randomized, component control (dismantling) trial of dramatic pretend play games with a low-SES group of 4-year-old children (N = 97) to test whether such practice yields generalized improvements in multiple social and emotional outcomes. We found specific effects of dramatic play games only on emotional self-control. Results suggest that dramatic pretend play games involving physicalizing emotional states and traits, pretending to be animals and human characters, and engaging in pretend scenarios in a small group may improve children's emotional control. These findings have implications for the function of pretense and design of interventions to improve emotional control in typical and atypical populations. Further, they provide support for the unique role of dramatic pretend play games for young children, particularly those from low-income backgrounds.
Generic language encourages people to categorize individuals using a lower evidentiary standard, regardless of negative consequences
When Your Kind Cannot Live Here: How Generic Language and Criminal Sanctions Shape Social Categorization. Deborah Goldfarb et al. Psychological Science, https://doi.org/10.1177/0956797617714827
Abstract: Using generic language to describe groups (applying characteristics to entire categories) is ubiquitous and affects how children and adults categorize other people. Five-year-olds, 8-year-olds, and adults (N = 190) learned about a novel social group that separated into two factions (citizens and noncitizens). Noncitizens were described in either generic or specific language. Later, the children and adults categorized individuals in two contexts: criminal (individuals labeled as noncitizens faced jail and deportation) and noncriminal (labeling had no consequences). Language genericity influenced decision making. Participants in the specific-language condition, but not those in the generic-language condition, reduced the rate at which they identified potential noncitizens when their judgments resulted in criminal penalties compared with when their judgments had no consequences. In addition, learning about noncitizens in specific language (vs. generic language) increased the amount of matching evidence participants needed to identify potential noncitizens (preponderance standard) and decreased participants’ certainty in their judgments. Thus, generic language encourages children and adults to categorize individuals using a lower evidentiary standard regardless of negative consequences for presumed social-group membership.
Typical courses and critical thinking skills acquired during a semester are not sufficient to prompt skepticism about myth statements
Class Dis-Mythed: Exploring the Prevalence and Perseverance of Myths in Upper-Level Psychology Courses. Michael Root and Caroline Stanley. https://www.researchgate.net/publication/317646288_Class_Dis-Mythed_Exploring_the_Prevalence_and_Perseverance_of_Myths_in_Upper-Level_Psychology_Courses
Description: Undergraduates (N = 117) from two mid-sized universities enrolled in one of three psychology courses (Cognitive Psychology, Learning & Memory, or Personality) completed surveys about commonly held psychology myths related to the course in which they were enrolled. Students completed the surveys at the beginning and end of the semester. The purpose of our study was twofold. First, we wanted to measure the prevalence of myth beliefs in undergraduates taking upper-level psychology courses. Second, we wanted to discern whether course content alone (i.e., readings, lectures, class activities, tests, and assignments) was sufficient to disabuse undergraduates of their myth beliefs. Although all three courses had prerequisite psychology courses, beginning-of-semester responses indicated that students in all three classes believed a majority of the myths related to the subject matter of their course. End-of-semester responses indicated that, unless a myth was explicitly debunked in a course (e.g., material in a Learning & Memory course contradicted their belief that people have different learning styles), myth beliefs persisted throughout the semester. Our results suggest that typical course content and any critical thinking skills acquired during a semester are not sufficient to prompt skepticism about myth statements. Instead, we argue that a more effective strategy to dispel common myths that may hinder undergraduates' reasoning and critical thinking skills is for instructors to make undergraduates explicitly aware of these myths and how research fails to support them.
Check also: Dispelling the Myth: Training in Education or Neuroscience Decreases but Does Not Eliminate Beliefs in Neuromyths. Kelly Macdonald et al. Frontiers in Psychology, Aug 10 2017. http://www.bipartisanalliance.com/2017/08/training-in-education-or-neuroscience.html
Peak olfactory acuity is around 9 pm... perhaps to help in finding sexual mates.
The Influence of Circadian Timing on Olfactory Sensitivity. Rachel S Herz, Eliza Van Reen, David Barker, Cassie J Hilditch, Ashten Bartz, Mary A Carskadon. Chemical Senses, bjx067, https://doi.org/10.1093/chemse/bjx067
Abstract: Olfactory sensitivity has traditionally been viewed as a trait that varies according to individual differences but is not expected to change with one’s momentary state. Recent research has begun to challenge this position and time of day has been shown to alter detection levels. Links between obesity and the timing of food intake further raise the issue of whether odor detection may vary as a function of circadian processes. To investigate this question, thirty-seven (21 male) adolescents (M age = 13.7 years) took part in a 28-hr forced-desynchrony (FD) protocol with 17.5 hours awake and 10.5 hours of sleep, for seven FD cycles. Odor threshold was measured using Sniffin’ Sticks six times for each FD cycle (total threshold tests = 42). Circadian phase was determined by intrinsic period derived from dim light melatonin onsets. Odor threshold showed a significant effect of circadian phase, with lowest threshold occurring on average slightly after the onset of melatonin production, or about 1.5° (approximately 21:08 hours). Considerable individual variability was observed; however, peak olfactory acuity never occurred between 80.5° and 197.5° (~02:22–10:10 hours). These data are the first to show that odor threshold is differentially and consistently influenced by circadian timing, and is not a stable trait. Potential biological relevance for connections between circadian phase and olfactory sensitivity are discussed.
Keywords: adolescents, food intake, forced desynchrony, individual differences, odor threshold, trait-state
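The degrees-to-clock-time arithmetic in the abstract can be sketched in a few lines. The melatonin onset time (21:00) and intrinsic period (24.2 h) below are illustrative assumptions, not the study's per-subject estimates:

```python
def phase_to_clock(deg, onset_h=21.0, tau_h=24.2):
    """Map a circadian phase angle (degrees past melatonin onset, with
    360 deg = one intrinsic period tau_h) to a 24-h clock time in hours.
    onset_h and tau_h are hypothetical values for illustration."""
    return (onset_h + deg / 360.0 * tau_h) % 24.0

# ~1.5 deg lands a few minutes after melatonin onset (cf. ~21:08 in the abstract)
t = phase_to_clock(1.5)
print(f"{int(t):02d}:{round((t % 1) * 60):02d}")

# 80.5 deg, the start of the window in which peak acuity never occurred,
# falls in the small hours of the morning (cf. ~02:22 in the abstract)
print(round(phase_to_clock(80.5), 1))
```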
Citizens believe others, especially their political rivals, gravitate toward like-minded news
Public Perceptions of Partisan Selective Exposure. Perryman, Mallory R. The University of Wisconsin - Madison, ProQuest Dissertations Publishing, 2017. 10607943. https://search.proquest.com/openview/20d6e3befcf61455779aebe39b91d29f/1?pq-origsite=gscholar&cbl=18750&diss=y
From the introduction:
This dissertation investigates citizens’ perceptions of where others turn to for news, i.e., perceived exposure. In two empirical studies, I demonstrate how the assumptions that perceivers make about media and the assumptions they make about other people ultimately produce a perception of perceived partisan selective exposure. I test the extent to which citizens believe that they and others engage in selective media habits and examine the cognitive shortcuts that perceivers use to make such assessments. Ultimately, this investigation concludes that citizens believe others, especially their political rivals, gravitate toward like-minded news.
Though this is the first examination of public perceptions of selective exposure, it is not the first study to try and gauge perceptions of others’ media use. Capturing beliefs about others’ media exposure originated with research into perceived media effects, an avenue of research concerned with the ways in which people believe media impact other people. It is easy to see how perceived exposure is a core tenet of perceived media effects research: In order to believe others have been affected by a media message, a perceiver must first assume the others-in-question have been exposed to that message.
Understanding why citizens believe certain others interact with certain media messages thus requires revisiting the basic principles of perceived media effects research: research that explores how individuals make assumptions about other people, about media, and about what happens when other people encounter that media.
Check also: The Myth of Partisan Selective Exposure: A Portrait of the Online Political News Audience. Jacob L. Nelson, and James G. Webster. Social Media + Society, https://doi.org/10.1177/2056305117729314
And: Echo Chamber? What Echo Chamber? Reviewing the Evidence. Axel Bruns. Future of Journalism 2017 Conference. http://www.bipartisanalliance.com/2017/09/echo-chamber-what-echo-chamber.html
And: Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017). Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0000368
And: Consumption of fake news is a consequence, not a cause of their readers’ voting preferences
Kahan, Dan M., Misinformation and Identity-Protective Cognition (October 2, 2017). SSRN, https://ssrn.com/abstract=3046603
And: Fake news and post-truth pronouncements in general and in early human development. Victor Grech. Early Human Development, https://doi.org/10.1016/j.earlhumdev.2017.09.017
And: Science Denial Across the Political Divide -- Liberals and Conservatives Are Similarly Motivated to Deny Attitude-Inconsistent Science. Anthony N. Washburn, Linda J. Skitka. Social Psychological and Personality Science, 10.1177/1948550617731500
And: Biased Policy Professionals. Sheheryar Banuri, Stefan Dercon, and Varun Gauri. World Bank Policy Research Working Paper 8113. https://t.co/Jga1EUEkbF.
And: Dispelling the Myth: Training in Education or Neuroscience Decreases but Does Not Eliminate Beliefs in Neuromyths. Kelly Macdonald et al. Frontiers in Psychology, Aug 10 2017. http://www.bipartisanalliance.com/2017/08/training-in-education-or-neuroscience.html
And: Wisdom and how to cultivate it: Review of emerging evidence for a constructivist model of wise thinking. Igor Grossmann. European Psychologist, in press. Pre-print: https://osf.io/preprints/psyarxiv/qkm6v/
And: Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Caitlin Drummond and Baruch Fischhoff. Proceedings of the National Academy of Sciences, vol. 114 no. 36, pp 9587–9592, doi: 10.1073/pnas.1704882114
And: Expert ability can actually impair the accuracy of expert perception when judging others' performance: Adaptation and fallibility in experts' judgments of novice performers. By Larson, J. S., & Billeter, D. M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(2), 271–288. http://dx.doi.org/10.1037/xlm0000304
And: Bottled Water and the Overflowing Nanny State, by Angela Logomasini. How Misinformation Erodes Consumer Freedom. CEI, February 17, 2009
http://www.bipartisanalliance.com/2009/02/bottled-water-and-overflowing-nanny.html
And: Competing cues: Older adults rely on knowledge in the face of fluency. By Brashier, Nadia M.; Umanath, Sharda; Cabeza, Roberto; Marsh, Elizabeth J.
Psychology and Aging, Vol 32(4), Jun 2017, 331-337. http://www.bipartisanalliance.com/2017/07/competing-cues-older-adults-rely-on.html
Monday, October 16, 2017
Psychopaths are willing and able to disclose their psychopathic traits under conditions of confidentiality, and do not mind doing so
Kelley, S. E., Edens, J. F., Donnellan, M. B., Mowle, E. N. and Sörman, K. (2017), Self- and Informant Perceptions of Psychopathic Traits in Relation to the Triarchic Model. Journal of Personality. Accepted Author Manuscript. doi:10.1111/jopy.12354
Abstract
Objective: The validity of self-report psychopathy measures may be undermined by characteristics thought to be defining features of the construct, including poor self-awareness, pathological lying, and impression management. The current study examined agreement between self- and informant perceptions of psychopathic traits captured by the triarchic model (Patrick, Fowler, & Krueger, 2009) and the extent to which psychopathic traits are associated with socially desirable responding.
Method: Participants were undergraduate roommate dyads (N = 174; Mage = 18.9 years; 64.4% female; 59.8% Caucasian) who completed self- and informant-reports of boldness, meanness, and disinhibition.
Results: Self-reports of psychopathic traits reasonably aligned with the perceptions of informants (rs = .36 - .60) and both predicted various types of antisocial behaviors, although some associations were only significant for monomethod correlations. Participants viewed by informants as more globally psychopathic did not engage in greater positive impression management. However, this response style significantly correlated with self- and informant-reported boldness, suppressing associations with antisocial behavior.
Conclusions: These findings suggest that participants are willing and able to disclose psychopathic personality traits in research settings under conditions of confidentiality. Nonetheless, accounting for response style is potentially useful when using self-report measures to examine the nature and correlates of psychopathic traits.
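The suppression effect described in the Results (positive impression management correlates with boldness and masks its association with antisocial behavior) can be sketched with simulated toy data; every coefficient below is hypothetical, chosen only to reproduce the qualitative pattern:

```python
import random
import statistics as st

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = st.fmean(x), st.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (st.pstdev(x) * st.pstdev(y) * len(x))

def residuals(y, x):
    """Residuals of y after simple OLS regression on x."""
    mx, my = st.fmean(x), st.fmean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

random.seed(0)
n = 20_000
# Toy model: boldness raises both antisocial behavior and positive impression
# management, but impression management lowers *reported* antisocial behavior,
# suppressing the raw boldness-antisociality correlation.
boldness = [random.gauss(0, 1) for _ in range(n)]
impression = [0.5 * b + random.gauss(0, 1) for b in boldness]
antisocial = [0.4 * b - 0.6 * i + random.gauss(0, 1)
              for b, i in zip(boldness, impression)]

r_raw = corr(boldness, antisocial)
r_partial = corr(residuals(boldness, impression),
                 residuals(antisocial, impression))
print(round(r_raw, 2), round(r_partial, 2))  # raw ≈ .09 vs ≈ .34 once the suppressor is controlled
```

Partialling out the suppressor (here via residuals) reveals the larger underlying association, which is the pattern the authors report for boldness once response style is accounted for.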
Higher paternal age at offspring conception increases de novo genetic mutations & the children would be less likely to survive and reproduce
Older fathers' children have lower evolutionary fitness across four centuries and in four populations. Ruben Arslan et al. Proceedings of the Royal Society B: Biological Sciences, Volume 284, Issue 1862, September 13 2017. DOI: 10.1098/rspb.2017.1562
Abstract: Higher paternal age at offspring conception increases de novo genetic mutations. Based on evolutionary genetic theory we predicted older fathers' children, all else equal, would be less likely to survive and reproduce, i.e. have lower fitness. In sibling control studies, we find support for negative paternal age effects on offspring survival and reproductive success across four large populations with an aggregate N > 1.4 million. Three populations were pre-industrial (1670–1850) Western populations and showed negative paternal age effects on infant survival and offspring reproductive success. In twentieth-century Sweden, we found minuscule paternal age effects on survival, but found negative effects on reproductive success. Effects survived tests for key competing explanations, including maternal age and parental loss, but effects varied widely over different plausible model specifications and some competing explanations such as diminishing paternal investment and epigenetic mutations could not be tested. We can use our findings to aid in predicting the effect increasingly older parents in today's society will have on their children's survival and reproductive success. To the extent that we succeeded in isolating a mutation-driven effect of paternal age, our results can be understood to show that de novo mutations reduce offspring fitness across populations and time periods.
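The sibling-control logic behind these estimates can be illustrated with simulated data (all numbers hypothetical): an unobserved family factor that raises both paternal age and offspring fitness distorts the naive pooled estimate, while differencing siblings within families recovers the true negative effect.

```python
import random
import statistics as st

random.seed(1)

def slope(xs, ys):
    """OLS slope of ys on xs."""
    mx, my = st.fmean(xs), st.fmean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Hypothetical model: family "quality" raises both paternal age at conception
# and offspring fitness; paternal age itself lowers fitness by 0.05 per year.
families = []
for _ in range(5000):
    quality = random.gauss(0, 1)
    ages = [25 + 5 * quality + random.uniform(0, 15) for _ in range(2)]  # two siblings
    fits = [quality - 0.05 * a + random.gauss(0, 0.5) for a in ages]
    families.append((ages, fits))

# Naive pooled regression: confounded by family quality (wrong sign here).
naive = slope([a for ages, _ in families for a in ages],
              [f for _, fits in families for f in fits])

# Sibling control: within-family differences cancel the shared family factor.
within = slope([ages[0] - ages[1] for ages, _ in families],
               [fits[0] - fits[1] for _, fits in families])
print(round(naive, 3), round(within, 3))  # naive > 0, within ≈ -0.05
```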
Menarcheal timing is accelerated by favorable nutrition but unrelated to developmental cues of mortality or familial instability
Menarcheal timing is accelerated by favorable nutrition but unrelated to developmental cues of mortality or familial instability in Cebu, Philippines. Moira A. Kyweluka et al. Evolution and Human Behavior, https://doi.org/10.1016/j.evolhumbehav.2017.10.002
Abstract: Understanding the determinants of pubertal timing, particularly menarche in girls, is an important area of investigation owing to the many health, psychosocial, and demographic outcomes related to reproductive maturation. Traditional explanations emphasized the role of favorable nutrition in maturational acceleration. More recently, work has documented early maturity in relation to markers of familial and environmental instability (e.g. paternal absence), which are hypothesized to serve as cues triggering adaptive adjustment of life history scheduling. While these studies hint at an ability of human females to accelerate maturity in stressful environments, most have focused on populations characterized by energetic excess. The present study investigates the role of developmental nutrition alongside cues of environmental risk and instability (maternal absence, paternal absence, and sibling death) as predictors of menarcheal age in a well-characterized birth cohort born in 1983 in metropolitan Cebu, the Philippines. In this sample, which was marked by a near-absence of childhood overweight and obesity, we find that menarcheal age is not predicted by cues of risk and instability measured at birth, during childhood and early adolescence, but that infancy weight gain and measures of favorable childhood nutrition are strong predictors of maturational acceleration. These findings contrast with studies of populations in which psychosocial stress and instability co-occur with excess weight. The present findings suggest that infancy and childhood nutrition may exert greater influence on age at menarche than psychosocial cues in environments characterized by marginal nutrition, and that puberty is often delayed, rather than accelerated, in the context of stressful environments.
Keywords: Life history theory; Puberty; Reproductive timing; Human growth; Fertility milestones
Automated Driving: Use With Caution
Automated driving: Safety blind spots. Ian Y. Noy, David Shinar, William J. Horrey. Safety Science, Volume 102, February 2018, Pages 68–78. https://doi.org/10.1016/j.ssci.2017.07.018
Highlights
• Automated driving has the potential to improve traffic safety in the long term.
• For the foreseeable future, partially automated driving will present unintended consequences.
• Drivers’ role will change and lead to potential confusion or traffic conflicts.
• Human factors research is needed to address new questions of partial automation.
• Integration within the broader cyber-physical world is an emerging challenge.
• This paper identifies areas that require explicit and urgent scientific exploration.
Abstract: Driver assist technologies have reached the tipping point and are poised to take control of most, if not all, aspects of the driving task. Proponents of automated driving (AD) are enthusiastic about its promise to transform mobility and realize impressive societal benefits. This paper is an attempt to carefully examine the potential of AD to realize safety benefits, to challenge widely-held assumptions and to delve more deeply into the barriers that are hitherto largely overlooked. As automated vehicle (AV) technologies advance and emerge within a ubiquitous cyber-physical world they raise additional issues that have not yet been adequately defined, let alone researched. Issues around automation, sociotechnical complexity and systems resilience are well known in the context of aviation and space. There are important lessons that could be drawn from these applications to help inform the development of automated driving. This paper argues that for the foreseeable future, regardless of the level of automation, a driver will continue to have a role. It seems clear that the benefits of automated driving, safety and otherwise, will accrue only if these technologies are designed in accordance with sound cybernetics principles, promote effective human-systems integration and gain the trust of operators and the public.
Keywords: Automated driving; Safety; Driver-vehicle interaction; Psychology; Autonomous vehicles