Bartneck, Christoph, and Hanna Bartneck. 2017. “Interaction of Brand and Taste in the Liking of Cola Drinks by School Children”. PsyArXiv. August 21. https://psyarxiv.com/fzjwv
Abstract: Sugar-sweetened soft drinks play an important role in obesity, and worldwide brands, such as Coca-Cola, spend considerable resources on branding and advertising. As a result, their soft drinks tend to be up to 2.5 times more expensive than store-brand cola drinks. We investigated the effect that the taste and the label have on the liking of cola drinks by school children. Taste did not have any significant effect, but there was a significant interaction effect between the label and the taste. The children preferred cola drinks with their correct labels. A familiar brand with a familiar taste is liked more, and the Coca-Cola company, with its large advertising campaign and market share, takes advantage of this effect.
My comment: Are we sure sodas play an important role in obesity?
Tuesday, September 5, 2017
Individuals with greater science literacy and education have more polarized beliefs on controversial science topics
Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Caitlin Drummond and Baruch Fischhoff. Proceedings of the National Academy of Sciences, vol. 114 no. 36, pp 9587–9592, doi: 10.1073/pnas.1704882114
Significance: Public opinion toward some science and technology issues is polarized along religious and political lines. We investigate whether people with more education and greater science knowledge tend to express beliefs that are more (or less) polarized. Using data from the nationally representative General Social Survey, we find that more knowledgeable individuals are more likely to express beliefs consistent with their religious or political identities for issues that have become polarized along those lines (e.g., stem cell research, human evolution), but not for issues that are controversial on other grounds (e.g., genetically modified foods). These patterns suggest that scientific knowledge may facilitate defending positions motivated by nonscientific concerns.
Abstract: Although Americans generally hold science in high regard and respect its findings, for some contested issues, such as the existence of anthropogenic climate change, public opinion is polarized along religious and political lines. We ask whether individuals with more general education and greater science knowledge, measured in terms of science education and science literacy, display more (or less) polarized beliefs on several such issues. We report secondary analyses of a nationally representative dataset (the General Social Survey), examining the predictors of beliefs regarding six potentially controversial issues. We find that beliefs are correlated with both political and religious identity for stem cell research, the Big Bang, and human evolution, and with political identity alone on climate change. Individuals with greater education, science education, and science literacy display more polarized beliefs on these issues. We find little evidence of political or religious polarization regarding nanotechnology and genetically modified foods. On all six topics, people who trust the scientific enterprise more are also more likely to accept its findings. We discuss the causal mechanisms that might underlie the correlation between education and identity-based polarization.
Keywords: science, literacy, polarization, science communication, science education, trust
---
More at "The tribal nature of the human mind leads people to value party dogma over truth; those with political sophistication, science literacy, numeracy abilities, and cognitive reflection are more affected" http://www.bipartisanalliance.com/2018/02/the-tribal-nature-of-human-mind-leads.html
Seeing faces: The role of brand visual processing and social connection in brand liking
Seeing faces: The role of brand visual processing and social connection in brand liking. Ulrich Orth et al. European Journal of Social Psychology, April 2017, Pages 348–361, doi: 10.1002/ejsp.2245
Abstract: This paper investigates how brands — through visuals — can fill a void for consumers experiencing a lack of social connection. Using psychometric measures and mock advertisements with visuals of human faces and non-faces, Study 1 shows that seeing faces relates to greater brand liking with processing fluency mediating, and individual loneliness and tendency to anthropomorphize moderating the effect. Study 2 replicates findings with other-race faces corroborating that fluency but not ethnic self-referencing underlies the effect. Study 3 complements the psychometric measures of Studies 1 and 2 with eye tracking data to demonstrate that fluency correlates with distinct patterns of attention. Study 4 uses actual brand stimuli to show that effects are robust and extend beyond advertisements. Taken together, the findings show that communicating brand names in conjunction with visuals seen by consumers as human faces can increase brand liking.
Consumers perceive that a product made by mistake is more improbable, and thus, more unique than an intentional one
Made by Mistake: When Mistakes Increase Product Preference. Taly Reich, Daniella Kupor and Rosanna Smith. Journal of Consumer Research, https://doi.org/10.1093/jcr/ucx089
Abstract: Significant literature has demonstrated that mistakes are undesirable and often result in negative inferences about the person or company that made the mistake. Consequently, individuals and companies tend to avoid sharing information about their mistakes with others. However, we find that consumers actually prefer products that were made by mistake to otherwise identical products that were made intentionally. This preference arises because consumers perceive that a product made by mistake is more improbable relative to a product made intentionally, and thus, view the product as more unique. We find converging evidence for this preference in a field study, six experiments, and eBay auction sales. Importantly, this preference holds regardless of whether the mistake enhances or detracts from the product. However, in domains where consumers do not value uniqueness (e.g., utilitarian goods), the preference is eliminated.
Hidden Cost of Insurance on Cooperation: the presumed safeguard against the risk of betrayal may increase the probability of betrayal
van de Calseyde, P. P. F. M., Keren, G., and Zeelenberg, M. (2017) The Hidden Cost of Insurance on Cooperation. Journal of Behavioral Decision Making, doi: 10.1002/bdm.2033.
Abstract: A common solution to mitigate risk is to buy insurance. Employing the trust game, we find that buying insurance against the risk of betrayal has a hidden cost: trustees are more likely to act opportunistically when trustors choose to be insured against the breach of trust. Presumably, trustees are less likely to cooperate when trustors buy insurance because choosing insurance implicitly signals that the trustor expects the trustee to behave opportunistically, paradoxically encouraging trustees not to cooperate. These results shed new light on the potential drawbacks of financial safeguards that are intended to minimize the risky nature of trust taking: the presumed safeguard against the risk of betrayal may, under certain circumstances, increase the probability of betrayal.
Still an Agenda Setter: Traditional News Media and Public Opinion During the Transition From Low to High Choice Media Environments
Djerf-Pierre, M. and Shehata, A. (2017), Still an Agenda Setter: Traditional News Media and Public Opinion During the Transition From Low to High Choice Media Environments. J Commun. doi:10.1111/jcom.12327
Abstract: This study analyzes whether the agenda-setting influence of traditional news media has become weaker over time—a key argument in the “new era of minimal effects” controversy. Based on media content and public opinion data collected in Sweden over a period of 23 years (1992–2014), we analyze both aggregate and individual-level agenda-setting effects on public opinion concerning 12 different political issues. Taken together, ***we find very little evidence that the traditional news media have become less influential as agenda setters. Rather, citizens appear as responsive to issue signals from the collective media agenda today as during the low-choice era***. We discuss these findings in terms of cross-national differences in media systems and opportunity structures for selective exposure.
Women are more compassionate than men, but that does not explain the gender gap in partisanship
Blinder, S. and Rolfe, M. (2017), Rethinking Compassion: Toward a Political Account of the Partisan Gender Gap in the United States. Political Psychology. doi:10.1111/pops.12447
Abstract: Scholarship on the political gender gap in the United States has attributed women's political views to their greater compassion, yet individual-level measures of compassion have almost never been used to directly examine such claims. We address this issue using the only nationally representative survey to include psychometrically validated measures of compassion alongside appropriate political variables. We show that ***even though women are more compassionate in the aggregate than men in some respects, this added compassion does not explain the gender gap in partisanship***. Female respondents report having more tender feelings towards the less fortunate, but these empathetic feelings are not associated with partisan identity. Women also show a slightly greater commitment to a principle of care, but this principle accounts for little of the partisan gap between men and women and has no significant relationship with partisanship after accounting for gender differences in egalitarian political values.
How Dodd-Frank Doubles Down on 'Too Big to Fail'
Two major flaws mean that the act doesn't address problems that led to the financial crisis of 2008.
By Charles W. Calomiris And Allan H. Meltzer
Feb. 12, 2014 6:44 p.m. ET
The Dodd-Frank Act, passed in 2010, mandated hundreds of major regulations to control bank risk-taking, with the aim of preventing a repeat of the taxpayer bailouts of "too big to fail" financial institutions. These regulations are on top of many rules adopted after the 2008 financial crisis to make banks more secure. Yet at a Senate hearing in January, Elizabeth Warren asked a bipartisan panel of four economists (including Allan Meltzer) whether the Dodd-Frank Act would end the problem of too-big-to-fail banks. Every one answered no.
Dodd-Frank's approach to regulating bank risk has two major flaws. First, its standards and rules require regulatory enforcement instead of giving bankers strong incentives to maintain safety and soundness of their own institutions. Second, the regulatory framework attempts to prevent any individual bank from failing, instead of preventing the collapse of the payments and credit systems.
The principal danger to the banking system arises when fear and uncertainty about the value of bank assets induce a widespread refusal by banks to accept each other's short-term debts. Such refusals can lead to a collapse of the interbank payments system, a dramatic contraction of bank credit, and a general loss of confidence by consumers and businesses—all of which can have dire economic consequences. The proper goal is thus to make the banking system sufficiently resilient so that no single failure can result in a general collapse.
Part of the current confusion over regulatory means and ends reflects a mistaken understanding of the Lehman Brothers bankruptcy. The collapse of interbank credit in September 2008 was not the automatic consequence of Lehman's failure.
Rather, it resulted from a widespread market perception that many large banks were at significant risk of failing. This perception didn't develop overnight. It had evolved steadily and visibly over more than two years, while regulators and politicians did nothing.
Citigroup's equity-to-assets ratio, measured in market value—the best single comprehensive measure of a bank's financial strength—fell steadily from about 13% in April 2006 to about 3% by September 2008. And that low value reflected an even lower perception of fundamental asset worth, because the 3% market value included the value of an expected bailout. Lehman's collapse was simply the match in the tinder box. If other banks had been sufficiently safe and sound at the time of Lehman's demise, then the financial system would not have been brought to its knees by a single failure.
To ensure systemwide resiliency, most of Dodd-Frank's regulations should be replaced by measures requiring large, systemically important banks to increase their capacity to deal with losses. The first step would be to substantially raise the minimum ratio of the book value of their equity relative to the book value of their assets.
The Brown-Vitter bill now before Congress (the Terminating Bailouts for Taxpayer Fairness Act) would raise that minimum ratio to 15%, roughly a threefold increase from current levels. Although reasonable people can disagree about the optimal minimum ratio—one could argue that a 10% ratio would be adequate in the presence of additional safeguards—15% is not an arbitrary number.
At the onset of the Great Depression, large New York City banks all maintained more than 15% of their assets in equity, and none of them succumbed to the worst banking system shocks in U.S. history from 1929 to 1932. The losses suffered by major banks in the recent crisis would not have wiped out their equity if it had been equal to 15% of their assets.
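To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 3% and 15% equity ratios come from the article; the loss rates and the normalization of assets to 100 are hypothetical, chosen only to show how a thicker equity cushion absorbs the same losses.

```python
# Back-of-the-envelope illustration: how much of an asset-value loss a bank
# can absorb before its equity is exhausted, at different equity ratios.
# The 3% and 15% ratios appear in the article; loss rates are hypothetical.

def equity_after_losses(assets: float, equity_ratio: float, loss_rate: float) -> float:
    """Equity remaining after asset losses (same units as assets)."""
    return assets * equity_ratio - assets * loss_rate

assets = 100.0  # normalize total assets to 100
for equity_ratio in (0.03, 0.15):
    for loss_rate in (0.02, 0.05, 0.10):
        remaining = equity_after_losses(assets, equity_ratio, loss_rate)
        status = "solvent" if remaining > 0 else "equity wiped out"
        print(f"equity {equity_ratio:.0%}, losses {loss_rate:.0%}: "
              f"remaining equity {remaining:+.1f} ({status})")
```

At a 3% ratio, even a 5% fall in asset values leaves equity negative, while at 15% the same bank survives a 10% loss with a buffer to spare.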
Bankers and their supervisors often find it mutually convenient to understate expected loan losses and thereby overstate equity values. The problem is magnified when equity requirements are expressed relative to "risk-weighted assets," allowing regulators to permit banks' models to underestimate their risks.
This is not a hypothetical issue. In December 2008, when Citi was effectively insolvent, and the market's valuation of its equity correctly reflected that fact, the bank's accounts showed a risk-based capital ratio of 11.8% and a risk-based Tier 1 capital ratio (meant to include only high-quality, equity-like capital) of about 7%. Moreover, factors such as a drop in bank fee income can affect the actual value of a bank's equity, regardless of the riskiness of its loans.
For these reasons, large banks' book equity requirements need to be buttressed by other measures. One is a minimum requirement that banks maintain cash reserves (New York City banks during the Depression maintained cash reserves in excess of 25%). Cash held at the central bank provides protection against default risk similar to equity capital, but it has the advantage of being observable and incapable of being fudged by esoteric risk-modeling.
Several researchers have suggested a variety of ways to supplement simple equity and cash requirements with creative contractual devices that would give bankers strong incentives to make sure that they maintain adequate capital. In the Journal of Applied Corporate Finance (2013), Charles Calomiris and Richard Herring propose debt that converts to equity whenever the market value ratio of a bank's equity is below 9% for more than 90 days. Since the conversion would significantly dilute the value of the stock held by pre-existing shareholders, a bank CEO will have a big incentive to avoid it.
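The conversion rule is simple enough to state as code. A minimal sketch, assuming daily observations of the market-value equity ratio: the 9% threshold and 90-day window are from the Calomiris-Herring proposal as described above, while the function and variable names are illustrative.

```python
# Minimal sketch of the conversion trigger described above: debt converts to
# equity once the bank's market-value equity ratio has stayed below 9% for
# more than 90 consecutive trading days. The threshold and window come from
# the article; the rolling-count logic and names are our own assumptions.

TRIGGER_RATIO = 0.09
TRIGGER_WINDOW_DAYS = 90

def conversion_triggered(daily_equity_ratios):
    """Return True once the ratio has been below TRIGGER_RATIO for more
    than TRIGGER_WINDOW_DAYS consecutive days."""
    consecutive_days_below = 0
    for ratio in daily_equity_ratios:
        consecutive_days_below = consecutive_days_below + 1 if ratio < TRIGGER_RATIO else 0
        if consecutive_days_below > TRIGGER_WINDOW_DAYS:
            return True
    return False

# Example: 30 healthy days followed by 91 days at an 8% ratio trips the trigger.
print(conversion_triggered([0.12] * 30 + [0.08] * 91))  # True
```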
There is plenty of room to debate the details, but the essential reform is to place responsibility for absorbing a bank's losses on banks and their owners. Dodd-Frank institutionalizes too-big-to-fail protection by explicitly permitting bailouts via a "resolution authority" provision at the discretion of government authorities, financed by taxes on surviving banks—and by taxpayers should these bank taxes be insufficient. That provision should be repealed and replaced by clear rules that can't be gamed by bank managers.
Mr. Calomiris is the co-author (with Stephen Haber) of "Fragile by Design: The Political Origins of Banking Crises and Scarce Credit" (Princeton, 2014). Mr. Meltzer is the author of "Why Capitalism?" (Oxford, 2012). They co-direct (with Kenneth Scott) the new program on Regulation and the Rule of Law at the Hoover Institution.
Meritocracy Not Racial Discrimination Explains the Racial Income Gap: An Analysis of NLSY 1979
Kirkegaard, Emil O W. 2017. “Meritocracy Not Racial Discrimination Explains the Racial Income Gap: An Analysis of NLSY 1979”. PsyArXiv. September 5. https://psyarxiv.com/qty3n
Abstract: Socially defined racial groups usually differ in average incomes, though the causes of this are contested. Here the NLSY1979 (National Longitudinal Survey of Youth, n > 11k) was analyzed to examine the relative importance of cognitive ability, market-irrelevant racial discrimination, and cultural self-identification for predicting long-term income (average of 25 years). The use of both self- and other-perceived racial group plausibly allows one to distinguish between effects related to how others perceive one’s racial group vs. how individuals perceive themselves. Results indicated that other-perceived Black and Hispanic racial status was associated with either no difference or slightly higher incomes when cognitive ability was controlled for (betas 0.15 and 0.11, respectively), whereas self-perceived Black status was negatively related to income (beta = -0.24). Other self-perceived racial statuses had no clearly detectable effects.
Results were interpreted as being mainly congruent with meritocracy and inconsistent with market-irrelevant racial discrimination models.
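As a rough illustration of the kind of model the abstract describes, the sketch below regresses standardized income on cognitive ability plus self- and other-perceived race indicators. The file name, column names, and coding are hypothetical placeholders, not the paper's actual code or variables.

```python
# Schematic of the analysis described in the abstract: standardized long-term
# income regressed on cognitive ability and self-/other-perceived race dummies.
# File and column names are hypothetical placeholders, not the paper's code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nlsy79_prepared.csv")  # hypothetical prepared NLSY79 extract

model = smf.ols(
    "z_income ~ z_cognitive_ability"
    " + other_perceived_black + other_perceived_hispanic"
    " + self_identified_black + self_identified_hispanic",
    data=df,
).fit()

print(model.params)  # coefficients comparable to the standardized betas reported
```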
Minimum wage increases increase the likelihood that low-skilled workers in automatable jobs become unemployed
People Versus Machines: The Impact of Minimum Wages on Automatable Jobs. Grace Lordan and David Neumark. NBER Working Paper, August 2017, http://www.nber.org/papers/w23667
Abstract: We study the effect of minimum wage increases on employment in automatable jobs – jobs in which employers may find it easier to substitute machines for people – focusing on low-skilled workers for whom such substitution may be spurred by minimum wage increases. Based on CPS data from 1980-2015, we find that ***increasing the minimum wage decreases significantly the share of automatable employment held by low-skilled workers, and increases the likelihood that low-skilled workers in automatable jobs become unemployed***. The average effects mask significant heterogeneity by industry and demographic group, including substantive adverse effects for older, low-skilled workers in manufacturing. The findings imply that ***groups often ignored in the minimum wage literature are in fact quite vulnerable to employment changes and job loss because of automation following a minimum wage increase.***
A corporate income tax cut can reduce the non-employment rate by up to 7%
Corporate Income Tax, Legal Form of Organization, and Employment. Daphne Chen, Shi Shao Qi and Don Schlagenhauf. Federal Reserve Working Paper, July 2017, https://research.stlouisfed.org/wp/2014/2014-018.pdf
Abstract: A dynamic stochastic occupational choice model with heterogeneous agents is developed to evaluate the impact of a corporate income tax reduction on employment. In this framework, the key margin is the endogenous entrepreneurial choice of the legal form of organization (LFO). A reduction in the corporate income tax burden encourages adoption of the C corporation legal form, which reduces capital constraints on firms. Improved capital re-allocation increases overall productive efficiency in the economy and therefore expands the labor market. Relative to the benchmark economy, a corporate income tax cut can reduce the non-employment rate by up to 7 percent.
Studying the Effect of Hard and Soft News Exposure on Mental Well-Being Over Time
Studying the Effect of Hard and Soft News Exposure on Mental Well-Being Over Time. Mark Boukes and Rens Vliegenthart. Journal of Media Psychology (2017), 29, pp. 137-147. https://doi.org/10.1027/1864-1105/a000224.
Abstract. Following the news is generally understood to be crucial for democracy, as it allows citizens to participate politically in an informed manner; yet one may wonder about the unintended side effects it has on the mental well-being of citizens. Because news focuses on negative and worrisome events in the world, uses framing that evokes a sense of powerlessness, and lacks entertainment value, this study hypothesizes that news consumption decreases mental well-being via negative hedonic experiences; we thereby differentiate between hard and soft news. Using a panel survey in combination with latent growth curve modeling (n = 2,767), we demonstrate that ***the consumption of hard news television programs has a negative effect on the development of mental well-being over time. Soft news consumption, by contrast, has a marginally positive impact on the trend in well-being***. This can be explained by the differential topic focus, framing, and style of soft news vis-à-vis hard news. Investigating the effects of news consumption on mental well-being provides insight into the impact news exposure has on variables other than political ones, which are certainly no less societally relevant.
Keywords: news consumption, mental well-being, hedonic experiences, negativity, hard versus soft news
Trash-talking: Competitive incivility motivates rivalry, performance, and unethical behavior
Trash-talking: Competitive incivility motivates rivalry, performance, and unethical behavior. Jeremy Yip, Maurice Schweitzer & Samir Nurmohamed. Organizational Behavior and Human Decision Processes, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2737109
Abstract: Trash-talking increases the psychological stakes of competition and motivates targets to outperform their opponents. In Studies 1 and 2, participants in a competition who were targets of trash-talking outperformed participants who faced the same economic incentives, but were not targets of trash-talking. Perceptions of rivalry mediate the relationship between trash-talking and effort-based performance. In Study 3, we find that targets of trash-talking were particularly motivated to punish their opponents and see them lose. In Study 4, we identify a boundary condition, and show that trash-talking increases effort in competitive interactions, but incivility decreases effort in cooperative interactions. In Study 5, we find that targets of trash-talking were more likely to cheat in a competition than were participants who received neutral messages. In Study 6, we demonstrate that trash-talking harms performance when the performance task involves creativity. Taken together, our findings reveal that trash-talking is a common workplace behavior that can foster rivalry and motivate both constructive and destructive behavior.
Changes in the Subjectively Experienced Severity of Detention: Exploring Individual Differences
Changes in the Subjectively Experienced Severity of Detention: Exploring Individual Differences. Ellen A. C. Raaijmakers et al. The Prison Journal, https://doi.org/10.1177/0032885517728902
Abstract: A core assumption underlying deterrent sentencing and just deserts theory is that the severity of imprisonment is merely dependent upon its duration. However, empirical research examining how inmates’ subjectively experienced severity of detention (SESD) changes as a function of the length of confinement remains sparse. This study assesses changes in inmates’ SESD over the course of confinement and seeks to explain this process. Multilevel analyses revealed considerable change in the SESD over the course of confinement. Although individual characteristics are related to inmates’ initial SESD, they are not related to their pattern of change in SESD over the course of confinement.
---
"although the SESD (subjectively experienced severity of detention) increases during the first 3 months, it decreases thereafter, stabilizing after 9 months of confinement"