Fang, Dawei and Noe, Thomas H., Less Competition, More Meritocracy? (October 16, 2018). http://dx.doi.org/10.2139/ssrn.3282723
Abstract: Uncompetitive contests for grades, promotions, and job assignments, which feature lax standards or consider only limited talent pools, are often criticized for being unmeritocratic. We show that, when contestants are strategic, lax standards and exclusivity can make selection more meritocratic. Strategic contestants take more risks in more competitive contests. Risk taking reduces the correlation between selection and ability. By reducing the noise engendered by strategic risk taking, dialing down competition can produce outcomes that better conform with the meritocratic ideal of selecting the best and only the best.
Keywords: selection contests, meritocracy, risk taking
JEL Classification: C72, D82, J01
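The mechanism in the abstract (risk taking weakening the link between ability and selection) can be illustrated with a small Monte Carlo sketch. This is not the authors' equilibrium model; risk here is an exogenous parameter rather than a strategic choice, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_accuracy(n_contestants, n_winners, risk, n_trials=2000):
    """Average share of selected contestants who are truly among the n_winners best."""
    total = 0.0
    for _ in range(n_trials):
        ability = rng.normal(size=n_contestants)
        # Observed performance = ability + self-chosen noise ("risk taking").
        performance = ability + risk * rng.normal(size=n_contestants)
        selected = set(np.argsort(performance)[-n_winners:])
        best = set(np.argsort(ability)[-n_winners:])
        total += len(selected & best) / n_winners
    return total / n_trials

# More risk taking weakens the correlation between ability and selection,
# so the selected group contains fewer of the genuinely best contestants.
for risk in (0.0, 0.5, 1.0, 2.0):
    print(f"risk={risk}: accuracy={selection_accuracy(100, 10, risk):.2f}")
```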
Monday, December 31, 2018
Federal Presidents Attacking The Press Thru Illegal Moves or Even Promoting & Using Unconstitutional Laws
7 Presidents Who Were Tougher Than Trump on the Media. Fred Lucas. The Daily Signal, Dec 27 2018. https://www.dailysignal.com/2018/12/27/7-presidents-who-were-tougher-than-trump-on-the-media
[President Lyndon B. Johnson speaks during a press conference. Johnson and political allies tried to blunt media criticism. (Photo: Everett Collection/Newscom)]
The president was frustrated with the media coverage of him and his policies, swearing that 85 percent of all newspapers were against him.
“Our newspapers cannot be edited in the interests of the general public,” the president griped. Then, almost derisively, he said: “Freedom of the press. How many bogies are conjured up by invoking that greatly overworked phrase?”
So, he opted to bypass the traditional media he was convinced was unfair and speak directly to America.
And President Franklin D. Roosevelt’s fireside chats on the radio, beginning in 1933, proved to be a successful political move.
The jury is still out on President Donald Trump’s tweets, though.
Trump regularly tweets about “fake news.” He has doubled down on the view that overly critical news outlets are the “enemy of the American people.”
He has talked about more stringent libel laws to make it easier to sue news organizations and threatened the broadcast licenses of certain networks, and his White House pulled the press pass of CNN personality Jim Acosta after a confrontation at a press conference.
But so far he hasn’t taken government action, as Roosevelt and other past presidents have.
A Trump-appointed federal judge sided with CNN on the Acosta press pass. Congress is unlikely to enact new libel laws, as Supreme Court precedent sets a high standard for a public figure to sue a news outlet.
The Federal Communications Commission lacks the authority to pull a network’s license, because networks are not licensed; its purview extends only to individual stations that operate on the public airwaves, which are licensed. Cable news outlets such as CNN, MSNBC, and Fox News Channel also are not licensed and are not subject to FCC regulation.
Past presidents have taken tangible actions to undermine a free press. Trump has so far taken only a more negative rhetorical tone toward the press, said David Beito, a history professor at the University of Alabama.
“Would he like to do something? He probably would, but a change of tone has been the biggest difference,” Beito told The Daily Signal, characterizing Trump’s rhetorical attacks on the press as more aggressive than those of most of his predecessors.
Roosevelt and Woodrow Wilson were among the biggest presidential offenders during the 20th century, he added.
“Wilson was extremely hostile to any sort of criticism, but it was couched in terms of wartime and the red scare,” Beito said. “Everyone knew Wilson was doing this. FDR was very subtle. Roosevelt was effective working through third parties. It was hard tying him to anything.”
Here are seven examples of presidential administrations that went well beyond rhetoric in going after the press.
1. ‘Thank’ Obama
The Obama administration’s Justice Department launched more leak investigations under the World War I-era Espionage Act than any other administration in history, according to then-New York Times reporter James Risen, writing in a December 2016 op-ed.
The Obama administration targeted Risen with a subpoena to force him to reveal his sources.
In a separate case, the Obama Justice Department named then-Fox News Channel reporter James Rosen as an unindicted co-conspirator. The Justice Department also seized the phone records of Rosen’s parents.
The Obama administration also seized the phone records of Associated Press reporters and editors, covering 20 separate phone lines, including cellular and home lines.
Risen, now with The Intercept, wrote in his op-ed in The New York Times:
If Donald J. Trump decides as president to throw a whistle-blower in jail for trying to talk to a reporter, or gets the FBI to spy on a journalist, he will have one man to thank for bequeathing him such expansive power: Barack Obama. …
Under Mr. Obama, the Justice Department and the FBI have spied on reporters by monitoring their phone records, labeled one journalist an unindicted co-conspirator in a criminal case for simply doing reporting and issued subpoenas to other reporters to try to force them to reveal their sources and testify in criminal cases.
“With Obama, the press often gave him cover,” Beito told The Daily Signal. “Obama often did do things against reporters that were concerning.”
If the Trump administration imposes regulations on social media and internet giants, he added, it could set a precedent for future Democratic presidents who want to regulate more sectors.
A 2013 report from the nonprofit Committee to Protect Journalists compared Obama to former President Richard Nixon for his aggressive probes of leaks to reporters.
2. LBJ on ‘Challenge and Harass’
Talk radio was not a conservative phenomenon in the 1960s, as it became in the 1990s. But President Lyndon B. Johnson—and the Democratic National Committee—took action to suppress the format during his 1964 presidential race.
The Fairness Doctrine, an FCC rule, required broadcasters to air both sides of a controversial issue.
A former CBS News president, Fred Friendly, broke the story in his 1977 book, “The Good Guys, the Bad Guys and the First Amendment,” of how the Democratic National Committee used the rule to target unfriendly broadcasts.
Friendly wrote that “there is little doubt that this contrived scheme had White House approval.”
The DNC delivered a kit to activists explaining “how to demand time under the Fairness Doctrine.” It also mailed out thousands of copies of an article against conservative talk radio published in The Nation, a liberal magazine.
The Democrats also sent thousands of radio stations a letter from DNC counsel Dan Brightman warning that if Democrats were attacked on their programs, the party would demand equal time.
Democrat operative Wayne Phillips was quoted in the Friendly book as saying, “the effectiveness of this operation was in inhibiting the political activity of these right-wing broadcasts.”
Bill Ruder, an assistant secretary of the Commerce Department in the Johnson administration, recalled: “Our massive strategy was to use the Fairness Doctrine to challenge and harass right-wing broadcasters and hope that the challenge would be so costly to them that they would be inhibited and decide it was too expensive to continue.”
3. Nixon and the Fairness Doctrine
Johnson’s successor as president, Richard Nixon, would use similar tactics, particularly in the heat of the Watergate investigation.
The Nixon administration’s FCC threatened the licenses of TV stations owned by The Washington Post Co. and CBS Inc. over aggressive coverage of the Watergate scandal that eventually led Nixon to resign.
Nixon’s White House chief of staff, H.R. Haldeman, targeted individual stations with Fairness Doctrine complaints, according to the Poynter Institute, a journalism research group.
Nixon also kept an “enemies list” that largely included journalists.
The Reagan administration’s FCC did away with the Fairness Doctrine in 1987 and Reagan vetoed subsequent legislation to put the policy in law. This led to the flourishing of conservative talk radio.
4. FDR and ‘Overworked Phrase’
The Roosevelt administration frequently targeted major newspapers, publishers, and journalists for tax audits. The common factor was that these publications or individuals opposed FDR’s New Deal programs, Beito said.
The chief targets included Col. Robert McCormick, owner and publisher of the Chicago Tribune, and press barons Frank Gannett and William Randolph Hearst.
Beito wrote about Roosevelt’s tactics in a piece for Reason, a libertarian magazine.
Roosevelt, during his re-election campaign in 1936, complained that 85 percent of newspapers were against him and the New Deal. In 1938, the president vented:
Our newspapers cannot be edited in the interests of the general public, from the counting room. And I wish we could have a national symposium on that question, particularly in relation to the freedom of the press. How many bogies are conjured up by invoking that greatly overworked phrase?
Sen. Hugo Black, D-Ala., a staunch FDR ally whom the president later would name to the Supreme Court, was chairman of the Special Senate Committee on Lobbying.
The lobbying committee began investigating utility companies, banks, and businesses that opposed the New Deal. Its work eventually turned into a fishing expedition, with subpoenas issued to critics such as Hearst and to unfriendly media outlets, Beito wrote in the Reason article.
A court decision in Hearst’s favor short-circuited the Black committee’s investigation into the telegrams of major businesses that opposed the New Deal, he wrote.
5. Woodrow Wilson’s Committee on Public Information
Not long after the country entered World War I, Wilson wrote the Democratic-controlled House, asking for “authority to exercise censorship over the press to the extent that that censorship is embodied in the recent action of the House of Representatives is absolutely necessary to the public safety.”
Congress turned down Wilson, so the president issued an executive order creating a Committee on Public Information.
The agency employed 75,000 in its speaking division alone, and had separate divisions overseeing foreign language newspapers and films, according to Smithsonian magazine.
This was part of Wilson’s larger effort to control news coverage, Christopher B. Daly wrote last year in the Smithsonian magazine article:
In its crusade to ‘make the world safe for democracy,’ the Wilson administration took immediate steps at home to curtail one of the pillars of democracy—press freedom—by implementing a plan to control, manipulate and censor all news coverage, on a scale never seen in U.S. history. … He waged a campaign of intimidation and outright suppression against those ethnic and socialist papers that continued to oppose the war. Taken together, these wartime measures added up to an unprecedented assault on press freedom.
The federal propaganda agency also established a government-run national newspaper called the Official Bulletin, Daly wrote: “In some respects, it is the closest the United States has come to a paper like the Soviet Union’s Pravda or China’s People’s Daily.”
6. Lincoln and the Civil War
The Civil War was an unparalleled test of the nation and civil liberties. Press freedom not surprisingly took a hit.
President Abraham Lincoln didn’t order the military to shut down pro-Confederate and anti-war newspapers, but turned a blind eye when the Union army did so, according to the magazine Civil War Times.
In the midst of war, pro-Union newspaper publishers generally didn’t speak up for their fellow newspapermen, who were sometimes jailed.
Chiefly, the Union army targeted newspapers in Kentucky, a border state with split loyalties; Virginia, a Confederate state; and Maryland and Missouri, both Union states.
According to the article in the Civil War Times:
At their most unobjectionable level, the safeguards were initially meant to keep secret military information off the telegraph wires and out of the press. But in other early cases censors also prevented the publication of pro-secession sentiments that might encourage border states out of the Union. …
Eventually the military and the government began punishing editorial opposition to the war itself. Authorities banned pro-peace newspapers from the U.S. mails, shut down newspaper offices and confiscated printing materials. They intimidated, and sometimes imprisoned, reporters, editors and publishers who sympathized with the South or objected to an armed struggle to restore the Union.
For the first year of the war, Lincoln left no trail of documents attesting to any personal conviction that dissenting newspapers ought to be muzzled. But neither did he say anything to control or contradict such efforts when they were undertaken, however haphazardly, by his Cabinet officers or military commanders.
7. Adams and the Sedition Act
President John Adams signed the Sedition Act of 1798 to ban “false, scandalous and malicious writing” against Congress or the president and to make it illegal to conspire “to oppose any measure or measures of the government.”
This may be the oldest and best-known clash between a president and the press.
Adams and the Federalist Congress were not tyrants, but rather passed the series of four laws known as the Alien and Sedition Acts out of fear of a pending war with France that never occurred.
Rep. Matthew Lyon of Vermont, who wrote letters to Democratic-Republican newspapers, was the first person tried under the law.
Acting as his own lawyer, Lyon argued that the law wasn’t constitutional. He was convicted nonetheless, and sentenced to four months in prison and a $1,000 fine.
One publisher of a Democratic-Republican newspaper, James Callender, was convicted and jailed for nine months for “false, scandalous, and malicious writing, against the said President of the United States.”
The law expired in early 1801. President Thomas Jefferson, leader of the Democratic-Republican Party, pardoned everyone convicted under the law.
Fred Lucas is the White House correspondent for The Daily Signal and co-host of "The Right Side of History" podcast.
Who Built Maslow’s Pyramid? A History of the Creation of Management Studies’ Most Famous Symbol
Who Built Maslow’s Pyramid? A History of the Creation of Management Studies’ Most Famous Symbol and Its Implications for Management Education. Todd Bridgman, Stephen Cummings and John A Ballard. Academy of Management Learning & Education, Apr 13 2018. https://doi.org/10.5465/amle.2017.0351
Abstract: Abraham Maslow’s theory of motivation, the idea that human needs exist in a hierarchy that people strive progressively to satisfy, is regarded as a fundamental approach to understanding and motivating people at work. It is one of the first and most remembered models encountered by students of management. Despite gaining little support in empirical studies and being criticized for promoting an elitist, individualistic view of management, Maslow’s theory remains popular, its popularity underpinned by its widely-recognized pyramid form. However, Maslow never created a pyramid to represent the hierarchy of needs. We investigated how it came to be and draw on this analysis to call for a rethink of how Maslow is represented in management studies. We also challenge management educators to reflect critically on what are taken to be the historical foundations of management studies and the forms in which those foundations are taught to students.
Sunday, December 30, 2018
If it is easy to remember, then it is not secure: Metacognitive beliefs affect password selection
If it is easy to remember, then it is not secure: Metacognitive beliefs affect password selection. Karlos Luna. Applied Cognitive Psychology, https://doi.org/10.1002/acp.3516
Summary: In this research we applied current theories of metacognition to study computer security and tested the idea that users’ password selection is affected by the metacognitive belief that if a password is memorable, then it is not secure. In two experiments, different types of eight‐character passwords and longer, more secure sentences were presented. Participants rated perceived memorability and perceived security of the passwords and indicated whether they would use them in a critical and in a non‐critical service. The results confirmed the belief. Sentences that are in fact highly secure and perceived as highly memorable were also perceived as weak passwords. The belief strongly affected password selection for critical services, but it had no effect on non‐critical services. In sum, long sentences are a particularly interesting type of password because they meet both security and memorability criteria, but their use is limited by a false belief.
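As a rough illustration of why long sentences can be both memorable and secure, here is a back-of-the-envelope entropy comparison. This is a standard calculation, not the stimuli used in the experiments; the character-set and word-list sizes below are assumptions.

```python
import math

def password_bits(length, charset_size):
    """Entropy (bits) of a uniformly random password of `length` symbols."""
    return length * math.log2(charset_size)

def passphrase_bits(n_words, wordlist_size):
    """Entropy (bits) of a passphrase of `n_words` drawn uniformly from a word list."""
    return n_words * math.log2(wordlist_size)

print(round(password_bits(8, 94), 1))     # 8 printable-ASCII characters: ~52.4 bits
print(round(passphrase_bits(6, 7776), 1)) # 6 words from a 7,776-word list: ~77.5 bits
```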
We observe a natural memory decay pattern where beliefs become less accurate & confidence is reduced as well; & on average, subjects overpay for bets on propositions that they believe in, but underpay for the opposite bets
Subjective beliefs and confidence when facts are forgotten. Igor Kopylov, Joshua Miller. Journal of Risk and Uncertainty, https://link.springer.com/article/10.1007/s11166-018-9295-1
Abstract: Forgetting can be a salient source of uncertainty for subjective beliefs, confidence, and ambiguity attitudes. To investigate this, we run several experiments where people bet on propositions (facts) that they cannot recall with certainty. We use betting preferences to infer subjects’ revealed beliefs and their revealed confidence in these beliefs. Forgetting is induced via interference tasks and time delays (up to one year). We observe a natural memory decay pattern where beliefs become less accurate and confidence is reduced as well. Moreover, we find a form of comparative ignorance where subjects are more ambiguity averse when they cannot recall the truth rather than never having learnt it. In a different vein, we identify an overconfidence pattern: on average, subjects overpay for bets on propositions that they believe in, but underpay for the opposite bets. We formulate a two-signal behavioral model of forgetting that generates all of these patterns. It suggests new testable hypotheses that are confirmed by our data.
The vaunted gatekeeper role of the creative industries is largely mythical; high costs of production have stifled creativity in industries that require ever-bigger blockbusters to cover the losses of ever-more-expensive failures
Digital Renaissance: What Data and Economics Tell Us about the Future of Popular Culture. Joel Waldfogel. Princeton: Princeton Univ Press, 2018. https://www.amazon.com/Digital-Renaissance-Economics-Popular-Culture/dp/0691162824/
The digital revolution poses a mortal threat to the major creative industries—music, publishing, television, and the movies. The ease with which digital files can be copied and distributed has unleashed a wave of piracy with disastrous effects on revenue. Cheap, easy self-publishing is eroding the position of these gatekeepers and guardians of culture. Does this revolution herald the collapse of culture, as some commentators claim? Far from it. In Digital Renaissance, Joel Waldfogel argues that digital technology is enabling a new golden age of popular culture, a veritable digital renaissance.
By reducing the costs of production, distribution, and promotion, digital technology is democratizing access to the cultural marketplace. More books, songs, television shows, and movies are being produced than ever before. Nor does this mean a tidal wave of derivative, poorly produced kitsch; analyzing decades of production and sales data, as well as bestseller and best-of lists, Waldfogel finds that the new digital model is just as successful at producing high-quality, successful work as the old industry model, and in many cases more so. The vaunted gatekeeper role of the creative industries proves to have been largely mythical. The high costs of production have stifled creativity in industries that require ever-bigger blockbusters to cover the losses on ever-more-expensive failures.
Are we drowning in a tide of cultural silt, or living in a golden age for culture? The answers in Digital Renaissance may surprise you.
Saturday, December 29, 2018
Individuals who are in a romantic relationship, have ever had sexual intercourse & oral sex, & who have more frequent & variable sex have more positive body attitudes
A review of research linking body image and sexual well-being. Meghan M. Gillen, Charlotte H.Markey. Body Image, https://doi.org/10.1016/j.bodyim.2018.12.004
Highlights
• We reviewed research on body image and sexual well-being.
• The review focused on Dr. Thomas Cash’s contributions to this area.
• Most research suggests a positive link between body image and sexual well-being.
• We suggest research on new populations using new methods and on positive body image.
Abstract: The link between body image and sexual well-being is intuitive and increasingly supported by psychological research: individuals, particularly women, with greater body satisfaction and body appreciation tend to report more positive sexual experiences. Although both perceptions of one’s body and one’s sexual life are central to most adults’ experiences, this area of research has remained somewhat understudied. In this review, we discuss the findings that are available and suggest directions for future research and applied implications of this work. We highlight Thomas Cash’s contributions to this area of study, given his significant contributions to moving our understanding of body image and sexual well-being forward.
A decrease in population growth lowers firm entry rates; an aging firm distribution fully explains the concentration of employment in large firms, trends in average firm size & exit rates, & the decline in labor’s share of GDP
From Population Growth to Firm Demographics: Implications for Concentration, Entrepreneurship and the Labor Share. Hugo Hopenhayn, Julian Neira, Rish Singhania. NBER Working Paper No. 25382, Dec 2018. https://www.nber.org/papers/w25382
Abstract: The US economy has undergone a number of puzzling changes in recent decades. Large firms now account for a greater share of economic activity, new firms are being created at a slower rate, and workers are getting paid a smaller share of GDP. This paper shows that changes in population growth provide a unified quantitative explanation for these long-term changes. The mechanism goes through firm entry rates. A decrease in population growth lowers firm entry rates, shifting the firm-age distribution towards older firms. Heterogeneity across firm age groups combined with an aging firm distribution replicates the observed trends. Micro data show that an aging firm distribution fully explains i) the concentration of employment in large firms, ii) and trends in average firm size and exit rates, key determinants of the firm entry rate. An aging firm distribution also explains the decline in labor’s share of GDP. In our model, older firms have lower labor shares because of lower overhead labor to employment ratios. Consistent with our mechanism, we find that the ratio of nonproduction workers to total employment has declined in the US.
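A minimal steady-state sketch of the entry-rate mechanism described in the abstract: entry grows with population and incumbents exit at a constant hazard, so slower population growth tilts the firm-age distribution toward older firms. The growth rates and exit hazard below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def firm_age_shares(pop_growth, exit_hazard, max_age=50):
    """Steady-state firm-age distribution when entry grows with population
    and incumbents exit at a constant hazard."""
    ages = np.arange(max_age + 1)
    # A cohort that entered `a` years ago was (1 + pop_growth)**a times smaller
    # at entry than today's entrants and has survived with probability
    # (1 - exit_hazard)**a.
    mass = (1 - exit_hazard) ** ages / (1 + pop_growth) ** ages
    return mass / mass.sum()

for n in (0.02, 0.005):  # higher vs. lower population growth (assumed rates)
    shares = firm_age_shares(pop_growth=n, exit_hazard=0.08)
    print(f"population growth {n:.1%}: firms aged 11+ years = {shares[11:].sum():.1%}")
```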
Is the Link between Pornography Use and Relational Happiness Really More about Masturbation? Results from Two National Surveys
Is the Link between Pornography Use and Relational Happiness Really More about Masturbation? Results from Two National Surveys. Samuel Perry, The Journal of Sex Research, Dec 2018, https://www.researchgate.net/publication/329403709
Abstract: Numerous studies have observed a persistent, and most often negative, association between pornography use and romantic relationship quality. While various theories have been suggested to explain this association, studies have yet to empirically examine whether the observed link between pornography consumption and relationship outcomes has more to do with solo-masturbation than actually watching pornography. The current study draws on two nationally-representative data sets with nearly identical measures to test whether taking masturbation practice into account reduces or nullifies the association between pornography use and relational happiness. Controls are included for sex frequency and satisfaction, depressive symptoms, and other relevant correlates. Results from both the 2012 New Family Structures Study (N=1,977) and 2014 Relationships in America survey (N=10,106) show that masturbation is negatively associated with relational happiness for men and women, while pornography use is either unassociated or becomes unassociated with relational happiness once masturbation is included. Indeed, evidence points to a slight positive association between pornography use and relational happiness once masturbation and gender differences are accounted for. Findings suggest that future studies on this topic should include measures of masturbation practice along with pornography use and that modifications to theories connecting pornography use to relationship outcomes should be considered.
A Meta-Analytic Review of the Association Between Disgust and Prejudice Toward Gay Men (but not gay women)
A Meta-Analytic Review of the Association Between Disgust and Prejudice Toward Gay Men. Mark J. Kiss, Melanie A. Morrison , & Todd G. Morrison. Journal of Homosexuality, https://doi.org/10.1080/00918369.2018.1553349
ABSTRACT: A sizeable number of studies have documented a relationship between heterosexual persons’ experience of disgust (measured as an individual difference variable or induced experimentally) and prejudice toward gay men (i.e., homonegativity). Yet, to date, no one has attempted to meta-analytically review this corpus of research. We address this gap by conducting a meta-analysis of published and unpublished work examining heterosexual men and women’s disgust and their homonegativity toward gay men. Fourteen articles (12 published, two unpublished) containing 17 studies were analyzed (N = 7,322). The average effect size for disgust sensitivity studies was moderate to large (d = 0.64), whereas for disgust induction studies, the effect was large (d = 0.77). No evidence of effect size heterogeneity emerged. Future directions and recommendations for methodological improvements are outlined.
KEYWORDS: Gay men, disgust, prejudice, homonegativity, emotions, meta-analysis
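For readers unfamiliar with the metric, Cohen's d is a standardized mean difference, and a meta-analytic average pools per-study values with some weighting. The sketch below uses hypothetical studies and a simple sample-size weighting; it is not the weighting scheme or the data of this meta-analysis.

```python
# Cohen's d is a standardized mean difference: (mean_A - mean_B) / pooled SD.
# Hypothetical (d, N) pairs -- not the 17 studies analyzed in the paper.
studies = [(0.55, 120), (0.70, 300), (0.62, 90)]

# A simple sample-size-weighted average, one common way to pool effect sizes.
pooled_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(round(pooled_d, 2))  # ~0.65, "moderate to large" by conventional benchmarks
```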
---
The current meta-analysis reveals that disgust is associated with negative attitudes toward gay men. While a number of possible explanations for this association were elucidated, the question remains: Why do heterosexuals who experience or are sensitive to disgust evidence greater prejudice toward gay men but not lesbian women or other minoritized social groups? What is it about gay men—as a social category—that links them to the affective state of disgust? Relatedly, although disgust can be evoked using disparate methods, is there a specific type of disgust induction that is most salient vis-à-vis homonegative attitudes toward gay men? Morrison, Kiss, et al. (in press) noted:
Gay men may be regarded as disgusting because anal intercourse is widely (mis) perceived as a common practice among members of this social category. This behaviour, especially when engaged in receptively, constitutes a nexus of taboos: violation of hegemonic standards of masculinity; a disconcerting proximity to faeces and attendant concerns about germs/disease; and, given its nonprocreative and “base” nature, the capacity to erode the distinction between humans and animals and, hence, undermine our faith in speciesism.
Friday, December 28, 2018
In monkeys & apes the maximax heuristic (risk is ignored for potential gains, however low they may be) is observed while playing lotteries, as can happen in human managerial & financial decision-making
Broihanne, M.-H., Romain, A., Call, J., Thierry, B., Wascher, C. A. F., De Marco, A., . . . Dufour, V. (2018). Monkeys (Sapajus apella and Macaca tonkeana) and great apes (Gorilla gorilla, Pongo abelii, Pan paniscus, and Pan troglodytes) play for the highest bid. Journal of Comparative Psychology, http://dx.doi.org/10.1037/com0000153
Abstract: Many studies investigate the decisions made by animals by focusing on their attitudes toward risk, that is, risk-seeking, risk neutrality, or risk aversion. However, little attention has been paid to the extent to which individuals understand the different odds of outcomes. In a previous gambling task involving 18 different lotteries (Pelé, Broihanne, Thierry, Call, & Dufour, 2014), nonhuman primates used probabilities of gains and losses to make their decision. Although the use of complex mathematical calculation for decision-making seemed unlikely, we applied a gradual decrease in the chances to win throughout the experiment. This probably facilitated the extraction of information about odds. Here, we investigated whether individuals would still make efficient decisions if this facilitating factor was removed. To do so, we randomized the order of presentation of the 18 lotteries. Individuals from 4 ape and 2 monkey species were tested. Only capuchin monkeys differed from others, gambling even when there was nothing to win. Randomizing the lottery presentation order leads all species to predominantly use a maximax heuristic. Individuals gamble as soon as there is at least one chance to win more than they already possess, whatever the risk. Most species also gambled more as the frequency of larger rewards increased. These results suggest optimistic behavior. The maximax heuristic is sometimes observed in human managerial and financial decision-making, where risk is ignored for potential gains, however low they may be. This suggests a shared and strong propensity in primates to rely on heuristics whenever complexity in evaluation of outcome odds arises.
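A minimal sketch of the maximax rule described in the abstract, contrasted with expected value: the maximax player ignores the odds and gambles whenever the best possible payoff beats the current holding. The lottery payoffs and the endowment are made-up numbers for illustration only.

```python
# Lotteries as lists of (probability, payoff) pairs; all numbers are illustrative.
lotteries = {
    "safe":  [(1.0, 4)],
    "risky": [(0.1, 12), (0.9, 0)],
}
endowment = 3  # what the player keeps by declining to gamble (assumed)

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def maximax_gambles(lottery, endowment):
    # Maximax: ignore the odds and gamble whenever the best possible
    # outcome exceeds what you already hold.
    return max(x for _, x in lottery) > endowment

# An expected-value maximizer declines the risky lottery (EV 1.2 < 3),
# but the maximax rule gambles on both.
for name, lottery in lotteries.items():
    print(name, "EV:", expected_value(lottery),
          "maximax gambles:", maximax_gambles(lottery, endowment))
```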
New Studies Show Russian Social-Media Involvement in US Politics, far from being a sophisticated propaganda campaign, was small, amateurish, & mostly unrelated to the 2016 election
New Studies Show Pundits Are Wrong About Russian Social-Media Involvement in US Politics. Aaron Maté. The Nation, Dec 28 2018, https://www.thenation.com/article/russiagate-elections-interference/
Excerpts with no links:
The release of two Senate-commissioned reports has sparked a new round of panic about Russia manipulating a vulnerable American public on social media. Headlines warn that Russian trolls have tried to suppress the African-American vote, promote Green Party candidate Jill Stein, recruit “assets,” and “sow discord” or “hack the 2016 election” via sex-toy ads and Pokémon Go. “The studies,” writes David Ignatius of The Washington Post, “describe a sophisticated, multilevel Russian effort to use every available tool of our open society to create resentment, mistrust and social disorder,” demonstrating that the Russians, “thanks to the Internet…seem to be perfecting these dark arts.” According to Michelle Goldberg of The New York Times, “it looks increasingly as though” Russian disinformation “changed the direction of American history” in the narrowly decided 2016 election, when “Russian trolling easily could have made the difference.”
The reports, from the University of Oxford’s Computational Propaganda Research Project and the firm New Knowledge, do provide the most thorough look at Russian social-media activity to date. With an abundance of data, charts, graphs, and tables, coupled with extensive qualitative analysis, the authors scrutinize the output of the Internet Research Agency (IRA), the Russian clickbait firm indicted by special counsel Robert Mueller in February 2018. On every significant metric, it is difficult to square the data with the dramatic conclusions that have been drawn.
• 2016 Election Content: The most glaring data point is how minimally Russian social-media activity pertained to the 2016 campaign. The New Knowledge report acknowledges that evaluating IRA content “purely based on whether it definitively swung the election is too narrow a focus,” as the “explicitly political content was a small percentage.” To be exact, just “11% of the total content” attributed to the IRA and 33 percent of user engagement with it “was related to the election.” The IRA’s posts “were minimally about the candidates,” with “roughly 6% of tweets, 18% of Instagram posts, and 7% of Facebook posts” having “mentioned Trump or Clinton by name.”
• Scale: The researchers claim that “the scale of [the Russian] operation was unprecedented,” but they base that conclusion on dubious figures. They repeat the widespread claim that Russian posts “reached 126 million people on Facebook,” which is in fact a spin on Facebook’s own guess. “Our best estimate,” Facebook’s Colin Stretch testified to Congress in October 2017, “is that approximately 126 million people may have been served one of these [IRA] stories at some time during the two year period” between 2015 and 2017. According to Stretch, posts generated by suspected Russian accounts showing up in Facebook’s News Feed amounted to “approximately 1 out of 23,000 pieces of content.”
• Spending: Also hurting the case that the Russians reached a large number of Americans is that they spent such a microscopic amount of money to do it. Oxford puts the IRA’s Facebook spending between 2015 and 2017 at just $73,711. As was previously known, about $46,000 was spent on Russian-linked Facebook ads before the 2016 election. That amounts to about 0.05 percent of the $81 million spent on Facebook ads by the Clinton and Trump campaigns combined. A recent disclosure by Google that Russian-linked accounts spent $4,700 on platforms in 2016 only underscores how minuscule that spending was. The researchers also claim that the IRA’s “manipulation of American political discourse had a budget that exceeded $25 million USD.” But that number is based on a widely repeated error that mistakes the IRA’s spending on US-related activities for its parent project’s overall global budget, including domestic social-media activity in Russia. [A quick arithmetic check of these figures follows this list.]
• Sophistication: Another reason to question the operation’s sophistication can be found by simply looking at its offerings. The IRA’s most shared pre-election Facebook post was a cartoon of a gun-wielding Yosemite Sam. Over on Instagram, the best-received image urged users to give it a “Like” if they believe in Jesus. The top IRA post on Facebook before the election to mention Hillary Clinton was a conspiratorial screed about voter fraud. It’s telling that those who are so certain Russian social-media posts affected the 2016 election never cite the posts that they think actually helped achieve that end. The actual content of those posts might explain why.
• Covert or Clickbait Operation? Far from exposing a sophisticated propaganda campaign, the reports provide more evidence that the Russians were actually engaging in clickbait capitalism: targeting unique demographics like African Americans or evangelicals in a bid to attract large audiences for commercial purposes. Reporters who have profiled the IRA have commonly described it as “a social media marketing campaign.” Mueller’s indictment of the IRA disclosed that it sold “promotions and advertisements” on its pages that generally sold in the $25-$50 range. “This strategy,” Oxford observes, “is not an invention for politics and foreign intrigue, it is consistent with techniques used in digital marketing.” New Knowledge notes that the IRA even sold merchandise that “perhaps provided the IRA with a source of revenue,” hawking goods such as T-shirts, “LGBT-positive sex toys and many variants of triptych and 5-panel artwork featuring traditionally conservative, patriotic themes.”
• “Asset Development”: Lest one wonder how promoting sex toys might factor into a sophisticated influence campaign, the New Knowledge report claims that exploiting “sexual behavior” was a key component of the IRA’s “expansive” “human asset recruitment strategy” in the United States. “Recruiting an asset by exploiting a personal vulnerability,” the report explains, “is a timeless espionage practice.” The first example of this timeless espionage practice is of an ad featuring Jesus consoling a dejected young man by telling him: “Struggling with the addiction to masturbation? Reach out to me and we will beat it together.” It is unknown if this particular tactic brought any assets into the fold. But New Knowledge reports that there was “some success with several of these human-activation attempts.” That is correct: The IRA’s online trolls apparently succeeded in sparking protests in 2016, like several in Florida where “it’s unclear if anyone attended”; “no people showed up to at least one,” and “ragtag groups” showed up at others, including one where video footage captured a crowd of eight people. The most successful effort appears to have been in Houston, where Russian trolls allegedly organized dueling rallies pitting a dozen white supremacists against several dozen counter-protesters outside an Islamic center.
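A quick back-of-the-envelope check of the spending share quoted in the “Spending” item above, using only the dollar figures given in the excerpt (the roughly $46,000 in pre-election Russian-linked Facebook ads and the $81 million spent by the two campaigns); this is an illustrative sketch, not independent data.

# Arithmetic check of the spending comparison quoted in the excerpt.
ira_preelection_ads = 46_000     # Russian-linked Facebook ad spending before the 2016 election
campaign_ads = 81_000_000        # combined Clinton and Trump Facebook ad spending
share = ira_preelection_ads / campaign_ads
print(f"{share:.3%}")            # ~0.057%, consistent with the article's "about 0.05 percent"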
Based on all of this data, we can draw this picture of Russian social-media activity: It was mostly unrelated to the 2016 election; microscopic in reach, engagement, and spending; and juvenile or absurd in its content. This leads to the inescapable conclusion, as the New Knowledge study acknowledges, that “the operation’s focus on elections was merely a small subset” of its activity. They qualify that “accurate” narrative by saying it “misses nuance and deserves more contextualization.” Alternatively, perhaps it deserves some minimal reflection that a juvenile social-media operation with such a small focus on elections is being widely portrayed as a seismic threat that may well have decided the 2016 contest.
Doing so leads us to conclusions that have nothing to do with Russian social-media activity, nor with the voters supposedly influenced by it. Take the widespread speculation that Russian social-media posts may have suppressed the black vote. That a Russian troll farm sought to deceive black audiences and other targeted demographics on social media is certainly contemptible. But in criticizing that effort there’s no reason to assume it was successful—and yet that’s exactly what the pundits did. “When you consider the narrow margins by which [Donald Trump] won [Michigan and Wisconsin], and poor minority turnout there, these Russian voter suppression efforts may have been decisive,” former Obama adviser David Axelrod commented. “Black voter turnout declined in 2016 for the first time in 20 years in a presidential election,” The New York Times conspicuously notes, “but it is impossible to determine whether that was the result of the Russian campaign.”
That it is even considered possible that the Russian campaign impacted the black vote displays a rather stunning paternalism and condescension. Would Axelrod, Times reporters, or any of the others floating a similar scenario accept a suggestion that their own votes might be susceptible to silly social-media posts mostly unrelated to the election? If not, what does that tell us about their attitudes toward the people that they presume could be so vulnerable?
Entertaining the possibility that Russian social-media posts impacted the election outcome requires more than just a contemptuous view of average voters. It also requires the abandonment of elementary standards of logic, probability, and arithmetic. We now have corroboration of this judgment from an unlikely source. Just days after the New Knowledge report was released, The New York Times reported that the company had carried out “a secret experiment” in the 2017 Alabama Senate race. According to an internal document, New Knowledge used “many of the [Russian] tactics now understood to have influenced the 2016 elections,” going so far as to stage an “elaborate ‘false flag’ operation” that promoted the idea that the Republican candidate, Roy Moore, was backed by Russian bots. The fallout from the operation has led Facebook to suspend the accounts of five people, including New Knowledge CEO Jonathon Morgan.
The Times discloses that the project had a budget of $100,000, but adds that it “was likely too small to have a significant effect on the race.” A Democratic operative concurs, telling the Times that “it was impossible that a $100,000 operation had an impact.”
The Alabama Senate race cost $51 million. If it was impossible for a $100,000 New Knowledge operation to affect a 2017 state election, then how could a comparable—perhaps even less expensive—Russian operation possibly impact a $2.4 billion US presidential election in 2016?
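To make the scale argument in the preceding paragraph concrete, here is a minimal sketch comparing operation budgets to total election spending. It assumes, as the article does, a Russian effort roughly the size of New Knowledge's $100,000 Alabama project; all figures are those quoted in the excerpt.

# Operation budget as a share of total election spending, using the quoted figures.
alabama_share = 100_000 / 51_000_000            # New Knowledge project vs. 2017 Alabama Senate race
presidential_share = 100_000 / 2_400_000_000    # a comparably sized operation vs. the 2016 presidential race
print(f"Alabama: {alabama_share:.3%}")          # ~0.196% of total spending
print(f"Presidential: {presidential_share:.4%}")  # ~0.0042%, an even smaller fraction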
On top of straining credulity, fixating on barely detectable and trivial social-media content also downplays myriad serious issues. As the journalist Ari Berman has tirelessly pointed out, the 2016 election was “the first presidential contest in 50 years without the full protections of the [Voting Rights Act],” one that was conducted amid “the greatest rollback of voting rights since the act was passed” in 1965. Rather than ruminating over whether they were duped by Russian clickbait, reporters who have actually spoken to black Midwest voters have found that political disillusionment amid stagnant wages, high inequality, and pervasive police brutality led many to stay home.
And that leads us to perhaps a key reason why elites in particular are so fixated on the purported threat of Russian meddling: It deflects attention from their own failures, and the failings of the system that grants them status as elites. During the campaign, corporate media outlets handed Donald Trump billions of dollars worth of air time because, in the words of the now ousted CBS exec Les Moonves: “It may not be good for America, but it’s damn good for CBS…. The money’s rolling in and this is fun.” Not wanting to interrupt the fun, these outlets have every incentive to breathlessly cover Russiagate and amplify comparisons of stolen Democratic Party e-mails and Russian social-media posts to Pearl Harbor, 9/11, Kristallnacht, and “cruise missiles.”
Having lost the presidential election to a reality TV host, the Democratic Party leadership is arguably the most incentivized to capitalize on the Russia panic. They continue to oblige. Like clockwork, former Clinton campaign manager Robby Mook seized on the new Senate studies to warn that “Russian operatives will try to divide Democrats again in the 2020 primary, making activists unwitting accomplices.” By “unwitting accomplices,” Mook is presumably referring to the progressive Democrats who have protested the DNC leadership’s collusion with the Clinton campaign and bias against Bernie Sanders in the 2016 primary. Mook is following a now familiar Democratic playbook: blaming Russia for the consequences of the party elite’s own actions. When an uproar arose over Trump campaign data firm Cambridge Analytica in early 2018, Hillary Clinton was quoted posing what she dubbed the “real question”: “How did the Russians know how to target their messages so precisely to undecided voters in Wisconsin, or Michigan, or Pennsylvania?”
In fact, the Russians spent a grand total of $3,102 in these three states, with the majority of that paltry sum not even during the general election but during the primaries, and the majority of the ads were not even about candidates but about social issues. The total number of times ads were targeted at Wisconsin (54), Michigan (36), Pennsylvania (25) combined is less than the 152 times that ads were targeted at the blue state of New York. Wisconsin and Michigan also happen to be two states that Clinton infamously, and perilously, avoided visiting in the campaign’s final months.
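The state-targeting comparison above is simple addition; a minimal check using only the counts quoted in the excerpt:

# Number of times ads were targeted at each state, as quoted above.
targets = {"Wisconsin": 54, "Michigan": 36, "Pennsylvania": 25}
print(sum(targets.values()))          # 115 targetings across the three decisive states combined
print(sum(targets.values()) < 152)    # True: fewer than the 152 targetings of New York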
The utility of Russia-baiting goes far beyond absolving elites of responsibility for their own failures. Hacked documents have recently revealed that a UK-government charity has waged a global propaganda operation in the name of “countering Russian disinformation.” The project, known as the Integrity Initiative, is run by military intelligence officials with funding from the British Foreign Office and other government sources, including the US State Department and NATO. It works closely with “clusters” of sympathetic journalists and academics across the West, and has already been outed for waging a social-media campaign against Labour leader Jeremy Corbyn. The group’s Twitter account promoted articles that painted Corbyn as a “useful idiot” in support of “the Kremlin cause”; criticized his communications director, Seumas Milne, for his alleged “work with the Kremlin agenda”; and said, “It’s time for the Corbyn left to confront its Putin problem.”
The Corbyn camp is far from the only progressive force to be targeted with this smear tactic. That it is revealed to be part of a Western government–backed operation is yet another reason to consider the fixation with Russian social-media activity in a new light. There is no indication that the disinformation spread by employees of a St. Petersburg troll farm has had a discernible impact on the US electorate. The barrage of claims to the contrary is but one element of an infinitely larger chorus from failed political elites, sketchy private firms, shadowy intelligence officials, and credulous media outlets that inculcates the Western public with fears of a Kremlin “sowing discord.” Given how divorced the prevailing alarm is from the actual facts—and the influence of those fueling it—we might ask ourselves whose disinformation is most worthy of concern.
Aaron Maté is a host/producer for The Real News.
Clear judgments based on unclear evidence: Person evaluation is strongly influenced by untrustworthy gossip, even if already marked as dubious
Baum, J., Rabovsky, M., Rose, S. B., & Abdel Rahman, R. (2018). Clear judgments based on unclear evidence: Person evaluation is strongly influenced by untrustworthy gossip. Emotion, http://dx.doi.org/10.1037/emo0000545
Abstract: Affective information about other people’s social behavior may prejudice social interactions and bias person judgments. The trustworthiness of person-related information, however, can vary considerably, as in the case of gossip, rumors, lies, or “fake news.” Here, we investigated how spontaneous person likability and explicit person judgments are influenced by trustworthiness, employing event-related potentials as indices of emotional brain responses. Social–emotional information about the (im)moral behavior of previously unknown persons was verbally presented as trustworthy fact (e.g., “He bullied his apprentice”) or marked as untrustworthy gossip (by adding, e.g., allegedly), using verbal qualifiers that are frequently used in conversations, news, and social media to indicate the questionable trustworthiness of the information and as a precaution against wrong accusations. In Experiment 1, spontaneous likability, deliberate person judgments, and electrophysiological measures of emotional person evaluation were strongly influenced by negative information yet remarkably unaffected by the trustworthiness of the information. Experiment 2 replicated these findings and extended them to positive information. Our findings demonstrate a tendency for strong emotional evaluations and person judgments even when they are knowingly based on unclear evidence.
Men pursuing a relatively slow life history strategy produced higher quality ejaculates, reflecting resource allocation decisions for greater parenting effort, as opposed to greater mating effort
Barbaro, N., Shackelford, T. K., Holub, A. M., Jeffery, A. J., Lopes, G. S., & Zeigler-Hill, V. (2018). Life history correlates of human (Homo sapiens) ejaculate quality. Journal of Comparative Psychology. http://dx.doi.org/10.1037/com0000161
Abstract: Life history strategies reflect resource allocation decisions, which manifest as physiological, psychological, and behavioral traits. We investigated whether human ejaculate quality is associated with indicators of relatively fast (greater resource allocation to mating effort) or slow (greater resource allocation to parenting effort) life history strategies in a test of two competing hypotheses: (a) The phenotype-linked fertility hypothesis, which predicts that men pursuing a relatively fast life history strategy will produce higher quality ejaculates, and (b) the cuckoldry-risk hypothesis, which predicts that men pursuing a relatively slow life history strategy will produce higher quality ejaculates. Men (n = 41) completed a self-report measure assessing life history strategy and provided two masturbatory ejaculate samples. Results provide preliminary support for the cuckoldry-risk hypothesis: Men pursuing a relatively slow life history strategy produced higher quality ejaculates. Ejaculate quality may therefore reflect resource allocation decisions for greater parenting effort, as opposed to greater mating effort. The findings contribute informative data on correlations between physiological and phenotypic indicators of human life history strategies.
Loud and unclear: Intense real-life vocalizations during affective situations are perceptually ambiguous and contextually malleable
Atias, D., Todorov, A., Liraz, S., Eidinger, A., Dror, I., Maymon, Y., & Aviezer, H. (2018). Loud and unclear: Intense real-life vocalizations during affective situations are perceptually ambiguous and contextually malleable. Journal of Experimental Psychology: General. http://dx.doi.org/10.1037/xge0000535
Abstract: A basic premise of emotion theories is that experienced feelings (whether specific emotions or broad valence) are expressed via vocalizations in a veridical and clear manner. By contrast, functional–contextual frameworks, rooted in animal communication research, view vocalizations as contextually flexible tools for social influence, not as expressions of emotion. Testing these theories has proved difficult because past research relied heavily on posed sounds which may lack ecological validity. Here, we test these theories by examining the perception of human affective vocalizations evoked during highly intense, real-life emotional situations. In Experiment 1a, we show that highly intense vocalizations of opposite valence (e.g., joyous reunions, fearful encounters) are perceptually confusable and their ambiguity increases with higher intensity. In Experiment 1b, we use authentic lottery winning reactions and show that increased hedonic intensity leads to lower, not higher valence. In Experiment 2, we demonstrate that visual context operates as a powerful mechanism for disambiguating real-life vocalizations, shifting perceived valence categorically. These results suggest affective vocalizations may be inherently ambiguous, demonstrate the role of intensity in driving affective ambiguity, and suggest a critical role for context in vocalization perception. Together, these findings challenge both basic emotion and dimensional theories of emotion expression and are better in line with a functional–contextual account which is externalist and by definition, context dependent.
Great apes possess intuitive statistical capacities on a par with those of human infants, which suggests that our statistical abilities are founded on an evolutionarily ancient capacity shared with living relatives
Some apes possess a sensitivity towards probabilistic information & are able to reason about never before experienced single-events and to draw intuitive inferences from population to randomly drawn samples:
The evolutionary roots of intuitive statistics. Johanna Eckert. PhD Dissertation, Georg-August-Universität Göttingen, 2018, https://d-nb.info/1172970726/34
Abstract
Intuitive statistical reasoning is the capacity to draw intuitive probabilistic inferences based on an understanding of the relations between populations, sampling processes and resulting samples. This capacity is fundamental to our daily lives and one of the hallmarks of human thinking. We constantly use sample observations to draw general conclusions about the world, use these generalizations to predict what will happen next and to make rational decisions under uncertainty. Historically, statistical reasoning was thought to develop late in ontogeny, to be biased by general-purpose heuristics throughout adulthood, and to be restricted to certain situations and specific types of information. In the last decade, however, evidence has accumulated from developmental research showing that even pre-verbal infants can reason from populations of items to randomly drawn samples and vice versa. Moreover, infants can flexibly integrate knowledge from different cognitive domains (such as physical or psychological knowledge) into their statistical inferences. This indicates that neither language nor mathematical education are prerequisites for intuitive statistical abilities. Beyond that, recent comparative research suggests that basic forms of such capacities are not uniquely human: Rakoczy et al. (2014) presented nonhuman great apes with two populations with different proportions of preferred to non-preferred food items. Apes were able to infer which population was more likely to lead to a preferred food item as a randomly drawn sample. Hence, just like human infants, great apes can reason from population to sample, giving a first hint that human statistical abilities may be based on an evolutionary ancient capacity.
The aim of the present dissertation is to explore the evolutionary roots of intuitive statistics more systematically and comprehensively by expanding on the initial findings of Rakoczy et al. (2014). I examined three questions regarding the i) generality and flexibility of nonhuman great apes' statistical capacities, ii) their cognitive structures and limits, as well as iii) their interaction with knowledge from other cognitive domains. To address these questions, I conducted three studies applying variants of the paradigm established by Rakoczy et al. (2014).
In the first study, zoo-living great apes (Pan troglodytes, Pan paniscus, Pongo abelii, Gorilla gorilla) were required to infer from samples to populations of food items: Apes were presented with two covered populations and witnessed representative multi-item samples being drawn from these populations. Subsequently, subjects could choose which population they wanted to receive as a reward. I found that apes' statistical abilities in this direction were more restricted than vice versa. However, these limitations were potentially due to accessory task demands rather than limitations in statistical reasoning. The second study was designed to gain deeper insights into the cognitive structure of intuitive statistics in chimpanzees and humans. More specifically, I tested sanctuary-living chimpanzees and human adults in a task requiring inferences from population to sample and I systematically varied the magnitude of difference between the populations' ratios (the ratio of ratios, ROR). I discovered that the statistical abilities of both chimpanzees and human adults varied as a function of the ROR and thus followed Weber's law. This suggests that intuitive statistics are based on the analogue magnitude system, an evolutionary ancient cognitive mechanism common to many aspects of quantitative cognition. The third study investigated whether chimpanzees consider knowledge about others' mental states when drawing statistical inferences. I tested sanctuary-living chimpanzees in a task that required subjects to infer which of two populations was more likely to lead to a desired outcome for the subject. I manipulated whether the experimenters had preferences to draw certain food types or acted neutrally and whether they had visual access to the populations while sampling or drew blindly. Chimpanzees chose based on proportional information alone when they had no information about experimenters’ preferences and (to a lesser extent) when experimenters had preferences for certain food types but drew blindly. By contrast, when biased experimenters had visual access, subjects ignored statistical information and instead chose based on experimenters’ preferences. Consistent with recent findings on pre-verbal infants, apes seem to have a random sampling assumption that can be overridden under the appropriate circumstances and they are able to use information about others' mental states to judge whether this is necessary.
Taken together, the findings of the present dissertation indicate that nonhuman great apes possess intuitive statistical capacities on a par with those of human infants. Therefore, intuitive statistics antedate language and mathematical thinking not only ontogenetically, but also phylogenetically. This suggests that humans' statistical abilities are founded on an evolutionary ancient capacity shared with our closest living relatives.
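The second study described in the abstract above frames discrimination between two populations in terms of the ratio of ratios (ROR) and Weber's law. The dissertation's actual task parameters and model are not given in the abstract, so the following is only a hedged sketch of the idea: compute each population's ratio of preferred to non-preferred items, take the ratio of those ratios, and let a Weber/Fechner-style psychometric function (an assumption here, not the author's model) map the log ROR onto the probability of choosing the better population.

# Hedged illustration of ratio-of-ratios (ROR) discrimination under Weber's law.
# The psychometric function and its noise parameter are hypothetical.
import math

def ror(pref_a, nonpref_a, pref_b, nonpref_b):
    # Ratio of ratios: how much more favorable population A is than population B.
    return (pref_a / nonpref_a) / (pref_b / nonpref_b)

def p_choose_better(ror_value, weber_noise=0.5):
    # Toy assumption: discriminability scales with the log of the ROR,
    # saturating toward 1 as the two proportions become easier to tell apart.
    return 1.0 / (1.0 + math.exp(-math.log(ror_value) / weber_noise))

print(p_choose_better(ror(80, 20, 20, 80)))   # large ROR (16): near-ceiling choice accuracy
print(p_choose_better(ror(55, 45, 45, 55)))   # small ROR (~1.5): performance closer to chance

On this reading, discrimination depends on the relative difference between the two proportions rather than on absolute set sizes, in line with the ratio dependence characteristic of the analogue magnitude system the abstract refers to.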
Surnames with initials farther from the beginning of the alphabet were associated with less distinction & satisfaction in high school, lower educational attainment, more military service & less attractive first jobs
Cauley, Alexander and Zax, Jeffrey S., Alphabetism: The Effects of Surname Initial and the Cost of Being Otherwise Undistinguished (October 24, 2018). http://dx.doi.org/10.2139/ssrn.3272556
Abstract: A small literature demonstrates that names are economically relevant. However, this is the first paper to examine the relationship between surname initial rank and male life outcomes, including human capital investments and labor market experiences. Surnames with initials farther from the beginning of the alphabet were associated with less distinction and satisfaction in high school, lower educational attainment, more military service and less attractive first jobs. These effects were concentrated among men who were undistinguished by cognitive ability or appearance, and, for them, may have persisted into middle age. They suggest that ordering is important and that over-reliance on alphabetical orderings can be harmful.
Keywords: alphabetism, surname initial, rank effects, ordered search, anthroponomastics, socio-onomastics
JEL Classification: D63, I31, J19, J71
Thursday, December 27, 2018
Cultivation theory holds that the media is responsible for the public’s crime trend perceptions; but it is likely that the cultivation effect of media has been overstated in the previous research
Media consumption and crime trend perceptions: a longitudinal analysis. Luzi Shi, Sean Patrick Roche & Ryan M. McKenna. Deviant Behavior, https://doi.org/10.1080/01639625.2018.1519129
ABSTRACT: For over two decades, despite the downward crime trend, the American public has persisted in believing crime is on the rise. Cultivation theory holds that the media is responsible for the public’s crime trend perceptions. Previous cultivation studies heavily rely on cross-sectional data, which may lead to spurious conclusions due to reverse causation and omitted variable bias. This study aims to address these issues by utilizing longitudinal analyses. Drawing on three waves of the 2008–2009 American National Election Survey, we test the cultivation hypothesis using traditional OLS, OLS with lagged crime trend perceptions, fixed effects, and dynamic panel models. Newspaper and TV news consumption are related to crime trend perceptions in traditional OLS models. In other models, media consumption is not related to crime trend perceptions. The results do not support the cultivation hypothesis. It is likely that the cultivation effect of media has been overstated in the previous cross-sectional research.
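The abstract names four specifications: traditional (pooled) OLS, OLS with lagged crime-trend perceptions, fixed effects, and a dynamic panel model. The exact variables and estimators are not reported in the abstract, so the sketch below is only a rough illustration of the first three, using hypothetical column names (id, wave, crime_perception, tv_news, newspaper); a dynamic panel estimator such as Arellano-Bond would need a specialized package and is omitted.

# Hedged sketch of the kinds of panel specifications described in the abstract.
# Column names are hypothetical, not the study's variables.
import pandas as pd
import statsmodels.formula.api as smf

def run_models(df: pd.DataFrame):
    # 1) Pooled ("traditional") OLS: perceptions regressed on media consumption.
    pooled = smf.ols("crime_perception ~ tv_news + newspaper", data=df).fit()

    # 2) OLS adding lagged crime-trend perceptions from the previous survey wave.
    df = df.sort_values(["id", "wave"]).copy()
    df["lag_perception"] = df.groupby("id")["crime_perception"].shift(1)
    lagged = smf.ols("crime_perception ~ tv_news + newspaper + lag_perception",
                     data=df.dropna(subset=["lag_perception"])).fit()

    # 3) Respondent fixed effects via the within transformation (demean by person).
    within = df[["crime_perception", "tv_news", "newspaper"]].groupby(df["id"]).transform(
        lambda x: x - x.mean())
    fe = smf.ols("crime_perception ~ tv_news + newspaper", data=within).fit()

    return pooled, lagged, fe

If media consumption predicts perceptions only in the pooled model and the association disappears once lags or fixed effects absorb stable differences between respondents, the pattern would match the paper's conclusion that the cross-sectional cultivation effect is overstated.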
Hospital Readmissions Reduction Program & rising deaths: Why are policies that profoundly influence patient care not rigorously studied before widespread rollout?
Did This Health Care Policy Do Harm? Rishi K. Wadhera, Karen E. Joynt Maddox and Robert W. Yeh. The New York Times, Dec 21 2018. https://www.nytimes.com/2018/12/21/opinion/did-this-health-care-policy-do-harm.html
A well-intentioned program created by the Affordable Care Act may have led to patient deaths.
Excerpts with almost no supplementary data. Check the original for the full text with links:
The authors are cardiologists and health policy researchers.
No patient leaves the hospital hoping to return soon. But a decade ago, one in five Medicare patients who were hospitalized for common conditions ended up back in the hospital within 30 days. Because roughly half of those cases were thought to be preventable, reducing hospital readmissions was seen by policymakers as a rare opportunity to improve the quality of care while reducing costs.
In 2010, the federal agency that oversees Medicare, the Centers for Medicare and Medicaid Services, established the Hospital Readmissions Reduction Program under the Affordable Care Act. Two years later, the program began imposing financial penalties on hospitals with high rates of readmission within 30 days of a hospitalization for pneumonia, heart attack or heart failure, a chronic condition in which the heart has difficulty pumping blood to the body.
At first, the reduction program seemed like the win-win that policymakers had hoped for. Readmission rates declined nationwide for target conditions. Medicare saved an estimated $10 billion because of the reduction in hospital admissions. Based on those results, many policymakers have called for expanding the program.
But a deeper look at the Hospital Readmissions Reduction Program reveals a few troubling trends. First, since the policy has been in place, patients returning to a hospital are more likely to be cared for in emergency rooms and observation units. This has raised concern that some hospitals may be avoiding readmissions, even for patients who would benefit most from inpatient care.
Second, safety-net hospitals with limited resources have been disproportionately punished by the program because they tend to care for more low-income patients who are at much higher risk of readmission. Financially penalizing these resource-poor hospitals may impede their ability to deliver good care.
Finally, and most concerning, there is growing evidence that while readmission rates are falling, death rates may be rising.
In a new study [https://jamanetwork.com/journals/jama/fullarticle/2719307] of approximately eight million Medicare patients hospitalized between 2005 and 2015 that we conducted with other colleagues, we found that the Hospital Readmissions Reduction Program was associated with an increase in deaths within 30 days of discharge among patients hospitalized for heart failure or pneumonia, though not for a heart attack.
The study [...] found that although post-discharge deaths for patients with heart failure were increasing in the years before the program, the trend accelerated after the program was established. Death rates following a pneumonia hospitalization were stable before the Hospital Readmissions Reduction Program, but increased after the program began.
For both conditions, the increase in deaths after the program was concentrated in those patients who had not been readmitted to the hospital after discharge. If we assume that the program was directly responsible for these increases in mortality and that prior trends would have continued unabated, the program may have resulted in 10,000 more deaths among patients with heart failure and pneumonia. [A rough illustration of this counterfactual arithmetic follows the excerpt.]
Our findings build upon smaller-scale studies by independent research groups that have also shown that the program was associated with an increase in post-discharge death [...].
How might this have happened? Though policymakers assumed that reductions in readmissions under the program were solely due to improvements in quality of care, our findings suggest otherwise. It is possible that some hospitals treated patients in the emergency room or in an observation unit when they would have benefited most from an inpatient readmission. It is also possible that shifting clinicians’ focus to readmissions distracted them from working to reduce mortality, since the readmissions penalties are over 10 times higher than the financial penalties for high death rates.
We don’t know exactly why we see the patterns we do. And another recent study reported that although deaths after discharge were increasing for heart failure and pneumonia, they did not accelerate under the program. They argue that other changes could have been responsible for the trend, such as an increase in the medical complexity of patients who were admitted to the hospital.
While the problem is complex, the short-term answer is simple — err on the side of caution. Further expansion of the program, from six conditions to all conditions warranting hospitalization, as some policymakers have advocated, makes little sense given legitimate concerns our study and others raise about its repercussions.
In the long term, the Centers for Medicare and Medicaid Services should conduct an investigation into the patterns we and others report. All possibilities should be considered, from coding changes to inappropriately turning patients away from the emergency room to changes in risk factors among Medicare patients. The agency must also engage physicians and patients to understand how this program has influenced “on the ground” care.
More broadly, this continuing debate about the Hospital Readmissions Reduction Program highlights a bigger issue: Why are policies that profoundly influence patient care not rigorously studied before widespread rollout?
[...]
Rishi K. Wadhera is a cardiology fellow at Harvard Medical School. Karen E. Joynt Maddox is a cardiologist at the Washington University School of Medicine in St. Louis. Robert W. Yeh is a cardiologist and director of the Smith Center for Outcomes Research in Cardiology at Beth Israel Deaconess Medical Center.
A version of this article appears in print on Dec. 22, 2018, on Page A23 of the New York edition with the headline: A Harmful Health Care Policy?.
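The 10,000-deaths figure quoted above rests on a simple counterfactual calculation: extrapolate the pre-program mortality trend forward, compare it with the observed post-program rates, and multiply the gap by the number of hospitalizations. The sketch below illustrates that arithmetic with made-up numbers; it is not the study's data or method, only the shape of the reasoning.

```python
# Illustrative numbers only; NOT from the JAMA study. Shows the logic of an
# excess-deaths estimate under a "prior trend continues unabated" counterfactual.
pre_trend_per_year = 0.1          # assumed pre-HRRP rise in 30-day mortality (pct. points/year)
baseline_rate = 8.0               # assumed mortality rate (%) in the last pre-program year
observed_rates = [8.3, 8.6, 9.0]  # assumed observed post-program rates (%), years 1-3
patients_per_year = 500_000       # assumed annual hospitalizations for the target conditions

excess_deaths = 0.0
for year, observed in enumerate(observed_rates, start=1):
    expected = baseline_rate + pre_trend_per_year * year  # counterfactual: old trend continues
    excess_deaths += (observed - expected) / 100 * patients_per_year

print(f"Illustrative excess deaths over three years: {excess_deaths:,.0f}")
```

The estimate is only as good as its two assumptions, which the authors state explicitly: that the program caused the mortality change, and that the pre-program trend would have continued unchanged.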
On consumer finance, personal characteristics, & health, many prefer to remain in a state of active ignorance even when information is freely available
Ho, Emily and Hagmann, David and Loewenstein, George F., Measuring Information Preferences (September 14, 2018). http://dx.doi.org/10.2139/ssrn.3249768
Abstract: Advances in medical testing and widespread access to the internet have made it easier than ever to obtain information. Yet, when it comes to some of the most important decisions in life, people often choose to remain ignorant for a variety of psychological and economic reasons. We design and validate an information preference scale to measure an individual’s desire to obtain or avoid information that may be unpleasant, but could improve their future decisions. The scale measures information preferences in three domains that are psychologically and materially consequential: consumer finance, personal characteristics, and health. We present tests of the scale’s reliability and validity and show that the scale predicts a real decision to obtain (or avoid) information in each of the three domains, as well as in the domain of politics, which is not explicitly measured in the scale. We find that across settings, many respondents prefer to remain in a state of active ignorance even when information is freely available, and that information preferences are a stable trait but can differ across domains. (Under R&R at Management Science)
Keywords: Information Avoidance, Scale Development, Information Preference, Health, Consumer Finance
JEL Classification: D83, D91, C90, I12
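For readers curious what "tests of the scale's reliability" involve in practice, the sketch below computes one standard statistic, Cronbach's alpha, for a block of items from one domain. The file and column names are hypothetical; this is not the authors' analysis code.

```python
# Hypothetical sketch: Cronbach's alpha for one domain of a multi-item scale.
import pandas as pd

responses = pd.read_csv("ips_items.csv")  # assumed: one column per scale item, one row per respondent
items = responses.filter(like="health_")  # assumed naming: health-domain items share a prefix

k = items.shape[1]                              # number of items in the domain
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha (health domain): {alpha:.2f}")
```

Predictive validity, as described in the abstract, would then be checked by testing whether domain scores predict an actual decision to view or avoid a piece of information.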
5-year-olds judged conventional eaters more positively than unconventional eaters, and judged both ingroup & outgroup members negatively for unconventional choices
Children judge others based on their food choices. Jasmine M. DeJesus et al. Journal of Experimental Child Psychology, Volume 179, March 2019, Pages 143-161. https://doi.org/10.1016/j.jecp.2018.10.009
Highlights
• 5-year-olds judged conventional eaters more positively than unconventional eaters.
• Unconventional foods were judged as negatively as disgust elicitors.
• Children judge ingroup and outgroup members negatively for unconventional choices.
• Children appreciate food choice as a behavior conveying social meaning.
Abstract: Individuals and cultures share some commonalities in food preferences, yet cuisines also differ widely across social groups. Eating is a highly social phenomenon; however, little is known about the judgments children make about other people’s food choices. Do children view conventional food choices as normative and consequently negatively evaluate people who make unconventional food choices? In five experiments, 5-year-old children were shown people who ate conventional and unconventional foods, including typical food items paired in unconventional ways. In Experiment 1, children preferred conventional foods and conventional food eaters. Experiment 2 suggested a link between expectations of conventionality and native/foreign status; children in the United States thought that English speakers were relatively more likely to choose conventional foods than French speakers. Yet, children in Experiments 3 and 4 judged people who ate unconventional foods as negatively as they judged people who ate canonical disgust elicitors and nonfoods, even when considering people from a foreign culture. Children in Experiment 5 were more likely to assign conventional foods to cultural ingroup members than to cultural outgroup members; nonetheless, they thought that no one was likely to eat the nonconventional items. These results demonstrate that children make normative judgments about other people’s food choices and negatively evaluate people across groups who deviate from conventional eating practices.
Wednesday, December 26, 2018
Constipated people tend to be obstinate, excessively concerned with hygiene, & inclined toward retaining possessions; diarrhoeic people tend to be careless, disorganized, & disposed to share their possessions with others
Consumer behaviour and the toilet: Research on expulsive and retentive personalities. Gianluigi Guido, Russell W. Belk, Cristian Rizzo, Giovanni Pino. Journal of Consumer Behaviour, https://doi.org/10.1002/cb.1709
Abstract: During the last five decades, a number of studies have attempted to draw from psychoanalytic theory to examine the relationship between evacuation disorders and a person's character. According to Freud's original conceptualization, early or harsh toilet training leads children to develop an anal retentive personality, characterized by the tendency to control their bowels as well as their material possessions; by contrast, liberal toilet training leads children to develop an anal expulsive personality, characterized by the tendency to excessively relieve faeces, as well as being careless, messy, and inclined to dispose of old products and buy new ones. Although toilet training may not be responsible, these sets of traits do cohere. To empirically examine these hypotheses, we studied the personality traits and consumption habits of people suffering from different bowel disorders. By means of semistructured interviews, we analysed the personality characteristics, sociodemographic backgrounds, and peculiar consumption habits of people suffering from constipation and diarrhoeic syndromes. The results show that constipated people tend to be obstinate, excessively concerned with hygiene, and inclined toward retaining possessions, whereas diarrhoeic people tend to be careless, disorganized, and disposed to share their possessions with others. We discuss the theoretical and practical implications of these results and indicate avenues for future research.
---
It is amazing, Freudianism doesn't die...
Tuesday, December 25, 2018
Is Islam Compatible with Free-Market Capitalism? An Empirical Analysis, 1970–2010: Capitalistic policies and institutions, it seems, may travel across religions more easily than culturalists claim
Is Islam Compatible with Free-Market Capitalism? An Empirical Analysis, 1970–2010. Indra de Soysa. Politics and Religion, https://doi.org/10.1017/S1755048318000780
Abstract: Are majority-Muslim countries laggards when it comes to developing liberal economic institutions? Using an Index of Economic Freedom and its component parts, this study finds that Muslim-dominant countries (>50% of the population) are positively associated with free-market capitalism. Protestant dominance is also positively correlated, but the association stems from just two components of the index, mainly “legal security and property rights protection.” Surprisingly, Protestant countries correlate negatively with “small government” and “freedom to trade,” two critical components of free-market capitalism. Muslim dominance shows positive correlations with all areas except for “legal security and property rights.” The results are consistent when assessing similar variables measuring property rights and government ownership of the economy collected by the Varieties of Democracy Project. Capitalistic policies and institutions, it seems, may travel across religions more easily than culturalists claim.
Consumers’ intelligence scores & their choice to co-own & lease their cars are linked; the link can be explained by higher social trust in people & institutions, as well as by their financial standing & tendency to seek savings
Sharing-Dominant Logic? Quantifying the Association between Consumer Intelligence and Choice of Social Access Modes. Jaakko Aspara, Kristina Wittkowski. Journal of Consumer Research, ucy074, https://doi.org/10.1093/jcr/ucy074
Abstract: With sharing economy and access-based consumption, consumers increasingly access goods through social access modes other than private ownership—such as co-ownership, leasing, or borrowing. Prior research focuses on consumers’ attitudinal motivations and consumption-cultural use experiences pertaining to such social exchange-based access modes. In so doing, prior research has overlooked the influence that consumers’ fundamental, even biologically-shaped cognitive traits may have on their choice of access modes. To fill this research gap, this study analyzes a big data set of more than 30,000 new car registrations by male consumers in Finland, including cognitive test data from the Finnish Defense Forces and covariates from other governmental sources. The field data suggests that consumers’ intelligence scores and their choice to co-own and lease their cars are positively associated. Econometric evidence further suggests that the association between intelligence and choice of social exchange-based access modes can be explained by intelligent consumers’ higher social trust in people and institutions, as well as two circumstantial mechanisms: their financial standing and tendency to seek savings. The findings from the field data are supported by an additional survey study (n = 460). Implications for the evolution of markets and consumption, as well as human intelligence and cooperation, are discussed.
Keywords: Access, ownership, cognitive ability, intelligence, sharing, social exchange
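The kind of association the abstract reports can be illustrated with a simple model: regress the choice of a shared access mode on the intelligence score plus covariates, then check whether adding a trust measure attenuates the coefficient. The sketch below is hypothetical (assumed file and column names), not the authors' econometric specification.

```python
# Hypothetical sketch of an access-mode association analysis (assumed columns).
import pandas as pd
import statsmodels.formula.api as smf

cars = pd.read_csv("registrations.csv")  # assumed: one row per new car registration
# shared_mode: 1 if co-owned or leased, 0 if privately owned (assumed coding)

base = smf.logit("shared_mode ~ intelligence_score + income + age + urban", data=cars).fit()

# Crude mechanism check: does controlling for social trust shrink the intelligence
# coefficient? (The paper's mediation analysis would be more formal than this.)
with_trust = smf.logit("shared_mode ~ intelligence_score + social_trust + income + age + urban",
                       data=cars).fit()

print("intelligence coefficient, base model:      ", round(base.params["intelligence_score"], 3))
print("intelligence coefficient, with trust added:", round(with_trust.params["intelligence_score"], 3))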
There is considerable evidence to suggest that aesthetic experiences engage a distributed set of structures in the brain, and likely emerge from the interactions of multiple neural systems
Internal Orientation in Aesthetic Experience. Oshin Vartanian. In The Oxford Handbook of Spontaneous Thought: Mind-Wandering, Creativity, and Dreaming, edited by Kalina Christoff and Kieran C.R. Fox. May 2018, DOI: 10.1093/oxfordhb/9780190464745.013.17
Abstract: There is considerable evidence to suggest that aesthetic experiences engage a distributed set of structures in the brain, and likely emerge from the interactions of multiple neural systems. In addition, aside from an external (i.e., object-focused) orientation, aesthetic experiences also involve an internal (i.e., person-focused) orientation. This internal orientation appears to have two dissociable neural components: one component involves the processing of visceral feeling states (i.e., interoception) and primarily engages the insula, whereas the other involves the processing of self-referential, autobiographical, and narrative information, and is represented by activation in the default mode network. Evidence supporting this neural dissociation has provided insights into processes that can lead to deep and moving aesthetic experiences.
Keywords: aesthetics, self-referential, interoception, narrative, default mode network
Fear extinction does not prevent post-traumatic stress or have long-term therapeutic benefits in fear-related disorders unless extinction memories are easily retrieved at later encounters with the once-threatening stimulus
Dopamine-dependent prefrontal reactivations explain long-term benefit of fear extinction. A. M. V. Gerlicher, O. Tüscher & R. Kalisch. Nature Communications, volume 9, Article number: 4294 (2018), https://www.nature.com/articles/s41467-018-06785-y
Abstract: Fear extinction does not prevent post-traumatic stress or have long-term therapeutic benefits in fear-related disorders unless extinction memories are easily retrieved at later encounters with the once-threatening stimulus. Previous research in rodents has pointed towards a role for spontaneous prefrontal activity occurring after extinction learning in stabilizing and consolidating extinction memories. In other memory domains spontaneous post-learning activity has been linked to dopamine. Here, we show that a neural activation pattern — evoked in the ventromedial prefrontal cortex (vmPFC) by the unexpected omission of the feared outcome during extinction learning — spontaneously reappears during postextinction rest. The number of spontaneous vmPFC pattern reactivations predicts extinction memory retrieval and vmPFC activation at test 24 h later. Critically, pharmacologically enhancing dopaminergic activity during extinction consolidation amplifies spontaneous vmPFC reactivations and correspondingly improves extinction memory retrieval at test. Hence, a spontaneous dopamine-dependent memory consolidation-based mechanism may underlie the long-term behavioral effects of fear extinction.
Those with the very rare genetic condition Urbach-Wiethe Disease had significantly more pleasant, shorter, and less complex dreams than controls; no differences in threat or danger levels
The Role of the Basolateral Amygdala in Dreaming. Yvonne Blake et al. Cortex, https://doi.org/10.1016/j.cortex.2018.12.016
Abstract: Neuroimaging studies have repeatedly shown amygdala activity during sleep (REM and NREM). Consequently, various theorists propose central roles for the amygdala in dreaming - particularly in the generation of dream affects, which seem to play a major role in dream plots. However, a causal role for the amygdala in dream phenomena has never been demonstrated. The traditional first step in determining this role is to observe the functional effects of isolated lesions to the brain structure in question. However, circumscribed bilateral amygdala lesions are extremely rare. Furthermore, the treatment of the amygdala as a unitary structure is problematic, as the basolateral and centromedial amygdala (BLA and CMA) may serve very different functions.
We analysed 23 dream reports collected from eight adult patients with bilateral calcification of the BLA as a result of a very rare genetic condition called Urbach-Wiethe Disease (UWD). We compared these dream reports to 52 reports collected from 17 matched controls. Given that the BLA has been implicated in various affective processes in waking life, we predicted that the emotional content of the patients’ dreams would differ from that of controls. Due to the exploratory nature of this research, a range of different dream characteristics were analysed.
A principal components analysis run on all data returned three key factors, namely pleasantness, length and danger. The UWD patients’ dream reports were significantly more pleasant and significantly shorter and less complex than control reports. No differences were found in levels of threat or danger.
The results support some current hypotheses concerning the amygdala’s role in dreaming, and call others into question. Future research should examine whether these UWD patients show generally impaired emotional episodic memory due to BLA damage, which could explain some of the current findings.
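The principal components analysis described in the abstract can be sketched as follows. The code is a hypothetical illustration (assumed feature names and grouping column), not the authors' pipeline: it standardizes per-report ratings, extracts three components, and compares patient and control reports on each.

```python
# Hypothetical sketch: PCA over scored dream-report features, then a group comparison.
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

reports = pd.read_csv("dream_scores.csv")  # assumed: one row per dream report, with a "group" column
features = ["pleasantness", "word_count", "complexity", "threat", "danger"]  # assumed ratings

X = StandardScaler().fit_transform(reports[features])
pcs = PCA(n_components=3).fit_transform(X)
reports["pc1"], reports["pc2"], reports["pc3"] = pcs[:, 0], pcs[:, 1], pcs[:, 2]

# Nonparametric comparison of UWD patients vs. controls on each component.
for pc in ["pc1", "pc2", "pc3"]:
    u, p = mannwhitneyu(reports.loc[reports.group == "UWD", pc],
                        reports.loc[reports.group == "control", pc])
    print(pc, "Mann-Whitney U =", round(u, 1), "p =", round(p, 3))
```

With only 23 patient reports and 52 control reports, as in the study, small samples argue for exactly this kind of simple, exploratory comparison rather than heavily parameterized models.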
Attraction to similar others: for all three relationship types (friendships, casual/short-term, & long-term), the similarity domains that count most are political views, career goals, food preferences, travel desires, & music preferences
Domains of Similarity and Attraction in Three Types of Relationships. Stanislav Treger, James N. Masciale. Interpersona, 2018, Vol. 12(2), 254–266, doi:10.5964/ijpr.v12i2.321
Abstract: For decades, social scientists have observed that people greatly desire a partner who is similar to themselves. Less is known, however, about whether particular similarity domains (e.g., music preferences) may uniquely influence relationship formation. We address this gap by examining people’s preferences for 18 similarity domains in three types of relationships: friendships, casual/short-term, and long-term. The most important similarity domains, across the three relationship types, were political views, career goals, food preferences, travel desires, and music preferences. General similarity was most important in long-term rather than in friendships and casual/short-term relationships, with the latter two relationship types not differing from one another. This pattern emerged for all similarity domains with four exceptions: preferences for books, video games, computer brands, and cell phone brands. No sex differences emerged in similarity domains except in preferences in video games and brands of cell phones and computers. Men rated these domains to be more important than did women. All three of these differences were of relatively small effect size. We tie this work into the larger body of research on similarity and preferences for partner traits.
Keywords: attraction, mate preferences, similarity