Scan the roster of history’s intellectual and artistic giants, and
you quickly notice something remarkable: Many were immigrants or
refugees, from Victor Hugo, W.H. Auden and Vladimir Nabokov to Nikola
Tesla, Marie Curie and Sigmund Freud. At the top of this pantheon sits
the genius’s genius: Einstein. His “miracle year” of 1905, when he
published no fewer than four groundbreaking scientific papers, occurred
after he had emigrated from Germany to Switzerland.
Lost in
today’s immigration debate is this unavoidable fact: An awful lot of
brilliant minds blossomed in alien soil. That is especially true of the
U.S., a nation defined by the creative zeal of the newcomer. Today,
foreign-born residents account for only 13% of the U.S. population but
hold nearly a third of all patents and a quarter of all Nobel Prizes
awarded to Americans.
But why? What is it about the act of relocating to distant shores—voluntarily or not—that sparks creative genius?
When
pressed to explain, we usually turn to a tidy narrative: Scruffy but
determined immigrant, hungry for success, arrives on distant shores.
Immigrant works hard. Immigrant is bolstered by a supportive family, as
well as a wider network from the old country. Immigrant succeeds, buys
flashy new threads.
It is an inspiring narrative—but it is also
misleading. That fierce drive might explain why immigrants and refugees
succeed in their chosen fields, but it fails to explain their
exceptional creativity. It fails to explain their genius.
Recent research points to an intriguing explanation. Several studies
have shed light on the role of “schema violations” in intellectual
development. A schema violation occurs when our world is turned
upside-down, when temporal and spatial cues are off-kilter.
In a 2011 study
led by the Dutch psychologist Simone Ritter and published in the
Journal of Experimental Social Psychology, researchers asked some
subjects to make breakfast in the “wrong” order and others to perform
the task in the conventional manner. Those in the first group—the ones
engaged in a schema violation—consistently demonstrated more “cognitive
flexibility,” a prerequisite for creative thinking.
This suggests that it isn’t the immigrant’s ambition that explains
her creativity but her marginality. Many immigrants possess what the
psychologist Nigel Barber calls “oblique perspective.” Uprooted from the
familiar, they see the world at an angle, and this fresh perspective
enables them to surpass the merely talented. To paraphrase the
philosopher Schopenhauer: Talent hits a target no one else can hit.
Genius hits a target no one else can see.
Freud is a classic
case. As a little boy, he and his family joined a flood of immigrants
from the fringes of the Austro-Hungarian empire to Vienna, a city where,
by 1913, less than half the population was native-born. Freud tried to
fit in. He wore lederhosen and played a local card game called tarock,
but as a Jew and an immigrant, he was never fully accepted. He was an
insider-outsider, residing far enough beyond the mainstream to see the
world through fresh eyes yet close enough to propagate his ideas.
Marie
Curie, born and raised in Poland, was frustrated by the lack of
academic opportunities in her homeland. In 1891, at age 24, she
immigrated to Paris. Life was difficult at first; she studied during the
day and tutored in the evenings. Two years later, though, she earned a
degree in physics, launching a stellar career that culminated with two
Nobel Prizes.
Exceptionally creative people such as Curie and Freud possess many
traits, of course, but their “openness to experience” is the most
important, says the cognitive psychologist Scott Barry Kaufman of the
University of Pennsylvania. That seems to hold for entire societies as
well.
Consider a country like Japan, which has historically been among the
world’s most closed societies. Examining the long stretch of time from
580 to 1939, Dean Simonton of the University of California, Davis, writing in the Journal of Personality and Social Psychology,
compared Japan’s “extra cultural influx” (from immigration, travel
abroad, etc.) in different eras with its output in such fields as
medicine, philosophy, painting and literature. Dr. Simonton found a
consistent correlation: the greater Japan’s openness, the greater its
achievements.
It isn’t necessarily new ideas from the outside that directly drive
innovation, Dr. Simonton argues. It’s simply their presence as a goad.
Some people start to see the arbitrary nature of many of their own
cultural habits and open their minds to new possibilities. Once you
recognize that there is another way of doing X or thinking about Y, all
sorts of new channels open to you, he says. “The awareness of cultural
variety helps set the mind free,” he concludes.
History bears
this out. In ancient Athens, foreigners known as metics (today we’d call
them resident aliens) contributed mightily to the city-state’s
brilliance. Renaissance Florence recruited the best and brightest from
the crumbling Byzantine Empire. Even when the “extra cultural influx”
arrives uninvited, as it did in India during the British Raj, creativity
sometimes results. The intermingling of cultures sparked the “Bengal
Renaissance” of the late 19th century.
In a 2014 study
published in the Creativity Research Journal, Dr. Ritter and her
colleagues found that people did not need to participate directly in a
schema violation in order to boost their own creative thinking. Merely
watching an actor perform an “upside-down” task did the trick, provided
that the participants identified with the actor. This suggests that even
non-immigrants benefit from the otherness of the newcomer.
Not
all cultural collisions end happily, of course, and not all immigrants
become geniuses. The adversity that spurs some to greatness sends others
into despair. But as we wrestle with our own immigration and refugee
policies, we would be wise to view the welcome mat not as charity but,
rather, as enlightened self-interest. Once creativity is in the air, we
all breathe more stimulating air.
—Mr. Weiner is the
author of “The Geography of Genius: A Search for the World’s Most
Creative Places, From Ancient Athens to Silicon Valley,” just published
by Simon & Schuster.
Almost nobody in Washington cares, and most of the financial media haven’t noticed. But the inspector general’s office at the Federal Reserve recently reported the disturbing results of an internal investigation. Last December the central bank internally identified “fundamental weaknesses in key areas” related to the Fed’s own governance of the stress testing it conducts of financial firms.
The Fed’s stress tests theoretically judge whether the country’s largest banks can withstand economic downturns. So the Fed identifying a problem with its own management of the stress tests is akin to an energy company noticing that something is not right at one of its nuclear reactors.
According to the inspector general, “The governance review findings include, among other items, a shortcoming in policies and procedures, insufficient model testing” and “incomplete structures and information flows to ensure proper oversight of model risk management.” These Fed models are essentially a black box to the public, so there’s no way to tell from the outside how large a problem this is.
The Fed’s ability to construct and maintain financial and economic models is much more than a subject of intellectual curiosity. Given that Fed-approved models at the heart of the so-called Basel capital standards proved to be spectacularly wrong in the run-up to the last financial crisis, the new report is more reason to wonder why anyone should expect them to be more accurate the next time.
The Fed’s IG adds that last year’s internal review “notes that similar findings identified at institutions supervised by the Federal Reserve have typically been characterized as matters requiring immediate attention or as matters requiring attention.”
That’s for sure. Receiving a “matters requiring immediate attention” letter from the Fed is a big deal at a bank. The Journal reported last year that after the Fed used this language in a letter to Credit Suisse castigating the bank’s work in the market for leveraged loans, the bank chose not to participate in the financing of several buy-out deals.
But it’s hard to tell if anything will come from this report that seems to have fallen deep in a Beltway forest. The IG office’s report says that the Fed is taking a number of steps to correct its shortcomings, and that the Fed’s reform plans “appear to be responsive to our recommendations.”
The Fed wields enormous power with little democratic accountability and transparency. This was tolerable when the Fed’s main job was monetary, but its vast new regulatory authority requires more scrutiny. Congress should add the Fed’s stressed-out standards for stress tests to its oversight list.
Someone at Yale University should have dressed up as Robespierre for Halloween, as its students seem to have lost their minds over what constitutes a culturally appropriate costume. Identity and grievance politics keeps hitting new lows on campus, and now even liberal professors are being consumed by the revolution.
On Oct. 28 Yale Dean Burgwell Howard and Yale’s Intercultural Affairs Committee blasted out an email advising students against “culturally unaware” Halloween costumes, with self-help questions such as: “If this costume is meant to be historical, does it further misinformation or historical and cultural inaccuracies?” Watch out for insensitivity toward “religious beliefs, Native American/Indigenous people, Socio-economic strata, Asians, Hispanic/Latino, Women, Muslims, etc.” In short, everyone.
Who knew Yale still employed anyone willing to doubt the costume wardens? But in response to the dean’s email, lecturer in early childhood education Erika Christakis mused to the student residential community she oversees with her husband, Nicholas, a Yale sociologist and physician: “I don’t wish to trivialize genuine concerns,” but she wondered if colleges had morphed into “places of censure and prohibition.”
And: “Nicholas says, if you don’t like a costume someone is wearing, look away, or tell them you are offended. Talk to each other. Free speech and the ability to tolerate offence are the hallmarks of a free and open society.”
Some 750 Yale students, faculty, alumni and others signed a letter saying Ms. Christakis’s “jarring” email served to “further degrade marginalized people,” as though someone with a Yale degree could be marginalized in America. Students culturally appropriated a Puritan shaming trial and encircled Mr. Christakis on a lawn, cursing and heckling him to quit. “I stand behind free speech,” he told the mob.
Hundreds of protesters also turned on Jonathan Holloway, Yale’s black dean, demanding to know why the school hadn’t addressed allegations that a black woman had been kept out of a fraternity party. Fragile scholars also melted down over a visiting speaker who made a joke about Yale’s fracas while talking at a conference sponsored by the school’s William F. Buckley, Jr. program focused on . . . the future of free speech.
The episode reminds us of when Yale alumnus Lee Bass in 1995 asked the university to return his $20 million donation. Mr. Bass had hoped to seed a curriculum in Western civilization, but Yale’s faculty ripped the idea as white imperialism, and he requested a refund. Two decades later the alternative to Western civilization is on display, and it seems to be censorship.
According to a student reporting for the Washington Post, Yale president Peter Salovey told minority students in response to the episode that “we failed you.” That’s true, though not how he means it. The failure is that elite colleges are turning out ostensible leaders who seem to have no idea why America’s Founders risked extreme discomfort—that is, death—for the right to speak freely.
The debate about whether the Joint Comprehensive Plan of Action with
Iran regarding its nuclear program stabilized the Middle East’s
strategic framework had barely begun when the region’s geopolitical
framework collapsed. Russia’s unilateral military action in Syria is the
latest symptom of the disintegration of the American role in
stabilizing the Middle East order that emerged from the Arab-Israeli war
of 1973.
In the aftermath of that conflict, Egypt abandoned its
military ties with the Soviet Union and joined an American-backed
negotiating process that produced peace treaties between Israel and
Egypt, and Israel and Jordan, a United Nations-supervised disengagement
agreement between Israel and Syria, which has been observed for over
four decades (even by the parties of the Syrian civil war), and
international support of Lebanon’s sovereign territorial integrity.
Later, Saddam Hussein’s war to incorporate Kuwait into Iraq was
defeated by an international coalition under U.S. leadership. American
forces led the war against terror in Iraq and Afghanistan. Egypt,
Jordan, Saudi Arabia and the other Gulf States were our allies in all
these efforts. The Russian military presence disappeared from the
region.
That geopolitical pattern is now in shambles. Four states
in the region have ceased to function as sovereign. Libya, Yemen, Syria
and Iraq have become targets for nonstate movements seeking to impose
their rule. Over large swaths in Iraq and Syria, an ideologically
radical religious army has declared itself the Islamic State (also
called ISIS or ISIL) and an unrelenting foe of established world order.
It seeks to replace the international system’s multiplicity of states
with a caliphate, a single Islamic empire governed by Shariah law.
ISIS’
claim has given the millennium-old split between the Shiite and Sunni
sects of Islam an apocalyptic dimension. The remaining Sunni states feel
threatened both by the religious fervor of ISIS and by Shiite
Iran, potentially the most powerful state in the region. Iran compounds
its menace by presenting itself in a dual capacity. On one level, Iran
acts as a legitimate Westphalian state conducting traditional diplomacy,
even invoking the safeguards of the international system. At the same
time, it organizes and guides nonstate actors seeking regional hegemony
based on jihadist principles: Hezbollah in Lebanon and Syria; Hamas in
Gaza; the Houthis in Yemen.
Thus the Sunni Middle East risks
engulfment by four concurrent sources: Shiite-governed Iran and its
legacy of Persian imperialism; ideologically and religiously radical
movements striving to overthrow prevalent political structures;
conflicts within each state between ethnic and religious groups
arbitrarily assembled after World War I into (now collapsing) states;
and pressures stemming from detrimental domestic political, social and
economic policies.
The fate of Syria provides a vivid
illustration: What started as a Sunni revolt against the Alawite (a
Shiite offshoot) autocrat Bashar Assad fractured the state into its
component religious and ethnic groups, with nonstate militias supporting
each warring party, and outside powers pursuing their own strategic
interests. Iran supports the Assad regime as the linchpin of an Iranian
historic dominance stretching from Tehran to the Mediterranean. The Gulf
States insist on the overthrow of Mr. Assad to thwart Shiite Iranian
designs, which they fear more than Islamic State. They seek the defeat
of ISIS while avoiding an Iranian victory. This ambivalence has been
deepened by the nuclear deal, which in the Sunni Middle East is widely
interpreted as tacit American acquiescence in Iranian hegemony.
These
conflicting trends, compounded by America’s retreat from the region,
have enabled Russia to engage in military operations deep in the Middle
East, a deployment unprecedented in Russian history. Russia’s principal
concern is that the Assad regime’s collapse could reproduce the chaos of
Libya, bring ISIS into power in Damascus, and turn all of Syria into a
haven for terrorist operations, reaching into Muslim regions inside
Russia’s southern border in the Caucasus and elsewhere.
On the
surface, Russia’s intervention serves Iran’s policy of sustaining the
Shiite element in Syria. In a deeper sense, Russia’s purposes do not
require the indefinite continuation of Mr. Assad’s rule. It is a classic
balance-of-power maneuver to divert the Sunni Muslim terrorist threat
from Russia’s southern border region. It is a geopolitical, not an
ideological, challenge and should be dealt with on that level. Whatever
the motivation, Russian forces in the region—and their participation in
combat operations—produce a challenge that American Middle East policy
has not encountered in at least four decades.
American
policy has sought to straddle the motivations of all parties and is
therefore on the verge of losing the ability to shape events. The U.S.
is now opposed to, or at odds in some way or another with, all parties
in the region: with Egypt on human rights; with Saudi Arabia over Yemen;
with each of the Syrian parties over different objectives. The U.S.
proclaims the determination to remove Mr. Assad but has been unwilling
to generate effective leverage—political or military—to achieve that
aim. Nor has the U.S. put forward an alternative political structure to
replace Mr. Assad should his departure somehow be realized.
Russia,
Iran, ISIS and various terrorist organizations have moved into this
vacuum: Russia and Iran to sustain Mr. Assad; Tehran to foster imperial
and jihadist designs. The Sunni states of the Persian Gulf, Jordan and
Egypt, faced with the absence of an alternative political structure,
favor the American objective but fear the consequence of turning Syria
into another Libya.
American policy on Iran has moved to the
center of its Middle East policy. The administration has insisted that
it will take a stand against jihadist and imperialist designs by Iran
and that it will deal sternly with violations of the nuclear agreement.
But it seems also passionately committed to the quest for bringing about
a reversal of the hostile, aggressive dimension of Iranian policy
through historic evolution bolstered by negotiation.
The prevailing U.S. policy toward Iran is often compared by its advocates to the Nixon
administration’s opening to China, which contributed, despite some
domestic opposition, to the ultimate transformation of the Soviet Union
and the end of the Cold War. The comparison is not apt. The opening to
China in 1971 was based on the mutual recognition by both parties that
the prevention of Russian hegemony in Eurasia was in their common
interest. And 42 Soviet divisions lining the Sino-Soviet border
reinforced that conviction. No comparable strategic agreement exists
between Washington and Tehran. On the contrary, in the immediate
aftermath of the nuclear accord, Iran’s Supreme Leader Ayatollah Ali Khamenei
described the U.S. as the “Great Satan” and rejected negotiations with
America about nonnuclear matters. Completing his geopolitical diagnosis,
Mr. Khamenei also predicted that Israel would no longer exist in 25
years.
Forty-five years ago, the expectations of China and the
U.S. were symmetrical. The expectations underlying the nuclear agreement
with Iran are not. Tehran will gain its principal objectives at the
beginning of the implementation of the accord. America’s benefits reside
in a promise of Iranian conduct over a period of time. The opening to
China was based on an immediate and observable adjustment in Chinese
policy, not on an expectation of a fundamental change in China’s
domestic system. The optimistic hypothesis on Iran postulates that
Tehran’s revolutionary fervor will dissipate as its economic and
cultural interactions with the outside world increase.
American
policy runs the risk of feeding suspicion rather than abating it. Its
challenge is that two rigid and apocalyptic blocs are confronting each
other: a Sunni bloc consisting of Egypt, Jordan, Saudi Arabia and the
Gulf States; and the Shiite bloc comprising Iran, the Shiite sector of
Iraq with Baghdad as its capital, the Shiite south of Lebanon under
Hezbollah control facing Israel, and the Houthi portion of Yemen,
completing the encirclement of the Sunni world. In these circumstances,
the traditional adage that the enemy of your enemy can be treated as
your friend no longer applies. For in the contemporary Middle East, it
is likely that the enemy of your enemy remains your enemy.
A
great deal depends on how the parties interpret recent events. Can the
disillusionment of some of our Sunni allies be mitigated? How will
Iran’s leaders interpret the nuclear accord once implemented—as a
near-escape from potential disaster counseling a more moderate course,
returning Iran to an international order? Or as a victory in which they
have achieved their essential aims against the opposition of the U.N.
Security Council, having ignored American threats and, hence, as an
incentive to continue Tehran’s dual approach as both a legitimate state
and a nonstate movement challenging the international order?
Two-power
systems are prone to confrontation, as was demonstrated in Europe in
the run-up to World War I. Even with traditional weapons technology, to
sustain a balance of power between two rigid blocs requires an
extraordinary ability to assess the real and potential balance of
forces, to understand the accumulation of nuances that might affect this
balance, and to act decisively to restore it whenever it deviates from
equilibrium—qualities not heretofore demanded of an America sheltered
behind two great oceans.
But the current crisis is taking place
in a world of nontraditional nuclear and cyber technology. As competing
regional powers strive for comparable threshold capacity, the
nonproliferation regime in the Middle East may crumble. If nuclear
weapons become established, a catastrophic outcome is nearly inevitable.
A strategy of pre-emption is inherent in the nuclear technology. The
U.S. must be determined to prevent such an outcome and apply the
principle of nonproliferation to all nuclear aspirants in the region.
Too
much of our public debate deals with tactical expedients. What we need
is a strategic concept and to establish priorities on the following
principles:
• So long as ISIS survives and remains in control of a
geographically defined territory, it will compound all Middle East
tensions. Threatening all sides and projecting its goals beyond the
region, it freezes existing positions or tempts outside efforts to
achieve imperial jihadist designs. The destruction of ISIS is more
urgent than the overthrow of Bashar Assad, who has already lost over
half of the area he once controlled. Making sure that this territory
does not become a permanent terrorist haven must have precedence. The
current inconclusive U.S. military effort risks serving as a recruitment
vehicle for ISIS, which can claim to have stood up to American might.
• The
U.S. has already acquiesced in a Russian military role. Painful as this
is to the architects of the 1973 system, attention in the Middle East
must remain focused on essentials. And there exist compatible
objectives. In a choice among strategies, it is preferable for ISIS-held
territory to be reconquered by moderate Sunni forces or outside
powers rather than by Iranian jihadist or imperial forces. For Russia, limiting
its military role to the anti-ISIS campaign may avoid a return to Cold
War conditions with the U.S.
• The reconquered territories should
be restored to the local Sunni rule that existed there before the
disintegration of both Iraqi and Syrian sovereignty. The sovereign
states of the Arabian Peninsula, as well as Egypt and Jordan, should
play a principal role in that evolution. After the resolution of its
constitutional crisis, Turkey could contribute creatively to such a
process.
• As the terrorist region is being dismantled and
brought under nonradical political control, the future of the Syrian
state should be dealt with concurrently. A federal structure could then
be built between the Alawite and Sunni portions. If the Alawite regions
become part of a Syrian federal system, a context will exist for the
role of Mr. Assad, which reduces the risks of genocide or chaos leading
to terrorist triumph.
• The U.S. role in such a Middle East would
be to implement the military assurances in the traditional Sunni states
that the administration promised during the debate on the Iranian
nuclear agreement, and which its critics have demanded.
• In this
context, Iran’s role can be critical. The U.S. should be prepared for a
dialogue with an Iran returning to its role as a Westphalian state
within its established borders.
The U.S. must decide for itself
the role it will play in the 21st century; the Middle East will be our
most immediate—and perhaps most severe—test. At issue is not the
strength of American arms but rather American resolve in understanding
and mastering a new world.
Mr. Kissinger served as national-security adviser and secretary of state under Presidents Nixon and Ford.
In a 2005 best seller, Harry Frankfurt, a Princeton philosophy professor, explored the often complex nature of popular false ideas. “On Bulls—” examined outright lies, ambiguous forms of obfuscation and the not-always-transparent intentions of those who promote them. Now, in “On Inequality,” Mr. Frankfurt eviscerates one of the shibboleths of our time: that economic inequality—in his definition, “the possession by some of more money than others”—is the most urgent issue confronting society. This idea, he believes, suffers from logical and moral errors of the highest order.
The fixation on equality, as a moral ideal in and of itself, is critically flawed, according to the professor. It holds that justice is determined by one person’s position relative to another, not his absolute well-being. Therefore the logic of egalitarianism can lead to perverse outcomes, he argues. Most egregiously, income inequality could be eliminated very effectively “by making everyone equally poor.” And while the lowest economic stratum of society is always associated with abject poverty, this need not be the case. Mr. Frankfurt imagines instances where those “who are doing considerably worse than others may nonetheless be doing rather well.” This possibility—as with contemporary America’s wide inequalities among relatively prosperous people—undermines the coherence of a philosophy mandating equality.
Mr. Frankfurt acknowledges that “among morally conscientious individuals, appeals in behalf of equality often have very considerable emotional or rhetorical power.” The motivations for pursuing equality may be well-meaning but they are profoundly misguided and contribute to “the moral disorientation and shallowness of our time.”
The idea that equality in itself is a paramount goal, Mr. Frankfurt argues, alienates people from their own characters and life aspirations. The amount of wealth possessed by others does not bear on “what is needed for the kind of life a person would most sensibly and appropriately seek for himself.” The incessant egalitarian comparison of one against another subordinates each individual’s goals to “those that are imposed on them by the conditions in which others happen to live.” Thus, individuals are led to apply an arbitrary relative standard that does not “respect” their authentic selves.
If his literalist critique of egalitarianism is often compelling, Mr. Frankfurt’s own philosophy has more in common with such thinking than is first apparent. For Mr. Frankfurt, the imperative of justice is to alleviate poverty and improve lives, not to make people equal. He does not, however, think that it is morally adequate merely to provide people with a safety net. Instead, he argues for an ideal of “sufficiency.”
By sufficiency Mr. Frankfurt means enough economic resources for every individual to be reasonably satisfied with his circumstances, assuming that the individual’s satisfaction need not be disturbed by others having more. While more money might be welcome, it would not “alter his attitude toward his life, or the degree of his contentment with it.” The achievement of economic and personal contentment by everyone is Mr. Frankfurt’s priority. In fact, his principle of sufficiency is so ambitious it demands that lack of money should never be the cause of anything “distressing or unsatisfying” in anyone’s life.
What’s the harm of such a desirable, if unrealistic, goal? The author declares that inequality is “morally disturbing” only when his standard of sufficiency is not achieved. His just society would, in effect, mandate a universal entitlement to a lifestyle that has been attained only by a minuscule fraction of humans in all history. Mr. Frankfurt recognizes that such reasoning may bring us full circle: “The most feasible approach” to universal sufficiency may well be policies that, in practice, differ little from those advocated in the “pursuit of equality.”
In passing, the author notes another argument against egalitarianism, the “dangerous conflict between equality and liberty.” He is referring to the notion that leaving people free to choose their work and what goods and services they consume will always lead to an unequal distribution of income. Imposing any preconceived economic distribution will, as the philosopher Robert Nozick argued, involve “continuous interference in people’s lives.” Like egalitarianism, Mr. Frankfurt’s ideal of “sufficiency” would hold property rights and economic liberty hostage to his utopian vision.
Such schemes, Nozick argued, see economic assets as having arrived on earth fully formed, like “manna from heaven,” with no consideration of their human origin. Mr. Frankfurt also presumes that one person’s wealth must be the reason others don’t have a “sufficient” amount to be blissfully carefree; he condemns the “excessively affluent” who have “extracted” too much from the nation. This leaves a would-be philosopher-king the task of divvying up loot as he chooses.
On the surface, “On Inequality” is a provocative challenge to a prevailing orthodoxy. But as the author’s earlier book showed, appearances can deceive. When Thomas Piketty, in “Capital in the Twenty-First Century,” says that most wealth is rooted in theft or is arbitrary, or when Mr. Frankfurt’s former Princeton colleague Paul Krugman says the “rich” are “undeserving,” they are not (just) making the case for equality. By arguing that wealth accumulation is inherently unjust, they lay a moral groundwork for confiscation of property. Similarly, Mr. Frankfurt accuses the affluent of “gluttony”—a sentiment about which there appears to be unanimity in that temple of tenured sufficiency, the Princeton faculty club. The author claims to be motivated by respect for personal autonomy and fulfillment. By ignoring economic liberty, he reveals he is not.
President Obama arrived in Kenya
on Friday and will travel from here to Ethiopia, two crucial U.S. allies
in East Africa. The region is not only emerging as an economic
powerhouse, it is also an important front in the battle with al Qaeda,
al-Shabaab, Islamic State and other Islamist radicals.
Yet
grievances related to how the International Criminal Court’s universal
jurisdiction is applied in Africa are interfering with U.S. and European
relations on the continent. In Africa there are accusations of
neocolonialism and even racism in ICC proceedings, and a growing
consensus that Africans are being unjustly indicted by the court.
It
wasn’t supposed to be this way. After the failure to prevent mass
atrocities in Europe and Africa in the 1990s, a strong consensus emerged
that combating impunity had to be an international priority. Ad hoc
United Nations tribunals were convened to judge the masterminds of
genocide and crimes against humanity in Yugoslavia, Rwanda and Sierra
Leone. These courts were painfully slow and expensive. But their
mandates were clear and limited, and they helped countries to turn the
page and focus on rebuilding.
Soon universal jurisdiction was
seen not only as a means to justice but also as a tool for preventing
atrocities in the first place. Several countries in Western Europe
including Spain, the United Kingdom, Belgium and France empowered their
national courts with universal jurisdiction. In 2002 the treaty establishing
the International Criminal Court came into force.
Africa and Europe were early
adherents and today constitute the bulk of ICC membership. But India,
China, Russia and most of the Middle East—representing well over half
the world’s population—stayed out. So did the United States. Leaders in
both parties worried that an unaccountable supranational court would
become a venue for politicized show trials. The track record of the ICC
and European courts acting under universal jurisdiction has amply borne
out these concerns.
Only when U.S. Defense Secretary Donald Rumsfeld threatened to move NATO headquarters out of Brussels in 2003 did Belgium rein in efforts to indict former President George H.W. Bush, and Gens. Colin Powell and Tommy Franks,
for alleged “war crimes” during the 1990-91 Gulf War. Spanish courts
have indicted American military personnel in Iraq and investigated the
U.S. detention facility in Guantanamo Bay.
But with powerful
states able to shield themselves and their clients, Africa has borne the
brunt of indictments. Far from pursuing justice for victims, these
courts have become a venue for public-relations exercises by activist
groups. Within African countries, they have been manipulated by one
political faction to sideline another, often featuring in electoral
politics.
The ICC’s recent indictments of top Kenyan officials are a prime example. In October 2014, Kenyan President Uhuru Kenyatta
became the first sitting head of state to appear before the ICC, though
he took the extraordinary step of temporarily transferring power to his
deputy to avoid the precedent. ICC prosecutors indicted Mr. Kenyatta in
connection with Kenya’s post-election ethnic violence of 2007-08, in
which some 1,200 people were killed.
Last December the ICC
withdrew all charges against Mr. Kenyatta, saying the evidence had “not
improved to such an extent that Mr Kenyatta’s alleged criminal
responsibility can be proven beyond reasonable doubt.” As U.S. assistant
secretary of state for African affairs from 2005-09, and the point
person during Kenya’s 2007-08 post-election violence, I knew the ICC
indictments were purely political. The court’s decision to continue its
case against Kenya’s deputy president, William Ruto, reflects a degree of indifference and even hostility to Kenya’s efforts to heal its political divisions.
The ICC’s indictments in Kenya began with former chief prosecutor Luis Moreno-Ocampo’s
determination to prove the court’s relevance in Africa by going after
what he reportedly called “low-hanging fruit.” In other words, African
political and military leaders unable to resist ICC jurisdiction.
More
recently, the arrest of Rwandan chief of intelligence Lt. Gen. Emmanuel
Karenzi Karake in London last month drew a unanimous reproach from the
African Union’s Peace and Security Council. The warrant dates to a 2008
Spanish indictment for alleged reprisal killings following the 1994
Rwandan genocide. At the time of the indictment, Mr. Karenzi Karake was
deputy commander of the joint U.N.-African Union peacekeeping operation
in Darfur. The Rwandan troops under his command were the backbone of
the Unamid force, and his performance in Darfur was by all accounts
exemplary.
Moreover, a U.S. government interagency review
conducted in 2007-08, when I led the State Department’s Bureau of
African Affairs, found that the Spanish allegations against Mr. Karenzi
Karake were false and unsubstantiated. The U.S. fully backed his
reappointment in 2008 as deputy commander of Unamid forces. It would be a
travesty of justice if the U.K. were to extradite Mr. Karenzi Karake to Spain
to stand trial.
Sadly, the early hope of “universal jurisdiction”
ending impunity for perpetrators of genocide and crimes against
humanity has given way to cynicism, both in Africa and the West. In
Africa it is believed that, in the rush to demonstrate their power,
these courts and their defenders have been too willing to brush aside
considerations of due process that they defend at home.
In the
West, the cynicism is perhaps even more damaging because it calls into
question the moral capabilities of Africans and their leaders, and
revives the language of paternalism and barbarism of earlier
generations.
Ms. Frazer, a former U.S. ambassador to South
Africa (2004-05) and assistant secretary of state for African affairs
(2005-09), is an adjunct senior fellow for Africa studies at the Council
on Foreign Relations.
Brilliance isn’t the only key to Warren Buffett’s investing success. See rule No. 5.
The U.S. economy shrank last quarter. The Federal Reserve is
widely expected to begin raising interest rates later this year. U.S.
stocks are expensive by many measures. Greece’s national finances remain
fragile. Oh, and election season already is under way in the U.S.
Investors who are tempted to sell risky assets and flee to safety don’t have to look far for justification.
If you are one of them, ponder this: Most of what matters in investing involves bedrock principles, not current events.
Here are five principles every investor should keep in mind:
1. Diversification is how you limit the risk of losses in an uncertain world.
If,
30 years ago, a visitor from the future had said that the Soviet Union
had collapsed, Japan’s stock market had stagnated for a quarter century,
China had become a superpower and North Dakota had helped turn the U.S.
into a fast-growing source of crude oil, few would have believed it.
The next 30 years will be just as surprising.
Diversification among different assets can be frustrating. It requires, at every point in time, owning some unpopular assets.
Why would I want to own European stocks if Europe’s economy is such a mess? Why should I buy bonds if interest rates are so low?
The appropriate answer is, “Because the future will play out in ways you or your adviser can’t possibly comprehend.”
Owning a little bit of everything is a bet on humility, which the history of investing shows is a valuable trait.
2. You are your own worst enemy.
The biggest risk investors face isn’t a recession, a bear market, the Federal Reserve or their least favorite political party.
It is their own emotions and biases, and the destructive behaviors they cause.
You
can be the best stock picker in the world, capable of finding
tomorrow’s winning businesses before anyone else. But if you panic and
sell during the next bear market, none of it will matter.
You can
be armed with an M.B.A. and have 40 years before retirement to let your
savings compound into a fortune. But if you have a gambling mentality
and you day-trade penny stocks, your outlook is dismal.
You
can be a mathematical genius, building the most sophisticated
stock-market forecasting models. But if you don’t understand the limits
of your intelligence, you are on your way to disaster.
There
aren’t many iron rules of investing, but one of them is that no amount
of brain power can compensate for behavioral errors. Figure out what
mistakes you are prone to make and embrace strategies that limit the
risk.
3. There is a price to pay.
The stock market has historically offered stellar long-term returns, far better than cash or bonds.
But
there is a cost. The price of admission to earn high long-term returns
in stocks is a ceaseless torrent of unpredictable outcomes, senseless
volatility and unexpected downturns.
If you can stick with your
investments through the rough spots, you don’t actually pay this bill;
it is a mental surcharge. But it is very real. Not everyone is willing
to pay it, which is why there is opportunity for those who are.
There
is an understandable desire to forecast what the market will do in the
short run. But the reason stocks offer superior long-term returns is
precisely because we can’t forecast what they will do in the short run.
4. When in doubt, choose the investment with the lowest fee.
As a group, investors’ profits always will equal the overall market’s returns minus all fees and expenses.
Below-average fees, therefore, offer one of your best shots at earning above-average results.
A
talented fund manager can be worth a higher fee, mind you. But enduring
outperformance is one of the most elusive investing skills.
According
to Vanguard Group, which has championed low-cost investing products,
more than 80% of actively managed U.S. stock funds underperformed a
low-cost index fund in the 10 years through December. It is far more
common for a fund manager to charge excess fees than to deliver excess
performance.
There are no promises in investing. The best you can
do is put the odds in your favor. And the evidence is overwhelming: The
lower the costs, the more the odds tip in your favor.
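To make the fee arithmetic concrete, here is a minimal sketch in Python. The 7% gross return, the 30-year horizon, the starting balance and the two expense ratios are illustrative assumptions, not figures from Vanguard or from this article, and the helper function ending_balance is hypothetical.

    # Hypothetical illustration of fee drag, not a real fund comparison.
    def ending_balance(start, gross_return, annual_fee, years):
        # Compound the starting balance, earning gross_return and paying annual_fee each year.
        balance = start
        for _ in range(years):
            balance *= (1 + gross_return) * (1 - annual_fee)
        return balance

    start, gross, years = 100_000, 0.07, 30
    low_cost = ending_balance(start, gross, 0.001, years)   # assumed 0.10% expense ratio
    high_cost = ending_balance(start, gross, 0.010, years)  # assumed 1.00% expense ratio
    print(f"Low-cost fund:  ${low_cost:,.0f}")
    print(f"High-cost fund: ${high_cost:,.0f}")
    print(f"Lost to the extra fee and forgone compounding: ${low_cost - high_cost:,.0f}")

On those assumptions the cheaper fund ends the period roughly a third larger, which is the sense in which below-average fees tip the odds in your favor.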
5. Time is the most powerful force in investing.
Eighty-four-year-old Warren Buffett’s current net worth is around $73 billion, nearly all of which is in Berkshire Hathaway stock. Berkshire’s stock has risen 24-fold since 1990.
Do the math, and some $70 billion of Mr. Buffett’s $73 billion fortune was accumulated around or after his 60th birthday.
Mr.
Buffett is, of course, a phenomenal investor whose talents few will
replicate. But the real key to his wealth is that he has been a
phenomenal investor for two-thirds of a century.
Wealth grows exponentially—a little at first, then slightly more, and then in a hurry for those who stick around the longest.
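A brief sketch of that exponential pattern, under the assumption of a steady 20% annual return over 65 years rather than Berkshire Hathaway’s actual record:

    # Hypothetical compounding illustration; the rate and horizon are assumptions.
    rate, years, start = 0.20, 65, 1.0
    values = [start * (1 + rate) ** t for t in range(years + 1)]
    final = values[-1]
    two_thirds_in = values[(2 * years) // 3]  # value roughly two-thirds of the way through
    print(f"Final value: about {final:,.0f} times the starting stake")
    print(f"Share of the final sum earned in the last third: {(final - two_thirds_in) / final:.0%}")

Under those assumptions, nearly all of the terminal wealth appears in the final third of the holding period, which is the pattern the Buffett figures illustrate.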
That
lesson—that time, patience and endurance pay off—is something we
mortals can learn from, particularly younger workers just starting to
save for retirement.
June marks the 800th
anniversary of Magna Carta, the ‘Great Charter’ that established the
rule of law for the English-speaking world. Its revolutionary impact
still resounds today, writes Daniel Hannan
King John, pressured
by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’
on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James
Doyle’s ‘A Chronicle of England.’
Eight hundred years ago next month, on a reedy stretch of
riverbank in southern England, the most important bargain in the history
of the human race was struck. I realize that’s a big claim, but in this
case, only superlatives will do. As Lord Denning, the most celebrated
modern British jurist, put it, Magna Carta was “the greatest
constitutional document of all time, the foundation of the freedom of
the individual against the arbitrary authority of the despot.”
It
was at Runnymede, on June 15, 1215, that the idea of the law standing
above the government first took contractual form. King John accepted
that he would no longer get to make the rules up as he went along. From
that acceptance flowed, ultimately, all the rights and freedoms that we
now take for granted: uncensored newspapers, security of property,
equality before the law, habeas corpus, regular elections, sanctity of
contract, jury trials.
Magna Carta is Latin for “Great Charter.”
It was so named not because the men who drafted it foresaw its epochal
power but because it was long. Yet, almost immediately, the document
began to take on a political significance that justified the adjective
in every sense.
The bishops and barons who had brought King John
to the negotiating table understood that rights required an enforcement
mechanism. The potency of a charter is not in its parchment but in the
authority of its interpretation. The constitution of the U.S.S.R., to
pluck an example more or less at random, promised all sorts of
entitlements: free speech, free worship, free association. But as Soviet
citizens learned, paper rights are worthless in the absence of
mechanisms to hold rulers to account.
Magna Carta instituted a form of conciliar rule that was to develop
directly into the Parliament that meets at Westminster today. As the
great Victorian historian William Stubbs put it, “the whole
constitutional history of England is little more than a commentary on
Magna Carta.”
And
not just England. Indeed, not even England in particular. Magna Carta
has always been a bigger deal in the U.S. The meadow where the
abominable King John put his royal seal to the parchment lies in my
electoral district in the county of Surrey. It went unmarked until 1957,
when a memorial stone was finally raised there—by the American Bar
Association.
Only now, for the anniversary, is a British
monument being erected at the place where freedom was born. After some
frantic fundraising by me and a handful of local councilors, a large
bronze statue of Queen Elizabeth II will gaze out across the slow, green
waters of the Thames, marking 800 years of the Crown’s acceptance of
the rule of law.
Eight hundred years is a long wait. We British
have, by any measure, been slow to recognize what we have. Americans, by
contrast, have always been keenly aware of the document, referring to
it respectfully as the Magna Carta.
Why? Largely because
of who the first Americans were. Magna Carta was reissued several times
throughout the 14th and 15th centuries, as successive Parliaments
asserted their prerogatives, but it receded from public consciousness
under the Tudors, whose dynasty ended with the death of Elizabeth I in
1603.
In the early 17th century, members of Parliament revived
Magna Carta as a weapon in their quarrels with the autocratic Stuart
monarchs. Opposition to the Crown was led by the brilliant lawyer Edward
Coke (pronounced Cook), who drafted the first Virginia Charter in 1606.
Coke’s argument was that the king was sidelining Parliament, and so
unbalancing the “ancient constitution” of which Magna Carta was the
supreme expression.
United for the first
time, the four surviving original Magna Carta manuscripts are prepared
for display at the British Library, London, Feb. 1, 2015.
The early settlers arrived while these rows were at their height and
carried the mania for Magna Carta to their new homes. As early as 1637,
Maryland sought permission to incorporate Magna Carta into its basic
law, and the first edition of the Great Charter was published on
American soil in 1687 by William Penn, who explained that it was what
made Englishmen unique: “In France, and other nations, the mere will of
the Prince is Law, his word takes off any man’s head, imposeth taxes, or
seizes any man’s estate, when, how and as often as he lists; But in
England, each man hath a fixed Fundamental Right born with him, as to
freedom of his person and property in his estate, which he cannot be
deprived of, but either by his consent, or some crime, for which the law
has imposed such a penalty or forfeiture.”
There was a
divergence between English and American conceptions of Magna Carta. In
the Old World, it was thought of, above all, as a guarantor of
parliamentary supremacy; in the New World, it was already coming to be
seen as something that stood above both Crown and Parliament. This
difference was to have vast consequences in the 1770s.
The
American Revolution is now remembered on both sides of the Atlantic as a
national conflict—as, indeed, a “War of Independence.” But no one at
the time thought of it that way—not, at any rate, until the French
became involved in 1778. Loyalists and patriots alike saw it as a civil
war within a single polity, a war that divided opinion every bit as much
in Great Britain as in the colonies.
The American
Revolutionaries weren’t rejecting their identity as Englishmen; they
were asserting it. As they saw it, George III was violating the “ancient
constitution” just as King John and the Stuarts had done. It was
therefore not just their right but their duty to resist, in the words of
the delegates to the first Continental Congress in 1774, “as Englishmen
our ancestors in like cases have usually done.”
Nowhere, at this
stage, do we find the slightest hint that the patriots were fighting
for universal rights. On the contrary, they were very clear that they
were fighting for the privileges bestowed on them by Magna Carta. The
concept of “no taxation without representation” was not an abstract
principle. It could be found, rather, in Article 12 of the Great
Charter: “No scutage or aid is to be levied in our realm except by the
common counsel of our realm.” In 1775, Massachusetts duly adopted as its
state seal a patriot with a sword in one hand and a copy of Magna Carta
in the other.
I recount these facts to make an important, if
unfashionable, point. The rights we now take for granted—freedom of
speech, religion, assembly and so on—are not the natural condition of an
advanced society. They were developed overwhelmingly in the language in
which you are reading these words.
When we call them universal
rights, we are being polite. Suppose World War II or the Cold War had
ended differently: There would have been nothing universal about them
then. If they are universal rights today, it is because of a series of
military victories by the English-speaking peoples.
Various early
copies of Magna Carta survive, many of them in England’s cathedrals,
tended like the relics that were removed during the Reformation. One
hangs in the National Archives in Washington, D.C., next to the two
documents it directly inspired: the Declaration of Independence and the
Constitution. Another enriches the Australian Parliament in Canberra.
But
there are only four 1215 originals. One of them, normally housed at
Lincoln Cathedral, has recently been on an American tour, resting for
some weeks at the Library of Congress. It wasn’t that copy’s first visit
to the U.S. The same parchment was exhibited in New York at the 1939
World’s Fair, attracting an incredible 13 million visitors. World War II
broke out while it was still on display, and it was transferred to Fort
Knox for safekeeping until the end of the conflict.
Could there
have been a more apt symbol of what the English-speaking peoples were
fighting for in that conflagration? Think of the world as it stood in
1939. Constitutional liberty was more or less confined to the
Anglosphere. Everywhere else, authoritarianism was on the rise. Our
system, uniquely, elevated the individual over the state, the rules over
the rulers.
When the 18th-century statesman Pitt the Elder
described Magna Carta as England’s Bible, he was making a profound
point. It is, so to speak, the Torah of the English-speaking peoples:
the text that sets us apart while at the same time speaking truths to
the rest of mankind.
The very success of Magna Carta makes it
hard for us, 800 years on, to see how utterly revolutionary it must have
appeared at the time. Magna Carta did not create democracy: Ancient
Greeks had been casting differently colored pebbles into voting urns
while the remote fathers of the English were grubbing about alongside
pigs in the cold soil of northern Germany. Nor was it the first
expression of the law: There were Sumerian and Egyptian law codes even
before Moses descended from Sinai.
What Magna Carta initiated,
rather, was constitutional government—or, as the terse inscription on
the American Bar Association’s stone puts it, “freedom under law.”
It
takes a real act of imagination to see how transformative this concept
must have been. The law was no longer just an expression of the will of
the biggest guy in the tribe. Above the king brooded something more
powerful yet—something you couldn’t see or hear or touch or taste but
that bound the sovereign as surely as it bound the poorest wretch in the
kingdom. That something was what Magna Carta called “the law of the
land.”
This phrase is commonplace in our language. But think of
what it represents. The law is not determined by the people in
government, nor yet by clergymen presuming to interpret a holy book.
Rather, it is immanent in the land itself, the common inheritance of the
people living there.
The idea of the law coming up from the
people, rather than down from the government, is a peculiar feature of
the Anglosphere. Common law is an anomaly, a beautiful, miraculous
anomaly. In the rest of the world, laws are written down from first
principles and then applied to specific disputes, but the common law
grows like a coral, case by case, each judgment serving as the starting
point for the next dispute. In consequence, it is an ally of freedom
rather than an instrument of state control. It implicitly assumes
residual rights.
And indeed, Magna Carta conceives rights in
negative terms, as guarantees against state coercion. No one can put you
in prison or seize your property or mistreat you other than by due
process. This essentially negative conception of freedom is worth
clinging to in an age that likes to redefine rights as entitlements—the
right to affordable health care, the right to be forgotten and so on.
It
is worth stressing, too, that Magna Carta conceived freedom and
property as two expressions of the same principle. The whole document
can be read as a lengthy promise that the goods of a free citizen will
not be arbitrarily confiscated by someone higher up the social scale.
Even the clauses that seem most remote from modern experience generally
turn out, in reality, to be about security of ownership.
There
are, for example, detailed passages about wardship. King John had been
in the habit of marrying heiresses to royal favorites as a way to get
his hands on their estates. The abstruse-sounding articles about
inheritance rights are, in reality, simply one more expression of the
general principle that the state may not expropriate without due
process.
Those who stand awe-struck before the Great Charter
expecting to find high-flown phrases about liberty are often surprised
to see that a chunk of it is taken up with the placing of fish-traps on
the Thames. Yet these passages, too, are about property, specifically
the freedom of merchants to navigate inland waterways without having
arbitrary tolls imposed on them by fish farmers.
Liberty and
property: how naturally those words tripped, as a unitary concept, from
the tongues of America’s Founders. These were men who had been shaped in
the English tradition, and they saw parliamentary government not as an
expression of majority rule but as a guarantor of individual freedom.
How different was the Continental tradition, born 13 years later with
the French Revolution, which saw elected assemblies as the embodiment of
what Rousseau called the “general will” of the people.
In that
difference, we may perhaps discern explanation of why the Anglosphere
resisted the chronic bouts of authoritarianism to which most other
Western countries were prone. We who speak this language have always
seen the defense of freedom as the duty of our representatives and so,
by implication, of those who elect them. Liberty and democracy, in our
tradition, are not balanced against each other; they are yoked together.
In February, the four surviving original copies of Magna Carta were
united, for just a few hours, at the British Library—something that had
not happened in 800 years. As I stood reverentially before them, someone
recognized me and posted a photograph on Twitter with the caption: “If
Dan Hannan gets his hands on all four copies of Magna Carta, will he be
like Sauron with the Rings?”
Yet the majesty of the document
resides in the fact that it is, so to speak, a shield against Saurons.
Most other countries have fallen for, or at least fallen to, dictators.
Many, during the 20th century, had popular communist parties or fascist
parties or both. The Anglosphere, unusually, retained a consensus behind
liberal capitalism.
This is not because of any special property
in our geography or our genes but because of our constitutional
arrangements. Those constitutional arrangements can take root anywhere.
They explain why Bermuda is not Haiti, why Hong Kong is not China, why
Israel is not Syria.
They work because, starting with Magna
Carta, they have made the defense of freedom everyone’s responsibility.
Americans, like Britons, have inherited their freedoms from past
generations and should not look to any external agent for their
perpetuation. The defense of liberty is your job and mine. It is up to
us to keep intact the freedoms we inherited from our parents and to pass
them on securely to our children.
Mr. Hannan is a British
member of the European Parliament for the Conservative Party, a
columnist for the Washington Examiner and the author of “Inventing
Freedom: How the English-speaking Peoples Made the Modern World.”
White House officials can be oddly candid in talking to their liberal
friends at the New Yorker magazine. That’s where an unnamed official in
2011 boasted of “leading from behind,” and where last year President Obama
dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice
President has revealed the Administration line on human rights in
China.
In the April 6 issue, Joe Biden recounts meeting Xi Jinping
months before his 2012 ascent to be China’s supreme leader. Mr. Xi
asked him why the U.S. put “so much emphasis on human rights.” The right
answer is simple: No government has the right to deny its citizens
basic freedoms, and those that do tend also to threaten peace overseas,
so U.S. support for human rights is a matter of values and interests.
Instead,
Mr. Biden downplayed U.S. human-rights rhetoric as little more than
political posturing. “No president of the United States could represent
the United States were he not committed to human rights,” he told Mr.
Xi. “President Barack Obama would not be able to stay in power if he did
not speak of it. So look at it as a political imperative.” Then Mr.
Biden assured China’s leader: “It doesn’t make us better or worse. It’s
who we are. You make your decisions. We'll make ours."
Mr. Xi took the advice. Since taking office he has detained more
than 1,000 political prisoners, from anticorruption activist Xu Zhiyong
to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia
under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left
prison in August but is blocked from receiving medical care overseas.
Hong Kong, China’s most liberal city, is losing its press freedom and
political autonomy.
Amid all of this, Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton's
2009 statement that human rights can’t be allowed to “interfere” with
diplomacy on issues such as the economy and the environment. Mr. Obama
tried walking that back months later, telling the United Nations that
democracy and human rights aren’t “afterthoughts.” But his
Administration’s record—and now Mr. Biden’s testimony—prove otherwise.
JAMA Surg. Published online March 11, 2015. doi:10.1001/jamasurg.2014.2911.
On Thursday mornings, our operating room management committee meets
to handle items large and small. Most of our discussions focus on
block-time allocation, purchasing decisions, and the like. However, too
often we talk about behavioral issues, particularly the now
well-characterized disruptive physician.
We have all seen it or been there before. A physician acts out in the
operating room with shouting or biting sarcasm, intimidating colleagues
and staff and impeding them from functioning at a high level. The most
debilitating perpetrators of this behavior are repeat customers who
engender such fear and uncertainty in all who contact them that the
morale of the nursing staff and anesthesiologists is undermined, work
becomes an unbearable chore, and performance suffers.
When one engages a difficult physician on his or her behavior, the
physician responds in characteristic fashion. He or she defends his or
her actions as patient advocacy, pointing out the shortcomings of the
scrub nurse or instruments and showing limited, if any, remorse. He or
she argues that such civil disobedience is the only way to enact change.
In truth, disruptive physicians’ actions are often admired by a sizable
minority of their colleagues as the only way to articulate real
frustrations of working in today’s highly complex hospital. In extreme
situations, these physicians become folk heroes to younger physicians
who envy their fortitude in confronting the power of the bureaucracy.
A few days after a recent outburst by a particularly unpleasant and
repeat offender, I was enjoying my daily interval on the stationary
bicycle at my gym. My thoughts were wandering to a broad range of
topics. I spent some time considering what really drives this
nonproductive behavior and how otherwise valuable physicians could be
channeled successfully into a more collegial state. As in the past, I
was long on theory but short on conviction that it would make a
difference.
After my workout, as I prepared to shower, I received an urgent email.
A patient I was consulting for upper extremity embolization had
developed confusion and possible cerebral emboli despite full
anticoagulation. I responded that I was on my way to see her and
suggested a few diagnostic tests and consultations.
As I typed my message, a custodial employee of the gym reminded me
that no cellular telephones were allowed in the locker room. I pointed
out that I was not using my cellular telephone as a telephone but rather
its email function and that I was not offending anyone by talking. He again pointed out
that cellular telephones were not allowed under any circumstances. I
argued back, "I am a physician and this is an emergency." My voice got
louder and I became confrontational. I told him to call the manager.
Another member next to me said quietly that the reason for the cellular
telephone ban was the photographic potential of the devices and that I
could have simply moved to the reception area and used the telephone any
way I wished.
I felt like the fool I was. I trudged off to the showers feeling, as
in the Texas homily, lower than a snake’s belly. After toweling off, I
approached the employee and apologized for my behavior and for making
his job more difficult. I told him he had handled the situation far
better than I had and that I admired his restraint.
The lessons were stark and undeniable. Like my disruptive colleagues,
I had justified my boorish behavior in the name of patient care. I had assumed my
need to break the rules far outweighed the reasonable and rational
policy of the establishment; after all, I was important and people
depended on me. Worse yet, I felt empowered to take out my frustration,
enhanced by my worry about the patient, on someone unlikely to retaliate
against me for fear of job loss.
I have come to realize that irrespective of disposition, when the
setting is right, we are all potentially disruptive. The only questions
are how frequent and how severe. Even more importantly, from a
prognostic perspective, can we share the common drivers of these
behaviors and develop insights that will lead to avoidance?
The most common approaches used today are only moderately effective.
As in many other institutions, when physicians are deemed by their peers
to have violated a carefully defined code of conduct, they are advised
to apologize to any offended personnel. In many instances, these
apologies are sincere and are, in fact, appreciated by all.
Unfortunately, on occasion, the interaction is viewed as a forced
function and the behavior is soon repeated, albeit in a different nursing
unit or operating room.
When such failures occur, persistently disruptive physicians are
referred to our physician well-being committee. Through a highly
confidential process, efforts are made to explore the potential causes
for the behavior and acquaint the referred physician with the
consequences of his or her actions on hospital function. Often, behavioral
contracts are drawn up to precisely outline the individual’s issues and
subsequent medical staff penalties if further violations occur.
That said, as well intentioned and psychologically sound as these
programs are, there remains a hard core of repeat offenders. Despite the
heightened stress and ill will engendered by their behavior, these
physicians simply cannot interact in anything other than a confrontational
fashion when frustrated by real or imagined shortcomings in the
environment.
Based on nearly 20 years of physician management experience, it is my
belief that in these few physicians, such behaviors are hard-wired and
fairly resistant to traditional counseling. An unfortunate end game is
termination from a medical staff if the hostile working environment
created by their outbursts is viewed as a liability threat by the
institution. Such actions are always painful and bring no satisfaction
to anyone involved. These high-stakes dramas, often involving critical
institutional players on both sides, are played out behind closed doors.
Few people are privy to the details of either the infraction or the
attempts at remediation. Misunderstandings in the staff are common.
I suggest that an underused remedy is more intense peer pressure
through continued education of those colleagues who might silently
support these outbursts without fully realizing the consequences. This
would begin by treating these incidents in the same way that we do other
significant adverse events that occur in our hospitals. In confidential
but interdisciplinary sessions, the genesis, nature, and consequences
of the interaction could be explored openly. If indeed the inciting
event was judged to be an important patient care issue, the problem
could be identified and addressed yet clearly separated from the
counterproductive interaction that followed. In addition to the
deterrence provided by the more public airing of the incidents, the
tenuous linkage between abusive behavior and patient protection could be
severed. It is this linkage that provides any superficial legitimacy to
the outbursts.
Through this process, peer pressure would be increased and provide a
greater impetus for self-control and more productive interactions.
Importantly, with such a direct and full examination of both the
character and costs of poor conduct, whatever support exists for such
behaviors within the medical staff would be diminished.
Bruce Gewertz, MD, Cedars-Sinai Health System
Published Online: March 11, 2015. doi:10.1001/jamasurg.2014.2911. Conflict of Interest Disclosures: None reported.
The Obama administration's troubling flirtation with another mortgage
meltdown took an unsettling turn on Tuesday with Federal Housing Finance
Agency Director Mel Watt's testimony before the House Financial Services
Committee.
Mr. Watt told the committee that, having received "feedback from
stakeholders," he expects to release by the end of March new guidance on
the "guarantee fee" charged by Fannie Mae and Freddie Mac to cover the
credit risk on loans the federal mortgage agencies guarantee.
Here
we go again. In the Obama administration, new guidance on housing
policy invariably means lowering standards to get mortgages into the
hands of people who may not be able to afford them.
Earlier this
month, President Obama announced that the Federal Housing
Administration (FHA) will begin lowering annual mortgage-insurance
premiums “to make mortgages more affordable and accessible.” While that
sounds good in the abstract, the decision is a bad one with serious
consequences for the housing market.
Government programs to make
mortgages more widely available to low- and moderate-income families
have consistently offered overleveraged, high-risk loans that set up too
many homeowners to fail. In the long run-up to the 2008 financial
crisis, for example, federal mortgage agencies and their regulators
cajoled and wheedled private lenders to loosen credit standards. They
have been doing so again. When the next housing crash arrives, private
lenders will be blamed—and homeowners and taxpayers will once again pay
dearly.
Lowering annual mortgage-insurance premiums is part of a
new affordable-lending effort by the Obama administration. More
specifically, it is the latest salvo in a price war between two
government mortgage giants to meet government mandates.
Fannie
Mae fired the first shot in December when it relaunched the 30-year, 97%
loan-to-value, or LTV, mortgage (a type of loan that was suspended in
2013). Fannie revived these 3% down-payment mortgages at the behest of
its federal regulator, the Federal Housing Finance Agency (FHFA)—which
has run Fannie Mae and Freddie Mac since 2008, when both
government-sponsored enterprises (GSEs) went belly up and were put into
conservatorship. The FHA’s mortgage-premium price rollback was a
counteroffensive.
Fannie’s
goal in 1994 and today is to take market share from the FHA, the main
competitor for loans it and Freddie Mac need to meet mandates set by
Congress since 1992 to increase loans to low- and moderate-income
homeowners. The weapons in this war are familiar—lower pricing and
progressively looser credit as competing federal agencies fight over
existing high-risk lending and seek to expand such lending.
Mortgage
price wars between government agencies are particularly dangerous,
since access to low-cost capital and minimal capital requirements gives
them the ability to continue for many years—all at great risk to the
taxpayers. Government agencies also charge low-risk consumers more than
necessary to cover the risk of default, using the overage to lower fees
on loans to high-risk consumers.
Starting in 2009 the FHFA
released annual studies documenting the widespread nature of these
cross-subsidies. The reports showed that low down payment, 30-year loans
to individuals with low FICO scores were consistently subsidized by
less-risky loans.
Unfortunately, special interests such as the
National Association of Realtors—always eager to sell more houses and
reap the commissions—and the left-leaning Urban Institute were
cheerleaders for loose credit. In 1997, for example, HUD commissioned
the Urban Institute to study Fannie and Freddie’s single-family
underwriting standards. The Urban Institute’s 1999 report found that
“the GSEs’ guidelines, designed to identify creditworthy applicants, are
more likely to disqualify borrowers with low incomes, limited wealth,
and poor credit histories; applicants with these characteristics are
disproportionately minorities.” By 2000 Fannie and Freddie did away with
down payments and raised debt-to-income ratios. HUD encouraged them to
more aggressively enter the subprime market, and the GSEs decided to
re-enter the “liar loan” (low doc or no doc) market, partly in a desire
to meet higher HUD low- and moderate-income lending mandates.
On
Jan. 6, the Urban Institute announced in a blog post: “FHA: Time to stop
overcharging today’s borrowers for yesterday’s mistakes.” The institute
endorsed an immediate cut of 0.40% in mortgage-insurance premiums
charged by the FHA. But once the agency cuts premiums, Fannie and
Freddie will inevitably reduce the guarantee fees charged to cover the
credit risk on the loans they guarantee.
Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.
This
is happening despite Congress’s 2011 mandate that Fannie’s regulator
adjust the prices of mortgages and guarantee fees to make sure they
reflect the actual risk of loss—that is, to eliminate dangerous and
distortive pricing by the two GSEs. Ed DeMarco, acting director of the
FHFA since March 2009, worked hard to do so but left office in January
2014. Mr. Watt, his successor, suspended Mr. DeMarco's efforts to comply
with Congress's mandate. Now that Fannie
will once again offer heavily subsidized 3%-down mortgages, massive new
cross-subsidies will return, and the congressional mandate will be
ignored.
The law stipulates that the FHA maintain a
loss-absorbing capital buffer equal to 2% of the value of its
outstanding mortgages. The agency obtains this capital from profits
earned on mortgages and future premiums. It hasn’t met its capital
obligation since 2009 and will not reach compliance until the fall of
2016, according to the FHA’s latest actuarial report. But if the economy
runs into another rough patch, this projection will go out the window.
Congress
should put an end to this price war before it does real damage to the
economy. It should terminate the ill-conceived GSE affordable-housing
mandates and impose strong capital standards on the FHA that can’t be
ignored as they have been for five years and counting.
Mr. Pinto,
former chief credit officer of Fannie Mae, is co-director and
chief risk officer of the International Center on Housing Risk at the
American Enterprise Institute.
Condolences to the family of Luke Somers, the kidnapped American
journalist who was murdered Saturday during a rescue attempt by U.S.
special forces in Yemen. His death is a moment for sadness and anger, but
also for pride in the rescue team and praise for the Obama Administration
for ordering the attempt.
According to the
Journal’s account based on military and Administration sources, some 40
special forces flew to a remote part of Yemen, marching five miles to
escape detection, but lost the element of surprise about 100 yards from
the jihadist hideout. One of the terrorists was observed by drone
surveillance to enter a building where it is believed he shot Somers and
a South African hostage, Pierre Korkie.
The special forces carried the wounded men out by helicopter, but
one died en route and the other aboard a Navy ship.
There is no
blame for failing to save Somers, whose al Qaeda captors had released a
video on Thursday vowing to kill him in 72 hours if the U.S. did not
meet unspecified demands. The jihadists were no doubt on high alert
after special forces conducted a rescue attempt in late November at a
hillside cave. The commandos rescued eight people, mostly Yemenis, but
Somers had been moved.
It’s a tribute to the skill of U.S. special forces that these
high-risk missions against a dangerous enemy don’t fail more often. But
given good intelligence and a reasonable chance to save Somers, the
fault would have been not to try for fear of failure or political blame.
The reality is that most American and British citizens captured
by jihadists are now likely to be murdered as a terrorist statement.
This isn’t always true for citizens of other countries that pay ransom.
But the U.S. and U.K. rightly refuse on the grounds that the payments
create an incentive for more kidnappings while enriching the terrorists.
Jihadists
don’t distinguish between civilians and soldiers, or among journalists,
clergy, doctors or aid workers. They are waging what they think is a
struggle to the death against other religious faiths and the West. Their
goal is to kill for political control and their brand of Islam.
The
murders are likely to increase as the U.S. fight against Islamic State
intensifies. The jihadists know from experience that they can’t win a
direct military confrontation, so their goal is to weaken the resolve of
democracies at home. Imposing casualties on innocent Americans abroad
and attacking the homeland are part of their military strategy.
They don’t seem to realize that such brutality often backfires, reinforcing U.S. public resolve, as even
Osama bin Laden
understood judging by his intercepted communications. But
Americans need to realize that there are no safe havens in this long
war. Everyone is a potential target.
So we are entering an era
when the U.S. will have to undertake more such rescues of Americans
kidnapped overseas. The results will be mixed, but even failed attempts
will send a message to jihadists that capturing Americans will make them
targets—and that there is no place in the world they can’t be found and
killed.
It’s a tragedy that fanatical Islamists have made the
world so dangerous, but Americans should be proud of a country that has
men and women willing to risk their own lives to leave no American
behind.
Jonathan Gruber’s ‘Stupid’ Budget Tricks. WSJ Editorial His ObamaCare candor shows how Congress routinely cons taxpayers.Wall Street Journal, Nov. 14, 2014 6:51 p.m. ET
As a rule, Americans don’t like to be called “stupid,” as Jonathan Gruber is discovering. Whatever his academic contempt for voters, the ObamaCare architect and Massachusetts Institute of Technology economist deserves the Presidential Medal of Freedom for his candor about the corruption of the federal budget process.
In his now-infamous talk at the University of Pennsylvania last year, Professor Gruber argued that the Affordable Care Act “would not have passed” had Democrats been honest about the income-redistribution policies embedded in its insurance regulations. But the more instructive moment is his admission that “this bill was written in a tortured way to make sure CBO did not score the mandate as taxes. If CBO scored the mandate as taxes, the bill dies.”
Mr. Gruber means the Congressional Budget Office, the institution responsible for putting “scores” or official price tags on legislation. He’s right that to pass ObamaCare Democrats perpetrated the rawest, most cynical abuse of the CBO since its creation in 1974.
In another clip from Mr. Gruber’s seemingly infinite video library, he discusses how he and Democrats wrote the law to game the CBO’s fiscal conventions and achieve goals that would otherwise be “politically impossible.” In still another, he explains that these ruses are “a sad statement about budget politics in the U.S., but there you have it.”
Yes you do. Such admissions aren’t revelations, since the truth has long been obvious to anyone curious enough to look. We and other critics wrote about ObamaCare’s budget gimmicks during the debate, and Rep. Paul Ryan exposed them at the 2010 “health summit.” President Obama changed the subject.
But rarely are liberal intellectuals as full frontal as Mr. Gruber about the accounting fraud ingrained in ObamaCare. Also notable are his do-what-you-gotta-do apologetics: “I’d rather have this law than not,” he says.
Recall five years ago. The White House wanted to pretend that the open-ended new entitlement would spend less than $1 trillion over 10 years and reduce the deficit too. Congress requires the budget gnomes to score bills as written, no matter how unrealistic the assumption or fake the promise. Democrats with the help of Mr. Gruber carefully designed the bill to exploit this built-in gullibility.
So they used a decade of taxes to fund merely six years of insurance subsidies. They made believe that Medicare payments to hospitals would someday fall below Medicaid rates. A since-repealed program for long-term care front-loaded taxes but back-loaded spending, designed to gradually go broke. Remember the spectacle of Democrats waiting for the white smoke to come up from CBO and deliver the holy scripture verdict?
On the tape, Mr. Gruber also identifies a special liberal manipulation: CBO’s policy reversal to not count the individual mandate to buy insurance as an explicit component of the federal budget. In 1994, then CBO chief Robert Reischauer reasonably determined that if the government forces people to buy a product by law, then those transactions no longer belong to the private economy but to the U.S. balance sheet. The CBO’s face-melting cost estimate helped to kill HillaryCare.
The CBO director responsible for this switcheroo that moved much of ObamaCare’s real spending off the books was Peter Orszag, who went on to become Mr. Obama’s budget director. Mr. Orszag nonetheless assailed CBO during the debate for not giving him enough credit for the law’s phantom “savings.”
Then again, Mr. Gruber told a Holy Cross audience in 2010 that although ObamaCare “is 90% health insurance coverage and 10% about cost control, all you ever hear people talk about is cost control. How it’s going to lower the cost of health care, that’s all they talk about. Why? Because that’s what people want to hear about because a majority of Americans care about health-care costs.”
***
Both political parties for some reason treat the CBO with the same reverence the ancient Greeks reserved for the Delphic oracle, but Mr. Gruber’s honesty is another warning that the budget rules are rigged to expand government and hide the true cost of entitlements. CBO scores aren’t unambiguous facts but are guesses about the future, biased by the Keynesian assumptions and models its political masters in Congress instruct it to use.
Republicans who now run Congress can help taxpayers by appointing a new CBO director, as is their right as the majority. Current head Doug Elmendorf is a respected economist, and he often has a dry wit as he reminds Congressfolk that if they feed him garbage, he must give them garbage back. But if the GOP won’t abolish the institution, then they can find a replacement who is as candid as Mr. Gruber about the flaws and limitations of the CBO status quo. The Tax Foundation’s Steve Entin would be an inspired pick.
Democrats are now pretending they’ve never heard of Mr. Gruber, though they used to appeal to his authority when he still had some. His commentaries are no less valuable because he is now a political liability for Democrats.