Brilliance isn’t the only key to Warren Buffett’s investing success. See rule No. 5.
The U.S. economy shrank last quarter. The Federal Reserve is
widely expected to begin raising interest rates later this year. U.S.
stocks are expensive by many measures. Greece’s national finances remain
fragile. Oh, and election season already is under way in the U.S.
Investors who are tempted to sell risky assets and flee to safety don’t have to look far for justification.
If you are one of them, ponder this: Most of what matters in investing involves bedrock principles, not current events.
Here are five principles every investor should keep in mind:
1. Diversification is how you limit the risk of losses in an uncertain world.
If, 30 years ago, a visitor from the future had said that the Soviet Union had collapsed, Japan’s stock market had stagnated for a quarter century, China had become a superpower and North Dakota had helped turn the U.S. into a fast-growing source of crude oil, few would have believed it.
The next 30 years will be just as surprising.
Diversification among different assets can be frustrating. It requires, at every point in time, owning some unpopular assets.
Why would I want to own European stocks if Europe’s economy is such a mess? Why should I buy bonds if interest rates are so low?
The appropriate answer is, “Because the future will play out in ways you or your adviser can’t possibly comprehend.”
Owning a little bit of everything is a bet on humility, which the history of investing shows is a valuable trait.
2. You are your own worst enemy.
The biggest risk investors face isn’t a recession, a bear market, the Federal Reserve or their least favorite political party.
It is their own emotions and biases, and the destructive behaviors they cause.
You can be the best stock picker in the world, capable of finding tomorrow’s winning businesses before anyone else. But if you panic and sell during the next bear market, none of it will matter.
You can be armed with an M.B.A. and have 40 years before retirement to let your savings compound into a fortune. But if you have a gambling mentality and you day-trade penny stocks, your outlook is dismal.
You can be a mathematical genius, building the most sophisticated stock-market forecasting models. But if you don’t understand the limits of your intelligence, you are on your way to disaster.
There aren’t many iron rules of investing, but one of them is that no amount of brain power can compensate for behavioral errors. Figure out what mistakes you are prone to make and embrace strategies that limit the risk.
3. There is a price to pay.
The stock market has historically offered stellar long-term returns, far better than cash or bonds.
But there is a cost. The price of admission to earn high long-term returns in stocks is a ceaseless torrent of unpredictable outcomes, senseless volatility and unexpected downturns.
If you can stick with your investments through the rough spots, you don’t actually pay this bill; it is a mental surcharge. But it is very real. Not everyone is willing to pay it, which is why there is opportunity for those who are.
There is an understandable desire to forecast what the market will do in the short run. But the reason stocks offer superior long-term returns is precisely because we can’t forecast what they will do in the short run.
4. When in doubt, choose the investment with the lowest fee.
As a group, investors’ profits always will equal the overall market’s returns minus all fees and expenses.
Below-average fees, therefore, offer one of your best shots at earning above-average results.
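The arithmetic behind this principle can be sketched in a few lines. This is a minimal illustration with hypothetical numbers — the 7% gross return, the 0.10% and 1.00% fee levels, and the $10,000 stake are assumptions for the example, not figures from the article:

```python
# Illustrative sketch: how a fee gap compounds over a long holding period.
# All inputs are hypothetical round numbers chosen for the example.
def final_value(principal, gross_return, annual_fee, years):
    """Value after compounding the net-of-fee return each year."""
    net_annual = 1 + gross_return - annual_fee
    return principal * net_annual ** years

low_cost = final_value(10_000, 0.07, 0.001, 30)   # 0.10% fee, e.g. an index fund
high_cost = final_value(10_000, 0.07, 0.010, 30)  # 1.00% fee, e.g. an active fund

print(f"low-cost fund:  ${low_cost:,.0f}")
print(f"high-cost fund: ${high_cost:,.0f}")
print(f"fee drag:       ${low_cost - high_cost:,.0f}")
```

Under these assumptions the 0.9-point fee gap costs well over a year's worth of the original stake by year 30 — the gap itself compounds, which is why small-looking fees matter so much.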
A
talented fund manager can be worth a higher fee, mind you. But enduring
outperformance is one of the most elusive investing skills.
According
to Vanguard Group, which has championed low-cost investing products,
more than 80% of actively managed U.S. stock funds underperformed a
low-cost index fund in the 10 years through December. It is far more
common for a fund manager to charge excess fees than to deliver excess
performance.
There are no promises in investing. The best you can
do is put the odds in your favor. And the evidence is overwhelming: The
lower the costs, the more the odds tip in your favor.
5. Time is the most powerful force in investing.
Eighty-four-year-old Warren Buffett’s current net worth is around $73 billion, nearly all of which is in Berkshire Hathaway stock. Berkshire’s stock has risen 24-fold since 1990.
Do the math, and some $70 billion of Mr. Buffett’s $73 billion fortune was accumulated around or after his 60th birthday.
Mr. Buffett is, of course, a phenomenal investor whose talents few will replicate. But the real key to his wealth is that he has been a phenomenal investor for two-thirds of a century.
Wealth grows exponentially—a little at first, then slightly more, and then in a hurry for those who stick around the longest.
That lesson—that time, patience and endurance pay off—is something we mortals can learn from, particularly younger workers just starting to save for retirement.
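The back-loaded shape of exponential growth is easy to verify numerically. The sketch below uses a hypothetical flat 20% annual return and a $10,000 starting stake — assumptions for illustration, not Mr. Buffett's actual record:

```python
# Illustrative sketch: in exponential growth, most of the gain arrives late.
# The 20% rate and $10,000 stake are assumed numbers, not Buffett's history.
def wealth(principal, rate, years):
    """Compound a flat annual return for the given number of years."""
    return principal * (1 + rate) ** years

start, rate = 10_000, 0.20
at_60 = wealth(start, rate, 30)   # investing from age 30 to age 60
at_84 = wealth(start, rate, 54)   # the same streak continued to age 84

share_after_60 = 1 - at_60 / at_84
print(f"share of the age-84 fortune earned after age 60: {share_after_60:.0%}")
```

With these assumptions, the last 24 years account for the overwhelming majority of the final sum — the same pattern the article observes in Mr. Buffett's fortune.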
June marks the 800th anniversary of Magna Carta, the ‘Great Charter’ that established the rule of law for the English-speaking world. Its revolutionary impact still resounds today, writes Daniel Hannan
King John, pressured by English barons, reluctantly signs Magna Carta, the ‘Great Charter,’ on the Thames riverbank, Runnymede, June 15, 1215, as rendered in James Doyle’s ‘A Chronicle of England.’
By Daniel Hannan
Eight hundred years ago next month, on a reedy stretch of riverbank in southern England, the most important bargain in the history of the human race was struck. I realize that’s a big claim, but in this case, only superlatives will do. As Lord Denning, the most celebrated modern British jurist, put it, Magna Carta was “the greatest constitutional document of all time, the foundation of the freedom of the individual against the arbitrary authority of the despot.”
It was at Runnymede, on June 15, 1215, that the idea of the law standing above the government first took contractual form. King John accepted that he would no longer get to make the rules up as he went along. From that acceptance flowed, ultimately, all the rights and freedoms that we now take for granted: uncensored newspapers, security of property, equality before the law, habeas corpus, regular elections, sanctity of contract, jury trials.
Magna Carta is Latin for “Great Charter.”
It was so named not because the men who drafted it foresaw its epochal
power but because it was long. Yet, almost immediately, the document
began to take on a political significance that justified the adjective
in every sense.
The bishops and barons who had brought King John
to the negotiating table understood that rights required an enforcement
mechanism. The potency of a charter is not in its parchment but in the
authority of its interpretation. The constitution of the U.S.S.R., to
pluck an example more or less at random, promised all sorts of
entitlements: free speech, free worship, free association. But as Soviet
citizens learned, paper rights are worthless in the absence of
mechanisms to hold rulers to account.
Magna Carta instituted a form of conciliar rule that was to develop
directly into the Parliament that meets at Westminster today. As the
great Victorian historian William Stubbs put it, “the whole
constitutional history of England is little more than a commentary on
Magna Carta.”
And not just England. Indeed, not even England in particular. Magna Carta has always been a bigger deal in the U.S. The meadow where the abominable King John put his royal seal to the parchment lies in my electoral district in the county of Surrey. It went unmarked until 1957, when a memorial stone was finally raised there—by the American Bar Association.
Only now, for the anniversary, is a British
monument being erected at the place where freedom was born. After some
frantic fundraising by me and a handful of local councilors, a large
bronze statue of Queen Elizabeth II will gaze out across the slow, green
waters of the Thames, marking 800 years of the Crown’s acceptance of
the rule of law.
Eight hundred years is a long wait. We British
have, by any measure, been slow to recognize what we have. Americans, by
contrast, have always been keenly aware of the document, referring to
it respectfully as the Magna Carta.
Why? Largely because
of who the first Americans were. Magna Carta was reissued several times
throughout the 14th and 15th centuries, as successive Parliaments
asserted their prerogatives, but it receded from public consciousness
under the Tudors, whose dynasty ended with the death of Elizabeth I in
1603.
In the early 17th century, members of Parliament revived
Magna Carta as a weapon in their quarrels with the autocratic Stuart
monarchs. Opposition to the Crown was led by the brilliant lawyer Edward
Coke (pronounced Cook), who drafted the first Virginia Charter in 1606.
Coke’s argument was that the king was sidelining Parliament, and so
unbalancing the “ancient constitution” of which Magna Carta was the
supreme expression.
United for the first time, the four surviving original Magna Carta manuscripts are prepared for display at the British Library, London, Feb. 1, 2015.
The early settlers arrived while these rows were at their height and
carried the mania for Magna Carta to their new homes. As early as 1637,
Maryland sought permission to incorporate Magna Carta into its basic
law, and the first edition of the Great Charter was published on
American soil in 1687 by William Penn, who explained that it was what
made Englishmen unique: “In France, and other nations, the mere will of
the Prince is Law, his word takes off any man’s head, imposeth taxes, or
seizes any man’s estate, when, how and as often as he lists; But in
England, each man hath a fixed Fundamental Right born with him, as to
freedom of his person and property in his estate, which he cannot be
deprived of, but either by his consent, or some crime, for which the law
has imposed such a penalty or forfeiture.”
There was a
divergence between English and American conceptions of Magna Carta. In
the Old World, it was thought of, above all, as a guarantor of
parliamentary supremacy; in the New World, it was already coming to be
seen as something that stood above both Crown and Parliament. This
difference was to have vast consequences in the 1770s.
The American Revolution is now remembered on both sides of the Atlantic as a national conflict—as, indeed, a “War of Independence.” But no one at the time thought of it that way—not, at any rate, until the French became involved in 1778. Loyalists and patriots alike saw it as a civil war within a single polity, a war that divided opinion every bit as much in Great Britain as in the colonies.
The American
Revolutionaries weren’t rejecting their identity as Englishmen; they
were asserting it. As they saw it, George III was violating the “ancient
constitution” just as King John and the Stuarts had done. It was
therefore not just their right but their duty to resist, in the words of
the delegates to the first Continental Congress in 1774, “as Englishmen
our ancestors in like cases have usually done.”
Nowhere, at this
stage, do we find the slightest hint that the patriots were fighting
for universal rights. On the contrary, they were very clear that they
were fighting for the privileges bestowed on them by Magna Carta. The
concept of “no taxation without representation” was not an abstract
principle. It could be found, rather, in Article 12 of the Great
Charter: “No scutage or aid is to be levied in our realm except by the
common counsel of our realm.” In 1775, Massachusetts duly adopted as its
state seal a patriot with a sword in one hand and a copy of Magna Carta
in the other.
I recount these facts to make an important, if
unfashionable, point. The rights we now take for granted—freedom of
speech, religion, assembly and so on—are not the natural condition of an
advanced society. They were developed overwhelmingly in the language in
which you are reading these words.
When we call them universal
rights, we are being polite. Suppose World War II or the Cold War had
ended differently: There would have been nothing universal about them
then. If they are universal rights today, it is because of a series of
military victories by the English-speaking peoples.
Various early
copies of Magna Carta survive, many of them in England’s cathedrals,
tended like the relics that were removed during the Reformation. One
hangs in the National Archives in Washington, D.C., next to the two
documents it directly inspired: the Declaration of Independence and the
Constitution. Another enriches the Australian Parliament in Canberra.
But there are only four 1215 originals. One of them, normally housed at Lincoln Cathedral, has recently been on an American tour, resting for some weeks at the Library of Congress. It wasn’t that copy’s first visit to the U.S. The same parchment was exhibited in New York at the 1939 World’s Fair, attracting an incredible 13 million visitors. World War II broke out while it was still on display, and it was transferred to Fort Knox for safekeeping until the end of the conflict.
Could there
have been a more apt symbol of what the English-speaking peoples were
fighting for in that conflagration? Think of the world as it stood in
1939. Constitutional liberty was more or less confined to the
Anglosphere. Everywhere else, authoritarianism was on the rise. Our
system, uniquely, elevated the individual over the state, the rules over
the rulers.
When the 18th-century statesman Pitt the Elder
described Magna Carta as England’s Bible, he was making a profound
point. It is, so to speak, the Torah of the English-speaking peoples:
the text that sets us apart while at the same time speaking truths to
the rest of mankind.
The very success of Magna Carta makes it
hard for us, 800 years on, to see how utterly revolutionary it must have
appeared at the time. Magna Carta did not create democracy: Ancient
Greeks had been casting differently colored pebbles into voting urns
while the remote fathers of the English were grubbing about alongside
pigs in the cold soil of northern Germany. Nor was it the first
expression of the law: There were Sumerian and Egyptian law codes even
before Moses descended from Sinai.
What Magna Carta initiated,
rather, was constitutional government—or, as the terse inscription on
the American Bar Association’s stone puts it, “freedom under law.”
It takes a real act of imagination to see how transformative this concept must have been. The law was no longer just an expression of the will of the biggest guy in the tribe. Above the king brooded something more powerful yet—something you couldn’t see or hear or touch or taste but that bound the sovereign as surely as it bound the poorest wretch in the kingdom. That something was what Magna Carta called “the law of the land.”
This phrase is commonplace in our language. But think of
what it represents. The law is not determined by the people in
government, nor yet by clergymen presuming to interpret a holy book.
Rather, it is immanent in the land itself, the common inheritance of the
people living there.
The idea of the law coming up from the
people, rather than down from the government, is a peculiar feature of
the Anglosphere. Common law is an anomaly, a beautiful, miraculous
anomaly. In the rest of the world, laws are written down from first
principles and then applied to specific disputes, but the common law
grows like a coral, case by case, each judgment serving as the starting
point for the next dispute. In consequence, it is an ally of freedom
rather than an instrument of state control. It implicitly assumes
residual rights.
And indeed, Magna Carta conceives rights in
negative terms, as guarantees against state coercion. No one can put you
in prison or seize your property or mistreat you other than by due
process. This essentially negative conception of freedom is worth
clinging to in an age that likes to redefine rights as entitlements—the
right to affordable health care, the right to be forgotten and so on.
It is worth stressing, too, that Magna Carta conceived freedom and property as two expressions of the same principle. The whole document can be read as a lengthy promise that the goods of a free citizen will not be arbitrarily confiscated by someone higher up the social scale. Even the clauses that seem most remote from modern experience generally turn out, in reality, to be about security of ownership.
There are, for example, detailed passages about wardship. King John had been in the habit of marrying heiresses to royal favorites as a way to get his hands on their estates. The abstruse-sounding articles about inheritance rights are, in reality, simply one more expression of the general principle that the state may not expropriate without due process.
Those who stand awe-struck before the Great Charter
expecting to find high-flown phrases about liberty are often surprised
to see that a chunk of it is taken up with the placing of fish-traps on
the Thames. Yet these passages, too, are about property, specifically
the freedom of merchants to navigate inland waterways without having
arbitrary tolls imposed on them by fish farmers.
Liberty and
property: how naturally those words tripped, as a unitary concept, from
the tongues of America’s Founders. These were men who had been shaped in
the English tradition, and they saw parliamentary government not as an
expression of majority rule but as a guarantor of individual freedom.
How different was the Continental tradition, born 13 years later with
the French Revolution, which saw elected assemblies as the embodiment of
what Rousseau called the “general will” of the people.
In that difference, we may perhaps discern an explanation of why the Anglosphere resisted the chronic bouts of authoritarianism to which most other Western countries were prone. We who speak this language have always seen the defense of freedom as the duty of our representatives and so, by implication, of those who elect them. Liberty and democracy, in our tradition, are not balanced against each other; they are yoked together.
In February, the four surviving original copies of Magna Carta were
united, for just a few hours, at the British Library—something that had
not happened in 800 years. As I stood reverentially before them, someone
recognized me and posted a photograph on Twitter with the caption: “If
Dan Hannan gets his hands on all four copies of Magna Carta, will he be
like Sauron with the Rings?”
Yet the majesty of the document
resides in the fact that it is, so to speak, a shield against Saurons.
Most other countries have fallen for, or at least fallen to, dictators.
Many, during the 20th century, had popular communist parties or fascist
parties or both. The Anglosphere, unusually, retained a consensus behind
liberal capitalism.
This is not because of any special property
in our geography or our genes but because of our constitutional
arrangements. Those constitutional arrangements can take root anywhere.
They explain why Bermuda is not Haiti, why Hong Kong is not China, why
Israel is not Syria.
They work because, starting with Magna
Carta, they have made the defense of freedom everyone’s responsibility.
Americans, like Britons, have inherited their freedoms from past
generations and should not look to any external agent for their
perpetuation. The defense of liberty is your job and mine. It is up to
us to keep intact the freedoms we inherited from our parents and to pass
them on securely to our children.
Mr. Hannan is a British
member of the European Parliament for the Conservative Party, a
columnist for the Washington Examiner and the author of “Inventing
Freedom: How the English-speaking Peoples Made the Modern World.”
White House officials can be oddly candid in talking to their liberal
friends at the New Yorker magazine. That’s where an unnamed official in
2011 boasted of “leading from behind,” and where last year President Obama
dismissed Islamic State as a terrorist “jayvee team.” Now the U.S. Vice
President has revealed the Administration line on human rights in
China.
In the April 6 issue, Joe Biden recounts meeting Xi Jinping
months before his 2012 ascent to be China’s supreme leader. Mr. Xi
asked him why the U.S. put “so much emphasis on human rights.” The right
answer is simple: No government has the right to deny its citizens
basic freedoms, and those that do tend also to threaten peace overseas,
so U.S. support for human rights is a matter of values and interests.
Instead, Mr. Biden downplayed U.S. human-rights rhetoric as little more than political posturing. “No president of the United States could represent the United States were he not committed to human rights,” he told Mr. Xi. “President Barack Obama would not be able to stay in power if he did not speak of it. So look at it as a political imperative.” Then Mr. Biden assured China’s leader: “It doesn’t make us better or worse. It’s who we are. You make your decisions. We’ll make ours.”
Mr. Xi took the advice. Since taking office he has detained more
than 1,000 political prisoners, from anticorruption activist Xu Zhiyong
to lawyer Pu Zhiqiang and journalist Gao Yu. He has cracked down on Uighurs in Xinjiang, banning more Muslim practices and jailing scholar-activist Ilham Tohti for life. Anti-Christian repression and Internet controls are tightening. Nobel Peace laureate Liu Xiaobo remains in prison, his wife Liu Xia
under illegal house arrest for the fifth year. Lawyer Gao Zhisheng left
prison in August but is blocked from receiving medical care overseas.
Hong Kong, China’s most liberal city, is losing its press freedom and
political autonomy.
Amid all of this Mr. Xi and his government have faced little challenge from Washington. That is consistent with Hillary Clinton’s
2009 statement that human rights can’t be allowed to “interfere” with
diplomacy on issues such as the economy and the environment. Mr. Obama
tried walking that back months later, telling the United Nations that
democracy and human rights aren’t “afterthoughts.” But his
Administration’s record—and now Mr. Biden’s testimony—prove otherwise.
JAMA Surg. Published online March 11, 2015. doi:10.1001/jamasurg.2014.2911.
On Thursday mornings, our operating room management committee meets
to handle items large and small. Most of our discussions focus on
block-time allocation, purchasing decisions, and the like. However, too
often we talk about behavioral issues, particularly the now
well-characterized disruptive physician.
We have all seen it or been there before. A physician acts out in the
operating room with shouting or biting sarcasm, intimidating colleagues
and staff and impeding them from functioning at a high level. The most
debilitating perpetrators of this behavior are repeat customers who
engender such fear and uncertainty in all who contact them that the
morale of the nursing staff and anesthesiologists is undermined, work
becomes an unbearable chore, and performance suffers.
When one engages a difficult physician on his or her behavior, the
physician responds in characteristic fashion. He or she defends his or
her actions as patient advocacy, pointing out the shortcomings of the
scrub nurse or instruments and showing limited, if any, remorse. He or
she argues that such civil disobedience is the only way to enact change.
In truth, disruptive physicians’ actions are often admired by a sizable
minority of their colleagues as the only way to articulate real
frustrations of working in today’s highly complex hospital. In extreme
situations, these physicians become folk heroes to younger physicians
who envy their fortitude in confronting the power of the bureaucracy.
A few days after a recent outburst by a particularly unpleasant and
repeat offender, I was enjoying my daily interval on the stationary
bicycle at my gym. My thoughts were wandering to a broad range of
topics. I spent some time considering what really drives this
nonproductive behavior and how otherwise valuable physicians could be
channeled successfully into a more collegial state. As in the past, I
was long on theory but short on conviction that it would make a
difference.
After my workout as I prepared to shower, I received an urgent email.
A patient I was consulting for upper extremity embolization had
developed confusion and possible cerebral emboli despite full
anticoagulation. I responded that I was on my way to see her and
suggested a few diagnostic tests and consultations.
As I typed my message, a custodial employee of the gym reminded me
that no cellular telephones were allowed in the locker room. I pointed
out that I was not using my cellular telephone but rather an email
function and I was not offending anyone by talking. He again pointed out
that cellular telephones were not allowed under any circumstances. As I
argued back, “I am a physician and this is an emergency.” My voice got
louder and I became confrontational. I told him to call the manager.
Another member next to me said quietly that the reason for the cellular
telephone ban was the photographic potential of the devices and that I
could have simply moved to the reception area and used the telephone any
way I wished.
I felt like the fool I was. I trudged off to the showers feeling, as
in the Texas homily, lower than a snake’s belly. After toweling off, I
approached the employee and apologized for my behavior and for making
his job more difficult. I told him he had handled the situation far
better than I had, and that I admired his restraint.
The lessons were stark and undeniable. Like my disruptive colleagues,
I had justified my boorish behavior with patient care. I had assumed my
need to break the rules far outweighed the reasonable and rational
policy of the establishment; after all, I was important and people
depended on me. Worse yet, I felt empowered to take out my frustration,
enhanced by my worry about the patient, on someone unlikely to retaliate
against me for fear of job loss.
I have come to realize that irrespective of disposition, when the
setting is right, we are all potentially disruptive. The only questions
are how frequent and how severe. Even more importantly, from a
prognostic perspective, can we share the common drivers of these
behaviors and develop insights that will lead to avoidance?
The most common approaches used today are only moderately effective.
As in many other institutions, when physicians are deemed by their peers
to have violated a carefully defined code of conduct, they are advised
to apologize to any offended personnel. In many instances, these
apologies are sincere and are, in fact, appreciated by all.
Unfortunately, on occasion, the interaction is viewed as a forced
function and the behavior is soon repeated albeit in a different nursing
unit or operating room.
When such failures occur, persistently disruptive physicians are
referred to our physician well-being committee. Through a highly
confidential process, efforts are made to explore the potential causes
for the behavior and acquaint the referred physician with the
consequences of their actions on hospital function. Often, behavioral
contracts are drawn up to precisely outline the individual’s issues and
subsequent medical staff penalties if further violations occur.
That said, as well intentioned and psychologically sound as these
programs are, there remains a hard core of repeat offenders. Despite the
heightened stress and ill will engendered by disruptive physicians’
behavior, they simply cannot interact in other than confrontational
fashion when frustrated by real or imagined shortcomings in the
environment.
Based on nearly 20 years of physician management experience, it is my
belief that in these few physicians, such behaviors are hard-wired and
fairly resistant to traditional counseling. An unfortunate end game is
termination from a medical staff if the hostile working environment
created by their outbursts is viewed as a liability threat by the
institution. Such actions are always painful and bring no satisfaction
to anyone involved. These high-stakes dramas, often involving critical
institutional players on both sides, are played out behind closed doors.
Few people are privy to the details of either the infraction or the
attempts at remediation. Misunderstandings in the staff are common.
I suggest that an underused remedy is more intense peer pressure
through continued education of those colleagues who might silently
support these outbursts without fully realizing the consequences. This
would begin by treating these incidents in the same way that we do other
significant adverse events that occur in our hospitals. In confidential
but interdisciplinary sessions, the genesis, nature, and consequences
of the interaction could be explored openly. If indeed the inciting
event was judged to be an important patient care issue, the problem
could be identified and addressed yet clearly separated from the
counterproductive interaction that followed. In addition to the
deterrence provided by the more public airing of the incidents, the
tenuous linkage between abusive behavior and patient protection could be
severed. It is this linkage that provides any superficial legitimacy to
the outbursts.
Through this process, peer pressure would be increased and provide a
greater impetus for self-control and more productive interactions.
Importantly, with such a direct and full examination of both the
character and costs of poor conduct, whatever support exists for such
behaviors within the medical staff would be diminished.
Bruce Gewertz, MD, Cedars-Sinai Health System
Conflict of Interest Disclosures: None reported.
The Obama administration’s troubling flirtation with another mortgage meltdown took an unsettling turn on Tuesday with Federal Housing Finance Agency Director Mel Watt’s testimony before the House Financial Services Committee.
Mr. Watt told the committee that, having received “feedback from stakeholders,” he expects to release by the end of March new guidance on the “guarantee fee” charged by Fannie Mae and Freddie Mac to cover the credit risk on loans the federal mortgage agencies guarantee.
Here we go again. In the Obama administration, new guidance on housing policy invariably means lowering standards to get mortgages into the hands of people who may not be able to afford them.
Earlier this
month, President Obama announced that the Federal Housing
Administration (FHA) will begin lowering annual mortgage-insurance
premiums “to make mortgages more affordable and accessible.” While that
sounds good in the abstract, the decision is a bad one with serious
consequences for the housing market.
Government programs to make
mortgages more widely available to low- and moderate-income families
have consistently offered overleveraged, high-risk loans that set up too
many homeowners to fail. In the long run-up to the 2008 financial
crisis, for example, federal mortgage agencies and their regulators
cajoled and wheedled private lenders to loosen credit standards. They
have been doing so again. When the next housing crash arrives, private
lenders will be blamed—and homeowners and taxpayers will once again pay
dearly.
Lowering annual mortgage-insurance premiums is part of a
new affordable-lending effort by the Obama administration. More
specifically, it is the latest salvo in a price war between two
government mortgage giants to meet government mandates.
Fannie Mae fired the first shot in December when it relaunched the 30-year, 97% loan-to-value, or LTV, mortgage (a type of loan that was suspended in 2013). Fannie revived these 3% down-payment mortgages at the behest of its federal regulator, the Federal Housing Finance Agency (FHFA)—which has run Fannie Mae and Freddie Mac since 2008, when both government-sponsored enterprises (GSEs) went belly up and were put into conservatorship. The FHA's mortgage-premium price rollback was a counteroffensive.
Déjà vu: Fannie launched its first price war
against the FHA in 1994 by introducing the 30-year, 3% down-payment
mortgage. It did so at the behest of its then-regulator, the Department
of Housing and Urban Development. This and other actions led HUD in 2004
to credit Fannie Mae’s “substantial part in the ‘revolution’ ” in
“affordable lending” to “historically underserved households.”
Fannie’s
goal in 1994 and today is to take market share from the FHA, the main
competitor for loans it and Freddie Mac need to meet mandates set by
Congress since 1992 to increase loans to low- and moderate-income
homeowners. The weapons in this war are familiar—lower pricing and
progressively looser credit as competing federal agencies fight over
existing high-risk lending and seek to expand such lending.
Mortgage price wars between government agencies are particularly dangerous,
since access to low-cost capital and minimal capital requirements gives
them the ability to continue for many years—all at great risk to the
taxpayers. Government agencies also charge low-risk consumers more than
necessary to cover the risk of default, using the overage to lower fees
on loans to high-risk consumers.
Starting in 2009 the FHFA
released annual studies documenting the widespread nature of these
cross-subsidies. The reports showed that low down payment, 30-year loans
to individuals with low FICO scores were consistently subsidized by
less-risky loans.
Unfortunately, special interests such as the
National Association of Realtors—always eager to sell more houses and
reap the commissions—and the left-leaning Urban Institute were
cheerleaders for loose credit. In 1997, for example, HUD commissioned
the Urban Institute to study Fannie and Freddie’s single-family
underwriting standards. The Urban Institute’s 1999 report found that
“the GSEs’ guidelines, designed to identify creditworthy applicants, are
more likely to disqualify borrowers with low incomes, limited wealth,
and poor credit histories; applicants with these characteristics are
disproportionately minorities.” By 2000 Fannie and Freddie did away with
down payments and raised debt-to-income ratios. HUD encouraged them to
more aggressively enter the subprime market, and the GSEs decided to
re-enter the “liar loan” (low doc or no doc) market, partly in a desire
to meet higher HUD low- and moderate-income lending mandates.
On Jan. 6, the Urban Institute announced in a blog post: "FHA: Time to stop
overcharging today’s borrowers for yesterday’s mistakes.” The institute
endorsed an immediate cut of 0.40% in mortgage-insurance premiums
charged by the FHA. But once the agency cuts premiums, Fannie and
Freddie will inevitably reduce the guarantee fees charged to cover the
credit risk on the loans they guarantee.
Now the other shoe appears poised to drop, given Mr. Watt’s promise on Tuesday to issue new guidance on guarantee fees.
This is happening despite Congress's 2011 mandate that Fannie's regulator
adjust the prices of mortgages and guarantee fees to make sure they
reflect the actual risk of loss—that is, to eliminate dangerous and
distortive pricing by the two GSEs. Ed DeMarco, acting director of the
FHFA since March 2009, worked hard to do so but left office in January
2014. Mr. Watt, his successor, suspended Mr. DeMarco's efforts to comply with Congress's mandate. Now that Fannie
will once again offer heavily subsidized 3%-down mortgages, massive new
cross-subsidies will return, and the congressional mandate will be
ignored.
The law stipulates that the FHA maintain a
loss-absorbing capital buffer equal to 2% of the value of its
outstanding mortgages. The agency obtains this capital from profits
earned on mortgages and future premiums. It hasn’t met its capital
obligation since 2009 and will not reach compliance until the fall of
2016, according to the FHA’s latest actuarial report. But if the economy
runs into another rough patch, this projection will go out the window.
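The statutory buffer described above is a straight percentage of the insured book; a hedged sketch, using a hypothetical portfolio size rather than the FHA's actual figures:

```python
# Illustrative only: the law requires the FHA to hold loss-absorbing
# capital equal to 2% of the value of its outstanding insured mortgages.
def required_buffer(outstanding_mortgages: float, ratio: float = 0.02) -> float:
    """Capital the statute requires, given the insured mortgage book."""
    return ratio * outstanding_mortgages

book = 1_000_000_000_000.0  # hypothetical $1 trillion insured portfolio
print(f"Required capital: ${required_buffer(book):,.0f}")  # $20,000,000,000
```

The point of the passage is that the agency has been below this line since 2009, so any shortfall in the profits and premiums that fund the buffer pushes compliance further out.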
Congress should put an end to this price war before it does real damage to the
economy. It should terminate the ill-conceived GSE affordable-housing
mandates and impose strong capital standards on the FHA that can’t be
ignored as they have been for five years and counting.
Mr. Pinto, former chief credit officer of Fannie Mae, is co-director and
chief risk officer of the International Center on Housing Risk at the
American Enterprise Institute.
Condolences to the family of Luke Somers, the kidnapped American journalist who was murdered Saturday during a rescue attempt by U.S. special forces in Yemen. His death is a moment for sadness and anger, but also for pride in the rescue team and praise for the Obama Administration for ordering the attempt.
According to the Journal's account based on military and Administration sources, some 40 special forces flew to a remote part of Yemen, marching five miles to escape detection, but lost the element of surprise about 100 yards from the jihadist hideout. One of the terrorists was observed by drone surveillance to enter a building where it is believed he shot Somers and a South African hostage, Pierre Korkie.
The special forces carried the wounded men out by helicopter, but one died en route and the other aboard a Navy ship.
There is no blame for failing to save Somers, whose al Qaeda captors had released a
video on Thursday vowing to kill him in 72 hours if the U.S. did not
meet unspecified demands. The jihadists were no doubt on high alert
after special forces conducted a rescue attempt in late November at a
hillside cave. The commandos rescued eight people, mostly Yemenis, but
Somers had been moved.
It’s a tribute to the skill of U.S. special forces that these
high-risk missions against a dangerous enemy don’t fail more often. But
given good intelligence and a reasonable chance to save Somers, the
fault would have been not to try for fear of failure or political blame.
The reality is that most American and British citizens captured
by jihadists are now likely to be murdered as a terrorist statement.
This isn’t always true for citizens of other countries that pay ransom.
But the U.S. and U.K. rightly refuse on grounds that the payments give
incentive for more kidnappings while enriching the terrorists.
Jihadists don't distinguish between civilians and soldiers, or among journalists,
clergy, doctors or aid workers. They are waging what they think is a
struggle to the death against other religious faiths and the West. Their
goal is to kill for political control and their brand of Islam.
The murders are likely to increase as the U.S. fight against Islamic State
intensifies. The jihadists know from experience that they can’t win a
direct military confrontation, so their goal is to weaken the resolve of
democracies at home. Imposing casualties on innocent Americans abroad
and attacking the homeland are part of their military strategy.
They don’t seem to realize that such brutality often backfires, reinforcing U.S. public resolve, as even
Osama bin Laden
understood judging by his intercepted communications. But
Americans need to realize that there are no safe havens in this long
war. Everyone is a potential target.
So we are entering an era
when the U.S. will have to undertake more such rescues of Americans
kidnapped overseas. The results will be mixed, but even failed attempts
will send a message to jihadists that capturing Americans will make them
targets—and that there is no place in the world they can’t be found and
killed.
It’s a tragedy that fanatical Islamists have made the
world so dangerous, but Americans should be proud of a country that has
men and women willing to risk their own lives to leave no American
behind.
Jonathan Gruber’s ‘Stupid’ Budget Tricks. WSJ Editorial His ObamaCare candor shows how Congress routinely cons taxpayers.Wall Street Journal, Nov. 14, 2014 6:51 p.m. ET
As a rule, Americans don’t like to be called “stupid,” as Jonathan Gruber is discovering. Whatever his academic contempt for voters, the ObamaCare architect and Massachusetts Institute of Technology economist deserves the Presidential Medal of Freedom for his candor about the corruption of the federal budget process.
In his now-infamous talk at the University of Pennsylvania last year, Professor Gruber argued that the Affordable Care Act “would not have passed” had Democrats been honest about the income-redistribution policies embedded in its insurance regulations. But the more instructive moment is his admission that “this bill was written in a tortured way to make sure CBO did not score the mandate as taxes. If CBO scored the mandate as taxes, the bill dies.”
Mr. Gruber means the Congressional Budget Office, the institution responsible for putting “scores” or official price tags on legislation. He’s right that to pass ObamaCare Democrats perpetrated the rawest, most cynical abuse of the CBO since its creation in 1974.
In another clip from Mr. Gruber’s seemingly infinite video library, he discusses how he and Democrats wrote the law to game the CBO’s fiscal conventions and achieve goals that would otherwise be “politically impossible.” In still another, he explains that these ruses are “a sad statement about budget politics in the U.S., but there you have it.”
Yes you do. Such admissions aren’t revelations, since the truth has long been obvious to anyone curious enough to look. We and other critics wrote about ObamaCare’s budget gimmicks during the debate, and Rep. Paul Ryan exposed them at the 2010 “health summit.” President Obama changed the subject.
But rarely are liberal intellectuals as full frontal as Mr. Gruber about the accounting fraud ingrained in ObamaCare. Also notable are his do-what-you-gotta-do apologetics: “I’d rather have this law than not,” he says.
Recall five years ago. The White House wanted to pretend that the open-ended new entitlement would spend less than $1 trillion over 10 years and reduce the deficit too. Congress requires the budget gnomes to score bills as written, no matter how unrealistic the assumption or fake the promise. Democrats with the help of Mr. Gruber carefully designed the bill to exploit this built-in gullibility.
So they used a decade of taxes to fund merely six years of insurance subsidies. They made believe that Medicare payments to hospitals will some day fall below Medicaid rates. A since-repealed program for long-term care front-loaded taxes but back-loaded spending, meant to gradually go broke by design. Remember the spectacle of Democrats waiting for the white smoke to come up from CBO and deliver the holy scripture verdict?
On the tape, Mr. Gruber also identifies a special liberal manipulation: CBO’s policy reversal to not count the individual mandate to buy insurance as an explicit component of the federal budget. In 1994, then CBO chief Robert Reischauer reasonably determined that if the government forces people to buy a product by law, then those transactions no longer belong to the private economy but to the U.S. balance sheet. The CBO’s face-melting cost estimate helped to kill HillaryCare.
The CBO director responsible for this switcheroo that moved much of ObamaCare’s real spending off the books was Peter Orszag, who went on to become Mr. Obama’s budget director. Mr. Orszag nonetheless assailed CBO during the debate for not giving him enough credit for the law’s phantom “savings.”
Then again, Mr. Gruber told a Holy Cross audience in 2010 that although ObamaCare “is 90% health insurance coverage and 10% about cost control, all you ever hear people talk about is cost control. How it’s going to lower the cost of health care, that’s all they talk about. Why? Because that’s what people want to hear about because a majority of Americans care about health-care costs.”
***
Both political parties for some reason treat the CBO with the same reverence the ancient Greeks reserved for the Delphic oracle, but Mr. Gruber’s honesty is another warning that the budget rules are rigged to expand government and hide the true cost of entitlements. CBO scores aren’t unambiguous facts but are guesses about the future, biased by the Keynesian assumptions and models its political masters in Congress instruct it to use.
Republicans who now run Congress can help taxpayers by appointing a new CBO director, as is their right as the majority. Current head Doug Elmendorf is a respected economist, and he often has a dry wit as he reminds Congressfolk that if they feed him garbage, he must give them garbage back. But if the GOP won’t abolish the institution, then they can find a replacement who is as candid as Mr. Gruber about the flaws and limitations of the CBO status quo. The Tax Foundation’s Steve Entin would be an inspired pick.
Democrats are now pretending they’ve never heard of Mr. Gruber, though they used to appeal to his authority when he still had some. His commentaries are no less valuable because he is now a political liability for Democrats.
The "no blood for oil" crowd has piped up with surprising speed and noisiness in the short hours since President Obama recommitted U.S. forces to the fight in Iraq.
Steve Coll, a writer for the New Yorker, suggests in a piece posted on the magazine's website that "Kurdish oil greed," whose partner Mr. Obama now becomes, has been a primary factor in making Iraq a failed state. That's apparently because of the Kurds' unwillingness to reach a revenue-sharing deal with Baghdad. For good measure, he refers readers to a Rachel Maddow video, featuring Steve Coll, that argues that the U.S. invaded Iraq to get its oil in the first place.
John B. Judis, a veteran editor of the New Republic, in contrast is relatively sane under the headline "The U.S. Airstrikes in Northern Iraq Are All About Oil." While nodding toward Mr. Obama's stated humanitarian justifications, he insists oil "lies near the center of American motives for intervention." There are a few problems with this argument. Oil exists in the hinterland of Erbil, all right, the capital of a stable, prosperous and relatively free Kurdistan that President Obama now is trying to protect from the Islamic murderers of ISIS.
But oil also exists in northwestern Iraq—in fact, vast amounts of oil around Mosul, whose fall did not trigger Obama intervention. Oil is in Libya, where the U.S. quickly took a hike after the fall of Gadhafi. Oil is in Canada, where Mr. Obama, who just fatally risked his legacy with his core admirers by dispatching forces to the Mideast, can't bring himself to choose between his labor and greenie constituents by deciding to approve or veto the Keystone pipeline.
Oil apparently explains nothing except when it explains everything. Another problem is that Americans are both consumers and producers of oil. So does the U.S. want high or low prices? A bigger producer in recent years, America presumably has seen its interest shifting steadily in the direction of higher prices. Yet acting to protect Kurdish production would have the opposite effect.
But then Mr. Coll especially is ritualizing, not thinking—and what he's ritualizing is a certain leftist hymn about the origins of the 2003 Iraq war. Never mind that if the U.S. had wanted Iraq's oil, it would have been vastly cheaper to buy it— Saddam was certainly eager to sell. Never mind that the Bush administration, after overthrowing Saddam, stood idly by while Baghdad awarded the biggest contracts to India, China and Angola.
It was not a Bushie but Madeleine Albright, in her maiden speech as Bill Clinton's secretary of state, who first laid out the case for regime change in Iraq.
In the same 1997 speech, she explained, "Last August, Iraqi forces took advantage of intra-Kurdish tensions and attacked the city of Irbil, in northern Iraq. President Clinton responded by expanding the no-fly zone to the southern suburbs of Baghdad. . . . Contrary to some expectations, the attack on Irbil has not restored Saddam Hussein's authority in the north. We are firmly engaged alongside Turkey and the United Kingdom in helping the inhabitants of the region find stability and work towards a unified and pluralistic Iraq."
Madame Secretary did not mention oil any more than President Obama did last week. Of course, the catechism holds that, when politicians aren't freely voicing their obsession with oil as Bush and Cheney supposedly did while cooking up the Iraq War, politicians are concealing their obsession with oil. In fact, oil was not yet produced in significant quantities in Erbil at the time. It was the peace and stability that Presidents Bush, Clinton and Bush provided, and that President Obama is trying to restore, that allowed the flowering of Iraqi Kurdistan, including its oil industry.
By now, America has invested 23 years in shielding northern Iraq from the suppurating chaos that seems to flow endlessly from Baghdad and its Sunni-dominated Western suburbs. It's one of our few conspicuous successes in Iraq. Politics, in the best and worst senses of the word, drives every political decision. Despite his palpable lack of enthusiasm, President Obama knows surrender in northern Iraq would be an intolerable disgrace for his administration and U.S. policy. So he sends in the troops.
We come to an irony. The liberal habit of assuming everyone else's motives are corrupt is, of course, an oldie-moldie, if a tad free-floating in this case. But the critics in question don't actually oppose Mr. Obama's intervention, the latest in our costly and thankless efforts in Iraq. They don't exactly endorse it either. The New Yorker's Mr. Coll especially seems out to avoid committing himself while striking a knowing, superior tone about the alleged centrality of oil, which is perhaps the most ignoble reason to pick up a pen on this subject right now.
The Department of Justice isn't known for a sense of humor. But on Monday it announced a civil settlement with Citigroup over failed mortgage investments that covers almost exactly the period when current Treasury Secretary Jack Lew oversaw divisions at Citi that presided over failed mortgage investments. Now, that's funny.
Though Justice, five states and the FDIC are prying $7 billion from the bank for allegedly misleading investors, there's no mention in the settlement of clawing back even a nickel of Mr. Lew's compensation. We also see no sanction for former Treasury Secretary Timothy Geithner, who allowed Citi to build colossal mortgage risks outside its balance sheet while overseeing the bank as president of the New York Federal Reserve.
The settlement says Citi's alleged misdeeds began in 2006, the year Mr. Lew joined the bank, and the agreement covers conduct "prior to January 1, 2009." That was shortly before Mr. Lew left to work for President Obama and two weeks before Mr. Lew received $944,518 from Citi in "salary, payout for vested restricted stock," and "discretionary cash compensation for work performed in 2008," according to a 2010 federal disclosure report. That was also the year Citi began receiving taxpayer bailouts of $45 billion in cash, plus hundreds of billions more in taxpayer guarantees.
While Attorney General Eric Holder is forgiving toward his Obama cabinet colleagues, he seems to believe that some housing transactions can never be forgiven. The $7 billion settlement includes the same collateralized debt obligation for which the bank already agreed to pay $285 million in a settlement with the Securities and Exchange Commission. The Justice settlement also includes a long list of potential charges not covered by the agreement, so prosecutors can continue to raid the Citi ATM.
Citi offers in return what looks like a blanket agreement not to sue the government over any aspect of the case, and waives its right to defend itself "based in whole or in part on a contention that, under the Double Jeopardy Clause in the Fifth Amendment of the Constitution, or under the Excessive Fines Clause in the Eighth Amendment of the Constitution, this Agreement bars a remedy sought in such criminal prosecution or administrative action." We hold no brief for Citi, which has been rescued three times by the feds. But what kind of government demands the right to exact repeated punishments for the same offense?
The bank's real punishment should have been failure, as former FDIC Chairman Sheila Bair and we argued at the time. Instead, the regulators kept Citi alive with taxpayer money far beyond what it provided most other banks as part of the Troubled Asset Relief Program. Keeping it alive means they can now use Citi as a political target when it's convenient to claim they're tough on banks.
And speaking of that $7 billion, good luck finding a justification for it in the settlement agreement. The number seems to have been pulled out of thin air since it's unrelated to Citi's mortgage-securities market share or any other metric we can see beyond having media impact.
If this sounds cynical, readers should consult the Justice Department's own leaks to the press about how the Citi deal went down. Last month the feds were prepared to bring charges against the bank, but the necessities of public relations intervened.
According to the Journal, "News had leaked that afternoon, June 17, that the U.S. had captured Ahmed Abu Khatallah, a key suspect in the attacks on the American consulate in Benghazi in 2012. Justice Department officials didn't want the announcement of the suit against Citigroup—and its accompanying litany of alleged misdeeds related to mortgage-backed securities—to be overshadowed by questions about the Benghazi suspect and U.S. policy on detainees. Citigroup, which didn't want to raise its offer again and had been preparing to be sued, never again heard the threat of a suit."
This week's settlement includes $4 billion for the Treasury, roughly $500 million for the states and FDIC, and $2.5 billion for mortgage borrowers. That last category has become a fixture of recent government mortgage settlements, even though the premise of this case involves harm done to bond investors, not mortgage borrowers.
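The components listed above do reconcile with the headline figure; a quick check (figures in billions of dollars, with the states/FDIC share rounded per the article's "roughly $500 million"):

```python
# Check that the settlement's pieces sum to the $7 billion headline number.
treasury = 4.0      # payment to the Treasury
states_fdic = 0.5   # roughly $500 million for the states and FDIC
borrowers = 2.5     # consumer relief for mortgage borrowers
total = treasury + states_fdic + borrowers
print(f"Total: ${total:.1f} billion")  # Total: $7.0 billion
```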
But the Obama Administration's references to the needs of Benghazi PR remind us that it could be worse. At least Mr. Holder isn't blaming the Geithner and Lew failures on a video.
It is now five years since the end of the most recent U.S. financial crisis of 2007-09. Stocks have made record highs, junk bonds and leveraged loans have boomed, house prices have risen, and already there are cries for lower credit standards on mortgages to "increase access."
Meanwhile, in vivid contrast to the Swiss central bank, which marks its investments to market, the Federal Reserve has designed its own regulatory accounting so that it will never have to recognize any losses on its $4 trillion portfolio of long-term bonds and mortgage securities.
Who remembers that such "special" accounting is exactly what the Federal Home Loan Bank Board designed in the 1980s to hide losses in savings and loans? Who remembers that there even was a Federal Home Loan Bank Board, which for its manifold financial sins was abolished in 1989?
It is 25 years since 1989. Who remembers how severe the multiple financial crises of the 1980s were?
The government of Mexico defaulted on its loans in 1982 and set off a global debt crisis. The Federal Reserve's double-digit interest rates had rendered insolvent the aggregate savings and loan industry, until then the principal supplier of mortgage credit. The oil bubble collapsed with enormous losses.
Between 1982 and 1992, a disastrous 2,270 U.S. depository institutions failed. That is an average of more than 200 failures a year or four a week over a decade. From speaking to a great many audiences about financial crises, I can testify that virtually no one knows this.
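The failure-rate averages quoted above can be checked directly; a quick sketch:

```python
# Verify the article's arithmetic: 2,270 depository-institution failures
# between 1982 and 1992, counted over the 11 calendar years inclusive.
failures = 2270
years = 1992 - 1982 + 1          # 11 years, inclusive
per_year = failures / years      # roughly 206 per year
per_week = per_year / 52         # roughly 4 per week
print(f"{per_year:.0f} per year, {per_week:.1f} per week")
```

The result matches the claim of "more than 200 failures a year or four a week over a decade."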
In the wake of the housing bust, I was occasionally asked, "Will we learn the lessons of this crisis?" "We will indeed," I would reply, "and we will remember them for at least four or five years." In 2007 as the first wave of panic was under way, I heard a senior international economist opine in deep, solemn tones, "What we have learned from this crisis is the importance of liquidity risk." "Yes," I said, "that's what we learn from every crisis."
The political reactions to the 1980s included the Financial Institutions Reform, Recovery and Enforcement Act of 1989, the FDIC Improvement Act of 1991, and the very ironically titled GSE Financial Safety and Soundness Act of 1992. Anybody remember the theories behind those acts?
After depositors in savings and loan associations were bailed out to the tune of $150 billion (the Federal Savings and Loan Insurance Corporation having gone belly up), then-Treasury Secretary Nicholas Brady pronounced that the great legislative point was "never again." Never, that is, until the Mexican debt crisis of 1994, the Asian debt crisis of 1997, and the Long-Term Capital Management crisis of 1998, all very exciting at the time.
And who remembers the Great Recession (so called by a prominent economist of the time) in 1973-75, the huge real-estate bust and New York City's insolvency crisis? That was the decade before the 1980s.
Viewing financial crises over several centuries, the great economic historian Charles Kindleberger concluded that they occur on average about once a decade. Similarly, former Fed Chairman Paul Volcker wittily observed that "about every 10 years, we have the biggest crisis in 50 years."
What is it about a decade or so? It seems that is long enough for memories to fade in the human group mind, as they are overlaid with happier recent experiences and replaced with optimistic new theories.
Speaking in 2013, Paul Tucker, the former deputy governor for financial stability of the Bank of England—a man who has thought long and hard about the macro risks of financial systems—stated, "It will be a while before confidence in the system is restored." But how long is "a while"? I'd say less than a decade.
Mr. Tucker went on to proclaim, "Never again should confidence be so blind." Ah yes, "never again." If Mr. Tucker's statement is meant as moral suasion, it's all right. But if meant as a prediction, don't bet on it.
Former Treasury Secretary Tim Geithner, for all his daydream of the government as financial Platonic guardian, knows this. As he writes in "Stress Test," his recent memoir: "Experts always have clever reasons why the boom they are enjoying will avoid the disastrous patterns of the past—until it doesn't." He predicts: "There will be a next crisis, despite all we did."
Right. But when? On the historical average, 2009 + 10 = 2019. Five more years is plenty of time for forgetting.
Mr. Pollock is a resident fellow at the American Enterprise Institute and was president and CEO of the Federal Home Loan Bank of Chicago 1991-2004.
China will scrap caps on retail prices for low-cost medicine and is moving toward free-market pricing for pharmaceuticals, after price controls led to drug quality problems and shortages in the country.
The move could be a welcome one for global pharmaceutical companies, which have been under scrutiny in China since last year for their sales and marketing practices.
The world's most populous country is the third-largest pharmaceutical market, behind the U.S. and Japan, according to data from consulting firm McKinsey & Co., but Beijing has used price caps and other measures to keep medical care affordable.
Price caps will be lifted for 280 medicines made by Western drug companies and 250 Chinese patent drugs, the National Development and Reform Commission, China's economic planning body, said Thursday. The move will affect prices on drugs such as antibiotics, painkillers and vitamins, it said.
The statement said local governments will have until July 1 to unveil details of the plan. In China, local authorities have broad oversight over how drugs are distributed to local hospitals.
Aiming to keep prices low, some manufacturers cut corners on production, exposing consumers to safety risks, said Helen Chen, a Shanghai-based partner and director of L.E.K. Consulting. Many also closed production, creating shortages of low-cost drugs such as thyroid medication.
"It means the [commission] recognizes that forcing prices down and focusing purely on price does sacrifice drug safety, quality and availability," said Ms. Chen.
Several drug makers, including GlaxoSmithKline PLC, didn't immediately respond to requests for comment. Spokeswomen for Sanofi and Pfizer Inc. said that because implementation of the new policy is unclear, it is too early to understand how it will affect their business in China.
The industry was dealt a blow last summer when Chinese authorities accused Glaxo of bribing doctors, hospitals and local officials to increase sales of their drugs. The U.K. company has said some of its employees may have violated Chinese law.
The central government, which began overhauling the country's health-care system in 2009, has until now largely favored pricing caps and has encouraged provincial governments to cut health-care costs and prices. Regulators phased out five years ago premium pricing for a list of "essential drugs" to be available in hospitals.
Chinese leaders want health care to be more accessible and affordable, but there have been unintended consequences in attempting to ensure the lowest prices on drugs. For instance, many pharmaceutical companies registered to sell the thyroid medication Tapazole have halted production in recent years after pricing restrictions squeezed out profits, experts say, creating a shortage. Chinese patients with hyperthyroidism struggled to find the drug and many suffered from increased anxiety, muscle weakness and sleep disorders, according to local media reports.
In 2012, some drug-capsule manufacturers were found to be using industrial gelatin to cut production costs. The industrial gelatin contained the chemical chromium, which can be carcinogenic with frequent exposure, according to the U.S. Centers for Disease Control and Prevention.
"Manufacturers have attempted to save costs, and doing that has meant using lower-quality ingredients," said Ms. Chen.
The pricing reversal won't necessarily alleviate pricing pressure for these drugs, experts say. To get drugs into hospitals, companies must compete in a tendering process at the provincial level, said Justin Wang, also a partner at L.E.K. "It's still unclear how the provinces will react to this new national list," Mr. Wang said.
If provinces don't change their current system, price will remain a key competitive factor for drug makers, said Franck Le Deu, a partner at McKinsey's China division.
"The bottom line is that there may be more safety and more pricing transparency, but the focus intensifies on creating more innovative drugs," Mr. Le Deu said.
Catalan translation by Un Liberal Recalcitrant of "Thomas Piketty Revives Marx for the 21st Century" (English original below).
---
Thomas Piketty Revives Marx for the 21st Century
By Daniel Shuchman
An 80% tax rate on incomes above $500,000 is not meant to bring in money for education or benefits, but 'to put an end to such incomes.'
Wall Street Journal, April 21, 2014 7:18 p.m. ET http://online.wsj.com/news/articles/SB10001424052702303825604579515452952131592
Thomas Piketty likes capitalism because it efficiently allocates resources. But he does not like how it allocates income. There is, he thinks, a moral illegitimacy to virtually any accumulation of wealth, and it is a matter of justice that such inequality be eradicated in our economy. The way to do this is to eliminate high incomes and to reduce existing wealth through taxation.
"Capital in the Twenty-First Century" is Mr. Piketty's dense exploration of the history of wages and wealth over the past three centuries. He presents a blizzard of data about income distribution in many countries, claiming to show that inequality has widened dramatically in recent decades and will soon get dangerously worse. Whether or not one is convinced by Mr. Piketty's data—and there are reasons for skepticism, given the author's own caveats and the fact that many early statistics are based on extremely limited samples of estate tax records and dubious extrapolation—is ultimately of little consequence. For this book is less a work of economic analysis than a bizarre ideological screed.
A professor at the Paris School of Economics, Mr. Piketty believes that only the productivity of low-wage workers can be measured objectively. He posits that when a job is replicable, like an "assembly line worker or fast-food server," it is relatively easy to measure the value contributed by each worker. These workers are therefore entitled to what they earn. He finds the productivity of high-income earners harder to measure and believes their wages are in the end "largely arbitrary." They reflect an "ideological construct" more than merit.
Soaring pay for corporate "supermanagers" has been the largest source of increased inequality, according to Mr. Piketty, and these executives can only have attained their rewards through luck or flaws in corporate governance. It requires only an occasional glance at this newspaper to confirm that this can be the case. But the author believes that no CEO could ever justify his or her pay based on performance. He doesn't say whether any occupation—athletes? physicians? economics professors who sell zero-marginal-cost e-books for $21.99 a copy?—is entitled to higher earnings because he does not wish to "indulge in constructing a moral hierarchy of wealth."
He does admit that entrepreneurs are "absolutely indispensable" for economic development, but their success, too, is usually tainted. While some succeed thanks to "true entrepreneurial labor," some are simply lucky or succeed through "outright theft." Even the fortunes made from entrepreneurial labor, moreover, quickly evolve into an "excessive and lasting concentration of capital." This is a self-reinforcing injustice because "property sometimes begins with theft, and the arbitrary return on capital can easily perpetuate the initial crime." Indeed laced throughout the book is an almost medieval hostility to the notion that financial capital earns a return.
Mr. Piketty believes that the wealthier a society becomes, the more people will claw for the best relative social station and the more inequality will ensue. He turns to those timeless economic authorities Jane Austen and Honoré de Balzac in mapping our future. Throughout the book, he importunately digresses with the dowry-chasing machinations of characters in novels like "Sense and Sensibility" and "Père Goriot." He is obsessed with the following calculus: Are the fruits of working hard greater than those attainable by marrying into a top fortune? If not, "why work? And why behave morally at all?"
While America's corporate executives are his special bête noire, Mr. Piketty is also deeply troubled by the tens of millions of working people—a group he disparagingly calls "petits rentiers"—whose income puts them nowhere near the "one percent" but who still have savings, retirement accounts and other assets. That this very large demographic group will get larger, grow wealthier and pass on assets via inheritance is "a fairly disturbing form of inequality." He laments that it is difficult to "correct" because it involves a broad segment of the population, not a small elite that is easily demonized.
So what is to be done? Mr. Piketty urges an 80% tax rate on incomes starting at "$500,000 or $1 million." This is not to raise money for education or to increase unemployment benefits. Quite the contrary, he does not expect such a tax to bring in much revenue, because its purpose is simply "to put an end to such incomes." It will also be necessary to impose a 50%-60% tax rate on incomes as low as $200,000 to develop "the meager US social state." There must be an annual wealth tax as high as 10% on the largest fortunes and a one-time assessment as high as 20% on much lower levels of existing wealth. He breezily assures us that none of this would reduce economic growth, productivity, entrepreneurship or innovation.
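To make the arithmetic of the proposal concrete, here is an illustrative sketch only: the book gives rates and rough thresholds but not an exact bracket structure, so the boundaries ($200,000 and $500,000), the 55% midpoint of the 50%-60% band, and the marginal-bracket treatment below are all assumptions for illustration.

```python
# Illustrative marginal-rate reading of the proposed income-tax schedule.
# Bracket boundaries and the 55% midpoint are assumptions; the book
# specifies only the rates (50%-60%, then 80%) and rough thresholds.
BRACKETS = [
    (200_000, 0.0),        # below $200,000: outside the proposal's scope here
    (500_000, 0.55),       # $200,000-$500,000: the proposed 50%-60% band (midpoint used)
    (float("inf"), 0.80),  # above $500,000: the proposed 80% rate
]

def proposed_tax(income: float) -> float:
    """Tax owed under the assumed marginal-bracket reading."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            # Tax only the slice of income that falls inside this bracket.
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax
```

Under these assumptions, a $1 million earner would owe $165,000 on the $200,000-$500,000 slice plus $400,000 on the remainder, i.e. an effective rate above 56% before any wealth tax.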
Not that enhancing growth is much on Mr. Piketty's mind, either as an economic matter or as a means to greater distributive justice. He assumes that the economy is static and zero-sum; if the income of one population group increases, another one must necessarily have been impoverished. He views equality of outcome as the ultimate end and solely for its own sake. Alternative objectives—such as maximizing the overall wealth of society or increasing economic liberty or seeking the greatest possible equality of opportunity or even, as in the philosophy of John Rawls, ensuring that the welfare of the least well-off is maximized—are scarcely mentioned.
There is no doubt that poverty, unemployment and unequal opportunity are major challenges for capitalist societies, and varying degrees of luck, hard work, sloth and merit are inherent in human affairs. Mr. Piketty is not the first utopian visionary. He cites, for instance, the "Soviet experiment" that allowed man to throw "off his chains along with the yoke of accumulated wealth." In his telling, it only led to human disaster because societies need markets and private property to have a functioning economy. He says that his solutions provide a "less violent and more efficient response to the eternal problem of private capital and its return." Instead of Austen and Balzac, the professor ought to read "Animal Farm" and "Darkness at Noon."
Mr. Shuchman is a New York fund manager who often writes on law and economics.
Views from FYR of Macedonia: Russia, the West, America
After sharing with some people an article on new software for US military cargo helicopters to take more autonomous decisions*, a Macedonian in the group wrote (translated from Spanish):
"If we're lucky, the Russians will defend us. If not, we're f[xxx]ed with the Americans and their death machines."
Newly Unveiled Technology Runs on Tablet App, Enables Unmanned Aircraft to Choose Flight Routes and Landing Sites
[photo removed: The U.S. Navy has unveiled new drone technology that allows the craft to take off and plot its routes autonomously, without a ground-based pilot. The drones carrying the new software will likely be deployed within a year. Photo: Office of Naval Research]
By Dion Nissenbaum
WASHINGTON—Rear Adm. Matthew Klunder has positioned himself to become the Jeff Bezos of the Pentagon.
Much as the Amazon.com Inc. founder envisions mini-drones that deliver small packages across America, Adm. Klunder, the Navy's research chief, wants to create innovative unmanned helicopters able to perform tasks now carried out by humans: resupplying troops in remote areas and rescuing wounded Marines from the battlefield.
His plan is moving ahead, and Navy officials will unveil new technology Saturday that with the push of a button allows helicopters—manned or unmanned—to choose their own routes, take off and land.
The Navy views the five-year, $100 million program as a major advance in the Pentagon's hopes for taking the "manned out of unmanned" aircraft. Over the next decade, the military is aiming to create autonomous drones that can help soldiers carry out night raids, search oceans for trouble, and select targets for attack.
[photo removed: Lance Cpl. Cody Barss uses a tablet computer during a demonstration in Quantico, Va., of a system that could support forces on the front lines, as an alternative to convoys, manned aircraft or air drops. U.S. Navy]
This is "truly leap-ahead technology," Adm. Klunder said of the new autonomous helicopter advances.
"What we're talking about doing with full size helicopters—and we've done it—we're talking about delivering 5,000 pounds of cargo," he said.
The Navy program has put the system through successful test runs at the Marine Corps Base in Quantico, Va.
Using a special app and a tablet, operators given only a half-hour of training were able to direct small helicopters to land on their own. The helicopters can choose their own routes, pick landing sites and change their destination if they spot unexpected obstacles that emerge at the last minute.
Autonomous technology will make it easier for the military, which won't have to rely on highly trained operators to route and land helicopters.
But while it could mark the advent of an era in which the military operates more sophisticated equipment with fewer people, it also is likely to stir concern about an overreliance on technology.
"We're starting to move into an autonomous regime, and that's going to have hugely disruptive effects," said Shawn Brimley, a former Pentagon official who is now executive director of the Center for a New American Security. "I would almost call it a revolution."
To some, the foray into autonomous aircraft is a move that conjures images of killer drones, capable of choosing targets and hunting them down without human oversight.
To address those concerns, the Pentagon has devised special guidelines meant to ensure that the military won't allow drones to carry out "kill missions" without human involvement.
Autonomous drones that require less human oversight could also take some strain off the Pentagon as it cuts back the size of the military to deal with budget cuts.
The Pentagon's expanding drone fleet has limited ability to operate autonomously. The Navy's experimental combat drone, the X-47B, landed itself on an aircraft carrier last July. The Army wants to create a robot that can operate on its own in helping soldiers search for suspects.
The Navy and the Marine Corps envision using the autonomous helicopter technology to more easily fly tons of supplies to remote bases. Military developers are even talking about the prospect of using the system to carry out emergency battlefield evacuations for troops.
The new systems, developed by Lockheed Martin Corp. and Aurora Flight Sciences, are designed to be used on the Pentagon's biggest helicopters, officials said.
Development still has a way to go. The new drone capabilities still must face more challenging trials, such as flying at night and in difficult weather. But military officials expressed confidence they could begin using the system in a year.
"As far as innovative projects go, I can't think of one that's more important to the Marine Corps right now—or one that shows as much promise," said Brig. Gen. Kevin Killea, commander of the Marine Corps Warfighting Laboratory and Adm. Klunder's deputy.