The Exaggerated Death of the Middle Class. By Ron Haskins and Scott Winship
Brookings, December 11, 2012
http://www.brookings.edu/research/opinions/2012/12/11-middle-class-haskins-winship?cid=em_es122712
Excerpts:
The most easily obtained income figures are not the most appropriate ones for assessing changes in living standards, yet they are the figures most often used to reach unwarranted conclusions about “middle class decline.” For example, analysts and pundits often rely on data that do not include all sources of income. Consider the data on comprehensive income assembled by Cornell University economist Richard Burkhauser and his colleagues for the period between 1979—the year it supposedly all went wrong for working Americans—and 2007, before the Great Recession.
When Burkhauser looked at market income as reported to the Internal Revenue Service (IRS), the basis for the top 1 percent inequality figures that inspired Occupy Wall Street, he found that incomes for the bottom 60 percent of tax filers stagnated or declined over the nearly three-decade period. Incomes in the middle fifth of tax returns grew by only 2 percent on average, and those in the bottom fifth declined by 33 percent.
Things appeared somewhat better when Burkhauser looked at the definition of income favored by the Census Bureau, which, unlike the IRS figures, includes government cash payments from programs like Social Security and welfare and counts households rather than tax returns.
Still, the income of the middle fifth only rose by 15 percent over the entire three decades, much less than 1 percent per year. The Census Bureau reports that from 2000 to 2010, the income of the middle fifth actually fell by 8 percent. With numbers like these, it’s understandable why so many people think the American middle class is under threat and in decline.
But there are three reasons why even the Census Bureau figures are deceiving. First, they do not account for the declining size of U.S. households. Second, they ignore the net impact on income of government taxes and non-cash transfers like food stamps and health insurance, which benefit the poor and middle class much more than richer households. Third, they leave out the value of health insurance provided by employers.
Burkhauser and his colleagues show that if these factors are taken into account, the incomes of the bottom fifth of households actually increased by 26 percent, rather than declining by 33 percent. Those of the middle fifth increased by 37 percent, rather than by only 2 percent. There is no disappearing middle class in these data; nor can household income, even at the bottom, be characterized as stagnant, let alone declining. Even after 2000, estimates from the Congressional Budget Office (CBO) show the bottom 60 percent of households got 10 percent richer by 2009, the most recent year available.
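To make the mechanics concrete, here is a minimal Python sketch using purely hypothetical household figures rather than Burkhauser's actual data; the square-root equivalence scale, the dollar amounts, and the household sizes are illustrative assumptions, chosen only to show how the adjustments listed above can turn near-zero market-income growth into substantial growth in comprehensive income.

```python
# Illustrative only: hypothetical numbers, not Burkhauser's data. The point is to show
# how adjusting for household size, taxes, transfers, and employer-paid health
# insurance changes the measured growth of "income" for the same household type.

def equivalized_income(market_income, taxes, cash_transfers,
                       noncash_transfers, employer_health, household_size):
    """Post-tax, post-transfer income per equivalent adult.
    Uses the common square-root equivalence scale (an assumption in this sketch)."""
    comprehensive = (market_income - taxes + cash_transfers
                     + noncash_transfers + employer_health)
    return comprehensive / household_size ** 0.5

# Hypothetical middle-fifth household in 1979: larger family, little non-cash support.
inc_1979 = equivalized_income(market_income=40_000, taxes=6_000, cash_transfers=1_000,
                              noncash_transfers=500, employer_health=1_500,
                              household_size=3.3)

# Hypothetical counterpart in 2007: market income barely higher, but a smaller
# household, lower net taxes, more non-cash transfers, and far more employer-paid
# health insurance.
inc_2007 = equivalized_income(market_income=41_000, taxes=4_500, cash_transfers=1_500,
                              noncash_transfers=2_000, employer_health=6_000,
                              household_size=2.6)

market_growth = 41_000 / 40_000 - 1             # ~2.5%: the "stagnation" story
comprehensive_growth = inc_2007 / inc_1979 - 1  # ~40% once the adjustments are made

print(f"market income growth:        {market_growth:.1%}")
print(f"comprehensive income growth: {comprehensive_growth:.1%}")
```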
Making sense of income trends
Aside from the brighter picture presented by the Burkhauser and CBO analyses, a more complicated set of trends is emerging in the United States. Four factors, some operating inside the market and some outside it, explain those trends.
The first market factor affecting middle-class income is a longtime trend of low literacy and math achievement in U.S. schools, which partially explains why conventional analyses of income show stagnation and decline. Young Americans entering the job market need skills valuable in a modern economy if they expect to earn a decent wage. Education and technical training are key to acquiring these skills. Yet the achievement test scores of children in literacy and math have been stagnant for more than two decades and are consistently far down the list in international comparisons.
It is true that African American and Hispanic students have closed part of the gap between themselves and Caucasian and Asian students; but the gap between students from economically advantaged families and students from disadvantaged ones has widened substantially—by 30 to 40 percent over the past 25 years.1
In a nation committed to educational equality and economic mobility, the income gap in achievement test scores is deeply problematic. Far from increasing educational equality as an important route to boosting economic opportunity, the American educational system reinforces the advantages that students from middle-class families bring with them to the classroom. Thus, the nation has two education problems that are limiting the income of workers at both the bottom and middle of the distribution: the average student is not learning enough, compared with students from other nations, and students from poor families are falling further and further behind.
It is difficult to see how students with a poor quality of education will be able to support a family comfortably in our technologically advanced economy if they rely exclusively on their earnings.
The second market factor is the increasing share of our economy devoted to health care. According to the Kaiser Foundation, employer-sponsored health insurance premiums for families increased 113 percent between 2001 and 2011. Most economists would say that this money comes directly out of worker wages. In other words, if it weren’t for the remarkable increase in the cost of health care, workers’ wages would be higher. When the portion of market compensation received in the form of health insurance is ignored in conventional analyses, income gains over time are understated.
Turning to non-market factors, marriage and childbearing increasingly distinguish the haves and have-nots.
Families have fewer children, and more U.S. adults are living alone today than in the past. As a result, households are on average better off, since there are fewer mouths to feed at any given income. At the same time, single parenthood has grown more common, increasing inequality between the poor and the middle class. Female-headed families are more than four times as likely as married-couple families to be in poverty, and children from these families are more likely to have trouble in school than children in married-couple families. The increasing tendency of similarly educated men and women to marry each other also contributes to rising inequality.
The most important non-market factor is the net impact of government taxes and transfer payments on household income. The budget of the U.S. government for 2012 is $3.6 trillion. About 65 percent of that amount is spent on transfer payments to individuals. The biggest transfer payments are: $770 billion for Social Security, $560 billion for Medicare, $262 billion for Medicaid, and nearly $100 billion for nutrition programs. In addition to these federal expenditures, state governments also spend tens of billions of dollars on programs for low-income households. Almost all of the over $1 trillion in state and federal spending on means-tested programs (those that provide benefits only to people below some income cutoff) goes to low-income households.
Thus, taking into account the progressive nature of Social Security and Medicare benefits, the effect of government expenditures is to greatly increase household income at the bottom and reduce economic inequality.
Similarly, federal taxation—and to a lesser extent state taxation—is progressive. Americans in the bottom 40 percent of the income distribution pay negative federal income taxes because the Earned Income Tax Credit and the Child Tax Credit actually pay cash to millions of low-income families with children.
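A stylized sketch of that mechanism, using made-up parameters rather than the real EITC or Child Tax Credit schedules, shows how a refundable credit can push a family's net federal income tax below zero:

```python
# Stylized refundable credit with the EITC's general shape (phase-in, plateau,
# phase-out), using invented parameters. It is not the actual EITC/CTC schedule;
# it only illustrates how net federal income tax can be negative for low earners.

def stylized_credit(earnings, phase_in_rate=0.34, max_credit=5_000,
                    phase_out_start=20_000, phase_out_rate=0.16):
    """Credit rises with earnings, plateaus, then phases out at higher earnings."""
    credit = min(earnings * phase_in_rate, max_credit)
    credit -= max(0.0, earnings - phase_out_start) * phase_out_rate
    return max(0.0, credit)

def net_federal_income_tax(earnings, tax_before_credits):
    # The credit is refundable: any amount beyond the tax owed is paid out in cash,
    # so the net figure can go negative.
    return tax_before_credits - stylized_credit(earnings)

for earnings, tax in [(12_000, 0), (22_000, 900), (45_000, 3_200)]:
    print(earnings, round(net_federal_income_tax(earnings, tax)))
# 12000 -4080   <- net recipient
# 22000 -3780   <- net recipient
# 45000  2200   <- net payer
```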
IRS data on incomes incorporate only the small fraction of transfer income that is taxable. Census data includes all cash transfer payments but leaves out non-cash transfers—among which Medicaid and Medicare benefits are the most important—and taxes.
The bottom line is that market income has grown, and government programs have greatly increased the well-being of low-income and middle-class households. The middle class is not shrinking or becoming impoverished. Rather, changes in workers’ skills and employers’ demand for them, along with changes in families’ size and makeup, have caused the incomes of the well-off to climb much faster than the incomes of most Americans.
Rising inequality can occur even as everyone experiences improvement in living standards.
Even so, unless the nation’s education system improves, especially for children from poor families, millions of working Americans will continue to rely on government transfer payments. This signals a real problem. Millions of individuals and families at the bottom and in the middle of the income distribution are dependent on government to enjoy a decent or rising standard of living. While the U.S. middle class may not be shrinking, the trends outlined above make clear why this is no reason for complacency. Today’s form of widespread dependency on government benefits has helped stem a decline in income, but far better would be to have more people earning all or nearly all their income through work. Getting there, though, will require deeper reforms in the structure of the U.S. education system.
---
1 Sean F. Reardon, “The Widening Academic Achievement Gap Between the Rich and the Poor,” in Whither Opportunity? Rising Inequality, Schools, and Children’s Life Chances, ed. Greg J. Duncan and Richard J. Murnane (New York: Russell Sage Foundation, 2011).
Tuesday, December 25, 2012
Smarter Ways to Discipline Children
Smarter Ways to Discipline Children. By Andrea Petersen
Research Suggests Which Strategies Really Get Children to Behave; How Timeouts Can Work Better. WSJ, December 24, 2012
http://online.wsj.com/article/SB10001424127887323277504578189680452680490.html
When it comes to disciplining her generally well-behaved kids, Heather Henderson has tried all the popular tricks. She's tried taking toys away. (Her boys, ages 4 and 6, never miss them.) She's tried calm explanations about why a particular behavior—like hitting your brother—is wrong. (It doesn't seem to sink in.) And she's tried timeouts. "The older one will scream and yell and bang on walls. He just loses it," says the 41-year-old stay-at-home mother in Syracuse, N.Y.
What can be more effective are techniques that psychologists often use with the most difficult kids, including children with attention deficit hyperactivity disorder and oppositional defiant disorder. Approaches with names like "parent management training" and "parent-child interaction therapy" are backed by hundreds of research studies, and they work on typical kids, too. But while some of the approaches' components find their way into popular advice books, the tactics remain little known among the general public.
The general strategy is this: Instead of just focusing on what happens when a child acts out, parents should first decide what behaviors they want to see in their kids (cleaning their room, getting ready for school on time, playing nicely with a sibling). Then they praise those behaviors when they see them. "You start praising them and it increases the frequency of good behavior," says Timothy Verduin, clinical assistant professor of child and adolescent psychiatry at the Child Study Center at NYU Langone Medical Center in New York.
This sounds simple, but in real life can be tough. People's brains have a "negativity bias," says Alan E. Kazdin, a professor of psychology and child psychiatry at Yale University and director of the Yale Parenting Center. We pay more attention to when kids misbehave than when they act like angels. Dr. Kazdin recommends at least three or four instances of praise for good behavior for every timeout a kid gets. For young children, praise needs to be effusive and include a hug or some other physical affection, he says.
According to parent management training, when a child does mess up, parents should use mild negative consequences (a short timeout or a verbal reprimand without shouting).
Giving a child consequences runs counter to some popular advice that parents should only praise their kids. But reprimands and negative nonverbal responses like stern looks, timeouts and taking away privileges led to greater compliance by kids, according to a review article published this month in the journal Clinical Child and Family Psychology Review.
"There's a lot of fear around punishment out there," says Daniela J. Owen, a clinical psychologist at the San Francisco Bay Area Center for Cognitive Therapy in Oakland, Calif., and the lead author of the study. "Children benefit from boundaries and limits." The study found that praise and positive nonverbal responses like hugs and rewards like ice cream or stickers, however, didn't lead to greater compliance in the short term. "If your child is cleaning up and he puts a block in the box and you say 'great job,' it doesn't mean the child is likely to put another block in the box," says Dr. Owen.
But in the long run, regular praise does make a child more likely to comply, possibly because the consistent praise strengthens the parent-child relationship overall, Dr. Owen says. The article reviewed 41 studies looking at discipline strategies and child compliance.
Parents who look for discipline guidance often find conflicting advice from the avalanche of books and mommy blogs and the growing number of so-called parent coaches. (In 2011, 3,520 parenting books were published or distributed in the U.S., up from 2,774 in 2007, according to the Bowker Books In Print database.)
"Many of the things that are recommended we know now to be wrong," says Dr. Kazdin, a leading expert on parent management training. "It is the equivalent of telling people to smoke a lot for their health."
Parents often torpedo their discipline efforts by giving vague, conditional commands and not giving kids enough time to comply with them, says Dr. Verduin, who practices parent-child interaction therapy. When crossing the street, "A bad command would be, 'be careful.' A good command would be 'hold my hand,' " he says. He also instructs parents to count to five to themselves after giving a child a directive, like, for example, "Put on your coat." "Most parents wait a second or two," he says, before making another command, which can easily devolve into yelling and threats.
The techniques are applicable to all ages, but psychologists note that starting early is better. Once kids hit about 10 or 11, discipline gets a lot harder. "Parents don't have as much leverage" with tweens and teens, says Dr. Verduin. "Kids don't care as much what the parents think about them."
Some parents try and reason with young children, which Dr. Kazdin says is bound to fail to change a kid's behavior. Reason doesn't change behavior, which is why stop-smoking messages don't usually work, Dr. Kazdin says. Overly harsh punishments also fail. "One of the side effects of punishment is noncompliance and aggression," he says.
Spanking, in particular, has been linked to aggressive behavior in kids and anger problems and increased marital conflict later on in adulthood. Still, 26% of parents "often" or "sometimes" spank their 19-to-35-month-old children, according to a 2004 study in the journal Pediatrics, which analyzed survey data collected by the federal government from 2,068 parents of young children.
At the Yale Parenting Center, psychologists have found that getting kids to "practice" temper tantrums can lessen their frequency and intensity. Dr. Kazdin recommends that parents have their kids "practice" once or twice a day. Gradually, ask the child to delete certain unwanted behaviors from the tantrum, like kicking or screaming. Then effusively praise those diluted tantrums. Soon, for most children, "the real tantrums start to change," he says. "From one to three weeks, they are kind of over." As for whining, Dr. Kazdin recommends whining right along with your child. "It changes the stimulus. You will likely end up laughing," he says.
Researchers noted that not every technique is effective for every child. Some parents find other creative solutions that work for their kids.
Karen Pesapane has found that yelling "pillow fight" when her two kids are arguing can put a halt to the bickering. "Their sour attitudes change almost immediately into silliness and I inevitably become their favorite target," said Ms. Pesapane, a 34-year-old from Silver Spring, Md., who works in fundraising for a nonprofit and has a daughter, 10, and a son, 6.
Dayna Even has found spending one hour a day fully focused on her 6-year-old son, Maximilian, means "he's less likely to act out, he's more likely to play independently and less likely to interrupt adults," says the 51-year-old writer and tutor in Kailua, Hawaii.
Parents need to take a child's age into account. Benjamin Siegel, professor of pediatrics at the Boston University School of Medicine, notes that it isn't until about age 3 that children can really start to understand and follow rules. Dr. Siegel is the chair of the American Academy of Pediatrics' committee that is currently reworking the organization's guidelines on discipline, last updated in 1998.
Monday, December 24, 2012
A case study in the dangers of the Law of the Sea Treaty
Lawless at Sea. WSJ Editorial
A case study in the dangers of the Law of the Sea Treaty.
The Wall Street Journal, December 24, 2012, on page A12
http://online.wsj.com/article/SB10001424127887324407504578187523862827016.html
The curious case of the U.S. hedge fund, the Argentine ship and Ghana is getting curiouser, and now it has taken a turn against national sovereignty. That's the only reasonable conclusion after a bizarre ruling this month from the International Tribunal for the Law of the Sea in Hamburg.
The tribunal—who knew it existed?—ordered the Republic of Ghana to overrule a decision of its own judiciary that had enforced a U.S. court judgment. The Hamburg court is the misbegotten child of the 1982 United Nations Convention on the Law of the Sea. Sold as a treaty to ensure the free movement of people and goods on the high seas, it was rejected by Ronald Reagan as an effort to control and redistribute the resources of the world's oceans.
The U.S. never has ratified the treaty, despite a push by President Obama, and now the solons of Hamburg have demonstrated the wisdom of that decision. While debates on the treaty have centered around the powers a country might enjoy hundreds of miles off its coast, many analysts have simply assumed that nations would still exercise control over the waters just offshore.
Now the Hamburg court has trampled local law in a case involving a ship sitting in port, and every country is now on notice that a Hamburg court is claiming authority over its internal waters.
Specifically, Hamburg ordered Ghana to release a sailing ship owned by the Argentine navy. On October 2, a subsidiary of U.S. investment fund Elliott Management persuaded a Ghanaian judge to order the seizure of the vessel. The old-fashioned schooner, used to train cadets, was on a tour of West Africa.
U.S. hedge funds don't normally seize naval ships, but in this case Elliott and the Ghanaian court are on solid ground. Elliott owns Argentine bonds on which Buenos Aires has been refusing to pay since its 2001 default. Elliott argues that a contract is a contract, and a federal court in New York agrees. Argentina had freely decided to issue its debt in U.S. capital markets and had agreed in its bond contracts to waive the sovereign immunity that would normally prevent lenders from seizing things like three-masted frigates.
To his credit, Judge Richard Adjei-Frimpong of Ghana's commercial court noted that Argentina had specifically waived its immunity when borrowing the money and that under Ghanaian law the ship could therefore be attached by creditors with a valid U.S. judgment registered in Ghana. He ordered the ship held at port until Buenos Aires starts following the orders of the U.S. court.
But in its recent ruling, which ordered Ghana to release the ship by December 22, the Hamburg court claimed that international law requires immunity for the Argentine "warship," as if Argentina never waived immunity and as if this is an actual warship. On Wednesday, Ghana released the vessel, and the ship set sail from the port of Tema for its trans-Atlantic voyage.
So here we have a case in which a small African nation admirably tried to adhere to the rule of law. Yet it was bullied by a global tribunal serving the ends of Argentina, which has brazenly violated the law in refusing to pay its debts and defying Ghana's court order. The next time the Senate moves to ratify the Law of the Sea Treaty, Ghana should be exhibit A for opponents.
Saturday, December 22, 2012
Novel Drug Approvals Strong in 2012
Novel Drug Approvals Strong in 2012
Dec 21, 2012
http://www.innovation.org/index.cfm/NewsCenter/Newsletters?NID=208
Over the past year, biopharmaceutical researchers' work has continued to yield innovative treatments to improve the lives of patients. In fiscal year (FY) 2012 (October 1, 2011 – September 30, 2012), the U.S. Food and Drug Administration (FDA) approved 35 new medicines, keeping pace with the previous fiscal year’s approvals and representing one of the highest levels of FDA approvals in recent years.[i] For the calendar year FDA is on track to approve more new medicines than any year since 2004.[ii]
A recent report from the FDA highlights groundbreaking medicines that treat diseases ranging from the very common to the very rare. Some are the first treatment option available for a condition; others improve care for already treatable diseases.
Notable approvals in FY 2012 include:
- A breakthrough personalized medicine for a rare form of cystic fibrosis;
- The first approved human cord blood product;
- A total of ten drugs to treat cancer, including the first treatments for advanced basal cell carcinoma and myelofibrosis and a targeted therapy for HER2-positive metastatic breast cancer;
- Nine treatments for rare diseases; and
- Important new therapies for HIV, macular degeneration, and meningitis.
Building on these noteworthy approvals, we look to the new year where continued innovation is needed to leverage our growing understanding of the underpinnings of human disease and to harness the power of scientific research tools to discover and develop new medicines.
To learn more about the more than 3,200 new medicines in development visit http://www.innovation.org/index.cfm/FutureOfInnovation/NewMedicinesinDevelopment.
Thursday, December 20, 2012
IMF's "European Union: Financial Sector Assessment," Preliminary Conclusions
"European Union: Financial Sector Assessment," Preliminary Conclusions by the IMF Staff
Press Release No. 12/500
Dec 20, 2012
http://www.imf.org/external/np/sec/pr/2012/pr12500.htm
A Financial Sector Assessment Program (FSAP) team led by the Monetary and Capital Markets Department of the International Monetary Fund (IMF) visited the European Union (EU) during November 27–December 13, 2012, to conduct a first-ever overall EU-wide assessment of the soundness and stability of the EU’s financial sector (EU FSAP). The EU FSAP builds on the 2011 European Financial Stability Framework Exercise (EFFE) and on recent national FSAPs in EU member states.
The mission arrived at the following preliminary conclusions, which are subject to review and consultation with European institutions and national authorities:
The EU is facing great challenges, with continuing banking and sovereign debt crises in some parts of the Union. Significant progress has been made in recent months in laying the groundwork for strengthening the EU’s financial sector; the policy decisions now need to be implemented. Although the necessary agenda is broad, the details of the agreed frameworks need to be put in place to avoid delays in reaching consensus on key issues.
The present conjuncture makes management of the situation particularly difficult. The crisis reveals that handling financial system problems at the national level has been costly, calling for a Europe-wide approach. Interlinkages among the countries of the EU are particularly pronounced, and the need to provide more certainty on the health of banks has led to proposals for establishing a single supervisory mechanism (SSM) associated with the European Central Bank (ECB), initially for the euro area but potentially more widely in the EU.
The mission’s recommendations include the following:
Steps toward banking union
The December 13 EU Council agreement on the SSM is a strong achievement. It needs to be followed up with a structure that has as few gaps as possible, including with regard to the interaction of the SSM with national authorities under the prospective harmonized resolution and deposit guarantee arrangements. The SSM is only an initial step toward an effective Banking Union—actions toward a single resolution authority with common backstops, a deposit guarantee scheme, and a single rulebook will also be essential.
Reinvigorating the single financial market in Europe
Harmonization of the regulatory structure across Europe needs to be expedited. EU institutions should accelerate passage of the Fourth Capital Requirements Directive, the Capital Requirements Regulation, the directives harmonizing resolution and deposit insurance, and the Solvency II regulatory regime for insurance, by mid-2013 at the latest, thus enabling the issuance of single rulebooks for banking, insurance, and securities. Moreover, the European Commission should increase the resources and powers of the European Supervisory Authorities as needed to achieve those mandates, while also enhancing their operational independence.
Improved and expanded stress testing
European stress testing needs to go beyond microprudential solvency, and increasingly serve to identify other vulnerabilities, such as liquidity risks and structural weaknesses. Confidence in the results of stress tests can be enhanced by an asset quality review, harmonized definitions of non-performing loans, and standardized loan classification, while maintaining a high level of disclosure. Experience suggests that the benefits of a bold approach outweigh the risks.
Splitting bank and sovereign risk
Measures must be pursued to separate bank and sovereign risk, including by making the ESM operational expeditiously for bank recapitalizations. Strong capital buffers will be important for the banks to perform their intermediating role effectively, to stimulate growth, and so safeguard financial stability.
Effective crisis management framework to minimize costs to taxpayers
Taxpayers’ potential liability following bank failures can be reduced by resolution regimes that include statutory bail-in powers. A common deposit insurance fund, preferably financed ex ante by levies on the banking sector, could also reduce the cost to taxpayers, even if it takes time to build up reserves. Granting preferential rights to depositor guarantee schemes in the creditor hierarchy could also reduce costs, particularly while guarantee funds are being built.
The European Commission and member states should assess the costs and benefits of the various plans for structural measures aimed at reducing banks’ complexity and potential taxpayer liability with a view towards formulating a coordinated proposal. If adopted, it would be important to ensure that such measures are complementary to the international reform agenda, not cause distortions in the single market, and not lead to regulatory arbitrage.
Lastly, the mission would like to extend its thanks to the European institutions for their close cooperation and assistance in completing this FSAP analysis.
Wednesday, December 19, 2012
The future of financial globalisation - BIS Annual Conference
The future of financial globalisation
http://www.bis.org/publ/bppdf/bispap69.htm
The BIS 11th Annual Conference took place in Lucerne, Switzerland on 21-22 June 2012. The event brought together senior representatives of central banks and academic institutions, who exchanged views on the conference theme of "The future of financial globalisation". This volume contains the opening address of Stephen Cecchetti (Economic Adviser, BIS), a keynote address from Amartya Sen (Harvard University), and the available contributions of the policy panel on "Will financial globalisation survive?". The participants in the policy panel discussion, chaired by Jaime Caruana (General Manager, BIS), were Ravi Menon (Monetary Authority of Singapore), Jacob Frenkel (JP Morgan Chase International) and José Dario Uribe Escobar (Banco de la República).
The papers presented at the conference and the discussants' comments are released as BIS Working Papers 397 to 400:
1 Financial Globalisation and the Crisis, BIS Working Papers No 397
by Philip R. Lane
Comments by Dani Rodrik
The global financial crisis provides an important testing ground for the financial globalisation model. We ask three questions. First, did financial globalisation materially contribute to the origination of the global financial crisis? Second, once the crisis occurred, how did financial globalisation affect the incidence and propagation of the crisis across different countries? Third, how has financial globalisation affected the management of the crisis at national and international levels?
2 The great leveraging, BIS Working Papers No 398
by Alan M. Taylor
Comments by Barry Eichengreen and Dr. Y V Reddy
What can history tell us about the relationship between the banking system, financial crises, the global economy, and economic performance? Evidence shows that in the advanced economies we live in a world that is more financialized than ever before, as measured by the importance of credit in the economy. I term this long-run evolution "The Great Leveraging" and present a ten-point examination of its main contours and implications.
3 Global safe assets, BIS Working Papers No 399
by Pierre-Olivier Gourinchas and Olivier Jeanne
Comments by Peter R Fisher and Fabrizio Saccomanni
Will the world run out of 'safe assets' and what would be the consequences on global financial stability? We argue that in a world with competing private stores of value, the global economic system tends to favor the riskiest ones. Privately produced stores of value cannot provide sufficient insurance against global shocks. Only public safe assets may, if appropriately supported by monetary policy. We draw some implications for the global financial system.
4 Capital Flows and the Risk-Taking Channel of Monetary Policy, BIS Working Papers No 400
by Valentina Bruno and Hyun Song Shin
Comments by Lars E O Svensson and John B Taylor
This paper examines the relationship between the low interest rates maintained by advanced economy central banks and credit booms in emerging economies. In a model with cross-border banking, low funding rates increase credit supply, but the initial shock is amplified through the "risk-taking channel" of monetary policy, in which greater risk-taking interacts with dampened measured risks driven by currency appreciation to create a feedback loop. In an empirical investigation using VAR analysis, we find that expectations of lower short-term rates dampen measured risks and stimulate cross-border banking sector capital flows.
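As a rough illustration of that empirical approach, the sketch below fits a small vector autoregression on synthetic data with statsmodels; the series names and the data-generating process are placeholders standing in for the paper's variables, not the authors' actual dataset or specification.

```python
# Minimal VAR sketch on synthetic data (placeholder series, not the paper's data).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200

# Persistent stand-in for expected short-term rates (AR(1) noise).
rate_expectations = np.zeros(n)
for t in range(1, n):
    rate_expectations[t] = 0.8 * rate_expectations[t - 1] + rng.normal(scale=0.2)

# Measured risk dampens when expected rates fall; flows expand when measured risk falls.
measured_risk = -0.5 * rate_expectations + rng.normal(scale=0.5, size=n)
cross_border_flows = -0.8 * measured_risk + rng.normal(scale=0.5, size=n)

data = pd.DataFrame({
    "rate_expectations": rate_expectations,
    "measured_risk": measured_risk,
    "cross_border_flows": cross_border_flows,
})

results = VAR(data).fit(maxlags=4, ic="aic")  # lag length chosen by AIC
print(results.summary())

# Impulse responses over 10 periods: how measured risk and banking flows react
# to a shock to expected short-term rates.
irf = results.irf(10)
irf.plot(impulse="rate_expectations")
```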
Tuesday, December 18, 2012
The rise of the older worker
The rise of the older worker, by Jim Hillage, research director
Institute for Employment Studies
December 12, 2012
http://www.employment-studies.co.uk/press/26_12.php
There are more people working in the UK today than at any time in our history. Today's labour market statistics show another increase in the number employed, taking the total to 29,600,000, up 40,000 on the previous quarter and 500,000 on a year ago.
Almost half of the rise has been among people aged 50 or over, with the fastest rate of increase occurring among those 65 or over, particularly among older women.
There are now almost a million people aged 65 or over in jobs, double the number ten years ago and up 13 per cent over the past year. Although these older workers comprise only three per cent of the working population, they account for 20 per cent of the recent growth in employment (a rough arithmetic check follows the list below). However, this group has a very different labour market profile from the rest of the working population, particularly younger people, and there is no evidence to suggest that older workers are gaining employment at the expense of the younger generation. For example:
- 30 per cent of older workers (ie aged 65+) work in managerial and professional jobs, compared with only nine per cent of younger workers (aged 16 to 24). Conversely, 34 per cent of young people work in sales, care and leisure jobs, compared with only 14 per cent of their older counterparts.
- Nearly four in ten older workers are self-employed, compared with five per cent of younger workers.
- Most (69 per cent) of 65 plus year olds work part-time, compared with 39 per cent of young workers (and 27 per cent of all those in work).
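Here is the rough arithmetic check mentioned above, using only the rounded figures in the release, so the results are approximate:

```python
# Approximate check of the headline shares using the rounded figures quoted above.
total_employed = 29_600_000   # all in work
older_workers = 1_000_000     # "almost a million" aged 65 or over
annual_growth = 500_000       # rise in total employment over the year
older_growth_rate = 0.13      # 65+ employment up 13 per cent on the year

print(f"65+ share of all in work: {older_workers / total_employed:.0%}")        # ~3%

# If 65+ employment is about 1m now and grew 13%, the increase over the year was:
older_increase = older_workers - older_workers / (1 + older_growth_rate)
print(f"65+ share of the year's growth: {older_increase / annual_growth:.0%}")  # ~23%, close to the 20% cited
```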
‘There are a number of reasons why older workers are staying on in work. In some cases employers want to retain their skills and experience and encourage them to stay on, albeit on a part-time basis, and most older employees have been working for their employer for at least ten years and often in smaller workplaces. Conversely, some older people have to stay in work as their pensions are inadequate and it is interesting to note that employment of older workers is highest in London and the South East, where living costs are highest. Finally, there is also a growing group of self-employed who still want to retain their work connections and interests.’
---
Update: Long-Term Jobless Begin to Find Work. By Ben Casselman
The Wall Street Journal, January 11, 2013, on page A2
http://online.wsj.com/article/SB10001424127887323442804578233390580359994.html
The epidemic of long-term unemployment, one of the most pernicious and persistent challenges bedeviling the U.S. economy, is finally showing signs of easing.
The long-term unemployed—those out of work more than six months—made up 39.1% of all job seekers in December, according to the Labor Department, the first time that figure has dropped below 40% in more than three years.
The problem is far from solved. Nearly 4.8 million Americans have been out of work for more than six months, down from a peak of more than 6.5 million in 2010 but still a level without precedent since World War II.
The recent signs of progress mark a reversal from earlier in the recovery, when long-term unemployment proved resistant to improvement elsewhere in the labor market.
Total unemployment peaked in late 2009 and has dropped relatively steadily since then, while the number of long-term unemployed continued to rise into 2010 and then fell only slowly through much of 2011.
More recently, however, unemployment has fallen more quickly among the long-term jobless than among the broader population. In the past year, the number of long-term unemployed workers has dropped by 830,000, accounting for nearly the entire 843,000-person drop in overall joblessness.
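A quick back-of-the-envelope check, using only the rounded figures quoted above, suggests the numbers hang together (the implied total is approximate):

```python
# Approximate consistency check of the figures quoted above (all rounded).
long_term_unemployed = 4.8e6   # out of work more than six months
long_term_share = 0.391        # their share of all job seekers in December

implied_total_unemployed = long_term_unemployed / long_term_share
print(f"implied total unemployed: {implied_total_unemployed / 1e6:.1f} million")  # ~12.3 million

# Nearly all of the past year's decline in joblessness came from the long-term group:
print(f"long-term share of the overall drop: {830_000 / 843_000:.0%}")            # ~98%
```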
When Michael Leahy lost his job as a manager at a Connecticut bank in 2010, the state had already shed about 10,000 financial-sector jobs in the previous two years and he had difficulty even landing an interview. By the time banks started hiring again, Mr. Leahy, now 59, had been out of work for more than a year and found himself getting passed over for candidates with jobs or ones who had been laid off more recently.
In July, however, Mr. Leahy was accepted into a program for the long-term unemployed run by the Work Place, a local workforce development agency. The program helped Mr. Leahy improve his resume and interviewing skills, and ultimately connected him with a local bank that was hiring.
Mr. Leahy began a new job in December. The chance to work again in his chosen field, he said, was more than worth the roughly 15% pay cut from his previous job.
"The thing that surprised me is this positive feeling I have every day of getting up in the morning and knowing I have a place to go to and a place where people are waiting for me," Mr. Leahy said.
[http://si.wsj.net/public/resources/images/NA-BU523B_ECONO_D_20130110211902.jpg]
The decline in long-term unemployment is good news for the broader economy. Many economists, including Federal Reserve Chairman Ben Bernanke, feared that many long-term unemployed workers would become permanently unemployable, creating a "structural" unemployment problem akin to what Europe suffered in the 1980s. But those fears are beginning to recede along with the ranks of the long-term unemployed.
"I don't think it's the case that the long-term unemployed are no longer employable," said Omair Sharif, an economist for RBS Securities Inc. "In fact, they've been the ones getting the jobs."
Not all the drop in long-term joblessness can be attributed to workers finding positions. In recent years, millions of Americans have given up looking for work, at which point they no longer count as "unemployed" in official statistics.
The recent drop in long-term unemployment, however, doesn't appear to be due to such dropouts. The number of people who aren't in the labor force but say they want a job has risen by only about 400,000 in the past year, while the number of Americans with jobs has risen by 2.4 million. That suggests much of the improvement is due to people finding jobs, not dropping out, Mr. Sharif said.
The average unemployed worker has now been looking for 38 weeks, down from a peak of nearly 41 weeks and the lowest level since early 2011.
The long-term unemployed still face grim odds of finding work. About 10% of long-term job seekers found work in April, the most recent month for which a detailed breakdown is available, compared with about a quarter of more recently laid-off workers. The ranks of the short-term jobless are more quickly refreshed by newly laid-off workers, however. As a result, the total number of short-term unemployed has fallen more slowly in recent months, even though individual workers still stand a far better chance of finding work early in their search.
And when the long-term unemployed do find work, their new jobs generally pay less than their old ones—often much less. A recent study from economists at Boston University, Columbia University and the Institute for Employment Research found that every additional year out of work reduces workers' wages when they do find a job by 11%.
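Taken at face value, that estimate implies a steep cumulative penalty for very long spells. The illustration below assumes, for simplicity, that the 11% reduction compounds with each additional year out of work; the study may measure the effect differently, so treat this only as a rough sense of the magnitude:

```python
# Illustrative wage penalty, assuming the estimated 11% reduction per additional
# year out of work compounds multiplicatively (a simplifying assumption, not
# necessarily the study's specification).

penalty_per_year = 0.11

for years_out in (1, 2, 3):
    remaining_wage = (1 - penalty_per_year) ** years_out
    print(f"{years_out} year(s) out of work -> re-employment wage about {remaining_wage:.0%} of the old wage")
# 1 year  -> about 89%
# 2 years -> about 79%
# 3 years -> about 70%
```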
Moreover, the recent gains have yet to reach the longest of the long-term unemployed: While the number of people unemployed for between six months and two years has fallen by 12% in the past year, the ranks of those jobless for three years or longer have barely budged at all.
Patricia Soprych, a 51-year-old widow in Skokie, Ill., recently got a job as a grocery-store cashier after more than a year of looking for work. But the job is part-time and pays the minimum wage, which she finds barely enough to make ends meet.
"You say the job market's getting better. Yeah, for these $8.25-an-hour jobs," Ms. Soprych said.
Economists cite several reasons for the drop in long-term unemployment. Most significant is the gradual healing of the broader labor market, which has seen the unemployment rate drop to 7.8% in December from a high of 10% in 2009. After initially benefiting mostly the more recently laid-off, that progress is now being felt among the longer-term jobless as well.
The gradual strengthening in the housing market could lead to more improvement. Many of the long-term unemployed are former construction workers who lost jobs when the housing bubble burst. Rising home building has yet to lead to a surge in construction employment, but many experts expect hiring to pick up in 2013.
Another possible factor behind the recent progress: the gradual reduction in emergency unemployment benefits available to laid-off workers. During the recession, Congress extended unemployment benefits to as long as 99 weeks in some states. Today, benefits last 73 weeks at most, and less time in many states. Research suggests that unemployment payments lead some recipients not to look as hard for jobs, and the loss of benefits may have pushed some job seekers to accept work they might otherwise have rejected, said Gary Burtless, an economist at the Brookings Institution.
---
Monday, December 17, 2012
Optimal Oil Production and the World Supply of Oil
Optimal Oil Production and the World Supply of Oil. By Nikolay Aleksandrov, Raphael Espinoza, and Lajos Gyurko
IMF Working Paper No. 12/294
Dec 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=40169.0
Summary: We study the optimal oil extraction strategy and the value of an oil field using a multiple real option approach. The numerical method is flexible enough to solve a model with several state variables, to discuss the effect of risk aversion, and to take into account uncertainty in the size of reserves. Optimal extraction in the baseline model is found to be volatile. If the oil producer is risk averse, production is more stable, but spare capacity is much higher than what is typically observed. We show that decisions are very sensitive to expectations on the equilibrium oil price using a mean reverting model of the oil price where the equilibrium price is also a random variable. Oil production was cut during the 2008–2009 crisis, and we find that the cut in production was larger for OPEC, for countries facing a lower discount rate, as predicted by the model, and for countries whose governments’ finances are less dependent on oil revenues. However, the net present value of a country’s oil reserves would be increased significantly (by 100 percent, in the most extreme case) if production was cut completely when prices fall below the country's threshold price. If several producers were to adopt such strategies, world oil prices would be higher but more stable.
Excerpts:
In this paper we investigate the optimal oil extraction strategy of a small oil producer facing uncertain oil prices. We use a multiple real option approach. Extracting a barrel of oil is similar to exercising a call option, i.e. oil production can be modeled as the right to produce a barrel of oil, with the payoff of the strategy depending on uncertain oil prices. Production is optimal if the payoff of extracting oil exceeds the value of leaving oil under the ground for later extraction (the continuation value). For an oil producer, the optimal extraction path corresponds to the optimal strategy of an investor holding a multiple real option with a finite number of exercises (finite reserves of oil). At any single point in time, the oil producer is also limited in the number of options he can exercise because of capacity constraints.
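The exercise rule described here can be stated compactly: at each date, produce up to capacity if the immediate payoff of a barrel exceeds the value of leaving it in the ground. The sketch below is only a schematic restatement of that rule; the function name and the way the continuation value is supplied are illustrative, not the paper's implementation:

```python
def extraction_decision(price, unit_cost, continuation_value, capacity, reserves):
    """Produce up to capacity if the payoff of extracting a barrel now
    (price minus extraction cost) exceeds the value of leaving it in the
    ground for later extraction (the continuation value)."""
    immediate_payoff = price - unit_cost
    if immediate_payoff > continuation_value:
        return min(capacity, reserves)   # exercise as many "options" as the constraints allow
    return 0.0                           # defer production and keep the options alive
```

In the paper's setting the continuation value is not a known number but must itself be estimated, which is what the Monte Carlo regression method described next is for.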
Our first contribution is to present the solution to the stochastic optimization problem as an exercise rule for a multiple real option and to solve the problem numerically using the Monte Carlo methods developed by Longstaff and Schwartz (2001) and Rogers (2002), and extended by Aleksandrov and Hambly (2010), Bender (2011), and Gyurko, Hambly and Witte (2011). The Monte Carlo regression method is flexible, and it remains accurate even for high-dimensional problems, i.e. when there are several state variables, for instance when the oil price process is driven by two state variables, when extraction costs are stochastic, or when the size of reserves is a random variable.
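As a rough illustration of the regression-based Monte Carlo approach the authors build on, the sketch below values a stylized field with a handful of exercise rights and at most one exercise per period. The price process, cost, basis functions and all parameters are assumptions chosen for illustration, not the paper's calibration or code:

```python
import numpy as np

# Regression-based Monte Carlo (in the spirit of Longstaff-Schwartz) for a
# "multiple real option": K exercise rights, at most one exercise per period.

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 40            # simulated price paths and decision dates
dt, r = 0.25, 0.05                       # quarterly steps, 5% discount rate
discount = np.exp(-r * dt)

# Mean-reverting (Ornstein-Uhlenbeck in logs) oil price, an illustrative choice.
kappa, sigma, log_mean = 1.0, 0.3, np.log(60.0)
log_p = np.full(n_paths, np.log(50.0))
prices = np.empty((n_steps + 1, n_paths))
prices[0] = np.exp(log_p)
for t in range(1, n_steps + 1):
    log_p += kappa * (log_mean - log_p) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    prices[t] = np.exp(log_p)

unit_cost, K = 25.0, 5                   # extraction cost per unit; number of exercise rights

def fitted_continuation(x, y):
    """Regress next-period values on simple polynomial basis functions of the price."""
    basis = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return basis @ coef

# value[k, i]: value on path i with k exercise rights remaining at the current date.
value = np.zeros((K + 1, n_paths))
value[1:] = np.maximum(prices[-1] - unit_cost, 0.0)   # at the last date, use at most one right

for t in range(n_steps - 1, -1, -1):                  # backward induction
    payoff = prices[t] - unit_cost
    new_value = np.zeros_like(value)
    for k in range(1, K + 1):
        cont_keep = fitted_continuation(prices[t], discount * value[k])      # keep all rights
        cont_use = fitted_continuation(prices[t], discount * value[k - 1])   # give one up now
        exercise = (payoff > 0) & (payoff + cont_use > cont_keep)
        new_value[k] = np.where(exercise,
                                payoff + discount * value[k - 1],
                                discount * value[k])
    value = new_value

print(f"Estimated value of the field with {K} exercise rights: {value[K].mean():.2f}")
```

The decision at each date compares two regression estimates, the continuation value of keeping all remaining rights and the value of giving one up now, and production occurs only where the immediate payoff more than closes the gap.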
We solve the real option problem for a small producer (with reserves of 12 billion barrels) and for a large producer (with reserves of 100 billion barrels) and compute the threshold below which it is optimal to defer production. In our baseline model, we find that the small producer should only produce when prices are high (higher than US$73 per barrel at 2000 constant prices), whereas for the large producer, full production is optimal as soon as prices exceed US$39. Optimal production is found to be volatile given the stochastic process of oil prices. As a result, we show that the net present value of oil reserves would be substantially higher if countries were willing to vary production when oil prices change. This result has important implications for oil production policy and for the design of macroeconomic policies that depend on inter-temporal and inter-generational equity considerations. It also implies that the world supply curve would be very elastic to prices if all countries were optimizing production as in the baseline model—and as a result, prices would tend to be higher but much less volatile.
We investigate why observed production is not as volatile as predicted by the baseline calibration of the model. One possible explanation is that producers are risk averse. Under this assumption, production is accelerated and is more stable, but a risk-averse producer should also maintain large spare capacity, a result at odds with the evidence that oil producers almost always produce at full capacity. A second potential explanation is that producers are uncertain about the actual size of their oil reserves. Using panel data on recoverable reserves, we show, however, that this uncertainty has historically been diminishing with time, and therefore this explanation is incomplete, since even mature oil exporters maintain low spare capacity. A third explanation may be that the oil price process, and in particular the equilibrium oil price, is unknown to the decision makers. Indeed, the optimal reaction to an increase in oil prices depends on whether the price increase is perceived to be temporary or to reflect a permanent shift in prices. If shocks are known to be primarily temporary, production should increase in the face of oil price increases. But if shocks are thought to be accompanied by movements in the equilibrium price, the continuation value jumps at the same time as the immediate payoff from extracting oil. In that case an increase in price may not result in an increase in production. Faced with uncertain views on the optimal strategy, the safe decision may well be to remain prudent about changes in production.
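The distinction between temporary shocks and shifts in the equilibrium price can be made concrete with a two-factor process in which the spot price reverts toward an equilibrium level that itself moves permanently. The parameters below are illustrative, not the paper's estimates:

```python
import numpy as np

# Illustrative two-factor price process: the log spot price mean-reverts toward a
# log equilibrium price, and the equilibrium itself follows a random walk. A shock
# to the equilibrium raises both today's payoff and the continuation value, which
# is why it need not trigger additional production.

rng = np.random.default_rng(2)
n_steps, dt = 200, 0.25
kappa, sigma_spot, sigma_eq = 1.5, 0.25, 0.08    # illustrative parameters

log_spot = log_eq = np.log(60.0)
spot_path, eq_path = [], []
for _ in range(n_steps):
    log_eq += sigma_eq * np.sqrt(dt) * rng.standard_normal()                  # permanent shocks
    log_spot += kappa * (log_eq - log_spot) * dt \
                + sigma_spot * np.sqrt(dt) * rng.standard_normal()            # temporary shocks
    spot_path.append(np.exp(log_spot))
    eq_path.append(np.exp(log_eq))

print(f"Final spot price: {spot_path[-1]:.1f}, final equilibrium price: {eq_path[-1]:.1f}")
```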
In practice, world oil production is partially cut in the face of negative demand shocks. The last section of the paper investigates whether the reduction in oil production during the 2008–2009 crisis can be explained by the determinants predicted by the model. We find that the cut in production was larger for OPEC, for countries facing a lower discount rate, as predicted by the model, and for countries with government finances less dependent on oil revenues.