Under Secretary for International Affairs Dr. Lael Brainard Testimony Before the House Committee on Ways and Means on the U.S.-China Economic Relationship
http://www.treasury.gov/press-center/press-releases/Pages/tg1336.aspx
Oct 25, 2011
Chairman Camp, Ranking Member Levin, distinguished members of the Committee, thank you for the opportunity to testify today on our economic relationship with China.
Challenges and Opportunities
Since the outset, President Obama has placed a high priority on pursuing a more balanced and fair economic relationship with China. This is central to our goal of doubling exports in five years and supporting several million U.S. jobs. And, indeed, since 2009, U.S. exports to China have grown by 61 percent, nearly twice as fast as our exports to the rest of the world. Despite this progress, the playing field is still uneven. To secure the future for our children, the Administration will continue working hard to get the economic relationship right.
China needs to take action at an accelerated rate, so that the potential of our relationship translates into real near-term benefits for our companies and workers. China’s leaders understand that China must shift to domestic consumption-led growth, provide a secure environment for the protection and enforcement of intellectual property rights, level the playing field between state-owned and private enterprises—domestic and foreign, and liberalize the exchange rate and financial markets. China needs to take these actions to sustain its own growth, as well as to address the concerns of its trade partners. On these issues, we have actively pressed China to accelerate the pace of reform in order to achieve more balanced growth and create fairer competition, and there has been some progress, but there are strong interests within China that favor a go-slow approach.
In the wake of the financial crisis, with American households saving more and demand weak in Europe and Japan, our exports increasingly will be directed at the fast-growing emerging markets if we are to create the good jobs with good wages that we need to grow our economy. For the next decade, China is expected to be the biggest source of demand growth in the global economy. The International Monetary Fund (IMF) forecasts that China’s growth will average 9.4 percent per year over the next five years, and the Organisation for Economic Co-operation and Development (OECD) estimates that China’s share of global imports will increase from six percent in 2008 to over nine percent in 2012. This is a market opportunity that we must seize.
Foreign investment also is playing an increasingly important role in supporting jobs in the United States, and we expect this trend to continue. In 2009, majority-owned U.S. affiliates of foreign companies were an important contributor to U.S. economic activity, employing approximately five percent of the U.S. private workforce and 17 percent in the U.S. manufacturing sector. In the decade ahead, China will be a fast-growing source of foreign direct investment among major economies. Indeed, the stock of Chinese foreign investment in the United States more than doubled last year alone. Protecting national security is always our first concern, but where Chinese investment does not affect national security, we should welcome it. To create jobs here at home, it matters whether Chinese investment ultimately ends up in Anhui province, Argentina, or Alabama.
In order to derive a better balance of benefits from trade and investment opportunities with China, we need to see progress on three key challenges. First, in many sectors in which the United States is competitive globally, China must address a range of discriminatory policies, including those that favor domestic state-owned enterprises through barriers to foreign goods, services, and investment, as well as the provision of subsidies and preferential access to raw materials, land, credit, and government procurement. Second, rampant theft of intellectual property in China lowers the return to investments in research and development and innovation that represent a fundamental source of our country’s national competitive edge. Third, China must shift to a pattern of growth that can be sustained, drawing on home-grown demand rather than excessive dependence on exports. This requires that China bring its exchange rate into alignment with market fundamentals.
China’s Reforms
China’s current headline growth rate may look enviable right now, but China will face daunting challenges in coming years. We have a tremendous stake in ensuring that China deals with those challenges in a way that fundamentally reorients its growth pattern through greater balance and fairer competition.
China has had remarkable success in lifting hundreds of millions of its citizens out of poverty. But it has come at some cost, including large-scale environmental degradation and an economy that spends much more on investment than goods and services for its people. Chinese leaders understand that, with per capita income of around one-tenth of that of the United States in 2011,[1] and per capita household spending less than one-twentieth of that in the United States, the way China grew in the last two decades will not get them to the next stage of development. Instead, China will face what economists call the “middle income trap.”
China’s excessive dependence on growth driven by investment and by exports to advanced economies will need to change. During the 2008-2009 global crisis, China was able to sustain growth through a massive credit-fueled investment boom, which will leave a financial hangover for years. China risks repeating the experience of other fast-growing Asian economies that saw sharp falls in growth soon after their investment-to-gross domestic product (GDP) ratios peaked. With investment reaching an all-time high of almost 48 percent of GDP, moreover, China’s peak is higher than those of other Asian economies.
China already is seeing rapidly slowing labor force growth, and the number of workers in China soon will begin to decline. While China maintains many advantages, a study by KPMG concluded that rising labor costs in China are shifting a growing share of the market for light manufactured goods to other producers in Asia.[2] A recent study by the Boston Consulting Group similarly concluded that China’s cost advantage is rapidly eroding.[3]
In the face of overinvestment and rising wages, China will need to move up the value chain. But China’s weak protection and enforcement of intellectual property rights threaten to retard the development of Chinese innovation and Chinese brands.
And the adjustment process, whether toward greater consumption-led growth, higher-value services, or innovation-intensive activities, is hampered by China’s continued excessive reliance on administrative controls, such as credit quotas to maintain price stability and intervention to temper exchange rate adjustment. These controls are subject to political determinations and thus leave policy making behind the curve. They are reflected in a financial system that fails to offer Chinese households financial assets that keep up with inflation, let alone economic growth, and that starves China’s most innovative firms and sectors of capital despite massive domestic savings, while also depriving foreign competitors of the opportunity to offer a full range of products and services. Relying more on market-based prices, such as exchange and interest rates that facilitate adjustment to changing conditions, would make China’s growth more resilient and avoid an excessive build-up of foreign exchange reserves.
For sustained growth, China wants greater access to U.S. technologies and high-tech dual-use exports, wants progress on bilateral investment, and wants its exports to be accorded the same terms of access as exports from other market economies. We are willing to make progress on these issues, but our ability to move will depend in part on how much progress we see from China on issues that are important to us.
U.S. Engagement and Enforcement
We have worked tirelessly across the Administration to pursue a tight set of priorities with China – using the Strategic and Economic Dialogue (S&ED), as well as the Joint Commission on Commerce and Trade (JCCT). And since many other countries share our concerns, we also pursue these issues through multilateral channels, such as the G-20, the IMF, and the World Trade Organization (WTO), which are critical complements to our bilateral engagement. To advance our goals, whether it is faster appreciation of the exchange rate or reduced barriers to U.S. exports, we need to work smartly with our partners around the world and with China. And when engagement proves insufficient, this Administration will continue to be more aggressive than any of its predecessors in using all appropriate tools to address the particular problem, such as going after China’s unfair trade practices by taking China to the WTO and vigorously applying U.S. trade remedy laws.
While we face substantial challenges, and our job is far from finished, we have made important progress towards leveling the playing field and making the bilateral relationship more beneficial for American companies and workers. China’s trade surplus has declined from 7.7 percent of GDP in 2008, to 3.9 percent in 2010, and has declined further in the first half of this year compared to the same period last year, though an important part of the decline was due to slower growth in China’s export markets. In both its latest Five-Year Plan and the recent S&ED, China committed to targets to promote consumption-led growth, including raising household incomes, increasing minimum wages, and increasing services relative to GDP.
On the exchange rate, since China resumed exchange rate adjustment in June 2010, the renminbi has appreciated about seven percent against the U.S. dollar and about ten percent taking into account China’s higher rate of inflation relative to inflation in the United States. China’s currency has appreciated nearly forty percent against the dollar over the past five years in real terms. But the continued rapid pace of foreign reserve accumulation and the ongoing decline in the share of Chinese consumption in GDP indicate that the real exchange rate of the renminbi remains misaligned despite recent movement, and a faster pace of appreciation is needed.
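As a rough guide to how the nominal and real appreciation figures cited above relate, the short sketch below computes real appreciation from a nominal exchange rate change and an inflation differential. The inflation numbers are illustrative assumptions, not official inputs.

```python
# Illustrative only: rough real-appreciation arithmetic for a bilateral exchange rate.
# Real appreciation of the home currency against a foreign one combines the nominal
# exchange rate change with the difference in cumulative inflation over the same period.

def real_appreciation(nominal_appreciation, inflation_home, inflation_foreign):
    """Return the real appreciation of the home currency against the foreign one.

    nominal_appreciation: fractional change in the nominal exchange rate (e.g. 0.07 for 7%)
    inflation_home / inflation_foreign: cumulative price-level changes over the same period
    """
    return (1 + nominal_appreciation) * (1 + inflation_home) / (1 + inflation_foreign) - 1

# Hypothetical numbers in the spirit of the testimony: roughly 7% nominal RMB appreciation
# since June 2010, with Chinese inflation assumed to run a few points above U.S. inflation.
nominal = 0.07
cpi_china = 0.06   # assumed cumulative Chinese inflation over the period
cpi_us = 0.03      # assumed cumulative U.S. inflation over the period

print(f"Real appreciation: {real_appreciation(nominal, cpi_china, cpi_us):.1%}")
# With these assumptions the result is roughly 10%, in line with the figure cited above.
```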
Renminbi appreciation on its own will not erase our trade deficit. But allowing the exchange rate to adjust fully to reflect market forces is the most powerful near-term tool available to the Chinese government to achieve two of its top economic goals: combating inflation and shifting the composition of demand towards domestic consumption. By contrast, persistent misalignment holds back the rebalancing in demand needed to sustain the global recovery both in China and the world, and gives rise to substantial international concerns and ultimately to trade frictions. Further, emerging markets that compete with China resist appreciation of their own currencies to maintain their competitiveness vis-à-vis China.
At the G-20 earlier this month, surplus emerging markets such as China committed to accelerate the rebalancing of demand towards domestic consumption, and to move toward more market-determined exchange rates through greater exchange rate flexibility.
We also are making progress on our bilateral trade and investment priorities, in close collaboration with the Office of the U.S. Trade Representative and the Department of Commerce. At the most recent S&ED, after commitments made during the January state visit of President Hu and the prior December JCCT, China pledged to rescind all of its government procurement indigenous innovation catalogues, including by provincial and municipal governments. So far, the Central government has repealed four key measures that underpinned the indigenous innovation product accreditation system, and a number of local governments have taken positive steps. China also pledged to increase inspections of government computers to ensure that agencies use legitimate software, and to improve its high-level government coordination and leadership mechanisms to enhance long-term protection and enforcement of intellectual property rights. And last year, China met its S&ED pledge to raise the threshold for central government review of foreign investments from $100 million to $300 million, leaving more foreign investment approvals to the mayors and governors who better understand the benefits of foreign direct investment.
Reforming and opening up China’s financial sector also remains a key priority. This not only would provide Chinese households with savings and insurance products to meet their financial goals without having to save so much of their income, but also would level the playing field with China’s state-owned enterprises for access to credit. We will continue pushing hard to address market access barriers in China’s financial sector, and we are seeing modest signs of progress. China now allows foreign banks to underwrite corporate bonds and is creating more opportunities for our financial services firms to manage investments in China as well as manage Chinese investments abroad. At the most recent S&ED, China committed to allow foreign firms to sell mutual funds, provide custody services, and sell mandatory auto liability insurance.
In short, while we will stand up to unfair and discriminatory practices and demand change, we will continue to engage with and encourage China as it pursues its reforms. And to meet this generational challenge, we must continue to work to strengthen the multilateral system that governs trade and finance, and not turn away from it. I believe this is the best way to promote American interests.
Thank you.
References
[1] September 2011 IMF World Economic Outlook Database, using market exchange rates.
[2] KPMG International, Product Sourcing in Asia Pacific 2011, pp. 7-9.
[3] Boston Consulting Group, Made in America, Again, August 2011, p. 5.
Tuesday, October 25, 2011
Are businesses run by women less productive than businesses run by men?
Where You Work: How Does Gender Matter?
Mary Hallward-Driemeier
Oct 2011
Are businesses run by women less productive than businesses run by men? If we perform a very simple comparison of the average productivity of female and male-owned enterprises, we might answer “yes”. But if we look a bit more closely at the data, a large part of this gap is explained by the fact that women and men are doing different things. If you compare women and men in the same sectors and in similar types of enterprises, the gap shrinks dramatically. Where you work is more important than gender in accounting for the observed productivity gap.
Using data on over 9000 registered enterprises from 32 countries in Sub-Saharan Africa, we see a productivity gap of 6 percent. However, controlling for sector, size and capital intensity, the gap disappears (see chart 1). If we include unregistered firms in the analysis, the unconditional productivity gap widens, as women are disproportionately in the informal sector where productivity is even lower. Nevertheless, the same pattern holds: there is little gender performance gap between similar enterprises.
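To make concrete what "controlling for sector, size and capital intensity" means in practice, here is a minimal synthetic-data sketch of the comparison: regress log productivity on a female-ownership indicator with and without enterprise controls. The data-generating assumptions (no true gender effect, but women sorted into smaller, less capital-intensive firms and different sectors) are illustrative and are not the study's actual data or code.

```python
# Synthetic illustration of a raw vs. conditional productivity gap.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
female = rng.binomial(1, 0.3, n)
# Assumption: women are more likely to run smaller, less capital-intensive, retail firms.
log_size = rng.normal(3.0 - 0.8 * female, 1.0, n)
log_capital = rng.normal(2.0 - 0.6 * female, 1.0, n)
sector = np.where(rng.random(n) < 0.3 + 0.3 * female, "retail", "manufacturing")
# Productivity depends on sector, size and capital only: no true gender effect.
log_prod = (0.5 * (sector == "manufacturing") + 0.15 * log_size + 0.2 * log_capital
            + rng.normal(0, 0.5, n))

df = pd.DataFrame({"log_prod": log_prod, "female": female, "log_size": log_size,
                   "log_capital": log_capital, "sector": sector})

raw = smf.ols("log_prod ~ female", data=df).fit()
controlled = smf.ols("log_prod ~ female + C(sector) + log_size + log_capital", data=df).fit()
print(f"raw gap:           {raw.params['female']:+.3f}")
print(f"gap with controls: {controlled.params['female']:+.3f}")   # close to zero
```

The point of the sketch is only that a sizeable unconditional gap can disappear once enterprise characteristics enter the regression, which is the pattern the post describes.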
Chart 1: The Gender Gap in Average Firm Labor Productivity. Controlling for enterprise characteristics removes the gender gap in productivity (registered firms in 32 Sub-Saharan African countries).
Once we compare like with like, the finding of no or few significant differences in performance between female and male entrepreneurs is encouraging. It confirms that Sub-Saharan Africa has considerable hidden growth potential in its women, and that tapping that potential, including improving women’s choices of where to be economically active, can make a real contribution to the region’s growth.
A similar story emerges when we look at the obstacles faced by men’s and women’s businesses. Once the characteristics of the enterprise are controlled for, gender differences in obstacles are generally not significant. Access to electricity is a constraint, particularly for small and medium firms, while skills and regulation matter more for larger firms, regardless of the gender of the entrepreneur. There are, however, two exceptions with a direct gender angle. First, women report having a harder time accessing credit. This is not only because they run smaller firms (which may itself be a result of this constraint); women also often have less access to collateral, which is correlated with a country’s gender gaps in formal property rights and with practical constraints in accessing justice (a topic to be elaborated in an upcoming blog).
Second, women often face greater difficulties in dealing with government authorities. They may be expected to pay higher amounts to get things done, and may be less likely to get things done even having paid; the ‘payment’ sought may not only be monetary. Indeed, over a quarter of respondents, male and female, reported that they had heard of sexual favors being requested to obtain licenses, receive credit, or deal with the tax authorities.
If where you work, rather than gender, is associated with performance and with the main constraints to firm growth, policy makers need to understand why the observed gender patterns of entrepreneurship persist. One of the most significant predictors of whether an entrepreneur joins the formal or informal sector, and of the size of their enterprise, is education. We find that across sectors there are large gaps in education, but few gender education gaps within a sector: women and men in the formal sector have very similar educational backgrounds, and likewise in the informal sector (see Chart 2). Where gender comes in is that women overall have fewer years of education, which helps explain why fewer women are in the formal sector.
Chart 2: Education varies more by formal/informal sector than by gender.
To expand women’s opportunities, more women need to be able to shift where they work. Tackling underlying disparities in access to human capital and assets is key to these efforts. With the same backgrounds, women are able to run the same types of firms as men, with equal results.
See full story: https://blogs.worldbank.org/allaboutfinance/where-you-work-how-does-gender-matter
Friday, October 21, 2011
The Case Against Global-Warming Skepticism
The Case Against Global-Warming Skepticism. By Richard A Muller
There were good reasons for doubt, until now.
http://online.wsj.com/article/SB10001424052970204422404576594872796327348.html
WSJ, Oct 21, 2011
Are you a global warming skeptic? There are plenty of good reasons why you might be.
As many as 757 stations in the United States recorded net surface-temperature cooling over the past century. Many are concentrated in the southeast, where some people attribute tornadoes and hurricanes to warming.
The temperature-station quality is largely awful. The most important stations in the U.S. are included in the Department of Energy's Historical Climatology Network. A careful survey of these stations by a team led by meteorologist Anthony Watts showed that 70% of these stations have such poor siting that, by the U.S. government's own measure, they result in temperature uncertainties of between two and five degrees Celsius or more. We do not know how much worse the stations in the developing world are.
Using data from all these poor stations, the U.N.'s Intergovernmental Panel on Climate Change estimates an average global 0.64°C temperature rise in the past 50 years, "most" of which the IPCC says is due to humans. Yet the margin of error for the stations is at least three times larger than the estimated warming.
We know that cities show anomalous warming, caused by energy use and building materials; asphalt, for instance, absorbs more sunlight than do trees. Tokyo's temperature rose about 2°C in the last 50 years. Could that rise, and increases in other urban areas, have been unreasonably included in the global estimates? That warming may be real, but it has nothing to do with the greenhouse effect and can't be addressed by carbon dioxide reduction.
Moreover, the three major temperature analysis groups (the U.S.'s NASA and National Oceanic and Atmospheric Administration, and the U.K.'s Met Office and Climatic Research Unit) analyze only a small fraction of the available data, primarily from stations that have long records. There's a logic to that practice, but it could lead to selection bias. For instance, older stations were often built outside of cities but today are surrounded by buildings. These groups today use data from about 2,000 stations, down from roughly 6,000 in 1970, raising even more questions about their selections.
On top of that, stations have moved, instruments have changed and local environments have evolved. Analysis groups try to compensate for all this by homogenizing the data, though there are plenty of arguments to be had over how best to homogenize long-running data taken from around the world in varying conditions. These adjustments often result in corrections of several tenths of one degree Celsius, significant fractions of the warming attributed to humans.
And that's just the surface-temperature record. What about the rest? The number of named hurricanes has been on the rise for years, but that's in part a result of better detection technologies (satellites and buoys) that find storms in remote regions. The number of hurricanes hitting the U.S., even more intense Category 4 and 5 storms, has been gradually decreasing since 1850. The number of detected tornadoes has been increasing, possibly because radar technology has improved, but the number that touch down and cause damage has been decreasing. Meanwhile, the short-term variability in U.S. surface temperatures has been decreasing since 1800, suggesting a more stable climate.
Without good answers to all these complaints, global-warming skepticism seems sensible. But now let me explain why you should not be a skeptic, at least not any longer.
Over the last two years, the Berkeley Earth Surface Temperature Project has looked deeply at all the issues raised above. I chaired our group, which just submitted four detailed papers on our results to peer-reviewed journals. We have now posted these papers online at www.BerkeleyEarth.org to solicit even more scrutiny.
Our work covers only land temperature—not the oceans—but that's where warming appears to be the greatest. Robert Rohde, our chief scientist, obtained more than 1.6 billion measurements from more than 39,000 temperature stations around the world. Many of the records were short in duration, and to use them Mr. Rohde and a team of esteemed scientists and statisticians developed a new analytical approach that let us incorporate fragments of records. By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.
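The slicing step can be pictured with a minimal sketch like the one below, which splits a made-up station record at a documented breakpoint instead of adjusting across it. The data, the breakpoint metadata, and the helper function are all illustrative assumptions; Berkeley Earth's actual procedure is far more elaborate.

```python
import numpy as np
import pandas as pd

# A made-up monthly station record with a documented station move in June 1980.
dates = pd.date_range("1970-01-01", "1990-12-01", freq="MS")
rng = np.random.default_rng(1)
temps = 14.0 + 0.01 * np.arange(len(dates)) / 12 + rng.normal(0, 0.3, len(dates))
temps[dates >= "1980-06-01"] += 1.5            # step change introduced by the move
record = pd.Series(temps, index=dates)

breakpoints = [pd.Timestamp("1980-06-01")]     # assumed station metadata: a relocation

def slice_record(series, breaks):
    """Split one record into independent segments at each documented breakpoint,
    so each segment can be used as-is instead of adjusting across the discontinuity."""
    edges = [series.index.min()] + sorted(breaks) + [series.index.max() + pd.offsets.MonthBegin()]
    return [series[(series.index >= lo) & (series.index < hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]

segments = slice_record(record, breakpoints)
print([f"{len(seg)} months" for seg in segments])   # two shorter records instead of one
```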
We discovered that about one-third of the world's temperature stations have recorded cooling temperatures, and about two-thirds have recorded warming. The two-to-one ratio reflects global warming. The changes at the locations that showed warming were typically between 1°C and 2°C, much greater than the IPCC's average of 0.64°C.
To study urban-heating bias in temperature records, we used satellite determinations that subdivided the world into urban and rural areas. We then conducted a temperature analysis based solely on "very rural" locations, distant from urban ones. The result showed a temperature increase similar to that found by other groups. Only 0.5% of the globe is urbanized, so it makes sense that even a 2°C rise in urban regions would contribute negligibly to the global average.
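A toy calculation helps show why these two results are unsurprising: if station-level trend estimates are noisy around a modest common warming trend, roughly two-thirds will come out positive, and an extra 2°C over 0.5% of the land area moves the global mean by only about 0.01°C. The numbers below are illustrative assumptions, not Berkeley Earth's.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations = 39000
true_trend = 0.9           # assumed common warming trend, in degrees C
station_noise = 2.0        # assumed spread of individual station trend estimates, in degrees C
measured_trends = true_trend + rng.normal(0.0, station_noise, n_stations)

warming_share = (measured_trends > 0).mean()
print(f"stations showing warming: {warming_share:.0%}")      # roughly two-thirds

# Urban-heating contribution if 0.5% of the globe warms an extra 2 degrees C:
urban_fraction, urban_extra_warming = 0.005, 2.0
print(f"urban contribution to the global mean: {urban_fraction * urban_extra_warming:.2f} C")
```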
What about poor station quality? Again, our statistical methods allowed us to analyze the U.S. temperature record separately for stations with good or acceptable rankings, and those with poor rankings (the U.S. is the only place in the world that ranks its temperature stations). Remarkably, the poorly ranked stations showed no greater temperature increases than the better ones. The most likely explanation is that while low-quality stations may give incorrect absolute temperatures, they still accurately track temperature changes.
When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that. They managed to avoid bias in their data selection, homogenization and other corrections.
Global warming is real. Perhaps our results will help cool this portion of the climate debate. How much of the warming is due to humans and what will be the likely effects? We made no independent assessment of that.
Mr. Muller is a professor of physics at the University of California, Berkeley, and the author of "Physics for Future Presidents" (W.W. Norton & Co., 2008).
Thursday, October 20, 2011
BCBS: Basel III definition of capital - Frequently asked questions
Oct 20, 2011
The Basel Committee on Banking Supervision has received a number of interpretation questions related to the December 2010 publication of the Basel III regulatory frameworks for capital and liquidity and the 13 January 2011 press release on the loss absorbency of capital at the point of non-viability. To help ensure a consistent global implementation of Basel III, the Committee will continue to review frequently asked questions and to periodically publish answers along with any technical elaboration of the rules text and interpretative guidance that may be necessary.
The frequently asked questions (FAQs) published in this document correspond to the definition of capital sections of the Basel III rules text. These FAQs are in addition to the first set of FAQs published in July 2011. The questions and answers are grouped according to the relevant paragraphs of the rules text. FAQs that have been added since the publication of the first version of this document are shaded yellow; the earlier July 2011 FAQs that have been revised are shaded red.
Contents
Paragraphs 52-53 (Criteria for Common Equity Tier 1)
Paragraphs 54-56 (Criteria for Additional Tier 1 capital)
Paragraphs 60-61 (Provisions)
Paragraphs 62-65 (Minority interest and other capital that is issued out of consolidated subsidiaries that is held by third parties)
Paragraphs 67-68 (Goodwill and other intangibles)
Paragraphs 69-70 (Deferred tax assets)
Paragraphs 76-77 (Defined benefit pension fund assets and liabilities)
Paragraphs 79-85 (Investments in the capital of banking financial and insurance entities)
Paragraphs 94-96 (Transitional arrangements)
Press release 13 January 2011 (Loss absorbency at the point of non-viability)
General questions
http://www.bis.org/publ/bcbs204.htm
Rapid Credit Growth: Boon or Boom-Bust?
Rapid Credit Growth: Boon or Boom-Bust? By Selim Elekdag & Yiqun Wu
IMF Working Paper No. 11/241
October 01, 2011
http://www.imfbookstore.org/IMFORG/WPIEA2011241
Summary: Episodes of rapid credit growth, especially credit booms, tend to end abruptly, typically in the form of financial crises. This paper presents the findings of a comprehensive event study focusing on 99 credit booms. Loose monetary policy stances seem to have contributed to the build-up of credit booms across both advanced and emerging economies. In particular, domestic policy rates were below trend during the pre-peak phase of credit booms and likely fuelled macroeconomic and financial imbalances. For emerging economies, while credit booms are associated with episodes of large capital inflows, international interest rates (a proxy for global liquidity) are virtually flat during these periods. Therefore, although external factors such as global liquidity conditions matter, and possibly increasingly so over time, domestic factors (especially monetary policy) also appear to be important drivers of real credit growth across emerging economies.
Executive Summary
This paper is motivated by rapid credit growth across many emerging economies, particularly those in Asia. It presents the results of a comprehensive event study which identifies 99 credit booms, of which 39 and 60 originated in advanced and emerging economies, respectively. Episodes of excessive credit growth—credit booms—lead to growing financial imbalances, and tend to end abruptly, often in the form of financial crises. In particular, relative to booms in other emerging economies, credit booms in emerging Asia were associated with a higher incidence of crises historically.
Three other main conclusions include the following:
- First, as credit booms build, they are jointly associated with deteriorating bank and corporate balance sheet soundness and with symptoms of overheating, including large capital inflows (including less stable bank flows), widening current account deficits, buoyant asset prices, and strong domestic demand.
- Second, while credit booms are associated with episodes of large capital inflows, international interest rates (a proxy for global liquidity) are virtually flat during these periods, which suggests the important role of domestic factors in driving credit growth across emerging economies. This may reflect, in part, that capital inflows are being channeled into other asset classes, including real estate, equity, and corporate bonds.
- Third, loose macroeconomic policy stances seem to have contributed to the build-up of credit booms. In particular, this seems to be the case for monetary policy across both advanced and emerging economies. For emerging economies, while international interest rates were essentially flat, domestic policy rates were below trend during the pre-peak phase of credit booms. Therefore, although external factors such as global liquidity conditions matter, and possibly increasingly so over time, domestic factors (especially monetary policy) also appear to be important drivers of real credit growth across emerging economies including those in Asia.
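One simple way to operationalise "below trend" in an event-study setting is sketched below: detrend a policy-rate series and average the gap over a pre-peak window. The synthetic series, the HP filter, and the two-year window are assumptions for illustration, not necessarily the paper's exact choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(7)
quarters = pd.period_range("1995Q1", "2010Q4", freq="Q")
# Fake policy-rate series: gently declining with noise, cut further ahead of an assumed boom peak.
rate = 8.0 - 0.03 * np.arange(len(quarters)) + rng.normal(0, 0.3, len(quarters))
peak = quarters.get_loc(pd.Period("2007Q4", freq="Q"))
rate[peak - 8:peak] -= 1.0                       # assumed pre-peak easing

policy_rate = pd.Series(rate, index=quarters)
cycle, trend = hpfilter(policy_rate, lamb=1600)  # standard smoothing parameter for quarterly data

pre_peak_gap = cycle.iloc[peak - 8:peak]         # the two years before the peak
print(f"average gap versus trend in the pre-peak phase: {pre_peak_gap.mean():+.2f} percentage points")
# A negative number is what "policy rates below trend during the pre-peak phase" refers to.
```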
Chellaney: Hydro-control turning China into dreaded hydra?
THE WATER HEGEMON
Hydro-control turning China into dreaded hydra? By Brahma Chellaney
Bangkok Post, Oct 18, 2011 at 12:00 AM
With Beijing controlling the sources of Asia's most important rivers, water has increasingly become a new political divide in China's relations with neighbours like India, Russia, Kazakhstan, Nepal and the Mekong River countries.
http://www.bangkokpost.com/opinion/opinion/261849/hydro-control-turning-china-into-dreaded-hydra
International discussion about China's rise has focused on its increasing trade muscle, growing maritime ambitions, and expanding capacity to project military power. One critical issue, however, usually escapes attention: China's rise as a hydro-hegemon with no modern historical parallel.
Last March the Mekong River's water level dropped to only 33 centimetres, the lowest in 50 years. People living downriver in Thailand, Laos and Cambodia attributed the fall in water level to newly constructed dams in China.
No other country has ever managed to assume such unchallenged riparian pre-eminence on a continent by controlling the headwaters of multiple international rivers and manipulating their cross-border flows. China, the world's biggest dam builder (with slightly more than half of the approximately 50,000 large dams on the planet), is rapidly accumulating leverage against its neighbours by undertaking massive hydro-engineering projects on transnational rivers.
Asia's water map fundamentally changed after the 1949 Communist victory in China. Most of Asia's important international rivers originate in territories that were forcibly annexed to the People's Republic of China. The Tibetan Plateau, for example, is the world's largest freshwater repository and the source of Asia's greatest rivers, including those that are the lifeblood for mainland China and South and Southeast Asia. Other such Chinese territories contain the headwaters of rivers like the Irtysh, Illy and Amur, which flow to Russia and Central Asia.
This makes China the source of cross-border water flows to the largest number of countries in the world. Yet China rejects the very notion of water sharing or institutionalised cooperation with downriver countries. Whereas riparian neighbours in Southeast and South Asia are bound by water pacts that they have negotiated between themselves, China does not have a single water treaty with any co-riparian country. Indeed, having its cake and eating it, China is a dialogue partner but not a member of the Mekong River Commission, underscoring its intent not to abide by the Mekong basin community's rules or take on any legal obligations.
Worse, while promoting multilateralism on the world stage, China has given the cold shoulder to multilateral cooperation among river-basin states. The lower-Mekong countries, for example, view China's strategy as an attempt to "divide and conquer". Although China publicly favours bilateral initiatives over multilateral institutions in addressing water issues, it has not shown any real enthusiasm for meaningful bilateral action. As a result, water has increasingly become a new political divide in the country's relations with neighbours like India, Russia, Kazakhstan, and Nepal.
China deflects attention from its refusal to share water, or to enter into institutionalised cooperation to manage common rivers sustainably, by flaunting the accords that it has signed on sharing flow statistics with riparian neighbours. These are not agreements to cooperate on shared resources, but rather commercial accords to sell hydrological data that other upstream countries provide free to downriver states.
In fact, by shifting its frenzied dam building from internal rivers to international rivers, China is now locked in water disputes with almost all co-riparian states. Those disputes are bound to worsen, given China's new focus on erecting mega-dams, best symbolised by its latest addition on the Mekong, the 4,200-megawatt Xiaowan Dam, which dwarfs Paris's Eiffel Tower in height, and by a 38,000-megawatt dam planned on the Brahmaputra at Metog, close to the disputed border with India. The Metog Dam will be twice as large as the 18,300-megawatt Three Gorges Dam, currently the world's largest, construction of which uprooted at least 1.7 million Chinese.
In addition, China has identified another mega-dam site on the Brahmaputra at Daduqia, which, like Metog, is to harness the force of a nearly 3,000-metre drop in the river's height as it takes a sharp southerly turn from the Himalayan range into India, forming the world's longest and steepest canyon. The Brahmaputra Canyon, twice as deep as the Grand Canyon in the United States, holds Asia's greatest untapped water reserves.
The countries likely to bear the brunt of such massive diversion of waters are those located farthest downstream on rivers like the Brahmaputra and Mekong: Bangladesh, whose very future is threatened by climate and environmental change, and Vietnam, a rice bowl of Asia. China's water appropriations from the Illy River threaten to turn Kazakhstan's Lake Balkhash into another Aral Sea, which has shrunk to less than half its original size.
In addition, China has planned the "Great Western Route", the proposed third leg of the Great South-North Water Diversion Project (the most ambitious inter-river and inter-basin transfer programme ever conceived), whose first two legs, involving internal rivers in China's ethnic Han heartland, are scheduled to be completed within three years.
The Great Western Route, centred on the Tibetan Plateau, is designed to divert waters, including from international rivers, to the Yellow River, the main river of water-stressed northern China, which also originates in Tibet.
With its industry now dominating the global hydropower-equipment market, China has also emerged as the largest dam builder overseas. From Pakistani-held Kashmir to Burma's troubled Kachin and Shan states, China has widened its dam building to disputed or insurgency-torn areas, despite local backlashes.
For example, units of the People's Liberation Army are engaged in dam and other strategic projects in the restive, Shia-majority region of Gilgit-Baltistan in Pakistan-held Kashmir. And China's dam building inside Burma to generate power for export to Chinese provinces has contributed to renewed bloody fighting recently, ending a 17-year ceasefire between the Kachin Independence Army and the Burmese government.
As with its territorial and maritime disputes with India, Vietnam, Japan and others, China is seeking to disrupt the status quo on international river flows. Persuading it to halt further unilateral appropriation of shared waters has thus become pivotal to Asian peace and stability. Otherwise, China is likely to emerge as the master of Asia's water taps, thereby acquiring tremendous leverage over its neighbours' behaviour.
Brahma Chellaney is Professor of Strategic Studies at the Centre for Policy Research and the author of "Water: Asia's New Battleground." Project Syndicate, 2011.
Monday, October 17, 2011
Making Banks Safer: Can Volcker and Vickers Do It?
Making Banks Safer: Can Volcker and Vickers Do It?
Authors: Chow, Julian T.S. ; Surti, Jay
IMF Working Paper
October 01, 2011
Summary: This paper assesses proposals to redefine the scope of activities of systemically important financial institutions. Alongside reform of prudential regulation and oversight, these have been offered as solutions to the too-important-to-fail problem. It is argued that while the more radical of these proposals such as narrow utility banking do not adequately address key policy objectives, two concrete policy measures - the Volcker Rule in the United States and retail ring-fencing in the United Kingdom - are more promising while still entailing significant implementation challenges. A risk factor common to all the measures is the potential for activities identified as too risky for retail banks to migrate to the unregulated parts of the financial system. Since this could lead to accumulation of systemic risk if left unchecked, it appears unlikely that any structural engineering will lessen the policing burden on prudential authorities and on the banks.
Section I, Why redefine scope?
The business of banking involves leveraged intermediation managed by people subject to limited liability and, typically, to profit sharing contracts. This combination is well-known to generate incentives for risk-taking that may be excessive from the perspective of bank creditors. Creditor guarantees such as deposit insurance are known to exacerbate this incentive problem because they weaken creditors’ incentive to monitor and discipline management.
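To make that incentive concrete, here is a minimal numerical sketch (our illustration, not taken from the paper) of risk-shifting under limited liability: with insured deposit funding, shareholders can prefer a riskier project even when it has a lower expected total payoff, because they keep the upside while depositors, or the deposit insurer, absorb the downside.

```python
# Toy illustration (not from the paper): risk-shifting under limited liability.
# A bank is funded with 90 of insured deposits and 10 of equity.
# Shareholders receive max(asset value - 90, 0) in each outcome.

def expected_equity_payoff(outcomes, debt=90.0):
    """outcomes: list of (probability, asset_value) pairs."""
    return sum(p * max(v - debt, 0.0) for p, v in outcomes)

# Safe project: assets worth 100 for sure (expected value 100).
safe = [(1.0, 100.0)]
# Risky project: expected asset value is only 97.5 (0.5*140 + 0.5*55),
# i.e. worse in total, yet better for shareholders.
risky = [(0.5, 140.0), (0.5, 55.0)]

print(expected_equity_payoff(safe))   # 10.0
print(expected_equity_payoff(risky))  # 25.0 -> shareholders prefer the riskier project
# Depositors bear the downside; with deposit insurance they have little
# incentive to price or police this risk, which is the point made above.
```

The arithmetic shows what deposit insurance leaves unpriced unless creditors or supervisors intervene.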
These issues are magnified in the case of systemically important financial institutions (SIFIs). Owing to their size, interconnectedness, or complexity, the negative externalities emanating from financial distress at SIFIs make them a source of systemic risk, leading to their being perceived as too-important-to-fail (TITF). Consequently, the market implicitly—and often correctly—assumes that, apart from explicit deposit insurance, creditor guarantees of a much wider nature would be extended when such firms are threatened with imminent failure.
This serves to weaken the mitigating force of market discipline. Prior to the crisis, the high likelihood of public support assumed in a distress situation contributed to the ability of SIFIs to carry thinner capital buffers at lower cost, acquire complex business models, and accumulate systemic risk. This trend was reinforced by the diversification premium attributed to universal banks by market participants and prudential authorities, enabling them to integrate the provision of retail, investment, and wholesale banking services without erecting the necessary firewalls between those businesses. These developments resulted in networks of financial interconnections within and across internationally active SIFIs that proved difficult, time-consuming, and costly to unravel. This made it seemingly less costly, during the crisis, to allocate taxpayer resources to preventing SIFI failures than to allowing them to fail, with subsequent resolution and restructuring of their businesses.
Diversification of business lines could serve to better protect a universal bank against idiosyncratic shocks that adversely impact individual lines of business. At the same time, the free flow of capital and liquidity and the associated growth in intra-group exposures would also increase the likelihood of intra-firm contagion in the event of an exogenous shock. Unlike investment banking clients, retail banking customers typically have few options other than their banks for conducting vital financial transactions. Ensuring business continuity of services to such clients, therefore, serves a clear and important social welfare objective. But complex business models and high levels of intra-group exposures present a barrier to quickly spinning off the retail parts of a universal bank, which is what would ensure such business continuity.
Restricting the scope of a regulated bank’s business activities could, therefore, serve a number of important policy objectives. From a financial stability perspective, it could limit contagion within and across firms. From the perspective of consumer protection, it could more efficiently assure the continuity of retail banking services. And, by more credibly restricting the ambit of taxpayer-funded creditor guarantees to depositors, it could furnish these benefits more efficiently and cheaply from a social cost perspective.
Accordingly, the official response to the crisis has, besides recognizing the need for strengthened regulation and oversight of SIFIs, also included complementary proposals to redesign and refocus their business activities. A number of concrete proposals have been made, including:
- Narrow Utility Banking—essentially a reversion of deposit-funded banks into traditional payment function outfits with lending (and investment banking) being carried out by independent finance companies funded by non-deposit means.
- The Volcker Rule—prohibiting banks from carrying out certain types of investment banking activities if they are to continue to seek deposit funding and to retain banking licenses.
- A Retail Ring-fence—that, while not prohibiting banking groups from providing both retail and wholesale banking services, mandates legal subsidiarization of certain retail activities, prohibits this subsidiary from undertaking other businesses and risks, and establishes minimum capital and liquidity standards for it on a solo basis. While not limiting capital and liquidity benefits to the retail subsidiary from other affiliates when necessary, the ring-fence limits capital and liquidity transfers in the opposite direction, to non-ring-fenced affiliates. Such functional subsidiarization could enable continuation of retail operations under distress or failure of a SIFI’s other businesses.
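As a rough illustration of the asymmetry described in the retail ring-fence bullet, the toy Python sketch below (the names, numbers, and single capital-floor rule are our own simplifications, not the actual UK proposal) lets support flow into the ring-fenced subsidiary freely while refusing transfers out that would breach its solo minimum.

```python
# Toy model (invented names and numbers) of the asymmetric ring-fence rule:
# support may flow INTO the ring-fenced retail subsidiary, but transfers OUT
# are refused if they would breach its solo capital minimum.

class RingFencedBank:
    def __init__(self, capital, min_solo_capital):
        self.capital = capital
        self.min_solo_capital = min_solo_capital

    def receive_support(self, amount):
        """Affiliates may always inject capital into the ring-fenced entity."""
        self.capital += amount

    def transfer_out(self, amount):
        """Upstreaming to non-ring-fenced affiliates is capped by the solo floor."""
        if self.capital - amount < self.min_solo_capital:
            raise ValueError("transfer would breach the solo capital requirement")
        self.capital -= amount
        return amount

retail = RingFencedBank(capital=12.0, min_solo_capital=10.0)
retail.receive_support(3.0)      # allowed: 12 -> 15
retail.transfer_out(4.0)         # allowed: 15 -> 11, still above the floor
# retail.transfer_out(2.0)       # would raise: 11 - 2 = 9 < 10
```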
This paper focuses on the motivation, content, operational challenges, and potential costs of these proposals to narrow the scope of banking business. The more radical proposals discussed under the narrow banking umbrella involve strict limits on what retail banks’ permissible activities ought to be and could entail significant dead-weight costs if implemented as recommended. By contrast, the design and motivation for the Volcker rule and retail ring-fence are more precisely targeted at the problems arising from the integrated business models used by SIFIs before the crisis.
The challenge facing these latter proposals lies in the feasibility and cost of their implementation. In the case of the Volcker rule, for example, it will be challenging for prudential authorities to distinguish permissible activities (market making and underwriting) from prohibited ones (proprietary trading) when assessing banks’ exposures to securities markets. Similar difficulties will be faced by supervisors assessing the nature and purpose of hedging tools and contracts utilized by ring-fenced banks. This presents policy makers with a dilemma. Should they invest the financial cost and time required to gather more contemporaneous information in order to create better filters and limit loopholes? Or, if this is viewed as too costly or simply inefficient, should they move to outright prohibition of all activities related to securities markets?
The danger with the second option lies in generating incentives to push risk taking beyond the borders of the regulated financial system. If there are indeed no direct financial linkages between retail financial firms and such shadow banking entities, such risk taking may cease to be a problem of regulation. However, systemic risk will continue to accumulate in the shadow banks, and since the participants in the regulated and shadow systems are the same, or are, in general, linked, a crisis in that sector will continue to exert a contagion impact on the regulated banking sector.
http://www.imfbookstore.org/IMFORG/WPIEA2011236
Thursday, October 13, 2011
Global Poverty Estimates: A Sensitivity Analysis
Global Poverty Estimates: A Sensitivity Analysis. By Shatakshee Dhongde & Camelia Minoiu
IMF Working Paper
Oct 13, 2011
Summary: Current estimates of global poverty vary substantially across studies. In this paper we undertake a novel sensitivity analysis to highlight the importance of methodological choices in estimating global poverty. We measure global poverty using different data sources, parametric and nonparametric estimation methods, and multiple poverty lines. Our results indicate that estimates of global poverty vary significantly when they are based alternately on data from household surveys versus national accounts but are relatively consistent across different estimation methods. The decline in poverty over the past decade is found to be robust across methodological choices.
Introduction
Global poverty monitoring has been brought to the forefront of the international policy arena with the adoption of the Millennium Development Goals (MDG) by the United Nations. The first MDG proposes reducing global poverty by the year 2015 and is stated as “halving the proportion of people with an income level below $1/day between 1990 and 2015” (United Nations, 2000). Progress towards attaining this MDG is monitored using global poverty estimates published by the World Bank and a number of independent scholars. The process is not only expensive (Moss, 2010) but also mired in conceptual, methodological, and data-related problems (Klasen, 2009).
Current estimates of global poverty proposed in the literature differ in magnitude as well as in the rate of change in poverty. Consider, for instance, Chen and Ravallion (2010) and Pinkovskiy and Sala-i-Martin (2009)—two studies that estimate global poverty using the international poverty line of $1/day (see Figure 1). Chen and Ravallion (2010) estimate that in 2005 nearly 26 percent of the population in developing countries was poor, and that the global poverty count had fallen by 520 million individuals since 1981. By contrast, Pinkovskiy and Sala-i-Martin (2009) estimate poverty to have been ten times lower in 2005, implying a reduction of almost 350 million individuals since 1981. Although there is general agreement that global poverty has declined over the years, the estimated level of poverty and the rate of poverty decline vary substantially across studies.
This paper aims to contribute to the debate on global poverty not by providing a new set of estimates, but by addressing two important questions. First, we ask why estimates from different studies differ so much. As we unravel the various assumptions made by researchers, we show that global poverty estimates are simply not comparable across studies. For instance, they differ in terms of underlying data sources, number of countries included, welfare metric, adjustments to mean incomes, and statistical methods employed to estimate the income distribution. Given this variety of methodological choices, we arrive at our second question: Can we assess the impact of different approaches on the resulting poverty estimates? Since global poverty estimation requires making multiple assumptions simultaneously, we aim to isolate and assess separately the relative importance of each such assumption by undertaking a novel sensitivity analysis.
An important hurdle in estimating long-term trends in global poverty is the lack of high-quality, consistent survey data. The poor are those individuals whose income is less than or equal to some threshold set by the poverty line. If countries had complete information on every individual’s income, then, given an agreed-upon global poverty line, identifying the poor would be a straightforward exercise. However, there are severe data limitations.
Data on income are typically collected through household surveys (HS) of nationally representative samples. However, survey data are often available only for periods far apart and suffer from a number of inconsistencies (regarding sampling and interviewing techniques, definitions of variables, and coverage) that render them incomparable across countries. Nonetheless, they are the sole source of information on the relative distribution of incomes in a country—that is, the shares of national income held by different population groups (quintiles, deciles). HS also provide estimates of mean income/consumption, which are used to scale the income shares to obtain mean incomes by population group. A more readily accessible and consistently recorded source of information is national accounts statistics (NAS), which also provide aggregate income or consumption estimates and are available for most countries on a yearly basis.
A key methodological choice in estimating global poverty is whether to use data on mean income/consumption from HS or NAS, or whether to combine data from the two sources. Some studies in the literature have analyzed the sources of discrepancies between the levels and growth rates of income/consumption data from HS and NAS (Ravallion, 2003; Deaton, 2005). However, these studies did not measure the precise effect of using HS versus NAS data on global poverty levels and trends. In order to determine how sensitive global poverty estimates are to alternate data sources, we estimate global poverty by anchoring relative distributions alternately to HS and NAS estimates of mean income and consumption. This is our first sensitivity exercise.
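As a crude illustration of this first exercise (hypothetical numbers; the paper estimates a full income distribution rather than treating each decile as a single point), the sketch below anchors the same decile shares alternately to a survey mean and a national-accounts mean and recomputes the headcount below a fixed poverty line.

```python
# Illustrative sketch (made-up numbers): the same relative distribution
# (decile income shares) anchored alternately to a household-survey (HS)
# mean and a national-accounts (NAS) mean, with the poverty headcount
# recomputed under each anchor.

decile_shares = [0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.13, 0.18, 0.31]  # sum to 1
hs_mean, nas_mean = 60.0, 90.0   # hypothetical monthly means, PPP dollars
poverty_line = 38.0              # roughly $1.25/day over a 30-day month

def headcount(shares, mean_income, line):
    """Share of the population in deciles whose implied mean income falls below the line."""
    poor_deciles = sum(1 for share in shares if 10.0 * share * mean_income < line)
    return poor_deciles / 10.0    # each decile holds 10% of people

print(headcount(decile_shares, hs_mean, poverty_line))   # 0.5 with the survey mean
print(headcount(decile_shares, nas_mean, poverty_line))  # 0.3 with the national-accounts mean
```

In this example the NAS mean is set above the HS mean, so the NAS-anchored headcount comes out lower for the same relative distribution; the direction and size of that gap in real data is exactly what the sensitivity analysis measures.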
The second sensitivity exercise concerns the choice of statistical method used to estimate income distributions from grouped data, that is, data on mean income or consumption for population groups (quintiles, deciles). We estimate global poverty by estimating each country’s distribution using different methods. These include the General Quadratic (GQ) and the Beta Lorenz curve, and the lognormal and Singh-Maddala functional forms for the income density function. In addition to these parametric specifications, we also consider the nonparametric kernel density method, whose performance we assess in conjunction with four different bandwidths—a parameter that controls the smoothness of the income distribution.
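To illustrate this second exercise, the sketch below (hypothetical decile data; it fits a lognormal through the implied Gini as a simple stand-in for the parametric route, not the GQ or Beta Lorenz-curve estimators the paper also uses) recovers a full distribution from the same grouped data in two ways and recomputes the headcount, including across several kernel bandwidths.

```python
# Illustrative sketch (hypothetical data): parametric (lognormal) versus
# nonparametric (Gaussian kernel) recovery of an income distribution from
# grouped decile data, and the resulting poverty headcounts.

import math
from scipy.stats import norm

decile_shares = [0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.13, 0.18, 0.31]
mean_income = 60.0      # hypothetical monthly mean income, PPP dollars
poverty_line = 38.0     # roughly $1.25/day over a 30-day month

# Parametric route: lognormal fitted through the Gini implied by the grouped
# Lorenz curve, using Gini = 2*Phi(sigma/sqrt(2)) - 1 for the lognormal.
cum = [sum(decile_shares[:i + 1]) for i in range(10)]              # Lorenz ordinates
gini = 1.0 - 0.1 * sum(cum[i] + (cum[i - 1] if i else 0.0) for i in range(10))
sigma = math.sqrt(2.0) * norm.ppf((gini + 1.0) / 2.0)
mu = math.log(mean_income) - 0.5 * sigma ** 2                      # matches the mean
headcount_lognormal = norm.cdf((math.log(poverty_line) - mu) / sigma)

# Nonparametric route: a Gaussian kernel placed on each decile's mean income;
# the headcount is the average kernel CDF evaluated at the poverty line.
decile_means = [10.0 * s * mean_income for s in decile_shares]     # each decile = 10% of people

def kernel_headcount(points, line, bandwidth):
    return sum(norm.cdf((line - x) / bandwidth) for x in points) / len(points)

print(f"lognormal headcount: {headcount_lognormal:.3f}")
for bw in (2.0, 5.0, 10.0, 20.0):                                  # bandwidth choices
    print(f"kernel headcount (bw={bw}): {kernel_headcount(decile_means, poverty_line, bw):.3f}")
```

Repeating this at lines between $1/day and $2.5/day, as in the paper's benchmark, would show how the method sensitivity varies with the poverty cutoff.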
As a benchmark, we follow the World Bank methodology to the extent possible and estimate global poverty in 1995 and 2005—the latest year for which data is available for many countries. Data on the relative distribution of income across population deciles is collected for 65 countries from the World Bank’s poverty monitoring website PovcalNet. Our sample covers more than 70 percent of the total world population and includes all countries for which both HS and NAS data are available in both years. Global poverty is estimated using international poverty lines ranging from $1/day to $2.5/day to provide further insight into how methodological choices impact poverty rates at different income cutoffs.
Our results are twofold. First, a large share of the variation in estimated poverty levels and trends can be attributed to the choice between HS and NAS as the source of data. Global poverty estimates vary not only in terms of the proportion of the poor, and correspondingly the number of poor, but also in terms of the rates of decline in poverty. Poverty estimates based on HS and NAS do not tend to converge in higher income countries. Second, the choice of statistical method used to estimate the income distribution affects poverty levels to a lesser extent. A comparison of poverty estimates across parametric and nonparametric techniques reveals that the commonly used lognormal specification consistently underestimates poverty levels. While there is little doubt that the proportion of poor declined between 1995 and 2005, our results underscore the fact that global poverty counts are highly sensitive to methodological approach.
You can buy the print version here, or ask us for a digital copy.
Wednesday, October 12, 2011
Personalized Therapies Mark Significant Leap Forward in Fight Against Cancer
Personalized Therapies Mark Significant Leap Forward in Fight Against Cancer
http://www.innovation.org/index.cfm/NewsCenter/Newsletters/Newsletters?NID=191
October 12, 2011
This year marks the 40th anniversary of the signing of the National Cancer Act of 1971. Indeed, the 12 million cancer survivors living in the U.S. today attest to the significant progress in cancer prevention and treatment made over the past decades. Despite these remarkable advances, more than 550,000 men and women still lose their battle with cancer each year. Recently released scientific data demonstrate that the collective commitment to cancer research is unwavering and that our knowledge of the biology of cancer, and our ability to treat it, continue to expand. One promising trend in cancer research: drug developers are harnessing an improved understanding of the molecular basis of many types of cancer to develop therapies uniquely targeted to these pathways.
For example, a newly approved drug for lung cancer called crizotinib is targeted to a mutation in a gene called anaplastic lymphoma kinase, or ALK. Mutations in the ALK gene are found in approximately 5% of patients with non-small-cell lung cancer. In data presented at this year’s meeting of the American Society of Clinical Oncology (ASCO), 54% of patients who received crizotinib were still alive after two years, compared to just 12% in a control group. Crizotinib received fast-track review by the U.S. Food and Drug Administration (FDA) and was approved in August, ahead of the six-month priority review schedule.
Dramatic advances are being made in the treatment of the skin cancer melanoma as well. More than 60 drugs are currently in development for the disease, and this year two new medicines have been approved – the first approvals for the disease in 13 years. The first, ipilimumab, was approved in March and was the first treatment ever approved by FDA to show a survival benefit for patients with metastatic melanoma. In August the second, a new personalized medicine called vemurafenib, was approved to treat this deadliest form of skin cancer. This drug, which is taken orally, selectively inhibits a mutated form of the BRAF kinase. The mutation is associated with increased tumor aggressiveness and decreased survival, and is found in approximately half of all malignant melanomas. Recently reported clinical trial results demonstrate that the medicine reduces the risk of death by 63 percent.
Personalized medicine holds great potential beyond these two examples in lung cancer and melanoma. MD Anderson Cancer Center recently reported the results of a large-scale clinical trial examining the effect of matching targeted therapies with specific gene mutations across many cancer types. According to the study, patients who received a matched targeted therapy demonstrated a 27% response rate, compared to 5% for those whose therapy was not matched. This clinical trial marks the largest examination of a personalized approach to cancer care to date, and as principal investigator Apostolia-Maria Tsimberidou, M.D., Ph.D. concludes, "This study suggests that a personalized approach is needed to improve clinical outcomes for patients with cancer." As these and many other studies illustrate, a dramatic transformation in cancer diagnosis and treatment is underway. Therapies targeted to the genetic and molecular underpinnings of disease are being developed, and patient outcomes are improving as a result.
The studies highlighted above only begin to scratch the surface of the remarkable potential of personalized, targeted therapies, but they are an indication of the great reward of years of research and investment, as well as great promise for continued innovation in the years to come.