Effects of Culture on Firm Risk-Taking: A Cross-Country and Cross-Industry Analysis. By Roxana Mihet
IMF Working Paper No. 12/210
Aug 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012210
Summary: This paper investigates the effects of national culture on firm risk-taking, using a comprehensive dataset covering 50,000 firms in 400 industries in 51 countries. Risk-taking is found to be higher for domestic firms in countries with low uncertainty aversion, low tolerance for hierarchical relationships, and high individualism. Domestic firms in such countries tend to take substantially more risk in industries which are more informationally opaque (e.g. finance, mining, IT). Risk-taking by foreign firms is best explained by the cultural norms of their country of origin. These cultural norms do not proxy for legal constraints, insurance safety nets, or economic development.
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26204.0
Excerpts:
Introduction
Understanding whether national culture affects a society's likelihood to generate risk-seeking firms is important for effective policy-making and for improving corporate governance. It can enrich discussions on government policies that encourage entrepreneurship and innovation. A grasp of the impact of cultural influences on corporate risk-taking would allow policy-makers to better customize their policies for firms with different risk appetites, thus promoting more competitive business environments. Understanding the impact of culture on corporate risk-taking decisions is also important to the internal conduct of multinational firms. Internal decisions in multinational firms, such as the decision to pursue a risky R&D project, require well-orchestrated responses from executives with diverse cultural backgrounds. Even in firms with standardized operating procedures, the interpretation of various financial decisions can vary among executives from different societies as a result of their cultural differences (Tse et al. 1988). Accounting for the impact of cultural influences on decision-making allows the firms themselves to accommodate and adapt to such differences, hence diminishing “noisy” interactions among executives and errors in decision-making.
This study employs four dimensions of national culture identified by Hofstede (2001) and an international sample of 50,000 firms spread across 400 industries in 51 countries to analyze the effects of cultural differences on corporate risk-taking. More specifically, it tries to identify the channels through which cultural values can influence corporate risk-taking. Culture can affect the institutional and economic development at the macro level, the industrial diversification and industry concentration at the market structure level, as well as the corporate and individual decision-making at the micro level, all of which may in turn influence firm risk-taking decisions.
Previous literature has shown that national culture does in fact predict cross-country differences in the degree of institutional and economic development. Culture has been linked with creditor rights and investor protection (Stulz and Williamson 2003), with judicial efficiency (Radenbaugh et al. 2006), with corporate governance (Doidge et al. 2007), with bankruptcy protection and insolvency management (Beraho and Elisu 2010) and with overall levels of transparency and corruption (Husted 1999). Research has further established that national culture has an impact on the composition and leadership structure of boards of directors (Li and Harrison 2008) and also on individual decision-making at the micro level (Hilary and Hui 2009; Halek and Eisenhauer 2001; and Graham et al. 2009). On the other hand, attitudes towards risk are likely to be indirectly affected by culture through many of the factors listed above, as well as directly by national cultural norms, which may encourage or deter risk-taking.
This paper is not the first to study the impact of cultural values on corporate risk-taking. The extant literature has briefly studied the relation between culture and risk-taking, but has mostly focused on firms in the banking and the financial sectors (Houston et al. 2010; Kanagaretnam et al. 2011; Lehnert et al. 2011; Li and Zahra 2012). For example, Kanagaretnam et al. (2011) show that aggressive risk-taking activities by banks are more likely in societies with low uncertainty avoidance and high individualism. They show that cultural differences between societies have a profound influence on the level of bank risk-taking and help explain bank financial troubles during the recent financial crisis. On the other hand, Griffin et al. (2012) show that uncertainty avoidance is negatively, and individualism positively, associated with firm-level riskiness in the non-financial sector (specifically, the manufacturing sector).
This paper innovates in at least four ways. First, it takes a more holistic approach to the study of cultural influences on corporate risk-taking by studying not only the banking and the financial sectors, but all industries in a market economy. We take this approach in order to capture cross-industrial differences in risk-taking. The influence of cultural factors, such as national uncertainty aversion, may be of greater importance for firms in more informationally opaque industries such as information technologies, financial services, oil extraction, and chemicals, where information uncertainty is higher relative to manufacturing and industrial firms, because of the greater complexity of operations and the difficulty of assessing and managing risk. Thus, we test whether corporate risk-taking in informationally more opaque industries is more sensitive to a country's national cultural norms. Second, we differentiate between the direct and indirect effects of national culture on firm risk-taking. We specifically test whether cultural norms remain important in determining corporate risk-taking behaviors even after taking into account their impact on the institutional, economic and industrial environments. Third, unlike previous research which has used standard ordinary least squares analyses, we model both the direct and indirect effects of culture on risk-taking by employing a hierarchical linear mixed model. The hierarchical linear mixed model allows us to test multi-level theories, simultaneously modeling variables at the firm, industry and country level without having to resort to data aggregation or disaggregation as previous cultural economics studies have had to do. Fourth, by using a hierarchical linear model to explain firm-level risk-taking, we can model not only the firm, industry and country-level influences on risk-taking, but also their cross-level interactions.
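To illustrate the multi-level structure described above, the sketch below fits a mixed-effects regression of firm risk on country-level cultural scores with a country random intercept and an industry variance component, using Python's statsmodels. The variable names (risk, uncertainty_avoidance, individualism, power_distance, size, leverage), the input file, and the grouping structure are illustrative assumptions, not the paper's actual specification or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per firm. Hypothetical columns: a firm-level risk measure plus
# country-level Hofstede scores and firm controls.
df = pd.read_csv("firm_panel.csv")

# Random intercept for country; industries modeled as a variance component,
# a rough stand-in for the paper's firm/industry/country hierarchy.
model = smf.mixedlm(
    "risk ~ uncertainty_avoidance + individualism + power_distance + size + leverage",
    data=df,
    groups="country",
    vc_formula={"industry": "0 + C(industry)"},
)
result = model.fit()
print(result.summary())
```

Cross-level interactions (for example, cultural scores interacted with an industry opacity indicator) could be added to the formula in the same way.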
This paper finds that:
Culture impacts corporate risk-taking directly and not merely through indirect channels such as the legal and regulatory frameworks.
Corporate risk-taking is higher in societies with low uncertainty avoidance, low tolerance for hierarchical relationships and in societies which value individualism over collectivism, with these effects even more accentuated in societies with better formal institutions.
Additionally, firms in countries ranking high in uncertainty-aversion and low in individualism take significantly less risk in industrial sectors which are more informationally opaque (e.g. finance, IT, oil refining and mining), compared to firms in countries lower in uncertainty-aversion and higher in individualism.
Risk-taking by foreign firms is best explained by the cultural norms of their country of origin.
These cultural dimensions are not proxying for legal constraints, economic development, bankruptcy costs, insurance safety nets, or many other factors.
The results of this study inform both theory and policy in several ways. First, these findings strengthen the argument that the same institutional rules can produce different economic outcomes in culturally-different societies. Second, they imply that policy-makers should take into account cross-cultural values and norms when drafting policies that promote competitive business environments. Third, they enrich governmental discussions on policies that address risk-taking in informationally opaque sectors.
Literature review
Several research studies in the financial, accounting, and management literatures have explored the importance of cultural values in decision-making. These studies find that culture helps explain the institutional, legal and economic environments of a country at the macro level, which in turn can influence corporate risk-taking decisions, and they offer evidence of the impact of culture on individuals' financial decision-making at the micro level, beyond traditional economic arguments.
At the micro level, culture has (unsurprisingly) been shown to affect individual risk-taking behaviors. Breuer et al. (2011) find that individualism is linked to overconfidence and overoptimism and has a significantly positive effect on individual financial risk-taking and the decision to own stocks. Tse et al. (1988) show that home culture has predictable, significant effects on the decision-making of executives. Two decades later, Graham et al. (2010), using survey data in the U.S., also show that CEOs are not immune to the effects of culture. They find that CEOs' decision-making is strongly influenced by cultural values such as uncertainty-aversion.
At the macro level, cultural heritage has been linked to corporate governance, investor protection, creditor rights, bankruptcy protection, judicial efficiency, accounting transparency, and corruption. Doidge et al. (2007) find that cross-cultural differences explain much more of the variance in corporate governance than observable firm characteristics. Hope (2003a) shows evidence that both legal origin and culture (as proxied by Hofstede's cultural dimensions) are important in explaining firms' disclosure practices and investor protection. In fact, he finds that although legal origin is a key determinant of disclosure levels, its importance decreases with the richness of a firm's information environment, while culture still remains a significant determinant. Licht et al. (2005) find that social norms of governance correlate strongly and systematically with high individualism and low power distance. Stulz et al. (2003) find that cultural heritage, proxied by religion and language, predicts the cross-sectional variation in creditor rights better than a country's trade openness, economic development, legal origin, or language. Other studies find that culture predicts judicial efficiency and the transparency of accounting systems. Radenbaugh et al. (2006) find that countries in the Anglo cluster have an accounting system which is more transparent and less conservative than either the Germanic or the Latin accounting systems. Beraho et al. (2010) show that cross-cultural variables have a direct influence on the propensity to file for bankruptcy and on insolvency laws. Lastly, both Getz and Volkema (2001) and Robertson and Watson (2004) link cultural differences to corruption levels.
Furthermore, recent research has also linked cultural variables to economic and market development, although the evidence is mixed. Guiso et al. (2006) find that national culture impacts economic outcomes, by influencing national savings rates and income redistributions. Kwok and Tadesse (2006) find that culture explains cross-country variations in financial systems, with higher uncertainty-avoidance countries dominated by bank-based financial systems, rather than by stock-markets. Kirca et al. (2009) show that national culture impacts the implementation of market-oriented practices (i.e., generation, dissemination, and utilization of market intelligence) and the internalization of market-oriented values and norms (i.e., innovativeness, flexibility, openness of internal communication, speed, quality emphasis, competence emphasis, inter-functional cooperation, and responsibility). Lee and Peterson (2000) show that only countries with specific cultural tendencies (i.e., countries which emphasize individualism) tend to engender a strong entrepreneurial orientation, hence experiencing more entrepreneurship and global competitiveness. On the other hand, Pryor (2005) argues that cultural variables do not seem related to the level of economic development and are not useful in understanding economic growth or differences in levels of economic performance across countries. Additionally, Herger et al. (2008) also argue that cultural beliefs do not seem to support or impede financial development. This mixed evidence points to the idea that national culture might only indirectly influence economic and market development through its effects on the legal and institutional contexts.
The institutional and economic environments have been shown to affect corporate risk-taking decisions. A small strand of literature has explored how corporate risk-taking around the world reflects countries' institutional and economic environments. For example, Laeven and Levine (2009) show that risk-taking by banks varies positively with the comparative power of shareholders within each bank. Moreover, they show that the relations between bank risk-taking and capital regulation, deposit insurance mechanisms, and restrictions on bank activities depend critically on the bank's ownership structure. Claessens et al. (2000) show that corporations in common law countries and market-based financial systems have less risky financing patterns, and that stronger protection of equity and creditor rights is also associated with less financial risk. Overall, while the literature is relatively small, national culture has been indirectly linked with corporate risk-taking decisions in formal studies, although most of them analyze only the banking sector.
Culture has also been directly linked with corporate risk-taking, although again, most studies have focused on either the financial or the manufacturing sectors separately. Kanagaretnam et al. (2011) show that banks in high uncertainty avoidance societies tend to take less risk, whereas banks in high individualism societies take more risk. However, they do not control for institutional variables such as corporate governance, bankruptcy protection, judicial efficiency, transparency, and corruption, which have been shown to be affected by national cultural norms and which could in turn affect corporate risk-taking. Griffin et al. (2012) study the impact of culture on firms in the manufacturing sector in the period 1997-2006. To the best of our knowledge, they are the only ones who use a hierarchical linear mixed model to analyze the impact of culture on corporate risk-taking. They show that individualism has positive and significant direct effects, while uncertainty avoidance has negative and significant direct effects on corporate risk-taking.
This paper contributes to the literature on the impact of culture on firm risk-taking in several ways. While previous studies have examined either the direct or the indirect effects of culture on risk-taking, this paper tries to reconcile the two strands of literature and assess them simultaneously by using a hierarchical linear mixed model. This allows us to test whether cultural norms remain important in determining corporate risk-taking behaviors even after taking into account their impact on the institutional, economic and industrial environments. Moreover, this paper extends the analyses of Griffin et al. (2012) and Kanagaretnam et al. (2011) to capture cross-industrial differences in risk-taking. Given the importance to national and global economies of the highly leveraged financial sector, the highly innovative IT sector, and the highly risky commodity industries, and given that firms in these industries are markedly different from manufacturing firms and have been more adversely affected by the recent global economic crisis, it is very important to understand the role of culture in cross-industrial variation in corporate risk-taking.
Incentivizing Calculated Risk-Taking: an Experiment with Commercial Bank Loan Officers. By Martin Kanz and Leora Klapper
Mon, Aug 27, 2012 08:42am
http://blogs.worldbank.org/allaboutfinance/incentivizing-calculated-risk-taking-an-experiment-with-commercial-bank-loan-officers
In the aftermath of the global financial crisis, there has been much criticism of compensation practices at banks. Although much of this debate has focused on executive compensation (see the recent debate on this blog), there is a growing recognition that non-equity incentives for loan officers and other employees at the lower tiers of a bank’s corporate hierarchy may share some of the blame — volume incentives for mortgage brokers in the United States that rewarded high-risk lending at wildly unsustainable terms are a particularly striking case in point.
The view that excessive risk-taking in the run-up to the crisis had its roots in flawed incentives at all levels of financial institutions — not just at the top — has made inroads in policy circles, and has been reflected in efforts to regulate how banks can pay their loan officers. Well-intentioned as these efforts may be, they mask the fact that providing performance incentives in lending is, in fact, a very difficult problem. Assessing a borrower’s creditworthiness requires a complex tradeoff between risk and return; it contains an inherent element of deferred compensation and requires the interpretation of a noisy signal about an applicant’s actual creditworthiness. Whether and how performance incentives work in this setting is unclear: the limited evidence that exists about the impact of performance pay on employee behavior comes from the labor economics literature and suggests that — even in simple production tasks — the behavioral response to incentives tends to be much more complex than a simple mapping from stronger incentives to greater effort and performance.
So how does “pay-for-performance” affect the risk-appetite and lending decisions of loan officers? In a recent paper, coauthored with Shawn Cole of Harvard Business School, we designed a field experiment with real-life loan officers to examine the impact of performance incentives on loan officer behavior. Working with a number of leading commercial banks in India, we recruited more than 200 loan officers with an average of more than ten years of experience in banking and brought them to a behavioral economics lab. In the lab, participants were asked to evaluate a set of loan applications under different, exogenously assigned incentives. This cross-over between an actual field experiment and a controlled lab setting allowed us to study risk-taking behavior using a real-life population of highly experienced loan officers, while being able to get detailed measurements of risk-assessment and risk-taking behavior — the kind of data that would usually only be available from a lab experiment.
We deliberately set up our experiment in an informationally challenging emerging credit market — the Indian market for unsecured small enterprise loans. Borrowers in this market typically lack reliable credit scores and an established track record of formal sector borrowing. This generally rules out the use of predictive credit scoring and other advanced loan approval technologies, making banks particularly reliant on the risk-assessment of their frontline employees. The credit files that our loan officers evaluated in the experiment consisted of actual loan applications from small enterprises applying for their first formal-sector loan. Each loan was matched with ten months of repayment history from the lender’s proprietary database (not surprisingly, more than 90% of defaults occur in the first three months of a loan’s tenure). This allowed us to compare the actual outcome of the loan with the loan officer’s decision and risk assessment in the experiment and to offer incentive payments based on the profitability of lending decisions loan officers took in the lab.
The reassuring news is that basic incentives seem to work quite well in lending. We find that pay for performance (incentives that reward profitable lending and penalize default) indeed induces loan officers to exert much greater effort in reviewing the information that is presented to them. This is all well, but the real question is whether this translates into improved lending decisions. One common concern with performance pay in lending is that stronger incentives may indeed make loan officers much more conscientious, so conscientious in fact that they may shy away from risks that would be profitable from the viewpoint of the bank and simply stop lending. In our experiment, we find this not to be the case: when loan officers faced high-powered incentives, the probability that they would approve a non-performing loan was reduced by 11% while overall lending went down by only 3.6%. In other words, more stringent incentive schemes actually made loan officers better at identifying and eliminating bad credits from the pool of loan applicants. Profits per loan increased by up to 4% over the median loan size and by more than 40% compared with the case when loan officers faced volume incentives.
These strong results highlighting the negative impact of volume incentives are in line with much recent evidence using observational data (Agarwal and Ben-David 2012; Berg, Puri, and Rocholl 2012). So is pay-for-performance the solution to all of a bank’s internal agency problems? Unfortunately not. In an additional set of experiments, we varied the time horizon of the loan officer’s compensation contract — an important second dimension of the incentive scheme over which a bank typically has control. Interestingly, our results show that performance incentives quickly lose their bite as they are deferred by even a couple of months. Given that in real life performance pay typically occurs in the form of a quarterly or annual bonus, this casts some doubt on the wisdom of trying to fix agency problems within financial institutions with monetary incentives alone. Interestingly, however, deferred compensation also makes permissive incentive schemes less tempting and can attenuate many of the negative effects of volume incentives. Some direct advice that comes out of this finding is that if a bank finds it necessary to provide volume incentives, it can limit the potential damage through deferred compensation.
Perhaps most interestingly, the results from our experiment also show that incentives affect not only actual lending decisions, they also distort loan officers’ subjective assessment of credit risk. Put simply, we find that when participants faced incentives that emphasize lending volume over loan quality, they started viewing their clients’ creditworthiness through rose-colored glasses. They inflated internal risk ratings — which were neither seen by any supervisor nor tied to incentives — by up to 0.3 standard deviations for the same loan. This finding resonates with the psychological concept of “cognitive dissonance” (Akerlof and Dickens 1982) and is in line with behavioral economics explanations that have tried to make sense of seemingly irrational behavior in sub-prime lending prior to the crisis, which are nicely summarized in a recent article by Nicholas Barberis (2012) from the Yale School of Management.
What are we to take away from these results? The question of how to better align private incentives with public interest is a major unresolved policy question that has arisen from the global financial crisis. Our experiments provide some of the first rigorous evidence on the link between performance pay and behavior among loan originators, which we hope will be a first step that can help tackle this important issue from the angle of corporate governance — with the ultimate aim of making compensation policy a more effective component of a bank’s risk management mechanisms. Much work has recently gotten underway in this exciting research agenda, but it is clear that much more evidence is needed to translate these findings into meaningful policy prescriptions. To contribute to this agenda, we are currently working on a number of follow-up experiments to more fully understand the behavioral and psychological implications of the problem of incentives and individual risk-taking. Stay tuned.
References
Agarwal, Sumit, and Itzhak Ben-David. 2012. “Do Loan Officer Incentives Lead to Lax Lending Standards?” Ohio State University, Fisher College of Business. Working Paper WP-2012-7.
Agarwal, Sumit, and Faye H. Wang. 2009. “Perverse Incentives at the Banks? Evidence from a Natural Experiment.” Federal Reserve Bank of Chicago. Working Paper WP-09-08.
Akerlof, George A., and William T. Dickens. 1982. “The Economic Consequences of Cognitive Dissonance.” American Economic Review 72 (3):307–19.
Baker, George, Michael Jensen, and Kevin Murphy. 1988. “Compensation and Incentives: Practice vs. Theory.” Journal of Finance 43 (3):593–616.
Bandiera, Oriana, Iwan Barankay, and Imran Rasul. 2007. “Incentives for Managers and Inequality among Workers: Evidence from a Firm-Level Experiment.” Quarterly Journal of Economics 122 (2):729–73.
_____. 2009. “Social Connections and Incentives in the Workplace: Evidence from Personnel Data.” Econometrica 77 (4):1047–94.
_____. Forthcoming. “Team Incentives: Evidence from a Firm Level Experiment.” Journal of the European Economic Association.
Barberis, Nicholas. 2012. “Psychology and the Financial Crisis of 2007-2008.” In Financial Innovation and the Crisis, edited by M. Haliassos. Cambridge, MA: MIT Press.
Berg, Tobias, Manju Puri, and Jorg Rocholl. 2012. “Loan Officer Incentives and the Limits of Hard Information.” Duke University Fuqua School of Business Working Paper.
Measuring Systemic Risk-Adjusted Liquidity (SRL) - A Model Approach. By Andreas Jobst
IMF Working Paper No. 12/209
Aug 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012209
Summary: Little progress has been made so far in addressing—in a comprehensive way—the externalities caused by the impact of interconnectedness among institutions and markets on funding and market liquidity risk within financial systems. The Systemic Risk-adjusted Liquidity (SRL) model combines option pricing with market information and balance sheet data to generate a probabilistic measure of the frequency and severity of multiple entities experiencing a joint liquidity event. It links a firm’s maturity mismatch between assets and liabilities impacting the stability of its funding with those characteristics of other firms, subject to individual changes in risk profiles and common changes in market conditions. This approach can then be used (i) to quantify an individual institution’s time-varying contribution to system-wide liquidity shortfalls and (ii) to price liquidity risk within a macroprudential framework that, if used to motivate a capital charge or insurance premia, provides incentives for liquidity managers to internalize the systemic risk of their decisions. The model can also accommodate a stress testing approach for institution-specific and/or general funding shocks that generate estimates of systemic liquidity risk (and associated charges) under adverse scenarios.
Excerpts:
A defining characteristic of the recent financial crisis was the simultaneous and widespread dislocation in funding markets, which can adversely affect financial stability in the absence of suitable liquidity risk management and policy responses. In particular, banks’ common asset exposures and their increased reliance on short-term wholesale funding in tandem with high leverage levels helped propagate rising counterparty risk due to greater interdependence within the financial system. The implications from liquidity risk management decisions made by some institutions spilled over to other markets and other institutions, contributing to others’ losses, amplifying solvency concerns, and exacerbating overall liquidity stress as a result of these negative dynamics. Thus, private sector liquidity (as opposed to monetary liquidity), which is created largely through banks and other financial institutions via bilateral arrangements and organized trading venues, is invariably influenced by common channels of market pricing that can amplify cyclical movements in system-wide financial conditions with the potential of negative externalities resulting from individual actions (CGFS, 2011).
The opportunity cost of holding liquidity is invariably cyclical, resulting in a notorious underpricing of liquidity risk, which tends to perpetuate a disregard for the potential inability of markets to sustain sufficient liquidity transformation under stress. Banks have an incentive to minimize liquidity (and mitigate the opportunity cost of holding excess liquidity in lieu of return-generating assets) in anticipation that central banks will almost certainly intervene in times of stress as lenders-of-last-resort. Even without central bank support, liquidity risk is most expensive when it is needed most while generating little if any additional return in good times. While central banks can halt a deterioration of funding conditions in order to maintain the efficient operation of funding markets (see Figure 1), prevent financial firms from failing, and, thus, limit the impact of liquidity shortfalls on the real economy, their implicit subsidization of bank funding accentuates the magnitude of liquidity risks under stress. Central bank measures during the credit crisis have further reinforced this perception of contingent liquidity support, giving financial institutions an incentive to hold less liquidity than needed (IMF, 2010a).
Current systemic risk analysis—as a fundamental pillar of macroprudential surveillance and policy—is mostly focused on solvency conditions. Disruptions to the flow of financial services become systemic if there is the potential of financial instability to trigger serious negative spillovers to the real economy. Macroprudential policy aims to limit, mitigate or reduce systemic risk, thereby minimizing the incidence and impact of disruptions in the provision of key financial services that can have adverse consequences for the real economy (and broader implications for economic growth). Substantial work is underway to develop enhanced analytical tools that can help to identify and measure systemic risk in a forward-looking way, and, thus, support improved policy judgments. While systemic solvency risk has already entered the prudential debate in the form of additional capital rules that apply to systemically important financial institutions (SIFIs), little progress has been made so far in addressing systemic liquidity risk.
In contrast, proposals aimed at measuring and regulating systemic liquidity risk caused by the interconnectedness across financial institutions and financial markets have been few and far between. Systemic liquidity risk is associated with the possibility that maturity transformation in systemically important institutions and markets is disrupted by common shocks that overwhelm the capacity to fulfill all planned payment obligations as and when they come due. For instance, multiple institutions may face simultaneous difficulties in rolling over their short-term debts or in obtaining new short-term funding (much less long-term funding). However, progress in developing a systemic liquidity risk framework has been hampered by the rarity of system-wide liquidity risk events, the multiplicity of interactions between institutions and funding markets, and the conceptual challenges in modeling liquidity conditions affecting institutions and transactions separately or jointly. The policy objective of such efforts would be to minimize the possibility of systemic risk from liquidity disruptions that necessitate costly public sector support. While a financial institution’s failure can cause an impairment of all or parts of the financial system, firms are not charged for the possibility that their risk-taking affects the operation of the financial system as a whole. In fact, individual actions might cause losses elsewhere in the system through direct credit exposures and financial guarantees, forced asset sales, and greater uncertainty regarding mutual exposures (possibly in combination with greater risk aversion of investors), which increases the cost of funding for all financial institutions. These “negative externalities” impose costs on the system, which increase the greater the importance of a single institution to the system (“too-important-to-fail”) and the higher the level of asymmetric information, as coordination failures accentuate the impact of common shocks. Thus, more stringent prudential liquidity requirements, much like higher capital levels, might be beneficial ex ante by creating incentives for shareholders to limit excessive risk-taking, which would otherwise increase the potential loss in case of failure (Jensen and Meckling, 1976; Holmstrom and Tirole, 1997). However, certain liquidity standards might also encourage greater concentrations in assets that receive a more favorable regulatory treatment based on their liquidity characteristics during normal times (which remains to be tested during times of stress).
A number of prudential reforms and initiatives are underway to address shortcomings in financial institutions’ liquidity practices, which have resulted in more stringent supervisory liquidity requirements. Under the post-crisis revisions of the existing Basel Accord, known as Basel III, the Basel Committee on Banking Supervision (BCBS, 2010a, 2010b and 2009) has proposed two quantitative liquidity standards to be applied at a global level and published qualitative guidance to strengthen liquidity risk management practices in banks. Under this proposal, individual banks are expected to maintain a stable funding structure, reduce maturity transformation, and hold a sufficient stock of assets that should be available to meet their funding needs in times of stress—as measured by two standardized ratios:
* Liquidity Coverage Ratio (LCR). This ratio is intended to promote short-term resilience to potential liquidity disruptions by requiring banks to hold sufficient high-quality liquid assets to withstand the run-off of liabilities over a stressed 30-day scenario specified by supervisors. More specifically, “the LCR numerator consists of a stock of unencumbered, high-quality liquid assets that must be available to cover any net [cash] outflow, while the denominator is comprised of cash outflows less cash inflows (subject to a cap at 75 [percent] of total outflows) that are expected to occur in a severe stress scenario (BCBS, 2011 and 2012b).” A schematic computation of both ratios is sketched after this list.
* Net Stable Funding Ratio (NSFR). This structural ratio limits the stock of unstable funding by encouraging longer term borrowing in order to restrict liquidity mismatches from excessive maturity transformation. It requires banks to establish a stable funding profile over the short term (i.e., the use of stable (long-term and/or stress-resilient) sources to continuously fund short-term cash flow obligations that arise from lending and investment activities). The NSFR reflects the proportion of long-term assets that are funded by stable sources of funding with maturities of more than one year (except deposits), which includes customer deposits, long-term wholesale funding, and equity (but excludes short-term funding). A value of this ratio of less than 100 percent indicates a shortfall in stable funding based on “the difference between balance sheet positions after the application of available stable funding factors and the application of required stable funding factors for banks where the former is less than the latter (BCBS, 2011 and 2012b).”
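As a rough illustration of the two standardized ratios just described, the snippet below computes an LCR (with the 75 percent inflow cap) and an NSFR from stylized figures. The single-number inputs and the example values are purely illustrative; the actual Basel III calculations apply detailed run-off, inflow, and stable-funding factors to each asset and liability category.

```python
def liquidity_coverage_ratio(hqla: float, outflows_30d: float, inflows_30d: float) -> float:
    """LCR = stock of high-quality liquid assets / net cash outflows over a
    stressed 30-day horizon, with inflows capped at 75% of outflows."""
    capped_inflows = min(inflows_30d, 0.75 * outflows_30d)
    return hqla / (outflows_30d - capped_inflows)


def net_stable_funding_ratio(available_stable_funding: float,
                             required_stable_funding: float) -> float:
    """NSFR = available stable funding / required stable funding; a value
    below 1.0 (100 percent) indicates a stable-funding shortfall."""
    return available_stable_funding / required_stable_funding


# Stylized example (all figures hypothetical)
print(liquidity_coverage_ratio(hqla=120.0, outflows_30d=200.0, inflows_30d=180.0))  # 2.4
print(net_stable_funding_ratio(available_stable_funding=90.0,
                               required_stable_funding=100.0))  # 0.9 -> shortfall
```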
However, these prudential measures do not directly target system-wide implications. The current approach assumes that sufficient institutional liquidity would reduce the likelihood of knock-on effects on solvency conditions in distress situations and complement the risk absorption role of capital—but without considering system-wide effects. Larger liquidity buffers at each bank should lower the risk that multiple institutions will simultaneously face liquidity shortfalls, which would ensure that central banks are asked to perform only as lenders of last resort—and not as lenders of first resort. However, this rationale underpinning the Basel liquidity standards ignores the impact of the interconnectedness of various institutions and their diverse funding structures across a host of financial markets and jurisdictions on the probability of such simultaneous shortfalls. Moreover, in light of the protracted adoption of both the LCR and the NSFR (whose implementation is envisaged in 2015 and 2018, respectively) and the associated risk of undermining timely adjustment of industry standards, Perotti (2012) argues for strong transitional tools in the form of “prudential risk surcharges.” These would be imposed on the gap between banks’ current liquidity positions and the envisaged minimum liquidity standards, at a level high enough to compensate for and discourage the creation of systemic risk, in order to ensure early adoption of safer standards while offering banks sufficient flexibility to chart their own path towards compliance.
An effective macroprudential approach that targets systemic liquidity risk presupposes the use of objective and meaningful measures that can be applied in a consistent and transparent fashion (and the attendant design of appropriate policy instruments). Ideally, any such methodology would need to allow for extensive back-testing and should benefit from straightforward application (and avoid complex modeling (or stress-testing)). While it should not be too data intensive to compute and implement, enough data would need to be collected to ensure the greatest possible coverage of financial intermediaries in order to accommodate different financial sector characteristics and supervisory regimes across national boundaries. In addition, the underlying measure of systemic risk should be time-varying, and, if possible, it should offset the procyclical tendencies of liquidity risk and account for changes to an institution’s risk contribution, which might not necessarily follow cyclical patterns. Finally, it would also motivate a risk-adjusted pricing scheme so that institutions that contribute to systemic liquidity risk are assigned a proportionately higher charge (while the opposite would hold true for firms that help absorb system-wide shocks from sudden increases in liquidity risk).
In this regard, several proposals are currently under discussion (see Table 1), including the internalization of the public sector cost of liquidity risk via insurance schemes (Goodhart, 2009; Gorton and Metrick, 2009; Perotti and Suarez, 2009 and 2011), capital charges (Brunnermeier and Pedersen, 2009), taxation (Acharya and others, 2010a and 2010b), investment requirements (Cao and Illing, 2009; Farhi and others, 2009), as well as arrangements aimed at mitigating the system-wide effects of the fire-sale liquidation of assets via collateral haircuts (Valderrama, 2010) and modifications of resolution regimes (Roe, 2009; Acharya and Oncu, 2010). In particular, Gorton (2009) advocates a systemic liquidity risk insurance guarantee fee that explicitly recognizes the public sector cost of supporting secured funding markets if fragility were to materialize. Roe (2009) argues that the internalization of such cost would ideally be achieved by exposing lenders to the credit risk of the counterparty (and not just that of the collateral) by disallowing unrestricted access to collateral even in case of default of the counterparty. In this way, lenders would exercise greater effort in discriminating ex ante between safer and riskier borrowers. Such incentives could be supported by a time-varying increase in liquidity requirements, which would also curb credit expansion fueled by short-term and volatile wholesale funding and reduce dangerous reliance on such funding (Jácome and Nier, 2012).
In this paper, we propose a structural approach—the systemic risk-adjusted liquidity (SRL) model—for the structural assessment and stress testing of systemic liquidity risk. Although macroprudential surveillance relies primarily on prudential regulation and supervision, calibrated and used to limit systemic risk, additional measures and instruments are needed to directly address systemic liquidity risk. This paper underscores why more needs to be done to develop macroprudential techniques to measure and mitigate such risks arising from individual or collective financial arrangements—both institutional and market-based—that could either lead directly to system-wide distress of institutions and/or significantly amplify its consequences. The SRL model complements the current Basel III liquidity framework by extending the prudential assessment of stable funding (based on the NSFR) to a system-wide approach, which can help substantiate different types of macroprudential tools, such as a capital surcharge, a fee, a tax, or an insurance premium that can be used to price contingent liquidity access.
The SRL model quantifies how the size and interconnectedness of individual institutions (with varying degrees of leverage and maturity mismatches defining their risk profile) can create short-term liquidity risk on a system-wide level and under distress conditions. The model combines quantity-based indicators of institution-specific funding liquidity (conditional on maturity mismatches and leverage), while adverse shocks to various market rates are used to alter the price-based measures of monetary and funding liquidity that, in turn, form the stress scenarios for systemic liquidity risk within the model (see Table 2 and Box 2). In this way, the SRL model fosters a better understanding of institutional vulnerabilities to the liquidity cycle and related build-ups of risks based on market rates that are available at high frequencies and which lend themselves to the identification of periods of heightened systemic liquidity risk (CGFS, 2011).
This approach forms the basis for a possible capital charge or an insurance premium—a pre-payment for the contingent (official) liquidity support that financial institutions eventually receive in times of joint distress—by identifying and measuring ways in which they contribute to aggregate risk over the short-term. Such a liquidity charge should reflect the marginal contribution of short-term funding decisions by institutions to the generation of systemic risk from the simultaneous realization of liquidity shortfalls. Proper pricing of the opportunity cost of holding insufficient liquidity—especially for very adverse funding situations—would help lower the scale of contingent liquidity support from the public sector (or collective burden sharing mechanisms). The charge needs to be risk-based, should be increasing in a common maturity mismatch of assets and liabilities, and would be applicable to all institutions with access to safety net guarantees. Since liquidity runs are present in the escalating phase of all systemic crises, our focus is on short-term wholesale liabilities, properly weighted by the bank's maturity mismatch.
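To convey the intuition behind such a system-wide charge, here is a deliberately crude Monte Carlo sketch: each bank holds an NSFR-style funding buffer, a common market shock plus idiosyncratic shocks erode those buffers, and the frequency of joint shortfalls and each bank's conditional shortfall are tallied. This is not the paper's option-pricing implementation of the SRL model; every figure, the shock distributions, and the two-bank threshold for a "joint" event are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-bank stable-funding buffers (NSFR-style); >1 means a surplus.
banks = ["A", "B", "C", "D"]
nsfr_buffer = np.array([1.10, 1.02, 0.98, 1.25])

n_sims = 100_000
# Common market shock plus idiosyncratic shocks, expressed as a proportional
# hit to each bank's stable funding (assumed distributions, not calibrated).
common = rng.normal(0.0, 0.08, size=(n_sims, 1))
idio = rng.normal(0.0, 0.05, size=(n_sims, len(banks)))
shocked_nsfr = nsfr_buffer - np.abs(common) - np.abs(idio)

shortfall = shocked_nsfr < 1.0            # bank-level liquidity event
joint_event = shortfall.sum(axis=1) >= 2  # "systemic": at least two banks short

prob_joint = joint_event.mean()
severity = np.where(shortfall, 1.0 - shocked_nsfr, 0.0)  # size of each funding gap

# Toy "marginal contribution": each bank's expected gap conditional on a joint event,
# which could in principle motivate a proportional charge or insurance premium.
marginal = severity[joint_event].mean(axis=0)
print(f"P(joint liquidity event) = {prob_joint:.3f}")
for name, m in zip(banks, marginal):
    print(f"bank {name}: expected conditional shortfall = {m:.3f}")
```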
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26203.0
IMF Working Paper No. 12/209
Aug 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012209
Summary: Little progress has been made so far in addressing—in a comprehensive way—the externalities caused by impact of the interconnectedness within institutions and markets on funding and market liquidity risk within financial systems. The Systemic Risk-adjusted Liquidity (SRL) model combines option pricing with market information and balance sheet data to generate a probabilistic measure of the frequency and severity of multiple entities experiencing a joint liquidity event. It links a firm’s maturity mismatch between assets and liabilities impacting the stability of its funding with those characteristics of other firms, subject to individual changes in risk profiles and common changes in market conditions. This approach can then be used (i) to quantify an individual institution’s time-varying contribution to system-wide liquidity shortfalls and (ii) to price liquidity risk within a macroprudential framework that, if used to motivate a capital charge or insurance premia, provides incentives for liquidity managers to internalize the systemic risk of their decisions. The model can also accommodate a stress testing approach for institution-specific and/or general funding shocks that generate estimates of systemic liquidity risk (and associated charges) under adverse scenarios.
Excerpts:
A defining characteristic of the recent financial crisis was the simultaneous and widespread dislocation in funding markets, which can adversely affect financial stability in absence of suitable liquidity risk management and policy responses. In particular, banks’ common asset exposures and their increased reliance on short-term wholesale funding in tandem with high leverage levels helped propagate rising counterparty risk due to greater interdependence within the financial system. The implications from liquidity risk management decisions made by some institutions spilled over to other markets and other institutions, contributing to others’ losses, amplifying solvency concerns, and exacerbating overall liquidity stress as a result of these negative dynamics. Thus, private sector liquidity (as opposed to monetary liquidity), which is created largely through banks and other financial institutions via bilateral arrangements and organized trading venues, is invariably influenced by common channels of market pricing that can amplify cyclical movements in system-wide financial conditions with the potential of negative externalities resulting from individual actions (CGFS, 2011).
The opportunity cost of holding liquidity is invariably cyclical, resulting in a notorious underpricing of liquidity risk, which tends to perpetuate a disregard for the potential inability of markets to sustain sufficient liquidity transformation under stress. Banks have an incentive to minimize liquidity (and mitigate the opportunity cost of holding excess liquidity in lieu of return-generating assets) in anticipation that central banks will almost certainly intervene in times of stress as lenders-of-last-resort. Even without central bank support, liquidity risk is most expensive when it is needed most while generating little if any additional return in good times. While central banks can halt a deterioration of funding conditions in order to maintain the efficient operation of funding markets (see Figure 1), prevent financial firms from failing, and, thus, limit the impact of liquidity shortfalls on the real economy, their implicit subsidization of bank funding accentuates the magnitude of liquidity risks under stress. Central bank measures during the credit crisis have further reinforced this perception of contingent liquidity support, giving financial institutions an incentive to hold less liquidity than needed (IMF, 2010a).
Current systemic risk analysis—as a fundamental pillar of macroprudential surveillance and policy—is mostly focused on solvency conditions. Disruptions to the flow of financial services become systemic if there is the potential of financial instability to trigger serious negative spillovers to the real economy. Macroprudential policy aims to limit, mitigate or reduce systemic risk, thereby minimizing the incidence and impact of disruptions in the provision of key financial services that can have adverse consequences for the real economy (and broader implications for economic growth). Substantial work is underway to develop enhanced analytical tools that can help to identify and measure systemic risk in a forward-looking way, and, thus, support improved policy judgments. While systemic solvency risk has already entered the prudential debate in the form of additional capital rules that apply to systemically important financial institutions (SIFIs), little progress has been made so far in addressing systemic liquidity risk.
Indeed, proposals aimed at measuring and regulating systemic liquidity risk caused by the interconnectedness across financial institutions and financial markets have been few and far between. Systemic liquidity risk is associated with the possibility that maturity transformation in systemically important institutions and markets is disrupted by common shocks that overwhelm the capacity to fulfill all planned payment obligations as and when they come due. For instance, multiple institutions may face simultaneous difficulties in rolling over their short-term debts or in obtaining new short-term funding (much less long-term funding). Progress in developing a systemic liquidity risk framework has been hampered by the rarity of system-wide liquidity risk events, the multiplicity of interactions between institutions and funding markets, and the conceptual challenges in modeling liquidity conditions affecting institutions and transactions separately or jointly. The policy objective of such efforts would be to minimize the possibility of systemic risk from liquidity disruptions that necessitate costly public sector support. While a financial institution’s failure can impair all or parts of the financial system, firms are not charged for the possibility that their risk-taking affects the operation of the financial system as a whole. In fact, individual actions might cause losses elsewhere in the system through direct credit exposures and financial guarantees, forced asset sales, and greater uncertainty regarding mutual exposures (possibly in combination with greater risk aversion of investors), which increases the cost of funding for all financial institutions. These “negative externalities” impose costs on the system, which increase with the importance of a single institution to the system (“too-important-to-fail”) and with the degree of asymmetric information, as coordination failures accentuate the impact of common shocks. Thus, more stringent prudential liquidity requirements, much like higher capital levels, might be beneficial ex ante by creating incentives for shareholders to limit excessive risk-taking, which would otherwise increase the potential loss in case of failure (Jensen and Meckling, 1976; Holmstrom and Tirole, 1997). However, certain liquidity standards might also encourage greater concentrations in assets that receive a more favorable regulatory treatment based on their liquidity characteristics during normal times (a treatment that remains to be tested during times of stress).
A number of prudential reforms and initiatives are underway to address shortcomings in financial institutions’ liquidity practices, resulting in more stringent supervisory liquidity requirements. Under the post-crisis revisions of the existing Basel Accord, known as Basel III, the Basel Committee on Banking Supervision (BCBS, 2010a, 2010b and 2009) has proposed two quantitative liquidity standards to be applied at a global level and published qualitative guidance to strengthen liquidity risk management practices in banks. Under this proposal, individual banks are expected to maintain a stable funding structure, reduce maturity transformation, and hold a sufficient stock of assets available to meet their funding needs in times of stress—as measured by two standardized ratios:
* Liquidity Coverage Ratio (LCR). This ratio is intended to promote short-term resilience to potential liquidity disruptions by requiring banks to hold sufficient high-quality liquid assets to withstand the run-off of liabilities over a stressed 30-day scenario specified by supervisors. More specifically, “the LCR numerator consists of a stock of unencumbered, high-quality liquid assets that must be available to cover any net [cash] outflow, while the denominator is comprised of cash outflows less cash inflows (subject to a cap at 75 [percent] of total outflows) that are expected to occur in a severe stress scenario” (BCBS, 2011 and 2012b).
* Net Stable Funding Ratio (NSFR). This structural ratio limits the stock of unstable funding by encouraging longer-term borrowing in order to restrict liquidity mismatches from excessive maturity transformation. It requires banks to maintain a stable funding profile, i.e., to use stable (long-term and/or stress-resilient) sources to continuously fund the short-term cash flow obligations that arise from lending and investment activities. The NSFR reflects the proportion of long-term assets that are funded by stable sources of funding—customer deposits, long-term wholesale funding, and equity—with maturities of more than one year (deposits excepted), while short-term funding is excluded. A value of this ratio of less than 100 percent indicates a shortfall in stable funding, based on “the difference between balance sheet positions after the application of available stable funding factors and the application of required stable funding factors for banks where the former is less than the latter” (BCBS, 2011 and 2012b). A stylized computation of both ratios is sketched after this list.
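To make the structure of the two standards concrete, the minimal sketch below computes both ratios for a hypothetical bank. The balance-sheet figures are illustrative placeholders rather than calibrated Basel inputs; only the 75 percent inflow cap on the LCR denominator and the 100 percent thresholds follow the descriptions quoted above.

```python
# Minimal sketch of the two Basel III liquidity ratios described above.
# The balance-sheet figures are hypothetical; only the structure of each
# ratio and the 75% inflow cap follow the BCBS description quoted above.

def liquidity_coverage_ratio(hqla, outflows_30d, inflows_30d):
    """LCR = high-quality liquid assets / net cash outflows over a stressed
    30-day scenario, with inflows capped at 75% of outflows."""
    net_outflows = outflows_30d - min(inflows_30d, 0.75 * outflows_30d)
    return hqla / net_outflows

def net_stable_funding_ratio(available_stable_funding, required_stable_funding):
    """NSFR = available stable funding / required stable funding; a value
    below 1.0 (100 percent) indicates a shortfall in stable funding."""
    return available_stable_funding / required_stable_funding

# Hypothetical bank (amounts in billions):
print(liquidity_coverage_ratio(hqla=60.0, outflows_30d=80.0, inflows_30d=50.0))                 # 2.0
print(net_stable_funding_ratio(available_stable_funding=450.0, required_stable_funding=500.0))  # 0.9
```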
However, these prudential measures do not directly target system-wide implications. The current approach assumes that sufficient institutional liquidity would reduce the likelihood of knock-on effects on solvency conditions in distress situations and complement the risk absorption role of capital—but without considering system-wide effects. Larger liquidity buffers at each bank should lower the risk that multiple institutions will simultaneously face liquidity shortfalls, which would ensure that central banks are asked to perform only as lenders of last resort—and not as lenders of first resort. However, this rationale underpinning the Basel liquidity standards ignores the impact of the interconnectedness of various institutions and their diverse funding structures across a host of financial markets and jurisdictions on the probability of such simultaneous shortfalls. Moreover, in light of the protracted adoption of both the LCR and the NSFR (whose implementation is envisaged for 2015 and 2018, respectively) and the associated risk of undermining timely adjustment of industry standards, Perotti (2012) argues for strong transitional tools in the form of “prudential risk surcharges.” These would be imposed on the gap between banks’ current liquidity positions and the envisaged minimum liquidity standards, at a level high enough to compensate for and discourage the creation of systemic risk, in order to ensure early adoption of safer standards while offering banks sufficient flexibility to chart their own path towards compliance.
An effective macroprudential approach that targets systemic liquidity risk presupposes the use of objective and meaningful measures that can be applied in a consistent and transparent fashion (and the attendant design of appropriate policy instruments). Ideally, any such methodology would allow for extensive back-testing and be straightforward to apply, avoiding overly complex modeling or stress-testing. While it should not be too data-intensive to compute and implement, enough data would need to be collected to ensure the greatest possible coverage of financial intermediaries in order to accommodate different financial sector characteristics and supervisory regimes across national boundaries. In addition, the underlying measure of systemic risk should be time-varying and, if possible, should offset the procyclical tendencies of liquidity risk and account for changes to an institution’s risk contribution, which might not necessarily follow cyclical patterns. Finally, it would also motivate a risk-adjusted pricing scheme so that institutions that contribute to systemic liquidity risk are assigned a proportionately higher charge (while the opposite would hold true for firms that help absorb system-wide shocks from sudden increases in liquidity risk).
In this regard, several proposals are currently under discussion (see Table 1), including the internalization of the public sector cost of liquidity risk via insurance schemes (Goodhart, 2009; Gorton and Metrick, 2009; Perotti and Suarez, 2009 and 2011), capital charges (Brunnermeier and Pedersen, 2009), taxation (Acharya and others, 2010a and 2010b), investment requirements (Cao and Illing, 2009; Farhi and others, 2009), as well as arrangements aimed at mitigating the system-wide effects of fire sale liquidations of assets via collateral haircuts (Valderrama, 2010) and modifications of resolution regimes (Roe, 2009; Acharya and Oncu, 2010). In particular, Gorton (2009) advocates a systemic liquidity risk insurance guarantee fee that explicitly recognizes the public sector cost of supporting secured funding markets if fragility were to materialize. Roe (2009) argues that the internalization of such costs would ideally be achieved by exposing lenders to the credit risk of the counterparty (and not just that of the collateral) by disallowing unrestricted access to collateral even in case of default of the counterparty. In this way, lenders would exercise greater effort in discriminating ex ante between safer and riskier borrowers. Such incentives could be supported by time-varying increases in liquidity requirements, which would also curb credit expansion fueled by short-term and volatile wholesale funding and reduce dangerous reliance on such funding (Jácome and Nier, 2012).
In this paper, we propose a structural approach—the systemic risk-adjusted liquidity (SRL) model—for the assessment and stress testing of systemic liquidity risk. Although macroprudential surveillance relies primarily on prudential regulation and supervision, calibrated and used to limit systemic risk, additional measures and instruments are needed to directly address systemic liquidity risk. This paper underscores why more needs to be done to develop macroprudential techniques to measure and mitigate such risks arising from individual or collective financial arrangements—both institutional and market-based—that could either lead directly to system-wide distress of institutions and/or significantly amplify its consequences. The SRL model complements the current Basel III liquidity framework by extending the prudential assessment of stable funding (based on the NSFR) to a system-wide approach, which can help substantiate different types of macroprudential tools, such as a capital surcharge, a fee, a tax, or an insurance premium that can be used to price contingent liquidity access.
The SRL model quantifies how the size and interconnectedness of individual institutions (with varying degrees of leverage and maturity mismatches defining their risk profile) can create short-term liquidity risk on a system-wide level and under distress conditions. The model combines quantity-based indicators of institution-specific funding liquidity (conditional on maturity mismatches and leverage) with price-based measures of monetary and funding liquidity; adverse shocks to various market rates alter the latter and, in turn, form the stress scenarios for systemic liquidity risk within the model (see Table 2 and Box 2). In this way, the SRL model fosters a better understanding of institutional vulnerabilities to the liquidity cycle and related build-ups of risks based on market rates that are available at high frequencies and which lend themselves to the identification of periods of heightened systemic liquidity risk (CGFS, 2011).
This approach forms the basis for a possible capital charge or an insurance premium—a pre-payment for the contingent (official) liquidity support that financial institutions eventually receive in times of joint distress—by identifying and measuring the ways in which they contribute to aggregate risk over the short term. Such a liquidity charge should reflect the marginal contribution of institutions’ short-term funding decisions to the generation of systemic risk from the simultaneous realization of liquidity shortfalls. Proper pricing of the opportunity cost of holding insufficient liquidity—especially for very adverse funding situations—would help lower the scale of contingent liquidity support from the public sector (or collective burden sharing mechanisms). The charge needs to be risk-based, should increase with a common maturity mismatch of assets and liabilities, and would be applicable to all institutions with access to safety net guarantees. Since liquidity runs are present in the escalating phase of all systemic crises, our focus is on short-term wholesale liabilities, properly weighted by the bank's maturity mismatch.
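As a rough illustration of the attribution idea described above (not of the SRL model's option-pricing machinery itself), the sketch below simulates correlated funding shortfalls across a few hypothetical institutions, measures the expected system-wide shortfall in the worst scenarios, and allocates that tail expectation back to each firm as the basis for a risk-based charge. The exposures, correlations, and tail threshold are all assumed values.

```python
# Illustrative only: attribute a simulated system-wide liquidity shortfall
# back to individual institutions (this is not the SRL model itself).
import numpy as np

rng = np.random.default_rng(0)
names = ["Bank A", "Bank B", "Bank C"]
exposure = np.array([100.0, 60.0, 40.0])        # hypothetical short-term funding gaps
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])               # assumed co-movement of funding conditions

shocks = rng.multivariate_normal(np.zeros(3), corr, size=100_000)
shortfalls = np.maximum(shocks, 0.0) * exposure  # shortfalls arise only when conditions worsen
system = shortfalls.sum(axis=1)

tail = system >= np.quantile(system, 0.95)       # 5 percent worst joint outcomes
systemic_es = system[tail].mean()                # expected systemic shortfall in the tail
marginal = shortfalls[tail].mean(axis=0)         # each firm's average contribution in the tail

for name, m in zip(names, marginal):
    print(f"{name}: contribution {m:.1f} ({m / systemic_es:.0%} of the tail shortfall)")
```

By construction the individual contributions sum to the expected systemic shortfall, which is the property a proportional charge or insurance premium would rely on.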
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26203.0
Friday, August 24, 2012
Regulators Captured - WSJ Editorial about the SEC and money-market funds
Regulators Captured
The Wall Street Journal, August 24, 2012, on page A10
http://online.wsj.com/article/SB10000872396390444812704577607421541441692.html
Economist George Stigler described the process of "regulatory capture," in which government agencies end up serving the industries they are supposed to regulate. This week lobbyists for money-market mutual funds provided still more evidence that Stigler deserved his Nobel. At the Securities and Exchange Commission, three of the five commissioners blocked a critical reform to help prevent a taxpayer bailout like the one the industry received in 2008.
SEC rules have long allowed money-fund operators to employ an accounting fiction that makes their funds appear safer than they are. Instead of share prices that fluctuate, like other kinds of securities, money funds are allowed to report to customers a fixed net asset value (NAV) of $1 per share—even if that's not exactly true.
As long as the value of a fund's underlying assets doesn't stray too far from that magical figure, fund sponsors can present a picture of stability to customers. Money funds are often seen as competitors to bank accounts and now hold $1.6 trillion in assets.
But during times of crisis, as in 2008, investors are reminded how different money funds are from insured deposits. When one fund "broke the buck"—its asset value fell below $1 per share—it triggered an institutional run on all money funds. The Treasury responded by slapping a taxpayer guarantee on the whole industry.
SEC Chairman Mary Schapiro has been trying to eliminate this systemic risk by taking away the accounting fiction that was created when previous generations of lobbyists captured the SEC. She made the sensible case that money-fund prices should float like the securities they are.
But industry lobbyists are still holding hostages. Commissioners Luis Aguilar, Dan Gallagher and Troy Paredes refused to support reform, so taxpayers can expect someday a replay of 2008. True to the Stigler thesis, the debate has focused on how to maintain the current money-fund business model while preventing customers from leaving in a crisis. The SEC goal should be to craft rules so that when customers leave a fund, it is a problem for fund managers, not taxpayers.
The industry shrewdly lobbied Beltway conservatives, who bought the line that this was a defense against costly regulation, even though regulation more or less created the money-fund industry. Free-market think tanks have been taken for a ride, some of them all too willingly.
The big winners include dodgy European banks, which can continue to attract U.S. money funds chasing higher yields knowing the American taxpayer continues to offer an implicit guarantee.
The industry shouldn't celebrate too much, though, because regulation may now be imposed by the new Financial Stability Oversight Council. Federal Reserve and Treasury officials want to do something, and their preference will probably be more supervision and capital positions that will raise costs that the industry can pass along to consumers. By protecting the $1 fixed NAV, free-marketeers may have guaranteed more of the Dodd-Frank-style regulation they claim to abhor.
The losers include the efficiency and fairness of the U.S. economy, as another financial industry gets government to guarantee its business model. Congratulations.
Monday, August 20, 2012
The South China Sea's Gathering Storm. By James Webb, US Senator
The South China Sea's Gathering Storm. By James Webb
The Wall Street Journal, August 19, 2012, page A11
http://online.wsj.com/article/SB10000872396390444184704577587483914661256.html
Since World War II, despite the costly flare-ups in Korea and Vietnam, the United States has proved to be the essential guarantor of stability in the Asian-Pacific region, even as the power cycle shifted from Japan to the Soviet Union and most recently to China. The benefits of our involvement are one of the great success stories of American and Asian history, providing the so-called second tier countries in the region the opportunity to grow economically and to mature politically.
As the region has grown more prosperous, the sovereignty issues have become more fierce. Over the past two years Japan and China have openly clashed in the Senkaku Islands, east of Taiwan and west of Okinawa, whose administration is internationally recognized to be under Japanese control. Russia and South Korea have reasserted sovereignty claims against Japan in northern waters. China and Vietnam both claim sovereignty over the Paracel Islands. China, Vietnam, the Philippines, Brunei and Malaysia all claim sovereignty over the Spratly Islands, the site of continuing confrontations between China and the Philippines.
Such disputes involve not only historical pride but also such vital matters as commercial transit, fishing rights, and potentially lucrative mineral leases in the seas that surround the thousands of miles of archipelagos. Nowhere is this growing tension clearer than in the increasingly hostile disputes in the South China Sea.
On June 21, China's State Council approved the establishment of a new national prefecture which it named Sansha, with its headquarters on Woody Island in the Paracel Islands. Called Yongxing by the Chinese, Woody Island has no indigenous population and no natural water supply, but it does sport a military-capable runway, a post office, a bank, a grocery store and a hospital.
The Paracels are more than 200 miles southeast of Hainan, mainland China's southernmost territory, and due east of Vietnam's central coast. Vietnam adamantly claims sovereignty over the island group, the site of a battle in 1974 when China attacked the Paracels in order to oust soldiers of the former South Vietnamese regime.
The potential conflicts stemming from the creation of this new Chinese prefecture extend well beyond the Paracels. Over the last six weeks the Chinese have further proclaimed that the jurisdiction of Sansha includes not just the Paracel Islands but virtually the entire South China Sea, connecting a series of Chinese territorial claims under one administrative rubric. According to China's official news agency Xinhua, the new prefecture "administers over 200 islets" and "2 million square kilometers of water." To buttress this annexation, 45 legislators have been appointed to govern the roughly 1,000 people on these islands, along with a 15-member Standing Committee, plus a mayor and a vice mayor.
These political acts have been matched by military and economic expansion. On July 22, China's Central Military Commission announced that it would deploy a garrison of soldiers to guard the islands in the area. On July 31, it announced a new policy of "regular combat-readiness patrols" in the South China Sea. And China has now begun offering oil exploration rights in locations recognized by the international community as within Vietnam's exclusive economic zone.
For all practical purposes China has unilaterally decided to annex an area that extends eastward from the East Asian mainland as far as the Philippines, and nearly as far south as the Strait of Malacca. China's new "prefecture" is nearly twice as large as the combined land masses of Vietnam, South Korea, Japan and the Philippines. Its "legislators" will directly report to the central government.
American reaction has been muted. The State Department waited until Aug. 3 before expressing official concern over China's "upgrading of its administrative level . . . and establishment of a new military garrison" in the disputed areas. The statement was carefully couched within the context of long-standing policies calling for the resolution of sovereignty issues in accordance with international law and without the use of military force.
Even so, the Chinese government responded angrily, warning that State Department officials had "confounded right and wrong, and sent a seriously wrong message." The People's Daily, a quasi-official publication, accused the U.S. of "fanning the flames and provoking division, deliberately creating antagonism with China." Its overseas edition said it was time for the U.S. to "shut up."
In truth, American vacillations have for years emboldened China. U.S. policy with respect to sovereignty issues in Asian-Pacific waters has been that we take no sides, that such matters must be settled peacefully among the parties involved. Smaller, weaker countries have repeatedly called for greater international involvement.
China, meanwhile, has insisted that all such issues be resolved bilaterally, which means either never or only under its own terms. Due to China's growing power in the region, by taking no position Washington has by default become an enabler of China's ever more aggressive acts.
The U.S., China and all of East Asia have now reached an unavoidable moment of truth. Sovereignty disputes in which parties seek peaceful resolution are one thing; flagrant, belligerent acts are quite another. How this challenge is addressed will have implications not only for the South China Sea, but also for the stability of East Asia and for the future of U.S.-China relations.
History teaches us that when unilateral acts of aggression go unanswered, the bad news never gets better with age. Nowhere is this cycle more apparent than in the alternating power shifts in East Asia. As historian Barbara Tuchman noted in her biography of U.S. Army Gen. Joseph Stilwell, it was China's plea for U.S. and League of Nations support that went unanswered following Japan's 1931 invasion of Manchuria, a neglect that "brewed the acid of appeasement that . . . opened the decade of descent to war" in Asia and beyond.
While America's attention is distracted by the presidential campaign, all of East Asia is watching what the U.S. will do about Chinese actions in the South China Sea. They know a test when they see one. They are waiting to see whether America will live up to its uncomfortable but necessary role as the true guarantor of stability in East Asia, or whether the region will again be dominated by belligerence and intimidation.
The Chinese of 1931 understood this threat and lived through the consequences of an international community's failure to address it. The question is whether the China of 2012 truly wishes to resolve issues through acceptable international standards, and whether the America of 2012 has the will and the capacity to insist that this approach is the only path toward stability.
Mr. Webb, a Democrat, is a U.S. senator from Virginia.
Thursday, August 2, 2012
Recovery and resolution of financial market infrastructures, consultative report issued by CPSS-IOSCO
Recovery and resolution of financial market infrastructures, consultative report issued by CPSS-IOSCO
July 31, 2012
http://www.bis.org/press/p120731.htm
The Committee on Payment and Settlement Systems (CPSS) and the International Organization of Securities Commissions (IOSCO) have today published for public comment a consultative report on the Recovery and resolution of financial market infrastructures.
Financial market infrastructures (FMIs) play an essential role in the global financial system. The disorderly failure of an FMI can lead to severe systemic disruption if it causes markets to cease to operate effectively. Accordingly, all types of FMIs should generally be subject to regimes and strategies for recovery and resolution.
The CPSS-IOSCO Principles for financial market infrastructures (Principles) published in April 2012 require that FMIs have effective strategies, rules and procedures to enable them to recover from financial stresses. The Financial Stability Board's Key Attributes of Effective Resolution Regimes for Financial Institutions (Key Attributes) published in 2011 further require that jurisdictions establish resolution regimes to allow for the resolution of a financial institution in circumstances where recovery is no longer feasible. An effective resolution regime must enable resolution without systemic disruption or exposing the taxpayer to loss. To achieve this in the context of FMIs, relevant authorities must have powers to maintain an FMI's critical services.
The purpose of the report released today is to outline the issues that should be taken into account for different types of FMIs when putting in place effective recovery plans and resolution regimes that are consistent with the Principles and Key Attributes. The report also seeks consultees' views on a number of technical points related to these issues.
Paul Tucker, Deputy Governor, Financial Stability of the Bank of England and CPSS Chairman said, "The vital role of the financial system's infrastructure makes it essential that credible recovery plans and resolution regimes exist. FMIs need to be a source of strength and continuity for the financial markets they serve."
"This is even more important as a safeguard given the commitment made by G20 Leaders in 2009 that all standardised OTC derivatives should be cleared through central counterparties," added Masamichi Kono, Vice Commissioner for International Affairs, Financial Services Agency, Japan and Chairman of the IOSCO Board.
Amongst its conclusions, the report states that the Key Attributes will provide a framework for resolution of FMIs under a statutory resolution regime.
Published alongside the report is a cover note that lists the specific issues on which the committees seek comments during the public consultation period.
Comments on the report are invited from all interested parties and should be sent by 28 September 2012 (see note 1 below). After the consultation period, CPSS-IOSCO will report on how the methodology for assessing compliance with the Key Attributes, currently being prepared by the Financial Stability Board, should be revised to reflect FMI-specific elements taking into account the conclusions of this report and the comments received.
Notes
- Comments on the report should be sent by 28 September 2012 to both the CPSS secretariat (cpss@bis.org) and the IOSCO secretariat (fmiresolution@iosco.org). The comments will be published on the websites of the BIS and IOSCO unless commentators have requested otherwise.
- The CPSS serves as a forum for central banks to monitor and analyse developments in payment and settlement arrangements as well as in cross-border and multicurrency settlement schemes. The CPSS secretariat is hosted by the BIS. More information about the CPSS, and all its publications, can be found on the BIS website at www.bis.org/cpss.
- IOSCO is an international policy forum for securities regulators. Its objective is to review major regulatory issues related to international securities and futures transactions and to coordinate practical responses to these concerns.
- Both committees are recognised as international standard-setting bodies by the Financial Stability Board (www.financialstabilityboard.org).
- The documents are on the websites of the BIS at http://www.bis.org/publ/cpss103.htm and IOSCO at http://www.iosco.org/library/pubdocs/pdf/IOSCOPD388.pdf.
- The CPSS-IOSCO Principles for financial market infrastructures can be found on the websites of the BIS at http://www.bis.org/publ/cpss101.htm and IOSCO at http://www.iosco.org/library/pubdocs/pdf/IOSCOPD377.pdf.
- The Financial Stability Board's Key attributes of effective resolution regimes for financial institutions can be found on the FSB's website at http://www.financialstabilityboard.org/publications/r_111104cc.pdf.
Tuesday, July 31, 2012
Measuring Systemic Liquidity Risk and the Cost of Liquidity Insurance. By Tiago Severo
Measuring Systemic Liquidity Risk and the Cost of Liquidity Insurance. By Tiago Severo
IMF Working Paper No. 12/194
Jul 31, 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012194
Summary: I construct a systemic liquidity risk index (SLRI) from data on violations of arbitrage relationships across several asset classes between 2004 and 2010. Then I test whether the equity returns of 53 global banks were exposed to this liquidity risk factor. Results show that the level of bank returns is not directly affected by the SLRI, but their volatility increases when liquidity conditions deteriorate. I do not find a strong association between bank size and exposure to the SLRI - measured as the sensitivity of volatility to the index. Surprisingly, exposure to systemic liquidity risk is positively associated with the Net Stable Funding Ratio (NSFR). The link between equity volatility and the SLRI allows me to calculate the cost that would be borne by public authorities for providing liquidity support to the financial sector. I use this information to estimate a liquidity insurance premium that could be paid by individual banks in order to cover for that social cost.
Excerpts:
Introduction
Liquidity risk has become a central topic for investors, regulators and academics in the aftermath of the global financial crisis. The sharp decline of real estate prices in the U.S. and parts of Europe and the consequent collapse in the values of related securities created widespread concerns about the solvency of banks and other financial intermediaries. The resulting increase in counterparty risk induced investors to shy away from risky short-term funding markets [Gorton and Metrick (2010)] and to store funds in safe and liquid assets, especially U.S. government debt. The dry-up in funding markets hit levered financial intermediaries involved in maturity and liquidity transformation hard [Brunnermeier (2009)], propagating the initial shock through global markets.
Central bankers in major countries responded to the contraction in liquidity by pumping an unprecedented amount of funds into securities and interbank markets, and by creating and extending liquidity backstop lines to rescue troubled financial intermediaries. Such measures have exposed public finances, and ultimately taxpayers, to the risk of substantial losses. Understanding the origins of systemic liquidity risk in financial markets is, therefore, an invaluable tool for policymakers to reduce the chance of facing these very same challenges again in the future. In particular, if public support in periods of widespread distress cannot be prevented—due to commitment problems—supervisors and regulators should ensure that financial intermediaries are properly monitored and charged to reflect the contingent benefits they enjoy.
The present paper makes three contributions to the topic of systemic liquidity risk:
1) It produces a systemic liquidity risk index (SLRI) calculated from violations of “arbitrage” relationships in various securities markets (a stylized construction is sketched after this list).
2) It estimates the exposure of 53 global banks to this aggregate risk factor.
3) It uses the information in 2) to devise an insurance system where banks pre-pay for the costs faced by public authorities for providing contingent liquidity support.
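A minimal sketch of how such an index could be assembled is shown below: several arbitrage-deviation series (for example, covered interest parity deviations or the CDS-bond basis) are standardized and collapsed into one common factor via their first principal component. The series names and the weighting method are illustrative assumptions, not necessarily the exact construction used in the paper.

```python
# Minimal sketch of building a common liquidity-stress factor from several
# arbitrage-deviation series. Extracting the first principal component of the
# standardized series is one simple choice; the exact series and weighting in
# the paper may differ, so treat this as illustrative only.
import numpy as np
import pandas as pd

def build_slri(deviations: pd.DataFrame) -> pd.Series:
    """deviations: one column per absolute arbitrage deviation, dated index."""
    z = (deviations - deviations.mean()) / deviations.std()   # standardize each series
    z = z.dropna()
    cov = np.cov(z.values, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs[:, -1]                                         # first principal component
    w = w if w.sum() >= 0 else -w                              # sign so that higher = more stress
    return pd.Series(z.values @ w, index=z.index, name="SLRI")

# Usage with a hypothetical input file:
# dev = pd.read_csv("arbitrage_deviations.csv", index_col=0, parse_dates=True)
# slri = build_slri(dev)
```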
Results indicate that systemic illiquidity became a widespread problem in the aftermath of Lehman’s bankruptcy and that conditions only normalized after several months. Systemic liquidity risk spiked again during the Greek sovereign crisis in the second quarter of 2010, albeit at much more moderate levels. Yet the renewed concerns regarding sovereign default in peripheral Europe observed in the last quarter of 2010 did not induce global liquidity shortfalls.
In terms of the exposures of individual institutions, I find that, in general, systemic liquidity risk does not affect the level of bank stock returns in a systematic fashion. However, liquidity risk is strongly correlated with the volatility of bank stocks: system-wide illiquidity is associated with riskier banks. Estimates also show that U.S. and U.K. banks were relatively more exposed to liquidity conditions than Japanese institutions, with continental European banks lying in the middle of the distribution. More specifically, the results indicate that U.S. and U.K. banks’ stocks became much more volatile relative to their Asian peers when liquidity evaporated. This likely reflects the higher degree of maturity transformation and the reliance on very short-term funding by Anglo-Saxon banks. A natural question is whether bank-specific characteristics beyond geographic location explain the different degrees of exposure to liquidity risk.
I start the quest for those bank characteristics by looking at the importance of bank size for liquidity risk exposure. Market participants, policymakers and academics have highlighted the role of size and interconnectedness as a source of systemic risk. To verify this claim, I form quintile portfolios of banks based on market capitalization and test whether there are significant differences in the sensitivity of their return volatility to the SLRI. The estimates suggest that size has implications for liquidity risk, but the relationship is highly non-linear. The association between size and sensitivity to liquidity conditions is only relevant for the very large banks, and it becomes pronounced only when liquidity conditions deteriorate substantially.
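The sketch below is a stylized version of that size test: banks are sorted into quintile portfolios by market capitalization, and a crude volatility proxy (squared daily portfolio returns) is regressed on the SLRI for each quintile. The paper's actual volatility specification is richer; this only shows the mechanics, and the variable names are assumptions.

```python
# Stylized version of the size test: quintile portfolios by market cap,
# then a simple regression of a volatility proxy on the SLRI per quintile.
import numpy as np
import pandas as pd

def size_quintile_sensitivities(returns: pd.DataFrame,
                                market_cap: pd.Series,
                                slri: pd.Series) -> pd.Series:
    """returns: daily bank returns (columns = banks); market_cap: one value per bank."""
    quintiles = pd.qcut(market_cap, 5, labels=False)            # 0 = smallest, 4 = largest
    sens = {}
    for q in range(5):
        banks = market_cap.index[(quintiles == q).to_numpy()]
        port = returns[banks].mean(axis=1)                       # equal-weighted portfolio
        df = pd.concat([port.pow(2).rename("vol_proxy"),
                        slri.rename("slri")], axis=1).dropna()
        x = np.column_stack([np.ones(len(df)), df["slri"].values])
        beta = np.linalg.lstsq(x, df["vol_proxy"].values, rcond=None)[0]
        sens[f"Q{q + 1}"] = beta[1]                               # slope on the SLRI
    return pd.Series(sens, name="volatility sensitivity to SLRI")
```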
Recently, the Basel Committee on Banking Supervision produced, for the first time, a framework (based on balance sheet information) to regulate banks’ liquidity risk. In particular, it proposed two liquidity ratios that are to be monitored by supervisors: the Liquidity Coverage Ratio (LCR), which indicates banks’ ability to withstand a short-term liquidity crisis, and the Net Stable Funding Ratio (NSFR), which measures the long-term, structural funding mismatches in a bank. Forming quintile portfolios based on banks' NSFR, I find that, if anything, the regulatory ratio is positively associated with exposure to the SLRI. In other words, banks with a high NSFR (the ones deemed to be structurally more liquid) are in fact slightly more sensitive to liquidity conditions. This counterintuitive result needs to be qualified. As noted later, the SLRI captures short-term liquidity stresses, whereas the NSFR is designed as a medium- to long-term indicator of liquidity conditions. Certainly, it would be more appropriate to test the performance of the LCR instead. However, the data necessary for its computation are not readily available.
The link between bank stock volatility and the SLRI allows me to calculate the cost faced by public authorities for providing liquidity support for banks. Relying on the contingent claims approach (CCA), I use observable information on a bank’s equity and the book value of its liabilities to back out the unobserved level and volatility of its assets. I then estimate by how much the level and volatility of implied assets change as liquidity conditions deteriorate, and how such changes affect the price of a hypothetical put option on the bank’s assets. Because the price of this put indicates the cost of public support to banks, variations in the put due to fluctuations in the SLRI provide a benchmark for charging banks according to their exposure to systemic liquidity risk, a goal that has been advocated by many experts on bank regulation.
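The sketch below illustrates the contingent claims step in its simplest Merton-style form: treat equity as a call option on bank assets, solve for the implied asset value and volatility from the observed equity value and equity volatility, and price the put on assets (struck at a debt barrier) whose value proxies the cost of official support. Lognormal assets, a one-year horizon, and the numbers used are simplifying assumptions; the paper's implementation may differ in detail.

```python
# Minimal Merton-style sketch of the contingent claims approach described above.
from math import log, sqrt, exp
from scipy.optimize import fsolve
from scipy.stats import norm

def _d1_d2(v, sigma_v, debt, r, t):
    d1 = (log(v / debt) + (r + 0.5 * sigma_v ** 2) * t) / (sigma_v * sqrt(t))
    return d1, d1 - sigma_v * sqrt(t)

def implied_assets(equity, sigma_e, debt, r=0.02, t=1.0):
    """Back out asset value V and asset volatility sigma_V from equity data."""
    def eqs(x):
        v, sigma_v = x
        d1, d2 = _d1_d2(v, sigma_v, debt, r, t)
        call = v * norm.cdf(d1) - debt * exp(-r * t) * norm.cdf(d2)   # equity as a call on assets
        vol_link = v * norm.cdf(d1) * sigma_v / equity                # sigma_E = (V/E) N(d1) sigma_V
        return [call - equity, vol_link - sigma_e]
    return fsolve(eqs, x0=[equity + debt, sigma_e * equity / (equity + debt)])

def support_put(v, sigma_v, debt, r=0.02, t=1.0):
    """Value of the implicit put on assets, struck at the debt barrier."""
    d1, d2 = _d1_d2(v, sigma_v, debt, r, t)
    return debt * exp(-r * t) * norm.cdf(-d2) - v * norm.cdf(-d1)

# Hypothetical bank: equity 20, equity volatility 40%, debt barrier 90.
v, sigma_v = implied_assets(equity=20.0, sigma_e=0.40, debt=90.0)
print(round(support_put(v, sigma_v, debt=90.0), 3))
```

Repricing the put under liquidity-stressed inputs and comparing it with the baseline value gives the kind of marginal cost figure that a liquidity insurance premium could be based on.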
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26131.0
IMF Working Paper No. 12/194
Jul 31, 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012194
Summary: I construct a systemic liquidity risk index (SLRI) from data on violations of arbitrage relationships across several asset classes between 2004 and 2010. Then I test whether the equity returns of 53 global banks were exposed to this liquidity risk factor. Results show that the level of bank returns is not directly affected by the SLRI, but their volatility increases when liquidity conditions deteriorate. I do not find a strong association between bank size and exposure to the SLRI - measured as the sensitivity of volatility to the index. Surprisingly, exposure to systemic liquidity risk is positively associated with the Net Stable Funding Ratio (NSFR). The link between equity volatility and the SLRI allows me to calculate the cost that would be borne by public authorities for providing liquidity support to the financial sector. I use this information to estimate a liquidity insurance premium that could be paid by individual banks in order to cover for that social cost.
Excerpts:
Introduction
Liquidity risk has become a central topic for investors, regulators and academics in the aftermath of the global financial crisis. The sharp decline of real estate prices in the U.S. and parts of Europe and the consequent collapse in the values of related securities created widespread concerns about the solvency of banks and other financial intermediaries. The resulting increase in counterparty risk induced investors to shy away from risky short-term funding markets [Gorton and Metrick (2010)] and to store funds in safe and liquid assets, especially U.S. government debt. The dry-up in funding markets hit levered financial intermediaries involved in maturity and liquidity transformation hard [Brunnermeier (2009)], propagating the initial shock through global markets.
Central bankers in major countries responded to the contraction in liquidity by pumping an unprecendented amount of funds into securities and interbank markets, and by creating and extending liquidity backstop lines to rescue troubled financial intermediaries. Such measures have exposed public finances, and ultimately taxpayers, to the risk of substantial losses. Understanding the origins of systemic liquidity risk in financial markets is, therefore, an invaluable tool for policymakers to reduce the chance of facing these very same challenges again in the future. In particular, if public support in periods of widespread distress cannot be prevented—due to commitment problems—supervisors and regulators should ensure that financial intermediaries are properly monitored and charged to reflect the contingent benefits they enjoy.
The present paper brings three contributions to the topic of systemic liquidity risk:
1) It produces a systemic liquidity risk index (SLRI) calculated from violations of “arbitrage” relationships in various securities markets.
2) It estimates the exposure of 53 global banks to this aggregate risk factor.
3) It uses the information in 2) to devise an insurance system where banks pre-pay for the costs faced by public authorities for providing contingent liquidity support.
Results indicate that systemic illiquidity became a widespread problem in the aftermath of Lehman’s bankruptcy, and it only recovered after several months. Systemic liquidity risk spiked again during the Greek sovereign crisis in the second quarter of 2010, albeit at much more moderate levels. Yet, the renewed concerns regarding sovereign default in peripheral Europe observed in the last quarter of 2010 did not induce global liquidity shortfalls.
In terms of exposures of individual institutions, I find that, in general, systemic liquidity risk does not affect the level of bank stock returns on a systematic fashion. However, liquidity risk is strongly correlated with the volatility of bank stocks: system wide illiquidity is associated with riskier banks. Estimates also show that U.S. and U.K. banks were relatively more exposed to liquidity conditions compared to Japanese institutions, with continental European banks lying in the middle of the distribution. More specifically, the results indicate that U.S. and U.K. banks’ stocks became much more volatile relative to their Asian peers when liquidity evaporated. This likely reflects the higher degree of maturity transformation and the reliance on very short-term funding by Anglo-Saxon banks. A natural question is whether bank specific characteristics beyond geographic location reflect the different degrees of exposure to liquidity risk.
I start the quest for those bank characteristics by looking at the importance of bank size for liquidity risk exposure. Market participants, policymakers and academics have highlighted the role of size and interconnectedness as a source of systemic risk. To verify this claim, I form quintile portfolios of banks based on market capitalization and test whether there are significant differences in the sensitivity of their return volatility to the SLRI. The estimates suggest that size has implications for liquidity risk, but the relationship is highly non-linear. The association between size and sensitivity to liquidity conditions is only relevant for the very large banks, and it becomes pronounced only when liquidity conditions deteriorate substantially.
Recently, the Basel Committee on Banking Supervision produced, for the first time, a framework (based on balance sheet information) to regulate banks’ liquidity risk. In particular, it proposed two liquidity ratios that shall be monitored by supervisors: the Liquidity Coverage Ratio (LCR), which indicates banks’ ability to withstand a short-term liquidity crisis, and the Net Stable Funding Ratio (NSFR), which measures the long-term, structural funding mismatches in a bank. Forming quintile portfolios based on banks' NSFR, I find that, if anything, the regulatory ratio is positively associated with the exposure to the SLRI. In other words, banks with a high NSFR (the ones deamed to be structurally more liquid) are in fact slightly more sensitive to liquidity conditions. This counterintuitive result needs to be qualified. As noted later, the SLRI captures short-term liquidity stresses, whereas the NSFR is designed as a medium to long-term indicator of liquidity conditions. Certainly, it would be more appropriate to test the performance of LCR instead. However, the data necessary for its computation are not readily available.
The link between bank stock volatility and the SLRI allows me to calculate the cost faced by public authorities for providing liquidity support for banks. Relying on the contingent claims approach (CCA), I use observable information on a bank’s equity and the book value of its liabilities to back out the unobserved level and volatility of its assets. I then estimate by how much the level and volatility of implied assets change as liquidity conditions deteriorate, and how such changes affect the price of a hypothetical put option on the bank’s assets. Because the price of this put indicates the cost of public support to banks, variations in the put due to fluctuations in the SLRI provide a benchmark for charging banks according to their exposure to systemic liquidity risk, a goal that has been advocated by many experts on bank regulation.
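The sketch below illustrates that calculation under a standard Merton-type contingent claims setup: observed equity value and volatility, together with the book value of liabilities, are used to back out the implied level and volatility of assets, and the implicit put is then re-priced under stressed inputs. The functional form is the textbook Merton model rather than the paper's exact implementation, and all numerical values are hypothetical placeholders.

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_equations(x, equity, equity_vol, barrier, r, T):
    # Residuals of the two Merton equations linking observed equity value and
    # volatility to the unobserved asset value A and asset volatility sigma_A.
    A, sigma_A = x
    d1 = (np.log(A / barrier) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    d2 = d1 - sigma_A * np.sqrt(T)
    eq1 = A * norm.cdf(d1) - barrier * np.exp(-r * T) * norm.cdf(d2) - equity
    eq2 = A * sigma_A * norm.cdf(d1) - equity * equity_vol
    return [eq1, eq2]

def implicit_put(equity, equity_vol, barrier, r=0.02, T=1.0):
    # Back out implied assets, then price the put on the bank's assets that
    # proxies the cost of public support.
    A0, s0 = equity + barrier, equity_vol * equity / (equity + barrier)
    A, sigma_A = fsolve(merton_equations, x0=[A0, s0],
                        args=(equity, equity_vol, barrier, r, T))
    d1 = (np.log(A / barrier) + (r + 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
    d2 = d1 - sigma_A * np.sqrt(T)
    return barrier * np.exp(-r * T) * norm.cdf(-d2) - A * norm.cdf(-d1)

# Hypothetical bank: equity of 60, equity volatility of 40%, book liabilities of 540.
put_calm = implicit_put(equity=60.0, equity_vol=0.40, barrier=540.0)
# Re-price after a liquidity shock that depresses equity and raises its volatility;
# the increase in the put value is a benchmark for a liquidity-risk charge.
put_stress = implicit_put(equity=50.0, equity_vol=0.60, barrier=540.0)
print(put_stress - put_calm)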
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26131.0
Sunday, July 29, 2012
Austerity Debate a Matter of Degree -- In Europe, Opinions Differ on Depth, Timing of Cuts; International Monetary Fund Has Change of Heart
Austerity Debate a Matter of Degree. By Stephen Fidler
In Europe, Opinions Differ on Depth, Timing of Cuts; International Monetary Fund Has Change of Heart
Wall Street Journal, February 17, 2012
http://online.wsj.com/article/SB10001424052970204792404577227273553955752.html
Excerpts
In the U.S., the debate about whether the government should start cutting its budget deficit opens up a deep ideological divide. Many countries in Europe don't have that luxury.
True, there may be questions about how hard to cut budgets and how best to time the cuts, but with government-bond investors going on strike, policy makers either don't have a choice or feel they don't. Budget austerity is also a recipe favored by Germany and other euro-zone governments that hold the Continent's purse strings.
Once upon a time, the International Monetary Fund, which also provides bailout funds and lends its crisis management expertise to euro-zone governments, would have been right there with the Germans: It never handled a financial crisis for which tough austerity wasn't the prescribed medicine. In Greece, however, officials say the IMF supported spreading the budget pain over a number of years rather than concentrating it at the front end.
That is partly because overpromising the undeliverable hurts government credibility, which is essential to overcoming the crisis. But it is also because the IMF's view has shifted.
"Over its history, the IMF has become less dogmatic about fiscal austerity being always the right response to a crisis," said Laurence Ball, economics professor at Johns Hopkins University, and a part-time consultant to the IMF.
These days, the fund worries more than it did about the negative impact that cutting budgets has on short-term growth prospects—a traditional concern of Keynesian economists.
"Fiscal consolidation typically has a contractionary effect on output. A fiscal consolidation equal to 1% of [gross domestic product] typically reduces GDP by about 0.5% within two years and raises the unemployment rate by about 0.3 percentage point," the IMF said in its 2010 World Economic Outlook:
But that isn't the full story. In the first place, the IMF agrees that reducing government debt—which is what austerity should eventually achieve—has long-term economic benefits. For example, in a growing economy close to full employment, reduced competition for savings should lower the cost of capital for private entrepreneurs.
That suggests that, where bond markets give governments the choice, there is a legitimate debate to be had about timing of austerity. The IMF economic models suggest it will be five years before the "break-even" point when the benefits to growth of cutting debt start to exceed the "Keynesian" effects of austerity.
There is an alternative hypothesis that has a lot of support in Germany, and among the region's central bankers. This is the notion that budget cutbacks stimulate growth in the short term, often referred to as the "expansionary fiscal contraction" hypothesis.
Manfred Neumann, professor emeritus of economics at the Institute for Economic Policy at the University of Bonn, said the view is also called the "German hypothesis" since it emerged from a round of German budget cutting in the early 1980s.
"The positive effect of austerity is much stronger than most people believe," he said. The explanation for the beneficial impact is that cutting government debt generates an improvement in confidence among households and entrepreneurs, he said.
The IMF concedes there may be something in this for countries where people are worried about the risk that the government might default—but only up to a point. It concedes that fiscal retrenchment in such countries "tends to be less contractionary" than in countries not facing market pressures—but doesn't conclude that budget cutting in such circumstances is actually expansionary.
Each side of the debate invokes its own favored study. Support for the "German hypothesis" comes from two Harvard economists with un-German names—Alberto Alesina and Silvia Ardagna. But their critics, who include Mr. Ball, say their sample includes many irrelevant episodes for which their model fails to correct—including, for example, the U.S. "fiscal correction" that was born out of the U.S. economic boom of the late 1990s.
Mr. Alesina didn't respond to an email asking for comment, but Mr. Neumann said he isn't confident that studies, such as the IMF's, that appear to refute the hypothesis manage to isolate the effects of the austerity policy from other effects of a financial crisis.
Some of the IMF's conclusions, however, bode ill for the euro zone's budget cutters.
The first is that the contractionary effects of fiscal retrenchment are often partly offset by an increase in exports—but less so in countries where the exchange rate is fixed. Second, the pain is greater if central banks can't offset the fiscal austerity through a stimulus in monetary policy. With interest rates close to zero in the euro zone, such a stimulus is hard to achieve. Third, when many countries are cutting budgets at the same time, the effect on economic activity in each is magnified.
If you are a government in budget-cutting mode, there are, however, better and worse ways of doing it. The IMF says spending cuts tend to have less negative impact on the economy than tax increases. However, that is partly because central banks tend to cut interest rates more aggressively when they see spending cuts.
Mr. Neumann sees an austerity hierarchy. It is better to cut government consumption and transfers, including staff costs, than government investment—though it may be harder politically. If you are raising taxes, better to raise those with no impact on incentives—such as inheritance or wealth taxes—than those that hurt incentives, such as income or payroll taxes.
Raising sales or value-added taxes may have less impact on incentives—but have other undesirable effects, such as increasing inflation, that could deter central banks from easing policy.
Saturday, July 28, 2012
The Statistical Definition of Public Sector Debt. An Overview of the Coverage of Public Sector Debt for 61 Countries
What Lies Beneath: The Statistical Definition of Public Sector Debt. An Overview of the Coverage of Public Sector Debt for 61 Countries. By Robert Dippelsman, Claudia Dziobek, and Carlos A. Gutiérrez Mangas
IMF Staff Discussion Note
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26101.0
Excerpts
Executive Summary
While key macroeconomic indicators such as Gross Domestic Product (GDP) or the Consumer Price Index (CPI) are based on internationally accepted methodologies, indicators related to the debt of the public sector often do not follow international standards and can have several different definitions. As this paper shows, the absence of a standard nomenclature can lead to major misunderstandings in the fiscal policy debate. The authors present examples that show that debt-to-GDP ratios for a country at any given time can range from 40 to over 100 percent depending on the definition used. Debt statistics, for example, may include or exclude state and local governments and may cover all debt instruments or just a subset. The authors suggest that gross debt of the general government (“gross debt”) should be globally adopted as the headline indicator, supplemented by other measures of government debt for risk-based assessments of the fiscal position. Broader measures, including net debt and detailed information on contingent liabilities and derivatives, could be considered. The standard nomenclature of government and of debt instruments helps users understand the concepts in line with the Public Sector Debt Statistics Guide. Use of more standard definitions of government debt would improve data comparability, would benefit IMF surveillance, programs, and debt sustainability analysis, and would help country authorities specify and monitor fiscal rules. Data disaggregated by government subsector and debt instrument for 61 countries from the IMF’s Government Finance Statistics Yearbook (GFSY) database are presented to illustrate the importance and viability of adopting this approach.
Most key macroeconomic indicators such as GDP, the consumer price index (CPI), data on monetary aggregates or balance of payments follow internationally accepted definitions. In contrast, countries often do not follow international guidelines for public debt data. As this paper shows, failure to apply global standards can lead to important misunderstandings because of the potentially large magnitudes involved. International guidelines on the compilation of public sector debt are well established and are summarized in the recently published Public Sector Debt Statistics Guide (Debt Guide). The Debt Guide also describes applications of these guidelines for the analysis of debt sustainability, fiscal risk, and vulnerability.
The authors seek in this paper to provide a more intuitive application of the various concepts and definitions found in the Debt Guide, and propose that global standard definitions of “gross debt” referring to the “general government” be adopted as a headline measure. As with other headline indicators, a variety of narrower and wider indicators remain valuable and useful for different purposes. The notion of gross debt will be familiar to macroeconomic statisticians, but, as a practical matter, the adoption of global standard statistical definitions of debt will require some development efforts in terms of source data availability and training for compilers of debt statistics. A particular challenge is complete coverage of all relevant institutions and financial instruments. Detailed information on contingent liabilities and derivatives should also be considered. Coordination across agencies that work with debt-related data is also critical, as with other complex datasets such as GDP.
Many users are not aware of the extent to which differences in concepts and methods matter. Box 1 below highlights the four key dimensions of public sector debt. Countries publish data, for example, either including or excluding state and local governments, pension funds, and public corporations. Also, while much of the policy debate centers on government liabilities, some countries have begun to publish and focus policy analysis on net debt (financial assets minus liabilities). Debt data frequently only include two (of the six) debt instruments available: debt securities and loans. Debt instruments such as other accounts payable or insurance technical reserves are often not taken into account. In many cases the method of valuation is not explicitly mentioned even though market versus nominal valuation can be significantly different. Consolidation, which refers to the process of netting out intra-governmental obligations, is another important factor rarely specified in published data. And finally, debt data may be compiled using cash data and excluding non-cash items such as arrears or using accrual (or partial accrual) methods to reflect important non-cash obligations.
Box 1. Key Dimensions to Measure Government Gross Debt
Institutional Coverage of Government
Instrument Coverage of Debt
Valuation of Debt Instruments (market and nominal)
Consolidation of Intra-Government Holdings
Source: Public Sector Debt Statistics Guide.
Conclusions
The headline indicator for government debt should be defined as “gross debt of the general government” or GL3/D4 in this paper’s nomenclature. The authors suggest that countries should aspire to publish timely data on the broader concept of gross debt.
Data on the institutional level of the general government (GL3) would be consistent with a broad range of data uses and with the data requirements of other macroeconomic datasets, notably the national accounts. Including the full range of debt instruments is desirable particularly because some of these may expand in times of financial distress and could thus serve as valuable indicators of distress. Clarity of what the debt data cover would help build understanding of the data and their comparability across countries.
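To illustrate how the GLn/Dm nomenclature translates into a headline number, the sketch below aggregates hypothetical debt records under a narrow (GL2/D1) and a broad (GL3/D4) coverage, with optional consolidation of intra-government holdings. The subsector and instrument groupings used here are assumptions for illustration only; the Debt Guide contains the official definitions.

from dataclasses import dataclass

@dataclass
class DebtRecord:
    subsector: str                   # e.g. "central", "state", "local", "social_security"
    instrument: str                  # e.g. "debt_securities", "loans", ...
    amount: float                    # nominal value
    held_by_government: float = 0.0  # portion held by other government units

# Purely illustrative records, not real country data.
records = [
    DebtRecord("central", "debt_securities", 55.0, held_by_government=5.0),
    DebtRecord("central", "loans", 10.0),
    DebtRecord("state", "debt_securities", 12.0),
    DebtRecord("local", "other_accounts_payable", 6.0),
    DebtRecord("social_security", "insurance_technical_reserves", 20.0),
]

GL = {  # institutional coverage (assumed mapping for illustration)
    "GL2": {"central"},
    "GL3": {"central", "state", "local", "social_security"},  # general government
}
D = {   # instrument coverage (assumed mapping for illustration)
    "D1": {"debt_securities", "loans"},
    "D4": {"debt_securities", "loans", "other_accounts_payable",
           "insurance_technical_reserves"},
}

def debt_to_gdp(gl, d, gdp, consolidate=True):
    # Gross debt in percent of GDP under the chosen institutional and
    # instrument coverage, optionally netting out intra-government holdings.
    total = sum(rec.amount - (rec.held_by_government if consolidate else 0.0)
                for rec in records
                if rec.subsector in GL[gl] and rec.instrument in D[d])
    return 100.0 * total / gdp

print(debt_to_gdp("GL2", "D1", gdp=100.0))  # narrow coverage
print(debt_to_gdp("GL3", "D4", gdp=100.0))  # headline GL3/D4 measure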
A global standard would facilitate communication on the main concepts in public sector debt statistics and it would bring greater precision to research on fiscal issues, and lead to improved cross-country comparison. This framework uses a nomenclature inspired by the approach in monetary data where M1 through M4 (monetary aggregates) reflect institutional and instrument coverage as well.
The methodological framework of government debt presented here is widely accepted among statisticians. The relevant definitions, concepts, classification, and guidance of compilation are summarized in GFSM 2001 and the Debt Guide. These standards are fully consistent with the overarching statistical methodology of the 2008 SNA and other international macroeconomic methodologies such as the Sixth Edition of Balance of Payments and International Investment Position Manual (BPM6) and broadly consistent with the European System of Accounts (ESA) manual and the more specialized manuals of deficit and debt that govern the Excessive Deficit Procedure.
However, the methodology is not always well defined in the policy debate. An international convention to view GL3/D4 as the desirable headline indicator of government debt, consistent with the international standards, would go a long way to create more transparency and better comparability of international data.
Our contribution is to provide a presentational framework and nomenclature that highlights the importance of different instruments, institutional coverage, and valuation and consolidation as key indicators of debt. Indeed, we have noted that other, more narrowly defined concepts can meaningfully supplement the comprehensive measure of debt. These narrower measures may be important for a risk-based assessment of the fiscal position, but they are not substitutes for a global indicator.
Further extensions of this work include the development of statistical reporting of broader measures, for example net debt of the general government, and the presentation of information on derivatives and contingent liabilities.
The new debt database launched by the IMF and World Bank in 2010 is structured along government levels, debt instruments, consolidation and valuation as discussed in this paper. However, some countries report data only on the GL2 level and cover mostly D1. Developing data on the broader statistics will take some time, although Australia, Canada, and some other countries already publish or plan to publish GL3/D4 data or publish components that would allow the calculation of GL3/D4.
Debt statistics for various levels of government and instruments were shown for 61 countries, and these data highlight some interesting patterns that merit further analysis, such as the degree of fiscal autonomy of state and local governments to issue debt and the degree of development of markets for government debt securities. The authors conclude that further research would be worthwhile on the advantages of a global standard of government debt for such topics as data comparability, IMF surveillance, programs, debt sustainability analysis, and the analysis of fiscal rules.
Thursday, July 26, 2012
BIS - Capital requirements for bank exposures to central counterparties + Basel III counterparty credit risk FAQs + other doc
Basel III counterparty credit risk - Frequently asked questions (update of FAQs published in November 2011) (25.07.2012 12:10)
http://www.bis.org/publ/bcbs228.htm
Regulatory treatment of valuation adjustments to derivative liabilities: final rule issued by the Basel Committee (25.07.2012 12:05)
http://www.bis.org/press/p120725b.htm
Capital requirements for bank exposures to central counterparties (25.07.2012 12:00)
http://www.bis.org/publ/bcbs227.htm
Wednesday, July 18, 2012
On Graen's "Unwritten Rules for Your Career: 15 Secrets for Fast-track Success"
Miner (2005) says (chap. 14), citing Graen (1989), that those interested in achieving their personal ends need to focus on the things a person should do to achieve fast-track status in management, on what unwritten rules exist in organizations, and on how to become an insider who understands these rules and follows them to move up the hierarchy. These unwritten rules are part of the informal organization and constitute the secrets of organizational politics.
There are fifteen such secrets of the fast track:
1. Find the hidden strategies of your organization and use them to achieve your objectives. (This involves forming working relationships—networks—with people who have access to special resources, skills, and abilities to do important work.)
2. Do your homework in order to pass the tests. (These tests can range from sample questions to command performances; you should test others, as well, to evaluate sources of information.)
3. Accept calculated risks by using appropriate contingency plans. (Thus, learn to improve your decision average by taking calculated career risks.)
4. Recognize that apparently complete and final plans are merely flexible guidelines to the actions necessary for implementation. (Thus, make your plans broad and open-ended so that you can adapt them as they are implemented.)
5. Expect to be financially undercompensated for the first half of your career and to be overcompensated for the second half. (People on the fast track inevitably grow out of their job descriptions and take on extra duties beyond what they are paid to do.)
6. Work to make your boss successful. (This is at the heart of the exchange between the two of you and involves a process of reciprocal promotion.)
7. Work to get your boss to promote your career. (This is the other side of the coin and involves grooming your replacement as well.)
8. Use reciprocal relationships to build supportive networks. (It is important that these be competence networks involving effective working relationships and competent people.)
9. Do not let your areas of competence become too narrowly specialized. (Avoid the specialist's trap by continually taking on new challenges.)
10. Try to act with foresight more often than with hindsight. (Be proactive by identifying the right potential problem, choosing the right solution, and choosing the best implementation process.)
11. Develop cordial relationships with your competitors: Be courteous, considerate, and polite in all relationships. (You need not like all these people, but making unnecessary enemies is an expensive luxury.)
12. Seek out key expert insiders and learn from them. (Have numerous mentors and preserve these relationships as part of your reciprocal network.)
13. Make sure to acknowledge everyone’s contribution. (Giving credit can be used as a tool to develop a network of working relationships.)
14. Prefer equivalent exchanges between peers instead of rewards and punishments between unequal partners. (Equivalent exchanges are those in which a resource, service, or behavior is given with the understanding that something of equivalent value will eventually be returned; this requires mutual trust.)
15. Never take unfair advantage of anyone, and avoid letting anyone take unfair advantage of you. (Networks cannot be maintained without a reputation for trustworthiness.)
More recently, in another book, Graen (2003) has revisited this topic and set forth another partially overlapping list of thirteen actions that distinguish key players from others [...]. These guidelines [...] for how to play the hierarchy and gain fast-track status are as follows:
1. Demonstrate initiative to get things done (i.e., engage in organizational citizenship behaviors).
2. Exercise leadership to make the unit more effective (i.e., become an informal group leader).
3. Show a willingness to take risks to accomplish assignments (i.e., go against group pressures in order to surface problems if necessary).
4. Strive to add value to the assignments (i.e., enrich your own job by making it more challenging and meaningful).
5. Actively seek out new job assignments for self-improvement (i.e., seek out opportunities for growth).
6. Persist on a valuable project after others give up (and learn not to make the same mistake twice).
7. Build networks to extend capability, especially among those responsible for getting work done.
8. Influence others by doing something extra (i.e., this means building credibility and adjusting your interpersonal style to match others).
9. Deal constructively with ambiguity in order to resolve it (i.e., gather as much information as possible and obtain frequent feedback).
10. Seek wider exposure to managers outside the home division, which helps in gathering information.
11. Build on existing skills. Apply technical training on the job and build on that training to develop broader expertise; be sure not to allow obsolescence to creep in.
12. Develop a good working relationship with your boss. Work to build and maintain a close working relationship with the immediate supervisor (Strive to build a high quality LMX, devote energy to this goal—see Maslyn and Uhl-Bien, 2001).
13. Promote your boss. Work to get the immediate supervisor promoted (i.e., try to make that person look good; as your boss goes up, so may you).
Bibliography
Graen, George (1989). Unwritten Rules for Your Career: 15 Secrets for Fast-track Success. New York: John Wiley.
Graen, George (2003). Dealing with Diversity. Greenwich, CT: Information Age Publishing.
Miner, John B. (2005). Organizational Behavior 1: Essential Theories of Motivation and Leadership. Armonk, NY: M. E. Sharpe.
Tuesday, July 10, 2012
Quality of Government and Living Standards: Adjusting for the Efficiency of Public Spending
Quality of Government and Living Standards: Adjusting for the Efficiency of Public Spending. By Grigoli, Francesco; Ley, Eduardo
IMF Working Paper No. 12/182
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26052.0
Summary: It is generally acknowledged that the government’s output is difficult to define and its value is hard to measure. The practical solution, adopted by national accounts systems, is to equate output to input costs. However, several studies estimate significant inefficiencies in government activities (i.e., same output could be achieved with less inputs), implying that inputs are not a good approximation for outputs. If taken seriously, the next logical step is to purge from GDP the fraction of government inputs that is wasted. As differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, we must take them into account when computing a measure of living standards. We illustrate such a correction computing corrected per capita GDPs on the basis of two studies that estimate efficiency scores for several dimensions of government activities. We show that the correction could be significant, and rankings of living standards could be re-ordered as a result.
Excerpts:
Despite its acknowledged shortcomings, GDP per capita is still the most commonly used summary indicator of living standards. Much of the policy advice provided by international organizations is based on macroeconomic magnitudes as shares of GDP, and framed in terms of cross-country comparisons of per capita GDP. However, what GDP actually measures may differ significantly across countries for several reasons. We focus here on a particular source of this heterogeneity: the quality of public spending. Broadly speaking, the ‘quality of public spending’ refers to the government’s effectiveness in transforming resources into socially valuable outputs. The opening quote highlights the disconnect between spending and value when the discipline of market transactions is missing.
Everywhere around the world, non-market government accounts for a big share of GDP and yet it is poorly measured—namely the value to users is assumed to equal the producer’s cost. Such a framework is deficient because it does not allow for changes in the amount of output produced per unit of input, that is, changes in productivity (for a recent review of this issue, see Atkinson and others, 2005). It also assumes that these inputs are fully used. To put it another way, standard national accounting assumes that government activities are on the best practice frontier. When this is not the case, there is an overstatement of national production. This, in turn, could result in misleading conclusions, particularly in cross-country comparisons, given that the size, scope, and performance of public sectors vary so widely.
Moreover, in the national accounts, this attributed non-market (government and non-profit sectors) “value added” is further allocated to the household sector as “actual consumption.” As Deaton and Heston (2008) put it: “[...] there are many countries around the world where government-provided health and education is inefficient, sometimes involving mass absenteeism by teachers and health workers [...] so that such ‘actual’ consumption is anything but actual. To count the salaries of AWOL government employees as ‘actual’ benefits to consumers adds statistical insult to original injury.” This “statistical insult” logically follows from the United Nations System of National Accounts (SNA) framework once ‘waste’ is classified as income—since national income must be either consumed or saved. Absent teachers and health care workers are all too common in many low-income countries (Chaudhury and Hammer, 2004; Kremer and others, 2005; Chaudhury and others, 2006; and World Bank, 2004). Beyond straight absenteeism, which is an extreme case, generally there are significant cross-country differences in the quality of public sector services. World Bank (2011) reports that in India, even though most children of primary-school age are enrolled in school, 35 percent of them cannot read a simple paragraph and 41 percent cannot do a simple subtraction.
It must be acknowledged, nonetheless, that for many of government’s non-market services, the output is difficult to define, and without market prices the value of output is hard to measure. It is because of this that the practical solution adopted in the SNA is to equate output to input costs. This choice may be more adequate when using GDP to measure economic activity or factor employment than when using GDP to measure living standards.
Moving beyond this state of affairs, there are two alternative approaches. One is to try to find indicators for both output quantities and prices for direct measurement of some public outputs, as recommended in SNA 93 (but yet to be broadly implemented). The other is to correct the input costs to account for productive inefficiency, namely to purge from GDP the fraction of these inputs that is wasted. We focus here on the nature of this correction. As the differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, it seems natural to take them into account when computing a measure of living standards.
To illustrate, in a recent study, Afonso and others (2010) compute public sector efficiency scores for a group of countries and conclude that “[...] the highest-ranking country uses one-third of the inputs as the bottom-ranking one to attain a certain public sector performance score. The average input scores suggest that countries could use around 45 per cent less resources to attain the same outcomes if they were fully efficient.” In this paper, we take such a statement to its logical conclusion. Once we acknowledge that the same output could be achieved with less inputs, output value cannot be equated to input costs. In other words, waste should not belong in the living-standards indicator—it still remains a cost of government but it must be purged from the value of government services. As noted, this adjustment is especially relevant for cross-country comparisons.
...
In this context, as noted, the standard practice is to equate the value of government outputs to their cost, notwithstanding the SNA 93 proposal to estimate government outputs directly. The value added that, say, public education contributes to GDP is based on the wage bill and other costs of providing education, such as outlays for utilities and school supplies. Similarly, for public health, the wage bill of doctors, nurses, and other medical staff, together with medical supplies, largely comprises its value added. Thus, in the (pre-93) SNA used almost everywhere, non-market output, by definition, equals total costs. Yet the same costs support widely different levels of public output, depending on the quality of the public sector.
Note that value added is defined as payments to factors (labor and capital) and profits. Profits are assumed to be zero in the non-commercial public sector. As for the return to capital, in the current SNA used by most countries, public capital is attributed a net return of zero—i.e., the return from public capital is equated to its depreciation rate. This lack of a net return measure in the SNA is not due to a belief that the net return is actually zero, but to the difficulties of estimating the return.
Atkinson and others (2005, page 12) state some of the reasons behind current SNA practice: “Wide use of the convention that (output = input) reflects the difficulties in making alternative estimates. Simply stated, there are two major problems: (a) in the case of collective services such as defense or public administration, it is hard to identify the exact nature of the output, and (b) in the case of services supplied to individuals, such as health or education, it is hard to place a value on these services, as there is no market transaction.”
Murray (2010) also observes that studies of the government’s production activities, and their implications for the measurement of living standards, have long been ignored. He writes: “Looking back it is depressing that progress in understanding the production of public services has been so slow. In the market sector there is a long tradition of studying production functions, demand for inputs, average and marginal cost functions, elasticities of supply, productivity, and technical progress. The non-market sector has gone largely unnoticed. In part this can be explained by general difficulties in measuring the output of services, whether public or private. But in part it must be explained by a completely different perspective on public and private services. Resource use for the production of public services has not been regarded as inputs into a production process, but as an end in itself, in the form of public consumption. Consequently, the production activity in the government sector has not been recognized.” (Our italics.)
The simple point that we make in this paper is that once it is recognized that the effectiveness of the government’s ‘production function’ varies significantly across countries, the simple convention of equating output value to input cost must be revisited. Thus, if we learn that the same output could be achieved with less inputs, it is more appropriate to credit GDP or GNI with the required inputs rather than with the actual inputs that include waste. While perceptions of government effectiveness vary widely among countries, as, e.g., the World Bank’s Governance Indicators attest (Kaufmann and others, 2009), getting reliable measures of the government’s actual effectiveness is a challenging task, as we shall discuss below.
In physics, efficiency is defined as the ratio of useful work done to total energy expended, and the same general idea is associated with the term when discussing production. Economists simply replace ‘useful work’ by ‘outputs’ and ‘energy’ by ‘inputs.’ Technical efficiency means the adequate use of the available resources in order to obtain the maximum product. Why focus on technical efficiency and not other concepts of efficiency, such as price or allocative efficiency? Do we have enough evidence on public sector inefficiency to make the appropriate corrections?
The reason why we focus on technical efficiency in this preliminary inquiry is twofold. First, it corresponds to the concept of waste. Productive inefficiency implies that some inputs are wasted as more could have been produced with available inputs. In the case of allocative inefficiency, there could be a different allocation of resources that would make everyone better off but we cannot say that necessarily some resources are unused—although they are certainly not aligned with social preferences. Second, measuring technical inefficiency is easier and less controversial than measuring allocative inefficiency. To measure technical inefficiency, there are parametric and non-parametric methods allowing for construction of a best practice frontier. Inefficiency is then measured by the distance between this frontier and the actual input-output combination being assessed.
Indicators (or rather ranges of indicators) of inefficiency exist for the overall public sector and for specific activities such as education, healthcare, transportation, and other sectors. However, they are far from being uncontroversial. Sources of controversy include: omission of inputs and/or outputs, temporal lags needed to observe variations in the output indicators, choice of measures of outputs, and mixing outputs with outcomes. For example, many social and macroeconomic indicators impact health status beyond government spending (Spinks and Hollingsworth, 2009, and Joumard and others, 2010) and they should be taken into account. Most of the output indicators available show autocorrelation and changes in inputs typically take time to materialize into outputs’ variations. Also, there is a trend towards using outcome rather than output indicators for measuring the performance of the public sector. In health and education, efficiency studies have moved away from outputs (e.g., number of prenatal interventions) to outcomes (e.g., infant mortality rates). When cross-country analyses are involved, however, it must be acknowledged that differences in outcomes are explained not only by differences in public sector outputs but also differences in other environmental factors outside the public sector (e.g., culture, nutrition habits).
Empirical efficiency measurement methods first construct a reference technology based on observed input-output combinations, using econometric or linear programming methods. Next, they assess the distance of actual input-output combinations from the best-practice frontier. These distances, properly scaled, are called efficiency measures or scores. An input-based efficiency measure indicates the extent to which it is possible to reduce the amount of inputs without reducing the level of output. Thus, an efficiency score of, say, 0.8 means that using best practices observed elsewhere, 80 percent of the inputs would suffice to produce the same output.
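A minimal sketch of the correction this implies: given input-based efficiency scores and the share of GDP absorbed by each sector's public inputs, the wasted fraction of those inputs is purged from measured GDP per capita. The sector shares and scores below are hypothetical placeholders, not the estimates from the studies cited next.

def corrected_gdp_per_capita(gdp_pc, input_shares, efficiency_scores):
    # input_shares: public input cost of each sector as a share of GDP.
    # efficiency_scores: input-based efficiency score in (0, 1]; a score of 0.8
    # means the same output could be produced with 80% of the inputs, so 20%
    # of that sector's input cost is treated as waste and removed from GDP.
    waste_share = sum(input_shares[s] * (1.0 - efficiency_scores[s])
                      for s in input_shares)
    return gdp_pc * (1.0 - waste_share)

# Hypothetical country: education inputs of 5% of GDP at efficiency 0.8,
# health inputs of 6% of GDP at efficiency 0.7.
print(corrected_gdp_per_capita(20000.0,
                               {"education": 0.05, "health": 0.06},
                               {"education": 0.8, "health": 0.7}))
# 20000 * (1 - (0.05*0.2 + 0.06*0.3)) = 19440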
We base our corrections to GDP on the efficiency scores estimated in two papers: Afonso and others (2010) for several indicators referred to a set of 24 countries, and Evans and others (2000) focusing on health, for 191 countries based on WHO data. These studies employ techniques similar to those used in other studies, such as Gupta and Verhoeven (2001), Clements (2002), Carcillo and others (2007), and Joumard and others (2010).
• Afonso and others (2010) compute public sector performance and efficiency indicators (as performance weighted by the relevant expenditure needed to achieve it) for 24 EU and emerging economies. Using DEA, they conclude that on average countries could use 45 percent less resources to attain the same outcomes, and deliver an additional third of the fully efficient output if they were on the efficiency frontier. The study included an analysis of the efficiency of education and health spending that we use here.
• Evans and others (2000) estimate health efficiency scores for the 1993–1997 period for 191 countries, based on WHO data, using stochastic frontier methods. Two health outcome measures are identified: the disability adjusted life expectancy (DALE) and a composite index of DALE, dispersion of child survival rate, responsiveness of the health care system, inequities in responsiveness, and fairness of financial contribution. The input measures are health expenditure and years of schooling, with the addition of country fixed effects. Because of its large country coverage, this study is useful for illustrating the impact of the type of correction that we are discussing here.
We must note that, ideally, we would like to base our corrections on input-based technical efficiency studies that deal exclusively with inputs and outputs, and do not bring outcomes into the analysis. The reason is that public sector outputs interact with other factors to produce outcomes, and here cross-country heterogeneity can play an important role in driving cross-country differences in outcomes. Unfortunately, we have found no technical-efficiency studies covering a broad sample of countries that restrict themselves to input-output analysis. In particular, these two studies deal with a mix of outputs and outcomes. The results reported here should thus be seen as illustrative. Furthermore, it should be underscored that the level of “waste” that is identified for each particular country varies significantly across studies, which implies that any associated measures of GDP adjusting for this waste will also differ.
...
We have argued here that the current practice of estimating the value of the government’s non-market output by its input costs is not only unsatisfactory but also misleading in cross-country comparisons of living standards. Since differences in the quality of the public sector have an impact on the population’s effective consumption and welfare, they must be taken into account in comparisons of living standards. We have performed illustrative corrections of the input costs to account for productive inefficiency, thus purging from GDP the fraction of these inputs that is wasted.
Our results suggest that the magnitude of the correction could be significant. When correcting for inefficiencies in the health and education sectors, the average loss for a set of 24 EU member states and emerging economies amounts to 4.1 percentage points of GDP. Sector-specific averages for education and health are 1.5 and 2.6 percentage points of GDP, implying that 32.6 and 65.0 percent of the inputs are wasted in the respective sectors. These corrections are reflected in the GDP-per-capita ranking, which gets reshuffled in 9 cases out of 24. In a hypothetical scenario where the inefficiency of the health sector is assumed to be representative of the public sector as a whole, the rank reordering would affect about 50 percent of the 93 countries in the sample, with 70 percent of it happening in the lower half of the original ranking. These results, however, should be interpreted with caution, as the purpose of this paper is to call attention to the issue, rather than to provide fine-tuned waste estimates.
A natural way forward involves finding indicators for both output quantities and prices for direct measurement of some public outputs. This is recommended in SNA 93 but has yet to be implemented in most countries. Moreover, in recent times there has been an increased interest in outcomes-based performance monitoring and evaluation of government activities (see Stiglitz and others, 2010). As also argued in Atkinson (2005), it will be important to measure not only public sector outputs but also outcomes, as the latter are what ultimately affect welfare. A step in this direction is suggested by Abraham and Mackie (2006) for the US, with the creation of “satellite” accounts in specific areas such as education and health. These extend the accounting of the nation’s productive inputs and outputs, thereby taking into account specific aspects of non-market activities.
IMF Working Paper No. 12/182
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26052.0
Summary: It is generally acknowledged that the government’s output is difficult to define and its value is hard to measure. The practical solution, adopted by national accounts systems, is to equate output to input costs. However, several studies estimate significant inefficiencies in government activities (i.e., same output could be achieved with less inputs), implying that inputs are not a good approximation for outputs. If taken seriously, the next logical step is to purge from GDP the fraction of government inputs that is wasted. As differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, we must take them into account when computing a measure of living standards. We illustrate such a correction computing corrected per capita GDPs on the basis of two studies that estimate efficiency scores for several dimensions of government activities. We show that the correction could be significant, and rankings of living standards could be re-ordered as a result.
Excerpts:
Despite its acknowledged shortcomings, GDP per capita is still the most commonly used summary indicator of living standards. Much of the policy advice provided by international organizations is based on macroeconomic magnitudes as shares of GDP, and framed in terms of cross-country comparisons of per capita GDP. However, what GDP actually measures may differ significantly across countries for several reasons. We focus here on a particular source of this heterogeneity: the quality of public spending. Broadly speaking, the ‘quality of public spending’ refers to the government’s effectiveness in transforming resources into socially valuable outputs. The opening quote highlights the disconnect between spending and value when the discipline of market transactions is missing.
Around the world, non-market government output accounts for a large share of GDP, and yet it is poorly measured: the value to users is simply assumed to equal the producer’s cost. Such a framework is deficient because it does not allow for changes in the amount of output produced per unit of input, that is, changes in productivity (for a recent review of this issue, see Atkinson and others, 2005). It also assumes that these inputs are fully used. Put another way, standard national accounting assumes that government activities are on the best-practice frontier. When this is not the case, national production is overstated. This, in turn, can lead to misleading conclusions, particularly in cross-country comparisons, given that the size, scope, and performance of public sectors vary so widely.
Moreover, in the national accounts, this attributed non-market (government and non-profit sectors) “value added” is further allocated to the household sector as “actual consumption.” As Deaton and Heston (2008) put it: “[...] there are many countries around the world where government-provided health and education is inefficient, sometimes involving mass absenteeism by teachers and health workers [...] so that such ‘actual’ consumption is anything but actual. To count the salaries of AWOL government employees as ‘actual’ benefits to consumers adds statistical insult to original injury.” This “statistical insult” follows logically from the United Nations System of National Accounts (SNA) framework once ‘waste’ is classified as income, since national income must be either consumed or saved. Absent teachers and health care workers are all too common in many low-income countries (Chaudhury and Hammer, 2004; Kremer and others, 2005; Chaudhury and others, 2006; and World Bank, 2004). Beyond outright absenteeism, which is an extreme case, there are more generally significant cross-country differences in the quality of public sector services. World Bank (2011) reports that in India, even though most children of primary-school age are enrolled in school, 35 percent of them cannot read a simple paragraph and 41 percent cannot do a simple subtraction.
It must be acknowledged, nonetheless, that for many of the government’s non-market services the output is difficult to define, and without market prices the value of that output is hard to measure. It is for this reason that the practical solution adopted in the SNA is to equate output to input costs. This choice may be more defensible when using GDP to measure economic activity or factor employment than when using GDP to measure living standards.
Moving beyond this state of affairs, there are two alternative approaches. One is to try to find indicators for both output quantities and prices for direct measurement of some public outputs, as recommended in SNA 93 (but yet to be broadly implemented). The other is to correct the input costs to account for productive inefficiency, namely to purge from GDP the fraction of these inputs that is wasted. We focus here on the nature of this correction. As the differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, it seems natural to take them into account when computing a measure of living standards.
To illustrate, in a recent study, Afonso and others (2010) compute public sector efficiency scores for a group of countries and conclude that “[...] the highest-ranking country uses one-third of the inputs as the bottom ranking one to attain a certain public sector performance score. The average input scores suggest that countries could use around 45 per cent less resources to attain the same outcomes if they were fully efficient.” In this paper, we take such a statement to its logical conclusion. Once we acknowledge that the same output could be achieved with fewer inputs, output value cannot be equated to input costs. In other words, waste does not belong in a living-standards indicator; it remains a cost of government, but it must be purged from the value of government services. As noted, this adjustment is especially relevant for cross-country comparisons.
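To make the arithmetic of such a correction concrete, the following minimal sketch uses entirely hypothetical spending shares and efficiency scores (none of these figures come from the paper) and purges from a GDP-per-capita figure the fraction of education and health inputs that an input-based efficiency score would classify as waste:

    # Illustrative correction of GDP per capita; all numbers are hypothetical.
    gdp_per_capita = 20_000.0                                   # in PPP dollars
    spending_shares = {"education": 0.05, "health": 0.06}       # shares of GDP, assumed
    efficiency = {"education": 0.70, "health": 0.80}            # input efficiency scores, assumed

    # Waste in each sector is its spending share times the fraction of inputs not needed
    # to produce the same output; GDP is credited only with the required inputs.
    waste_share = sum(share * (1.0 - efficiency[s]) for s, share in spending_shares.items())
    corrected_gdp = gdp_per_capita * (1.0 - waste_share)
    print(f"waste: {waste_share:.1%} of GDP; corrected GDP per capita: {corrected_gdp:,.0f}")

With these assumed numbers, 2.7 percent of GDP would be purged and GDP per capita would fall from 20,000 to 19,460.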
...
In this context, as noted, the standard practice is to equate the value of government outputs to their cost, notwithstanding the SNA 93 proposal to estimate government outputs directly. The value added that, say, public education contributes to GDP is based on the wage bill and other costs of providing education, such as outlays for utilities and school supplies. Similarly, for public health, the wage bill of doctors, nurses, and other medical staff, together with outlays for medical supplies, largely comprises its value added. Thus, in the (pre-93) SNA used almost everywhere, non-market output, by definition, equals total costs. Yet the same costs can support widely different levels of public output, depending on the quality of the public sector.
Note that value added is defined as payments to factors (labor and capital) and profits. Profits are assumed to be zero in the non-commercial public sector. As for the return to capital, in the current SNA used by most countries, public capital is attributed a net return of zero—i.e., the return from public capital is equated to its depreciation rate. This lack of a net return measure in the SNA is not due to a belief that the net return is actually zero, but to the difficulties of estimating the return.
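As a stylized numerical illustration of this cost convention (our own made-up figures, not data from any national accounts), non-market output is valued as the sum of costs, with zero profits and a zero net return on public capital, and value added is this output less intermediate consumption:

    # Stylized SNA-style valuation of a non-market activity; all figures are hypothetical.
    compensation_of_employees = 80.0      # wage bill of teachers or medical staff
    intermediate_consumption = 15.0       # utilities, school or medical supplies
    consumption_of_fixed_capital = 5.0    # depreciation of buildings and equipment
    net_operating_surplus = 0.0           # zero profits, zero net return on public capital

    output_at_cost = (compensation_of_employees + intermediate_consumption
                      + consumption_of_fixed_capital + net_operating_surplus)
    value_added = output_at_cost - intermediate_consumption
    print(output_at_cost, value_added)    # 100.0 85.0

The 100 recorded as output says nothing about how much schooling or health care those costs actually bought, which is precisely the point at issue.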
Atkinson and others (2005, page 12) state some of the reasons behind current SNA practice: “Wide use of the convention that (output = input) reflects the difficulties in making alternative estimates. Simply stated, there are two major problems: (a) in the case of collective services such as defense or public administration, it is hard to identify the exact nature of the output, and (b) in the case of services supplied to individuals, such as health or education, it is hard to place a value on these services, as there is no market transaction.”
Murray (2010) also observes that studies of the government’s production activities, and their implications for the measurement of living standards, have long been ignored. He writes: “Looking back it is depressing that progress in understanding the production of public services has been so slow. In the market sector there is a long tradition of studying production functions, demand for inputs, average and marginal cost functions, elasticities of supply, productivity, and technical progress. The non-market sector has gone largely unnoticed. In part this can be explained by general difficulties in measuring the output of services, whether public or private. But in part it must be explained by a completely different perspective on public and private services. Resource use for the production of public services has not been regarded as inputs into a production process, but as an end in itself, in the form of public consumption. Consequently, the production activity in the government sector has not been recognized.” (Our italics.)
The simple point that we make in this paper is that once it is recognized that the effectiveness of the government’s ‘production function’ varies significantly across countries, the simple convention of equating output value to input cost must be revisited. Thus, if we learn that the same output could be achieved with fewer inputs, it is more appropriate to credit GDP or GNI with the required inputs rather than with the actual inputs, which include waste. While perceptions of government effectiveness vary widely across countries, as, e.g., the World Bank’s Governance Indicators attest (Kaufmann and others, 2009), obtaining reliable measures of actual government effectiveness is a challenging task, as we discuss below.
In physics, efficiency is defined as the ratio of useful work done to total energy expended, and the same general idea is associated with the term when discussing production. Economists simply replace ‘useful work’ by ‘outputs’ and ‘energy’ by ‘inputs.’ Technical efficiency means using the available resources so as to obtain the maximum possible output. Why focus on technical efficiency and not on other concepts of efficiency, such as price or allocative efficiency? And do we have enough evidence on public sector inefficiency to make the appropriate corrections?
The reason why we focus on technical efficiency in this preliminary inquiry is twofold. First, it corresponds to the concept of waste. Productive inefficiency implies that some inputs are wasted, since more could have been produced with the available inputs. In the case of allocative inefficiency, there could be a different allocation of resources that would make everyone better off, but we cannot say that some resources are necessarily unused, although they are certainly not aligned with social preferences. Second, measuring technical inefficiency is easier and less controversial than measuring allocative inefficiency. To measure technical inefficiency, there are parametric and non-parametric methods for constructing a best-practice frontier. Inefficiency is then measured by the distance between this frontier and the actual input-output combination being assessed.
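One standard non-parametric way to formalize this distance, in our own notation and under constant returns to scale (the excerpt itself does not spell out a particular program), is the input-oriented measure

    \theta_k = \min\Big\{\, \theta > 0 \;:\; \theta x_k \ge \textstyle\sum_j \lambda_j x_j,\;\; y_k \le \textstyle\sum_j \lambda_j y_j,\;\; \lambda_j \ge 0 \,\Big\},

where the (x_j, y_j) are the observed input-output combinations spanning the best-practice frontier; \theta_k = 1 places unit k on the frontier, while \theta_k < 1 indicates the proportion of its inputs with which the same output could still be produced.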
Indicators (or rather ranges of indicators) of inefficiency exist for the overall public sector and for specific activities such as education, healthcare, and transportation. They are, however, far from uncontroversial. Sources of controversy include the omission of inputs and/or outputs, the temporal lags needed to observe variations in output indicators, the choice of output measures, and the mixing of outputs with outcomes. For example, many social and macroeconomic factors beyond government spending affect health status (Spinks and Hollingsworth, 2009, and Joumard and others, 2010) and should be taken into account. Most of the available output indicators show autocorrelation, and changes in inputs typically take time to materialize into variations in outputs. There is also a trend towards using outcome rather than output indicators for measuring the performance of the public sector. In health and education, efficiency studies have moved away from outputs (e.g., number of prenatal interventions) to outcomes (e.g., infant mortality rates). When cross-country analyses are involved, however, it must be acknowledged that differences in outcomes are explained not only by differences in public sector outputs but also by differences in other environmental factors outside the public sector (e.g., culture, nutrition habits).
Empirical efficiency measurement methods first construct a reference technology based on observed input-output combinations, using econometric or linear programming methods. Next, they assess the distance of actual input-output combinations from the best-practice frontier. These distances, properly scaled, are called efficiency measures or scores. An input-based efficiency measure indicates the extent to which inputs could be reduced without reducing the level of output. Thus, an efficiency score of, say, 0.8 means that, using best practices observed elsewhere, 80 percent of the inputs would suffice to produce the same output.
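As an illustration of how such scores can be computed, here is a minimal sketch of input-oriented data envelopment analysis (DEA) under constant returns to scale, one of the linear-programming frontier methods referred to above. The code and the data are ours and purely illustrative; the studies cited below rely on their own, richer specifications.

    # Minimal input-oriented DEA sketch (constant returns to scale); illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    def dea_input_efficiency(X, Y):
        """Return one efficiency score per unit; 1.0 means the unit is on the frontier."""
        m, n = X.shape                                     # m inputs, n units
        s = Y.shape[0]                                     # s outputs
        scores = []
        for k in range(n):
            # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
            c = np.r_[1.0, np.zeros(n)]
            # The composite unit may use no more than theta times unit k's inputs ...
            A_in = np.hstack([-X[:, [k]], X])              # sum_j lambda_j x_ij - theta x_ik <= 0
            # ... and must produce at least unit k's outputs.
            A_out = np.hstack([np.zeros((s, 1)), -Y])      # -sum_j lambda_j y_rj <= -y_rk
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, k]],
                          bounds=[(0, None)] * (n + 1),
                          method="highs")
            scores.append(res.x[0])
        return np.array(scores)

    # Hypothetical data: one input (public spending) and one output (service volume) for 4 countries.
    X = np.array([[5.0, 8.0, 6.0, 10.0]])
    Y = np.array([[100.0, 120.0, 60.0, 100.0]])
    print(dea_input_efficiency(X, Y))                      # [1.0, 0.75, 0.5, 0.5]

With these made-up figures, the first country defines the frontier and the remaining ones receive scores of 0.75, 0.5, and 0.5, meaning they could produce the same output with 75 and 50 percent of their inputs.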
We base our corrections to GDP on the efficiency scores estimated in two papers: Afonso and others (2010), covering several indicators for a set of 24 countries, and Evans and others (2000), focusing on health for 191 countries based on WHO data. These studies employ techniques similar to those used in other work, such as Gupta and Verhoeven (2001), Clements (2002), Carcillo and others (2007), and Joumard and others (2010).
- Afonso and others (2010) compute public sector performance and efficiency indicators (the latter defined as performance weighted by the expenditure needed to achieve it) for 24 EU and emerging economies. Using data envelopment analysis (DEA), they conclude that on average countries could use 45 percent fewer resources to attain the same outcomes, and could deliver an additional third of the fully efficient output if they were on the efficiency frontier. The study includes an analysis of the efficiency of education and health spending, which we use here.
- Evans and others (2000) estimate health efficiency scores for the 1993–1997 period for 191 countries, based on WHO data, using stochastic frontier methods. Two health outcome measures are used: disability-adjusted life expectancy (DALE) and a composite index of DALE, the dispersion of child survival rates, the responsiveness of the health care system, inequities in responsiveness, and the fairness of financial contributions. The input measures are health expenditure and years of schooling, with the addition of country fixed effects. Because of its large country coverage, this study is useful for illustrating the impact of the type of correction that we are discussing here.
We must note that, ideally, we would like to base our corrections on input-based technical efficiency studies that deal exclusively with inputs and outputs and do not bring outcomes into the analysis. The reason is that public sector outputs interact with other factors to produce outcomes, and cross-country heterogeneity in those factors can play an important role in driving cross-country differences in outcomes. Unfortunately, we have found no technical-efficiency studies covering a broad sample of countries that restrict themselves to input-output analysis. In particular, the two studies above deal with a mix of outputs and outcomes. The results reported here should thus be seen as illustrative. Furthermore, it should be underscored that the level of “waste” identified for each particular country varies significantly across studies, which implies that any associated measures of GDP adjusted for this waste will also differ.
...
We have argued here that the current practice of estimating the value of the government’s non-market output by its input costs is not only unsatisfactory but also misleading in cross-country comparisons of living standards. Since differences in the quality of the public sector have an impact on the population’s effective consumption and welfare, they must be taken into account in comparisons of living standards. We have performed illustrative corrections of the input costs to account for productive inefficiency, thus purging from GDP the fraction of these inputs that is wasted.
Our results suggest that the magnitude of the correction could be significant. When correcting for inefficiencies in the health and education sectors, the average loss for a set of 24 EU member states and emerging economies amounts to 4.1 percentage points of GDP. Sector-specific averages for education and health are 1.5 and 2.6 percentage points of GDP, implying that 32.6 and 65.0 percent of the inputs are wasted in the respective sectors. These corrections are reflected in the GDP-per-capita ranking, which gets reshuffled in 9 cases out of 24. In a hypothetical scenario where the inefficiency of the health sector is assumed to be representative of the public sector as a whole, the rank reordering would affect about 50 percent of the 93 countries in the sample, with 70 percent of it happening in the lower half of the original ranking. These results, however, should be interpreted with caution, as the purpose of this paper is to call attention to the issue, rather than to provide fine-tuned waste estimates.
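A back-of-the-envelope reading of these magnitudes, which is our own inference rather than a calculation reported in the paper, uses the (approximate, since these are cross-country averages) identity that the GDP loss in percentage points equals a sector’s spending share times the fraction of its inputs that is wasted:

    # Implied public spending shares, inferred from the reported losses and waste fractions.
    reported = {"education": {"loss_pp": 1.5, "waste_fraction": 0.326},
                "health":    {"loss_pp": 2.6, "waste_fraction": 0.650}}
    for sector, r in reported.items():
        implied_spending_pp = r["loss_pp"] / r["waste_fraction"]
        print(f"{sector}: implied spending of about {implied_spending_pp:.1f} percent of GDP")

This implies average public spending on education and health of roughly 4.6 and 4.0 percent of GDP, plausible orders of magnitude for the sample, and the two sector losses indeed sum to the reported 4.1 percentage points.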
A natural way forward involves finding indicators for both output quantities and prices, for direct measurement of some public outputs. This is recommended in SNA 93 but has yet to be implemented in most countries. Moreover, in recent times there has been increased interest in outcomes-based performance monitoring and evaluation of government activities (see Stiglitz and others, 2010). As also argued in Atkinson (2005), it will be important to measure not only public sector outputs but also outcomes, as the latter are what ultimately affect welfare. A step in this direction is suggested by Abraham and Mackie (2006) for the US, with the creation of “satellite” accounts in specific areas such as education and health. These would extend the accounting of the nation’s productive inputs and outputs, taking into account specific aspects of non-market activities.