Quality of Government and Living Standards: Adjusting for the Efficiency of Public Spending. By Grigoli, Francesco; Ley, Eduardo
IMF Working Paper No. 12/182
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26052.0
Summary: It is generally acknowledged that the government’s output is difficult to define and its value is hard to measure. The practical solution, adopted by national accounts systems, is to equate output to input costs. However, several studies estimate significant inefficiencies in government activities (i.e., the same output could be achieved with fewer inputs), implying that inputs are not a good approximation for outputs. If taken seriously, the next logical step is to purge from GDP the fraction of government inputs that is wasted. As differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, we must take them into account when computing a measure of living standards. We illustrate such a correction by computing adjusted per capita GDPs on the basis of two studies that estimate efficiency scores for several dimensions of government activities. We show that the correction could be significant, and that rankings of living standards could be re-ordered as a result.
Excerpts:
Despite its acknowledged shortcomings, GDP per capita is still the most commonly used summary indicator of living standards. Much of the policy advice provided by international organizations is based on macroeconomic magnitudes as shares of GDP, and framed on cross-country comparisons of per capita GDP. However, what GDP does actually measure may differ significantly across countries for several reasons. We focus here on a particular source for this heterogeneity: the quality of public spending. Broadly speaking, the ‘quality of public spending’ refers to the government’s effectiveness in transforming resources into socially valuable outputs. The opening quote highlights the disconnect between spending and value when the discipline of market transactions is missing.
Everywhere around the world, non-market government accounts for a big share of GDP and yet it is poorly measured—namely the value to users is assumed to equal the producer’s cost. Such a framework is deficient because it does not allow for changes in the amount of output produced per unit of input, that is, changes in productivity (for a recent review of this issue, see Atkinson and others, 2005). It also assumes that these inputs are fully used. To put it another way, standard national accounting assumes that government activities are on the best practice frontier. When this is not the case, there is an overstatement of national production. This, in turn, could result in misleading conclusions, particularly in cross-country comparisons, given that the size, scope, and performance of public sectors vary so widely.
Moreover, in the national accounts, this attributed non-market (government and non-profit sectors) “value added” is further allocated to the household sector as “actual consumption.” As Deaton and Heston (2008) put it: “[...] there are many countries around the world where government-provided health and education is inefficient, sometimes involving mass absenteeism by teachers and health workers [...] so that such ‘actual’ consumption is anything but actual. To count the salaries of AWOL government employees as ‘actual’ benefits to consumers adds statistical insult to original injury.” This “statistical insult” logically follows from the United Nations System of National Accounts (SNA) framework once ‘waste’ is classified as income—since national income must be either consumed or saved. Absent teachers and health care workers are all too common in many low-income countries (Chaudhury and Hammer, 2004; Kremer and others, 2005; Chaudhury and others, 2006; and World Bank, 2004). Beyond straight absenteeism, which is an extreme case, there are generally significant cross-country differences in the quality of public sector services. World Bank (2011) reports that in India, even though most children of primary-school age are enrolled in school, 35 percent of them cannot read a simple paragraph and 41 percent cannot do a simple subtraction.
It must be acknowledged, nonetheless, that for many of government’s non-market services, the output is difficult to define, and without market prices the value of output is hard to measure. It is because of this that the practical solution adopted in the SNA is to equate output to input costs. This choice may be more adequate when using GDP to measure economic activity or factor employment than when using GDP to measure living standards.
Moving beyond this state of affairs, there are two alternative approaches. One is to try to find indicators for both output quantities and prices for direct measurement of some public outputs, as recommended in SNA 93 (but yet to be broadly implemented). The other is to correct the input costs to account for productive inefficiency, namely to purge from GDP the fraction of these inputs that is wasted. We focus here on the nature of this correction. As the differences in the quality of the public sector have a direct impact on citizens’ effective consumption of public and private goods and services, it seems natural to take them into account when computing a measure of living standards.
To illustrate, in a recent study, Afonso and others (2010) compute public sector efficiency scores for a group of countries and conclude that “[...] the highest-ranking country uses one-third of the inputs as the bottom ranking one to attain a certain public sector performance score. The average input scores suggest that countries could use around 45 per cent less resources to attain the same outcomes if they were fully efficient.” In this paper, we take such a statement to its logical conclusion. Once we acknowledge that the same output could be achieved with fewer inputs, output value cannot be equated to input costs. In other words, waste does not belong in the living-standards indicator—it remains a cost of government, but it must be purged from the value of government services. As noted, this adjustment is especially relevant for cross-country comparisons.
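The correction described here can be sketched in a few lines. The code below is our own illustration with invented numbers, not the authors' methodology or data: GDP is credited with only the efficient fraction of government input costs, and the wasted remainder is purged.

```python
# Illustrative sketch of the waste correction discussed in the text.
# All figures are hypothetical, not taken from the studies cited.

def adjusted_gdp(gdp, gov_input_cost, efficiency_score):
    """Purge from GDP the wasted fraction (1 - score) of government inputs."""
    waste = (1.0 - efficiency_score) * gov_input_cost
    return gdp - waste

# Example: GDP of 100, government inputs of 20, efficiency score of 0.8,
# i.e. 20 percent of inputs are waste: corrected GDP = 100 - 0.2 * 20 = 96.
print(adjusted_gdp(100.0, 20.0, 0.8))  # 96.0
```

Under this convention, two countries with identical measured GDP but different efficiency scores would report different corrected living standards, which is what drives the re-rankings discussed later.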
...
In this context, as noted, the standard practice is to equate the value of government outputs to their cost, notwithstanding the SNA 93 proposal to estimate government outputs directly. The value added that, say, public education contributes to GDP is based on the wage bill and other costs of providing education, such as outlays for utilities and school supplies. Similarly for public health: the wage bill of doctors, nurses, and other medical staff, together with medical supplies, largely comprises its value added. Thus, in the (pre-93) SNA used almost everywhere, non-market output, by definition, equals total costs. Yet the same costs support widely different levels of public output, depending on the quality of the public sector.
Note that value added is defined as payments to factors (labor and capital) and profits. Profits are assumed to be zero in the non-commercial public sector. As for the return to capital, in the current SNA used by most countries, public capital is attributed a net return of zero—i.e., the return from public capital is equated to its depreciation rate. This lack of a net return measure in the SNA is not due to a belief that the net return is actually zero, but to the difficulties of estimating the return.
Atkinson and others (2005, page 12) state some of the reasons behind current SNA practice: “Wide use of the convention that (output = input) reflects the difficulties in making alternative estimates. Simply stated, there are two major problems: (a) in the case of collective services such as defense or public administration, it is hard to identify the exact nature of the output, and (b) in the case of services supplied to individuals, such as health or education, it is hard to place a value on these services, as there is no market transaction.”
Murray (2010) also observes that studies of the government’s production activities, and their implications for the measurement of living standards, have long been ignored. He writes: “Looking back it is depressing that progress in understanding the production of public services has been so slow. In the market sector there is a long tradition of studying production functions, demand for inputs, average and marginal cost functions, elasticities of supply, productivity, and technical progress. The non-market sector has gone largely
unnoticed. In part this can be explained by general difficulties in measuring the output of services, whether public or private. But in part it must be explained by a completely different perspective on public and private services. Resource use for the production of public services has not been regarded as inputs into a production process, but as an end in itself, in the form of public consumption. Consequently, the production activity in the government sector has not been recognized.” (Our italics.)
The simple point that we make in this paper is that once it is recognized that the effectiveness of the government’s ‘production function’ varies significantly across countries, the simple convention of equating output value to input cost must be revisited. Thus, if we learn that the same output could be achieved with fewer inputs, it is more appropriate to credit GDP or GNI with the required inputs rather than with the actual inputs, which include waste. While perceptions of government effectiveness vary widely among countries as, e.g., the World Bank’s Governance indicators attest (Kaufmann and others 2009), getting reliable measures of governments’ actual effectiveness is a challenging task, as we shall discuss below.
In physics, efficiency is defined as the ratio of useful work done to total energy expended, and the same general idea is associated with the term when discussing production. Economists simply replace ‘useful work’ by ‘outputs’ and ‘energy’ by ‘inputs.’ Technical efficiency means the adequate use of the available resources in order to obtain the maximum product. Why focus on technical efficiency and not other concepts of efficiency, such as price or allocative efficiency? Do we have enough evidence on public sector inefficiency to make the appropriate corrections?
The reason why we focus on technical efficiency in this preliminary inquiry is twofold. First, it corresponds to the concept of waste. Productive inefficiency implies that some inputs are wasted as more could have been produced with available inputs. In the case of allocative inefficiency, there could be a different allocation of resources that would make everyone better off but we cannot say that necessarily some resources are unused—although they are certainly not aligned with social preferences. Second, measuring technical inefficiency is easier and less controversial than measuring allocative inefficiency. To measure technical inefficiency, there are parametric and non-parametric methods allowing for construction of a best practice frontier. Inefficiency is then measured by the distance between this frontier and the actual input-output combination being assessed.
Indicators (or rather ranges of indicators) of inefficiency exist for the overall public sector and for specific activities such as education, healthcare, transportation, and other sectors. However, they are far from uncontroversial. Sources of controversy include: omission of inputs and/or outputs, temporal lags needed to observe variations in the output indicators, choice of measures of outputs, and mixing outputs with outcomes. For example, many social and macroeconomic indicators beyond government spending impact health status (Spinks and Hollingsworth, 2009, and Joumard and others, 2010), and they should be taken into account. Most of the available output indicators show autocorrelation, and changes in inputs typically take time to materialize into variations in outputs. Also, there is a trend towards using outcome rather than output indicators for measuring the performance of the public sector. In health and education, efficiency studies have moved away from outputs (e.g., number of prenatal interventions) to outcomes (e.g., infant mortality rates). When cross-country analyses are involved, however, it must be acknowledged that differences in outcomes are explained not only by differences in public sector outputs but also by differences in other environmental factors outside the public sector (e.g., culture, nutrition habits).
Empirical efficiency measurement methods first construct a reference technology based on observed input-output combinations, using econometric or linear programming methods. Next, they assess the distance of actual input-output combinations from the best-practice frontier. These distances, properly scaled, are called efficiency measures or scores. An input-based efficiency measure tells us the extent to which it is possible to reduce the amount of inputs without reducing the level of output. Thus, an efficiency score of, say, 0.8 means that, using best practices observed elsewhere, 80 percent of the inputs would suffice to produce the same output.
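The frontier-distance idea can be illustrated with a minimal non-parametric sketch in the spirit of free-disposal-hull (FDH) scoring, a simple relative of the DEA methods mentioned in the text. The function and data below are our own invention, not drawn from the studies the paper cites.

```python
# Input-based FDH-style efficiency: for each unit, find the smallest factor by
# which its inputs could be scaled down while still being dominated by some
# observed unit that produces at least as much output. Data are hypothetical.

def fdh_input_efficiency(units):
    """units: list of (inputs, output) pairs, where inputs is a list of floats.
    Returns one score per unit; 1.0 means on the observed best-practice frontier."""
    scores = []
    for xi, yi in units:
        best = 1.0
        for xj, yj in units:
            if yj >= yi:  # candidate peer produces at least as much output
                # scaling of xi needed so the peer uses no more of any input
                theta = max(xj[k] / xi[k] for k in range(len(xi)))
                best = min(best, theta)
        scores.append(best)
    return scores

# Three units, one input each; the third uses 10 units of input to produce
# what the second produces with 8, so its score is 8/10 = 0.8.
data = [([5.0], 50.0), ([8.0], 80.0), ([10.0], 80.0)]
print(fdh_input_efficiency(data))  # [1.0, 1.0, 0.8]
```

A score of 0.8 here has exactly the interpretation given in the text: 80 percent of the unit's inputs would suffice to produce the same output under observed best practice.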
We base our corrections to GDP on the efficiency scores estimated in two papers: Afonso and others (2010) for several indicators referred to a set of 24 countries, and Evans and others (2000) focusing on health, for 191 countries based on WHO data. These studies employ techniques similar to those used in other studies, such as Gupta and Verhoeven (2001), Clements (2002), Carcillo and others (2007), and Joumard and others (2010).
- Afonso and others (2010) compute public sector performance and efficiency indicators (as performance weighted by the relevant expenditure needed to achieve it) for 24 EU and emerging economies. Using DEA, they conclude that on average countries could use 45 percent less resources to attain the same outcomes, and deliver an additional third of the fully efficient output if they were on the efficiency frontier. The study included an analysis of the efficiency of education and health spending that we use here.
- Evans and others (2000) estimate health efficiency scores for the 1993–1997 period for 191 countries, based on WHO data, using stochastic frontier methods. Two health outcome measures are identified: the disability adjusted life expectancy (DALE) and a composite index of DALE, dispersion of child survival rate, responsiveness of the health care system, inequities in responsiveness, and fairness of financial contribution. The input measures are health expenditure and years of schooling, with the addition of country fixed effects. Because of its large country coverage, this study is useful for illustrating the impact of the type of correction that we are discussing here.
We must note that, ideally, we would like to base our corrections on input-based technical efficiency studies that deal exclusively with inputs and outputs, and do not bring outcomes into the analysis. The reason is that public sector outputs interact with other factors to produce outcomes, and here cross-country heterogeneity can play an important role in driving cross-country differences in outcomes. Unfortunately, we have found no technical-efficiency studies covering a broad sample of countries that restrict themselves to input-output analysis. In particular, these two studies deal with a mix of outputs and outcomes. The results reported here should thus be seen as illustrative. Furthermore, it should be underscored that the level of “waste” identified for each particular country varies significantly across studies, which implies that any associated measures of GDP adjusting for this waste will also differ.
...
We have argued here that the current practice of estimating the value of the government’s non-market output by its input costs is not only unsatisfactory but also misleading in cross-country comparisons of living standards. Since differences in the quality of the public sector have an impact on the population’s effective consumption and welfare, they must be taken into account in comparisons of living standards. We have performed illustrative corrections of the input costs to account for productive inefficiency, thus purging from GDP the fraction of these inputs that is wasted.
Our results suggest that the magnitude of the correction could be significant. When correcting for inefficiencies in the health and education sectors, the average loss for a set of 24 EU member states and emerging economies amounts to 4.1 percentage points of GDP. Sector-specific averages for education and health are 1.5 and 2.6 percentage points of GDP, implying that 32.6 and 65.0 percent of the inputs are wasted in the respective sectors. These corrections are reflected in the GDP-per-capita ranking, which gets reshuffled in 9 cases out of 24. In a hypothetical scenario where the inefficiency of the health sector is assumed to be representative of the public sector as a whole, the rank reordering would affect about 50 percent of the 93 countries in the sample, with 70 percent of it happening in the lower half of the original ranking. These results, however, should be interpreted with caution, as the purpose of this paper is to call attention to the issue, rather than to provide fine-tuned waste estimates.
A natural way forward involves finding indicators for both output quantities and prices for direct measurement of some public outputs. This is recommended in SNA 93 but has yet to be implemented in most countries. Moreover, in recent times there has been an increased interest in outcomes-based performance monitoring and evaluation of government activities (see Stiglitz and others, 2010). As argued also in Atkinson (2005), it will be important to measure not only public sector outputs but also outcomes, as the latter are what ultimately affect welfare. A step in this direction is suggested by Abraham and Mackie (2006) for the US, with the creation of “satellite” accounts in specific areas such as education and health. These extend the accounting of the nation’s productive inputs and outputs, thereby taking into account specific aspects of non-market activities.
Tuesday, July 10, 2012
Monday, July 9, 2012
Macro-prudential Policy in a Fisherian Model of Financial Innovation
Macro-prudential Policy in a Fisherian Model of Financial Innovation. By Bianchi, Javier; Boz, Emine; Mendoza, Enrique G.
IMF Working Paper No. 12/181
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26051.0
Summary: The interaction between credit frictions, financial innovation, and a switch from optimistic to pessimistic beliefs played a central role in the 2008 financial crisis. This paper develops a quantitative general equilibrium framework in which this interaction drives the financial amplification mechanism to study the effects of macro-prudential policy. Financial innovation enhances the ability of agents to collateralize assets into debt, but the riskiness of this new regime can only be learned over time. Beliefs about transition probabilities across states with high and low ability to borrow change as agents learn from observed realizations of financial conditions. At the same time, the collateral constraint introduces a pecuniary externality, because agents fail to internalize the effect of their borrowing decisions on asset prices. Quantitative analysis shows that the effectiveness of macro-prudential policy in this environment depends on the government's information set, the tightness of credit constraints and the pace at which optimism surges in the early stages of financial innovation. The policy is least effective when the government is as uninformed as private agents, credit constraints are tight, and optimism builds quickly.
Excerpts:
Policymakers have responded to the lapses in financial regulation in the years before the 2008 global financial crisis and the unprecedented systemic nature of the crisis itself with a strong push to revamp financial regulation following a "macro-prudential" approach. This approach aims to focus on the macro (i.e. systemic) implications that follow from the actions of credit market participants, and to implement policies that influence behavior in "good times" in order to make financial crises less severe and less frequent. The design of macro-prudential policy is hampered, however, by the need to develop models that are reasonably good at explaining the macro dynamics of financial crises and at capturing the complex dynamic interconnections between potential macro-prudential policy instruments and the actions of agents in credit markets.
The task of developing these models is particularly challenging because of the fast pace of financial development. Indeed, the decade before the 2008 crash was a period of significant financial innovation, which included both the introduction of a large set of complex financial instruments, such as collateralized debt obligations, mortgage backed securities and credit default swaps, and the enactment of major financial reforms of a magnitude and scope unseen since the end of the Great Depression. Thus, models of macro-prudential regulation have to take into account the changing nature of the financial environment, and hence deal with the fact that credit market participants, as well as policymakers, may be making decisions lacking perfect information about the true riskiness of a changing financial regime.
This paper proposes a dynamic stochastic general equilibrium model in which the interaction between financial innovation, credit frictions and imperfect information is at the core of the financial transmission mechanism, and uses it to study its quantitative implications for the design and effectiveness of macro-prudential policy. In the model, a collateral constraint limits the agents' ability to borrow to a fraction of the market value of the assets they can offer as collateral. Financial innovation enhances the ability of agents to "collateralize," but also introduces risk because of the possibility of fluctuations in collateral requirements or loan-to-value ratios. We take literally the definition of financial innovation to be the introduction of a truly new financial regime. This forces us to deviate from the standard assumption that agents formulate rational expectations with full information about the stochastic process driving fluctuations in credit conditions. In particular, we assume that agents learn (in Bayesian fashion) about the transition probabilities of financial regimes only as they observe regimes with high and low ability to borrow over time. In the long run, and in the absence of new waves of financial innovation, they learn the true transition probabilities and form standard rational expectations, but in the short run agents' beliefs display waves of optimism and pessimism depending on their initial priors and on the market conditions they observe. These changing beliefs influence agents' borrowing decisions and equilibrium asset prices, and together with the collateral constraint they form a financial amplification feedback mechanism: optimistic (pessimistic) expectations lead to over-borrowing (under-borrowing) and increased (reduced) asset prices, and as asset prices change the ability to borrow changes as well.
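The learning mechanism described above can be sketched with a toy Bayesian updater. The code below is our own illustration under simplifying assumptions (two observable regimes and independent Beta priors on each regime's persistence), not the authors' model.

```python
# Toy Bayesian learning of a two-regime Markov process, in the spirit of the
# mechanism described in the text: agents observe high ('H') and low ('L')
# borrowing-ability regimes and update beliefs about regime persistence.
# Priors and data are hypothetical.

def update_beliefs(observations, prior=(1.0, 1.0)):
    """observations: sequence of regimes, e.g. ['H', 'H', 'L', 'H'].
    Returns the posterior mean probability of staying in each regime,
    using a Beta(prior) on each regime's stay/switch probability."""
    # counts[s] = [pseudo-count of stays, pseudo-count of switches] from state s
    counts = {'H': [prior[0], prior[1]], 'L': [prior[0], prior[1]]}
    for prev, curr in zip(observations, observations[1:]):
        counts[prev][0 if curr == prev else 1] += 1
    return {s: c[0] / (c[0] + c[1]) for s, c in counts.items()}

# Early optimism: an unbroken run of good regimes pushes the believed
# persistence of 'H' toward one before any bad regime has been observed...
optimistic = update_beliefs(['H'] * 6)
# ...and the first observed 'L' realization pulls that belief back down.
chastened = update_beliefs(['H'] * 6 + ['L'])
print(optimistic['H'] > chastened['H'])  # True
```

In the long run, with enough observations of both regimes, these posterior means converge to the true transition probabilities, matching the paper's point that beliefs only coincide with full-information rational expectations asymptotically.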
Our analysis focuses in particular on a learning scenario in which the arrival of financial innovation starts an "optimistic phase," in which a few observations of enhanced borrowing ability lead agents to believe that the financial environment is stable and risky assets are not "very risky." Hence, they borrow more and bid up the price of risky assets more than in a full-information rational expectations equilibrium. The higher value of assets in turn relaxes the credit constraint. Thus, the initial increase in debt due to optimism is amplified by the interaction with the collateral constraint via optimistic asset prices. Conversely, when the first realization of the low-borrowing-ability regime is observed, a "pessimistic phase" starts in which agents overstate the probability of continuing in poor financial regimes and overstate the riskiness of assets. This results in lower debt levels and lower asset prices, and the collateral constraint amplifies this downturn.
Macro-prudential policy action is desirable in this environment because the collateral constraint introduces a pecuniary externality in credit markets that leads to more debt and financial crises that are more severe and frequent than in the absence of this externality. The externality exists because individual agents fail to internalize the effect of their borrowing decisions on asset prices, particularly future asset prices in states of financial distress (in which the feedback loop via the collateral constraint triggers a financial crash).
There are several studies in the growing literature on macro-prudential regulation that have examined the implications of this externality, but typically under the assumption that agents form rational expectations with full information (e.g. Lorenzoni (2008), Stein (2011), Bianchi (2011), Bianchi and Mendoza (2010), Korinek (2010), Jeanne and Korinek (2010), Benigno, Chen, Otrok, Rebucci, and Young (2010)). In contrast, the novel contribution of this paper is that we study the effects of macro-prudential policy in an environment in which the pecuniary externality is influenced by the interaction of the credit constraint with learning about the riskiness of a new financial regime. The analysis of Boz and Mendoza (2010) suggests that taking this interaction into account can be important, because they found that the credit constraint in a learning setup produces significantly larger effects on debt and asset prices than in a full-information environment with the same credit constraint. Their study, however, focused only on quantifying the properties of the decentralized competitive equilibrium and abstracted from normative issues and policy analysis. The policy analysis of this paper considers a social planner under two different informational assumptions. First, an uninformed planner who has to learn about the true riskiness of the new financial environment, and who faces the set of feasible credit positions supported by the collateral values of the competitive equilibrium with learning. We start with a baseline scenario in which private agents and the planner have the same initial priors and thus form the same sequence of beliefs, and later study scenarios in which private agents and the uninformed planner form different beliefs.
Second, an informed planner with full information, who therefore knows the true transition probabilities across financial regimes, and faces a set of feasible credit positions consistent with the collateral values of the full-information, rational expectations competitive equilibrium.
We compute the decentralized competitive equilibrium of the model with learning (DEL) and contrast this case with the above social planner equilibria. We then compare the main features of these equilibria, in terms of the behavior of macroeconomic aggregates and asset pricing indicators, and examine the characteristics of macro-prudential policies that support the allocations of the planning problems as competitive equilibria. This analysis emphasizes the potential limitations of macro-prudential policy in the presence of significant financial innovation, and highlights the relevance of taking into account informational frictions in evaluating the effectiveness of macro-prudential policy.
The quantitative analysis indicates that the interaction of the collateral constraint with optimistic beliefs in the DEL equilibrium can strengthen the case for introducing macro-prudential regulation compared with the decentralized equilibrium under full information (DEF). This is because, as Boz and Mendoza (2010) showed, the interaction of these elements produces larger amplification both of the credit boom in the optimistic phase and of the financial crash when the economy switches to the bad financial regime. The results also show, however, that the effectiveness of macro-prudential policy varies widely with the assumptions about the information set and collateral pricing function used by the social planner. Moreover, for the uninformed planner, the effectiveness of macro-prudential policy also depends on the tightness of the borrowing constraint and the pace at which optimism builds in the early stages of financial innovation.
Consider first the uninformed planner. For this planner, the undervaluation of risk weakens the incentives to build precautionary savings against states of nature with low-borrowing-ability regimes over the long run, because this planner underestimates the probability of landing on and remaining in those states. In contrast, the informed planner assesses the correct probabilities of landing and remaining in states with good and bad credit regimes, so its incentives to build precautionary savings are stronger. In fact, the informed planner's optimal macro-prudential policy features a precautionary component that lowers borrowing levels at given asset prices, and a component that influences portfolio choice of debt v. assets to address the effect of the agents' mispricing of risk on collateral prices.
It is important to note that even the uninformed planner has the incentive to use macro-prudential policy to tackle the pecuniary externality and alter debt and asset pricing dynamics. In our baseline calibration, however, the borrowing constraint becomes tightly binding in the early stages of financial innovation as optimism builds quickly, and as a result macro-prudential policy is not very effective (i.e. debt positions and asset prices differ little between the DEL and the uninformed planner). Intuitively, since a binding credit constraint implies that debt equals the high-credit-regime fraction of the value of collateral, debt levels for the uninformed social planner and the decentralized equilibrium are similar once the constraint becomes binding for the planner. But this is not a general result. Variations in the information structure in which optimism builds more gradually produce outcomes in which macro-prudential policy is effective even when the planner has access to the same information set. On the other hand, it is generally true that the uninformed planner allows larger debt positions than the informed planner because of the lower precautionary savings incentives.
We also analyze the welfare losses that arise from the pecuniary externality and the optimism embedded in agents' subjective beliefs. The losses arising from their combined effect are large, reaching up to 7 percent in terms of a compensating variation in permanent consumption that equalizes the welfare of the informed planner with that of the DEL economy. The welfare losses attributable to the pecuniary externality alone are relatively small, in line with the findings reported by Bianchi (2011) and Bianchi and Mendoza (2010), and they fall significantly at the peak of optimism.
Our model follows a long tradition of models of financial crises in which credit frictions and imperfect information interact. This notion dates back to the classic work of Fisher (1933), in which he described his debt-deflation financial amplification mechanism as the result of a feedback loop between agents' beliefs and credit frictions (particularly those that force fire sales of assets and goods by distressed borrowers). Minsky (1992) is in a similar vein. More recently, macroeconomic models of financial accelerators (e.g. Bernanke, Gertler, and Gilchrist (1999), Kiyotaki and Moore (1997), Aiyagari and Gertler (1999)) have focused on modeling financial amplification, but typically under rational expectations with full information about the stochastic processes of exogenous shocks.
The particular specification of imperfect information and learning that we use follows closely that of Boz and Mendoza (2010) and Cogley and Sargent (2008a), in which agents observe regime realizations of a Markov-switching process without noise but need to learn its transition probability matrix. The imperfect information assumption is based on the premise that the U.S. financial system went through significant changes beginning in the mid-90s as a result of financial innovation and deregulation that took place at a rapid pace. As in Boz and Mendoza (2010), agents go through a learning process in order to "discover" the true riskiness of the new financial environment as they observe realizations of regimes with high or low borrowing ability.
Our quantitative analysis is related to Bianchi and Mendoza (2010)'s quantitative study of macro-prudential policy. They examined an asset pricing model with a similar collateral constraint and used comparisons of the competitive equilibria vis-a-vis a social planner to show that optimal macro-prudential policy curbs credit growth in good times and reduces the frequency and severity of financial crises. The government can accomplish this by using Pigouvian taxes on debt and dividends to induce agents to internalize the model's pecuniary externality. Bianchi and Mendoza's framework does not capture, however, the role of informational frictions interacting with frictions in financial markets, and thus is silent about the implications of di®erences in the information sets of policy-makers and private agents.
Our paper is also related to Gennaioli, Shleifer, and Vishny (2010), who study financial innovation in an environment in which "local thinking" leads agents to neglect low probability adverse events (see also Gennaioli and Shleifer (2010)). As in our model, the informational friction distorts decision rules and asset prices, but the informational frictions in the two setups differ.3 Moreover, the welfare analysis of Gennaioli, Shleifer, and Vishny (2010) focuses on the effect of financial innovation under local thinking, while we emphasize the interaction between a fire-sale externality and informational frictions.
Finally, our work is also related to the argument developed by Stein (2011) to favor a cap and trade system to address a pecuniary externality that leads banks to issue excessive short-term debt in the presence of private information. Our analysis differs in that we study the implications of a form of model uncertainty (i.e. uncertainty about the transition probabilities across financial regimes) for macro-prudential regulation, instead of private information, and we focus on Pigouvian taxes as a policy instrument to address the pecuniary externality.
IMF Working Paper No. 12/181
Jul 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=26051.0
Summary: The interaction between credit frictions, financial innovation, and a switch from optimistic to pessimistic beliefs played a central role in the 2008 financial crisis. This paper develops a quantitative general equilibrium framework in which this interaction drives the financial amplification mechanism to study the effects of macro-prudential policy. Financial innovation enhances the ability of agents to collateralize assets into debt, but the riskiness of this new regime can only be learned over time. Beliefs about transition probabilities across states with high and low ability to borrow change as agents learn from observed realizations of financial conditions. At the same time, the collateral constraint introduces a pecuniary externality, because agents fail to internalize the effect of their borrowing decisions on asset prices. Quantitative analysis shows that the effectiveness of macro-prudential policy in this environment depends on the government's information set, the tightness of credit constraints and the pace at which optimism surges in the early stages of financial innovation. The policy is least effective when the government is as uninformed as private agents, credit constraints are tight, and optimism builds quickly.
Excerpts:
Policymakers have responded to the lapses in financial regulation in the years before the 2008 global financial crisis and the unprecedented systemic nature of the crisis itself with a strong push to revamp financial regulation following a "macro-prudential" approach. This approach aims to focus on the macro (i.e. systemic) implications that follow from the actions of credit market participants, and to implement policies that influence behavior in "good times" in order to make financial crises less severe and less frequent. The design of macro-prudential policy is hampered, however, by the need to develop models that are reasonably good at explaining the macro dynamics of financial crises and at capturing the complex dynamic interconnections between potential macro-prudential policy instruments and the actions of agents in credit markets.
The task of developing these models is particularly challenging because of the fast pace of financial development. Indeed, the decade before the 2008 crash was a period of significant financial innovation, which included both the introduction of a large set of complex financial instruments, such as collateralized debt obligations, mortgage-backed securities and credit default swaps, and the enactment of major financial reforms of a magnitude and scope unseen since the end of the Great Depression. Thus, models of macro-prudential regulation have to take into account the changing nature of the financial environment, and hence deal with the fact that credit market participants, as well as policymakers, may be making decisions without perfect information about the true riskiness of a changing financial regime.
This paper proposes a dynamic stochastic general equilibrium model in which the interaction between financial innovation, credit frictions and imperfect information is at the core of the financial transmission mechanism, and uses the model to study the quantitative implications for the design and effectiveness of macro-prudential policy. In the model, a collateral constraint limits the agents' ability to borrow to a fraction of the market value of the assets they can offer as collateral. Financial innovation enhances the ability of agents to "collateralize," but also introduces risk because of the possibility of fluctuations in collateral requirements or loan-to-value ratios. We take the definition of financial innovation literally, as the introduction of a truly new financial regime. This forces us to deviate from the standard assumption that agents formulate rational expectations with full information about the stochastic process driving fluctuations in credit conditions. In particular, we assume that agents learn (in Bayesian fashion) about the transition probabilities of financial regimes only as they observe regimes with high and low ability to borrow over time. In the long run, and in the absence of new waves of financial innovation, they learn the true transition probabilities and form standard rational expectations; but in the short run agents' beliefs display waves of optimism and pessimism depending on their initial priors and on the market conditions they observe. These changing beliefs influence agents' borrowing decisions and equilibrium asset prices, and together with the collateral constraint they form a financial amplification feedback mechanism: optimistic (pessimistic) expectations lead to over-borrowing (under-borrowing) and higher (lower) asset prices, and as asset prices change the ability to borrow changes as well.
Our analysis focuses in particular on a learning scenario in which the arrival of financial innovation starts an "optimistic phase," in which a few observations of enhanced borrowing ability lead agents to believe that the financial environment is stable and risky assets are not "very risky." Hence, they borrow more and bid up the price of risky assets more than in a full-information rational expectations equilibrium. The higher value of assets in turn relaxes the credit constraint. Thus, the initial increase in debt due to optimism is amplified by the interaction with the collateral constraint via optimistic asset prices. Conversely, when the first realization of the low-borrowing-ability regime is observed, a "pessimistic phase" starts in which agents overstate the probability of continuing in poor financial regimes and overstate the riskiness of assets. This results in lower debt levels and lower asset prices, and the collateral constraint amplifies this downturn.
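The learning dynamics described above can be sketched with a simple Beta-Binomial update of the persistence probability of the high-borrowing-ability regime. The priors and the regime paths below are illustrative assumptions, not the paper's calibration:

```python
# Sketch of Bayesian learning about a two-regime Markov chain, in the
# spirit of the Cogley-Sargent / Boz-Mendoza setup: agents observe 'H'
# (high-borrowing-ability) and 'L' (low) regimes without noise and
# update beliefs about the transition probability P(H | H).

def update_beliefs(regime_path, a0=1.0, b0=1.0):
    """Track the posterior mean of P(H | H) along an observed regime path.

    a0, b0: Beta prior counts for (stay in H, leave H) -- an assumption.
    regime_path: sequence of 'H'/'L' regime realizations.
    """
    a, b = a0, b0
    means = []
    for prev, curr in zip(regime_path, regime_path[1:]):
        if prev == 'H':            # only transitions out of H are informative
            if curr == 'H':
                a += 1             # observed high-regime persistence
            else:
                b += 1             # observed switch to the low regime
        means.append(a / (a + b))  # posterior mean of P(H | H)
    return means

# An unbroken run of high-credit regimes breeds optimism about persistence...
optimistic = update_beliefs('HHHHHHHH')
# ...until the first low-credit realization pulls beliefs back down.
reversal = update_beliefs('HHHHHHHL')
```

After seven H-to-H observations the posterior mean of P(H | H) rises toward one; a single switch to L lowers it, which is the belief reversal that starts the "pessimistic phase."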
Macro-prudential policy action is desirable in this environment because the collateral constraint introduces a pecuniary externality in credit markets that leads to more debt and financial crises that are more severe and frequent than in the absence of this externality. The externality exists because individual agents fail to internalize the effect of their borrowing decisions on asset prices, particularly future asset prices in states of financial distress (in which the feedback loop via the collateral constraint triggers a financial crash).
There are several studies in the growing literature on macro-prudential regulation that have examined the implications of this externality, but typically under the assumption that agents form rational expectations with full information (e.g. Lorenzoni (2008), Stein (2011), Bianchi (2011), Bianchi and Mendoza (2010), Korinek (2010), Jeanne and Korinek (2010), Benigno, Chen, Otrok, Rebucci, and Young (2010)). In contrast, the novel contribution of this paper is that we study the effects of macro-prudential policy in an environment in which the pecuniary externality is influenced by the interaction of the credit constraint with learning about the riskiness of a new financial regime. The analysis of Boz and Mendoza (2010) suggests that taking this interaction into account can be important, because they found that the credit constraint in a learning setup produces significantly larger effects on debt and asset prices than in a full-information environment with the same credit constraint. Their study, however, focused only on quantifying the properties of the decentralized competitive equilibrium and abstracted from normative issues and policy analysis. The policy analysis of this paper considers a social planner under two different informational assumptions. First, an uninformed planner who has to learn about the true riskiness of the new financial environment, and faces the set of feasible credit positions supported by the collateral values of the competitive equilibrium with learning. We start with a baseline scenario in which private agents and the planner have the same initial priors and thus form the same sequence of beliefs, and later study scenarios in which private agents and the uninformed planner form different beliefs.
Second, an informed planner with full information, who therefore knows the true transition probabilities across financial regimes, and faces a set of feasible credit positions consistent with the collateral values of the full-information, rational expectations competitive equilibrium.
We compute the decentralized competitive equilibrium of the model with learning (DEL) and contrast this case with the above social planner equilibria. We then compare the main features of these equilibria, in terms of the behavior of macroeconomic aggregates and asset pricing indicators, and examine the characteristics of macro-prudential policies that support the allocations of the planning problems as competitive equilibria. This analysis emphasizes the potential limitations of macro-prudential policy in the presence of significant financial innovation, and highlights the relevance of taking into account informational frictions in evaluating the effectiveness of macro-prudential policy.
The quantitative analysis indicates that the interaction of the collateral constraint with optimistic beliefs in the DEL equilibrium can strengthen the case for introducing macro-prudential regulation compared with the decentralized equilibrium under full information (DEF). This is because, as Boz and Mendoza (2010) showed, the interaction of these elements produces larger amplification both of the credit boom in the optimistic phase and of the financial crash when the economy switches to the bad financial regime. The results also show, however, that the effectiveness of macro-prudential policy varies widely with the assumptions about the information set and collateral pricing function used by the social planner. Moreover, for the uninformed planner, the effectiveness of macro-prudential policy also depends on the tightness of the borrowing constraint and the pace at which optimism builds in the early stages of financial innovation.
Consider first the uninformed planner. For this planner, the undervaluation of risk weakens the incentives to build precautionary savings against states of nature with low-borrowing-ability regimes over the long run, because this planner underestimates the probability of landing in and remaining in those states. In contrast, the informed planner assesses the correct probabilities of landing in and remaining in states with good and bad credit regimes, so its incentives to build precautionary savings are stronger. In fact, the informed planner's optimal macro-prudential policy features a precautionary component that lowers borrowing levels at given asset prices, and a component that influences the portfolio choice of debt vs. assets to address the effect of the agents' mispricing of risk on collateral prices.
It is important to note that even the uninformed planner has the incentive to use macro-prudential policy to tackle the pecuniary externality and alter debt and asset pricing dynamics. In our baseline calibration, however, the borrowing constraint becomes tightly binding in the early stages of financial innovation as optimism builds quickly, and as a result macro-prudential policy is not very effective (i.e. debt positions and asset prices differ little between the DEL and the uninformed planner). Intuitively, since a binding credit constraint implies that debt equals the high-credit-regime fraction of the value of collateral, debt levels for the uninformed social planner and the decentralized equilibrium are similar once the constraint becomes binding for the planner. But this is not a general result. Variations in the information structure in which optimism builds more gradually produce outcomes in which macro-prudential policy is effective even when the planner has access to the same information set as private agents. On the other hand, it is generally true that the uninformed planner allows larger debt positions than the informed planner because of the lower precautionary savings incentives.
We also analyze the welfare losses that arise from the pecuniary externality and the optimism embedded in agents' subjective beliefs. The losses arising from their combined effect are large, reaching up to 7 percent in terms of a compensating variation in permanent consumption that equalizes the welfare of the informed planner with that of the DEL economy. The welfare losses attributable to the pecuniary externality alone are relatively small, in line with the findings reported by Bianchi (2011) and Bianchi and Mendoza (2010), and they fall significantly at the peak of optimism.
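A compensating variation in permanent consumption of this kind is straightforward to compute under CRRA preferences, where lifetime value scales as V(c(1+g)) = (1+g)^(1-sigma) V(c). The risk-aversion level and value numbers below are illustrative assumptions, not the paper's calibration:

```python
# Sketch: welfare cost expressed as a compensating variation in permanent
# consumption under CRRA utility. Exploits the homogeneity of CRRA value
# functions: scaling the whole consumption path by (1+g) scales lifetime
# value by (1+g)**(1-sigma).

def welfare_gain(v_de, v_sp, sigma):
    """Fraction g of permanent consumption that, if granted to the
    decentralized-equilibrium (DE) household, equalizes its welfare with
    the planner's: solve v_de * (1+g)**(1-sigma) = v_sp for g."""
    return (v_sp / v_de) ** (1.0 / (1.0 - sigma)) - 1.0

# Illustrative example: with sigma = 2, lifetime CRRA values are negative,
# and a planner value closer to zero means higher welfare.
gain = welfare_gain(v_de=-10.0, v_sp=-9.3, sigma=2.0)
```

A smaller gap between the two value levels yields a smaller gain, and identical values yield zero, as the measure requires.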
Our model follows a long tradition of models of financial crises in which credit frictions and imperfect information interact. This notion dates back to the classic work of Fisher (1933), who described his debt-deflation financial amplification mechanism as the result of a feedback loop between agents' beliefs and credit frictions (particularly those that force fire sales of assets and goods by distressed borrowers). Minsky (1992) is in a similar vein. More recently, macroeconomic models of financial accelerators (e.g. Bernanke, Gertler, and Gilchrist (1999), Kiyotaki and Moore (1997), Aiyagari and Gertler (1999)) have focused on modeling financial amplification, but typically under rational expectations with full information about the stochastic processes of exogenous shocks.
The particular specification of imperfect information and learning that we use follows closely that of Boz and Mendoza (2010) and Cogley and Sargent (2008a), in which agents observe regime realizations of a Markov-switching process without noise but need to learn its transition probability matrix. The imperfect information assumption is based on the premise that the U.S. financial system went through significant changes beginning in the mid-90s as a result of financial innovation and deregulation that took place at a rapid pace. As in Boz and Mendoza (2010), agents go through a learning process in order to "discover" the true riskiness of the new financial environment as they observe realizations of regimes with high or low borrowing ability.
Our quantitative analysis is related to the quantitative study of macro-prudential policy in Bianchi and Mendoza (2010). They examined an asset pricing model with a similar collateral constraint and used comparisons of the competitive equilibria vis-a-vis a social planner to show that optimal macro-prudential policy curbs credit growth in good times and reduces the frequency and severity of financial crises. The government can accomplish this by using Pigouvian taxes on debt and dividends to induce agents to internalize the model's pecuniary externality. Bianchi and Mendoza's framework does not capture, however, the role of informational frictions interacting with frictions in financial markets, and is thus silent about the implications of differences in the information sets of policy-makers and private agents.
Our paper is also related to Gennaioli, Shleifer, and Vishny (2010), who study financial innovation in an environment in which "local thinking" leads agents to neglect low-probability adverse events (see also Gennaioli and Shleifer (2010)). As in our model, the informational friction distorts decision rules and asset prices, but the informational frictions in the two setups differ. Moreover, the welfare analysis of Gennaioli, Shleifer, and Vishny (2010) focuses on the effect of financial innovation under local thinking, while we emphasize the interaction between a fire-sale externality and informational frictions.
Finally, our work is also related to the argument developed by Stein (2011) to favor a cap and trade system to address a pecuniary externality that leads banks to issue excessive short-term debt in the presence of private information. Our analysis differs in that we study the implications of a form of model uncertainty (i.e. uncertainty about the transition probabilities across financial regimes) for macro-prudential regulation, instead of private information, and we focus on Pigouvian taxes as a policy instrument to address the pecuniary externality.
Sunday, July 8, 2012
Margin requirements for non-centrally-cleared derivatives - BIS consultative document
BIS, July 2012
http://www.bis.org/publ/bcbs226.htm
The G20 Leaders agreed in 2011 to add margin requirements on non-centrally-cleared derivatives to the reform programme for over-the-counter (OTC) derivatives markets. Margin requirements can further mitigate systemic risk in the derivatives markets. In addition, they can encourage standardisation and promote central clearing of derivatives by reflecting the generally higher risk of non-centrally-cleared derivatives. The consultative paper published today lays out a set of high-level principles on margining practices and treatment of collateral, and proposes margin requirements for non-centrally-cleared derivatives.
These policy proposals are articulated through a set of key principles that primarily seek to ensure that appropriate margining practices will be established for all non-centrally-cleared OTC derivative transactions. These principles will apply to all transactions that involve either financial firms or systemically important non-financial entities.
The Basel Committee and IOSCO would like to solicit feedback from the public on questions related to the scope, feasibility and impact of the margin requirements. Responses to the public consultation, together with the QIS results, will be considered in formulating a final joint proposal on margin requirements on non-centrally-cleared derivatives by year-end.
Excerpts:
Objectives of margin requirements for non-centrally-cleared derivatives
Margin requirements for non-centrally-cleared derivatives have two main benefits:
Reduction of systemic risk. Only standardised derivatives are suitable for central clearing. A substantial fraction of derivatives are not standardised and cannot be centrally cleared. These non-centrally-cleared derivatives, which total hundreds of trillions of dollars of notional amounts, will pose the same type of systemic contagion and spillover risks that materialised in the recent financial crisis. Margin requirements for non-centrally-cleared derivatives would be expected to reduce contagion and spillover effects by ensuring that collateral is available to offset losses caused by the default of a derivatives counterparty. Margin requirements can also have broader macroprudential benefits, by reducing the financial system’s vulnerability to potentially de-stabilising procyclicality and limiting the build-up of uncollateralised exposures within the financial system.
Promotion of central clearing. In many jurisdictions central clearing will be mandatory for most standardised derivatives. But clearing imposes costs, in part because CCPs require margin to be posted. Margin requirements on non-centrally-cleared derivatives, by reflecting the generally higher risk associated with these derivatives, will promote central clearing, making the G20’s original 2009 reform program more effective. This could, in turn, contribute to the reduction of systemic risk.
The effectiveness of margin requirements could be undermined if the requirements were not consistent internationally. Activity could move to locations with lower margin requirements, raising two concerns:
The effectiveness of the margin requirements could be undermined (ie regulatory arbitrage).
Financial institutions that operate in the low-margin locations could gain a competitive advantage (ie unlevel playing field).
Margin and capital
Both capital and margin perform important risk mitigation functions but are distinct in a number of ways. First, margin is “defaulter-pay”: in the event of a counterparty default, margin protects the surviving party by absorbing losses using the collateral provided by the defaulting entity. In contrast, capital is “survivor-pay”: using capital to meet such losses consumes the surviving entity’s own financial resources, so capital adds loss absorbency to the system as a whole. Second, margin is more “targeted” and dynamic: each portfolio has its own designated margin for absorbing the potential losses in relation to that particular portfolio, and that margin is adjusted over time to reflect changes in the risk of the portfolio. In contrast, capital is shared collectively by all the entity’s activities, may thus be more easily depleted at a time of stress, and is difficult to adjust rapidly to reflect changing risk exposures. Capital requirements against each exposure are designed to cover not the full loss on the default of the counterparty but rather the probability-weighted loss given such default. For these reasons, margin, where effectively implemented, can be seen as offering enhanced protection against counterparty credit risk. In order for margin to act as an effective risk mitigant, it must be (i) accessible at the time of need and (ii) in a form that can be liquidated rapidly, at a predictable price, in a period of financial stress.
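The contrast between the two sizing principles can be illustrated numerically: margin is set per portfolio against a high quantile of potential losses, while capital covers the probability-weighted loss given default. The coverage level, loss scenarios, and credit parameters below are illustrative assumptions, not regulatory formulas:

```python
# Sketch contrasting defaulter-pay margin with survivor-pay capital.

def initial_margin(loss_scenarios, coverage=0.99):
    """Defaulter-pay: per-portfolio margin sized to the `coverage`
    quantile of that portfolio's simulated loss distribution."""
    ranked = sorted(loss_scenarios)
    idx = min(len(ranked) - 1, int(coverage * len(ranked)))
    return max(0.0, ranked[idx])

def capital_charge(exposure, default_prob, loss_given_default):
    """Survivor-pay: capital covers the probability-weighted loss given
    default, not the full loss on any one counterparty's default."""
    return exposure * default_prob * loss_given_default

# Illustrative numbers: 100 equally likely loss scenarios from 0 to 99.
losses = [float(x) for x in range(100)]
margin = initial_margin(losses)               # near-worst-case loss
capital = capital_charge(100.0, 0.02, 0.6)    # expected loss on default
```

On these hypothetical numbers the portfolio's margin is far larger than the capital charge against the same exposure, which is exactly the "enhanced protection" point in the text: margin is sized to absorb the loss itself, capital only its probability-weighted expectation.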
The interaction between capital and margin, however, is complex and is an area in which the full range of interactions needs further careful consideration. When calibrating the application of capital and margin, consideration must be given to factors such as: (i) differences in capital requirements across different types of entities; (ii) the effect certain margin requirements may have on the capital calculations of different types of regulated entities subject to differing capital requirements; and (iii) the current asymmetrical treatment of collateral in many regulatory capital frameworks where benefit is given for collateral received, but no cost is incurred for the (encumbrance) risks of collateral posted.
Impact of margin requirements on liquidity
The potential benefits of margin requirements must be weighed against the liquidity impact that would result from derivative counterparties’ need to provide liquid, high-quality collateral to meet those requirements, including potential changes to market functioning as a result of an increasing aggregate demand for such collateral. Financial institutions may need to obtain and deploy additional liquidity resources to meet margin requirements that exceed current practices. Moreover, the liquidity impact of margin requirements cannot be considered in isolation. Rather, it is important to recognise ongoing and parallel regulatory initiatives that will also have significant liquidity impacts; examples of such initiatives include the BCBS’s Liquidity Coverage Ratio (LCR), Net Stable Funding Ratio (NSFR) and global mandates for central clearing of standardised derivatives.
The US SEC has pointed out that the proposed margin requirements could have a much greater impact on securities firms regulated under net capital rules. Under such rules, securities firms are required to maintain at all times a minimum level of “net capital” (meaning highly liquid capital) in excess of all subordinated liabilities. When calculating net capital, the firm must deduct all assets that cannot be readily converted into cash, and adjust the value of liquid assets by appropriate haircuts. As such, in computing net capital, assets that are delivered by the firm to another party as margin collateral are treated as unsecured receivables from the party holding the collateral and are thus deducted in full.
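The net capital treatment described above can be sketched as a simple computation: liquid assets take haircuts, while illiquid assets and collateral posted as margin (an unsecured receivable from the collateral holder) are deducted in full. The asset classes, haircut levels, and figures below are illustrative assumptions, not the SEC's actual haircut schedule:

```python
# Sketch of the net-capital impact of posting margin collateral described
# in the text above. All names and numbers are hypothetical.

def net_capital(liquid_assets, haircuts, illiquid_assets, margin_posted):
    """Liquid assets after haircuts, minus full deductions for illiquid
    assets and for collateral posted as margin (treated as an unsecured
    receivable from the party holding the collateral)."""
    after_haircut = sum(value * (1.0 - haircuts.get(asset, 0.0))
                        for asset, value in liquid_assets.items())
    return after_haircut - illiquid_assets - margin_posted

example = net_capital(
    liquid_assets={"treasuries": 100.0, "equities": 50.0},
    haircuts={"treasuries": 0.01, "equities": 0.15},  # hypothetical levels
    illiquid_assets=10.0,
    margin_posted=20.0,
)
```

The full deduction of `margin_posted` is what makes the proposed requirements especially costly for firms under this regime: every dollar of collateral delivered to a counterparty reduces net capital dollar-for-dollar.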
As discussed in Part C of this consultative paper, the BCBS and IOSCO plan to conduct a quantitative impact study (QIS) in order to gauge the impact of the margin proposals. In particular, the QIS will assess the amount of margin required on non-centrally-cleared derivatives as well as the amount of available collateral that could be used to satisfy these requirements. The QIS will be conducted during the consultation period, and its results will inform the BCBS’s and IOSCO’s joint final proposal.
Macroprudential considerations
The BCBS and IOSCO also note that national supervisors may wish to establish margin requirements for non-centrally-cleared derivatives that, in addition to achieving the two principal benefits noted above, also create other desirable macroprudential outcomes. Further work by the relevant authorities is likely required to consider the details of how such outcomes might be identified and operationalised. The BCBS and IOSCO encourage further consideration of other potential macroprudential benefits of margin requirements for non-centrally-cleared derivatives and of the need for international coordination that may arise in this respect.
BIS, July 2012
http://www.bis.org/publ/bcbs226.htm
The G20 Leaders agreed in 2011 to add margin requirements on non-centrally-cleared derivatives to the reform programme for over-the-counter (OTC) derivatives markets. Margin requirements can further mitigate systemic risk in the derivatives markets. In addition, they can encourage standardisation and promote central clearing of derivatives by reflecting the generally higher risk of non-centrally-cleared derivatives. The consultative paper published today lays out a set of high-level principles on margining practices and treatment of collateral, and proposes margin requirements for non-centrally-cleared derivatives.
These policy proposals are articulated through a set of key principles that primarily seek to ensure that appropriate margining practices will be established for all non-centrally-cleared OTC derivative transactions. These principles will apply to all transactions that involve either financial firms or systemically important non-financial entities.
The Basel Committee and IOSCO would like to solicit feedback from the public on questions related to the scope, feasibility and impact of the margin requirements. Responses to the public consultation, together with the QIS results, will be considered in formulating a final joint proposal on margin requirements on non-centrally-cleared derivatives by year-end.
Excerpts:
Objectives of margin requirements for non-centrally-cleared derivatives
Margin requirements for non-centrally-cleared derivatives have two main benefits:
Reduction of systemic risk. Only standardised derivatives are suitable for central clearing. A substantial fraction of derivatives are not standardised and will not be able to be cleared.4 These non-centrally-cleared derivatives, which total hundreds of trillions of dollars of notional amounts,5 will pose the same type of systemic contagion and spillover risks that materialised in the recent financial crisis. Margin requirements for non-centrally-cleared derivatives would be expected to reduce contagion and spillover effects by ensuring that collateral are available to offset losses caused by the default of a derivatives counterparty. Margin requirements can also have broader macroprudential benefits, by reducing the financial system’s vulnerability to potentially de-stabilising procyclicality and limiting the build-up of uncollateralised exposures within the financial system.
Promotion of central clearing. In many jurisdictions central clearing will be mandatory for most standardised derivatives. But clearing imposes costs, in part because CCPs require margin to be posted. Margin requirements on non-centrally-cleared derivatives, by reflecting the generally higher risk associated with these derivatives, will promote central clearing, making the G20’s original 2009 reform program more effective. This could, in turn, contribute to the reduction of systemic risk.
The effectiveness of margin requirements could be undermined if the requirements were not consistent internationally. Activity could move to locations with lower margin requirements, raising two concerns:
The effectiveness of the margin requirements could be undermined (ie regulatory arbitrage).
Financial institutions that operate in the low-margin locations could gain a competitive advantage (ie unlevel playing field).
Margin and capital
Both capital and margin perform important risk mitigation functions but are distinct in a number of ways. First, margin is “defaulter-pay”. In the event of a counterparty default, margin protects the surviving party by absorbing losses using the collateral provided by the defaulting entity. In contrast, capital adds loss absorbency to the system, because it is “survivor-pay”, using capital to meet such losses consumes the surviving entity’s own financial resources. Second, margin is more “targeted” and dynamic, with each portfolio having its own designated margin for absorbing the potential losses in relation to that particular portfolio, and with such margin being adjusted over time to reflect changes in the risk of that portfolio. In contrast, capital is shared collectively by all the entity’s activities and may thus be more easily depleted at a time of stress, and is difficult to rapidly adjust to reflect changing risk exposures. Capital requirements against each exposure are not designed to be sufficient to cover the loss on the default of the counterparty but rather the probability weighted loss given such default. For these reasons, margin can be seen as offering enhanced protection against counterparty credit risk where it is effectively implemented. In order for margin to act as an effective risk mitigant, that margin must be (i) accessible at the time of need and (ii) in a form that can be liquidated rapidly in a period of financial stress at a predictable price.
The interaction between capital and margin, however, is complex and is an area in which the full range of interactions needs further careful consideration. When calibrating the application of capital and margin, consideration must be given to factors such as: (i) differences in capital requirements across different types of entities; (ii) the effect certain margin requirements may have on the capital calculations of different types of regulated entities subject to differing capital requirements; and (iii) the current asymmetrical treatment of collateral in many regulatory capital frameworks where benefit is given for collateral received, but no cost is incurred for the (encumbrance) risks of collateral posted.
Impact of margin requirements on liquidity
The potential benefits of margin requirements must be weighed against the liquidity impact that would result from derivative counterparties’ need to provide liquid, high-quality collateral to meet those requirements, including potential changes to market functioning as a result of an increasing aggregate demand for such collateral. Financial institutions may need to obtain and deploy additional liquidity resources to meet margin requirements that exceed current practices. Moreover, the liquidity impact of margin requirements cannot be considered in isolation. Rather, it is important to recognise ongoing and parallel regulatory initiatives that will also have significant liquidity impacts; examples include the BCBS’s Liquidity Coverage Ratio (LCR), Net Stable Funding Ratio (NSFR) and global mandates for central clearing of standardised derivatives.
The US SEC has pointed out that the proposed margin requirements could have a much greater impact on securities firms regulated under net capital rules. Under such rules, securities firms are required to maintain at all times a minimum level of “net capital” (meaning highly liquid capital) in excess of all subordinated liabilities. When calculating net capital, the firm must deduct all assets that cannot be readily converted into cash and adjust the value of liquid assets by appropriate haircuts. As such, in computing net capital, assets that are delivered by the firm to another party as margin collateral are treated as unsecured receivables from the party holding the collateral and are thus deducted in full.
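A stylised sketch of the mechanics just described may help. The numbers and the simple pair encoding are invented; the actual SEC net capital rule involves many more adjustments than shown here:

```python
# Simplified net capital computation: haircut the liquid assets, deduct
# illiquid assets and posted margin in full, subtract liabilities.

def net_capital(assets, liabilities):
    """assets: list of (market_value, haircut) pairs. A haircut of 1.0 means
    the asset is deducted in full -- e.g. margin collateral posted to a
    counterparty, which is treated as an unsecured receivable."""
    liquid_value = sum(v * (1.0 - h) for v, h in assets)
    return liquid_value - liabilities

nc = net_capital(
    assets=[
        (1000.0, 0.02),  # government bonds, 2% haircut
        (500.0, 0.15),   # equities, 15% haircut
        (100.0, 1.00),   # margin posted to a counterparty: deducted in full
    ],
    liabilities=900.0,
)
# 980 + 425 + 0 - 900 = 505
```

The illustration makes the SEC's concern visible: each unit of collateral posted under the new margin requirements reduces net capital one for one, regardless of the quality of the asset posted.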
As discussed in Part C of this consultative paper, the BCBS and IOSCO plan to conduct a quantitative impact study (QIS) in order to gauge the impact of the margin proposals. In particular, the QIS will assess the amount of margin required on non-centrally-cleared derivatives as well as the amount of available collateral that could be used to satisfy these requirements. The QIS will be conducted during the consultation period, and its results will inform the BCBS’s and IOSCO’s joint final proposal.
Macroprudential considerations
The BCBS and IOSCO also note that national supervisors may wish to establish margin requirements for non-centrally-cleared derivatives that, in addition to achieving the two principal benefits noted above, also create other desirable macroprudential outcomes. Further work by the relevant authorities is likely required to consider the details of how such outcomes might be identified and operationalised. The BCBS and IOSCO encourage further consideration of other potential macroprudential benefits of margin requirements for non-centrally-cleared derivatives and of the need for international coordination that may arise in this respect.
Friday, July 6, 2012
The (Other) Deleveraging. By Manmohan Singh
IMF Working Paper No. 12/179
July 2012
http://www.imfbookstore.org/IMFORG/WPIEA2012179
Summary: Deleveraging has two components: the shrinking of balance sheets due to increased haircuts and the shedding of assets, and the reduction in the interconnectedness of the financial system. We focus on the second aspect and show that post-Lehman there has been a significant decline in the interconnectedness of the pledged collateral market between banks and nonbanks. We find that neither the collateral nor its associated velocity had rebounded as of end-2011; volumes remain about $4-5 trillion below the end-2007 peak of $10 trillion. This paper updates Singh (2011); we compare these data with monetary aggregates (swollen largely by QE efforts in the US, euro area and UK) and discuss the overall financial lubrication that likely impacts the conduct of global monetary policy.
Excerpts:
Deleveraging from shrinking of bank balance sheets is not (yet) taking place; however, we still find the financial system imploding.
The reduction in debt (or deleveraging) has two components. The first (and more familiar) involves the shrinking of balance sheets. The other is a reduction in the interconnectedness of the financial system (Figure 1). Most recent research has focused on the impact of smaller balance sheets, overlooking this ‘other’ deleveraging resulting from reduced interconnectedness. Yet, as the current crisis unfolds, key actors in the global financial system seem to be “ring-fencing” themselves owing to heightened counterparty risk. While “rational” from an individual perspective, this behavior may have unintended consequences for the financial markets.
The interconnections nexus has become considerably more complex over the past two decades. The interconnectedness of the financial system may be viewed through the lens of collateral chains. Typically, collateral from hedge funds, pension funds, insurers, central banks, etc., is intermediated by the large global banks. For example, a Hong Kong hedge fund may get financing from UBS secured by its collateral. This collateral may include, say, Indonesian bonds, which will be pledged to UBS (U.K.) for re-use. There may be demand for such bonds from, for instance, a pension fund in Chile that may have Santander as its global bank. However, due to heightened counterparty risk, UBS may not want to onward-pledge to Santander, despite the demand for the collateral held at UBS. Fewer trusted counterparties in the market owing to elevated counterparty risk lead to stranded liquidity pools, incomplete markets, idle collateral, shorter collateral chains, missed trades and deleveraging. In volume terms, over the past decade this collateral use has become on par with monetary aggregates like M2.
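The chain arithmetic behind the paper's volume and velocity figures can be sketched in a few lines. The chain lengths and the $3.3 trillion source-pool figure below are illustrative assumptions; only the roughly $10 trillion end-2007 order of magnitude comes from the text:

```python
# Back-of-envelope collateral-chain arithmetic: the same source pool of
# collateral supports more (or less) total pledged volume depending on how
# far it is re-pledged along the chain.

def chain_volume(source_collateral, reuse_rates):
    """Total pledged collateral outstanding when an initial pool is
    re-pledged along a chain; reuse_rates[i] is the fraction onward-pledged
    at link i."""
    total = pool = source_collateral
    for rate in reuse_rates:
        pool *= rate
        total += pool
    return total

def velocity(total_pledged, source_collateral):
    """Collateral 'velocity': total pledged volume per unit of source
    collateral."""
    return total_pledged / source_collateral

# A $3.3T source pool onward-pledged twice in full supports ~$9.9T of
# positions (velocity 3); cutting the chain to a single 60% link shrinks the
# same pool's reach to ~$5.3T, with no change in the underlying collateral.
long_chain = chain_volume(3.3, [1.0, 1.0])
short_chain = chain_volume(3.3, [0.6])
```

This is why shorter re-pledging chains deleverage the system even when balance sheets and the stock of collateral are unchanged.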
The balance sheet shrinking due to ‘price decline’ (i.e., increased haircuts) has been studied extensively [...]. But the balance sheet shrinkage is being postponed: euro area bank balance sheets may have increased by up to €500bn since end-November 2011, helped by the liquidity injection from the ECB’s 3-year Longer-Term Refinancing Operations (LTROs), net of reduced Main Refinancing Operations (MROs).
However, the de-leveraging of the financial system due to the shortening of ‘re-pledging chains’ has not (yet) received attention. This deleveraging is taking place despite the recent official-sector support. This second component of deleveraging is contributing to the higher cost of credit to the real economy. In fact, relative to 2006, the primary indices that measure aggregate borrowing costs are well over 2.5 times higher in the U.S. and 4 times higher in the Eurozone (see Figure 2). This is after adjusting for the central bank rate cuts, which have lowered the total cost of borrowing for similar corporates (e.g., in the U.S., from about 6% in 2006 to about 4% at present). Figure 3 shows that for the past three decades the cost of borrowing for financials has been below that for non-financials; this has changed post-Lehman, however. Since much of the real economy (aside from the large industrials) resorts to banks to borrow, the higher borrowing cost for banks is then passed on to the real economy.
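The point that spreads have multiplied even while all-in borrowing costs fell can be confusing; a toy decomposition makes it explicit. The rates below are invented, chosen only to match the text's orders of magnitude (about 6% in 2006, about 4% now, a 2.5x spread multiple):

```python
# Toy decomposition: the credit spread (what the borrowing-cost indices
# capture) can multiply by 2.5x even while the all-in cost of borrowing
# falls, because policy rates fell by more than the spread rose.

def all_in_cost(policy_rate, credit_spread):
    return policy_rate + credit_spread

cost_2006 = all_in_cost(0.050, 0.010)   # ~6% all-in, 100bp spread
cost_now = all_in_cost(0.015, 0.025)    # ~4% all-in, spread 2.5x the 2006 level
spread_multiple = 0.025 / 0.010
```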
As the “other” deleveraging continues, the financial system remains short of high-grade collateral that can be re-pledged. Recent official-sector efforts, such as the ECB’s “flexibility” (and the ELA programs of national central banks in the Eurozone) in accepting “bad” collateral, attempt to keep the good/bad collateral ratio in the market higher than it would otherwise be. The ECB’s acceptance of good and bad collateral at non-market prices brings Gresham's law into play. But if such moves become part of the central banker’s standard toolkit, the fiscal aspects and risks associated with such policies cannot be ignored. By so doing, the central banks have interposed themselves as risk-taking intermediaries, with the potential to bring significant unintended consequences.
Thursday, July 5, 2012
Paths to Eurobonds
Paths to Eurobonds. By Stijn Claessens, Ashoka Mody, and Shahin Vallée
July, 2012
IMF Working Paper No. 12/172
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012172
Summary: This paper discusses proposals for common euro area sovereign securities. Such instruments can potentially serve two functions: in the short-term, stabilize financial markets and banks and, in the medium-term, help improve the euro area economic governance framework through enhanced fiscal discipline and risk-sharing. Many questions remain on whether financial instruments can ever accomplish such goals without bold institutional and political decisions, and, whether, in the absence of such decisions, they can create new distortions. The proposals discussed are also not necessarily competing substitutes; rather, they can be complements to be sequenced along alternative paths that possibly culminate in a fully-fledged Eurobond. The specific path chosen by policymakers should allow for learning and secure the necessary evolution of institutional infrastructures and political safeguards.
Excerpts:
The European Monetary Union was purposefully designed as a monetary union without a fiscal union. History has not been kind to such arrangements, as Bordo et al. (2011) argue and as several critics had warned before the eurozone came into being (for a review of that earlier literature, see Bornhorst, Mody, and Ohnsorge, forthcoming). The ongoing crisis appears to have validated these concerns. The absence of formal pooling of resources has required the construction of additional arrangements for inter-governmental fiscal support to respond to countries in crisis. These arrangements include the European Financial Stability Facility (EFSF) and the European Stability Mechanism (ESM). And as the crisis has evolved, the European Central Bank (the ECB) has needed to play an important role in supporting banks and, indirectly, sovereigns in need.
In this context, the common issuance of debt in the euro area has been increasingly evoked— including most recently by the European Parliament and the European Council—both as an immediate response to the financial crisis and as a structural feature of the monetary union.
This paper is a review of various proposals for common debt issuance. Clearly, common instruments are not the only or necessarily the primary way to reduce financial instability or improve economic, financial and fiscal governance in the euro area. Indeed, common debt issuance is inextricably linked to the shape and form of a future fiscal union. Because a fiscal (and banking) union is likely a longer-term project, a discussion of common instruments today can help sharpen the discussion of the choices underlying a fiscal union and possibly initiate more limited forms of risk-sharing and pooling that create a valuable learning process.
In undertaking this review, we are motivated by the following questions:
* How does the proposal change incentives of governments (debtors) and creditors? Does it offer clarity on how average and marginal costs of borrowing would be affected, and how default would be treated?
* What is the nature of the insurance that is being offered? Would the new instrument help reduce risk and improve liquidity? Who will want to hold those instruments?
* Would the (currently perverse) sovereign-bank linkages be reduced? What are the effects on the current (ill-)functioning of financial markets?
* What are the phasing-in, transitional, legal, and institutional issues?
* And, are there paths along which the different proposed instruments may be combined?
Conclusions
Common debt could bring reprieve from the current financial instability. Specifically, the creation of a large safe asset can reduce flight to safety from one sovereign to another and weaken the currently destabilizing links between banks and their respective sovereigns. Common debt issuance could also be a structural stabilizing feature of the euro area, helping to create deeper and more liquid financial markets and allowing the monetary union to capture the liquidity gains of a broader sovereign debt market. Importantly, these initiatives can serve to focus attention on the need for fiscal federalism, including macroeconomic stabilization and risk-sharing mechanisms but also fiscal discipline.
But there clearly are risks associated with such common instruments. In terms of fiscal discipline, the pricing approaches, in which countries’ own debt is ranked junior and hence carries a higher borrowing cost, are intriguing. But the tranching creates new challenges, not least if the junior tranches replicate the instability that we are currently witnessing. Similarly, to the extent that funds are earmarked to repay the common debt, greater pro-cyclicality may ensue, as earmarked resources are less available to deal with adverse shocks.
Ideally then, common debt should follow from a fundamental discussion of the long-term shape of a fiscal, financial and monetary union. The absence of a debate on fiscal union reflects in part historical concerns that one group of countries may become dependent on another group on a permanent basis. But short of addressing these fundamental issues completely, common debt issuance can initiate a political process towards this goal. If, for the moment, there is only appetite for limited and bounded fiscal risk-sharing, then the Eurobills can start a learning process. These could be scaled up if proven successful and evolve towards more ambitious structures. If the assessment is that a key task today is to bring debt-to-GDP ratios down before further progress can be made, then the Redemption Pact is the right first step. But this would take 20-25 years and delay the creation of a permanent mechanism to complete the monetary union.
Thus, addressing both the current debt overhang problem and insuring against loss of market access likely requires combining several proposals. And while a gradual phase-in provides some advantages, in particular as it can foster a political discussion about fiscal risk-sharing and transfers, the current financial crisis might call for more rapid introduction. Regardless, steps towards common debt issuance require an open political discussion given the importance of accountability and legitimacy dimensions associated with the embryonic creation of a fiscal union. Federations are not static political constructs and common debt issuance can both contribute to effective economic management and act as a catalyst for political change. In that sense, the proposals put forward are a constructive feature of the ongoing discussion, forcing a critical and focused rethinking of the EMU architecture.
Tuesday, July 3, 2012
Monitoring indicators for intraday liquidity management - consultative document
Basel Committee, July 2012
Intraday liquidity can be defined as funds that are accessible during the business day, usually to enable financial institutions to make payments in real time. The Basel Committee's proposed monitoring indicators for intraday liquidity management are intended to allow banking supervisors to monitor a bank's intraday liquidity risk management. Over time, the indicators will also help supervisors to gain a better understanding of banks' payment and settlement behaviour and their management of intraday liquidity risk.
The Basel Committee welcomes comments on this consultative document. Comments should be submitted by Friday 14 September 2012 by e-mail to baselcommittee@bis.org. Alternatively, comments may be sent by post to the Secretariat of the Basel Committee on Banking Supervision, Bank for International Settlements, CH-4002 Basel, Switzerland. All comments may be published on the website of the Bank for International Settlements unless a comment contributor specifically requests confidential treatment.
Saturday, June 30, 2012
Jonathan Haidt's The Righteous Mind: Why Good People Are Divided by Politics and Religion
Jonathan Haidt: He Knows Why We Fight. By Holman W Jenkins, Jr
Conservative or liberal, our moral instincts are shaped by evolution to strengthen 'us' against 'them.'
The Wall Street Journal, June 30, 2012, page A13
http://online.wsj.com/article/SB10001424052702303830204577446512522582648.html
Anyone who engages in political argument, and who isn't a moron, has had to recognize that decent, honest, intelligent people can come to opposite conclusions on public issues.
Jonathan Haidt, in an eye-opening and deceptively ambitious best seller, tells us why. The reason is evolution. Political attitudes are an extension of our moral reasoning; however much we like to tell ourselves otherwise, our moral responses are basically instinctual, despite attempts to gussy them up with ex-post rationalizations.
Our constellation of moral instincts arose because it helped us to cooperate. It helped us, with unprecedented speed, to dominate our planet. Yet the same moral instincts also mean we exist in a state of perpetual, nasty political disagreement, talking past each other, calling each other names.
So Mr. Haidt explains in "The Righteous Mind: Why Good People Are Divided by Politics and Religion," undoubtedly one of the most talked-about books of the year. "The Righteous Mind" spent weeks on the hardcover best-seller list. Mr. Haidt considers himself mostly a liberal, but his book has been especially popular in the conservative blogosphere. Some right-leaning intellectuals are even calling it the most important book of the year.
It's full of ammunition that conservatives will love to throw out at cocktail parties. His research shows that conservatives are much better at understanding and anticipating liberal attitudes than liberals are at appreciating where conservatives are coming from. Case in point: Conservatives know that liberals are repelled by cruelty to animals, but liberals don't think (or prefer not to believe) that conservatives are repelled too.
Mr. Haidt, until recently a professor of moral psychology at the University of Virginia, says the surveys conducted by his research team show that liberals are strong on evolved values he defines as caring and fairness. Conservatives value caring and fairness too but tend to emphasize the more tribal values like loyalty, authority and sanctity.
Conservative or liberal, our moral instincts are shaped by evolution to strengthen 'us' against 'them.'
The Wall Street Journal, June 30, 2012, page A13
http://online.wsj.com/article/SB10001424052702303830204577446512522582648.html
Nobody who engages in political argument, and who isn't a moron, hasn't had to recognize the fact that decent, honest, intelligent people can come to opposite conclusions on public issues.
Jonathan Haidt, in an eye-opening and deceptively ambitious best seller, tells us why. The reason is evolution. Political attitudes are an extension of our moral reasoning; however much we like to tell ourselves otherwise, our moral responses are basically instinctual, despite attempts to gussy them up with ex-post rationalizations.
Our constellation of moral instincts arose because it helped us to cooperate. It helped us, in unprecedented speed and fashion, to dominate our planet. Yet the same moral reaction also means we exist in a state of perpetual, nasty political disagreement, talking past each other, calling each other names.
So Mr. Haidt explains in "The Righteous Mind: Why Good People Are Divided by Politics and Religion," undoubtedly one of the most talked-about books of the year. "The Righteous Mind" spent weeks on the hardcover best-seller list. Mr. Haidt considers himself mostly a liberal, but his book has been especially popular in the conservative blogosphere. Some right-leaning intellectuals are even calling it the most important book of the year.
It's full of ammunition that conservatives will love to throw out at cocktail parties. His research shows that conservatives are much better at understanding and anticipating liberal attitudes than liberals are at appreciating where conservatives are coming from. Case in point: Conservatives know that liberals are repelled by cruelty to animals, but liberals don't think (or prefer not to believe) that conservatives are repelled too.
Mr. Haidt, until recently a professor of moral psychology at the University of Virginia, says the surveys conducted by his research team show that liberals are strong on evolved values he defines as caring and fairness. Conservatives value caring and fairness too but tend to emphasize the more tribal values like loyalty, authority and sanctity.
Conservatives, Mr. Haidt says, have been more successful politically because they play to the full spectrum of sensibilities, and because the full spectrum is necessary for a healthy society. An admiring review in the New York Times sums up this element of his argument: "Liberals dissolve moral capital too recklessly. Welfare programs that substitute public aid for spousal and parental support undermine the ecology of the family. Education policies that let students sue teachers erode classroom authority. Multicultural education weakens the cultural glue of assimilation."
Such a book is bound to run into the charge of scientism—claiming scientific authority for a mix of common sense, exhortation or the author's own preferences. Let it be said that Mr. Haidt is sensitive to this complaint. If he erred, he says, it was on the side of being accessible, readable and, he hopes, influential.
As we sit in his new office at New York University, he professes an immodest aim: He wants liberals and conservatives to listen to each other more, hate each other less, and to understand that their differences are largely rooted in psychology, not open-minded consideration of the facts. "My big issue, the one I'm somewhat evangelical about, is civil disagreement," he says.
A shorthand he uses is "follow the sacred"—and not in a good way. "Follow the sacred and there you will find a circle of motivated ignorance." Today's political parties are most hysterical, he says, on the issues they "sacralize." For the right, it's taxes. For the left, the sacred issues were race and gender but are becoming global warming and gay marriage.
Yet between the lines of his book is an even more dramatic claim: The same moral psychology that makes our politics so nasty also underlies the amazing triumph of the human species. "We shouldn't be here at all," he tells me. "When I think about life on earth, there should not be a species like us. And if there was, we should be out in the jungle killing each other in small groups. That's what you should expect. The fact that we're here [in politics] arguing viciously and nastily with each other, and no guns, that itself is a miracle. And I think we can make [our politics] a little better. That's my favorite theme."
Who is Jon Haidt? A nice Jewish boy from central casting, he grew up in Scarsdale, N.Y. His father was a corporate lawyer. "When the economy opened out in the '50s and '60s and Jews could go everywhere, he was part of that generation. He and all his buddies from Brooklyn did very well."
His family was liberal in the FDR tradition. At Yale he studied philosophy and, in standard liberal fashion, "emerged pretty convinced that I was right about everything." It took a while for him to discover the limits of that stance. "I wouldn't say I was mugged by reality. I would say I was gradually introduced to it academically," he says today.
In India, where he performed field studies early in his professional career, he encountered a society in some ways patriarchal, sexist and illiberal. Yet it worked and the people were lovely. In Brazil, he paid attention to the experiences of street children and discovered the "most dangerous person in the world is mom's boyfriend. When women have a succession of men coming through, their daughters will get raped," he says. "The right is right to be sounding the alarm about the decline of marriage, and the left is wrong to say, 'Oh, any kind of family is OK.' It's not OK."
At age 41, he decided to try to understand what conservatives think. The quest was part of his effort to apply his understanding of moral psychology to politics. He especially sings the praises of Thomas Sowell's "A Conflict of Visions," which he calls "an incredible book, a brilliant portrayal" of the argument between conservatives and liberals about the nature of man. "Again, as a moral psychologist, I had to say the constrained vision [of human nature] is correct."
That is, our moral instincts are tribal, adaptive, intuitive and shaped by evolution to strengthen "us" against "them." He notes that, in the 1970s, the left tended to be categorically hostile to evolutionary explanations of human behavior. Yet Mr. Haidt, the liberal and self-professed atheist, says he now finds the conservative vision speaks more insightfully to our evolved nature in ways that it would be self-defeating to discount.
"This is what I'm trying to argue for, and this is what I feel I've discovered from reading a lot of the sociology," he continues. "You need loyalty, authority and sanctity"—values that liberals are often suspicious of—"to run a decent society."
Mr. Haidt, a less chunky, lower-T version of Adam Sandler, has just landed a new position at the Stern School of Business at NYU. He arrived with his two children and wife, Jane, after a successful and happy 16-year run at the University of Virginia. An introvert by his own account, and never happier than when laboring in solitude, he nevertheless sought out the world's media capital to give wider currency to the ideas in "The Righteous Mind."
Mr. Haidt's book, as he's the first to notice, has given comfort to conservatives. Its aim is to help liberals. Though he calls himself a centrist, he remains a strongly committed Democrat. He voted for one Republican in his life—in 2000 crossing party lines to cast a ballot for John McCain in the Virginia primary. "I wasn't trying to mess with the Republican primary," he adds. "I really liked McCain."
His disappointment with President Obama is quietly evident. Ronald Reagan understood that "politics is more like religion than like shopping," he says. Democrats, after a long string of candidates who flogged policy initiatives like items in a Wal-Mart circular, finally found one who could speak to higher values than self-interest. "Obama surely had a chance to remake the Democratic Party. But once he got in office, I think, he was consumed with the difficulty of governing within the Beltway."
The president has reverted to the formula of his party—bound up in what Mr. Haidt considers obsolete interest groups, battles and "sacred" issues about which Democrats cultivate an immunity to compromise.
Mr. Haidt lately has been speaking to Democratic groups and urging attachment to a new moral vision, albeit one borrowed from the Andrew Jackson campaign of 1828: "Equal opportunity for all, special privileges for none."
Racial quotas and reflexive support for public-sector unions would be out. His is a reformed vision of a class-based politics of affirmative opportunity for the economically disadvantaged. "I spoke to some Democrats about things in the book and they asked, how can we weaponize this? My message to them was: You're not ready. You don't know what you stand for yet. You don't have a clear moral vision."
Like many historians of modern conservatism, he cites the 1971 Powell Memo—written by the future Supreme Court Justice Lewis Powell Jr.—which rallied Republicans to the defense of free enterprise and limited government. Democrats need their own version of the Powell Memo today to give the party a new and coherent moral vision of activist government in the good society. "The moral rot a [traditional] liberal welfare state creates over generations—I mean, the right is right about that," says Mr. Haidt, "and the left can't see it."
Yet one challenge becomes apparent in talking to Mr. Haidt: He's read his book and cheerfully acknowledges that he avoids criticizing too plainly the "sacralized" issues of his liberal friends.
In his book, for instance, is passing reference to Western Europe's creation of the world's "first atheistic societies," also "the least efficient societies ever known at turning resources (of which they have a lot) into offspring (of which they have very few)."
What does he actually mean? He means Islam: "Demographic curves are very hard to bend," he says. "Unless something changes in Europe in the next century, it will eventually be a Muslim continent. Let me say it diplomatically: Most religions are tribal to some degree. Islam, in its holy books, seems more so. Christianity has undergone a reformation and gotten some distance from its holy books to allow many different lives to flourish in Christian societies, and this has not happened in Islam."
Mr. Haidt is similarly tentative in spelling out his thoughts on global warming. The threat is real, he suspects, and perhaps serious. "But the left is now embracing this as their sacred issue, which guarantees that there will be frequent exaggerations and minor—I don't want to call it fudging of data—but there will be frequent mini-scandals. Because it's a moral crusade, the left is going to have difficulty thinking clearly about what to do."
Mr. Haidt, I observe, is noticeably less delicate when stepping on the right's toes. He reviles George W. Bush, whom he blames for running up America's debt and running down its reputation. He blames Newt Gingrich for perhaps understanding his book's arguments too well and importing an uncompromising moralistic language into the partisan politics of the 1990s.
Mr. Haidt also considers today's Republican Party a curse upon the land, even as he admires conservative ideas. He says its defense of lower taxes on capital income—mostly reported by the rich—is indefensible. He dismisses Mitt Romney as a "moral menial," a politician so cynical about the necessary cynicism of politics that he doesn't bother to hide his cynicism. (Some might call that a virtue.) He finds it all too typical that Republicans abandoned their support of the individual health-care mandate the moment Mr. Obama picked it up (though he also finds Chief Justice John Roberts's bend-over-backwards effort to preserve conservative constitutional principle while upholding ObamaCare "refreshing").
Why is his language so much less hedged when discussing Republicans? "Liberals are my friends, my colleagues, my social world," he concedes. Liberals also are the audience he hopes most to influence, helping Democrats to recalibrate their political appeal and their attachment to a faulty welfare state.
To which a visitor can only say, Godspeed. Even with his parsing out of deep psychological differences between conservatives and liberals, American politics still seem capable of a useful fluidity. To make progress we need both parties, and right now we could use some progress on taxes, incentives, growth and entitlement reform.
Mr. Jenkins writes the Journal's Business World column.
A framework for dealing with domestic systemically important banks - consultative document
June 2012
The consultative document sets out a framework of principles covering the assessment methodology and the higher loss absorbency requirement for domestic systemically important banks (D-SIBs). The D-SIB framework complements the global systemically important bank (G-SIB) framework published by the Basel Committee in November 2011, focusing on the impact that the distress or failure of a bank would have on the domestic economy. While not all D-SIBs are significant from a global perspective, the failure of such a bank could have a much greater impact on its domestic financial system and economy than the failure of a non-systemic institution. To accommodate the structural characteristics of individual jurisdictions, the assessment and application of policy tools should allow for an appropriate degree of national discretion. The D-SIB framework is therefore proposed as a principles-based approach, in contrast with the prescriptive approach of the G-SIB framework.
The proposed D-SIB framework requires banks identified as D-SIBs by their national authorities to comply with the principles beginning in January 2016. This is consistent with the phase-in arrangements for the G-SIB framework and means that national authorities must establish a D-SIB framework by 2016. The Basel Committee will introduce a strong peer review process for the implementation of the principles, which will help ensure that appropriate and effective frameworks for D-SIBs are in place across jurisdictions.
The Basel Committee welcomes comments on this consultative document. Comments should be submitted by Wednesday, 1 August 2012 by e-mail to: baselcommittee@bis.org. Alternatively, comments may be sent by post to the Secretariat of the Basel Committee on Banking Supervision, Bank for International Settlements, CH-4002 Basel, Switzerland. All comments may be published on the website of the Bank for International Settlements unless a comment contributor specifically requests confidential treatment.
Friday, June 29, 2012
Basel Committee - The internal audit function in banks - final document
June 28, 2012
The Basel Committee on Banking Supervision today issued its final document, The internal audit function in banks.
This supervisory guidance is built around 20 principles that seek to promote a strong internal audit function within banks. Drawing on lessons learned from the financial crisis, the principles revise and update the Committee's supervisory guidance issued in 2001, also taking account of developments in supervisory practices and in banking organisations. For that purpose, the guidance addresses supervisory expectations for the internal audit function and the supervisory assessment of that function. It also encourages bank internal auditors to comply with national and international professional standards on internal auditing. Finally, it promotes due consideration of prudential issues by internal auditors. An annex to the document details the responsibilities of a bank's audit committee.
Mr Stefan Ingves, Chairman of the Basel Committee on Banking Supervision and Governor of Sveriges Riksbank, Sweden's central bank, noted that "an internal audit function, independent from management and composed of competent auditors, is a key component of a bank's sound governance framework. The Committee's document lays out expectations that should help banks and their supervisors strengthen professional practices in this area."
An earlier version of today's guidance was issued for consultation in December 2011. The Committee wishes to thank those who provided feedback and comments.
Tuesday, June 26, 2012
Rising Tensions Over China's Monopoly on Rare Earths?
Rising Tensions Over China's Monopoly on Rare Earths? By Jane Nakano
East-West Center Asia Pacific Bulletin, no. 163
Washington, DC, May 2, 2012
http://www.eastwestcenter.org/publications/rising-tensions-over-china%E2%80%99s-monopoly-rare-earths
Jane Nakano, Fellow with the Energy and National Security Program at the Center for Strategic and International Studies, explains that "the current rare earth contention should serve as a reminder of the fundamental importance of supply diversification, and the enduring value that research and development plays in meeting many of the energy and resource related challenges society faces today."
Excerpts:
The recent Chinese industry consolidation may not be a welcome development as it will most likely increase the price of many rare earth materials. However, it is probably too short-sighted to view this move as a simple measure to side-step international complaints about China’s restrictive export policies on rare earth materials. In reality, the consolidation likely has multiple objectives, such as to demonstrate to the Chinese public an effort to both curb pollution and eradicate illegal mining, to ensure an adequate level of supply to domestic consumers, and to encourage higher value exports—if the consolidation leads to an in-flow of foreign rare earth processors to China. It would be neither easy nor particularly meaningful to determine which factor is most dominant.
Wednesday, June 20, 2012
Too Much Finance?
Too Much Finance? By Jean-Louis Arcand, Enrico Berkes, and Ugo Panizza
IMF Working Paper No. 12/161
June 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012161
Summary: This paper examines whether there is a threshold above which financial development no longer has a positive effect on economic growth. We use different empirical approaches to show that there can indeed be "too much" finance. In particular, our results suggest that finance starts having a negative effect on output growth when credit to the private sector reaches 100% of GDP. We show that our results are consistent with the "vanishing effect" of financial development and that they are not driven by output volatility, banking crises, low institutional quality, or by differences in bank regulation and supervision.
Excerpts:
Introduction
In this paper we use different datasets and empirical approaches to show that there can indeed be “too much” finance. In particular, our results show that the marginal effect of financial depth on output growth becomes negative when credit to the private sector reaches 80-100% of GDP. This result is surprisingly consistent across different types of estimators (simple cross-sectional and panel regressions as well as semi-parametric estimators) and data (country-level and industry-level). The threshold at which we find that financial depth starts having a negative effect on growth is similar to the threshold at which Easterly, Islam, and Stiglitz (2000) find that financial depth starts having a positive effect on volatility. This finding is consistent with the literature on the relationship between volatility and growth (Ramey and Ramey, 1995) and that on the persistence of negative output shocks (Cerra and Saxena, 2008). However, we show that our finding of a non-monotone relationship between financial depth and economic growth is robust to controlling for macroeconomic volatility, banking crises, and institutional quality.
Our results differ from those of Rioja and Valev (2004) who find that, even in their “high region,” finance has a positive, albeit small, effect on economic growth. This difference is probably due to the fact that they set their threshold for the "high region" at a level of financial depth which is much lower than the level for which we start finding that finance has a negative effect on growth.
Our results are instead consistent with the vanishing effect of financial depth found by Rousseau and Wachtel (2011). If the true relationship between financial depth and economic growth is non-monotone, models that do not allow for non-monotonicity will lead to a downward bias in the estimated relationship between financial depth and economic growth.
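The paper's central finding, that the marginal effect of credit depth on growth turns negative past a threshold, is the turning point of a non-monotone (e.g. quadratic) growth regression. The sketch below is a hypothetical illustration, not the authors' estimator: it simulates country data with an inverted-U relationship between growth and private credit (as a share of GDP), fits growth on credit and credit squared by OLS, and recovers the threshold where the marginal effect b1 + 2*b2*credit crosses zero. All numbers are made up for the simulation.

```python
import numpy as np

# Hypothetical quadratic growth regression:
#   growth = b0 + b1*credit + b2*credit^2 + noise,
# where `credit` is private credit over GDP. The marginal effect
# b1 + 2*b2*credit turns negative at credit* = -b1 / (2*b2).

rng = np.random.default_rng(0)
credit = rng.uniform(0.1, 1.8, size=500)              # simulated credit/GDP ratios
growth = (0.01 + 0.06 * credit - 0.033 * credit**2
          + rng.normal(0.0, 0.01, size=500))          # true turning point ~91% of GDP

# OLS on [1, credit, credit^2]
X = np.column_stack([np.ones_like(credit), credit, credit**2])
b = np.linalg.lstsq(X, growth, rcond=None)[0]

threshold = -b[1] / (2.0 * b[2])                      # estimated turning point
print(f"estimated threshold: {threshold:.0%} of GDP")
```

With the simulated coefficients the estimated threshold lands near the 80-100% of GDP range the paper reports, which is the point of the exercise: once curvature is allowed, the "vanishing effect" falls out of the turning point rather than being imposed.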
IMF Working Paper No. 12/161
June, 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012161
Summary: This paper examines whether there is a threshold above which financial development no longer has a positive effect on economic growth. We use different empirical approaches to show that there can indeed be "too much" finance. In particular, our results suggest that finance starts having a negative effect on output growth when credit to the private sector reaches 100% of GDP. We show that our results are consistent with the "vanishing effect" of financial development and that they are not driven by output volatility, banking crises, low institutional quality, or by differences in bank regulation and supervision.
Excerpts:
Introduction
In this paper we use different datasets and empirical approaches to show that there can indeed be “too much” finance. In particular, our results show that the marginal effect of financial depth on output growth becomes negative when credit to the private sector reaches 80-100% of GDP. This result is surprisingly consistent across different types of estimators (simple cross-sectional and panel regressions as well as semi-parametric estimators) and data (country-level and industry-level). The threshold at which we find that financial depth starts having a negative effect on growth is similar to the threshold at which Easterly, Islam, and Stiglitz (2000) find that financial depth starts having a positive effect on volatility. This finding is consistent with the literature on the relationship between volatility and growth (Ramey and Ramey, 1995) and that on the persistence of negative output shocks (Cerra and Saxena, 2008). However, we show that our finding of a non-monotone relationship between financial depth and economic growth is robust to controlling for macroeconomic volatility, banking crises, and institutional quality.
Our results differ from those of Rioja and Valev (2004) who find that, even in their “high region,” finance has a positive, albeit small, effect on economic growth. This difference is probably due to the fact that they set their threshold for the "high region" at a level of financial depth which is much lower than the level for which we start finding that finance has a negative effect on growth.
Our results are instead consistent with the vanishing effect of financial depth found by Rousseau and Wachtel (2011). If the true relationship between financial depth and economic growth is non-monotone, models that do not allow for non-monotonicity will lead to a downward bias in the estimated relationship between financial depth and economic growth.
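The non-monotone relationship described in these excerpts can be sketched with a simple quadratic growth specification. The coefficients below are hypothetical illustrations chosen so the effect vanishes near 90 percent of GDP; they are not the paper's estimates.

```python
def marginal_effect(credit_to_gdp, b1, b2):
    """Marginal effect of financial depth on growth in the quadratic model
    g = a + b1*F + b2*F^2, where F is private credit as a share of GDP."""
    return b1 + 2.0 * b2 * credit_to_gdp

def turning_point(b1, b2):
    """Level of financial depth at which the marginal effect turns negative."""
    return -b1 / (2.0 * b2)

# Hypothetical coefficients: positive but diminishing effect of finance on growth
b1, b2 = 0.045, -0.025
assert marginal_effect(0.5, b1, b2) > 0   # positive effect at 50% of GDP
assert marginal_effect(1.2, b1, b2) < 0   # negative effect at 120% of GDP
```

A model that omits the quadratic term forces a single slope on both regions, which is one way to see the downward bias the authors describe.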
Monday, June 18, 2012
Monitoring Systemic Risk Based on Dynamic Thresholds
Monitoring Systemic Risk Based on Dynamic Thresholds. By Kasper Lund-Jensen
IMF Working Paper No. 12/159
June 2012
http://www.imfbookstore.org/ProdDetails.asp?ID=WPIEA2012159
Summary: Successful implementation of macroprudential policy is contingent on the ability to identify and estimate systemic risk in real time. In this paper, systemic risk is defined as the conditional probability of a systemic banking crisis, and this conditional probability is modeled in a fixed-effect binary response model framework. The model structure is dynamic and is designed for monitoring, as the systemic risk forecasts depend only on data that are available in real time. Several risk factors are identified, and it is shown that the level of systemic risk contains a predictable component which varies through time. Furthermore, it is shown how the systemic risk forecasts map into crisis signals and how policy thresholds are derived in this framework. Finally, in an out-of-sample exercise, it is shown that the systemic risk estimates provided reliable early warning signals ahead of the recent financial crisis for several economies.
Excerpts:
Introduction
The financial crisis of 2007–09, and the global economic recession that followed, have highlighted the importance of a macroprudential policy framework which seeks to limit systemic financial risk. While there is still no consensus on how to implement macroprudential policy, it is clear that successful implementation is contingent on establishing robust methods for monitoring systemic risk. This paper takes a step toward achieving that goal. Systemic risk assessment in real time is a challenging task because systemic financial risk is, to a large extent, intrinsically unpredictable. However, this study shows, in a fixed-effect binary response model framework, that systemic risk does contain a component that varies in a predictable way through time, and that modeling this component can potentially improve policy decisions.
In this paper, systemic risk is defined as the conditional probability of a systemic banking crisis, and I am interested in modeling and forecasting this (potentially) time-varying probability. If systemic banking crises differed completely in their underlying causes, triggers, and economic impact, the conditional crisis probability would be unpredictable. However, as illustrated in section IV, systemic banking crises appear to share many commonalities. For example, banking crises are often preceded by prolonged periods of high credit growth and tend to occur when the banking sector is highly leveraged.
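The kind of fixed-effect binary response model described here can be sketched as a logit mapping from lagged risk factors to a crisis probability. The coefficients and risk-factor values below are hypothetical illustrations, not the paper's estimates.

```python
import math

def crisis_probability(alpha_i, betas, risk_factors):
    """Conditional probability of a systemic banking crisis in a logit-type
    binary response model: P(crisis) = Lambda(alpha_i + x'beta), where
    alpha_i is a country fixed effect and x are lagged risk factors."""
    z = alpha_i + sum(b * x for b, x in zip(betas, risk_factors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: credit growth and banking sector leverage raise risk
betas = [0.8, 0.5]               # [credit-to-GDP growth, leverage]
calm = crisis_probability(-3.0, betas, [0.02, 1.1])
boom = crisis_probability(-3.0, betas, [0.15, 1.6])
assert 0.0 < calm < boom < 1.0   # risk rises with the risk factors
```

Because the right-hand side uses only lagged, real-time-available data, the fitted probability can serve directly as a monitoring forecast, which is the design goal stated above.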
Systemic risk can be characterized by both cross-sectional and time-related dimensions (e.g. Hartmann, de Bandt, and Alcalde, 2009). The cross-sectional dimension concerns how risks are correlated across financial institutions at a given point in time due to direct and indirect linkages across institutions and prevailing default conditions. The time series dimension concerns the evolution of systemic risk over time due to changes in the macroeconomic environment. This includes changes in the default cycle, changes in financial market conditions, and the potential build-up of financial imbalances such as asset and credit market bubbles. The focus in this paper is on the time dimension of systemic risk although the empirical analysis includes a variable that proxies for the strength of interconnectedness between financial institutions.
This paper makes the following contributions to the literature on systemic risk assessment. First, it employs a dynamic binary response model, based on a large panel of 68 advanced and emerging economies, to identify leading indicators of systemic risk. While Demirgüç-Kunt and Detragiache (1998a) study the determinants of banking crises, the purpose of this paper is to evaluate whether systemic risk can be monitored in real time. Consequently, it employs a purely dynamic model structure, so that the systemic risk forecasts are based solely on information available in real time. Furthermore, the estimation strategy employed in this paper is consistent under more general conditions than the random effects estimator used in other studies (e.g., Demirgüç-Kunt and Detragiache (1998a) and Wong, Wong and Leung (2010)). Second, this paper shows how to derive risk factor thresholds in the binary response model framework. The threshold for a single risk factor is dynamic in the sense that it depends on the values of the other risk factors, and it is argued that this approach has some advantages relative to static thresholds based on the signal extraction approach. Finally, I perform a pseudo out-of-sample analysis for the period 2001–2010 in order to assess whether the risk factors provided early-warning signals ahead of the recent financial crisis.
Based on the empirical analysis, I reach the following main conclusions:
1. Systemic risk, as defined here, does appear to be predictable in real time. In particular, the following risk factors are identified: banking sector leverage, equity price growth, the credit-to-GDP gap, real effective exchange rate appreciation, changes in banks’ lending premium, and the degree of banks’ interconnectedness as measured by the ratio of non-core to core bank liabilities. There is also some evidence suggesting that house price growth increases systemic risk, but the effect is not statistically significant at conventional levels.
2. There exists a significant contagion effect between economies. When an economy with a large financial sector is experiencing a systemic banking crisis, the systemic risk forecasts in other economies increase significantly.
3. Rapid credit growth in a country is often associated with a higher level of systemic risk. However, as highlighted in a recent IMF report (2011), a boom in credit can also reflect a healthy market response to expected future productivity gains as a result of new technology, new resources or institutional improvements. Indeed, many episodes of credit booms were not followed by a systemic banking crisis or any other material instability. It is critical that a policymaker is able to distinguish between these two scenarios when implementing economic policy. I find empirical evidence which suggests that credit growth increases systemic risk considerably more when accompanied by high equity price growth. Therefore, I argue that the evolution in equity prices can be useful for identifying a healthy credit expansion.
4. In a crisis signaling exercise, I find that the binary response model approach outperforms the popular signal extracting approach in terms of type I and type II errors.
5. Based on a model specification with credit-to-GDP growth, banking sector leverage and equity price growth I carefully evaluate the optimal credit-to-GDP growth threshold. Contrary to the signal extraction approach the optimal threshold is not static but depends on the value of the other risk factors. For example, the threshold is around 10 percent if equity prices have decreased by 10 percent and banking sector leverage is around 130 percent but only around 0 percent if equity prices have grown by 20 percent and banking sector leverage is 160 percent. In comparison, the signal extraction method leads to a (static) credit-to-GDP growth threshold of 4.9 percent based on the same data sample.
6. In the out-of-sample analysis, I find that the systemic risk factors generally provided informative signals in many countries. Based on an in-sample calibration, around 50–80 percent of the crises were correctly identified in real time without generating too many false signals. In particular, a monitoring model based on credit-to-GDP growth and banking sector leverage provided early warning signals ahead of the U.S. subprime crisis in 2007.
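The dynamic-threshold idea in point 5 can be sketched by inverting a logit specification: given a policy probability cutoff, the credit-growth threshold is the value that pushes the predicted crisis probability to that cutoff, conditional on the other risk factors. All coefficients and values below are hypothetical, not the paper's estimates.

```python
import math

def credit_growth_threshold(p_star, alpha, b_credit, b_lev, b_eq,
                            leverage, equity_growth):
    """Solve alpha + b_credit*c + b_lev*lev + b_eq*eq = logit(p_star) for c:
    the credit-to-GDP growth level at which the predicted crisis probability
    reaches the policy cutoff p_star, given the other risk factors."""
    logit_p = math.log(p_star / (1.0 - p_star))
    return (logit_p - alpha - b_lev * leverage - b_eq * equity_growth) / b_credit

# Hypothetical coefficients: the threshold falls as leverage and equity growth rise
t_low  = credit_growth_threshold(0.05, -5.0, 6.0, 1.2, 1.5, 1.3, -0.10)
t_high = credit_growth_threshold(0.05, -5.0, 6.0, 1.2, 1.5, 1.6, 0.20)
assert t_high < t_low   # the threshold depends on the other risk factors
```

This is exactly the sense in which the threshold is dynamic: unlike the static signal-extraction cutoff, the tolerable level of credit growth shrinks when the other risk factors are already elevated.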
Wednesday, June 13, 2012
Fiscal Transparency, Fiscal Performance and Credit Ratings
Fiscal Transparency, Fiscal Performance and Credit Ratings. By Arbatli, Elif; Escolano, Julio
IMF Working Paper No. 12/156
June 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25996.0
Summary: This paper investigates the effect of fiscal transparency on market assessments of sovereign risk, as measured by credit ratings. It measures this effect through a direct channel (uncertainty reduction) and an indirect channel (better fiscal policies and outcomes), and it differentiates between advanced and developing economies. Fiscal transparency is measured by an index based on the IMF’s Reports on the Observance of Standards and Codes (ROSCs). We find that fiscal transparency has a positive and significant effect on ratings, but it works through different channels in advanced and developing economies. In advanced economies the indirect effect of transparency through better fiscal outcomes is more significant whereas for developing economies the direct uncertainty-reducing effect is more relevant. Our results suggest that a one standard deviation improvement in fiscal transparency index is associated with a significant increase in credit ratings: by 0.7 and 1 notches in advanced and developing economies respectively.
Tuesday, June 12, 2012
Bringing Africa Back to Life: The Legacy of George W. Bush
Bringing Africa Back to Life: The Legacy of George W. Bush. By Jim Landers
Dallas Morning News
June 08, 2012
LUSAKA, Zambia — On a beautiful Saturday morning, Delfi Nyankombe stood among her bracelets and necklaces at a churchyard bazaar and pondered a question: What do you think of George W. Bush?
“George Bush is a great man,” she answered. “He tried to help poor countries like Zambia when we were really hurting from AIDS. He empowered us, especially women, when the number of people dying was frightening. Now we are able to live.”
Nyankombe, 38, is a mother of three girls. She also admires the former president because of his current campaign to corral cervical cancer. Few are screened for the disease, and it now kills more Zambian women than any other cancer.
“By the time a woman knows, she may need radiation or chemotherapy that can have awful side effects, like fistula,” she said. “This is a big problem in Zambia, and he’s still helping us.”
The debate over a president’s legacy lasts many years longer than his term of office. At home, there’s still no consensus about the 2001-09 record of George W. Bush, with its wars and economic turmoil.
In Africa, he’s a hero.
“No American president has done more for Africa,” said Festus Mogae, who served as president of Botswana from 1998 to 2008. “It’s not only me saying that. All of my colleagues agree.”
AIDS was an inferno burning through sub-Saharan Africa. The American people, led by Bush, checked that fire and saved millions of lives.
People with immune systems badly weakened by HIV were given anti-retroviral drugs that stopped the progression of the disease. Mothers and newborns were given drugs that stopped the transmission of the virus from one generation to the next. Clinics were built. Doctors and nurses and lay workers were trained. A wrenching cultural conversation about sexual practices broadened, fueled by American money promoting abstinence, fidelity and the use of condoms.
“We kept this country from falling off the edge of a cliff,” said Mark Storella, the U.S. ambassador to Zambia. “We’ve saved hundreds of thousands of lives. We’ve assisted over a million orphans. We’ve created a partnership with Zambia that gives us the possibility of walking the path to an AIDS-free generation. This is an enormous achievement.”
Bush remains active in African health. Last September, he launched a new program — dubbed Pink Ribbon Red Ribbon — to tackle cervical and breast cancer among African women. The program has 14 co-sponsors, including the Obama administration.
Read the rest here: http://www.bushcenter.com/blog/2012/06/11/icymi-bringing-africa-back-to-life-the-legacy-of-george-w-bush
Systemic Risk and Asymmetric Responses in the Financial Industry
Systemic Risk and Asymmetric Responses in the Financial Industry. By López-Espinosa, Germán; Moreno, Antonio; Rubia, Antonio; and Valderrama, Laura
IMF Working Paper No. 12/152
June, 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25991.0
Summary: To date, an operational measure of systemic risk capturing non-linear tail comovement between system-wide and individual bank returns has not yet been developed. This paper proposes an extension of the so-called CoVaR measure that captures the asymmetric response of the banking system to positive and negative shocks to the market-valued balance sheets of individual banks. For the median of our sample of U.S. banks, the relative impact on the system of a fall in individual market value is sevenfold that of an increase. Moreover, the downward bias in systemic risk from ignoring this asymmetric pattern increases with bank size. The conditional tail comovement between the banking system and a top-decile bank which is losing market value is 5.4 times larger than the unconditional tail comovement, versus only 2.2 times for banks in the bottom decile. The asymmetric model also produces much better estimates and fit, and thus improves the capacity to monitor systemic risk. Our results suggest that ignoring asymmetries in tail interdependence may lead to a severe underestimation of systemic risk in a downward market.
Excerpts:
In this paper, we discuss the suitability of the general modeling strategy implemented in Adrian and Brunnermeier (2011) and propose a direct extension which accounts for nonlinear tail comovements between individual bank returns and financial system returns. Like most VaR models, the CoVaR approach builds on semi-parametric assumptions that characterize the dynamics of the time series of returns. Among others, the procedure requires the specification of the functional form that relates the conditional quantile of the whole financial system to the returns of the individual firm. The model proposed by Adrian and Brunnermeier (2011) assumes that system returns depend linearly on individual returns, so changes in the latter would feed proportionally into the former. This assumption is simple, convenient, and to a large extent facilitates the estimation of the parameters involved and the generation of downside-risk comovement estimates. On the other hand, this structure imposes certain limitations, as it neglects nonlinear patterns in the propagation of volatility shocks and of perturbations to the risk factors affecting banks' exposures. Both patterns feature distinctively in downside-risk dynamics.
There are strong economic arguments suggesting that the financial system may respond nonlinearly to shocks initiated in a single institution. A sizeable, positive shock in an individual bank is unlikely to generate the same characteristic response (i.e., comovement with the system) in absolute terms as a massive negative shock of the same magnitude, particularly when dealing with large-scale financial institutions. The disruption to the banking system caused by the failure of a financial institution may occur through direct exposures to the failing institution, through the contraction of financial services provided by the weakening institution (clearing, settlement, custodial or collateral management services), or from a shock to investor confidence that spreads to sound institutions under adverse selection imperfections (Nier, 2011). Indeed, an extreme idiosyncratic shock in the banking industry will not only reduce the market value of the stocks affected, but may also spread uncertainty through the system, prompting depositors and lending counterparties to withdraw their holdings from performing institutions and across unrelated asset classes, precipitating widespread insolvency. Historical experience suggests that a loss of confidence in the soundness of the banking sector takes time to dissipate and may generate devastating effects on the real economy. Bernanke (1983) concludes that bank runs were largely responsible for the systemic collapse of the financial industry and the subsequent contagion to the real sectors during the Great Depression. Another channel of contagion in a downward market is through the fire sales of assets initiated by the stricken institution to restore its capital adequacy, causing valuation losses in firms holding equivalent securities.
This mechanism, induced by the generalized collateral lending practices that are prevalent in the wholesale interbank market, can exacerbate price volatility in a crisis situation, as discussed by Brunnermeier and Pedersen (2009). The increased complexity and connectedness of financial institutions can generate “Black Swan” effects, morphing small perturbations in one part of the financial system into large negative shocks on seemingly unrelated parts of the system. These arguments suggest that the financial system is more sensitive to downside losses than to upside gains. In such a case, the linear assumption in Adrian and Brunnermeier (2011) would neglect a key aspect of downside risk modeling and lead to an underestimation of the systemic risk contribution of an individual bank.
We propose a simple extension of this procedure that encompasses the linear functional form as a special case and which, more generally, allows us to capture asymmetric patterns in systemic spillovers. We shall refer to this specification as asymmetric CoVaR in the sequel. This approach retains the tractability of the linear model, which ensures that parameters can readily be identified by appropriate techniques, and produces CoVaR estimates which are expected to be more accurate. Furthermore, given the resultant estimates, the existence of the nonlinear patterns that motivate the asymmetric model can be addressed formally through a standard Wald test statistic. In this paper, we analyze the suitability of the asymmetric CoVaR in a comprehensive sample of U.S. banks over the period 1990–2010. We find strong statistical evidence of asymmetric patterns in the marginal contribution of these banks to systemic risk. Neglecting these nonlinearities gives rise to estimates that systematically underestimate the marginal contribution to systemic risk. Remarkably, the magnitude of the bias is tied to the size of the firm, so that the bigger the company, the greater the underestimation bias. This result is consistent with the too-big-to-fail hypothesis, which stresses the need to maintain continuity of the vital economic functions of a large financial institution whose disorderly failure would cause significant disruption to the wider financial system. Ignoring the existence of asymmetries would thus lead to estimates that understate risk contributions, more so for large firms, which are more likely to be systemic. Accounting for asymmetries in a simple extension of the model removes that bias.
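A minimal sketch of the asymmetric specification follows, together with the check (pinball) loss minimized in quantile regression. The coefficients are hypothetical; a real application would estimate them by quantile regression on bank and system returns, as in the paper.

```python
def asymmetric_quantile(r_bank, b0, b_neg, b_pos):
    """Conditional quantile of system returns in the asymmetric CoVaR spirit:
    q_system = b0 + b_neg*min(r_bank, 0) + b_pos*max(r_bank, 0).
    The linear CoVaR model is the special case b_neg == b_pos."""
    return b0 + b_neg * min(r_bank, 0.0) + b_pos * max(r_bank, 0.0)

def pinball_loss(y, q, tau):
    """Check (pinball) loss minimized in quantile regression at level tau."""
    u = y - q
    return max(tau * u, (tau - 1.0) * u)

# Hypothetical coefficients: losses propagate 7x more strongly than gains
b0, b_neg, b_pos = -0.02, 1.4, 0.2
down = asymmetric_quantile(-0.05, b0, b_neg, b_pos)   # bank loses 5%
up   = asymmetric_quantile(+0.05, b0, b_neg, b_pos)   # bank gains 5%
assert abs(abs(down - b0) - 7.0 * abs(up - b0)) < 1e-12
```

Splitting the regressor into its negative and positive parts keeps the model linear in parameters, which is why the extension preserves the tractability of the original CoVaR estimation.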
Concluding Remarks
In this paper, we have discussed the suitability of the CoVaR procedure recently proposed by Adrian and Brunnermeier (2011). This valuable approach helps understand the drivers of systemic risk in the banking industry. Implementing the procedure in practice requires specifying the unobservable functional form that relates the dynamics of the conditional tail of the system's returns to the returns of an individual bank. Adrian and Brunnermeier (2011) build on a model that assumes a simple linear representation, such that returns are proportional.
We show that this approach may provide a reasonable approximation for small-sized banks. However, in more general terms, and particularly for large-scale banks, the linear assumption leads to a severe underestimation of the conditional comovement in a downward market; hence, the systemic importance of these banks may be perceived to be lower than their actual contribution to systemic risk. Yet how to measure, and effectively buttress, the resilience of the financial system to losses crystallizing in a stress scenario is the main concern of policy makers, regulatory authorities, and financial markets alike. Witness the rally in U.S. equities and the dollar on March 14, 2012, after the regulator announced favorable bank stress test results for the largest nineteen U.S. bank holding companies.
The reason is that the symmetric model implicitly assumes that positive and negative individual returns are equally strong to capture downside risk comovement. Our empirical results however, provide robust evidence that negative shocks to individual returns generate a much larger impact on the financial system than positive disturbances. For a median-sized bank, the relative impact ratio is sevenfold. We contend that this non-linear pattern should be acknowledged in the econometric modeling of systemic risk to avoid a serious misappraisal of risk. Moreover, our analysis suggests that the symmetric specification introduces systematic biases in risk assessment as a function of bank size. Specifically, the distortion caused by a linear model misspecification is more pronounced for larger banks, which are also more systemic on average. Our results show that tail interdependence between the financial system and a bottom-size decile bank which is contracting its balance sheet is 2.2 times larger than its average comovement. More strikingly, this ratio reaches 5.4 for the top-size decile bank. This result is in line with the too-big-to-fail hypothesis and lends support to recent recommendations put forth by the Financial Stability Board to require higher loss absorbency capacity on large banks. Likewise, it is consistent with the resolution plan required by the Dodd-Frank Act for bank holding companies and non-bank financial institutions with $50 billion or more in total assets. Submitting periodically a plan for rapid and orderly resolution in the event of financial distress or failure will enable the FDIC to evaluate potential loss severity and minimize the disruption that a failure may have in the rest of the system, thus performing its resolution functions more e¢ ciently. This measure will also help alleviate moral hazard concerns associated with systemic institutions and strengthen the stability of the overall financial system.
To capture the asymmetric pattern on tail comovement, we propose a simple yet e¤ective extension of the original CoVaR model. This extension preserves the tractability of the original model and its suitability can formally be tested formally through a simple Wald-type test, given the estimates of the model. We show that this simple extension is robust to more general specifications capturing non-linear patterns in returns, though at the expense of losing tractability.
The refinement of the CoVaR statistical measure presented in the paper aims at quantifying asymmetric spillover e¤ects when strains in banks' balance sheets are elevated, and thus contributes a step towards strengthening systemic risk monitoring in stress scenarios. Furthermore, its focus on tail comovement originated from negative perturbations in the growth rate of market-valued banks' balance sheets, may yield insights into the impact on the financial system from large-scale deleveraging by banks seeking to rebuild their capital cushions. This particular concern has been recently rekindled by the continued spikes in volatility in euro area financial markets. It has been estimated that, following pressures on the European banking system as banks cope with sovereign stress, European banks may shrink their combined balance sheet significantly with the potential of unleashing shockwaves to emerging economies hurting their financial stability (IMF, 2012). The estimation of the impact on the real economy from aggregate weakness of the financial sector, and the design of optimal macroprudential policies to arrest systemic risk when tail interdependencies feed non-linearly into the financial system, are left for future research.
IMF Working Paper No. 12/152
June 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25991.0
Summary: To date, no operational measure of systemic risk capturing non-linear tail comovement between system-wide and individual bank returns has been developed. This paper proposes an extension of the so-called CoVaR measure that captures the asymmetric response of the banking system to positive and negative shocks to the market-valued balance sheets of individual banks. For the median bank in our sample of U.S. banks, the relative impact on the system of a fall in individual market value is sevenfold that of an increase. Moreover, the downward bias in systemic risk from ignoring this asymmetric pattern increases with bank size. The conditional tail comovement between the banking system and a top-decile bank that is losing market value is 5.4 times larger than the unconditional tail comovement, versus only 2.2 for banks in the bottom decile. The asymmetric model also produces markedly better estimates and fit, and thus improves the capacity to monitor systemic risk. Our results suggest that ignoring asymmetries in tail interdependence may lead to a severe underestimation of systemic risk in a downward market.
Excerpts:
In this paper, we discuss the suitability of the general modeling strategy implemented in Adrian and Brunnermeier (2011) and propose a direct extension which accounts for nonlinear tail comovements between individual bank returns and financial system returns. Like most VaR models, the CoVaR approach builds on semi-parametric assumptions that characterize the dynamics of the time series of returns. Among others, the procedure requires the specification of the functional form that relates the conditional quantile of the whole financial system to the returns of the individual firm. The model proposed by Adrian and Brunnermeier (2011) assumes that system returns depend linearly on individual returns, so changes in the latter would feed proportionally into the former. This assumption is simple, convenient, and to a large extent facilitates the estimation of the parameters involved and the generation of downside-risk comovement estimates. On the other hand, this structure imposes certain limitations, as it neglects nonlinear patterns in the propagation of volatility shocks and of perturbations to the risk factors affecting banks' exposures. Both patterns feature distinctively in downside-risk dynamics.
There are strong economic arguments suggesting that the financial system may respond nonlinearly to shocks initiated in a single institution. A sizeable positive shock in an individual bank is unlikely to generate the same characteristic response (i.e., comovement with the system) in absolute terms as a massive negative shock of the same magnitude, particularly if dealing with large-scale financial institutions.2 The disruption to the banking system caused by the failure of a financial institution may occur through direct exposures to the failing institution, through the contraction of financial services provided by the weakening institution (clearing, settlement, custodial, or collateral management services), or from a shock to investor confidence that spreads out to sound institutions under adverse selection imperfections (Nier, 2011). Indeed, an extreme idiosyncratic shock in the banking industry will not only reduce the market value of the stocks affected, but may also spread uncertainty through the system, prompting depositors and lending counterparties to withdraw their holdings from performing institutions and across unrelated asset classes, precipitating widespread insolvency. Historical experience suggests that a loss of confidence in the soundness of the banking sector takes time to dissipate and may generate devastating effects on the real economy. Bernanke (1983) concludes that bank runs were largely responsible for the systemic collapse of the financial industry and the subsequent contagion to the real sectors during the Great Depression. Another channel of contagion in a downward market is the fire sale of assets initiated by the stricken institution to restore its capital adequacy, causing valuation losses in firms holding equivalent securities.
This mechanism, induced by the generalized collateral lending practices that are prevalent in the wholesale interbank market, can exacerbate price volatility in a crisis situation, as discussed by Brunnermeier and Pedersen (2009). The increased complexity and connectedness of financial institutions can generate "Black Swan" effects, morphing small perturbations in one part of the financial system into large negative shocks on seemingly unrelated parts of the system. These arguments suggest that the financial system is more sensitive to downside losses than to upside gains. In such a case, the linear assumption in Adrian and Brunnermeier (2011) would neglect a key aspect of downside risk modeling and lead to underestimating the extent of an individual bank's contribution to systemic risk.
We propose a simple extension of this procedure that encompasses the linear functional form as a special case and which, more generally, allows us to capture asymmetric patterns in systemic spillovers. We shall refer to this specification as asymmetric CoVaR in the sequel. This approach retains the tractability of the linear model, which ensures that parameters can readily be identified by appropriate techniques, and produces CoVaR estimates that are expected to be more accurate. Furthermore, given the resultant estimates, the existence of the nonlinear patterns that motivate the asymmetric model can be tested formally through a standard Wald test statistic. In this paper, we analyze the suitability of the asymmetric CoVaR in a comprehensive sample of U.S. banks over the period 1990-2010. We find strong statistical evidence of asymmetric patterns in the marginal contribution of these banks to systemic risk. Neglecting these nonlinearities gives rise to estimates that systematically underestimate the marginal contribution to systemic risk. Remarkably, the magnitude of the bias is tied to the size of the firm: the bigger the company, the greater the underestimation bias. This result is consistent with the too-big-to-fail hypothesis, which stresses the need to maintain continuity of the vital economic functions of a large financial institution whose disorderly failure would cause significant disruption to the wider financial system.3 Ignoring the existence of asymmetries would thus lead to conservative estimates of risk contributions, more so for large firms, which are more likely to be systemic. Accounting for asymmetries in a simple extension of the model would remove that bias.
Concluding Remarks
In this paper, we have discussed the suitability of the CoVaR procedure recently proposed by Adrian and Brunnermeier (2011). This valuable approach helps understand the drivers of systemic risk in the banking industry. Implementing this procedure in practice requires specifying the unobservable functional form that relates the dynamics of the conditional tail of the system's returns to the returns of an individual bank. Adrian and Brunnermeier (2011) build on a model that assumes a simple linear representation, such that system returns respond proportionally to individual returns.
We show that this approach may provide a reasonable approximation for small-sized banks. However, in more general terms, and particularly for large-scale banks, the linear assumption leads to a severe underestimation of the conditional comovement in a downward market; hence, these banks' systemic importance may be perceived to be lower than their actual contribution to systemic risk. Yet how to measure and effectively buttress the resilience of the financial system to losses crystallizing in a stress scenario is the main concern of policy makers, regulatory authorities, and financial markets alike. Witness the rally in U.S. equities and the dollar on March 14, 2012 after the regulator announced favorable bank stress test results for the largest nineteen U.S. bank holding companies.
The reason is that the symmetric model implicitly assumes that positive and negative individual returns are equally informative about downside risk comovement. Our empirical results, however, provide robust evidence that negative shocks to individual returns generate a much larger impact on the financial system than positive disturbances. For a median-sized bank, the relative impact ratio is sevenfold. We contend that this non-linear pattern should be acknowledged in the econometric modeling of systemic risk to avoid a serious misappraisal of risk. Moreover, our analysis suggests that the symmetric specification introduces systematic biases in risk assessment as a function of bank size. Specifically, the distortion caused by a linear model misspecification is more pronounced for larger banks, which are also more systemic on average. Our results show that tail interdependence between the financial system and a bottom-size-decile bank that is contracting its balance sheet is 2.2 times larger than its average comovement. More strikingly, this ratio reaches 5.4 for the top-size-decile bank. This result is in line with the too-big-to-fail hypothesis and lends support to recent recommendations put forth by the Financial Stability Board to require higher loss-absorbency capacity of large banks. Likewise, it is consistent with the resolution plan required by the Dodd-Frank Act for bank holding companies and non-bank financial institutions with $50 billion or more in total assets. Periodically submitting a plan for rapid and orderly resolution in the event of financial distress or failure will enable the FDIC to evaluate potential loss severity and minimize the disruption that a failure may cause in the rest of the system, thus performing its resolution functions more efficiently. This measure will also help alleviate moral hazard concerns associated with systemic institutions and strengthen the stability of the overall financial system.
To capture the asymmetric pattern in tail comovement, we propose a simple yet effective extension of the original CoVaR model. This extension preserves the tractability of the original model, and its suitability can be tested formally through a simple Wald-type test, given the estimates of the model. We show that this simple extension is robust to more general specifications capturing non-linear patterns in returns, though at the expense of losing tractability.
The refinement of the CoVaR statistical measure presented in the paper aims at quantifying asymmetric spillover effects when strains in banks' balance sheets are elevated, and thus contributes a step towards strengthening systemic risk monitoring in stress scenarios. Furthermore, its focus on tail comovement originating from negative perturbations in the growth rate of banks' market-valued balance sheets may yield insights into the impact on the financial system of large-scale deleveraging by banks seeking to rebuild their capital cushions. This particular concern has recently been rekindled by continued spikes in volatility in euro area financial markets. It has been estimated that, as European banks cope with pressures from sovereign stress, they may shrink their combined balance sheet significantly, potentially unleashing shockwaves that hurt the financial stability of emerging economies (IMF, 2012). The estimation of the impact on the real economy of aggregate weakness in the financial sector, and the design of optimal macroprudential policies to arrest systemic risk when tail interdependencies feed non-linearly into the financial system, are left for future research.
Friday, June 8, 2012
Policy Analysis and Forecasting in the World Economy: A Panel Unobserved Components Approach
Policy Analysis and Forecasting in the World Economy: A Panel Unobserved Components Approach. By Francis Vitek
IMF Working Paper No. 12/149
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25974.0
Summary: This paper develops a structural macroeconometric model of the world economy, disaggregated into thirty-five national economies. This panel unobserved components model features a monetary transmission mechanism, a fiscal transmission mechanism, and extensive macrofinancial linkages, both within and across economies. A variety of monetary policy analysis, fiscal policy analysis, spillover analysis, and forecasting applications of the estimated model are demonstrated, based on a Bayesian framework for conditioning on judgment.
Thursday, June 7, 2012
Policies for Macrofinancial Stability: How to Deal with the Credit Booms
Policies for Macrofinancial Stability: How to Deal with the Credit Booms. By Dell'Ariccia, Giovanni; Igan, Deniz; Laeven, Luc; Tong, Hui; Bakker, Bas B.; Vandenbussche, Jérôme
IMF Staff Discussion Notes No. 12/06
June 07, 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25935.0
Excerpts
Executive summary
Credit booms buttress investment and consumption and can contribute to long-term financial deepening. But they often end up in costly balance sheet dislocations, and, more often than acceptable, in devastating financial crises whose cost can exceed the benefits associated with the boom. These risks have long been recognized. But, until the global financial crisis in 2008, policy paid limited attention to the problem. The crisis—preceded by booms in many of the hardest-hit countries—has led to a more activist stance. Yet, there is little consensus about how and when policy should intervene. This note explores past credit booms with the objective of assessing the effectiveness of macroeconomic and macroprudential policies in reducing the risk of a crisis or, at least, limiting its consequences.
It should be recognized at the outset that a more interventionist policy will inevitably imply some trade-offs. No policy tool is a panacea for the ills stemming from credit booms, and any form of intervention will entail costs and distortions, whose relevance will depend on the characteristics and institutions of individual countries. With these caveats in mind, the analysis in this note yields the following insights.
First, credit booms are often triggered by financial reform, capital inflow surges associated with capital account liberalizations, and periods of strong economic growth. They tend to be more frequent in fixed exchange rate regimes, when banking supervision is weak, and when macroeconomic policies are loose.
Second, not all booms are bad. About a third of boom cases end up in financial crises. Others do not lead to busts but are followed by extended periods of below-trend economic growth. Yet many result in permanent financial deepening and benefit long-term economic growth.
Third, it is difficult to tell “bad” from “good” booms in real time. But there are useful telltales. Bad booms tend to be larger and last longer (roughly half of the booms lasting longer than six years end up in a crisis).
Fourth, monetary policy is in principle the natural lever to contain a credit boom. In practice, however, capital flows (and related concerns about exchange rate volatility) and currency substitution limit its effectiveness in small open economies. In addition, since booms can occur in low-inflation environments, a conflict may emerge with its primary objective.
Fifth, given its time lags, fiscal policy is ill-equipped to stop a boom in a timely fashion. But consolidation during the boom years can help create fiscal room to support the financial sector or stimulate the economy if and when a bust arrives.
Finally, macroprudential tools have at times proven effective in containing booms, and more often in limiting the consequences of busts, thanks to the buffers they helped to build. Their more targeted nature limits their costs, although their associated distortions, should these tools be abused, can be severe. Moreover, circumvention has often been a major issue, underscoring the importance of careful design, coordination with other policies (including across borders), and close supervision to ensure the efficacy of these tools.
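These excerpts do not spell out the note's boom-identification procedure. Purely as an illustration of the kind of telltale-based rule used in this literature, the sketch below flags a boom when the credit-to-GDP ratio runs sufficiently far above its trailing trend; the window, threshold, and data are hypothetical, not the note's methodology:

```python
# Hypothetical boom-detection rule (our own illustration): flag a credit boom
# when the credit-to-GDP ratio exceeds its trailing mean by a fixed margin.
import numpy as np

def detect_booms(credit_to_gdp, window=20, threshold=0.10):
    """Return a boolean array marking boom periods.

    A period is flagged when the ratio exceeds its backward-looking mean
    over `window` periods by more than `threshold` (in ratio points).
    """
    x = np.asarray(credit_to_gdp, dtype=float)
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        trailing_trend = x[t - window:t].mean()
        flags[t] = (x[t] - trailing_trend) > threshold
    return flags

# Stylized series: a steady ratio of 0.50, then a rapid run-up.
ratio = np.concatenate([np.full(40, 0.50), 0.50 + 0.03 * np.arange(1, 21)])
booms = detect_booms(ratio)
print(booms.sum(), "boom periods flagged")
```

Such a rule captures the "larger and longer" telltale only crudely; the note's point is precisely that separating good from bad booms in real time remains difficult.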
Conclusions
Prolonged credit booms are a harbinger of financial crises and have real costs. Our analysis shows that, while only a minority of booms end up in crises, those that do can have long-lasting and devastating real effects if left unaddressed. Yet it appears to be difficult to identify bad booms as they emerge, and the cost of intervening too early and running the risk of stopping a good boom therefore has to be weighed against the desire to prevent financial crises.
While the analysis offers some insights into the origins and dynamics of credit booms, from a policy perspective a number of questions remain unaddressed. In part this reflects the limited experience to date with macroprudential policies and the simultaneous use of multiple policy tools, making it hard to disentangle specific policy measures’ effectiveness.
First, while monetary policy tightening seems the natural response to rapid credit growth, we find only weak empirical evidence that it contains booms and their fallout on the economy. This may be partly the result of a statistical bias. But there are several “legitimate” factors that limit the use and effectiveness of monetary policy in dealing with credit booms, especially in small open economies. In contrast, there is more consistent evidence that macroprudential policy is up to this task, although it is more exposed to circumvention.
All of the above raise important questions about the optimal policy response to credit booms. Our view is that when credit booms coincide with periods of general overheating in the economy, monetary policy should act first and foremost. If the boom lasts and is likely to end up badly or if it occurs in the absence of overheating, then macroprudential policy should come into play. Preferably, this should be in combination and coordination with macroeconomic policy, especially when macroeconomic policy is already being used to address overheating of the economy.
Second, questions remain about the optimal mix and modality of macroprudential policies, also in light of political economy considerations and the type of supervisory arrangements in the country. Political economy considerations call for a more rules-based approach to setting macroprudential policy to avoid pressure from interest groups to relax regulation during a crisis. But such considerations have to be weighed against the practical problems and unintended effects of a rules-based approach, such as the calibration of rules with rather demanding data requirements and the risk of circumvention in the presence of active earnings management. The design of a macroprudential framework should also consider the capacity and ability of supervisors to enforce such rules so that unintended and potentially dangerous side effects can be avoided.
Third, the optimal macroprudential policy response to credit booms, as well as the optimal policy mix, will likely have to depend on the type of credit boom. Because of data limitations, our analysis has focused on aggregate credit. While it seems natural that policy response should adapt to and be targeted to the type of credit, additional analysis is needed to assess the effectiveness of policies to curtail booms that differ in the type of credit.
Fourth, policy coordination, across different authorities and across borders, may increase the effectiveness of monetary tightening and macroprudential policies. Cooperation and a continuous flow of information among national supervisors, especially regarding the activities of institutions that are active across borders, are crucial. Equally important is the coordination of regulations and actions among supervisors of different types of financial institutions. Whether and how national policymakers take into account the effects of their actions on the financial and macroeconomic stability of other countries is a vital issue, calling for further regional and global cooperation in the setup of macroprudential policy frameworks and the conduct of macroeconomic policies.
IMF Staff Notes: Externalities and Macro-Prudential Policy
Externalities and Macro-Prudential Policy. By De Nicoló, Gianni; Favara, Giovanni; Ratnovski, Lev
IMF Staff Discussion Notes No. 12/05
June 07, 2012
http://www.imf.org/external/pubs/cat/longres.aspx?sk=25936.0
Excerpts
Executive Summary
The recent financial crisis has led to a reexamination of policies for macroeconomic and financial stability. Part of the current debate involves the adoption of a macroprudential approach to financial regulation, with an aim toward mitigating boom-bust patterns and systemic risks in financial markets.
The fundamental rationale behind macroprudential policies, however, is not always clearly articulated. The contribution of this paper is to lay out the key sources of market failures that can justify macroprudential regulation. It explains how externalities associated with the activity of financial intermediaries can lead to systemic risk, and thus require specific policies to mitigate such risk.
The paper classifies externalities that can lead to systemic risk as:
1. Externalities related to strategic complementarities, which arise from the strategic interaction of banks (and other financial institutions) and cause the build-up of vulnerabilities during the expansionary phase of a financial cycle;
2. Externalities related to fire sales, which arise from a generalized sell-off of financial assets causing a decline in asset prices and a deterioration of the balance sheets of intermediaries, especially during the contractionary phase of a financial cycle; and
3. Externalities related to interconnectedness, caused by the propagation of shocks from systemic institutions or through financial networks.
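The fire-sale mechanism in the second category can be sketched numerically: a leveraged intermediary forced to delever sells assets, the sale depresses the market price, the mark-to-market loss erodes equity, and that triggers further forced sales. The toy model below is an illustration only, not from the paper; all function names, parameters, and numbers (leverage cap, price-impact coefficient) are hypothetical assumptions chosen for exposition.

```python
def fire_sale_rounds(equity, assets, price=1.0, leverage_cap=10.0,
                     price_impact=0.02, rounds=5):
    """Toy fire-sale spiral: each round, an intermediary above its
    leverage cap sells assets to delever; the sale depresses the price,
    which feeds back into its own (and, in reality, others') equity."""
    debt = assets * price - equity
    path = []
    for _ in range(rounds):
        value = assets * price
        if value / max(equity, 1e-9) > leverage_cap:
            # Sell just enough, at the current price, to restore the cap
            sold_units = (value - leverage_cap * equity) / price
            assets -= sold_units
            debt -= sold_units * price  # sale proceeds repay debt
            # The externality: the seller ignores the mark-to-market loss
            # its selling pressure imposes on every other holder
            price *= 1 - price_impact * sold_units / max(assets, 1e-9)
            equity = assets * price - debt
        path.append((price, equity))
    return path

# A bank starting above the cap (leverage 100/8 = 12.5) delevers, but the
# price impact pushes it back above the cap, forcing further rounds
for p, e in fire_sale_rounds(equity=8.0, assets=100.0):
    print(f"price={p:.4f}  equity={e:.4f}")
```

With these illustrative parameters the spiral dampens after a few rounds; a larger price-impact coefficient would instead amplify each round, which is the systemic-risk case the paper is concerned with.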
The correction of these externalities can be seen as intermediate targets for macroprudential policy, since policies that control externalities mitigate market failures that create systemic risk.
This paper discusses how the main proposed macroprudential policy tools—capital requirements, liquidity requirements, restrictions on activities, and taxes—address the identified externalities. It is argued that each externality can be corrected by different tools that can complement each other. Capital surcharges, however, are likely to play an important role in the design of macroprudential regulation.
This paper’s analysis of macroprudential policy complements the more traditional one that builds on the distinction between time-series and cross-sectional dimensions of systemic risk.
Conclusions
This paper has argued that the first step in the economic analysis of macroprudential policy is the identification of market failures that contribute to systemic risk. Externalities are an important source of such market failures, and macroprudential policy should be thought of as an attempt to correct these externalities.
Building on the discussion in the academic literature, the paper has identified externalities that lead to systemic risk: externalities due to strategic complementarities, which contribute to the accumulation of vulnerabilities during the expansionary phase of a financial cycle; and externalities due to fire sales and interconnectedness, which tend to exacerbate negative shocks especially during a contractionary phase.
The correction of these externalities can be seen as intermediate targets for macroprudential policy, since policies that control externalities mitigate market failures that create systemic risk. This paper has studied how the identified externalities can be corrected by the main macroprudential policy proposals: capital requirements, liquidity requirements, restrictions on bank activities, and taxation. The main finding is that even though some of these policies can complement each other in correcting the same externality, capital requirements are likely to play an important role in the design of any macroprudential framework.
It has also been argued that although externalities can be proxied through a variety of risk measurements, the accumulation of evidence on the effectiveness of alternative policy tools remains the most pressing concern for the design of macroprudential policy.