Collective Risk Management in a Flight to Quality Episode


THE JOURNAL OF FINANCE • VOL. LXIII, NO. 5 • OCTOBER 2008

RICARDO J. CABALLERO and ARVIND KRISHNAMURTHY∗

ABSTRACT

Severe flight to quality episodes involve uncertainty about the environment, not only risk about asset payoffs. The uncertainty is triggered by unusual events and untested financial innovations that lead agents to question their worldview. We present a model of crises and central bank policy that incorporates Knightian uncertainty. The model explains crisis regularities such as market-wide capital immobility, agents' disengagement from risk, and liquidity hoarding. We identify a social cost of these behaviors, and a benefit of a lender of last resort facility. The benefit is particularly high because public and private insurance are complements during uncertainty-driven crises.

"Policy practitioners operating under a risk-management paradigm may, at times, be led to undertake actions intended to provide insurance against especially adverse outcomes ... When confronted with uncertainty, especially Knightian uncertainty, human beings invariably attempt to disengage from medium to long-term commitments in favor of safety and liquidity ... The immediate response on the part of the central bank to such financial implosions must be to inject large quantities of liquidity ..." Alan Greenspan (2004).

FLIGHT TO QUALITY EPISODES ARE AN IMPORTANT SOURCE of financial and macroeconomic instability. Modern examples of these episodes in the U.S. include the Penn Central default of 1970, the stock market crash of 1987, the events in the fall of 1998 beginning with the Russian default and ending with the bailout of LTCM, and the events that followed the attacks of 9/11. Behind each of

∗ Caballero is from MIT and NBER. Krishnamurthy is from Northwestern University and NBER.
We are grateful to Marios Angeletos, Olivier Blanchard, Phil Bond, Craig Burnside, Jon Faust, Xavier Gabaix, Jordi Gali, Michael Golosov, Campbell Harvey, William Hawkins, Burton Hollifield, Bengt Holmstrom, Urban Jermann, Dimitri Vayanos, Ivan Werning, an associate editor and anonymous referee, as well as seminar participants at the Atlanta Fed Conference on Systematic Risk, Bank of England, Central Bank of Chile, Columbia, DePaul, Imperial College, London Business School, London School of Economics, Northwestern, MIT, Wharton, NY Fed Liquidity Conference, NY Fed Money Markets Conference, NBER Economic Fluctuations and Growth meeting, NBER Macroeconomics and Individual Decision Making Conference, Philadelphia Fed, and University of British Columbia Summer Conference for their comments. Vineet Bhagwat, David Lucca, and Alp Simsek provided excellent research assistance. Caballero thanks the NSF for financial support. This paper covers the same substantive issues as (and hence replaces) "Flight to Quality and Collective Risk Management," NBER WP #12136.

these episodes lies the specter of a meltdown that may lead to a prolonged slowdown as in Japan during the 1990s, or even a catastrophe like the Great Depression.[1] In each of these cases, as hinted at by Greenspan (2004), the Fed intervened early and stood ready to intervene as much as needed to prevent a meltdown.

In this paper we present a model to study the benefits of central bank actions during flight to quality episodes. Our model has two key ingredients: capital/liquidity shortages and Knightian uncertainty (Knight (1921)). The capital shortage ingredient is a recurring theme in the empirical and theoretical literature on financial crises and requires little motivation. Knightian uncertainty is less commonly studied, but practitioners perceive it as a central ingredient of flight to quality episodes (see Greenspan's quote).

Most flight to quality episodes are triggered by unusual or unexpected events. In 1970, the Penn Central Railroad's default on prime-rated commercial paper caught the market by surprise and forced investors to reevaluate their models of credit risk. The ensuing dynamics temporarily shut out a large segment of commercial paper borrowers from a vital source of funds. In October 1987, the speed of the stock market decline took investors and market makers by surprise, causing them to question their models. Investors pulled back from the market while key market-makers widened bid-ask spreads. In the fall of 1998, the comovement of Russian government bond spreads, Brazilian spreads, and U.S. Treasury bond spreads was a surprise to even sophisticated market participants. These high correlations rendered standard risk management models obsolete, leaving financial market participants searching for new models. Agents responded by making decisions using "worst-case" scenarios and "stress-testing" models.
Finally, after 9/11, regulators were concerned that commercial banks would respond to the increased uncertainty over the status of other commercial banks by individually hoarding liquidity, and that such actions would lead to gridlock in the payments system.[2]

The common aspects of investor behavior across these episodes (reevaluation of models, conservatism, and disengagement from risky activities) indicate that these episodes involved Knightian uncertainty (i.e., immeasurable risk) and not merely an increase in risk exposure. The emphasis on tail outcomes and worst-case scenarios in agents' decision rules suggests uncertainty aversion. Finally, an important observation about these events is that when it comes to flight to quality episodes, history seldom repeats itself. Similar magnitudes of commercial paper default (Mercury Finance in 1997) or stock market pullbacks (the mini-crash of 1989) did not lead to similar investor responses. Today, as opposed to in 1998, market participants understand that correlations should be expected to rise during periods of reduced liquidity. Creditors understand the risk involved in lending to hedge funds. While in 1998 hedge funds were still

[1] See Table 1 (Part A) in Barro (2006) for a list of extreme events, measured in terms of decline in GDP, in developed economies during the 20th century.
[2] See Calomiris (1994) on the Penn Central default, Melamed (1998) on the 1987 market crash, Scholes (2000) on the events of 1998, and Stewart (2002) or McAndrews and Potter (2002) on 9/11.

a novel financial vehicle, the large reported losses of the Amaranth hedge fund in 2006 barely caused a ripple in financial markets. The one-of-a-kind aspect of flight to quality episodes suggests that these events are fundamentally about uncertainty rather than risk.[3]

Section I of the paper lays out a model of financial crises based on liquidity shortages and Knightian uncertainty. We analyze the model's equilibrium and show that an increase in Knightian uncertainty or a decrease in aggregate liquidity can generate flight to quality effects. In the model, when an agent is faced with Knightian uncertainty, he considers the worst case among the scenarios over which he is uncertain. This modeling of agent decision making and Knightian uncertainty draws from the decision theory literature, and in particular from Gilboa and Schmeidler (1989). When the aggregate quantity of liquidity is limited, the Knightian agent grows concerned that he will be caught in a situation in which he needs liquidity, but there is not enough liquidity available to him. In this context, agents react by shedding risky financial claims in favor of safe and uncontingent claims. Financial intermediaries become self-protective and hoard liquidity. Investment banks and trading desks turn conservative in their allocation of risk capital. They lock up capital and become unwilling to flexibly move it across markets.

The main results of our paper are in Sections II and III. As indicated by former Federal Reserve Chairman Greenspan's comments, the Fed has historically intervened during flight to quality episodes. We analyze the macroeconomic properties of the equilibrium and study the effects of central bank actions in our environment.
First, we show that Knightian uncertainty leads to a collective bias in agents' actions: Each agent covers himself against his own worst-case scenario, but the scenario that the collective of agents are guarding against is impossible, and known to be so, despite agents' uncertainty about the environment. We show that agents' conservative actions such as liquidity hoarding and locking up of capital are macroeconomically costly because scarce liquidity goes wasted. Second, we show that central bank policy can be designed to alleviate the overconservatism. A lender of last resort (LLR), even one facing the same incomplete knowledge that triggers agents' Knightian responses, finds that committing to add liquidity in the unlikely event that the private sector's liquidity is depleted is beneficial. Agents respond to the LLR by freeing up capital and altering decisions in a manner that wastes less private liquidity. Public and private provision of insurance are complements in our model: Each pledged dollar of public intervention in the extreme event is matched by a comparable private sector reaction to free up capital. In this sense, the Fed's LTCM restructuring was important not for its direct effect, but because it served as a signal of the Fed's readiness to intervene should conditions worsen. We also show that the LLR must be a last-resort policy: If liquidity injections take place

[3] This observation suggests a possible way to empirically disentangle uncertainty aversion from risk aversion or extreme forms of risk aversion such as negative skew aversion. A risk-averse agent behaves conservatively during times of high risk; it does not matter whether the risk involves something new or not. For an uncertainty-averse agent, new forms of risk elicit the conservative reaction.

too often, the policy exacerbates the private sector's mistakes and reduces the value of intervention. This occurs for reasons akin to the moral hazard problem identified with the LLR.

Our model is most closely related to the literature on banking crises initiated by Diamond and Dybvig (1983).[4] While our environment is a variant of that in Diamond and Dybvig, it does not include their sequential service constraint, and instead emphasizes Knightian uncertainty. The modeling difference leads to different applications of our crisis model. For instance, our model applies to a wider set of financial intermediaries than commercial banks financed by demandable deposit contracts. More importantly, because our model of crises centers on Knightian uncertainty, its insights most directly apply to circumstances of market-wide uncertainty, such as the new financial innovations or events discussed above.

At a theoretical level, the sequential service constraint in the Diamond and Dybvig model creates a coordination failure. The bank "panic" occurs because each depositor runs conjecturing that other depositors will run. The externality in the depositors' run decisions is central to their model of crises.[5] In our model, it is an increase in Knightian uncertainty that generates the panic behavior. Of course, in reality crises may reflect both the type of externalities that Diamond and Dybvig highlight and the uncertainties that we study. In the Diamond and Dybvig analysis, the LLR is always beneficial because it rules out the "bad" run equilibrium caused by the coordination failure.[6] As noted above, our model's prescriptions center on situations of market-wide uncertainty. In particular, our model prescribes that the benefit of the LLR is highest when there is both insufficient aggregate liquidity and Knightian uncertainty.
Holmstrom and Tirole (1998) study how a shortage of aggregate collateral limits private liquidity provision (see also Woodford (1990)). Their analysis suggests that a credible government can issue government bonds that can then be used by the private sector for liquidity provision. The key difference between our paper and those of Holmstrom and Tirole and Woodford is that we show aggregate collateral may be inefficiently used, so that private sector liquidity provision is limited. In our model, government intervention not only adds to the private sector's collateral, but also, and more centrally, improves the use of private collateral.

[4] The literature on banking crises is too large to discuss here. See Gorton and Winton (2003) for a survey of this literature.
[5] More generally, other papers in the crisis literature also highlight how investment externalities can exacerbate crises. Some examples in this literature include Allen and Gale (1994), Gromb and Vayanos (2002), Caballero and Krishnamurthy (2003), and Rochet and Vives (2004). In many of these models, incomplete markets lead to an inefficiency that creates a role for central bank policy (see Rochet and Vives (2004) or Allen and Gale (2004)).
[6] See Bhattacharya and Gale (1987), Rochet and Vives (2004), or Allen and Gale (2004).

Routledge and Zin (2004) and Easley and O'Hara (2005) are two related analyses of Knightian uncertainty in financial markets.[7] Routledge and Zin begin from the observation that financial institutions follow decision rules to protect against a worst-case scenario. They develop a model of market liquidity in which an uncertainty-averse market maker sets bids and asks to facilitate trade of an asset. Their model captures an important aspect of flight to quality, namely, that uncertainty aversion can lead to a sudden widening of the bid-ask spread, causing agents to halt trading and reducing market liquidity. Both our paper and Routledge and Zin share the emphasis on financial intermediation and uncertainty aversion as central ingredients in flight to quality episodes. However, each paper captures different aspects of flight to quality. Easley and O'Hara (2005) study a model in which uncertainty-averse traders focus on a worst-case scenario when making an investment decision. Like us, Easley and O'Hara point out that government intervention in a worst-case scenario can have large effects. Easley and O'Hara study how uncertainty aversion affects investor participation in stock markets, while the focus of our study is on uncertainty aversion and financial crises.

I. The Model

We study a model conditional on entering a turmoil period in which liquidity risk and Knightian uncertainty coexist. Our model is silent on what triggers the episode. In practice, we think that the occurrence of an unusual event, such as the Penn Central default or the losses on AAA-rated subprime mortgage-backed securities, causes agents to reevaluate their models and triggers robustness concerns. Our goal is to present a model to study the role of a centralized liquidity provider such as the central bank.

A. The Environment

A.1. Preferences and Shocks

The model has a continuum of competitive agents, indexed by ω ∈ Ω ≡ [0, 1]. An agent may receive a liquidity shock in which he needs some liquidity immediately. We view these liquidity shocks as a parable for a sudden need for capital by a financial market specialist (e.g., a trading desk, hedge fund, or market maker).

The shocks are correlated across agents. With probability φ(1), the economy is hit by a first wave of liquidity shocks. In this wave, a randomly chosen group of one-half of the agents have liquidity needs. We denote by φ_ω(1) the probability

[7] A growing economics literature aims to formalize Knightian uncertainty (a partial list of contributions includes Gilboa and Schmeidler (1989), Dow and Werlang (1992), Epstein and Wang (1994), Hansen and Sargent (1995, 2003), Skiadas (2003), Epstein and Schneider (2004), and Hansen (2006)). As in much of this literature, we use a max-min device to describe agents' expected utility. Our treatment of Knightian uncertainty is most similar to Gilboa and Schmeidler, in that agents choose a worst case among a class of priors.

of agent ω receiving a shock in the first wave, and note that

    ∫ φ_ω(1) dω = φ(1)/2.    (1)

Equation (1) states that on average, across all agents, the probability of an agent receiving a shock in the first wave is φ(1)/2.

With probability φ(2 | 1), a second wave of liquidity shocks hits the economy. In the second wave of liquidity shocks, the other half of the agents need liquidity. Let φ(2) = φ(1) φ(2 | 1). The probability with which agent ω is in this second wave is φ_ω(2), which satisfies

    ∫ φ_ω(2) dω = φ(2)/2.    (2)

With probability 1 − φ(1) > 0 the economy experiences no liquidity shocks.

We note that the sequential shock structure means that

    φ(1) > φ(2) > 0.    (3)

This condition states that, in aggregate, a single-wave event is more likely than the two-wave event. We refer to the two-wave event as an extreme event, capturing an unlikely but severe liquidity crisis in which many agents are affected. Relation (3), which derives from the sequential shock structure, plays an important role in our analysis.

We model the liquidity shock as a shock to preferences (e.g., as in Diamond and Dybvig (1983)). Agent ω receives utility

    U^ω(c_1, c_2, c_T) = α_1 u(c_1) + α_2 u(c_2) + β c_T.    (4)

We define α_1 = 1, α_2 = 0 if the agent is in the early wave; α_1 = 0, α_2 = 1 if the agent is in the second wave; and α_1 = α_2 = 0 if the agent is not hit by a shock. We will refer to the first shock date as "date 1," the second shock date as "date 2," and the final date as "date T."

The function u : R_+ → R is twice continuously differentiable, increasing, and strictly concave, and it satisfies the condition lim_{c→0} u′(c) = ∞. Preferences are concave over c_1 and c_2, and linear over c_T. We view the preference over c_T as capturing a time in the future when market conditions have normalized and the trader is effectively risk neutral. The concave preferences over c_1 and c_2 reflect the potentially higher marginal value of liquidity during a time of market distress. The discount factor, β, can be thought of as an interest rate (β^{−1} − 1) facing the trader.

A.2. Endowment and Securities

Each agent is endowed with Z units of goods. These goods can be stored at a gross return of one, and then liquidated if an agent receives a liquidity shock. If we interpret the agents of the model as financial traders, we may think of Z as the capital or liquidity of a trader.

Agents can also trade financial claims that are contingent on shock realizations. As we will show, these claims allow agents who do not receive a shock to insure agents who do receive a shock. We assume all shocks are observable and contractible. There is no concern that an agent will pretend to have a shock and collect on an insurance claim. Markets are complete. There are claims on all histories of shock realizations. We will be more precise in specifying these contingent claims when we analyze the equilibrium.

A.3. Probabilities and Uncertainty

Agents trade contingent claims to insure against their liquidity shocks. In making the insurance decisions, agents have a probability model of the liquidity shocks in mind.

We assume that agents know the aggregate shock probabilities, φ(1) and φ(2). We may think that agents observe the past behavior of the economy and form precise estimates of these aggregate probabilities. However, centrally to our model, the same past data do not reveal whether a given ω is more likely to be in the first wave or the second wave. Agents treat the latter uncertainty as Knightian.

Formally, we use φ_ω(1) to denote the true probability of agent ω receiving the first shock, and φ̃_ω(1) to denote agent ω's perception of the relevant true probability (similarly for φ_ω(2) and φ̃_ω(2)). We assume that each agent knows his probability of receiving a shock either in the first or second wave, φ_ω(1) + φ_ω(2), and thus the perceived probabilities satisfy[8]

    φ̃_ω(1) + φ̃_ω(2) = φ_ω(1) + φ_ω(2) = (φ(1) + φ(2))/2.    (5)

We define

    θ_ω ≡ φ̃_ω(2) − φ(2)/2.    (6)

That is, θ_ω reflects how much agent ω's probability assessment of being second is higher than the average agent in the economy's true probability of being second. This relation also implies that

    φ̃_ω(1) − φ(1)/2 = −θ_ω.

Agents consider a range of probability models θ_ω in the set Θ, with support [−K, +K] (K < φ(2)/2), and design insurance portfolios that are robust to their model uncertainty. We follow Gilboa and Schmeidler's (1989) Maximin Expected

[8] For further clarification of the structure of shocks and agents' uncertainty, see the event tree that is detailed in the Appendix.

Utility representation of Knightian uncertainty aversion and write

    max_{(c_1, c_2, c_T)} min_{θ_ω ∈ Θ} E_0[U^ω(c_1, c_2, c_T) | θ_ω],    (7)

where K captures the extent of agents' uncertainty.

In a flight to quality, such as during the fall of 1998 or 9/11, agents are concerned about systemic risk and unsure of how this risk will impinge on their activities. They may have a good understanding of their own markets, but be unsure of how the behavior of agents in other markets may affect them. For example, during 9/11 market participants feared gridlock in the payments system. Each participant knew how much he owed to others but was unsure whether resources owed to him would arrive (see, for example, Stewart (2002) or McAndrews and Potter (2002)). In our model, agents are certain about the probability of receiving a shock, but are uncertain about the probability with which their shocks will occur early or late relative to others.

We view agents' max-min preferences in (7) as descriptive of their decision rules. The widespread use of worst-case scenario analysis in decision making by financial firms is an example of the robustness preferences of such agents. It is also important to note that the objective function in (7) works through altering the probability distribution used by agents. That is, given an agent's uncertainty, the min operator in (7) has the agent making decisions using the worst-case probability distribution over this uncertainty. This objective is different from one that asymmetrically penalizes bad outcomes. That is, a loss aversion or negative skewness aversion objective function leads an agent to worry about worst cases through the utility function U^ω directly. This asymmetric utility function model predicts that agents always worry about the downside. Our Knightian uncertainty objective predicts that agents worry about the downside in particular during times of model uncertainty.
As discussed in the introduction, flight to quality episodes appear to have a "newness"/uncertainty element, which our model can capture.

The distinction is also relevant because probabilities have to satisfy adding-up constraints across all agents, that is, ∫ θ_ω dω = 0. Indeed, we use the term "collective" bias to refer to a situation where agents' individual probability distributions from the min operator in (7) fail to satisfy an adding-up constraint. As we will explain below, the efficiency results we present later in the paper stem from this aspect of our model.

A.4. Symmetry

To simplify our analysis we assume that the agents are symmetric at date 0. While each agent's true θ_ω may be different, the θ_ω for every agent is drawn from the same Θ. The symmetry applies in other dimensions as well: φ_ω, K, Z, and u(c) are the same for all ω. Moreover, this information is common knowledge. As noted above, φ(1) and φ(2) are also common knowledge.
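The collective bias can be made concrete in a few lines. The sketch below is our illustration (not the paper's code): when scarce liquidity makes the second wave the worse outcome, each agent's min in (7) selects the tilt θ_ω = +K, yet the true tilts must integrate to zero.

```python
# Each agent's worst case tilts probability toward being hit late
# (theta = +K), but true thetas must average to zero across agents.
n_agents, K = 1000, 0.05
worst_case_thetas = [K] * n_agents   # every agent fears being hit late
avg_theta = sum(worst_case_thetas) / n_agents

# avg_theta = K > 0 violates the adding-up constraint (average theta
# must be 0): the scenario the collective guards against cannot be the
# true probability model.
assert avg_theta > 0
```

This is exactly the sense in which the worst-case beliefs are "collectively" impossible: any one agent's fear could be right, but not everyone's at once.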

Figure 1. Benchmark case. The tree on the left depicts the possible states realized for agent ω. The economy can go through zero (lower branch), one (middle branch), or two (upper branch) waves of shocks. In each of these cases, agent ω may or may not be affected. The first column lists the state, s, for agent ω corresponding to that branch of the tree. The second column lists the probability of state s occurring. The last column lists the consumption bundle given to the agent by the planner in state s.

B. A Benchmark

We begin by analyzing the problem for the case K = 0. This case clarifies the nature of cross-insurance that is valuable in our economy. We derive the equilibrium as a solution to a planning problem, where the planner allocates the Z across agents as a function of shock realizations.

Figure 1 below describes the event tree of the economy. The economy may receive zero, one, or two waves of shocks. An agent ω may be affected in the first or second wave in the two-wave case, or may be affected or not affected in the one-wave event. We denote by s = (number of waves, ω's shock) the state for agent ω. Agent ω's allocation as a function of the state is denoted by C_ω^s, where, in the event of agent ω being affected by a shock, the agent receives a consumption allocation upon incidence of the shock as well as a consumption allocation at date T. For example, if the economy is hit by two waves of shocks and agent ω is affected by the first wave, we denote the state as s = (2, 1) and agent ω's allocation as C_ω^s = (c_1, c_T^s). Finally, C = {C_ω^s} is the consumption plan for agent ω (equal to that for every agent, by symmetry).

We note that c_1 is the same in both state (2, 1) and state (1, 1). This is because of the sequential shock structure in the economy. An agent who receives a shock first needs resources at that time, and the amount of resources delivered cannot be made contingent on whether the one- or two-wave event transpires.

Figure 1 also gives the probabilities of each state s. Since agents are ex ante identical and K = 0, each agent has the same probability of arriving at state s. Thus we know that φ_ω(2) = φ(2)/2, which implies that the probability of being hit by a shock in the second wave, conditional on the two-wave event, is one-half. Likewise, the probability of

being hit by a shock in the first wave is one-half. These computations lead to the probabilities given in Figure 1.

The planner's problem is to solve

    max_C Σ_s p(s) U^ω(C^s)

subject to resource constraints that, for every shock realization, the promised consumption amounts are not more than the total endowment of Z, that is,

    c_T^{0,no} ≤ Z
    ½ (c_1 + c_T^{1,1} + c_T^{1,no}) ≤ Z
    ½ (c_1 + c_T^{2,1} + c_2 + c_T^{2,2}) ≤ Z,

as well as nonnegativity constraints that, for each s, every consumption amount in C^s is nonnegative.

It is obvious that if shocks do not occur, then the planner will give Z to each of the agents for consumption at date T. Thus c_T^{0,no} = Z and we can drop this constant from the objective. We rewrite the problem as

    max_C ((φ(1) − φ(2))/2) (u(c_1) + β c_T^{1,1} + β c_T^{1,no}) + (φ(2)/2) (u(c_1) + u(c_2) + β c_T^{2,1} + β c_T^{2,2})

subject to resource and nonnegativity constraints.

Observe that c_T^{1,1} and c_T^{1,no} enter as a sum in both the objective and the constraints. Without loss of generality we set c_T^{1,1} = 0. Likewise, c_T^{2,1} and c_T^{2,2} enter as a sum in both the objective and the constraints. Without loss of generality we set c_T^{2,1} = 0. The reduced problem is:

    max_{(c_1, c_2, c_T^{1,no}, c_T^{2,2})} φ(1) u(c_1) + φ(2) u(c_2) + (φ(1) − φ(2)) β c_T^{1,no} + φ(2) β c_T^{2,2}

subject to

    c_1 + c_T^{1,no} = 2Z
    c_1 + c_2 + c_T^{2,2} = 2Z
    c_1, c_2, c_T^{1,no}, c_T^{2,2} ≥ 0.

Note that the resource constraints must bind. The solution hinges on whether the nonnegativity constraints on consumption bind or not. If the nonnegativity constraints do not bind, then the first-order conditions for c_1 and c_2 yield

    c_1 = c_2 = u′^{−1}(β) ≡ c*.

The solution implies that

    c_T^{1,no} = 2Z − c*,    c_T^{2,2} = 2(Z − c*).

Thus, the nonnegativity constraints do not bind if Z ≥ c*. We refer to this case as one of sufficient aggregate liquidity. When Z is large enough, agents are able to finance a consumption plan in which marginal utility is equalized across all states. At the optimum, agents equate the marginal utility of early consumption with that of date T consumption, which is β given the linear utility over c_T. A low value of β means that agents discount the future heavily and require more early consumption. Loosely speaking, we can think of this case as one where an agent is "constrained" and places a high value on current liquidity. As a result, the economy needs more liquidity (Z) to satisfy agents' needs.

Now consider the case in which there is insufficient liquidity so that agents are not able to achieve full insurance. This is the case where Z < c*. It is obvious that c_T^{2,2} = 0 in this case (i.e., the planner uses all of the limited liquidity towards the shock states). Thus, for a given c_1 we have that c_2 = c_T^{1,no} = 2Z − c_1 and the problem is

    max_{c_1} φ(1) u(c_1) + φ(2) u(2Z − c_1) + (φ(1) − φ(2)) β (2Z − c_1)    (8)

with first-order condition

    u′(c_1) = (φ(2)/φ(1)) u′(2Z − c_1) + (1 − φ(2)/φ(1)) β.    (9)

Since u′(2Z − c_1) > β (i.e., c_2 < c*) we can order

    β < u′(c_1) < u′(2Z − c_1)  ⇒  c_1 > Z > c_2.    (10)

The last inequality on the right of (10) is the important result from the analysis. Agents who are affected by the first wave of shocks receive more liquidity than agents who are affected by the second wave. There is cross-insurance between agents. Intuitively, this is because the probability of the second wave occurring is strictly smaller than that of the first wave (or, equivalently, conditional on the first wave having taken place there is a chance the economy will be spared a second wave).
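The first-order condition (9) and the ordering in (10) are easy to verify numerically. The sketch below is ours: it takes u(c) = log(c), so that u′(c) = 1/c and c* = u′^{−1}(β) = 1/β, with illustrative parameters satisfying Z < c*.

```python
# Numerical check of the insufficient-liquidity benchmark (K = 0),
# with u(c) = log(c). Parameter values are illustrative only.

def solve_foc(phi1, phi2, beta, Z, tol=1e-10):
    """Bisect the first-order condition (9),
       u'(c1) = (phi2/phi1) u'(2Z - c1) + (1 - phi2/phi1) beta,
    for c1 in (Z, 2Z), using u'(c) = 1/c."""
    r = phi2 / phi1
    f = lambda c1: 1.0 / c1 - (r / (2 * Z - c1) + (1 - r) * beta)
    lo, hi = Z, 2 * Z - 1e-9   # f > 0 at lo, f < 0 near hi when Z < c*
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

phi1, phi2, beta, Z = 0.4, 0.12, 0.5, 1.0
c_star = 1.0 / beta            # full-insurance level, here 2.0 > Z
c1 = solve_foc(phi1, phi2, beta, Z)
c2 = 2 * Z - c1
# ordering (10): first-wave agents get more than Z, second-wave less
assert c_star > c1 > Z > c2 > 0
```

With these illustrative numbers the solution is c_1 ≈ 1.292 and c_2 ≈ 0.708: exactly the cross-insurance pattern of Proposition 1, case (1) below, in which the more likely first-wave shock is better insured.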
Thus, when liquidity is scarce (small Z) it is optimal to allocate more of the limited liquidity to the more likely shock. On the other hand, when liquidity is plentiful (large Z) the liquidity allocation of each agent is not contingent on the order of the shocks. This is because there is enough liquidity to cover all shocks.

We summarize these results as follows:

PROPOSITION 1: The equilibrium in the benchmark economy with K = 0 has two cases:

(1) The economy has insufficient aggregate liquidity if Z < c*. In this case,

    c* > c_1 > Z > c_2.

Agents are partially insured against liquidity shocks. First-wave liquidity shocks are more insured than second-wave liquidity shocks.

(2) The economy has sufficient aggregate liquidity if Z ≥ c*. In this case,

    c_1 = c_2 = c*

and agents are fully insured against liquidity shocks.

Flight to quality effects, and a role for central bank intervention, arise only in the first case (insufficient aggregate liquidity). This is the case we analyze in detail in the next sections.

C. Implementation

There are two natural implementations of the equilibrium: financial intermediation, and trading in shock-contingent claims.

In the intermediation implementation, each agent deposits Z in an intermediary initially and receives the right to withdraw c_1 > Z if he receives a shock in the first wave. Since shocks are fully observable, the withdrawal can be conditioned on the agents' shocks. Agents who do not receive a shock in the first wave own claims to the rest of the intermediary's assets (2Z − c_1 < Z). The second group of agents either redeem their claims upon incidence of the second wave of shocks, or at date T. Finally, if no shocks occur, the intermediary is liquidated at date T and all agents receive Z.

In the contingent claims implementation, each agent purchases a claim that pays 2(c_1 − Z) > 0 in the event that the agent receives a shock in the first wave. The agent sells an identical claim to every other agent, paying 2(c_1 − Z) in case of the first-wave shock. Note that this is a zero-cost strategy since both claims must have the same price.

If no shocks occur, agents consume their own Z. If an agent receives a shock in the first wave, he receives 2(c_1 − Z) and pays out (c_1 − Z) (since one-half of the agents are affected in the first wave), to net c_1 − Z. Added to his initial liquidity endowment of Z, he has total liquidity of c_1. Any later agent has Z − (c_1 − Z) = 2Z − c_1 units of liquidity to either finance a second shock, or date T consumption.
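The bookkeeping of the contingent claims implementation can be traced in code. This is our sketch (the function names are ours), using illustrative values with c_1 > Z:

```python
# Each agent buys a claim paying 2*(c1 - Z) if hit in the first wave,
# and sells the identical claim to every other agent (zero net cost).

def first_wave_liquidity(Z, c1):
    """Collects 2*(c1 - Z) on the purchased claim and pays out (c1 - Z)
    in aggregate, since only one-half of the agents are hit."""
    return Z + 2 * (c1 - Z) - (c1 - Z)

def later_agent_liquidity(Z, c1):
    """An agent not hit in the first wave pays (c1 - Z) to those hit."""
    return Z - (c1 - Z)

Z, c1 = 1.0, 1.2922              # illustrative values with c1 > Z
assert abs(first_wave_liquidity(Z, c1) - c1) < 1e-12
assert abs(later_agent_liquidity(Z, c1) - (2 * Z - c1)) < 1e-12
```

A first-wave agent thus ends up with exactly c_1, and any later agent with 2Z − c_1, reproducing the planner's allocation.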
Finally, note that if there is sufficient aggregate liquidity either the intermediation or contingent claims implementation achieves the optimal allocation. Moreover, in this case, the allocation is also implementable by self-insurance. Each agent keeps his Z and liquidates c* < Z to finance a shock. The self-insurance implementation is not possible when Z < c*, because the allocation requires each agent to receive more than his endowment of Z if the agent is hit first.

D. Robustness Case: K > 0

We now turn to the general problem, K > 0. Once again, we derive the equilibrium by solving a planning problem where the planner allocates the Z to agents as a function of shocks. When K > 0, agents make decisions based on

Figure 2. Robustness case. The tree on the left depicts the possible states realized for agent ω. The economy can go through zero (lower branch), one (middle branch), or two (upper branch) waves of shocks. In each of these cases, agent ω may or may not be affected. The first column lists the state, s, for agent ω corresponding to that branch of the tree. The second column lists the agent's perceived probability of state s occurring.

perceived probabilities. This decision making process is encompassed in the planning problem by altering the planner's objective to the "worst-case"

    max_{C_ω} min_{θ_ω ∈ Θ} Σ_s p^{s,ω} U(C^s).    (11)

The only change in the problem relative to the K = 0 case is that probabilities are based on the worst-case min rule.

Figure 2 redraws the event tree now indicating agent ω's worst-case probabilities. We use the notation that φ_ω(2) is agent ω's worst-case probability of being hit second. In our setup, this assessment only matters when the economy is going through a two-wave event in which the agent is unsure if other agents' shocks are going to occur before or after agent ω's.9

We simplify the problem following some of the steps of the previous derivation. In particular, c_T^{0,no} must be equal to Z. Since the problem in the one-wave node is the same as in the previous case, we observe that c_T^{1,1} and c_T^{1,no} enter as a sum in both the objective and the constraints and choose c_T^{1,1} = 0. The reduced problem is then

    V(C_ω; θ_ω) ≡ max_{C_ω} min_{θ_ω ∈ Θ}  φ_ω(1) u(c_1) + β (φ_ω(1) − (φ(1) − φ(2))/2) c_T^{2,1}
                  + φ_ω(2) (u(c_2) + β c_T^{2,2}) + β ((φ(1) − φ(2))/2) c_T^{1,no}.    (12)

9 We derive the probabilities as follows. First, p^{2,2,ω} = φ_ω(2) by definition. This implies that p^{2,1,ω} = φ(2) − φ_ω(2), since the probabilities have to sum up to the probability of a two-wave event (φ(2)). We rewrite p^{2,1,ω} = φ_ω(1) − (φ(1) − φ(2))/2 using relation (5). The probability of being hit first is p^{1,1,ω} + p^{2,1,ω} = φ_ω(1). Substituting for p^{2,1,ω}, we can rewrite this to obtain p^{1,1,ω} = (φ(1) − φ(2))/2. Finally, p^{1,1,ω} + p^{1,no,ω} = φ(1) − φ(2), which we can use to solve for p^{1,no,ω} = (φ(1) − φ(2))/2.
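The probability bookkeeping in footnote 9 can be checked mechanically. The sketch below uses illustrative values for φ(1), φ(2), and the belief distortion θ (all hypothetical) and verifies that the state probabilities are internally consistent.

```python
# A minimal check of the probability bookkeeping in footnote 9, using
# illustrative aggregate probabilities (phi1, phi2) and a distortion theta.
phi1, phi2 = 0.4, 0.1   # P(at least one wave), P(two waves); hypothetical
theta = 0.03            # agent omega's belief distortion

phi_w1 = phi1 / 2.0 - theta   # perceived prob. of being hit first
phi_w2 = phi2 / 2.0 + theta   # perceived prob. of being hit second

p_22 = phi_w2                          # two waves, hit second
p_21 = phi_w1 - (phi1 - phi2) / 2.0    # two waves, hit first
p_11 = (phi1 - phi2) / 2.0             # one wave, hit
p_1no = (phi1 - phi2) / 2.0            # one wave, not hit
p_0 = 1.0 - phi1                       # no waves

assert abs(p_21 + p_22 - phi2) < 1e-12           # two-wave probs sum to phi(2)
assert abs(p_11 + p_21 - phi_w1) < 1e-12         # prob. of being hit first
assert abs(p_11 + p_1no - (phi1 - phi2)) < 1e-12 # one-wave probs
assert abs(p_0 + p_1no + p_11 + p_21 + p_22 - 1.0) < 1e-12  # all states
```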

The first two terms in this objective are the utility from the consumption bundle if the agent is hit first (either in the one-wave or two-wave event). The third term is the utility from the consumption bundle if the agent is hit second. The last term is the utility from the bundle when the agent is not hit in a one-wave event. The resource constraints for this problem are

    c_1 + c_T^{1,no} ≤ 2Z
    c_1 + c_2 + c_T^{2,1} + c_T^{2,2} ≤ 2Z.

The optimization is also subject to nonnegativity constraints.

PROPOSITION 2: Let

    K̄ ≡ (φ(1) − φ(2)) (u'(Z) − β) / (4 u'(Z)).

Then, the equilibrium in the robust economy depends on both K and Z as follows:

(1) When there is insufficient aggregate liquidity, there are two cases:

(i) For 0 ≤ K < K̄, agents' decisions satisfy

    φ_ω(1) u'(c_1) = φ_ω(2) u'(c_2) + β (φ(1) − φ(2))/2,    (13)

where the worst-case probabilities are based on θ_ω = K:

    φ_ω(1) = φ(1)/2 − K,    φ_ω(2) = φ(2)/2 + K.

In the solution, c_2 < Z < c_1 < c*, with c_1(K) decreasing and c_2(K) increasing. We refer to this as the "partially robust" case.

(ii) For K ≥ K̄, agents' decisions are as if K = K̄, and

    c_1 = c_2 = Z < c*.

We refer to this as the "fully robust" case.

(2) When there is sufficient aggregate liquidity (Z ≥ c*), agents' decisions satisfy

    c_1 = c_2 = c* ≤ Z.

The formal proof of the proposition is in the Appendix, and is complicated by the need to account for all possible consumption plans for every given θ_ω scenario when solving the max-min problem. However, there is a simple intuition that explains the results.
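The comparative statics in Proposition 2 can be illustrated numerically. The sketch below assumes a specific utility function, u(c) = ln(c), and hypothetical parameters (β, Z, φ(1), φ(2)); neither is taken from the paper. It solves the first-order condition (13) by bisection and confirms that c_1(K) decreases toward Z as K rises to K̄.

```python
# A sketch of Proposition 2 under an assumed utility u(c) = ln(c),
# so u'(c) = 1/c. With beta = 0.5, c* solves u'(c*) = beta, i.e. c* = 2,
# and Z = 1 < c* (insufficient aggregate liquidity). All values are
# illustrative, not taken from the paper.
beta, Z = 0.5, 1.0
phi1, phi2 = 0.4, 0.1
K_bar = (phi1 - phi2) * (1.0 / Z - beta) / (4.0 / Z)  # threshold in Prop. 2

def solve_c1(K):
    """Solve the FOC (13): (phi1/2 - K)/c1 = (phi2/2 + K)/(2Z - c1)
    + beta*(phi1 - phi2)/2, with c2 = 2Z - c1, by bisection."""
    K = min(K, K_bar)  # for K >= K_bar decisions are as if K = K_bar
    def foc(c1):
        return ((phi1 / 2 - K) / c1
                - (phi2 / 2 + K) / (2 * Z - c1)
                - beta * (phi1 - phi2) / 2)
    lo, hi = 1e-9, 2 * Z - 1e-9
    for _ in range(200):   # foc is decreasing in c1
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if foc(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

c1_bench = solve_c1(0.0)     # benchmark: c1 = 4/3 > Z, c2 = 2/3 < Z
c1_robust = solve_c1(K_bar)  # fully robust: c1 = c2 = Z
assert abs(c1_bench - 4.0 / 3.0) < 1e-6
assert abs(c1_robust - Z) < 1e-6

# c1(K) decreasing (so c2(K) = 2Z - c1(K) increasing):
ks = [0.0, 0.01, 0.02, 0.03, K_bar]
c1s = [solve_c1(k) for k in ks]
assert all(a > b for a, b in zip(c1s, c1s[1:]))
```

With these numbers K̄ = 0.0375, and the benchmark solution c_1 = 4/3, c_2 = 2/3 satisfies c_2 < Z < c_1 < c* as the proposition states.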

We show in the Appendix that c_T^{2,1} and c_T^{2,2} are always equal to zero. Dropping these controls, the problem simplifies to

    max_{c_1, c_2, c_T^{1,no}} min_{θ_ω ∈ Θ}  φ_ω(1) u(c_1) + φ_ω(2) u(c_2) + β ((φ(1) − φ(2))/2) c_T^{1,no}.

For the case of insufficient aggregate liquidity, the resource constraints give

    c_T^{1,no} = 2Z − c_1,    c_2 = 2Z − c_1.

Then the first-order condition for the max problem for a given value of θ_ω is

    φ_ω(1) u'(c_1) = φ_ω(2) u'(c_2) + β (φ(1) − φ(2))/2.

In the benchmark case, the probabilities are φ_ω(1) = φ(1)/2 and φ_ω(2) = φ(2)/2, which yield the solution calling for more liquidity to whoever is affected by the first shock (c_1 > c_2). When K > 0, agents are uncertain over whether their shocks are early or late relative to other agents. Under the maximin decision rule, agents use the worst-case probability in making decisions. Thus, they bias up the probability of being second relative to that of being first.10 When K is small, agents' first-order condition is

    (φ(1)/2 − K) u'(c_1) = (φ(2)/2 + K) u'(c_2) + β (φ(1) − φ(2))/2.

As K becomes larger, c_2 increases toward c_1. For K sufficiently large, c_2 is set equal to c_1. This defines the threshold K̄ of the "fully robust" case. In this case, agents are insulated against their uncertainty over whether their shocks are likely to be first or second.

E. Flight to Quality

A flight to quality episode can be understood in our model as a comparative static across K. To motivate this comparative static within our model, let us introduce a date −1 as a contracting date for agents. Each agent has Z < c* units of the good at this date and has preferences as described earlier (only over consumption at date 1, 2, and/or date T). At date 0, a value of K is realized to be either K = 0 or K > 0.
The K > 0 event is a low probability unusual event that may trigger flight to quality. For example, the K > 0 event may be that the downgrade of a top name is imminent in the credit derivatives market. Today (i.e., date −1) market participants know that such an event may transpire and also are aware that in the event there will be considerable uncertainty over

10 In the solution, agents have distorted beliefs and in particular disagree: Agent ω thinks his θ_ω = K, but he also knows that ∫_{ω ∈ Ω} θ_ω dω = 0. That is, a given agent thinks that all other agents on average have a θ_ω = 0, but the agent himself has the worst-case θ_ω. This raises the question of whether it is possible for the planner to design a mechanism that exploits this disagreement in a way that agents end up agreeing. We answer this question in the Appendix, and conclude that allowing for a fuller mechanism does not alter the solution.

outcomes. At date −1, agents enter into an arrangement, where the terms of the contract are contingent on the state K, as dictated by Proposition 2. We can think of the flight to quality in comparing the contracts across the states.11

In this subsection we discuss three concrete examples of flight to quality events in the context of our model. Our first two examples identify the model in terms of the financial intermediation implementation discussed earlier, while the last example identifies the model in terms of the contingent claims implementation.

The first example is one of uncertainty-driven contagion and is drawn from the events of the fall of 1998. We interpret the agents of our model as the trading desks of an investment bank. Each trading desk concentrates in a different asset market. At date −1 the trading desks pool their capital with a top-level risk manager of the investment bank, retaining c_2 of capital to cover any needs that may arise in their particular market ("committed capital"). They also agree that the top-level risk manager will provide an extra c_1 − c_2 > 0 to cover shocks that hit whichever market needs capital first ("trading capital"). At date 0, Russia defaults. An agent in an unrelated market — that is, a market in which shocks are now no more likely than before, so that φ_ω(1) + φ_ω(2) is unchanged — suddenly becomes concerned that other trading desks will suffer shocks first and hence that the agent's trading desk will not have as much capital available in the event of a shock. The agent responds by lobbying the top-level risk manager to increase his committed capital up to a level of c_1 = c_2. As a result, every trading desk now has less capital in the (likelier) event of a single shock. Scholes (2000) argues that during the 1998 crisis, the natural liquidity suppliers (hedge funds and trading desks) became liquidity demanders.
In our model, uncertainty causes the trading desks to tie up more of the capital of the investment bank. The average market has less capital to absorb shocks, suggesting reduced liquidity in all markets.

In this example, the Russian default leads to less liquidity in other unrelated asset markets. Gabaix, Krishnamurthy, and Vigneron (2006) present evidence that the mortgage-backed securities market, a market unrelated to the sovereign bond market, suffered lower liquidity and wider spreads in the 1998 crisis. Note also that in this example there is no contagion effect if Z is large, as the agents' trading desk will not be concerned about having the necessary capital to cover shocks when Z > c*. Thus, any realized losses by investment banks during the Russian default strengthen the mechanism we highlight.

11 An alternative way to motivate the comparative static is in terms of the rewriting of contracts. Suppose that it is costless to write contracts at date −1, but that it costs a small amount to write contracts at date 0. Then it is clear that at date −1, agents will write contracts based on the K = 0 case of Proposition 2. If the K > 0 event transpires, agents will rewrite the contracts accordingly. We may think of a flight to quality in terms of this rewriting of contracts. Note that the only benefit in writing a contract at date −1 that is fully contingent on K is to save the rewriting costs. In particular, if this cost is zero it is not possible to improve the allocation by signing contingent date −1 contracts. Agents are identical at both date −1 and at date 0, so that there are no extra allocation gains from writing the contracts early.

Our second example is a variant of the classical bank run, but on the credit side of a commercial bank. The agents of the model are corporates. The corporates deposit Z in a commercial bank at date −1 and sign revolving credit lines that give them the right to c_1 if they receive a shock. The corporates are also aware that if banking conditions deteriorate (a second wave of shocks) the bank will raise lending standards/loan rates so that the corporates will effectively receive only c_2 < c_1. The flight to quality event is triggered by the commercial bank suffering losses and corporates becoming concerned that the two-wave event will transpire. They respond by preemptively drawing down credit lines, effectively leading all firms to receive less than c_1. Gatev and Strahan (2006) present evidence of this sort of credit line run during periods when the spread between commercial paper and Treasury bills widens (as in the fall of 1998).

The last example is one of the interbank market for liquidity and the payment system. The agents of the model are all commercial banks that have Z Treasury bills at the start of the day. Each commercial bank knows that there is some possibility that it will suffer a large outflow from its reserve account, which it can offset by selling Treasury bills. To fix ideas, suppose that bank A is worried about this happening at 4pm. At date −1, the banks enter into an interbank lending arrangement so that a bank that suffers such a shock first receives credit on advantageous terms (worth c_1 of T-bills). If a second set of shocks hits, banks receive credit at worse terms of c_2 (say, the discount window). At date 0, 9/11 occurs. Suppose that bank A is a bank outside New York City that is not directly affected by the events, but that is concerned about a possible reserve outflow at 4pm.
However, now bank A becomes concerned that other commercial banks will need liquidity and that these needs may arise before 4pm. Then bank A will renegotiate its interbank lending arrangements and become unwilling to provide c_1 to any banks that receive shocks first. Rather, it will hoard its Treasury bills of Z to cover its own possible shock at 4pm. In this example, uncertainty causes banks to hoard resources, which is often the systemic concern in a payments gridlock (e.g., Stewart (2002) and McAndrews and Potter (2002)).

The different interpretations we have offered show that the model's agents and their actions can be mapped into the actors and actions during a flight to quality episode in a modern financial system. As is apparent, our environment is a variant of the one that Diamond and Dybvig (1983) study. In that model, the sequential service constraint creates a coordination failure and the possibility of a bad crisis equilibrium in which depositors run on the bank. In our model, the crisis is a rise in Knightian uncertainty rather than the realization of the bad equilibrium. The association of crises with a rise in uncertainty is the novel prediction of our model, and one that fits many of the flight to quality episodes we have discussed in this paper. Other variants of the Diamond and Dybvig model such as Rochet and Vives (2004) associate crises with low values of commercial bank assets. While our model shares this feature (i.e., Z must be less than c*), it provides a sharper prediction through the uncertainty channel. Our model also offers interpretations of a crisis in terms of the rewriting of financial contracts triggered by an increase in uncertainty, rather than the behavior of

a bank's depositors. Of course, in practice both the coordination failures that Diamond and Dybvig highlight and the uncertainties we highlight are likely to be present, and may interact, during financial crises.

II. Collective Bias and the Value of Intervention

In this section, we study the benefits of central bank actions in the flight to quality episode of our model. We show that a central bank can intervene to improve aggregate outcomes. The analysis also clarifies the source of the benefit in our model.

A. Central Bank Information and Objective

The central bank knows the aggregate probabilities φ(1) and φ(2), and knows that the θ_ω's are drawn from a common distribution for all ω. We previously noted that this information is common knowledge, so we are not endowing the central bank with any more information than agents have. The central bank also understands that because of agents' ex ante symmetry, all agents choose the same contingent consumption plan C^s. We denote by p^{s,CB}_ω the probabilities that the central bank assigns to the different events that may affect agent ω. Like agents, the central bank does not know the true probabilities p^s_ω. Additionally, p^{s,CB}_ω may differ from p^{s,ω}.

The central bank is concerned with the equally weighted ex post utility that agents derive from their consumption plans:

    V^{CB} ≡ ∫_{ω ∈ Ω} Σ_s p^{s,CB}_ω U(C^s) dω    (14)
           = Σ_s p^s U(C^s).

The step in going from the first to the second line is an important one in the analysis. In the first line, the central bank's objective reflects the probabilities for each agent ω. However, since the central bank is concerned with the aggregate outcome, we integrate over agents, exchanging the integral and summation, and arrive at a central bank objective that only reflects the aggregate probabilities p^s.
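The exchange of integral and summation can be illustrated with a discrete set of agents. In the sketch below, the distortions θ are hypothetical values that average to zero; averaging the agents' state probabilities recovers probabilities that depend only on the aggregates φ(1) and φ(2).

```python
# A discrete-agent sketch of the aggregation step in (14): with agent-level
# probability distortions theta_i that average to zero, the average of the
# agents' state probabilities depends only on the aggregates phi1, phi2.
# Parameters are illustrative, not from the paper.
phi1, phi2 = 0.4, 0.1
thetas = [-0.03, -0.01, 0.0, 0.01, 0.03]   # hypothetical distortions, mean 0

def state_probs(theta):
    """State probabilities for one agent (cf. footnote 9)."""
    pw1 = phi1 / 2 - theta      # hit first
    pw2 = phi2 / 2 + theta      # hit second
    return {"2,2": pw2,
            "2,1": pw1 - (phi1 - phi2) / 2,
            "1,1": (phi1 - phi2) / 2,
            "1,no": (phi1 - phi2) / 2,
            "0": 1.0 - phi1}

avg = {s: sum(state_probs(t)[s] for t in thetas) / len(thetas)
       for s in state_probs(0.0)}

# Averaging across agents washes out the individual distortions:
for s, p in avg.items():
    assert abs(p - state_probs(0.0)[s]) < 1e-12
```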
Note that the individual probability uncertainties disappear when aggregating, and that the aggregate probabilities that appear are common knowledge (i.e., they can be written solely in terms of φ(1) and φ(2)). Finally, as our earlier analysis has shown that only c_1, c_2, c_T^{1,no} > 0 need to be considered, we can reduce the objective to

    V^{CB} = (φ(1)/2) u(c_1) + (φ(2)/2) u(c_2) + β ((φ(1) − φ(2))/2) c_T^{1,no}.

The next two subsections explain how a central bank that maximizes the objective function in (14) will intervene. For now, we note that one can view the objective in (14) as descriptive of how central banks behave: Central banks are

interested in the collective outcome, and thus it is natural that the objective adopts the average consumption utility of agents in the economy. We return to a fuller discussion of the objective function in Section D where we explain this criterion in terms of welfare and Pareto improving policies.

B. Collective Risk Management and Wasted Liquidity

Starting from the robust equilibrium of Proposition 2, consider a central bank that alters agents' decisions by increasing c_1 by an infinitesimal amount, and decreasing c_2 and c_T^{1,no} by the same amount. The value of the reallocation based on the central bank objective in (14) is

    (φ(1)/2) u'(c_1) − (φ(2)/2) u'(c_2) − ((φ(1) − φ(2))/2) β.    (15)

First, note that if there is sufficient aggregate liquidity, c_1 = c_2 = c* = u'^{−1}(β). For this case,

    (φ(1)/2) u'(c_1) − (φ(2)/2) u'(c_2) − ((φ(1) − φ(2))/2) β = 0

and equation (15) implies that there is no gain to the central bank from a reallocation.

Turning next to the insufficient liquidity case, the first-order condition for agents in the robustness equilibrium satisfies

    φ_ω(1) u'(c_1) − φ_ω(2) u'(c_2) − ((φ(1) − φ(2))/2) β = 0,

or

    (φ(1)/2 − K) u'(c_1) − (φ(2)/2 + K) u'(c_2) − ((φ(1) − φ(2))/2) β = 0.

Rearranging this equation we have that

    (φ(1)/2) u'(c_1) − (φ(2)/2) u'(c_2) − ((φ(1) − φ(2))/2) β = K (u'(c_1) + u'(c_2)).

Substituting this relation into (15), it follows that the value of the reallocation to the central bank is K (u'(c_1) + u'(c_2)), which is positive for all K > 0. That is, the reallocation is valuable to the central bank because, from its perspective, agents are wasting aggregate liquidity by self-insuring excessively rather than cross-insuring risks.
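The identity between (15) and K(u'(c_1) + u'(c_2)) can be verified numerically. The sketch below assumes u(c) = ln(c) and illustrative parameter values (not from the paper), solves the agents' first-order condition, and evaluates the reallocation gain.

```python
# A numerical check that the reallocation value (15) equals
# K*(u'(c1) + u'(c2)) at the agents' equilibrium, under the assumed
# utility u(c) = ln(c) and illustrative parameters.
beta, Z = 0.5, 1.0
phi1, phi2 = 0.4, 0.1
K = 0.02  # a partially robust case: 0 < K < K_bar = 0.0375

def foc(c1):  # equation (13) with worst-case probabilities, c2 = 2Z - c1
    return ((phi1 / 2 - K) / c1
            - (phi2 / 2 + K) / (2 * Z - c1)
            - beta * (phi1 - phi2) / 2)

lo, hi = 1e-9, 2 * Z - 1e-9
for _ in range(200):   # bisection on the decreasing function foc
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if foc(mid) > 0 else (lo, mid)
c1 = (lo + hi) / 2
c2 = 2 * Z - c1

# Value of the marginal reallocation under the central bank objective (15):
gain = phi1 / 2 / c1 - phi2 / 2 / c2 - beta * (phi1 - phi2) / 2
assert abs(gain - K * (1 / c1 + 1 / c2)) < 1e-9
assert gain > 0
```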
Summarizing these results:

PROPOSITION 3: For any K > 0, if the economy has insufficient aggregate liquidity (Z < c*), on average agents choose too much insurance against receiving shocks second relative to receiving shocks first. A central bank that maximizes

the expected (ex post) utility of agents in the economy can improve outcomes by reallocating agents' insurance toward the first shock.

C. Is the Central Bank Less Knightian or More Informed than Agents?

In particular, are these the reasons the central bank can improve outcomes? The answer is no. To see this, note that any randomly chosen agent in this economy would reach the same conclusion as the central bank if charged with optimizing the expected ex post utility of the collective set of agents.

Suppose that agent ω̃, who is Knightian and uncertain about the true values of θ_ω̃, is given such a mandate. Then this agent will solve

    max_{c_1, c_2, c_T^{1,no}} min_{θ_ω̃ ∈ Θ} ∫_Ω ( φ^{ω̃}_ω(1) u(c_1) + φ^{ω̃}_ω(2) u(c_2) + β ((φ(1) − φ(2))/2) c_T^{1,no} ) dω.

Since aggregate probabilities are common knowledge we have that

    ∫_Ω φ^{ω̃}_ω(1) dω = φ(1)/2,    ∫_Ω φ^{ω̃}_ω(2) dω = φ(2)/2.

Substituting these expressions back into the objective and dropping the min operator (since now no expression in the optimization depends on θ_ω̃) yields

    max_{c_1, c_2, c_T^{1,no}} (φ(1)/2) u(c_1) + (φ(2)/2) u(c_2) + β ((φ(1) − φ(2))/2) c_T^{1,no},

which is the same objective as that of the central bank.

If it is not an informational advantage or the absence of Knightian traits in the central bank, what is behind the gain we document? The combination of two features drives our results: The central bank is concerned with aggregates, and individual agents are uncertain (Knightian) not about aggregate shocks but about the impact of these shocks on their individual outcomes. Since individual agents make decisions about their own allocation of liquidity rather than about the aggregate, they make choices that are collectively biased when looked at from the aggregate perspective.

Let us develop the collective bias concept in more detail. In the fully robust equilibrium of Proposition 2 agents insure equally against first and second shocks.
To arrive at the equal insurance solution, robust agents evaluate their first-order conditions (equation (13)) at conservative probabilities:

    φ_ω(1) − φ_ω(2) = ((φ(1) − φ(2))/2) (u'(c*)/u'(Z)).    (16)

Suppose we compute the probability of one and two aggregate shocks using agents' conservative probabilities:

    φ̄(1) ≡ 2 ∫_Ω φ_ω(1) dω,    φ̄(2) ≡ 2 ∫_Ω φ_ω(2) dω.

The "2" in front of these expressions reflects the fact that only one-half of the agents are affected by each of the shocks. Integrating equation (16) and using the definitions above, we find that agents' conservative probabilities are such that

    φ̄(1) − φ̄(2) = (φ(1) − φ(2)) (u'(c*)/u'(Z)) < φ(1) − φ(2).

The last inequality follows in the case of insufficient aggregate liquidity (Z < c*). Implicitly, these conservative probabilities overweight an agent's chances of being affected second in the two-wave event. Since each agent is concerned about the scenario in which he receives a shock last and there is little liquidity left, robustness considerations lead each agent to bias upwards the probability of receiving a shock later than the average agent. However, every agent cannot be "later than the average." Across all agents, the conservative probabilities violate the known probabilities of the first- and second-wave events.

Note that each agent's conservative probabilities are individually plausible. Given the range of uncertainty over θ_ω, it is possible that agent ω has a higher than average probability of being second. Only when viewed from the aggregate does it become apparent that the scenario that the collective of conservative agents are guarding against is impossible.

D. Welfare

We next discuss our specification of the central bank's objective in (14). Agents in our model choose the worst case among a class of priors when making decisions. That is, they are not rational from the perspective of Bayesian decision theory and therefore do not satisfy the Savage axioms for decision making. As Sims (2001) notes, this departure from rational expectations can lead to a situation where a maximin agent accepts a series of bets that have him lose money with probability one.
The appropriate notion of welfare in models where agents are not rational is subject to some debate in the literature.12 It is beyond the scope of this paper to settle this debate. Our aim in this section is to clarify the issue in the present context and offer some arguments in favor of objective (14).

At one extreme, consider a "libertarian" welfare criterion whereby agents' choices are by definition what maximizes their utility. That is, define

    V̄^{CB} = ∫_{ω ∈ Ω} min_{θ_ω ∈ Θ} Σ_s p^{s,ω} U(C^s) dω.

This is an objective function based on each agent ω's ex ante utility, which is evaluated using that agent's worst-case probabilities. The difference relative to the objective in (14) is that all utility here is "anticipatory." That is, the agent

12 The debate centers on whether or not the planner should use the same model to describe choices and welfare (see, for example, Gul and Pesendorfer (2005) and Bernheim and Rangel (2005) for two sides of the argument). See also Sims (2001) in the context of a central bank's objective.

enjoys happiness at date 0 from making a decision that avoids a worst-case outcome. Note that such a specification differs from standard expected utility whereby the agent only receives happiness at dates 1, 2, and T when the agent actually consumes.

Under the objective V̄^{CB} the agent's choices are efficient and there is no role for the central bank. We can see this immediately because the planning problem in deriving Proposition 2 was based on the latter objective function.

The objective function we use in (14) is based on ex post consumption utility, and assumes that agents do not receive any anticipatory utility. More generally, consider an objective function λ V̄^{CB} + (1 − λ) V^{CB} with λ ∈ [0, 1]. Then it is clear that as long as λ < 1, that is, the welfare function places some weight on ex post consumption utility, there is a role for the central bank. In this sense, the no-intervention case is an extreme one.

There is a further reason to restrict attention to the λ = 0 case, as in (14). Consider the following thought experiment: Suppose that we repeat infinitely many times the liquidity episode we have described. At the beginning of each episode, agent ω draws a θ_ω ∈ Θ. These draws are i.i.d. across episodes, and the agent knows that on average his θ_ω will be zero. In each episode, since agent ω does not know the θ_ω for that episode, the agent's worst-case decision rule has him using θ_ω = K. Then, V^{CB} is the average consumption utility of agent ω across all of these episodes.13

The preceding two arguments are ones in favor of using an ex post consumption utility welfare criterion, where each agent is weighted equally. The last point we discuss is when equal weighting is appropriate. Thus far, since agents are ex ante identical, a policy that improves the average agent's ex post consumption utility also improves each agent's ex post expected consumption utility.
Suppose, however, that a fraction of the agents in the economy are Bayesian (i.e., rational) and they know that their true θ_ω is equal to K. For these agents, the worst-case probabilities are truly their own probabilities. Thus, define the welfare of the rational agents as

    V^R = ∫_{ω ∈ Ω^R} Σ_s p^{s,ω} U(C^s) dω,

where Ω^R is the subset of Ω corresponding to the rational agents, and the probabilities p^{s,ω} are based on θ_ω = K.

The rest of the agents, ω ∈ Ω \ Ω^R, are Knightian with θ_ω's such that the average θ_ω across both classes of agents is zero. We define V^K in a similar way to the objective in (14) as the average ex post consumption utility of the Knightian agents.

We now have a situation where there is ex ante heterogeneity among agents so that equal weighting is no longer appropriate. Suppose that the central bank

13 Of course, in living through repeated liquidity events, an agent learns over time about the true distribution of θ_ω. However, it is still the case that along this learning path, K remains strictly positive (while shrinking) and hence the qualitative features of our argument go through for a small enough discount rate.

cannot discriminate among the two classes of agents. Is the intervention still Pareto improving? The result in Proposition 3 still applies to the Knightian agents. The central bank will compute a first-order gain in V^K from the reallocation intervention. Importantly, note that the envelope theorem implies that changing the rational agents' decisions results in only a second-order utility loss to the rational agents. That is, a small intervention means that the loss in V^R is small compared to the gain in V^K. Thus, although the central bank's policy is not Pareto improving, it involves asymmetric gains to the Knightian agents. Camerer et al. (2003) propose this type of asymmetric paternalism criterion in evaluating policies when some agents are behavioral.

E. Risk Aversion versus Uncertainty Aversion

We noted previously that from a positive standpoint our model of uncertainty aversion predicts a flight to quality when there is a "new" shock, whereas a model with extreme risk aversion predicts conservative behavior in response to any negative shock, new or not. We close this section by noting that the normative implications of uncertainty aversion also differ from those of extreme risk aversion.

Without collective bias, and regardless of the agent's degree of risk aversion, our central bank sees no reason to reallocate liquidity toward the first wave of shocks beyond the private sector's choices. We can see this because setting K = 0 in our model represents a model without uncertainty aversion. As we have imposed only weak requirements on u(·), the utility function can be chosen to represent extreme forms of risk aversion. However, the results of Proposition 3 establish that there is a gain for the central bank only if K > 0 and Z < c*. We conclude that there is a role for the central bank only in situations of Knightian uncertainty and insufficient aggregate liquidity.
Of course, not all recessionary episodes exhibit these ingredients. But there are many scenarios in which they are present, such as during October 1987 and in the fall of 1998.

III. An Application: Lender of Last Resort

The abstract reallocation experiment considered in Proposition 3 makes clear that during flight to quality episodes the central bank will find it desirable to induce agents to insure less against second shocks and more against first shocks. In this section we discuss an application of this result and consider a lender of last resort (LLR) policy in light of the gain identified in Proposition 3.

As in Woodford (1990) and Holmstrom and Tirole (1998), we assume the LLR has access to collateral that private agents do not (or at least, it has access at a lower cost). Woodford and Holmstrom and Tirole focus on the direct value of intervening using this collateral. Our novel result is that, because of the reallocation benefit of Proposition 3, the value of the LLR exceeds the direct value of the intervention. Thus, our model sheds light on a new benefit of the LLR.

The model also stipulates when the benefit is highest. As we have remarked previously, the reallocation benefit only arises in situations where K > 0 and Z < c*. This carries over directly to our analysis of the LLR: The benefits are highest when K > 0 and Z < c*. We also show that the LLR must be a last resort policy. If liquidity injections take place too often, the reallocation effect works against the policy and reduces its value.

A. LLR Policy

Formally, the central bank credibly expands the resources of agents in the two-shock event by an amount Z^G. That is, agents who are affected second in the two-wave event (s = (2, 2)) will have their consumption increased from c_2 to c_2 + 2Z^G (twice Z^G because one-half measure of agents are affected by the second shock). The resource constraints for agents (for the reduced problem) are

    c_1 + c_T^{1,no} ≤ 2Z    (17)
    c_1 + c_2 ≤ 2Z + 2Z^G.    (18)

In practice, the central bank's promise may be supported by a credible commitment to costly ex post inflation or taxation and carried out by guaranteeing, against default, the liabilities of financial intermediaries who have sold financial claims against extreme events. Since we are interested in computing the marginal benefit of intervention, we study an infinitesimal intervention of Z^G.

If the central bank offers more insurance against the two-shock event, this insurance has a direct benefit in terms of reducing the disutility of an adverse outcome. The direct benefit of the LLR is

    V^{CB,direct}_{Z^G} = ∫_Ω 2 φ_ω(2) u'(c_{2,ω}) dω = φ(2) u'(c_2).

The anticipation of the central bank's second-shock insurance leads agents to re-optimize their insurance decisions. Agents reduce their private insurance against the publicly insured second shock and increase their first-shock insurance.
The total benefit of the intervention includes both the direct benefit as well as any benefit from portfolio re-optimization:

    V_{Z^G}^{CB,total} = ∫_Ω [ φ_ω(1) u′(c_{1,ω}) dc_{1,ω}/dZ^G + φ_ω(2) u′(c_{2,ω}) dc_{2,ω}/dZ^G + β ((φ(1) − φ(2))/2) dc_{T,ω}^{1,no}/dZ^G ] dω.

From (13), the first-order condition for agent decisions in the robust equilibrium gives

    (φ(1)/2) u′(c_1) = (φ(2)/2) u′(c_2) + K (u′(c_1) + u′(c_2)) + β (φ(1) − φ(2))/2.

We simplify the expression for V_{Z^G}^{CB,total} by integrating through φ_ω(1) and φ_ω(2), and then substituting for u′(c_1) from the first-order condition. These operations yield

    V_{Z^G}^{CB,total} = β ((φ(1) − φ(2))/2) ( dc_T^{1,no}/dZ^G + dc_1/dZ^G ) + (φ(2)/2) u′(c_2) ( dc_1/dZ^G + dc_2/dZ^G ) + K (u′(c_1) + u′(c_2)) dc_1/dZ^G.

Last, we differentiate the resource constraints (17) and (18) with respect to Z^G to find

    dc_1/dZ^G + dc_T^{1,no}/dZ^G = 0,    dc_1/dZ^G + dc_2/dZ^G = 2.

We have

    V_{Z^G}^{CB,total} = φ(2) u′(c_2) + K (u′(c_1) + u′(c_2)) dc_1/dZ^G
                       = V_{Z^G}^{CB,direct} + K (u′(c_1) + u′(c_2)) dc_1/dZ^G.

The additional benefit we identify is due to portfolio re-optimization: Agents cut back on insurance against the publicly insured second shock and increase first-shock insurance, thereby moving their decisions closer to what the central bank would choose for them. In this sense, the LLR policy can help to implement the policy suggested in Proposition 3.

We also note that without Knightian uncertainty (K = 0), there is no gain (beyond the direct benefit) from the policy. Moreover, it is straightforward to see that if Z > c*, then agents will not use the additional insurance to cover their liquidity shocks, but will re-optimize in a way as to use the insurance at date T. In this case there is no gain to offering the public insurance (since dc_1/dZ^G = 0). We summarize these results as follows:

PROPOSITION 4: For K > 0 and Z < c*, the total value of the lender of last resort policy exceeds its direct value:

    V_{Z^G}^{CB,total} > V_{Z^G}^{CB,direct}.

It is important to note that under the LLR policy the central bank injects resources only rarely. As we associate the second-shock event with an extreme and unlikely event, in expectation the central bank does not promise many resources. This aspect of policy is similar to Diamond and Dybvig's (1983) analysis of a LLR. However, there are a few important differences in the mechanism through which the policies work.
As there is no coordination failure in our model, the policy does not work by ruling out a "bad" equilibrium. Rather, the policy works by reducing agents' "anxiety" that they will receive a shock last, when the economy has depleted its liquidity resources. It is this anxiety that leads agents to use a high φ_ω(2) in their decision rules. From this standpoint, it is also clear that an important ingredient in the policy is that agents have to believe that the central bank will have the necessary resources in the two-shock event to reduce their anxiety. Credibility and commitment are central to the working of our LLR policy.^14

B. Moral Hazard and Early Interventions

The policy we have suggested cuts against the usual moral hazard critique of central bank interventions. The moral hazard critique is predicated on agents responding to the provision of public insurance by cutting back on their own insurance activities. In our model, in keeping with the moral hazard critique, agents reallocate insurance away from the publicly insured shock. However, when flight to quality is the concern, the reallocation improves (ex post) outcomes on average.^15 Public and private provision of insurance are complements in our model.

This logic suggests that interventions against first shocks may be subject to the moral hazard critique, as agents' portfolio re-optimization would lead them toward more insurance against the second shock. To consider the "early intervention" case, suppose that the central bank credibly offers to increase the consumption of agents who are affected in the first shock from c_1 to c_1 + 2Z^G. The resource constraints for agents (for the reduced problem) are

    c_1 + c_T^{1,no} ≤ 2Z + 2Z^G,
    c_1 + c_2 ≤ 2Z + 2Z^G.

The direct benefit of intervention in the first shock is

    V_{Z^G}^{CB,direct,first} = ∫_Ω 2 φ_ω(1) u′(c_{1,ω}) dω = φ(1) u′(c_1).

We compute the total benefit as previously, except that we substitute agents' first-order condition using

    (φ(2)/2) u′(c_2) = (φ(1)/2) u′(c_1) − K (u′(c_1) + u′(c_2)) − β (φ(1) − φ(2))/2.

Also, using the fact that

    dc_1/dZ^G + dc_T^{1,no}/dZ^G = 2,    dc_1/dZ^G + dc_2/dZ^G = 2,

^14 In this sense the policy relates to the government bond policy of Woodford (1990) and Holmstrom and Tirole (1998), who argue that government promises are unique because they have greater collateral backing than private sector promises.
^15 Note that if the direct effect of intervention is insufficient to justify intervention, then the lender of last resort policy is time inconsistent. This result is not surprising, as the benefit of the policy comes precisely from the private sector reaction, not from the policy itself.

we find that

    V_{Z^G}^{CB,total,first} = V_{Z^G}^{CB,direct,first} − K (u′(c_1) + u′(c_2)) dc_2/dZ^G < V_{Z^G}^{CB,direct,first}.

The expected cost of the early intervention policy is much larger than that of the second-shock intervention, since the central bank rather than the private sector bears the cost of insurance against the (likely) single-shock event. Agents reallocate the expected resources from the central bank to the two-shock event, which is exactly the opposite of what the central bank wants to achieve. In this sense, interventions in intermediate events are subject to the moral hazard critique. We conclude that the lender of last resort facility, to be effective and improve private financial markets, has to be a last and not an intermediate resort.

C. Multiple Shocks

It is clear that the LLR should not intervene during early shocks and instead should only pledge resources for late shocks; but if we move away from our two-shock model to a more realistic context with multiple potential waves of aggregate shocks, how late is late?

To answer this question we extend the model to consider multiple shocks. We assume the economy may experience n = 1, ..., N waves of shocks, each affecting 1/N of the agents. The probability of the economy experiencing n waves is denoted φ(n), with φ(n) < φ(n − 1). Also, each ω's probability of being affected in the nth wave satisfies

    ∫_Ω φ_ω(n) dω = φ(n)/N.

The LLR policy takes the following form: The central bank injects N/(N − j + 1) units of liquidity for all shocks after (and including) the jth wave (j ≤ N). We also simplify our analysis by focusing on the fully robust case in which c_n is the same for all n, and by setting β = 0, thereby assuring that Z < c* and allowing us to disregard effects on c_T^{n,no} in the intervention. The term c_n rises to c_n + (N/(N − j + 1)) Z^G (i.e., Z^G injected to a measure (N − j + 1)/N of agents).

The direct value of the intervention as a function of j is

    V_{Z^G}^{CB,direct} = ∫_Ω Σ_{n=j}^{N} φ_ω(n) u′(c_{n,ω}) (N/(N − j + 1)) dω
                        = (1/(N − j + 1)) Σ_{n=j}^{N} φ(n) u′(c_1).

Agents reduce insurance against the publicly insured shocks and increase their private insurance for the rest of the shocks. The total benefit of the intervention includes both the direct benefit as well as any benefit from portfolio re-optimization:

    V_{Z^G}^{CB,total} = ∫_Ω Σ_{n=1}^{N} φ_ω(n) u′(c_{n,ω}) (dc_{n,ω}/dZ^G) dω.

From the resource constraint we have that

    Σ_{n=1}^{N} dc_{n,ω}/dZ^G = N.

In the fully robust case, c_{n,ω} and dc_{n,ω}/dZ^G are the same for all n. We then have

    V_{Z^G}^{CB,total} = Σ_{n=1}^{N} (1/N) φ(n) u′(c_1) = u′(c_1) (1/N) Σ_{n=1}^{N} φ(n).    (19)

Note that this expression is independent of the intervention rule j. In contrast, it is apparent that V_{Z^G}^{CB,direct} is decreasing with respect to j since the φ(n)'s are monotonically decreasing. Thus, the ratio

    V_{Z^G}^{CB,total} / V_{Z^G}^{CB,direct} = [ (1/N) Σ_{n=1}^{N} φ(n) ] / [ (1/(N − j + 1)) Σ_{n=j}^{N} φ(n) ]

is strictly greater than one for all j > 1 and is increasing with respect to j.

Of course, the above result does not suggest that intervention should occur only in the Nth shock. Instead, it suggests that for any given amount of resources available for intervention, the LLR should first pledge resources to the Nth shock and continue to do so until it completely replaces private insurance; it should then move on to the (N − 1)st shock, and so on.

The multiple-shock model also clarifies another benefit of late intervention. As j rises, the events that are being insured by the LLR become increasingly less likely. If we take the case where the shadow cost of the LLR resources for the central bank is constant, the expected cost of the LLR policy falls as j rises, while the expected benefit remains constant. In other words, as j rises, it is the private sector that increasingly improves the allocation of scarce private resources to early and more likely aggregate shocks, thereby reducing the extent of the flight to quality phenomenon. In contrast, the central bank plays an increasingly small role in terms of the expected value of resources actually disbursed as j increases.
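The behavior of this ratio can be checked numerically. The decreasing sequence of φ(n) values below is an illustrative assumption; u′(c_1) cancels from the ratio, so only the probabilities matter:

```python
# Numerical check of the ratio V_total / V_direct for the multiple-shock LLR policy.
# The decreasing sequence phi(n) is an illustrative assumption; the claim is that
# the ratio exceeds one for j > 1 and rises with j (u'(c_1) cancels).

def benefit_ratio(phi, j):
    """Ratio of total to direct marginal benefit when the LLR covers waves j..N."""
    N = len(phi)
    total = sum(phi) / N                      # (1/N) * sum_{n=1}^{N} phi(n)
    direct = sum(phi[j - 1:]) / (N - j + 1)   # (1/(N-j+1)) * sum_{n=j}^{N} phi(n)
    return total / direct

phi = [0.30, 0.20, 0.12, 0.07, 0.03]          # phi(n) strictly decreasing in n
ratios = [benefit_ratio(phi, j) for j in range(1, 6)]
print(ratios[0])                                        # j = 1: the ratio equals one
print(all(r > 1 for r in ratios[1:]))                   # strictly above one for j > 1
print(all(a < b for a, b in zip(ratios, ratios[1:])))   # increasing in j
```

Because φ(n) is decreasing, the average of the insured tail φ(j), ..., φ(N) falls below the average over all waves as j rises, which is exactly why later intervention raises the ratio.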
Thus, while a well-designed LLR policy may indeed have a direct effect only in highly unlikely events, the policy is not irrelevant for likely outcomes. Its main benefits come from unlocking private markets to insure more likely and less extreme events.

IV. Final Remarks

We present a model of financial crises and the role of a lender of last resort that centers on Knightian uncertainty and liquidity shortages. While Knightian uncertainty is discussed in policy circles (e.g., see Greenspan's quote in the Introduction), it is not standard in academic analyses of crises, which instead emphasize liquidation externalities (e.g., Diamond and Dybvig (1983)). Thus, rather than ending with a summary of findings, it is useful to take stock by considering points of similarity and departure between our Knightian uncertainty model and the standard liquidation externality model.

As we argue throughout the paper, an "uncertainty shock," distinct from (although possibly correlated with) a liquidation shock, is an important element in many financial crises. The uncertainty shock, like the liquidation shock, reproduces the behavior of agents during a flight to quality episode. Agents move toward holding uncontingent and safe assets, financial intermediaries hoard liquidity, and investment banks and trading desks turn conservative in their allocation of risk capital, so that capital becomes immobile across markets. On these points our model emphasizes a different mechanism but shares many of the predictions of liquidation models.

Despite this similarity, we think it is valuable to spell out the model of the uncertainty shock. At a minimum, doing so clarifies and expands the set of preconditions for crises. There may be crisis events where liquidation externalities are absent but in which there is a substantial amount of Knightian uncertainty. At the other extreme, the most severe crises in the U.S. and abroad seem to have elements of both an uncertainty and a liquidation shock. In such cases our model helps to understand why these crises may have been so severe. Furthermore, there are some events (for example, the recent liquidation of the Amaranth hedge fund's sizeable portfolio) for which the preconditions seem to fit the liquidation crisis model, but which did not trigger crises.
Our model suggests that the absence of significant uncertainty around these events may be one reason why (in the particular case of Amaranth, financial specialists had already learned from the LTCM experience). Thus, our model also helps to shed light on the "dog that did not bark." More broadly, it suggests that an important precondition for crises is the presence of "new" shocks, perhaps surrounding new and untested financial innovations. This prediction is also useful to guide policymakers on where crises may arise.

From a policymaker's perspective, our model, like the liquidation externality model, shows that there are benefits to a lender of last resort facility during a crisis. However, in our model the central issue in determining the value of an LLR facility is not only the potential for coordination failure but also the extent of uncertainty in the marketplace. For example, we prescribe that a default by a hedge fund, even one that is large, should not elicit a central bank reaction unless the default triggers considerable uncertainty in other market participants and hedge funds are financially weak.

Yet another subtle difference between the two models is in the incentive effects of policy intervention. In the liquidation model, moral hazard is often an important issue that tempers policy intervention. This is because private and public insurance are substitutes in that model. In contrast, our analysis reveals that in the uncertainty model there are dimensions in which private and public insurance are complements rather than substitutes, suggesting that the moral hazard issue is less important for uncertainty-driven crises. Having said this, a different kind of moral hazard problem may arise in the uncertainty model if agents can invest resources to affect the degree of uncertainty they face.

In the liquidation model, ex ante policy recommendations typically center on prudential risk management. For example, in many analyses there are externalities present that drive a wedge between private and social incentives to insure against a financial crisis episode. In such scenarios, ex ante regulations to reduce leverage, increase liquidity ratios, or tighten capital requirements are beneficial. In our model, an important dimension of the crisis is that there is uncertainty about outcomes. Agents cannot refer to history to understand how a crisis will unfold because the historical record may not span the event space. In such a case it is unclear whether any entity, private or public, can arrive at the appropriate ex ante risk management strategy, calling into question the feasibility of these policy recommendations. Instead, in our uncertainty model, the most beneficial ex ante actions are ones that help to reduce the extent of uncertainty should a crisis occur. In some cases, this may simply involve making common knowledge information that is known to subsets of market participants (for example, making common knowledge the portfolio positions of the major players in a market). In other cases, this may involve the central bank facilitating discussions among the private sector on how each party will react in a crisis scenario.

These points are pertinent, and can be illustrated, in the credit derivatives market. There is currently considerable uncertainty over how the downgrade of a top name will affect the credit derivatives market (see Geithner (2006)).^16
Market participants are aware that such a downgrade will occur at some point, but given the lack of history, they are uncertain of how events will unfold. Will the market absorb such a shock without losing liquidity? Could such a shock result in a credit crunch that causes the corporate sector to suffer and triggers a domino effect of downgrades? Are back-office and settlement procedures sufficient to handle such an event? Our model suggests that the central bank should stand ready to act as the LLR in the event that a downgrade triggers these uncertainties. There have also been recent moves to increase transparency and risk assessment in this market, as well as to streamline back-office settlement procedures. Our model suggests that such ex ante actions may reduce uncertainty and be beneficial.

Finally, as we note, Knightian uncertainty may often be associated with financial innovations. This suggests that crises surrounding financial innovations may be fertile ground to look empirically for the effects we have modeled, and to disentangle them from other more well understood effects. It also suggests a new perspective on the costs and benefits of innovation. For example, our model suggests that in a dynamic context with endogenous financial innovation, it is the pace of this innovation that inherently creates uncertainty and hence the potential for a flight to quality episode. Financial innovation facilitates risk sharing and leverage, but also introduces sources of uncertainty about the resilience of the new system to large shocks. This uncertainty is only resolved once the system has been tested by a crisis or near-crisis, after which the economy may enjoy the full benefits of the innovation. We are currently exploring these issues.

^16 The model may also be applied to the subprime crisis, as we discuss in Caballero and Krishnamurthy (2008).

Appendix

Event Tree and Probabilities: The probability of two waves affecting the economy is φ(2); the probability of one wave affecting the economy is φ(1) − φ(2); and the probability of no waves is 1 − φ(1). In the event tree (not reproduced here), a dashed box around the "1st wave" node signifies that agents are unsure whether they are in the upper branch (one more wave will occur) or the middle branch (no further shocks).

Consider an agent ω who may be affected in these waves. Suppose that his probability of being affected by a shock when the event is the middle branch ("1 wave") is one-half. Suppose that his probability of being affected by a first shock when the event is the upper branch ("2 waves") is ψ_ω, while his probability of being affected by a second shock is 1 − ψ_ω. Moreover, suppose that the agent is uncertain about ψ_ω, which we interpret as the agent being uncertain about his likelihood of being first or second in the case of a two-wave event.

The agent's probability of being affected by a first shock is

    φ_ω(1) = φ(2) ψ_ω + (φ(1) − φ(2))/2.

The agent's probability of being affected by a second shock is

    φ_ω(2) = φ(2)(1 − ψ_ω).
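This construction can be sanity-checked numerically: the agent's total probability of being hit does not depend on ψ_ω, so uncertainty about ψ_ω is uncertainty about ordering only. The parameter values below are illustrative assumptions:

```python
# Sanity check of the event-tree probabilities: phi_w(1) + phi_w(2) is independent
# of psi, so uncertainty about psi concerns only whether the agent is hit first or
# second. The parameter values are illustrative assumptions.

def first_shock_prob(phi1, phi2, psi):
    # hit first: two waves and ordered first, or one wave with probability 1/2 of being hit
    return phi2 * psi + (phi1 - phi2) / 2

def second_shock_prob(phi2, psi):
    # hit second: two waves and ordered second
    return phi2 * (1 - psi)

phi1, phi2 = 0.6, 0.3
totals = [first_shock_prob(phi1, phi2, psi) + second_shock_prob(phi2, psi)
          for psi in (0.1, 0.5, 0.9)]
print(totals)  # mathematically, each total equals (phi1 + phi2) / 2 = 0.45
```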

Note that

    φ_ω(1) + φ_ω(2) = φ(2) + (φ(1) − φ(2))/2 = (φ(1) + φ(2))/2,

and

    φ_ω(1) − φ(1)/2 = φ(2) ψ_ω − φ(2)/2,
    φ_ω(2) − φ(2)/2 = −(φ(2) ψ_ω − φ(2)/2).

These expressions show that the event tree is consistent with agents being certain about their probability of receiving a shock, but being uncertain about their relative probabilities of being first or second. In the text, we describe the uncertainty in terms of φ_ω(2) − φ(2)/2 rather than in terms of ψ_ω.

Proof of Proposition 2: We focus on the case of insufficient aggregate liquidity (Z < c*). The other case follows the same logic as the K = 0 case. We are looking for a solution to the problem in equation (12). We can describe this problem in the game-theoretic language often used in max-min problems. The agent chooses C_ω to maximize V(C_ω; θ_ω), anticipating that "nature" will choose θ_ω to minimize V(C_ω; θ_ω) given the agent's choice of C_ω.

The solution (C̄_ω, θ̄_ω) has to satisfy a pair of optimization problems. First, θ̄_ω ∈ argmin_{θ_ω} V(C̄_ω; θ_ω). That is, nature chooses θ_ω optimally given the agent's choice of C̄_ω. Second, C̄_ω ∈ argmax_{C_ω} V(C_ω; θ̄_ω). That is, the agent chooses C_ω optimally given nature's choice of θ̄_ω.^17

We compute

    ∂V/∂θ_ω = u(c_2) − u(c_1) + β (c_T^{2,2} − c_T^{2,1}).

Let us first ask whether there exists a solution in which ∂V/∂θ_ω < 0. If so, then clearly θ_ω = +K. Taking this value of θ_ω, let us consider the agent's problem in equation (12). First note that c_T^{2,1} = 0. To see this, suppose that c_T^{2,1} > 0. Then we can reduce c_T^{2,1} by δ and increase c_T^{2,2} by δ, and produce a utility gain of δβ(2φ_ω(2) − φ(2)) > 0 when θ_ω > 0.

With this knowledge, we rewrite the condition that ∂V/∂θ_ω < 0 as

    u(c_1) > u(c_2) + β c_T^{2,2}  ⇒  c_1 > c_2.

If Z < c*, it follows from the resource constraint that c_2 < c*. But if c_2 < c* and c_1 > c_2, then from the agent's problem we must have that c_T^{2,2} = 0 (i.e., do not save any resources for date T if these resources could be used earlier).

^17 The fact that the agent chooses C_ω before nature chooses θ_ω does not affect our problem. To see this, note that choosing first only gives the agent an advantage if the agent can induce nature to choose a θ_ω different from θ̄_ω. Suppose the agent chooses C_ω ≠ C̄_ω to increase V(·). Nature can always choose to set θ_ω = θ̄_ω; clearly, this choice reduces V below V(C̄_ω; θ̄_ω) and makes the agent strictly worse off than at the choice C_ω = C̄_ω.

Thus, we only need to consider the agent's problem in (12) for values of c_1, c_2, c_T^{1,no} > 0. The first-order condition for the agent at θ_ω = +K is

    (φ(1)/2 − K) u′(c_1) = (φ(2)/2 + K) u′(c_2) + β (φ(1) − φ(2))/2.

We note that for K = 0, the unique solution to the agent's problem is c_1 > c_2. Thus, for small values of K, a solution exists in which the agent chooses c_1 > c_2 and nature chooses θ_ω = +K. This is the partially robust solution given in the proposition.

As K becomes larger, c_1 falls, and at some point c_1 = c_2 = Z. This occurs when K̄ solves

    (φ(1)/2 − K̄) u′(Z) = (φ(2)/2 + K̄) u′(Z) + β (φ(1) − φ(2))/2,

which gives the expression for the value of K̄ defined in the proposition.

Note that if K > K̄, the solution θ_ω = +K and c_1 = c_2 still solves both the agent's and nature's optimization problems. The agent's choice is uniquely optimal at θ_ω = +K, while nature is indifferent over values of θ_ω ∈ [−K, +K]. This is the fully robust solution given in the proposition.

We have thus far shown that, considering the case in which ∂V/∂θ_ω ≤ 0, the solution given in the proposition is the only solution to the problem in (12). We conclude by showing that there are no other solutions. To do this, we only need to consider whether there exists a solution in which ∂V/∂θ_ω > 0.

Suppose there does exist such a solution. If ∂V/∂θ_ω > 0, then θ_ω = −K. We can go back through arguments similar to those previously offered to show that c_T^{2,2} and c_T^{2,1} must both be zero in this case. Then the condition that ∂V/∂θ_ω > 0 is equivalent to c_1 < c_2. The first-order condition for the agent is

    (φ(1)/2 + K) u′(c_1) = (φ(2)/2 − K) u′(c_2) + β (φ(1) − φ(2))/2.

The solution to this problem is c_1 > c_2, which is a contradiction. Thus, there does not exist a solution in which ∂V/∂θ_ω > 0.

The Expanded Planning Problem: The planning problem considered in Proposition 2 solved for allocations contingent on the number of shocks affecting the economy and the time at which a given agent is affected. In particular, we did not allow allocations to be directly contingent on agents' beliefs about their shock probabilities. This Appendix considers an expanded planning problem that allows such allocations and shows that the solution is the same as in Proposition 2. That is, allowing for more contingencies does not alter our results.

Why consider allocations contingent on beliefs? In our equilibrium, agents have distorted beliefs and in particular disagree: Agent ω thinks his θ_ω = K, but he also knows that ∫_Ω θ_ω dω = 0. That is, a given agent thinks that all other agents on average have a θ_ω = 0, but that he himself faces the worst case θ_ω. This raises the question of whether it is possible to construct a mechanism that exploits this disagreement in a way that agents end up agreeing.

Let us consider this issue formally as a mechanism design problem. Denote the "type" of an agent ω as θ_ω and denote the report of the agent as θ̂_ω. Also denote the set of reports from all agents as Θ̂ ∈ ⋃_{ω∈Ω} [−K, +K]. A consumption allocation to agent ω as a function of the reports is C(ω, Θ̂).

The utility of ω from this consumption allocation, given his type θ_ω, is

    U(C(ω, Θ̂); θ_ω) = φ_ω(1) u(c_1(ω, Θ̂)) + φ_ω(2) u(c_2(ω, Θ̂)) + β ((φ(1) − φ(2))/2) c_T^{1,no}(ω, Θ̂).

The probabilities φ_ω(1) and φ_ω(2) are a function of θ_ω. Also, we have written this problem for the simplified case in which we only need to consider c_1, c_2, c_T^{1,no} > 0 (i.e., the reduced problem in Proposition 2).

The planner's problem is to choose an allocation function C:

    max_{C(ω,Θ̂)} ∫_Ω U(C(ω, Θ̂); θ_ω) dω    (A1)

subject to resource constraints

    ∫_Ω ( c_1(ω, Θ̂) + c_2(ω, Θ̂) ) dω ≤ 2Z

and

    ∫_Ω ( c_1(ω, Θ̂) + c_T^{1,no}(ω, Θ̂) ) dω ≤ 2Z.

Finally, the "type" of the agent is also a function of the allocation:

    θ_ω ∈ argmin_{θ_ω ∈ Θ_ω} U(C(ω, Θ̂); θ_ω).    (A2)

The problem as we have written it is quite general. It describes each agent's consumption allocation as a function of the entire set of reports of agents' types, where types are interpreted to be the agents' beliefs over their shock probabilities.
We argue that the planning problem described in Section II.D subsumes the one in (A1). The strategy for the proof is to consider a relaxed version of the problem in (A1) and show that the solution to that problem is the same as the solution to the planning problem described in the text. Given this result, we conclude that allowing for a more general allocation does not affect the results.

Suppose that the planner knows the agent types θ_ω without having to rely on reports. Giving the planner more information allows the planner to implement strictly better allocations. As a result, this is a relaxed version of the problem we have written down in (A1). Thus, suppose that C(ω, Θ), where Θ ∈ ⋃_{ω∈Ω} [−K, +K] is the set of agent types, is the allocation to agent ω as a function of the agent types directly and not the reports of agents.

Note that θ_ω, as given in (A2), is a function of the consumption allocations of the planner to agent ω. That is, given consumption allocations to agent ω, the planner can directly compute θ_ω. This implies that we can drop the dependence on Θ, and the planner's choice is over C(ω): The planner chooses numbers (c_1(ω), c_2(ω), c_T^{1,no}(ω)) for each ω. This problem is the one solved in the text of the paper. The result is given in Proposition 2. Thus, we conclude that allowing for allocations that depend on the types of the agents does not alter the equilibrium.

REFERENCES

Allen, Franklin, and Douglas Gale, 1994, Limited market participation and volatility of asset prices, American Economic Review 84, 933–955.
Allen, Franklin, and Douglas Gale, 2004, Financial intermediaries and markets, Econometrica 72, 1023–1061.
Barro, Robert, 2006, Rare disasters and asset markets in the Twentieth century, The Quarterly Journal of Economics 121, 823–866.
Bernheim, Douglas, and Antonio Rangel, 2005, Behavioral public economics: Welfare and policy analysis with non-standard decision makers, NBER Working paper #11518.
Bhattacharya, Sudipto, and Douglas Gale, 1987, Preference shocks, liquidity, and central bank policy, in William A. Barnett and Kenneth J. Singleton, eds.: New Approaches to Monetary Economics, International Symposia in Economic Theory and Econometrics series (Cambridge University Press, Cambridge, UK).
Caballero, Ricardo, and Arvind Krishnamurthy, 2003, Excessive dollar debt: Underinsurance and domestic financial underdevelopment, Journal of Finance 58, 867–893.
Caballero, Ricardo, and Arvind Krishnamurthy, 2008, Musical chairs: A comment on the credit crisis, Financial Stability Review: Special Issue on Liquidity 11, Banque de France, 9–12.
Calomiris, Charles, 1994, Is the discount window necessary? A Penn Central perspective, Review, Federal Reserve Bank of St. Louis.
Camerer, Colin, Samuel Issacharoff, George Loewenstein, Ted O'Donoghue, and Matthew Rabin, 2003, Regulation for conservatives: Behavioral economics and the case for "asymmetric paternalism," University of Pennsylvania Law Review 151, 1211–1254.
Diamond, Douglas, and Philip Dybvig, 1983, Bank runs, deposit insurance, and liquidity, Journal of Political Economy 91, 401–419.
Dow, James, and Sergio Werlang, 1992, Uncertainty aversion, risk aversion, and the optimal choice of portfolio, Econometrica 60, 197–204.
Easley, David, and Maureen O'Hara, 2005, Regulation and return: The role of ambiguity, Working paper, Cornell University.
Epstein, Larry, and Martin Schneider, 2004, Recursive multiple priors, Journal of Economic Theory 113, 1–31.
Epstein, Larry, and Tan Wang, 1994, Intertemporal asset pricing under Knightian uncertainty, Econometrica 62, 283–322.
Gabaix, Xavier, Arvind Krishnamurthy, and Olivier Vigneron, 2007, Limits of arbitrage: Theory and evidence from the mortgage-backed securities market, Journal of Finance 62, 557–595.

Gatev, Evan, and Phil Strahan, 2006, Banks' advantage in hedging liquidity risk: Theory and evidence from the commercial paper market, Journal of Finance 61, 867–892.
Geithner, Timothy, 2006, Implications of growth in credit derivatives for financial stability, Remarks at the New York University Stern School of Business Third Credit Risk Conference.
Gilboa, Itzak, and David Schmeidler, 1989, Maxmin expected utility with non-unique priors, Journal of Mathematical Economics 18, 141–153.
Goldstein, Itay, and Ady Pauzner, 2005, Demand deposit contracts and the probability of bank runs, Journal of Finance 60, 1293–1328.
Gorton, Gary, and Andrew Winton, 2003, Financial intermediation, in G. M. Constantinides, M. Harris, and R. M. Stulz, eds.: Handbook of the Economics of Finance (Elsevier, Amsterdam).
Greenspan, Alan, 2004, Risk and uncertainty in monetary policy, Remarks for the AEA Meetings.
Gromb, Denis, and Dimitri Vayanos, 2002, Equilibrium and welfare in markets with financially constrained arbitrageurs, Journal of Financial Economics 66, 361–407.
Gul, Faruk, and Wolfgang Pesendorfer, 2005, The case for mindless economics, Working paper, Princeton University.
Hansen, Lars, and Thomas Sargent, 1995, Discounted linear exponential quadratic Gaussian control, IEEE Transactions on Automatic Control 40, 968–971.
Hansen, Lars, and Thomas Sargent, 2003, Robust control of forward-looking models, Journal of Monetary Economics 50, 581–604.
Hansen, Lars, Thomas Sargent, Gauhar Turmuhambetova, and Noah Williams, 2006, Robust control and model misspecification, Journal of Economic Theory 128, 45–90.
Holmstrom, Bengt, and Jean Tirole, 1998, Private and public supply of liquidity, Journal of Political Economy 106, 1–40.
Knight, Frank, 1921, Risk, Uncertainty and Profit (Houghton Mifflin, Boston).
McAndrews, Jamie, and Simon Potter, 2002, The liquidity effects of the events of September 11, 2001, Federal Reserve Bank of New York Economic Policy Review 8, 59–79.
Melamed, Leo, 1998, Comments to Nicholas Brady's: The crash, the problems exposed, and the remedies, in Robert E. Litan and Anthony M. Santomero, eds.: Brookings-Wharton Papers on Financial Services (The Brookings Institution Press, Washington, DC).
Rochet, Jean-Charles, and Xavier Vives, 2004, Coordination failures and the lender of last resort: Was Bagehot right after all? Journal of the European Economic Association 2, 1116–1147.
Routledge, Bryan, and Stanley Zin, 2004, Model uncertainty and liquidity, NBER Working paper #8683.
Scholes, Myron, 2000, Crisis and risk management, American Economic Review 90, 17–21.
Sims, Christopher, 2001, Pitfalls of a minimax approach to model uncertainty, American Economic Review 91, 51–54.
Skiadas, Costis, 2003, Robust control and recursive utility, Finance and Stochastics 7, 475–489.
Stewart Jr., Jamie, 2002, Challenges to the payments system following September 11, Remarks by First Vice President Jamie B. Stewart, Jr. before the Bankers' Association for Finance and Trade.
Woodford, Michael, 1990, Public debt as private liquidity, American Economic Review, Papers and Proceedings 80, 382–388.
