Plight of the Fortune Tellers:
Why We Need to Manage Financial Risk Differently

Riccardo Rebonato

Princeton University Press 2007
A book review by Danny Yee © 2010
Plight of the Fortune Tellers looks at problems with the management of financial risk. It explores a number of ways of getting a handle on risks, of representing and managing them, but its focus is on the unavoidable problems faced in dealing with extreme, low-frequency events.

Rebonato has first-hand experience not just with the theory of risk management but with its practice, and sets it in its institutional and regulatory context. He also touches on the philosophical and epistemological background. And his presentation is accessible and entertaining, so Plight of the Fortune Tellers should appeal to quite a broad audience: the approach is non-technical, but definitely more "popular science" than "current events". (It was published in 2007, just before financial risk hit the headlines.)

From psychology Rebonato takes the idea that humans have two basic systems for analysing risks: System I is a fast, intuition-driven system, while System II involves slower deliberation (but is still prone to being misled). One of his goals is to find ways of presenting information about risks in such a way that System I intuitions will give sensible results.

A key argument of Plight of the Fortune Tellers is that we need Bayesian subjective probabilities to model risks, with frequentist methods as a limiting case. The probabilities involved with financial shocks are unlike those of coin tosses and more like the odds on the next president of the United States. And even simple probabilities involve some theoretical background: Rebonato asks us to imagine a Martian who has never seen a coin before facing a coin-tossing game. (Apparently his first choice of title was Coin-Tossing Martians and the Next President of the United States.) Another emphasis is on the calculation of probabilities from actions.

Turning to investment choices, Rebonato explains the different kinds of risk involved with "selling insurance" and "buying lottery tickets". Outside toy examples, however, things get messy. Even if we can construct a full profit-and-loss distribution it's not clear what to do with it. One approach is utility theory, with rational agents maximising consumption, but it turns out that "we cannot simultaneously explain risk aversion over very large and very small stakes with the same utility function". Another approach is prospect theory, which emphasizes gains and losses instead of consumption. This is psychologically more plausible and "more faithfully descriptive of actual behavior in financial institutions than a theory that rests on the concept of consumption". But it allows arbitrage constructions, with choices changing depending on how the options are presented.

Why do banks need risk management at all? Academic theory suggests that they should just maximise returns and let investors manage their own risks. Rebonato argues that one goal is to reduce volatility and make banks more attractive to investors.

Analysing the risks of hold-to-maturity loans, the traditional staple of banks, is one thing; complex trading books full of esoteric derivatives are another matter entirely. Bank failures following catastrophic trading losses attracted the attention of regulators, and led to the concept of "VaR", or "value at risk". The basic idea here is to take historical data and simply count the occasions on which a certain loss level is exceeded. So the "95th percentile, one-day VaR" is the level of loss that is exceeded on just 5% of days. Originally regulators used this along with a "fudge factor" for safety, but the trend has been towards increasing the percentile, to 99 and then 99.9, with some banks even talking about 99.97. Rebonato proceeds to argue that "the estimation of these percentiles is not difficult — it is meaningless".
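The counting exercise behind historical VaR is simple enough to sketch in a few lines. This is a minimal illustration, not any bank's actual methodology; the synthetic profit-and-loss series stands in for a real trading-book history.

```python
import numpy as np

# Hypothetical daily profit-and-loss history (in millions); a real
# calculation would use the bank's actual historical P&L series.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1.0, size=1000)

# 95th-percentile one-day VaR: the loss level exceeded on 5% of days.
# Losses are negative P&L, so take the 5th percentile of P&L and negate it.
var_95 = -np.quantile(pnl, 0.05)
print(f"95% one-day VaR: {var_95:.2f}m")
```

Note that nothing in this computation uses a model of the underlying process: it is pure counting, which is exactly why it breaks down for percentiles far beyond the reach of the data.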

The lower the frequency of an event and the lower the sampling rate, the more historical data we need — and it has to be data that is still relevant. To get a grip on "the 99.95th percentile of the yearly loss distribution for a corporate loan portfolio", for example, would require 2000 years of data, going back to "the famous corporate loan book crises of the Paleochristian era". (Which brings to mind those world-system theorists who want to trace Kondratieff B cycles back into the Iron Age.)
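The back-of-the-envelope arithmetic behind the "2000 years" figure is worth making explicit: to expect even one observation beyond a given percentile, you need roughly the reciprocal of the tail probability in observations. The function name and the daily-data example here are my own illustration, not the book's.

```python
# How much history is needed before a given percentile is even
# observed once, at a given sampling rate?
def years_of_data_needed(percentile, observations_per_year=1):
    """Years needed to expect one observation beyond the percentile."""
    tail_prob = 1 - percentile
    return 1 / (tail_prob * observations_per_year)

# Yearly loss distribution, 99.95th percentile: 2000 years of data.
print(years_of_data_needed(0.9995))
# Daily data (~250 trading days/year), 99th percentile: under half a year.
print(years_of_data_needed(0.99, observations_per_year=250))
```

The contrast between the two calls captures Rebonato's point: the same counting approach that is perfectly serviceable for short-horizon, high-frequency market risk becomes meaningless for rare, long-horizon events.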

This problem is clear with a simple nonparametric model, but more sophisticated statistical modelling doesn't help. Empirical or fundamental fitting may work well near the centre of distributions, but does no better with tails. Monte Carlo methods don't help, since they can't improve confidence in the choice of distribution. And the Central Limit Theorem is no rescue either: it constrains the centre of a distribution, not its extremes, so ignorance of tails is built in.

"whatever clever statistical trick we might turn to, the picture is not changed substantially. Unless we have a structural model that tells us how things should behave and that has been exogenously validated using data other than (and presumably more plentiful than) those we are examining, there is no escaping the consequences of a simple rule of thumb: if we collect 1,000 data points, we cannot say much about what happens much less frequently than once in a thousand observations."

Rebonato also explains here the difference between correlation and codependence, and the problems of "amplification and altered codependence" during extreme events.

He then returns to looking at different kinds of probabilities and his argument for the use of a Bayesian approach. With short-horizon trading-book "market risk", a frequentist approach can work:

"if the phenomenon I want to investigate is very time homogeneous, if I can collect data very frequently, if what I am interested in happens not too rarely, and if the horizon of my statistical prediction is short, I am clearly in Frequentist Land."

But there are many ways in which the assumptions here can fail, and where a Bayesian approach copes better. And "when we try to make predictions about the riskiness of long-term portfolios our approach should be much more heavily reliant on subjective probability than frequentist probability". (Rebonato doesn't provide any details of just how one might go about doing this; that is the subject of his more recent book Coherent Stress Testing. It seems to me that what he really wants Bayesian subjective probabilities for is the ability to incorporate knowledge of the world, something which could surely be done more effectively by using causal models directly.)

The idea of "economic capital" is that a bank would maintain some level of reserves even in the absence of regulation, in order to signal its level of risk to the markets and reduce its cost of debt. So an AA rating is supposed to indicate that the chance of company survival for the next year is 99.97%. Which runs into all the problems described earlier. Rebonato suggests, however, that economic-capital reasoning is not necessarily bad — "if applied to significantly lower percentiles (of the order of 90% or below, say)".

In a final chapter Rebonato offers some positive suggestions, with advice for both risk managers and regulators. He suggests analysing return and risk separately. Frequentist approaches are no use for predicting returns, where decisions are more often driven by exogenous subjective beliefs. With risks, he recommends consideration of dispersion, asymmetry and kurtosis, and presents a simple quantitative measure for providing some intuition about the size of tail risks. And elementary heuristics can be useful, such as consideration of worst plausible outcomes, best plausible gains, and breakevens. For regulators, Rebonato emphasizes the need to keep regulations simple and expressed in the language banks use internally, lest they be ignored or given lip service.
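The dispersion, asymmetry and kurtosis that Rebonato recommends examining are just the second, third and fourth standardised moments of the return distribution. A minimal sketch, with illustrative sample data (the fat-tailed Student-t draw and the function name are my assumptions, not the book's):

```python
import numpy as np

def risk_summary(returns):
    """Dispersion, asymmetry and fat-tailedness of a return series."""
    r = np.asarray(returns, dtype=float)
    mu, sigma = r.mean(), r.std(ddof=1)
    z = (r - mu) / sigma
    return {
        "dispersion": sigma,              # how spread out outcomes are
        "asymmetry": (z**3).mean(),       # skewness: losses vs gains
        "kurtosis": (z**4).mean() - 3.0,  # excess kurtosis: fat tails
    }

# A fat-tailed sample (Student-t, 3 degrees of freedom) will show
# strongly positive excess kurtosis, unlike a normal sample.
rng = np.random.default_rng(1)
summary = risk_summary(rng.standard_t(df=3, size=5000))
print(summary)
```

Unlike an extreme percentile, these summary numbers are estimated from the bulk of the data, which is why they can give usable intuition about tail risk where a 99.97th percentile cannot.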

Rebonato follows what now seems a hard rule for popular science books in having no equations at all, but parts of Plight of the Fortune Tellers still assume a certain amount of comfort with mathematical thinking. It also assumes at least rudimentary familiarity with corporate finance and the workings of financial markets: it is written for, and from the perspective of, those working in risk management in a bank, albeit for managers rather than quants.

In some ways it may be an advantage that Plight of the Fortune Tellers was written before the Global Financial Crisis started in 2007. Unlike some of the topical books in this area, it is not tied up with "explaining the last crisis" and remains quite general. Indeed much of what Rebonato writes is relevant to understanding risks and extreme events in other areas where there is extensive quantitative data, perhaps most obviously in the environmental sciences.

The bottom line? Plight of the Fortune Tellers is the best non-specialist introduction to quantitative financial risk management I have found.

September 2010

Related reviews:
- Nassim Nicholas Taleb - The Black Swan: The Impact of the Highly Improbable
- Riccardo Rebonato - Coherent Stress Testing: A Bayesian Approach to the Analysis of Financial Stress
- books about economics + finance
- more popular science
- books published by Princeton University Press
%T Plight of the Fortune Tellers
%S Why We Need to Manage Financial Risk Differently
%A Rebonato, Riccardo
%I Princeton University Press
%D 2007
%O hardcover, notes, index
%G ISBN-13 9780691133614
%P 272pp