RiskTech Forum

Misys: Risk Management Beyond VaR

Posted: 14 November 2013  |  Source: Misys


After more than twenty years of detailed development of risk measurement and management techniques, it is essential to ask why the Global Financial Crisis took so many of us by surprise. A central reason is that distributional analysis, which is the foundation for concepts such as value-at-risk, was too often treated as a fully comprehensive basis for measuring risk. In fact, short-term market volatility is one important aspect of risk, but it may actually be a misleading indicator of a system’s vulnerability to major systemic crises.
This paper discusses a number of diverse considerations that risk managers need to incorporate into their thought processes and recurring procedures if they are to fulfill their role more effectively in the future.

The Rise of Value-at-Risk

In the mid-1980s, a series of huge, well-publicised and highly embarrassing losses occurred at some of Wall Street’s biggest and supposedly most sophisticated trading firms. These events gave birth to financial risk management as a distinct professional activity. In the 25 to 30 years since then, many tools and techniques have been developed to measure, monitor and (hopefully) control risk. The common characteristic of virtually all this work is that it utilises classical statistical techniques to derive measures of short-term volatility. The poster child, or whipping boy depending on your point of view, for this approach is what we have come to know as Value-at-Risk, or VaR. Some analysts, such as Nassim Nicholas Taleb, argue that this entire enterprise was simply wrongheaded and positively dangerous. I beg to differ.
For better or worse, I am old enough to remember the world before VaR. Market risk controls consisted of a complex web of micro position limits.
In the fixed income arena, these included:
• controls on total net duration-adjusted open positions,
• limits on duration-adjusted mismatches at multiple points along the yield curve,
• a limit on the sum of the absolute values of such tenor-specific mismatches,
• gross position limits, and
• issuer concentration limits.
In the options arena, this maze of limits included controls on
• delta sensitivity for each specific underlying price or rate,
• gamma sensitivity, or the marginal change in delta as underlying prices change, and
• vega sensitivity to changes in implied volatility.
These limits usually applied both to sensitivities for individual underlying reference entities and to various aggregates of these sensitivities such as all equity positions or certain categories of equities grouped by industry, geography or credit rating.
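To make the point concrete, the sketch below shows what such a maze of limit checks can look like in code. It is purely illustrative rather than drawn from the paper: the underlying names, sectors, sensitivity figures and limit levels are all invented, and real limit systems were far more elaborate.

```python
# Illustrative sketch (hypothetical names and figures): a pre-VaR "maze" of
# sensitivity limits, with separate caps per underlying and per sector aggregate.
from collections import defaultdict

# Hypothetical per-position option sensitivities (delta/gamma/vega, in dollars).
positions = [
    {"underlying": "ACME", "sector": "industrials", "delta": 120_000, "gamma": 8_000, "vega": 15_000},
    {"underlying": "GLOBEX", "sector": "industrials", "delta": -60_000, "gamma": 5_000, "vega": 9_000},
    {"underlying": "INITECH", "sector": "technology", "delta": 200_000, "gamma": 12_000, "vega": 22_000},
]

# Hypothetical limits: one set per individual underlying, one coarser set per sector.
per_name_limits = {"delta": 150_000, "gamma": 10_000, "vega": 20_000}
sector_limits = {"delta": 250_000, "gamma": 15_000, "vega": 30_000}

def check_limits(positions):
    """Return a list of (scope, greek, exposure, limit) breaches."""
    breaches = []
    sector_totals = defaultdict(lambda: defaultdict(float))
    for pos in positions:
        for greek in ("delta", "gamma", "vega"):
            exposure = pos[greek]
            sector_totals[pos["sector"]][greek] += exposure
            if abs(exposure) > per_name_limits[greek]:
                breaches.append((pos["underlying"], greek, exposure, per_name_limits[greek]))
    for sector, greeks in sector_totals.items():
        for greek, exposure in greeks.items():
            if abs(exposure) > sector_limits[greek]:
                breaches.append((sector, greek, exposure, sector_limits[greek]))
    return breaches

for scope, greek, exposure, limit in check_limits(positions):
    print(f"{scope}: {greek} exposure {exposure:,.0f} exceeds limit {limit:,.0f}")
```

Even this toy version illustrates the problem described above: every number is precise, but nothing in the output says how much the whole book could lose.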
Very importantly, this maze of limits conveyed no instinctive sense of how much risk it allowed. Market risk committees were repeatedly asked for higher limits despite having no real sense of the risks inherent in the limits already in place. In this context, VaR emerged as the first effective communication tool between trading and general management. For the first time it was possible to aggregate risks across very different trading activities to provide some sense of enterprise-wide exposure.
Like all useful innovations, however, VaR had notable weaknesses from the beginning. The first was that, inadvertently or deliberately, it was oversold to senior management. Financial risk managers must bear some responsibility for creating a false sense of security among senior managers and watchdogs. For far too long, many were prepared to use the sloppy shorthand of calling VaR the “worst case loss.” A far better shorthand is to call VaR “the minimum twice-a-year loss.” This terminology conveys two things. First, it indicates the approximate rarity with which the stated loss threshold is breached. Second, it invites the right question, namely “How big could the loss be on those two days a year?” To put it bluntly, VaR says nothing about what lurks beyond the 1% threshold.
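A minimal sketch of the arithmetic behind that shorthand, using simulated rather than real daily P&L: a one-day 99% VaR is simply the loss exceeded on roughly 1% of days, i.e. about two to three days in a trading year of roughly 250 days, and the calculation is silent about how large those exceedances turn out to be.

```python
# Illustrative sketch (simulated data, not from the paper): one-day 99%
# historical-simulation VaR, and why it says nothing about losses beyond it.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily P&L history in dollars: mostly benign days plus a fat left tail.
pnl = np.concatenate([
    rng.normal(0, 1_000_000, 990),         # ordinary trading days
    rng.normal(-6_000_000, 2_000_000, 10),  # occasional stress days
])

# 99% one-day VaR: the loss exceeded on roughly 1% of days,
# i.e. about 2-3 trading days in a ~250-day year.
var_99 = -np.percentile(pnl, 1)
tail_losses = -pnl[pnl < -var_99]

print(f"99% one-day VaR:                    {var_99:,.0f}")
print(f"Average loss beyond the threshold:  {tail_losses.mean():,.0f}")
print(f"Worst loss beyond the threshold:    {tail_losses.max():,.0f}")
```

The VaR figure on its own is identical whether the worst day beyond the threshold loses a little more than the VaR or several times it, which is exactly the question the “minimum twice-a-year loss” phrasing invites management to ask.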

Victims of Our Hidden Assumptions
A central weakness of financial risk management has been to neglect the important distinction between “risk” and “uncertainty” that Frank Knight enunciated in his 1921 book Risk, Uncertainty and Profit. Knight defines “risk” as randomness that can be analysed using a distributional framework and “uncertainty” as randomness that cannot be so analysed. Situations in the “risk” domain are characterised by repeated realisations of random events generated by a process that exhibits stochastic stability or, at least, a high degree of stochastic inertia. In layman’s terms, this means that the nature of the randomness changes only slowly over time. Risk, in this sense, was the basic subject of Peter Bernstein’s well-known book Against the Gods: The Remarkable Story of Risk. It is not surprising that an early review by The Economist of what went wrong with risk management during the Global Financial Crisis was titled The Gods Strike Back.
