
RISK MEASURES: The Hedging Objective

Hugues Pirotte, Professor of Finance, Solvay BS, ULB

“You can easily beat the market…if you have the wrong measure.” This statement comes from a PhD thesis at the end of the 1990s, when the market started to worry about model risk, i.e. the propensity to be galvanized by numbers while working with the wrong measure, whether by construction or through its implementation.

In IAS 39/IFRS 9, one of the methods devised to test hedge effectiveness, and the most widely used, is the dollar-offset method. It consists in looking at the variations of both (A) the underlying (to-be-hedged) position and (B) the derivative used to hedge it, and dividing one by the other to see whether the ratio is close enough to 1. If it is, the two positions seem to compensate each other well.
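A minimal sketch of how such a dollar-offset check might be computed, assuming we already have the period-to-period value changes of the hedged item and of the derivative (function and variable names are illustrative, not prescribed by the standard):

```python
def dollar_offset_ratio(delta_hedged_item, delta_derivative):
    """Ratio of the change in the derivative's value to the change in
    the hedged item's value. An absolute value close to 1 suggests
    the two changes offset each other."""
    return delta_derivative / delta_hedged_item

# Example: the hedged item loses 102 while the hedge gains 98.
ratio = dollar_offset_ratio(delta_hedged_item=-102.0, delta_derivative=98.0)
print(abs(ratio))  # ~0.96, within the 0.80-1.25 band often cited in practice
```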

It is a method that is easy to understand and seems simple to implement; yet it has flaws. Being far from 1 or close to 1 does not necessarily imply that the hedge is bad (false negatives) or good (false positives), respectively. For example, when the variations are very small compared to the notional over a given period, the ratio can take strange values even though the hedge is not actually bad. Adjusted versions have been proposed, but they look more like “cooking the formula” than searching for the right approach. Pedagogically, even if simple, this method has the drawback of treating the two positions separately, which encourages management to focus on the P&L of the hedging instrument rather than on the result as a whole.
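To illustrate the small-variation problem with a made-up example: if both positions barely move over the period, a negligible mismatch can push the ratio far from 1 even though the hedge is economically sound.

```python
# Tiny changes over the period: the hedged item moves by -0.8 on a
# large notional, the derivative by +0.3. The 0.5 mismatch is
# immaterial economically, yet the dollar-offset ratio looks "bad".
delta_item, delta_hedge = -0.8, 0.3
print(abs(delta_hedge / delta_item))  # ~0.375, far outside any usual band
```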

Let’s revert to what hedging actually means. To hedge means to reduce our sensitivity to the impact of a given phenomenon. In finance, this translates into reducing the variability of a position, a budget, a cash-flow forecast, etc., thus reducing the impact of uncertainty on our firm. On that basis, any measure involving a notion of volatility would be more semantically adequate.

“To hedge means to reduce our sensitivity to the impact of a given phenomenon.”

Various types of implementation exist. Regressing the variations of B on those of A and looking at the R² would tell us how much of the variance of B is explained by A (1). Computing the volatility of the changes of the combination A+B and comparing it to that of A alone would be another (2). Their limitations are this time linked to their statistical properties rather than to their meaning. For (1), if there were a trend in the variations, we might just be capturing some spurious correlation. For (2), some derivatives only reveal their true nature in extreme cases, not in standard ones. If the volatility is computed historically, over a period of “normal” changes, this might remain unnoticed until it actually happens. You might then want to use some form of expected shortfall to highlight what happens in the “tails” of the distribution. Globally, blindly believing the result of a measure without confirming it with additional insights would be insane.
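A rough sketch of the two volatility-based checks labelled (1) and (2) above, plus a historical expected-shortfall view of the tails, on simulated value changes; the simulated data and the 95% level are illustrative choices, not part of any standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-period value changes: dA for the underlying position,
# dB for the hedging derivative (an imperfect offset of dA).
dA = rng.normal(0.0, 1.0, 250)
dB = -dA + rng.normal(0.0, 0.3, 250)

# (1) Regress dB on dA and look at the R-squared: the share of the
# variance of dB explained by dA.
slope, intercept = np.polyfit(dA, dB, 1)
residuals = dB - (slope * dA + intercept)
r_squared = 1 - residuals.var() / dB.var()

# (2) Compare the volatility of the combined position A+B with the
# volatility of the unhedged position A: a good hedge shrinks it.
vol_unhedged = dA.std(ddof=1)
vol_hedged = (dA + dB).std(ddof=1)

# Tail view: historical expected shortfall, i.e. the average loss
# beyond the 95th percentile of losses of the hedged combination.
losses = -(dA + dB)
var_95 = np.quantile(losses, 0.95)
es_95 = losses[losses >= var_95].mean()

print(f"R^2 = {r_squared:.2f}, vol {vol_unhedged:.2f} -> {vol_hedged:.2f}, ES95 = {es_95:.2f}")
```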

One of these insights could simply be a true understanding of the derivative being used, and accepting to look more closely at its critical terms. Auditors tend to like measures even when a linear derivative, such as a forward on exactly the same underlying with a similar date and amount, is used. But very few check whether the quantitative implementation interprets the instrument correctly. They are satisfied with the result because they know what the result should be for that type of instrument. Thus, most of the time, it is an exercise in getting the implementation right for a result we already know. And if a strange result is obtained, the natural reflex is to assume there is a flaw in the calculation. Yet the adequacy of the measure and its limitations can play a big role. Finally, few analysts check that all the features (you know, the footnotes in the contracts) are actually represented in the calculations.

“The idea is to model the future, not the past, still using data from the past to infer the future.”

Back to the volatility measure (σ), much has been said and developed around it. It first appeared in a text by Karl Pearson in 1894, the same author who developed the concept of correlation, although it had already been proposed earlier by Gauss under the name of “mean error”. Its main form is the well-known “average distance to the mean”, where distances are squared to avoid the compensation of positive and negative values (a version using absolute values has been proposed, but it offers less tractability in mathematical derivations):

\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(r_i - \bar{r}\right)^2}

where r_i are the returns of the underlying over past regular periods, N the number of them, and r̄ their mean. We tend to use the volatility as a measure that is kept constant over a period. In the 1980s, Engle and then Bollerslev devised a form of “stochastic volatility”, a volatility that can vary through time, and proposed a modelling known today, in its more generalized form, as GARCH. In that framework, the volatility varies but with the possibility to (a) revert to some long-term value, (b) stay close to that of the previous period, and (c) be more or less impacted by the most recent data. Note that the traditional form gives the same weight to each observation, while GARCH allows for a decaying weight as the data becomes older.
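A compact sketch of the two estimators discussed above: the equally-weighted historical standard deviation of returns, and a GARCH(1,1)-style recursion whose parameters (omega, alpha, beta, chosen here purely for illustration) control the pull towards a long-run level, the reaction to the latest return, and the weight of the previous period's variance.

```python
import numpy as np

def historical_vol(returns):
    """Equally-weighted sample standard deviation of past returns."""
    r = np.asarray(returns, dtype=float)
    return np.sqrt(((r - r.mean()) ** 2).sum() / (len(r) - 1))

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """GARCH(1,1)-style recursion: the next variance mixes a long-run
    component (omega), the latest squared return (alpha) and the
    previous variance (beta). Parameter values are illustrative."""
    r = np.asarray(returns, dtype=float)
    var = r.var()                      # start from the sample variance
    for x in r:
        var = omega + alpha * x ** 2 + beta * var
    return var

returns = np.random.default_rng(1).normal(0.0, 0.01, 500)
print(historical_vol(returns), np.sqrt(garch11_variance(returns)))
```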

This interest in trying to model the volatility better stems from its unavoidable presence in anything related to probabilities in finance: optional contracts, potential future outcomes of a project or a firm, etc. And of course, the idea is to model the future, not the past, while still using data from the past to infer the future. In that respect, it can be shown that it is easier to approach (to avoid saying “predict”) the future volatility, the “behaviour”, than the actual level of a variable. In that quest, “quants” also use the implied volatility, obtained by observing option prices and backing out the volatility that, fed into a Black-Scholes option pricing model, reproduces those prices; but it can sometimes be “too” proactive. It can indeed be strongly impacted by temporary supply and demand imbalances, in which case it no longer reflects the true underlying uncertainty. As always, no measure is “the one”; each has its limitations, and you may want more than one to corroborate your analysis.
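A sketch of how an implied volatility could be backed out of an observed option price by inverting the Black-Scholes formula numerically (plain bisection here; the quoted price and contract terms are made up for the example).

```python
from math import log, sqrt, exp, erf

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection: find the sigma at which the model price matches the
    observed option price (the call price is increasing in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative quote: a 1-year at-the-money call on a spot of 100 trading at 8.
print(implied_vol(price=8.0, S=100.0, K=100.0, T=1.0, r=0.01))
```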