Issam S. STRUB[∗] and Christopher A. UDY[∗∗]

## 2. Sharpe ratio based RATS

When a trading strategy is applied to a given asset, the fluctuations in the volatility of the asset returns will typically lead to changes in the volatility of the strategy returns. In practice, portfolio managers aim to limit these variations and keep the tail risk of the strategy below a predetermined level by dynamically adjusting trade size. This section presents techniques to achieve this objective.

#### 2.1 Tail Risk Measures

The Sharpe ratio based RATS multiplies a Model RATS by a coefficient representing the past performance of the trading strategy, as measured by its Sharpe ratio and volatility σ. We begin by setting a given level of Maximal Acceptable Loss (or ruin). Then, the Sharpe ratio over a certain period of time (for example the previous 12 months) is computed, as well as the strategy's annualised volatility. Using this data, Monte Carlo simulations are run and yield a matrix of probabilities of returns (see WILLIAMS [2005]). A maximum probability (MaxP) of ruin is chosen and the loss corresponding to this probability is computed using the aforementioned matrix; more precisely, we look for a Loss Percentage such that P(Returns < Loss Percentage) = MaxP. Once this value has been computed, it is divided by the volatility σ, yielding the Unacceptable Loss Measure (ULM):

ULM = Loss Percentage / σ.

A trade size corresponding to a probability MaxP of reaching the Maximal Acceptable Loss is then selected; it is given by the quotient of the maximum Percentage Acceptable Loss over the Loss Percentage reached with probability MaxP, multiplied by the Model RATS:

RATS = (Percentage Acceptable Loss / Loss Percentage) × Model RATS.

In the practical case mentioned in THOMPSON [2007], the maximum Percentage Acceptable Loss is 10%, the chosen probability of reaching this ruin level is 5% and the volatility is σ = 12.8%. Using Monte Carlo simulations, it is found that for the current Sharpe ratio of 2.1, a probability of 5% corresponds to a loss of 7.4% = 0.58 × σ. In other words, the RATS equals 10%/7.4% ≈ 1.35 times the Model RATS. One possible extension would be to use the CS ratio (CASCON and SHADWICK [2007]) rather than the Sharpe ratio to compute the RATS, as this would better estimate the downside risk. This is examined below.
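The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes normally distributed daily returns implied by the annualised Sharpe ratio and volatility, and it defines ruin as the worst running loss over the simulation horizon.

```python
import numpy as np

def sharpe_rats(sharpe, vol, max_loss=0.10, max_p=0.05,
                horizon=252, n_sims=20_000, seed=0):
    """Monte Carlo sketch of the Sharpe ratio based RATS.

    Daily returns are drawn from a normal distribution whose mean and
    standard deviation are implied by the annualised Sharpe ratio and
    volatility (a simplifying assumption); ruin is measured as the worst
    running cumulative loss along each simulated path.
    """
    rng = np.random.default_rng(seed)
    mu_d = sharpe * vol / horizon          # implied daily mean return
    sd_d = vol / np.sqrt(horizon)          # implied daily volatility
    daily = rng.normal(mu_d, sd_d, size=(n_sims, horizon))
    paths = np.cumsum(daily, axis=1)       # cumulative log returns
    worst = paths.min(axis=1)              # worst running loss per path
    loss_pct = -np.quantile(worst, max_p)  # loss reached with probability MaxP
    ulm = loss_pct / vol                   # Unacceptable Loss Measure
    rats = max_loss / loss_pct             # multiplier applied to the Model RATS
    return loss_pct, ulm, rats
```

With the figures quoted above (Sharpe 2.1, σ = 12.8%), the routine produces a loss percentage of the same order as the 7.4% in the text, though the exact value depends on the simulation details assumed here.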

## 3. CS Ratio based RATS

The Sharpe ratio based RATS previously described relies on the Sharpe ratio and volatility of a given strategy to assess its probability of ruin using Monte Carlo simulations. This approach has two shortcomings. First, return distributions with the same Sharpe ratio and standard deviation can exhibit widely differing tails, so downside risk may not be properly estimated. Second, Monte Carlo simulations assume a known return distribution, typically normal, which significantly underestimates tail risk. To address this, the CS RATS was developed using the CS ratio introduced in CASCON and SHADWICK [2007] and the filtered historical simulation (FHS) method presented in BARONE-ADESI et al. [1999]. The CS ratio is based on the standard dispersion ω = 1/2 E(|x − µ|) rather than the standard deviation. Like the Sharpe ratio, the CS ratio has the property of being scale invariant, with the added advantage of taking into account the full distribution, including the tails, without making any assumption of normality.
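The standard dispersion is straightforward to compute empirically. The sketch below also forms a CS-style ratio as mean over standard dispersion; this particular quotient is an assumption made here by analogy with the Sharpe ratio (the precise definition is in CASCON and SHADWICK [2007]).

```python
import numpy as np

def standard_dispersion(x):
    """Standard dispersion omega = 1/2 * E(|x - mu|)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.mean(np.abs(x - x.mean()))

def cs_ratio(x):
    """CS-style ratio sketch: mean return over standard dispersion.

    ASSUMPTION: the mean-over-dispersion form is chosen by analogy with
    the Sharpe ratio; see CASCON and SHADWICK [2007] for the exact CS
    ratio definition.
    """
    x = np.asarray(x, dtype=float)
    return x.mean() / standard_dispersion(x)
```

Note that the quotient is scale invariant: multiplying all returns by a constant leaves it unchanged, as the text requires.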
The filtered historical simulation (FHS) method combines the advantages of a historical sampling technique and a parametric estimation model: the latter captures conditional heteroskedasticity but assumes a normal distribution, while the former assumes no specific distribution but does not capture conditional heteroskedasticity. The FHS method relies on a model based approach for the volatility, typically using a GARCH type model, while remaining model free in terms of the distribution. In particular, this method has the notable advantage of being able to simulate extreme losses even if they are not present in the sample of historical returns used for the simulation, thus taking tail risk into consideration more accurately.

#### 3.1 Filtered Historical Simulation

In the following we apply this method to compute a CS Ratio based RATS. We use daily returns from a given strategy and denote by Rt the daily log return at time t and by σt the standard deviation of daily returns. As mentioned above, the historical simulation method assumes that daily returns are independent and identically distributed (i.i.d.) through time; however, significant autocorrelation is usually found in the daily squared returns. To produce a sequence of i.i.d. observations, we fit a first order autoregressive AR(1) model to the daily returns:

Rt = c + φ Rt−1 + εt,  εt = σt zt,

where the standardised returns {zt} are i.i.d. with zero mean and unit variance. In our case we choose a Student t distribution for the standardised returns {zt} to account for increased tail risk, as the t distribution has fatter tails than the corresponding normal distribution.
To model the variance of returns we use an exponential GARCH (EGARCH) model developed in NELSON [1991] and NELSON and CAO [1992] to capture the asymmetry in volatility induced by large positive and negative returns. Indeed, volatility usually increases more after a large negative return than after a large positive return, due to the leverage effect (see BLACK [1976] for more on the subject). The following EGARCH(1,1) model will be used:

log σt² = κ + G log σt−1² + A (|zt−1| − E|zt−1|) + L zt−1.

After fitting an AR(1)/EGARCH(1,1) model to the daily returns, the standardised residuals (which are now i.i.d.) are used in a historical sampling technique. The i.i.d. property is important for bootstrapping, as it allows the sampling procedure to avoid the pitfalls of sampling from a population in which successive observations are serially dependent. We now simulate a number N1 of independent random trials over a given time horizon of N2 days; unlike Monte Carlo simulations, we do not make a specific distributional assumption regarding the standardised returns {zt} and instead use the past returns data. Eventually, we end up with N1 daily return series, each covering N2 trading days.
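The filter-then-resample logic can be sketched as follows. This is a deliberately simplified illustration: the AR(1) mean equation is fitted by least squares, and an EWMA (RiskMetrics-style) variance filter stands in for the EGARCH(1,1) model used in the text, since fitting an EGARCH by maximum likelihood would obscure the bootstrap step being demonstrated.

```python
import numpy as np

def fhs_paths(returns, n_paths=1000, horizon=5, lam=0.94, seed=0):
    """Filtered historical simulation sketch.

    ASSUMPTION: an EWMA variance filter replaces the paper's EGARCH(1,1).
    Standardised residuals are resampled with replacement (bootstrap) to
    build n_paths simulated return series of length horizon.
    """
    r = np.asarray(returns, dtype=float)
    rng = np.random.default_rng(seed)

    # AR(1) mean equation: r_t = c + phi * r_{t-1} + eps_t (least squares)
    X = np.column_stack([np.ones(len(r) - 1), r[:-1]])
    c, phi = np.linalg.lstsq(X, r[1:], rcond=None)[0]
    eps = r[1:] - (c + phi * r[:-1])

    # EWMA variance filter and standardised residuals z_t = eps_t / sigma_t
    var = np.empty_like(eps)
    var[0] = eps.var()
    for t in range(1, len(eps)):
        var[t] = lam * var[t - 1] + (1 - lam) * eps[t - 1] ** 2
    z = eps / np.sqrt(var)

    # Bootstrap: resample standardised residuals and rebuild return paths
    paths = np.empty((n_paths, horizon))
    for i in range(n_paths):
        prev, v = r[-1], var[-1]
        for t in range(horizon):
            e = np.sqrt(v) * rng.choice(z)      # rescale a sampled residual
            prev = c + phi * prev + e           # AR(1) recursion
            paths[i, t] = prev
            v = lam * v + (1 - lam) * e ** 2    # update conditional variance
    return paths
```

Because the sampled residuals are rescaled by the current conditional volatility, the simulated paths can produce losses larger than any in the historical sample, which is the key property of FHS noted above.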
We begin by computing the CS Ratio of the current strategy on a rolling one year window and select the simulated return series whose CS Ratio lies within a specified range of this value; we then compute the loss corresponding to a 5% probability for each of the selected series. We take the largest of these losses and use it to compute the RATS corresponding to a 10% maximum acceptable loss for the strategy, as was done for the Sharpe Ratio based RATS. This yields the CS Ratio based RATS (CS RATS), which is adjusted weekly.
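Putting the selection and sizing steps together gives the following sketch. Both the mean-over-dispersion form of the CS ratio and the width of the selection band are assumptions; the text does not spell out either choice.

```python
import numpy as np

def cs_rats(sim_series, current_cs, band=0.5, max_loss=0.10, tail_p=0.05):
    """CS RATS selection sketch.

    Keep the simulated return series whose CS ratio lies within +/- band
    of the strategy's current CS ratio, take each kept series' loss at
    probability tail_p, and size trades as max_loss over the largest such
    loss. ASSUMPTIONS: CS ratio as mean over standard dispersion, and the
    band parameter, are illustrative choices.
    """
    def cs(x):
        omega = 0.5 * np.mean(np.abs(x - x.mean()))
        return x.mean() / omega

    losses = []
    for s in sim_series:
        s = np.asarray(s, dtype=float)
        if abs(cs(s) - current_cs) <= band:
            losses.append(-np.quantile(s, tail_p))  # loss at tail_p per series
    if not losses:
        return None  # no comparable series: keep the Model RATS unchanged
    return max_loss / max(losses)  # multiplier applied to the Model RATS
```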

## 4. Risk Adjusted Trade Size with Expected Shortfall and Extreme Value Theory

The latest risk management tool developed by the Cambridge Strategy, called Extreme RATS (ERATS), uses two of the most advanced quantitative measures of extreme risk: Extreme Value Theory and Expected Shortfall.

#### 4.1 Expected Shortfall

The RATS computation from the previous sections is based on a maximum acceptable loss at a certain confidence level or, equivalently, a 95% Value at Risk (VaR) of 10%. However, Value at Risk is concerned only with the number of losses that exceed the VaR and not with the size of these losses. Many investment professionals would like a risk measure that accounts for both the magnitude of large losses and their probability of occurring. The most complete measure of large losses is the shape of the tail of the return distribution beyond the VaR. This led to the introduction of Expected Shortfall (ES), also referred to as Conditional Value at Risk (CVaR) or TailVaR (see CHRISTOFFERSEN [2003], MCNEIL et al. [2005]). The expected shortfall for a daily return distribution F at a given confidence level α is given by:

ESα = −E[R | R ≤ −VaRα],

where the VaR is:

VaRα = −F⁻¹(1 − α).

While expected shortfall presents a number of additional interesting properties, such as being a coherent risk measure and a convex function of the portfolio weights (unlike VaR), which makes it suitable for portfolio optimisation, its computation requires an explicit expression of the return distribution function F, which is usually unknown in practice. To address this problem, one possible method consists in fitting a known distribution to the left tail of historical daily returns and then making use of this distribution to compute the expected shortfall. One such distribution is the Generalised Pareto Distribution (GPD), originating from Extreme Value Theory (EVT), a branch of statistics dedicated to modelling extreme events.

Figure 1: Left: Comparative plot of the left tail of standardised residuals obtained after filtering daily returns, the GPD fitted to the standardised residuals through MLE, and the standard normal distribution. The empirical data present a noticeably fatter tail than the normal distribution, while the GPD provides a good approximation. Right: Comparison of daily cumulative returns obtained using either the Sharpe RATS, the CS RATS or the ERATS, computed weekly between January 2008 and April 2009.
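Before turning to the GPD, the empirical (historical) versions of these two measures are worth stating concretely; a minimal sketch:

```python
import numpy as np

def var_es(returns, alpha=0.95):
    """Historical VaR and Expected Shortfall at confidence level alpha.

    VaR is the loss not exceeded with probability alpha; ES is the
    average loss on the days when the VaR is exceeded. Both are reported
    as positive numbers.
    """
    r = np.asarray(returns, dtype=float)
    var = -np.quantile(r, 1 - alpha)  # VaR_a = -F^{-1}(1 - a)
    tail = r[r <= -var]               # returns at or beyond the VaR
    es = -tail.mean()                 # ES_a = -E[R | R <= -VaR_a]
    return var, es
```

By construction ES is at least as large as VaR, since it averages only the losses beyond the VaR threshold.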

#### 4.2 Extreme Value Theory

The central result in extreme value theory states that the extreme tail of a wide range of distributions can be approximately described by the GPD (BALKEMA and DE HAAN [1974], PICKANDS [1975]):

Gζ,β(x) = 1 − (1 + ζ x / β)^(−1/ζ),  ζ ≠ 0.

The shape and scale parameters ζ and β can be estimated using Maximum Likelihood Estimation (MLE) by fitting a GPD to the tail of the return distribution beyond a given threshold u. Once this is done, the expected shortfall can be computed as:

ESα = VaRα / (1 − ζ) + (β − ζ u) / (1 − ζ),

where the VaR for a GPD can be estimated by:

VaRα = u + (β / ζ) [ (N/Nu (1 − α))^(−ζ) − 1 ],

with N the total number of observations and Nu the number of observations exceeding the threshold u.
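These peaks-over-threshold estimators (standard in MCNEIL et al. [2005]) translate directly into code once ζ, β and u are known; the sketch below takes the fitted parameters as given rather than performing the MLE step.

```python
def gpd_var_es(zeta, beta, u, n_obs, n_exceed, alpha=0.95):
    """VaR and ES implied by a GPD fitted to losses beyond threshold u.

    Standard peaks-over-threshold formulas, valid for shape zeta in
    (0, 1):
        VaR_a = u + (beta/zeta) * ((n_obs/n_exceed * (1-alpha))**(-zeta) - 1)
        ES_a  = VaR_a / (1 - zeta) + (beta - zeta*u) / (1 - zeta)
    The fit itself (MLE of zeta, beta above u) is assumed done elsewhere.
    """
    var = u + (beta / zeta) * ((n_obs / n_exceed * (1 - alpha)) ** (-zeta) - 1)
    es = var / (1 - zeta) + (beta - zeta * u) / (1 - zeta)
    return var, es
```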

#### 4.3 Practical Implementation

The preceding result requires that observations be i.i.d.; therefore, we start by filtering daily returns using the AR(1)/EGARCH(1,1) model with a t distribution, as described earlier, and apply EVT to the standardised residuals (see MCNEIL and FREY [2000], NYSTRÖM and SKOGLUND [2002]), with a GPD fitted to the tails through MLE. An example of this technique is presented in Figure 1; the normal distribution is clearly inadequate to model the left tail, while the GPD provides a much more suitable approximation. Once this is done, we obtain the shape and scale parameters corresponding to the GPD fitted to the left tail and replace these values in the expected shortfall formula above to compute the expected shortfall at the desired confidence level (95% in this case). This value is then used to obtain the RATS level using the following formula:

ERATS = (Max ES / ES) × Model RATS,

Figure 2: Logarithm of the Omega function for each strategy and for a standard normal distribution, as a function of standardised daily returns. The Omega curve corresponding to the ERATS is substantially higher than the two others, both for the left (downside) and the right (upside) part.

where ERATS designates the RATS level computed using EVT. The maximum acceptable ES replaces the maximum VaR used in the previous RATS formulæ presented in this article. The Max ES can be computed through the rule of thumb that one would not want an ES higher than the one corresponding to the Max VaR for a normal distribution; this value, in turn, is easily obtained from the VaR since, for normal distributions at the 95% confidence level, ES and VaR are related by the formula ES ≃ 1.26 × VaR. This yields the ERATS level and, as was done with the CS RATS, the daily returns for the following week are adjusted using the ERATS as the leverage indicator. Note that there is no longer a simulation step as in the CS RATS: this is not necessary, as the ES coupled with EVT takes into account the entire left tail of the return distribution, thereby modelling downside risk accurately. This, in turn, means that the computational time is dramatically reduced, so the ERATS algorithm can be readily integrated into trading platforms to provide real time extreme risk management. We now turn to the application of the previous approaches to compute RATS levels from the daily returns recorded by the Asian strategy from January 2008 to April 2009, using the Sharpe ratio based RATS, the CS RATS and the ERATS.
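The rule of thumb is easy to verify and to apply. For a normal distribution, ESα = σ·φ(zα)/(1 − α) and VaRα = σ·zα, so their ratio is φ(zα)/((1 − α) zα), which is about 1.254 at α = 95%; the sketch below hard-codes the 95% quantile rather than inverting the normal CDF.

```python
import math

def normal_es_over_var(alpha=0.95):
    """ES/VaR ratio for a normal distribution at confidence level alpha.

    Ratio = pdf(z_a) / ((1 - alpha) * z_a). The 95% standard normal
    quantile is hard-coded; a general version would invert the CDF.
    """
    z = 1.6449  # 95% standard normal quantile
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return pdf / ((1 - alpha) * z)

def erats(model_rats, max_var, es, alpha=0.95):
    """ERATS sizing sketch: Max ES is derived from the Max VaR via the
    normal-distribution rule of thumb, and the trade size scales the
    Model RATS by Max ES over the strategy's computed ES."""
    max_es = normal_es_over_var(alpha) * max_var
    return max_es / es * model_rats
```

When the strategy's ES equals the Max ES, the multiplier is exactly 1 and the Model RATS is left unchanged, which matches the intended behaviour of the risk overlay.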

## 5. Results

The numerical results are summarised in Table 1 as well as in Figure 1. The ERATS yields higher cumulative returns than the Sharpe RATS or the CS RATS with similar overall leverage; in particular, the daily returns with the ERATS have a higher CS ratio (2.8) than the returns with the Sharpe RATS (2.3). Figure 2 shows the logarithm of the Omega function (a novel risk measure introduced in KEATING and SHADWICK [2002a,b] which takes into account the entire shape of the return distribution) of standardised returns. The Omega curve corresponding to the ERATS strategy is noticeably higher than those of the Sharpe RATS and the CS RATS, both on the left side (which represents downside risk) and on the right side (corresponding to upside potential), illustrating the ability of the ERATS to reduce risk while improving returns; in particular, the left part of the Omega curve for the ERATS is very close to the curve corresponding to the normal distribution, a direct consequence of the criterion used for the expected shortfall value. The ability of the ERATS to reduce downside risk (originating from the traditionally fatter left tails encountered in many trading strategies) while preserving upside potential is confirmed in Table 1, as the ERATS strategy presents the lowest Max Daily Drawdown and the highest Max Daily Gain.
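For completeness, the Omega function compared in Figure 2 can be estimated empirically as the expected gains above a return threshold divided by the expected losses below it (KEATING and SHADWICK [2002a]); a minimal sketch:

```python
import numpy as np

def omega(returns, threshold=0.0):
    """Empirical Omega ratio at a given return threshold.

    Omega(L) = E[(X - L)+] / E[(L - X)+]: expected gains above the
    threshold divided by expected losses below it.
    """
    r = np.asarray(returns, dtype=float)
    gains = np.maximum(r - threshold, 0.0).mean()   # upside beyond L
    losses = np.maximum(threshold - r, 0.0).mean()  # downside below L
    return gains / losses
```

Sweeping the threshold over the range of standardised returns and plotting log Omega reproduces curves of the kind shown in Figure 2.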


#### References

A. BALKEMA and L. DE HAAN. Residual life time at great age. Annals of Probability, 2:792–804, 1974.
G. BARONE-ADESI, K. GIANNOPOULOS, and L. VOSPER. VaR without correlations for nonlinear portfolios. Journal of Futures Markets, 19(April):583–602, 1999.
F. BLACK. Studies in stock price volatility changes. Proceedings of the Meeting of the Business and Economic Statistics Section. American Statistical Association, pages 177–181, 1976.
A. CASCON and W. F. SHADWICK. The standard dispersion and its application to risk analysis for portfolio management. The Journal of Investment Consulting, 8(2):38–54, 2007.
P. F. CHRISTOFFERSEN. Elements of Financial Risk Management. Academic Press, 2003.
C. KEATING and W. F. SHADWICK. A universal performance measure. Technical report, The Finance Development Centre, London, 2002a.
C. KEATING and W. F. SHADWICK. An introduction to Omega. Technical report, The Finance Development Centre, London, 2002b.
A. J. MCNEIL and R. FREY. Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of Empirical Finance, 7:271–300, 2000.
A. J. MCNEIL, R. FREY, and P. EMBRECHTS. Quantitative Risk Management : Concepts, Techniques, and Tools. Princeton University Press, 2005.
D. B. NELSON. Conditional heteroskedasticity in asset returns: A new approach. Econometrica, 59:347–370, 1991.
D. B. NELSON and C. Q. CAO. Inequality constraints in the univariate GARCH model. Journal of Business and Economic Statistics, 10:229–235, 1992.
K. NYSTRÖM and J. SKOGLUND. Univariate extreme value theory, GARCH and measures of risk. Technical report, Preprint, Swedbank, 2002.
J. PICKANDS. Statistical inference using extreme order statistics. Annals of Statistics, 3:119–131, 1975.
R. THOMPSON. File note: Risk management – Ruin strategy. Technical report, The Cambridge Strategy (Asset Management) Limited, 2007.
S. WILLIAMS. How risky is your trading strategy? Technical report, HSBC Global Research, 2005.

DR ISSAM STRUB: Dr Strub is a senior member of the Cambridge Strategy research group, where he works on quantitative strategies as well as asset allocation and risk management tools; he has also authored a number of research articles in financial and scientific journals. Prior to joining the firm, Dr Strub was a graduate student at the University of California, Berkeley, where he conducted research in an array of fields ranging from Partial Differential Equations and Fluid Mechanics to Scientific Computing and Optimisation; he obtained a Ph.D. in Engineering from the University of California in 2009.

CHRIS UDY: Chris joined the Cambridge Strategy in 2009 and is responsible for the firm's ongoing research effort. His current research areas include Bayesian portfolio optimisation, heuristic performance measures, regime-switching models and Markov Chain Monte Carlo sampling techniques. Prior to joining the firm, Chris worked from 2004 for Mount Row Capital, a systematic hedge fund, where he was Chief Technology Officer responsible for developing their proprietary trading programmes across various asset classes. Before this, Chris worked for RadioScape Ltd (a London based mathematical modelling company) as a Research Engineer on their 3G Telecommunication System Design and Traffic modelling team, where he was awarded a patent for his work in the digital IF arena. Subsequently he rose to hold various research positions in their Digital Audio Broadcast (DAB) stack design and product development teams. Chris has a 1st class Honours degree in Engineering from the University of Auckland.

* Research Scientist, issam.strub@thecambridgestrategy.com, 7th Floor, Berger House, 36–38 Berkeley Square, London W1J 5AE, United Kingdom.
** Director of Research, chris.udy@thecambridgestrategy.com, Level 3, 75 Elizabeth St, Sydney, NSW 2000, Australia. 