
Issam S. STRUB[∗]

The Cambridge Strategy (Asset Management) Ltd

Abstract

This article introduces three algorithms for trade sizing with the objective of controlling tail risk or maximum drawdown when applied to a trading strategy. The first algorithm relies on historical volatility estimates, while the second uses tail risk estimates obtained by applying Extreme Value Theory (EVT) to estimate Conditional Value at Risk (CVaR); the third algorithm applies Extreme Value Theory to the drawdown distribution to compute the Conditional Drawdown at Risk (CDaR). These algorithms are applied to 10 years of daily returns from a trend following strategy trading the EURUSD and NZDMXN currency pairs. In each case, the performance of the algorithms is analysed in detail and compared to the original strategy, and their effectiveness at controlling tail risk and drawdown is evaluated. The techniques presented in the article are readily applicable by investment managers to compute adequate trade size while maintaining a constant level of tail risk or limiting maximum drawdown to a chosen value.

1. Introduction

Money management, also called position or trade sizing, consists in selecting an appropriate leverage level to be applied to a given strategy, where the leverage level is defined as the ratio of the position market value to the total assets under management (AUM) or account size. At a glance, it seems quite obvious that the ability to dynamically adjust position size can result in increased profits; indeed, an investment manager able to reduce (respectively increase) leverage before periods of underperformance (respectively outperformance) would obtain higher returns than by using constant leverage. Practical evidence confirms that most traders tend to dynamically adjust their leverage level, usually relying on heuristic rules and being subject to behavioural biases, as demonstrated in LOCKE and MANN (2003), THALER and JOHNSON (1990). However, while a significant literature exists on the profitability of trading strategies, there are fewer articles on money management, and the limited literature on this topic often presents techniques which are not readily applicable in practice by traders. Additionally, these money management techniques rarely make use of modern econometrics and risk management tools such as Extreme Value Theory or time series analysis.

Historically, money management has been applied to gambling as well as trading, typically with the aim of maximising growth rate. Indeed, in KELLY (1956), information theory and expected utility theory (introduced in BERNOULLI (1738, 1954) in relation to the St. Petersburg paradox) were combined to obtain an optimal gambling strategy, the Kelly criterion: maximise the expected value of the logarithm of the gambler’s wealth at each bet to achieve an asymptotically optimal growth rate; such a strategy also minimises the expected time to reach a given wealth, as demonstrated in BREIMAN (1961) in the case where stock returns are assumed to be independent, identically distributed (i.i.d.). These results were applied to investment management in LATANÉ (1959), who advised investors to maximise the geometric mean of their portfolios. Later, the optimality of maximising expected log return was extended with no restrictions on the distribution of the market process in ALGOET and COVER (1988) and, in BROWNE and WHITT (1996), the Kelly criterion was generalised to the case in which the underlying stochastic process is a simple random walk in a random environment; OSORIO (2009) devised an analogue of the Kelly criterion for fat-tailed returns modelled by a Student's t-distribution, using a logarithmic prospect rather than utility function.

In HAKANSSON (1970), closed form optimal consumption, investment and borrowing strategies were obtained for a class of utility functions corresponding to an investor looking to maximise the expected utility from consumption over time, given an initial capital position and a known deterministic non-capital income stream. ROLL (1973) studied the implications of growth optimum portfolios in terms of observed stock returns for investors selecting such a portfolio. THORP (1971) applied the Kelly criterion of maximising logarithmic utility to portfolio choice and compared it to mean-variance portfolio theory, concluding that the Kelly criterion does not always yield mean-variance efficient portfolios.
SAMUELSON (1971, 1979) showed that while maximising the geometric mean utility at each stage may be asymptotically optimal, this does not imply that such a strategy is optimal in finite time; he also highlighted the risk involved in using the Kelly criterion, namely that of excessive leverage leading to significant drawdowns. Later on, more work on the properties of the Kelly criterion and its application to finance was published in ROTANDO and THORP (1992); MACLEAN et al. (2004, 2010, 2011a,b); THORP (2006). The concept of optimal f, an extension of the Kelly criterion, was developed in VINCE (2007, 2009), and the differences between the two approaches were detailed in VINCE (2011). Additionally, a number of books on money management aimed at practitioners have analysed and backtested the Kelly criterion, the optimal f and other approaches; see for example GEHM (1983); BALSARA (1992); GEHM (1995); JONES (1999); STRIDSMAN (2003); MCDOWELL (2008). Finally, the optimal f technique was applied to futures trading and compared to more naive approaches in ANDERSON and FAFF (2004); LAJBCYGIER and LIM (2007), each time with the conclusion that it resulted in leverage levels that would be unacceptable to most investors; this has led to heuristic approaches such as betting a fraction of the Kelly ratio (for example, half Kelly).

The common feature of traditional money management techniques such as the Kelly criterion or the optimal f is their focus on maximising wealth growth, whereas in practice, both individual traders and fund managers are mostly concerned with maintaining a stable risk level through time and keeping their maximum drawdown below a chosen threshold; otherwise, they will most likely suffer significant redemptions from investors or discontinue trading their strategy altogether. As such, maximising the rate of return is usually secondary to controlling risk. However, there is very little available on this topic in the existing literature, whether originating from academics or practitioners. Note that the practical shortcomings of utility maximisation had already been noted in ROY (1952) before the introduction of the Kelly criterion, as the author explained that for the average investor, the first objective is to limit the risk of a disaster occurring: “In calling in a utility function to our aid, an appearance of generality is achieved at the cost of a loss of practical significance and applicability in our results. A man who seeks advice about his actions will not be grateful for the suggestion that he maximises expected utility.”

The techniques presented in this article were born out of the need for position sizing rules that could be computed and applied in practice and would result in consistent risk levels when implemented through varied market conditions and changing trading strategy performance. Algorithms designed to control return tail risk are presented first, while drawdown control is tackled at a later stage. These techniques are then applied to daily returns resulting from implementing a simple technical trading rule on the EURUSD and NZDMXN currency pairs; a detailed analysis and comparison of the performance of each money management technique concludes the article.

2. Tail Risk Control

When a trading strategy is applied to a given asset, the fluctuations in the volatility of the asset returns will typically lead to changes in the volatility of the strategy returns. In practice, portfolio managers aim to limit these variations and keep the tail risk of the strategy below a predetermined level by dynamically adjusting trade size. This section presents techniques to achieve this objective.

2.1 Tail Risk Measures

A common measure of tail risk is Value at Risk (VaR) (BEDER (1995); DUFFIE and PAN (1997); JORION (2006)), which is defined as the minimum loss experienced over a given time horizon with a given probability. When applied to historical daily returns, VaR can be computed by ordering the daily returns and selecting the quantile corresponding to the chosen confidence level (for example 95%). Unfortunately, VaR is concerned only with the number of losses that exceed the VaR threshold and not the magnitude of these losses; to obtain a more complete measure of large losses, one needs to examine the entire shape of the left tail of the return distribution beyond the VaR threshold, which leads to the Conditional Value at Risk (CVaR), also referred to as Expected Shortfall, Tail VaR or Mean Shortfall (ARTZNER et al. (1999); CHRISTOFFERSEN (2003); HARMANTZIS et al. (2006); MCNEIL et al. (2005)). CVaR can be defined as the average expected loss at a given confidence level; for example, at the 95% confidence level, the CVaR represents the average expected loss on the worst 5 days out of 100, whereas the VaR is the minimum loss on those days. In mathematical terms, the VaR and CVaR for a daily return distribution F at a confidence level α are given by:

$$\mathrm{VaR}_{\alpha} = -F^{-1}(1-\alpha) \qquad (1)$$

$$\mathrm{CVaR}_{\alpha} = \mathbb{E}\left[\,-R \mid -R \geq \mathrm{VaR}_{\alpha}\,\right] = \frac{1}{1-\alpha}\int_{\alpha}^{1}\mathrm{VaR}_{s}\,ds \qquad (2)$$

Computing CVaR requires an explicit expression of the portfolio return distribution function F, which is usually unknown in practice. However, if historical daily returns are assumed to follow a normal (or Gaussian) distribution, VaR and CVaR can be easily obtained from the standard deviation σ and mean µ of returns; for example, at the 95% level, standard deviation, VaR and CVaR are related by:

$$\mathrm{VaR}_{95\%} = 1.645\,\sigma - \mu, \qquad \mathrm{CVaR}_{95\%} = 2.063\,\sigma - \mu \qquad (3)$$
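As a quick illustration, here is a minimal Python sketch of these normal-distribution relations (the function and variable names are ours, not the article's); it confirms that a 95% VaR of 1.5% corresponds to a 95% CVaR of roughly 1.89%, a figure used again in the results analysis of Section 4.

```python
from scipy.stats import norm

def normal_var_cvar(sigma, mu=0.0, alpha=0.95):
    """VaR and CVaR of a normal return distribution, as in Equation (3)."""
    z = norm.ppf(alpha)                             # 1.645 for alpha = 0.95
    var = z * sigma - mu                            # minimum loss on the worst 5% of days
    cvar = sigma * norm.pdf(z) / (1 - alpha) - mu   # average loss on those days (2.063 * sigma)
    return var, cvar

# A 1.5% VaR target implies a CVaR of about 1.89% under normality.
sigma = 0.015 / norm.ppf(0.95)
print(normal_var_cvar(sigma))   # approximately (0.0150, 0.0188)
```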


Figure 1: Top: Comparison of Generalised Pareto and normal distributions. Note that the Generalised Pareto Distribution models the left tail of the daily returns much more accurately than the normal distribution. Bottom: 95% CVaR for each distribution. The CVaR is represented by the shaded area under the green (GPD) or red (normal distribution) curve. In the present case, it is apparent that the CVaR computed using a normal distribution underestimates the downside risk when compared to a GPD.

2.2 Volatility based Position Sizing

The first position sizing method consists in computing the historical volatility of daily returns generated by the strategy, converting this volatility into a VaR number using Equation (3) above, and adjusting leverage in order to match the target VaR level. The historical volatility of the strategy can be computed using the RiskMetrics exponentially weighted moving average introduced in ZANGARI (1996):

$$\sigma^{2} = (1-\lambda)\sum_{t=1}^{T}\lambda^{t-1}\left(r_{t}-\bar{r}\right)^{2} \qquad (4)$$

where T is the length of the estimation window, λ the decay factor, $r_t$ the daily return $t-1$ days before the estimation date, and $\bar{r}$ the mean return over the estimation window.
Once the volatility has been computed, it can be converted into a VaR number using Equation (3), and the leverage or position size is adjusted through the formula:

$$\text{Leverage Adjustment} = \frac{\mathrm{VaR}_{\text{target}}}{\mathrm{VaR}_{\text{strategy}}} \qquad (5)$$

This process is typically implemented with a chosen frequency (daily, weekly, monthly) depending on the average holding period of the trading strategy.
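A minimal sketch of this computation in Python (the names and the finite-window weight normalisation are our assumptions):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94, T=74):
    """Exponentially weighted volatility of the last T daily returns (Equation 4)."""
    r = np.asarray(returns)[-T:]
    weights = (1 - lam) * lam ** np.arange(len(r))[::-1]   # most recent return weighted highest
    weights /= weights.sum()                               # renormalise over the finite window
    return np.sqrt(np.sum(weights * (r - r.mean()) ** 2))

def leverage_adjustment(var_target, var_strategy):
    """Equation (5): scale positions so the strategy VaR matches the target."""
    return var_target / var_strategy
```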

2.3 Extreme Value Theory based Position Sizing

2.3.1 Extreme Value Theory

The previous money management method relies on the assumption that daily strategy returns are normally distributed. However, in practice, this is unlikely to be the case and tail risk can be more accurately measured using tools originating from Extreme Value Theory (EVT), a branch of statistics dedicated to modelling extreme events introduced in BALKEMA and DE HAAN (1974); PICKANDS (1975). The central result in Extreme Value Theory states that the extreme tail of a wide range of distributions can be approximately described by the Generalised Pareto Distribution (GPD) with shape and scale parameters ξ and β:

$$G_{\xi,\beta}(y) = \begin{cases} 1-\left(1+\dfrac{\xi y}{\beta}\right)^{-1/\xi} & \text{if } \xi \neq 0 \\[2mm] 1-e^{-y/\beta} & \text{if } \xi = 0 \end{cases} \qquad (6)$$

where β > 0, and the support of $G_{\xi,\beta}$ is $y \geq 0$ when $\xi \geq 0$ and $0 \leq y \leq -\beta/\xi$ when $\xi < 0$.

The shape and scale parameters ξ and β can be estimated by Maximum Likelihood Estimation (MLE), fitting a GPD to the tail of the return distribution beyond a given threshold u. Once this is done, the CVaR can be computed:

$$\mathrm{CVaR}_{\alpha} = \frac{\mathrm{VaR}_{\alpha}}{1-\xi} + \frac{\beta-\xi u}{1-\xi} \qquad (7)$$

where the VaR for a GPD can be estimated by:

$$\mathrm{VaR}_{\alpha} = u + \frac{\beta}{\xi}\left[\left(\frac{(1-\alpha)\,N}{N_{u}}\right)^{-\xi}-1\right] \qquad (8)$$

with $N$ the total number of observations and $N_u$ the number of observations exceeding the threshold $u$.
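A compact sketch of this tail fit, assuming scipy is available (the threshold choice and names are illustrative, and the closed forms above require ξ ≠ 0):

```python
import numpy as np
from scipy.stats import genpareto

def evt_var_cvar(returns, alpha=0.95):
    """Fit a GPD to losses beyond the alpha-quantile and apply Equations (7)-(8)."""
    losses = -np.asarray(returns)                     # work with the loss distribution
    u = np.quantile(losses, alpha)                    # threshold u
    excesses = losses[losses > u] - u                 # exceedances over the threshold
    xi, _, beta = genpareto.fit(excesses, floc=0.0)   # MLE shape xi and scale beta
    N, Nu = len(losses), len(excesses)
    var = u + beta / xi * (((1 - alpha) * N / Nu) ** (-xi) - 1)   # Equation (8)
    cvar = var / (1 - xi) + (beta - xi * u) / (1 - xi)            # Equation (7)
    return var, cvar
```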

Note that the preceding result requires that observations be independent and identically distributed, which is often not the case for daily returns as they present some level of autocorrelation. Therefore, we start by filtering the daily returns and then apply Extreme Value Theory to the standardised residuals (see MCNEIL and FREY (2000); NYSTRÖM and SKOGLUND (2005)), with a Generalised Pareto Distribution being fitted to the tails through Maximum Likelihood Estimation. Once this is done, we obtain the shape and scale parameters and replace these values in Equation (7) to compute the CVaR at the required confidence level. Extreme Value Theory has been used over the past decade for risk management in finance, with a notable increase in the number of publications on the subject since the recent financial crisis. We refer to BALI (2003); BEIRLANT et al. (2004); CASCON and SHADWICK (2009); COLES (2001); DE HAAN and FERREIRA (2006); EMBRECHTS (2011); GHORBEL and TRABELSI (2008, 2009); GOLDBERG et al. (2008, 2009); GUMBEL (2004); HUANG et al. (2012); LONGIN (2000); MCNEIL and FREY (2000); NYSTRÖM and SKOGLUND (2005) for a sample of publications dealing with Extreme Value Theory and its applications to financial risk modelling.

The significant improvement in tail risk modelling between the volatility/normal distribution and EVT approaches is illustrated in Figure 1. We consider 1000 daily returns for a stock and fit both a normal distribution and a Generalised Pareto Distribution to the left tail of the daily returns. We can see that while both techniques yield similar VaR numbers at the 95% confidence level (in this case 7.8%), the 95% CVaR, which can be visually identified as the area under a given distribution curve left of the 95% VaR threshold, is significantly higher (by a factor of 2.4) when computed using the Generalised Pareto Distribution than when using volatility and a normal distribution assumption. Note that this is a pathological case, chosen deliberately because the difference between the two methods is readily apparent. Still, relying on volatility and normal distribution assumptions can lead to significantly underestimating the tail risk generated by a given strategy, a dangerous situation for any investment manager.

2.3.2 Filtered Historical Simulation

Applying Extreme Value Theory to tail risk estimation requires fitting a Generalised Pareto Distribution to the left tail of the strategy returns; in practice, if 250 days are considered and the 95% confidence level is desired, this means that the GPD has to be fitted to about 12 daily returns, a number which is typically too low to guarantee convergence of the Maximum Likelihood Estimation method and which will cause a high sensitivity to changes in historical returns. To circumvent these issues, simulations can be employed, generating a much larger number of daily returns to whose left tail a GPD can be fitted more easily.

Choosing the appropriate simulation method is not necessarily straightforward. Indeed, if Monte Carlo simulations (METROPOLIS and ULAM (1949)) are selected, a distribution of returns has to be specified, usually a normal distribution, which negates the advantage of using Extreme Value Theory to estimate tail risk. Therefore, some form of historical simulation is highly preferable as it makes no assumption on the return distribution, instead relying on the past returns. However, as noted in PRITSKER (2006), such a method presents two potential issues.

First, the required sample size to obtain a statistically significant distribution is usually considered to be at least 250 days; this, in turn, raises the potential issue of not being sensitive enough to recent returns which presumably contain the most relevant information to predict future returns. To circumvent this problem, the weighted historical simulation (WHS) method was developed in BOUDOUKH et al. (1998); this method assigns probabilistic weights to the daily returns which decay exponentially with a chosen decay factor over time; thus recent returns have more influence than the more distant ones. Unfortunately, it is not clear how to select the correct time constant; also, an unintended consequence is that extreme events, which by nature occur rarely, might end up being discounted.

Second, the historical simulation method assumes that daily returns are independent and identically distributed through time, which is not particularly realistic. Indeed, it is commonly observed that the volatility of returns evolves through time and that periods of high and low volatility do not occur at randomly spaced intervals but rather tend to be clustered together. The filtered historical simulation (FHS) method presented in BARONE-ADESI et al. (1999) is an attempt to combine the advantages of the historical and parametric methods: the parametric variance-covariance method captures conditional heteroskedasticity but assumes a normal distribution, while the historical method does not assume a specific distribution but does not capture conditional heteroskedasticity. The FHS method relies on a model based approach for the volatility, typically using a GARCH type model, while remaining model free in terms of the distribution. In particular, this method has the notable advantage of being able to simulate extreme losses even if they are not present in the historical returns used for the simulation.

2.3.3 Practical Implementation

We begin by implementing the FHS method on a series of daily returns $R_t$ with standard deviation $\sigma_t$. As mentioned above, the historical simulation method assumes that daily returns are i.i.d. through time; however, significant autocorrelation can often be found in the daily squared returns. To produce a sequence of i.i.d. observations, we fit a first-order autoregressive AR(1) model to the daily returns:

$$R_{t} = c + \phi R_{t-1} + \epsilon_{t}, \qquad \epsilon_{t} = \sigma_{t} z_{t} \qquad (9)$$

where the standardised returns $\{z_t\}$ are chosen to follow a Student's t-distribution rather than a normal one to account for increased tail risk, as the t-distribution has fatter tails.
To model the variation of the returns' standard deviation, we can use a GARCH type model (BOLLERSLEV (1986); ENGLE (1982, 2001); TAYLOR (1986)) such as the GARCH(1,1):

$$\sigma_{t}^{2} = \omega + \alpha\,\epsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2} \qquad (10)$$

Alternatively, the GARCH model can be replaced by its extension, the exponential GARCH (EGARCH) model developed in NELSON (1991); NELSON and CAO (1992), to capture the asymmetry in volatility induced by large positive and negative returns. Indeed, volatility usually increases more after a large drop than after a large increase due to the leverage effect (BLACK (1976)). This model is defined by:

$$\log\sigma_{t}^{2} = \omega + \beta\log\sigma_{t-1}^{2} + \alpha\left(\left|z_{t-1}\right| - \mathbb{E}\left|z_{t-1}\right|\right) + \gamma z_{t-1} \qquad (11)$$

Once an AR(1)/GARCH(1,1) model has been fitted to the daily returns, the autocorrelation of the squared returns is usually noticeably lower and these observations can now be used in a historical simulation method. The i.i.d. property is important for bootstrapping, as it allows the sampling procedure to safely avoid the pitfalls of sampling from a population in which successive observations are serially dependent. We simulate a number of independent random trials (10,000 in this article) over a time horizon of 252 trading days; unlike Monte Carlo simulation, we do not make a specific distributional assumption regarding the standardised returns $\{z_t\}$ and instead use the past returns data. Given a sequence of past returns, we can compute past standardised returns from observed returns and estimated standard deviations as the quotient of the residual of the AR(1) model over the standard deviation. Once the historical standardised returns are known, we generate future returns by drawing standardised returns with replacement. Eventually, we end up with 10,000 daily return series, each covering 252 trading days. These daily returns are aggregated to generate a distribution of 2,520,000 daily returns to whose left tail a GPD is fitted, eventually yielding the CVaR. The high number of residuals ensures the stability of the method, as the left tail contains 126,000 returns for a 95% confidence level, which almost guarantees the convergence of the Maximum Likelihood Estimation algorithm used to fit the GPD to the left tail of the simulated return distribution. This CVaR number can be converted into a VaR number under normal distribution assumptions using Equation (3) and trade size adjusted through Equation (5). One of the advantages of using Extreme Value Theory to compute the CVaR is that the tail risk of the return distribution is measured much more accurately and is less likely to be underestimated than when relying only on the volatility based method described earlier.
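The following Python sketch outlines the filtering and bootstrap steps, assuming the third-party arch package for the AR(1)/GARCH(1,1)-t fit; for brevity it recycles the last fitted volatility instead of iterating the full GARCH recursion and AR(1) mean through the simulation, a simplification of the procedure described above.

```python
import numpy as np
from arch import arch_model   # third-party 'arch' package, used for the AR(1)/GARCH(1,1) fit

def fhs_simulate(returns, n_paths=10_000, horizon=252, seed=0):
    """Filtered historical simulation: bootstrap standardised residuals into return paths."""
    am = arch_model(100 * np.asarray(returns), mean="AR", lags=1,
                    vol="GARCH", p=1, q=1, dist="t")              # returns scaled to percent
    res = am.fit(disp="off")
    z = res.resid / res.conditional_volatility                    # standardised residuals, approx. i.i.d.
    z = z[~np.isnan(z)]
    rng = np.random.default_rng(seed)
    draws = rng.choice(z, size=(n_paths, horizon), replace=True)  # sample with replacement
    sigma = res.conditional_volatility[-1]                        # carry last volatility forward (simplified)
    return draws * sigma / 100                                    # back to decimal return units
```

Aggregating the simulated paths and passing them to the evt_var_cvar sketch above then yields the CVaR.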


3. Drawdown Control

While the previous section outlined money management tools to control tail risk, defined as daily VaR or CVaR at a given confidence level, the most adverse event from an investor or investment manager standpoint is probably a significant drawdown, in which a number of negative daily returns are clustered together over a given period of time. Indeed, most investors have strict drawdown limits (such as 20%) beyond which they will redeem part or the entirety of their investment in a given fund. Therefore, for a money manager, experiencing a significant drawdown can lead to a drop in AUM which itself results in a loss of management fees; additionally, most fund managers who charge performance fees have high watermarks in place which prevent them from collecting performance fees during a drawdown. Also, a manager trading a systematic strategy with proprietary or investor capital is likely to unnecessarily modify or discontinue the strategy if faced with an unacceptable drawdown; this can result in the loss of future performance as the changes may have been unwarranted. This leads us to develop a money management technique to control the maximum drawdown encountered by a given strategy. Earlier work on drawdown control through portfolio optimisation can be found in CVITANIC and KARATZAS (1995); GROSSMAN and ZHOU (1993).

3.1 Drawdown Measures

The maximum drawdown experienced over a given period of time is defined as the largest peak-to-trough loss in the Net Asset Value of a portfolio. If W(t) represents the portfolio value at time t, the maximum drawdown over a time interval [0, T] is defined by:

$$\mathrm{MDD}(0,T) = \max_{t\in[0,T]}\;\frac{\max_{s\in[0,t]} W(s) - W(t)}{\max_{s\in[0,t]} W(s)} \qquad (12)$$

The historical maximum drawdown is a number which varies widely even for strategies presenting the same mean and volatility, and is based on the entire track record, making comparisons between strategies run over different time lengths difficult. Therefore, as noted in HARDING et al. (2003), considering a drawdown distribution with reference to a confidence level is more practical. The distribution of drawdowns over a given time period of N days can be obtained by computing the maximum drawdown for blocks of N consecutive days from the track record of a strategy. Just as VaR and CVaR were defined for the daily return distribution, the Drawdown at Risk (DaR) and Conditional Drawdown at Risk (CDaR) at a given confidence level can be obtained from the drawdown distribution. For example, the 63 day DaR at the 95% confidence level is obtained by subdividing the historical daily returns into overlapping blocks of 63 consecutive daily returns, computing the maximum drawdown for each block, thus forming the drawdown distribution, and taking the 95th percentile of this distribution. Similarly, the 63 day CDaR is the average expected drawdown beyond the 95th percentile. The modelling of the drawdown distribution has been considered in CHEKHLOV et al. (2003, 2005); JOHANSEN and SORNETTE (2001); MENDES and BRANDI (2004); MENDES and LEAL (2005).
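As an illustration, a minimal Python sketch of the empirical construction just described (names are ours):

```python
import numpy as np

def max_drawdown(returns):
    """Largest peak-to-trough loss of the NAV path implied by daily returns (Equation 12)."""
    nav = np.cumprod(1 + np.asarray(returns))
    peak = np.maximum.accumulate(nav)
    return np.max(1 - nav / peak)                 # drawdown as a fraction of the running peak

def dar_cdar(returns, block=63, alpha=0.95):
    """Empirical DaR and CDaR from overlapping blocks of consecutive daily returns."""
    r = np.asarray(returns)
    dds = np.array([max_drawdown(r[i:i + block])
                    for i in range(len(r) - block + 1)])   # e.g. 190 blocks for 252 days
    dar = np.quantile(dds, alpha)                 # 95% Drawdown at Risk
    cdar = dds[dds >= dar].mean()                 # average drawdown beyond the 95th percentile
    return dar, cdar
```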

3.2 Practical Implementation

We construct a position sizing algorithm for drawdown control as was done earlier for tail risk control. Starting with a given number of daily historical returns, such as 252 days, we apply an AR(1)/GARCH(1,1) filtering process and use FHS to simulate 10,000 daily return series of 252 days each. For each one of these return series, we generate a drawdown distribution by computing the maximum drawdown for overlapping blocks of consecutive daily returns of a given length (such as 63 days), thereby resulting in 190 drawdowns for each one of the 10,000 return series. The drawdowns are aggregated to generate a distribution of 1,900,000 drawdowns and a GPD is fitted to the right tail of this distribution containing the 5% largest drawdowns, which yields the CDaR at the 95% confidence level. This number is compared to a set 95% CDaR target and the leverage is adjusted accordingly using a formula similar to the one used for tail risk control:

$$\text{Leverage Adjustment} = \frac{\mathrm{CDaR}_{\text{target}}}{\mathrm{CDaR}_{\text{strategy}}} \qquad (13)$$
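A sketch of this last step, with the GPD fitted to the right tail of the simulated drawdowns (the same MLE fit as in the tail risk section; names are illustrative and ξ ≠ 0 is assumed):

```python
import numpy as np
from scipy.stats import genpareto

def cdar_leverage(drawdowns, cdar_target=0.10, alpha=0.95):
    """Fit a GPD to the largest drawdowns, compute the 95% CDaR and apply Equation (13)."""
    dd = np.asarray(drawdowns)
    u = np.quantile(dd, alpha)                        # tail threshold (5% largest drawdowns)
    xi, _, beta = genpareto.fit(dd[dd > u] - u, floc=0.0)
    N, Nu = len(dd), int(np.sum(dd > u))
    dar = u + beta / xi * (((1 - alpha) * N / Nu) ** (-xi) - 1)   # as in Equation (8)
    cdar = dar / (1 - xi) + (beta - xi * u) / (1 - xi)            # as in Equation (7)
    return cdar_target / cdar
```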


4. Applications

In order to analyse the effectiveness and performance of the trade sizing algorithms defined in the previous sections, we implement them on the daily returns generated by a systematic strategy applied to the EURUSD and NZDMXN currency pairs.

4.1 Trading Strategy

The trading strategy used in this article is a typical breakout trend following strategy, similar to strategies commonly used in futures and currency trading; it is based on a moving average with a ±2 standard deviation band; on any given day, if the price is above (resp. below) the upper (resp. lower) band, a long (resp. short) position is initiated, whereas if the price is between the two bands, no action is taken and the previous day's trade direction is maintained. The strategy is traded over a 10 year period from January 2001 to December 2010, referred to as Year 1 to Year 10 in the following. The EURUSD and NZDMXN currency pairs were selected as they demonstrate different return profiles, with NZDMXN being typically more volatile than EURUSD; also, the strategy performance is significantly higher for EURUSD than for NZDMXN, which gives us the opportunity to apply the money management algorithms in different settings. Indeed, looking at Tables 1 to 4, which summarise the performance for the original strategy as well as the money management techniques, we can see that the Sharpe ratio is 0.79 for the EURUSD strategy and 0.25 for the NZDMXN strategy. The maximum drawdown is also higher for the NZDMXN strategy at 23.51% compared to 15.25% for the EURUSD strategy.


Figure 2: Top: The autocorrelation function of the squared daily returns for the EURUSD strategy reaches significant values through time, thus preventing the use of unfiltered data for historical simulation. Bottom: Autocorrelation function of the standardised residuals after filtering with an AR(1)/GARCH(1,1) model; the autocorrelation has been almost entirely removed.


Figure 3: Top: Evolution of the Net Asset Value for the original EURUSD strategy and the volatility and EVT based position sizing methods. Bottom: Evolution of the leverage adjustment factor for the volatility and EVT based position sizing methods applied to the EURUSD strategy.


Figure 4: Top: Evolution of the Net Asset Value for the original NZDMXN strategy and the volatility and EVT based position sizing methods. Bottom: Evolution of the leverage adjustment factor for the volatility and EVT based position sizing methods applied to the NZDMXN strategy.


Figure 5: Top: Distribution of 63 day drawdowns for the EURUSD strategy. Bottom: Distribution of 63 day drawdowns for the NZDMXN strategy.

4.2 Tail Risk Control Techniques

We apply the volatility and EVT based tail risk control techniques presented earlier to the EURUSD and NZDMXN strategies with the objective of maintaining a constant tail risk level over time, set at a 95% VaR of 1.5%. For the volatility based technique, the historical volatility is computed at the end of each week using the RiskMetrics exponentially weighted moving average presented in Equation (4) and transformed into a VaR level, resulting in a leverage adjustment coefficient which is applied to the strategy over the following week. The typical parameters recommended in ZANGARI (1996) are used: λ = 0.94, T = 74 days; however, $\bar{r}$ is taken to be the mean of daily returns over the previous 74 days rather than zero.
For the EVT based algorithm, the first step consists in removing the autocorrelation from the daily returns by applying an AR(1)/GARCH(1,1) filtering process. Figure 2 illustrates the high level of autocorrelation in the squared returns and its almost complete removal after filtering; as a result, the standardised residuals can be considered approximately i.i.d. and used as input in the FHS algorithm to generate simulated return series. This process is applied weekly to the previous 252 daily returns from the strategy and generates, after aggregation of the 10,000 series of 252 daily returns, one series of 2,520,000 daily returns to whose left tail beyond the 5% threshold a GPD is fitted. From the shape and scale parameters of the fitted GPD, a 95% CVaR is obtained and then converted into a 95% VaR using Equation (3). This ensures that the actual CVaR of the strategy is adjusted to match the CVaR that would correspond to our target VaR level of 1.5% if the returns followed a normal distribution. This means that if in fact the return distribution has a thicker left tail than a normal distribution, this will be taken into account, as the 95% CVaR measured by EVT will be higher and leverage will be reduced accordingly.

4.2.1 Algorithmic presentation

The previous tail risk control techniques can be described in algorithmic form; a short sketch in code follows each list. For the volatility based algorithm:

1. At the end of week N, select the previous 74 daily returns and generate the volatility using Equation (4).

2. Using Equation (3), convert the volatility into a 95% VaR.

3. Compute the Leverage Adjustment corresponding to a target 95% VaR of 1.5% using Equation (5).

4. Apply the Leverage Adjustment to the strategy during week N + 1.

5. At the end of week N + 1, repeat the algorithm starting from step 1.
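A compact weekly loop assembling the earlier sketches (ewma_volatility, normal_var_cvar and leverage_adjustment; the weekly_endpoints index set is an illustrative assumption):

```python
def volatility_sizing(daily_returns, weekly_endpoints, var_target=0.015):
    """Steps 1-5 of the volatility based algorithm, one leverage factor per week."""
    leverage = {}
    for t in weekly_endpoints:                                       # end of week N
        sigma = ewma_volatility(daily_returns[:t], lam=0.94, T=74)   # step 1
        var, _ = normal_var_cvar(sigma)                              # step 2
        leverage[t] = leverage_adjustment(var_target, var)           # step 3
    return leverage   # each factor is applied over the following week (steps 4-5)
```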

For the EVT based algorithm:

1. At the end of week N, select the previous 252 daily returns and filter them using an AR(1)/GARCH(1,1) model; check that the autocorrelation has been brought to a sufficiently low level for the i.i.d. assumption to be valid.

2. Using the AR(1)/GARCH(1,1) model, simulate 10,000 series of 252 daily returns each, generating one series of 2,520,000 returns after aggregation.

3. Using MLE, fit a GPD to the left tail (5% worst daily returns) of the simulated return series, yielding the shape and scale parameters.

4. Compute the 95% CVaR corresponding to the GPD parameter values and convert the CVaR into a VaR number using Equation (3).

5. Compute the Leverage Adjustment corresponding to a target 95% VaR of 1.5% using Equation (5).

6. Apply the Leverage Adjustment to the strategy during week N + 1.

7. At the end of week N + 1, repeat the algorithm starting from step 1.
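The corresponding weekly loop, combining the fhs_simulate and evt_var_cvar sketches; targeting the CVaR of about 1.89% that Equation (3) associates with the 1.5% VaR target is equivalent to steps 4 and 5 above:

```python
CVAR_TARGET = 0.0189   # normal-distribution CVaR equivalent of a 1.5% VaR target

def evt_sizing(daily_returns, weekly_endpoints):
    """Steps 1-7 of the EVT based algorithm, one leverage factor per week."""
    leverage = {}
    for t in weekly_endpoints:
        sims = fhs_simulate(daily_returns[t - 252:t]).ravel()   # steps 1-2: 2,520,000 returns
        _, cvar = evt_var_cvar(sims)                            # steps 3-4: GPD left tail fit
        leverage[t] = CVAR_TARGET / cvar                        # step 5
    return leverage   # applied over the following week (steps 6-7)
```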

4.3 Drawdown Control Technique

The drawdown control technique is applied to the EURUSD and NZDMXN strategies with the objective of limiting the maximum drawdown over each year to a set value, in this case chosen as 10%. Similarly to the EVT based technique for tail risk control, we start by filtering the previous 252 days of returns and generating 10,000 series of 252 daily returns each using FHS. These daily returns are decomposed into overlapping blocks of 63 consecutive days (which represents about 3 months), from day 1 to day 63, day 2 to day 64, etc.; a block length of 3 months was selected as it is a good estimate of the length of the worst drawdowns generated by the strategy; using longer block lengths such as 1 year would result in underleveraging. The maximum drawdown is computed for each block, yielding a distribution of 190 drawdowns for each of the 10,000 series; these drawdowns are aggregated to yield one series of 1,900,000 drawdowns to whose right tail a GPD is fitted, yielding the 95% CDaR from which the leverage factor is computed. The benefit of using EVT to estimate the CDaR is apparent from Figure 5, which shows the 63 day drawdown distribution for each strategy; both drawdown distributions present a right tail which is significantly longer than the left tail and which would not be measured accurately with a normal distribution; therefore, it is crucial to fit a GPD to the right tail in order to correctly estimate the CDaR.

4.3.1 Algorithmic presentation

The drawdown control technique can be described in algorithmic form; a sketch in code follows the list:

1. At the end of week N, select the previous 252 daily returns and filter them using an AR(1)/GARCH(1,1) model; check that the autocorrelation has been brought to a sufficiently low level for the i.i.d. assumption to be valid.

2. Using the AR(1)/GARCH(1,1) model, simulate 10,000 series of 252 daily returns each.

3. Decompose each series of 252 daily returns into 190 overlapping blocks of 63 consecutive days.

4. Compute the maximum drawdown for each block, and aggregate all the drawdowns into one series of 1,900,000 drawdowns.

5. Using MLE, fit a GPD to the right tail (5% largest drawdowns) of the simulated drawdown distribution, yielding the shape and scale parameters.

6. Compute the 95% CDaR corresponding to the GPD parameter values.

7. Compute the Leverage Adjustment corresponding to a target 95% CDaR of 10% using Equation (13).

8. Apply the Leverage Adjustment to the strategy during week N + 1.

9. At the end of week N + 1, repeat the algorithm starting from step 1.
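A weekly loop tying together the fhs_simulate, max_drawdown and cdar_leverage sketches (indexing conventions remain illustrative):

```python
import numpy as np

def drawdown_sizing(daily_returns, weekly_endpoints, block=63, cdar_target=0.10):
    """Steps 1-9 of the drawdown control algorithm, one leverage factor per week."""
    leverage = {}
    for t in weekly_endpoints:
        paths = fhs_simulate(daily_returns[t - 252:t])          # steps 1-2
        dds = np.array([max_drawdown(p[i:i + block])            # steps 3-4:
                        for p in paths                          # 190 blocks per path,
                        for i in range(p.size - block + 1)])    # 1,900,000 drawdowns in all
        leverage[t] = cdar_leverage(dds, cdar_target)           # steps 5-7
    return leverage   # applied over the following week (steps 8-9)
```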

4.4 Results analysis

The performance data for the original strategy, the volatility and EVT based tail risk control techniques and the drawdown control technique are summarised in Tables 1 and 2 for the EURUSD strategy and Tables 3 and 4 for the NZDMXN strategy.

The effectiveness of the tail risk control techniques can be evaluated by looking at the fluctuations of the 95% VaR when the techniques are applied. For the original strategy, the 95% VaR varies widely, going from 0.69% in Year 6 to 1.39% in Year 8 for the EURUSD strategy and from 1.13% in Year 10 to 1.78% in Year 9 for the NZDMXN strategy. These variations are reduced for the volatility based technique, with a range of 1.32% to 1.66% for the EURUSD strategy and 1.32% to 1.68% for the NZDMXN strategy, thereby demonstrating the ability of this technique to stabilise the 95% VaR around its target value of 1.5%. For the EVT based technique, the 95% VaR fluctuates from 0.93% to 1.32% for the EURUSD strategy and from 1.08% to 1.66% for the NZDMXN strategy, which is expected since the method does not target a constant VaR but a constant CVaR and accounts for the entire tail risk rather than simply the 5% quantile. Also, Figures 3 and 4 show that the leverage adjustment factors vary much more abruptly for the volatility based technique than for the EVT based technique. This means that the first method is more responsive to changes in VaR levels but would also incur higher transaction costs due to the frequent rebalancing. The leverage factor is usually lower for the EVT based technique, due to the use of EVT for tail risk computation, which typically results in higher tail risk estimates than when relying on volatility.

Over the 10 year period, the realised 95% VaR when using the volatility based technique is almost exactly at the targeted level, at 1.50% and 1.52% for the EURUSD and NZDMXN strategies respectively. For the EVT based technique, the VaR is lower at 1.33% and 1.38% respectively. However, the 95% CVaR levels when using the volatility based technique are 2.10% and 2.07%, which is higher than the CVaR corresponding to the 1.5% VaR target for a normal distribution; indeed, from Equation (3), the 95% CVaR corresponding to a 95% VaR of 1.5% for a normal distribution is 1.89%, which serves as the target CVaR for the EVT based algorithm. This target CVaR level is approximately equal to the overall CVaR over the 10 year period for the EVT based technique, which yields a CVaR of 1.94% for both strategies. Thus, we have confirmation that the EVT based algorithm adjusts the leverage factor to reach a CVaR target whereas the volatility based algorithm simply focuses on maintaining the VaR at its chosen value without accounting for the changes in tail risk beyond the VaR threshold. In practice, controlling the entire left tail is preferable and the EVT based method would be considered superior to its volatility based counterpart. Additionally, these gains in tail risk control do not come at the expense of performance, as the Sharpe ratios for the tail risk control techniques are slightly higher than for the original strategy.

While the previous methods allow tail risk to be stabilised at a set level, they do not have a direct effect on the maximum drawdown sustained by the strategy each year. This is the objective of the drawdown control technique, which adjusts the leverage factor to target a 10% CDaR at a 95% confidence level computed over a 3 month period, the aim being to keep the maximum drawdown for each year around or below 10%. The CDaR based technique reaches this objective, as maximum drawdowns are in a 6.40% to 10.95% range for the EURUSD strategy and a 5.55% to 10.86% range for the NZDMXN strategy, whereas the maximum drawdowns for the original strategies fluctuated from 5.52% to 15.25% and from 7.21% to 17.40% respectively. This demonstrates the ability to control maximum drawdown by using the CDaR based algorithm. The evolution of the leverage factor for the CDaR based algorithm is quite smooth, making it less likely to suffer from high transaction costs when implemented in practice. Once again, the Sharpe ratio for the CDaR based technique is slightly higher than for the original strategies.


5. Conclusion

A number of money management techniques were presented, with the aim of controlling either tail risk or drawdown rather than attempting to maximise return or expected utility at any cost as is the case for most money management techniques available in the existing literature. Indeed, the main concern of investment professionals is to remain at or below certain risk constraints set either internally or by investors; as such, maximising expected utility is only secondary to controlling risk as a breach of these risk limits would usually trigger significant redemptions or would lead the investment manager to stop trading the strategy altogether.
The first two methods aim to maintain a stable level of tail risk through time, using either historical volatility or Extreme Value Theory to measure tail risk. Both methods were applied to two sets of daily returns generated by applying a typical trend following strategy to the EURUSD and NZDMXN currency pairs over a 10 year period, and demonstrated the ability to target a given VaR level for the volatility based technique or a given CVaR level for the EVT based technique. The EVT based technique, which considers the entire left tail of the return distribution at a given confidence level, is superior to the volatility based technique which is oblivious to the size of losses beyond the VaR threshold and therefore can result in a higher overall tail risk than intended.

The third method focuses on drawdown control by adjusting the leverage factor based on the Conditional Drawdown at Risk level generated by the strategy. The CDaR is computed by considering overlapping blocks of consecutive returns and calculating the maximum drawdown for each block, yielding a drawdown distribution from which the average expected drawdown beyond a certain confidence level (CDaR) can be obtained. Considering the drawdown distribution rather than the maximum drawdown over the entire period results in a more stable and robust estimate of potential drawdown. The drawdown control technique achieves its objective when applied to the two strategies, as the maximum drawdown for each year remains around or below the targeted level.


References

ALGOET, P. H. and COVER, T. M. (1988). Asymptotic optimality and asymptotic equipartition properties of log-optimum investment. Annals of Probability, 16(2):876–898.
ANDERSON, J. A. and FAFF, R. W. (2004). Maximising futures returns using fixed fraction asset allocation. Applied Financial Economics, 14:1067–1073.
ARTZNER, P., DELBAEN, F., EBER, J. M., and HEATH, D. (1999). Coherent measures of risk. Mathematical Finance, 9(3):203–228.
BALI, T. G. (2003). An extreme value approach to estimating volatility and value at risk. Journal of Business, 76(1):83–108.
BALKEMA, A. and DE HAAN, L. (1974). Residual life time at great age. Annals of Probability, 2:792–804.
BALSARA, N. (1992). Money Management Strategies for Futures Traders. Wiley.
BARONE-ADESI, G., GIANNOPOULOS, K., and VOSPER, L. (1999). VaR without correlations for portfolios of derivative securities. Journal of Futures Markets, 19(5):583–602.
BEDER, T. S. (1995). VaR: Seductive but dangerous. Financial Analysts Journal, 51(5):12–24.
BEIRLANT, J., GOEGEBEUR, Y., SEGERS, J., and TEUGELS, J. (2004). Statistics of Extremes: Theory and Applications. Wiley.
BERNOULLI, D. (1738). Specimen theoriae novae de mensura sortis. Commentarii Academiae Scientiarum Imperialis Petropolitannae, Tomus V:175–192.
BERNOULLI, D. (1954). Exposition of a new theory on the measurement of risk. Econometrica, 22(1):23–36.
BLACK, F. (1976). Studies in stock price volatility changes. Proceedings of the 1976 American Statistical Association, Business and Economic Statistics Section., pages 177–181.
BOLLERSLEV, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31:307–327.
BOUDOUKH, J., RICHARDSON, M., and WHITELAW, R. (1998). The best of both worlds: A hybrid approach to calculating value at risk. Risk, 11(5):64–67.
BREIMAN, L. (1961). Optimal gambling systems for favorable games. In NEYMAN, J., editor, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability: Contributions to the The-ory of Statistics, volume 1, pages 65–78. Statistical Laboratory of the University of California, Berkeley, University of California Press.
BROWNE, S. and WHITT, W. (1996). Portfolio choice and the Bayesian Kelly criterion. Advances in Applied Probability, 28(4):1145–1176.
CASCON, A. and SHADWICK, W. F. (2009). A new approach to tail risk. Journal of Investment Consulting, 10(1):33–48.
CHEKHLOV, A., URYASEV, S., and ZABARANKIN, M. (2003). Portfolio Optimization With Drawdown Constraints, chapter 13, pages 253–268. Asset and Liability Management Tools. Risk Books.
CHEKHLOV, A., URYASEV, S., and ZABARANKIN, M. (2005). Drawdown measure in portfolio optimization. International Journal of Theoretical and Applied Finance, 8(1):13–58.
CHRISTOFFERSEN, P. F. (2003). Elements of Financial Risk Management. Academic Press.
COLES, S. (2001). An Introduction to Statistical Modeling of Extreme Values. Springer.
CVITANIC, J. and KARATZAS, I. (1995). On portfolio optimization under ”drawdown” constraints. IMA Lecture Notes in Mathematics & Applications, 65:77–88.
DE HAAN, L. and FERREIRA, A. (2006). Extreme Value Theory: An Introduction. Springer.
DUFFIE, D. and PAN, J. (1997). An overview of value at risk. Journal of Derivatives, 4(3):7–49.
EMBRECHTS, P. (2011). Modelling Extremal Events. Springer.
ENGLE, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50(4):987–1007.
ENGLE, R. F. (2001). GARCH 101: The use of ARCH/GARCH models in applied econometrics. Journal of Economic Perspectives, 15(4):157–168.
GEHM, F. (1983). Commodity Market Money Management. Wiley.
GEHM, F. (1995). Quantitative Trading and Money Management. Wiley.
GHORBEL, A. and TRABELSI, A. (2008). Predictive performance of conditional Extreme Value Theory in Value-at-Risk estimation. International Journal of Monetary Economics and Finance, 1(2):121–148.
GHORBEL, A. and TRABELSI, A. (2009). Measure of financial risk using conditional extreme value copulas with EVT margins. Journal of Risk, 11(4):51–85.
GOLDBERG, L. R., HAYES, M. Y., MENCHERO, J., and MITRA, I. (2009). Extreme Risk Analysis. MSCI Working Paper.
GOLDBERG, L. R., MILLER, G., and WEINSTEIN, J. (2008). Beyond value-at-risk: forecasting portfolio loss at multiple horizons. Journal of Investment Management, 6(2):73–98.
GROSSMAN, S. J. and ZHOU, Z. (1993). Optimal investment strategies for controlling drawdowns. Mathematical Finance, 3(3):241–276.
GUMBEL, E. J. (2004). Statistics of Extremes. Dover.
HAKANSSON, N. (1970). Optimal investment and consumption strategies under risk for a class of utility functions. Econometrica, 38(5):587–607.
HARDING, D., NAKOU, G., and NEJJAR, A. (2003). The pros and cons of drawdown as a statistical measure for risk in investments. AIMA Journal, pages 16–17.
HARMANTZIS, F. C., MIAO, L., and CHIEN, Y. (2006). Empirical study of value-at-risk and expected shortfall models with heavy tails. Journal of Risk Finance, 7(2):117–135.
HUANG, W., LIU, Q., RHEE, S. G., and WU, F. (2012). Extreme downside risk and expected stock returns. Journal of Banking & Finance, 36(5):1492–1502.
JOHANSEN, A. and SORNETTE, D. (2001). Large stock market price drawdowns are outliers. Journal of Risk, 4(2):69–110.
JONES, R. (1999). The Trading Game. Wiley.
JORION, P. (2006). Value at Risk. McGraw-Hill.
KELLY, J. L. (1956). A new interpretation of information rate. Bell System Technical Journal, 35(4):917–926.
LAJBCYGIER, P. and LIM, E. (2007). How important is money management? Comparing the largest expected equity drawdown, optimal-f and two naïve money management approaches. Journal of Trading, 2(3):58–75.
LATANÉ, H. A. (1959). Criteria for choice among risky ventures. Journal of Political Economy, 67(2):144–155.
LOCKE, P. R. and MANN, S. C. (2003). Prior outcomes and risky choices by professional traders. Working paper.
LONGIN, F. M. (2000). From value at risk to stress testing: the extreme value approach. Journal of Banking & Finance, 24(7):1097–1130.
MACLEAN, L. C., SANEGRE, R., ZHAO, Y., and ZIEMBA, W. T. (2004). Capital growth with security. Journal of Economic Dynamics & Control, 28(4):937–954.
MACLEAN, L. C., THORP, E. O., ZHAO, Y., and ZIEMBA, W. T. (2011a). How does the Fortune’s Formula Kelly capital growth model perform? Journal of Portfolio Management, 37(4):96–111.
MACLEAN, L. C., THORP, E. O., and ZIEMBA, W. T. (2010). Good and bad properties of the Kelly criterion. Working paper.
MACLEAN, L. C., THORP, E. O., and ZIEMBA, W. T. (2011b). The Kelly Capital Growth Investment Criterion: Theory and Practice, volume 3 of Handbook in Financial Economic Series. World Scientific.
MCDOWELL, B. A. (2008). A Trader’s Money Management System. Wiley.
MCNEIL, A. J. and FREY, R. (2000). Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of Empirical Finance, 7:271–300.
MCNEIL, A. J., FREY, R., and EMBRECHTS, P. (2005). Quantitative Risk Management : Concepts, Techniques, and Tools. Princeton University Press.
MENDES, B. V. M. and BRANDI, V. (2004). Modeling drawdowns and drawups in financial markets. Journal of Risk, 6(3):53–69.
MENDES, B. V. M. and LEAL, R. P. C. (2005). Maximum drawdown: Models and applications. Journal of Alternative Investments, 7(4):83–91.
METROPOLIS, N. and ULAM, S. (1949). The Monte Carlo method. Journal of the American Statistical Association, 44(247):335–341.
NELSON, D. B. (1991). Conditional heteroskedasticity in asset returns: A new approach. Econometrica, 59:347–370.
NELSON, D. B. and CAO, C. Q. (1992). Inequality constraints in the univariate GARCH model. Journal of Business and Economic Statistics, 10:229–235.
NYSTRÖM, K. and SKOGLUND, J. (2005). Efficient filtering of financial time series and Extreme Value Theory. Journal of Risk, 7(2):63–84.
OSORIO, R. (2009). Prospect-theory approach to the Kelly criterion for fat-tail portfolios: The case of Student’s t-distribution. Wilmott Journal, 1(2):101–107.
PICKANDS, J. (1975). Statistical inference using extreme order statistics. Annals of Statistics, 3:119–131.
PRITSKER, M. (2006). The hidden dangers of historical simulation. Journal of Banking and Finance, 30(2):561–582.
ROLL, R. (1973). Evidence on the “growth-optimum” model. Journal of Finance, 28(3):551–566.
ROTANDO, L. M. and THORP, E. O. (1992). The Kelly criterion and the stock market. American Mathematical Monthly, 99(10):922–931.
ROY, A. D. (1952). Safety first and the holding of assets. Econometrica, 20(3):431–449.
SAMUELSON, P. A. (1971). The “fallacy” of maximizing the geometric mean in long sequences of investing or gambling. Proceedings of the National Academy of Sciences, 68(10):2493–2496.
SAMUELSON, P. A. (1979). Why we should not make mean log of wealth big though years to act are long. Journal of Banking & Finance, 3(4):305–307.
STRIDSMAN, T. (2003). Trading Systems and Money Management. McGraw-Hill.
TAYLOR, S. J. (1986). Modeling Financial Time Series. Wiley, Chichester, UK.
THALER, R. H. and JOHNSON, E. J. (1990). Gambling with the house money and trying to break even: The effects of prior outcomes on risky choice. Management Science, 36(6):643–660.
THORP, E. O. (1971). Portfolio choice and the Kelly criterion. Business and Economics Statistics Section, Proceedings of the American Statistical Association, pages 215–224.
THORP, E. O. (2006). The Kelly criterion in blackjack, sports betting and the stock market, volume 1 of Handbook of asset and liability management, chapter 9, pages 385–429. Elsevier.
VINCE, R. (2007). The Handbook of Portfolio Mathematics. Wiley.
VINCE, R. (2009). The Leverage Space Trading Model. Wiley.
VINCE, R. (2011). Optimal f and the Kelly criterion. IFTA Journal, 11:21–28.
ZANGARI, P. (1996). RiskMetrics – Technical Document, chapter 5. J. P. MORGAN.


About the author:

DR ISSAM STRUB: Dr Strub is a senior member of the Cambridge Strategy research group where he works on quantitative strategies as well as asset allocation and risk management tools; he has authored a number of research articles in financial and scientific journals and has been an invited speaker at financial conferences and roundtables. Prior to joining the Cambridge Strategy, Dr Strub was a graduate student at the University of California, Berkeley, where he conducted research in an array of fields ranging from Partial Differential Equations and Fluid Mechanics to Scientific Computing and Optimisation; he obtained a Ph.D. in Engineering from the University of California in 2009.

∗ Research Scientist, strub@cal.berkeley.edu
