# Proxy / Cross Hedging

The root challenge of two current equity risk and alpha projects boils down to hedging using non-underlying instruments, known as *proxy hedging* or *cross hedging*. This technique is useful for *equity shaping* trades, as well as an underlying principle for both long / short and statarb:

- **Price exposure**: neutralize price exposure for a stock or basket, possibly leaving behind a useful residual (such as dividends or rights)
- **Market exposure**: neutralize market exposure for a stock or basket, leaving just idiosyncratic exposure

Trading these hedges in practice is more difficult than standard texts suggest (*e.g.* Hull), as the real world rarely satisfies theory: market incompleteness (challenging risk-neutral models), stochastic covariation (challenging static models), and non-linear impulse response (challenging linear models).

This challenge is explored here via a multi-part series (see Empirical Quantiles and Proxy Selection, Empirical Copulas and Hedge Basis Risk, Lag Dynamics with Autocopulas, Exploratory Hedge Analysis, Proxy Hedging and Dependence, and Proxy Conditional Model Selection), including R code and real data. This first post presents the basic model. The second post will apply this model to a few well-known equities. Readers are *encouraged to comment on improvements or alternative techniques* on all posts, as this problem is real and remains an open research topic.

**Proxy Hedge Model**

Begin with the textbook mathematical model. This *proxy hedge model* is composed of two instruments $u$ and $h$, long underlying and short hedge respectively, whose weighted sum generates a residual $r$:

$$ r_t = u_t - \beta h_t $$

Where the hedge ratio $\beta$ determines the proportional amount of hedge $h$ per unit of underlying $u$. Although not modeled explicitly, the *joint relationship* of both sign and magnitude between $u$ and $h$ is important and is represented as $\rho$ (*i.e.* correlation, but more generally copula). Both $u$ and $h$ are time-series whose values are stochastic with unknown distributions.

The purpose of this model is to *shape the residual* $r$, which generates the P&L on the trade and is also known as the *basis risk*. A $\rho$ close to 1 means price exposure to $u$ is neutralized. When $h$ is a market index, non-zero $\rho$ is equivalent to neutralizing market exposure and leaving idiosyncratic exposure specific to $u$ (an exotic beta, of sorts). In both cases, the behavior of $r$ is asymptotic on the $\rho$ measure. Traditionally, futures were the preferred instrument for $h$. More recently, ETFs are becoming increasingly interesting given their diversity (including levers).
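As a concrete illustration, the residual arithmetic can be sketched in a few lines of R; the return series and hedge ratio below are hypothetical, chosen only to show the mechanics:

```r
# Hypothetical returns for underlying u and hedge instrument h;
# portfolio is long one unit of u and short beta units of h.
u <- c(0.010, -0.005, 0.003, 0.007)
h <- c(0.008, -0.004, 0.002, 0.006)
beta <- 1.2                  # hypothetical hedge ratio
r <- u - beta * h            # residual (basis) series
print(round(r, 4))
```

The residual is small relative to $u$ when $h$ covaries closely with $u$; shaping the trade amounts to choosing $h$ and $\beta$ so that what remains in $r$ is the exposure one wants to keep.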

Although the math is pedestrian, it belies two serious research challenges:

- **Instrument selection**: choice of instrument for $h$
- **Ratio calculation**: choice of algorithm to calculate $\beta$

Both are explored below.

**Instrument Selection**

Deciding the universe of instruments to explore is more art than science, given hedges are assumed, by definition, not to be derivatives on $u$. Hence, the standard derivatives-pricing machinery is unavailable, and significant exploratory analysis on the marginal and joint behavior of $u$ and $h$ is warranted.

One traditional answer is to seek out instruments which maximize the Pearson correlation $\rho$. This turns out to be fairly naïve: Pearson expresses only linear sensitivity, and there is no reason to constrain measurement of the $u$–$h$ relationship to linearity. A better answer is to follow economic intuition and identify instruments which have strong fundamental reasons to covary with $u$.
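The linearity point can be demonstrated directly: a monotone but non-linear relationship attains perfect rank correlation yet a strictly smaller Pearson correlation. A small sketch with simulated data:

```r
set.seed(1)
x <- rnorm(1000)
y <- exp(x)                      # monotone, but non-linear, function of x
cor(x, y)                        # Pearson: noticeably below 1
cor(x, y, method = "kendall")    # Kendall rank correlation: exactly 1
```

Rank-based measures (and, more generally, copulas) see the perfect monotone dependence that Pearson understates, which is why later posts in this series lean on copula diagnostics rather than correlation screens.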

**Ratio Calculation**

Techniques for calculating $\beta$ have been the subject of intense research over recent decades. Two recent summary articles are de Prado and Leinweber (2011) and Alexander and Barbosa (2007).

For analysis here, the Box-Tiao Canonical Decomposition (BTCD) method from de Prado (2011) is selected. In short, this approach optimizes a measure of predictability $\nu$, generalized from Box and Tiao (1977), over the combined portfolio $x$ of $u$ and $h$:

$$ \nu(x) = \frac{x' A x}{x' B x} $$

where $A$ is the covariance of the one-step portfolio forecasts and $B$ is the covariance of the portfolio constituents.

Whose solution, based upon a generalized Rayleigh quotient form in $x$, is:

$$ x = C^{-1} z $$

where $B = C'C$ is the Cholesky decomposition of the constituent covariance $B$, and $z$ is the eigenvector of $D = (C')^{-1} A\, C^{-1}$ associated with the smallest eigenvalue (the least predictable, hence best hedged, combination).
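The Rayleigh quotient solution can be verified numerically in a few lines of base R; the small $A$ and $B$ matrices below are hypothetical stand-ins for the forecast and constituent covariances:

```r
# Hypothetical 2x2 matrices: A ~ forecast covariance, B ~ constituent covariance.
A <- matrix(c(2.0, 0.3, 0.3, 1.0), 2, 2)
B <- matrix(c(1.0, 0.2, 0.2, 1.5), 2, 2)
C <- chol(B)                       # upper triangular, with B = C'C
CInv <- solve(C)
D <- t(CInv) %*% A %*% CInv
eig <- eigen(D)
z <- eig$vectors[, 2]              # eigenvector of the smallest eigenvalue
x <- CInv %*% z                    # candidate Rayleigh quotient minimizer
nu <- as.numeric(t(x) %*% A %*% x) / as.numeric(t(x) %*% B %*% x)
```

Substituting $x = C^{-1}z$ turns the quotient into $z'Dz / z'z$, so its extrema are exactly the eigenvalues of $D$; `nu` here recovers the smallest eigenvalue, matching the construction used in the hedge code below.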

This method is particularly useful as it permits use of an *arbitrary forecasting model*, instead of limiting explanatory variables to forecast variables lagged by a single period (as is common with standard techniques).

The following is R code for BTCD using a standard vector autoregression (VAR) forecasting model (with Ledoit and Wolf (2004) covariance shrinkage), with minor simplification from the derivation provided in de Prado (p. 8):

```r
require("vars")
require("tawny")

btcdHedge <- function(p, start=1, interval=4) {
  # Generate hedge using BTCD method, as defined by de Prado [2011].
  #
  # Args:
  #   p: matrix of instrument price data
  #   start: index into p, at which to begin generating hedge
  #   interval: number of periods used to calibrate hedge
  #
  # Returns: BTCD hedge ratio vector
  end <- start + interval
  pvar <- VAR(p[start:end,], p=1, type="none")   # VAR(1) forecasting model
  varfit <- fitted(pvar)
  B <- cov.shrink(p[start:end,])   # shrink covariance over the calibration window
  A <- t(varfit) %*% varfit
  C <- chol(B)                     # B = C'C
  CInv <- solve(C)
  D <- t(CInv) %*% A %*% CInv
  eigens <- eigen(D)
  z <- eigens$vectors[, length(eigens$values)]   # smallest eigenvalue
  x <- CInv %*% z
  hedge <- x / x[1]                # normalize to one unit of underlying

  # perform sanity check: x should attain the extremal Rayleigh quotient
  tx <- t(x)
  num <- tx %*% A %*% x
  denom <- tx %*% B %*% x
  check <- num / denom
  if (abs(check - eigens$values[length(eigens$values)]) > 0.0001) {
    message("failed sanity check")
  }
  return (hedge)
}
```

This algorithm requires input of the duration of the preceding contiguous time window over which the hedge is calculated, via the `interval` argument. As with any window-based algorithm, the interval represents a tradeoff: shorter intervals are more responsive, but noisier; longer intervals are smoother, but less responsive. Thus, selection of interval length is more art than science.
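The tradeoff can be made concrete with a simple rolling OLS beta standing in for `btcdHedge` (all data simulated, with a constant true beta of 1.5, so any variation in the estimates is pure noise):

```r
set.seed(7)
n <- 250
h <- rnorm(n)                       # simulated hedge-instrument returns
u <- 1.5 * h + rnorm(n, sd = 0.5)   # simulated underlying: true beta = 1.5
rollBeta <- function(w) {
  # rolling OLS beta through the origin, window length w
  sapply(w:n, function(i) {
    idx <- (i - w + 1):i
    sum(u[idx] * h[idx]) / sum(h[idx]^2)
  })
}
sd(rollBeta(10))   # short window: responsive but noisy
sd(rollBeta(60))   # long window: smooth but slow to adapt
```

The short-window estimates exhibit markedly higher dispersion around the true beta; in a regime shift the ranking of virtues reverses, which is why no single interval dominates.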

### Trackbacks

- Empirical Quantiles and Proxy Selection « Quantivity
- Empirical Copulas and Hedge Basis Risk « Quantivity
- Finanzas 101: Proxy Hedging « Quantitative Finance Club
- Lag Dynamics with Autocopulas « Quantivity
- Exploratory Hedge Analysis « Quantivity
- Proxy Hedging and Dependence « Quantivity
- Proxy Conditional Model Selection « Quantivity
- Index Return Decomposition « Quantivity
- Return Decomposition via Mixing « Quantivity

How about leveraging mean-variance optimization to come up with an optimal hedge? The variables that you can manipulate are the risk factor/covariance model and the specific form of the utility function (where you can incorporate impact costs, etc…). You can provide the optimizer a large universe of stocks and let it do the number crunching given your level of risk aversion and the impact you are willing to take on…

@Anonu: yes, interesting potential approach; to clarify, how do you propose defining the objective function?

While not having conducted this analysis, my *a priori* intuition is that covariance instability will make the hedge ratio jump around erratically, similar to what occurs with Markowitz optimization, likely amplified by the constraint of only two instruments.

Forget an optimal hedge, how about trying to estimate a certain % of the tail risk and hedge only that.

Riffing here, you could try to predict % cointegration, risk of divergence, and risk of loss in what you’re holding — a combination of those. E.g. are they cointegrated when you could lose money.

@Lao Tzu: uncertain whether cointegration is sufficient, as a unit root implies a linearly and temporally stable beta. Many proxy hedging problems do not possess these two properties.

maybe use a more “robust” type measure, all I’m saying is you only care if the hedge works at certain times so define a loss function that covers only those times

@Lao Tzu: while that conceptually makes sense, the practical problem is bleed during the 99% of non-tail days: either bleed on long index puts or suffer upside hedge drag due to the asymmetry of upside and downside betas. Neither is appealing.

could be ETL @ 40%


Excellent post, however I was wondering if you could write a little more on this subject? I'd be very grateful if you could elaborate a little bit more. Cheers!

Please clarify which aspect you are interested in seeing elaborated, as this is a large topic.