# Minimum Variance Sector Rotation

Numerous readers have inquired how to rethink asset allocation in a world where Portfolio Theory is Dead. One approach is embracing dynamic asset allocation while assuming zero risk premium, recognizing that estimating portfolio return moments via standard longitudinal time series analysis turns out to be flawed in practice (regardless of robust statistical estimators). In this world, the notion of *strategic* asset allocation is *nonsense*, and thus buy-and-hold investors are unknowingly gambling.

Acknowledging this trend, the historical distinction between short-term “trading” and long-term “investing” is gradually blurring. This post rethinks sector rotation by applying minimum variance portfolios. Although Quantivity dislikes classic sector rotation, as it is both *discretionary* and *predictive*, applying minimum variance is interesting: a systematic, prediction-free way to model rotation. In doing so, it provides a quantitative lens for analyzing rotation both *ex post* and *ex ante*.

Start with the following simple *minimum variance sector rotation* model, assuming no short sale restriction:

- Instruments: nine sectors constituting the S&P 500, represented by their corresponding ETFs: materials, energy, financials, tech, industrial, staples, utilities, healthcare, discretionary (XLB, XLE, XLF, XLK, XLI, XLP, XLU, XLV, XLY)
- Period: 2003 – 2010, which is interesting as it includes numerous fundamentally different market regimes, including one significant *endogenous* crash
- Weighting: generate sector weights *once per year* on 1 Jan, based upon the preceding year of daily log returns

Note the choice of one year of daily returns is *arbitrary*, deliberately not based upon snooped or mined optimization. For easy reference, the log return performance of each sector is illustrated below:
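When short sales are allowed, the minimum variance weights admit a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1), which is equivalent to the quadratic-programming solution used in the code later in this post. A minimal sketch with a toy covariance matrix (illustrative numbers, not the ETF data; `minvarClosed` is a hypothetical helper, not from the post's code):

```r
# Closed-form minimum variance weights when short sales are allowed:
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1), equivalent to the QP solution.
minvarClosed <- function(Sigma) {
  w <- solve(Sigma, rep(1, ncol(Sigma)))  # Sigma^{-1} * 1
  w / sum(w)                              # normalize so weights sum to one
}

# toy 3-asset covariance (illustrative numbers, not the ETF data)
Sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow=3, byrow=TRUE)
w <- minvarClosed(Sigma)
sum(w)  # weights sum to 1
```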

Given this model, begin by letting the data speak and consider two *ex post* exploratory analyses focused on visualizing the sector covariance structure. First, visualize the proportion of variance via longitudinal principal components, evaluated annually over the period (using the Ledoit-Wolf shrinkage estimator):

This illustrates the primary component (labelled “Comp. 1”) is both *dominant yet fairly unstable* over the period: consistently explaining over 60% of variance, jumping nearly 20% year-over-year in 2007, and peaking at 84% in 2010. Recall standard interpretation of the primary component as the *market component*, reflecting the common variance amongst all instruments in the “market” (*e.g.* see Tsay [2005], p. 424).
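The proportion of variance explained by each component is simply the normalized eigenvalue spectrum of the covariance matrix, λᵢ / Σλ. A minimal sketch with a toy equicorrelated covariance (illustrative numbers, not the ETF data), where a strong common factor yields a dominant first component:

```r
# Proportion of variance per principal component is the normalized eigenvalue
# spectrum of the covariance matrix: lambda_i / sum(lambda).
# Toy equicorrelated matrix: common correlation of 0.6 across four assets.
Sigma <- matrix(0.6, nrow=4, ncol=4)
diag(Sigma) <- 1
lambda <- eigen(Sigma, symmetric=TRUE)$values
propVar <- lambda / sum(lambda)
propVar[1]  # first ("market") component explains 70% of variance here
```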

The discontinuity in 2007 implies a fairly significant *market correlation regime shift*, towards greater co-movement during and after 2007. While this makes economic sense during 2007 – 2009 (crisis and subsequent grind down to capitulation in 2009), the continued increase in variance explained by the market component through 2010 is surprising given the steep recovery.

Second, turn attention to minimum variance and generate sector weights on an annual basis. Plot the temporal evolution of those weights over the period to see how each sector contributes to minimizing “risk” within the overall S&P 500 index at varying times during this period:

Economic intuition suggests sector weights should broadly reflect macro trends, just as with principal components. Several observations provide positive corroboration:

- Defensive sectors: staples and healthcare (and utilities, to a lesser extent) are consistent and significant long weights, bearing out their mythology as “defensive” sectors; particularly interesting to compare with the very strong post-crash performance of staples (see returns diagram above)
- Financials: peak slightly above zero weight in 2005 and subsequently decrease to small short weights; consistent with intuition, given their central role in the financial collapse
- Tech: negative weight coming out of post-tech bubble residual, increasing to modest long weight as the post-crash recovery accelerated in 2009 and tech helped lead the recovery
- Energy: on a nearly linear trend downward from modest long to small short weight, arguably reflecting the increasing price volatility of oil

Consider now an *ex ante* perspective: a *minimum variance sector rotation* strategy, based upon the weights calculated above. Specifically, open positions on 1 Jan which correspond to the portfolio of minimum variance weights, calculated from the previous year per the above model. Though admittedly simple, this strategy remains fully invested at all times, rebalances annually, and does not filter based upon market regime. In other words, a *simple systematic dynamic asset allocation* based on an annual lookback, with very low transaction costs.

Given the combination of strategy simplicity and a remarkably dynamic market environment, *a priori* intuition suggests performance of this strategy should be boring. The actual cumulative log returns per year are illustrated in the following graphic:

Next, these are concatenated into continuous cumulative daily returns and overlaid with SPY log returns for the corresponding time period:

The obligatory strategy summary statistics versus beta benchmark:

| | MVP | SPY |
|---|---|---|
| Max drawdown | 9.984622% | 23.9207% |
| Std Dev | 0.008390046 | 0.01359853 |
| Sharpe | 0.6171763 | 0.2172976 |
| Wins | 54.05559% | 54.76731% |
| Losses | 45.94441% | 45.23269% |
| Avg Win | 0.3218431% | 0.4352086% |
| Avg Loss | -0.2895475% | -0.4163521% |

In summary, performance of this strategy broadly matches the aspirational intent of minimum variance portfolios: comparable performance during normal market environments, underperformance during market exuberance, smaller comparative variance (*e.g.* average win/loss), higher Sharpe, and significantly smaller max drawdown. All good stuff.

While hardly noteworthy, this analysis and toy strategy illustrate that minimum variance portfolios applied to sector rotation do appear to trade consistently with their theoretical claims of benefit. Of course, there are many interesting avenues for improving this strategy.

For readers seeking to follow along in R, the analysis and plotting code for this post is as follows:

```r
require(xts)
require(tseries)
require(tawny)
require(quadprog)  # provides solve.QP, used by minvar()

# model parameters
s <- "2003-01-01"
spyS <- "2004-01-01"
e <- "2011-01-01"
q <- "AdjClose"
usEquities <- c("XLB", "XLE", "XLF", "XLK", "XLI", "XLP", "XLU", "XLV", "XLY")
usEquityNames <- c("materials", "energy", "financials", "tech", "industrial",
                   "staples", "utilities", "healthcare", "discretionary")
colors <- c('black', 'red', 'blue', 'green', 'orange', 'purple', 'yellow',
            'brown', 'pink')

usClose <- as.xts(data.frame(lapply(usEquities, get.hist.quote, start=s, end=e, quote=q)))
usRets <- xts(data.frame(lapply(log(usClose), diff)), order.by=index(usClose))[2:nrow(usClose)]
colnames(usRets) <- usEquities
spy <- get.hist.quote("SPY", start=spyS, end=e, quote=q)

# annualize trade returns and calculate MVP weights
annualNames <- array(c("2003", "2004", "2005", "2006", "2007", "2008", "2009", "2010"))
annualReturns <- do.call(rbind, sapply(annualNames, function (yr) { usRets[yr] }))
annualWeights <- t(sapply(c(1:length(annualNames)), function(i) {
  minvar(annualReturns[annualNames[i]])
}))
colnames(annualWeights) <- usEquities
rownames(annualWeights) <- annualNames
annualTradeRets <- matrix(vapply(c(1:(length(annualNames)-1)), function (i) {
  r <- cumsum(annualReturns[annualNames[i+1]] %*% annualWeights[i,])
  r[length(r)]
}, -100))
dailyPnL <- do.call(rbind, sapply(c(1:(nrow(annualWeights)-1)), function (i) {
  matrix(annualReturns[annualNames[i+1]] %*% annualWeights[i,])
}))

# plot longitudinal evolution of pca component variance
pcaStds <- do.call(cbind, lapply(annualNames, function(yr) {
  sdev <- princomp(covmat=cov.shrink(annualReturns[yr]))$sdev
  sdev^2 / sum(sdev^2)
}))
colnames(pcaStds) <- annualNames
pcaStdMeans <- matrix(rowMeans(pcaStds))
demeanedPcaStds <- sweep(pcaStds, 1, rowMeans(pcaStds), "-")
plot(pcaStds[1,], ylim=range(pcaStds), type='l', xaxt="n", xlab="Year",
     ylab="Proportion of Variance",
     main="Longitudinal PCA Variance Decomposition by Component")
lapply(c(2:5), function (i) { lines(pcaStds[i,], type='l', col=colors[i]) })
axis(1, 1:length(annualNames), annualNames)
legend(.45, legend=rownames(pcaStds)[1:5], fill=c(colors[1:5]), cex=0.5)

# plot sector returns
par(mfrow=c(3,3))
sapply(c(1:ncol(usRets)), function (i) {
  plot(cumsum(usRets[,i]), type='l', xlab="", ylab="Return",
       main=format(usEquityNames[i]))
})

# plot longitudinal annual weights
plot(annualWeights[,1], ylim=range(annualWeights), type='o', ylab="Weight",
     xlab="Year", xaxt="n", main="Annualized Minimum Variance Sector Weights",
     col=colors[1])
axis(1, 1:nrow(annualWeights), rownames(annualWeights))
for (i in c(2:ncol(annualWeights))) {
  lines(annualWeights[,i], col=colors[i], type='o')
}
legend(-.4, legend=usEquityNames, fill=c(colors), cex=0.5)

# plot cumulative daily returns
par(mfrow=c(3,3))
sapply(c(1:(nrow(annualWeights)-1)), function (i) {
  plot(cumsum(annualReturns[annualNames[i+1]] %*% annualWeights[i,]), type='l',
       xlab="Trading Day", ylab="Return", main=format(annualNames[i+1]))
})

# plot daily PnL
cumDailyPnL <- cumsum(dailyPnL)
cumSpy <- cumsum(diff(log(coredata(spy))))
maxRange <- max(range(cumDailyPnL), range(cumSpy))
minRange <- min(range(cumDailyPnL), range(cumSpy))
plot(cumDailyPnL, type='l', xlab="Trading Day", ylab="Return",
     main="Annualized Minimum Variance Sector Strategy P&L",
     ylim=c(minRange, maxRange))
lines(cumSpy, type='l', col='red')
legend(.6, legend=c("MVP","SPY"), fill=c("black", "red"), cex=0.5)
axis(1, index(usClose))

# print strategy summary statistics
plSummary(dailyPnL)

# function to generate weights for MVP from a return series
minvar <- function(rets) {
  N <- ncol(rets)
  zeros <- array(0, dim = c(N,1))
  aMat <- t(array(1, dim = c(1,N)))
  res <- solve.QP(cov.shrink(rets), zeros, aMat, bvec=1, meq=1)
  return (res$solution)
}

# function to pretty print strategy statistics
plSummary <- function(dailyPnL) {
  cumDailyPnL <- cumprod(1 + dailyPnL) - 1
  cat("Max drawdown:", (maxdrawdown(dailyPnL)$maxdrawdown * 100), "%\n")
  cat("Std dev:", sd(dailyPnL), "\n")
  cat("Sharpe:", sharpe(cumDailyPnL), "\n")
  win <- mean(ifelse(dailyPnL > 0, 1, 0))
  cat("Wins:", (win*100), "%\n")
  cat("Losses:", ((1-win)*100), "%\n")
  cat("Average Win:", (mean(ifelse(dailyPnL > 0, dailyPnL, 0)) * 100), "%\n")
  cat("Average Loss:", (mean(ifelse(dailyPnL < 0, dailyPnL, 0)) * 100), "%\n")
}
```

Another interesting post (though the history on these ETFs goes back further than 2003 so you could have presented a longer backtest, perhaps this underperformed during the internet boom?).

Personally, I wouldn’t call this sector rotation. Sector rotation isn’t just a strategy that has the result that sector weights change over time. Sector rotation isn’t just about volatility and correlation; it is also about the returns on some sectors being expected to be stronger than others. This is just a straight-up minimum variance portfolio but only on sectors. Of course, you could do the same thing on asset classes or on the sub-indices of a bunch of asset classes (like for individual countries in EAFE).

I suppose a tricky thing is that if you didn’t want to invest in ETFs but instead were a portfolio manager forced to invest in equities, how you would do it. One obvious way is to perform the minimum variance optimization on sectors and then use a constrained optimization when performing a second minimum variance optimization on individual stocks (so that sector weights equal the weights from the first).

But wouldn’t it be more relevant to consider whether this constrained minimum variance portfolio outperforms the unconstrained version? Surely you could also plot the sector weights of an unconstrained portfolio and see whether they vary over time (I would guess they do, but am unsure of how similar they are to what you do). While the minimum variance strategy you present surely beats SPY, it isn’t that controversial for someone to expect that to be so over this time period. It would be very interesting if a constrained minimum variance portfolio outperformed an unconstrained one over a long period of time. I don’t think that would be the case, but it would be interesting to check.

@John: thanks for the compliment; I agree with all your comments. Backtesting prior to 2003 would be quite interesting (particularly being able to pull in the tech bubble), although I have not done so. Depending on reader interest, I may do a follow-up post which digs into these issues, including global ETF MVP, rolling weights, and a longer backtest.

Can you clarify which constraint you are referring to, in your final paragraph?

I just meant in regards to the strategy in the third paragraph. Perform the minimum variance optimization on the sectors, then perform a constrained minimum variance optimization on the stocks but constraining the total weights on the sectors to equal the weights you determine from the first optimization.
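John’s two-stage scheme can be sketched as a stock-level quadratic program with one equality constraint per sector (a hypothetical setup with toy numbers; `constrainedMinvar` and its inputs are illustrative assumptions, not code from the post):

```r
require(quadprog)

constrainedMinvar <- function(Sigma, sector, W) {
  # Sigma: stock covariance; sector: factor of sector labels, one per stock;
  # W: named vector of target sector weights (summing to one), from stage one
  N <- ncol(Sigma)
  S <- levels(sector)
  Amat <- sapply(S, function(s) as.numeric(sector == s))  # one equality per sector
  solve.QP(Dmat=Sigma, dvec=rep(0, N), Amat=Amat, bvec=W[S], meq=length(S))$solution
}

# toy example: 4 stocks in 2 sectors (illustrative numbers)
Sigma <- diag(c(0.04, 0.05, 0.09, 0.10))
sector <- factor(c("tech", "tech", "energy", "energy"))
W <- c(tech=0.6, energy=0.4)
w <- constrainedMinvar(Sigma, sector, W)
tapply(w, sector, sum)  # recovers the sector targets 0.4 / 0.6
```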

@John: happy to run the analysis, provided historical daily index components and weights (which should be available from CRSP and Bloomberg, neither of which I currently have access to).

I might be able to get you some more data than what is in the second post on Monday if you want. I think on Factset at work I have the sector data going back to the late 80s from S&P or MSCI. This data should track closely with what you did. If you want to go further back with data that isn’t exactly comparable to what you did, Ken French has industry portfolios going back to the 1960s on his website.

http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html

First of all, it’s a very interesting blog, thank a lot.

I’ve tried to run the code but it looks like it misses something.

I added these lines at the beginning:

library(tseries)

usEquities = c("XLB", "XLE", "XLF", "XLK", "XLI", "XLP", "XLU", "XLV", "XLY")

but now it display this message:

could not find function “minvar”

I am running R 2.9.0

Thanks again for the post

@Skif: thanks for the compliment. To your question, note the code is not listed in execution order, for clarity of presentation; also, some boilerplate is omitted (*e.g.* library declarations). For minvar(), it is defined at the bottom.

I stumbled across your blog a few weeks back – I enjoy your analysis and insight on many of the topics that interest me.

Was curious if you’ve managed to find a decent source of historical data on the web. I see you use get.hist.quote, which is quite useful. However, the underlying closing prints from Yahoo, though div-adjusted, only get rounded out to 2 decimal points. I’m not sure what kind of effect this has on returns – probably minimal – but it’s always good to have more data!

@Emilio: thanks for the kind words. What sort of data are you looking for? Note that good, clean data at sub-daily frequency costs money.

Thank you for the great blog and knowledge you share, and also for the code in R.

I tried to use the code. All is working well and I learned a lot from it. There is only one issue left: calculating the vector of wins in

```r
# function to pretty print strategy statistics
plSummary <- function(dailyPnL)
{
  cumDailyPnL <- cumprod(1 + dailyPnL) - 1
  cat("Max drawdown:", (maxdrawdown(dailyPnL)$maxdrawdown * 100), "%\n")
  cat("Std dev:", sd(dailyPnL), "\n")
  cat("Sharpe:", sharpe(cumDailyPnL), "\n")
  cat("Wins:", (win*100), "%\n")
  cat("Losses:", ((1-win)*100), "%\n")
  cat("Average Win:", (mean(ifelse(dailyPnL > 0, dailyPnL, 0)) * 100), "%\n")
  cat("Average Loss:", (mean(ifelse(dailyPnL < 0, dailyPnL, 0)) * 100), "%\n")
}
```

There is a problem in this function, since the win calculation is missing.

Thanks for your input on how to overcome this issue and future blog posts.

Regards,

Samo.

@Samo: post updated to resolve the typo. Thanks for your compliment.

Nice to see you explore this, since it is topical. However, let’s be clear… there really is no prediction-free way to model anything in finance. This is not physics, but a social system.

The risk premium might be zero, or it might not.

One prior consistent with min-risk is zero-risk premium, but minimum variance can just as well be viewed as “a priori” equal returns, whence the portfolio structure comes from the existence of unequal risk. That (in itself) should alert one to the temporary nature of any such regime. It is unlikely to be a persistently rewarded strategy.

In my view, we are not living in a world where the revenue dynamics of firms and sectors are the same. Therefore, while the industry does (currently) have a mania for risk-parity and min-variance, this is (almost certainly) the worst time of all to adopt such a strategy. It is an example of how quantitative finance periodically craves the certainty of a “new rule” and then puts all its eggs in the one basket. 🙂

@savvyyabby: I agree with your conceptual sentiments, namely: finance is not physics (but does have physics envy); there are no free lunches; finance follows trading regimes, and dynamics of firm and sectors are different.

That said, I am unclear on the meaning of several of your comments / conjectures; do you mind clarifying the following: (i) “no prediction-free way to model anything in finance”, as certainly not all models admit prediction, nor is that the intent of many models; and (ii) “worst time of all to adopt such a strategy”.

One example might clarify my comment. In the period up until 2007/08 it was common among quantitative practitioners to claim that “sector-neutrality” was the preferred strategy to highlight skill because models cannot pick sectors. The problem is that the definition of sector neutrality requires one to pick a sector classification under which to define it. So the claim of “not picking sectors” by being sector-neutral is logically flawed.

Investors who did this were actually all agreeing to the exact same sector scheme and then herding around neutrality on that choice. Sector neutral by one scheme is sector betting in another. If investors passively agree to all use the same scheme then they are not nearly as neutral as they seem!

This is the reason why I think risk-parity or the closely positioned minimum variance portfolios are a little dangerous right now. There is an embedded mindset now that risk is not rewarded. Perhaps the sign of the risk premium can be positive in bull markets and negative in bear markets. If so, then the minimum risk portfolio may simply be the average position of retreat for an investor who can’t really tell if they are in a bull market or a bear market.

They are constructions that remind me of the old economist joke: head in the oven, feet in the freezer and average temperature just right!

This is not to say there is no value in such a construction, just that the apparently good through-the-cycle performance may be hiding a more interesting question. Perhaps there are rewarded bets not being taken?

One has to recall that “market timing” is anathema in MPT. However, if the risk premium can be negative then it is the appropriate strategy. Certainly, in the GFC, those who followed common sense and sold early any assets that were likely to be held by levered parties avoided catastrophic losses. To my mind the minimum variance proposal is just a new rule advanced to “save” having to think about this important issue of the likely market trend.

That just makes it “less bad” not “good” or “deep” in any way. In contrast, an old-school strategist would simply estimate the likely magnitude of the ERP and shade that up prospectively when markets are low, and down when they are high. When you believe ERP < 0 then go to cash.

@savvyyabby: thanks for clarifying; agreed on all points. Or, in short: consensus portends crisis.

BTW: thanks for posting the R code. Very handy.

What is the optimal way to implement MVPs on ETFs in a long-only environment (such as in tax-advantaged accounts)? It seems the long-only constraint creates a very sparse matrix that is unsuitable for trading. Any thoughts?

(To be clear: I mean when any instrument replicating the short ETF is unavailable.)

@jones3316moonvest: to avoid sparse matrices, optimize MVP with upper bound constraints, as described in Minimum Variance Portfolios.
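The upper-bound approach mentioned above can be sketched as a long-only QP with a per-instrument cap (the `minvarLongOnly` helper, its `ub` parameter, and the toy covariance are illustrative assumptions, not code from the linked post):

```r
require(quadprog)

# Long-only minimum variance with a per-instrument upper bound 'ub',
# which avoids the sparse corner solutions of a bare long-only constraint.
minvarLongOnly <- function(Sigma, ub=0.25) {
  N <- ncol(Sigma)
  # constraints: sum(w) = 1 (equality), then w_i >= 0 and -w_i >= -ub
  Amat <- cbind(rep(1, N), diag(N), -diag(N))
  bvec <- c(1, rep(0, N), rep(-ub, N))
  solve.QP(Dmat=Sigma, dvec=rep(0, N), Amat=Amat, bvec=bvec, meq=1)$solution
}

Sigma <- diag(c(0.02, 0.04, 0.08, 0.16))  # toy diagonal covariance
w <- minvarLongOnly(Sigma, ub=0.4)        # the cap binds on the lowest-vol asset
```

Without the cap, the lowest-volatility asset would take over half the portfolio; the bound forces the excess into the remaining names in proportion to their inverse variances.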

Great post…thanks for the info.

By the way, to save time for R newbies like myself, I thought I’d mention that your code requires the following packages:

require(xts)

require(tseries)

require(tawny)

Thanks again,

Dave

@Dave: thanks for comment. You are correct and post has been updated accordingly.

Great post!

In line 47 there seems to be an extra “)” after “usEquityNames”.

@vonjd: thanks for comment. Post updated to correct extraneous parenthesis.

Thanks very much for a very useful post.

Although I’m pretty familiar with R, I’m still not quite comfortable with the code. Can you give some pointers on how the code could be modified to recalculate weights quarterly or monthly?

@John: a variety of ways; one way is changing the frequency of usClose observations to the desired interval.
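A minimal sketch of another way, using quarter-end indices from `endpoints()` with a trailing window (synthetic returns stand in for the post’s usRets, and this `minvar` mirrors the one defined in the post but uses the sample covariance to stay self-contained):

```r
require(xts)
require(quadprog)

# same idea as the post's minvar(), with plain cov() instead of cov.shrink()
minvar <- function(r) {
  S <- cov(r)
  solve.QP(S, rep(0, ncol(r)), matrix(1, ncol(r), 1), bvec=1, meq=1)$solution
}

# synthetic daily returns for three instruments (stand-in for usRets)
set.seed(1)
dates <- seq(as.Date("2009-01-01"), as.Date("2010-12-31"), by="day")
rets <- xts(matrix(rnorm(length(dates) * 3, sd=0.01), ncol=3), order.by=dates)
colnames(rets) <- c("A", "B", "C")

# recompute weights at each quarter-end from a trailing 252-observation window
ends <- endpoints(rets, on="quarters")
ends <- ends[ends >= 252]   # require a full trailing window of history
weights <- t(sapply(ends, function(i) minvar(rets[(i - 251):i, ])))
rownames(weights) <- as.character(index(rets)[ends])
colnames(weights) <- colnames(rets)
```

In the post’s code, the analogous change is replacing the annual `annualNames` loop with these quarter-end indices.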

Feel free to ping me privately via email if you want to discuss specifics, as the code is intended to be illustrative of the principle while suffering obvious software engineering shortcomings.