You Don’t Have Alpha

September 29, 2011

Gappy claimed in a thoughtful comment to P-Q Convergence that the distinction between \mathbb{P} and \mathbb{Q} is a “false dichotomy” and cited as justification, among other sources, the standard finance doctoral textbooks on modern asset pricing (e.g. Cochrane, Singleton, and Duffie). This claim motivated Quantivity to revisit the Chicago school, toward which the academically trained practitioner inside had developed a fairly strong pragmatic aversion many years ago (e.g. does anyone grounded in the real world seriously believe discounted dividend / cashflow models, or “factors” and “styles” given the anomalies?).

This revisiting led to Cochrane’s recent AFA 2011 Presidential Address on Discount Rates (also available on video). This address is particularly remarkable juxtaposed against French’s 2008 Presidential Address on The Cost of Active Investing, which echoed the previous generation of the Chicago school by ardently defending the passive investment grail (imagine having to defend that in the middle of a financial meltdown).

Cochrane’s address, worth reading in its entirety by every serious practitioner, includes one of the best quotes in the history of modern finance (p. 51):

I tried telling a hedge fund manager, “You don’t have alpha. I can replicate your returns with a value-growth, momentum, currency and term carry, and short-vol strategy.” He said, “‘Exotic beta’ is my alpha. I understand those systematic factors and know how to trade them. You don’t.” He has a point. How many investors have even thought through their exposures to carry-trade or short-volatility “systematic risks,” let alone have the ability to program computers to execute such strategies as “passive,” mechanical investments? To an investor who has not heard of it and holds the market index, a new factor is alpha. And that alpha has nothing to do with informational inefficiency.

Most active management and performance evaluation just is not well described by the alpha-beta, information-systematic, selection-style split anymore. There is no “alpha.” There is just beta you understand and beta you don’t understand, and beta you are positioned to buy vs. beta you are already exposed to and should sell.

While Quantivity may quibble with Cochrane’s terminology, this sentiment is not far off from the views of hedgie friends. In fewer words: abnormal returns reward regime-sensitive risk premia traded via established systematic trading methodologies.
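
As a minimal sketch of the replication idea in Cochrane’s quote (illustrative only: the factor names, the decompose_alpha helper, and the synthetic data below are assumptions, not anything from the Address), one can regress a fund’s excess returns on candidate “exotic beta” factor returns and see how much performance survives as the intercept:

    # Illustrative sketch, not the Address's method: regress hypothetical fund
    # excess returns on candidate "exotic beta" factor returns via OLS.
    import numpy as np
    import pandas as pd

    def decompose_alpha(fund: pd.Series, factors: pd.DataFrame):
        """Return (alpha, betas, r_squared) from an OLS of fund on factors."""
        X = np.column_stack([np.ones(len(factors)), factors.values])
        coef = np.linalg.lstsq(X, fund.values, rcond=None)[0]
        resid = fund.values - X @ coef
        r2 = 1 - resid.var() / fund.values.var()
        return coef[0], pd.Series(coef[1:], index=factors.columns), r2

    # Hypothetical monthly excess returns for a fund and four factor proxies.
    rng = np.random.default_rng(0)
    factors = pd.DataFrame(rng.normal(0, 0.03, (120, 4)),
                           columns=["value_growth", "momentum", "carry", "short_vol"])
    fund = 0.4 * factors.sum(axis=1) + rng.normal(0, 0.01, 120)  # no true alpha
    alpha, betas, r2 = decompose_alpha(fund, factors)
    print(f"monthly alpha: {alpha:+.4f}, R^2: {r2:.2f}")
    print(betas)

If the intercept is indistinguishable from zero while the fit is high, the manager’s “alpha” is, in Cochrane’s language, simply beta the investor had not yet identified.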

Although Cochrane’s purpose in his Address is to set forth a proposed research agenda for the field, a more interesting way to read his remarks is with a forward-looking quant practitioner lens. Rather than spoil the fun for readers wanting to interpret this Address for themselves, commentary here is limited to a few observations:

  • Cyclicity: profitable trading can be taxonomized into unified logical frameworks which follow a cyclic knowledge diffusion curve (risk premia being the most recent), where new ideas evolve from highly profitable, to commoditized, to eventual theoretical reconciliation by academia (and are potentially reborn, per Gappy’s comment claiming a recent resurrection of fundamental-based returns)
  • Incongruence: financial industry apparatus built to support the Fama-French world is increasingly misaligned with this evolution in framework
  • Crowding: deep quant analysis of strategy crowding (in contrast to classic behavioral herding) is arguably most valuable in the present era of commoditized risk premia strategies (see the sketch following this list)
  • Blogosphere evolution: many excellent finance blogs (see blogroll) are dedicated to a single strategy from the current risk premia framework, whose profitability will continue to fall as intellectual returns to scale decrease due to further strategy commoditization and as more disgruntled buy-and-hold investors transition to being noise traders
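
On the crowding point, here is a rough sketch of one crude proxy (purely illustrative; the rolling_crowding helper and the synthetic strategy returns are assumptions): rising average pairwise correlation across strategy returns is one signature of capital crowding into the same commoditized risk premia.

    # Hypothetical crowding proxy: mean off-diagonal correlation of strategy
    # returns over a rolling window; rising values suggest crowding.
    import numpy as np
    import pandas as pd

    def rolling_crowding(returns: pd.DataFrame, window: int = 63) -> pd.Series:
        """Average pairwise correlation across strategies, per rolling window."""
        out = {}
        for end in range(window, len(returns) + 1):
            corr = returns.iloc[end - window:end].corr().values
            mask = ~np.eye(corr.shape[0], dtype=bool)
            out[returns.index[end - 1]] = corr[mask].mean()
        return pd.Series(out, name="avg_pairwise_corr")

    # Synthetic daily returns for five strategies sharing a common component.
    rng = np.random.default_rng(1)
    common = rng.normal(0, 0.01, 500)
    rets = pd.DataFrame({f"strategy_{i}": 0.5 * common + rng.normal(0, 0.01, 500)
                         for i in range(5)},
                        index=pd.bdate_range("2010-01-04", periods=500))
    print(rolling_crowding(rets).tail())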

Finally, all these points echo Quantivity’s belief that finance is ultimately a self-fulfilling prophecy: what trades with edge is far more a derivative of what the masses believe than of any intrinsic econometric truth. Perhaps this also speaks to econophysics: there are normative mathematical models, but they are not time-invariant (though we already knew this from Naïve Backtesting is Bogus).

Depending on reader interest, subsequent posts may discuss implications in further detail.

11 Comments
  1. david varadi
    September 30, 2011 8:24 pm

    excellent post— couldn’t agree with these points more, nor could I express them more eloquently. the beta arms race is in full force……
    best
    david (css analytics)

  2. October 1, 2011 6:59 am

    Phenomenal post!

  3. Michael Fox
    October 5, 2011 7:00 am

    Can you elaborate on the impact of buy-and-hold investors becoming noise traders? Isn’t this good for the more disciplined traders?

    • quantivity
      October 5, 2011 11:44 pm

      @Michael: excellent question; worth a dedicated post. Here are a few thoughts.

      Insufficient research exists to conclude either way, but Quantivity’s intuition is no. The two dynamics that will drive this are herding and crowding. The buy-and-hold, 401(k) baby-boomer era produced well-known dynamics for both: unsophisticated investors strongly herded, and passive indexing resulted in “dumb money” crowding by mutual funds (by intent). The tech bubble was the first hiccup, and the financial crisis broke the pattern. The confluence of both shattered the EMH façade, even for the most unsophisticated investors.

      In a world of increasing noise traders (much like pre-1950s), the herding and crowding dynamics of “dumb money” need to be (re)discovered; undoubtedly a big part will be technical (as that is how many unsophisticated investors trade), evidenced by some recent papers beginning to document resumed profitability of simple technical analysis. Some HFs are working hard analyzing this, from which alpha will flow.
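
      As an illustrative sketch only (the rule, parameters, and synthetic prices below are assumptions, not a claim about which rules those papers test), the simplest version of such an analysis is a long/flat moving-average crossover backtest:

      # Illustrative long/flat moving-average crossover; data is synthetic.
      import numpy as np
      import pandas as pd

      def ma_crossover_returns(prices: pd.Series, fast: int = 20, slow: int = 100) -> pd.Series:
          """Daily strategy returns: long when fast MA > slow MA, flat otherwise."""
          signal = (prices.rolling(fast).mean() > prices.rolling(slow).mean()).astype(float)
          return signal.shift(1) * prices.pct_change()  # trade next bar, no look-ahead

      rng = np.random.default_rng(2)
      prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2000))),
                         index=pd.bdate_range("2004-01-02", periods=2000))
      strat = ma_crossover_returns(prices).dropna()
      print(f"annualized return: {252 * strat.mean():.2%}, vol: {np.sqrt(252) * strat.std():.2%}")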

  4. October 9, 2011 12:50 pm

    Thanks for the post. At the end you made a reference to an older post of yours about backtesting. IMO the argument in that post involved a red herring fallacy. Backtesting is not used to find profitable systems but only to filter out historically non-profitable strategies, as Michael Harris puts it nicely in his relevant blog post (http://www.priceactionlab.com/Blog/2010/10/proper-use-of-back-testing/). Thus any reference to time invariance in relation to backtesting involves misdirection IMO.

    • quantivity
      October 9, 2011 2:49 pm

      @Rick: thanks for your comments, link, and kind words.

      Quantivity agrees, but only in part. Quantivity’s belief is captured in a conjecture on intrinsic orthogonal dimensionality, which is a bit more subtle than what you state and Michael elaborates in his post.

      Specifically, there exists a continuum measured in intrinsic dimensional data complexity. For example, FX has very low dimensionality, while equities have high dimensionality. The role and mechanics of backtesting fundamentally differ across this spectrum, due to the role of phenomenology.

      In low dimensional problems, the problem is reduced to phenomenology and the best mathematical tools are calibrated forecasts in the sense of signal processing or time series (rather than models from first mathematical principles, in the econometric or physical sense). In other words, curve calibration of varying sophistication. This reduction occurs because the intrinsic data dimensionality is insufficient, in practice, to prove or disprove non-phenomenological hypotheses (e.g. FX). Thus, backtesting for these problems is undertaken in a manner consistent with the description provided by you and Michael; and, indeed, claims of time-variance are uninformative as forecasts are defined by their intrinsic notions of time. Yet, as demonstrated by nearly a hundred years of research, the accuracy of forecasting depends upon the consistency of regime (which was the intent of the Naïve Backtesting is Bogus post).
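
      As a minimal sketch of what “curve calibration” means here (the AR(1) choice and the synthetic calm/stressed samples are assumptions for illustration, not a prescription), calibrate a simple forecast on one regime and observe how its accuracy degrades on another:

      # Illustrative "curve calibration": fit an AR(1) on a calm synthetic
      # sample, then apply the same forecast to a stressed sample.
      import numpy as np

      def fit_ar1(x: np.ndarray):
          """OLS estimate of x[t] = c + phi * x[t-1] + eps."""
          X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
          c, phi = np.linalg.lstsq(X, x[1:], rcond=None)[0]
          return c, phi

      rng = np.random.default_rng(3)
      calm = rng.normal(0, 0.005, 1000)      # hypothetical low-volatility regime
      stressed = rng.normal(0, 0.02, 1000)   # hypothetical high-volatility regime
      c, phi = fit_ar1(calm)                 # calibrate on the calm sample only
      for name, x in [("calm", calm), ("stressed", stressed)]:
          rmse = np.sqrt(np.mean((x[1:] - (c + phi * x[:-1])) ** 2))
          print(f"{name}: forecast RMSE {rmse:.4f}")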

      In high dimensional problems, the problem is reduced to positing “fundamental” hypotheses with corresponding mathematical model(s). In other words, building distinct models which describe orthogonal aspects of the problem, motivated by conjecture(s) whose intent is to explain from first principles rather than to explain phenomena (e.g. the cornucopia of orthogonal equity models). This reduction is feasible because the data is, in practice, sufficiently rich to enable disproving fundamental hypotheses. Thus, backtesting for these problems is model calibration, potentially both longitudinal and cross-sectional, and not forecasting in the classic sense of either signal processing or time series. In this context, the role of time-variance can be parsimoniously captured in the models (whether through regime change, stochastic evolution, or a combination of both) and its absence usually indicates a conceptual shortcoming.

      This conjecture provides a conceptual frame to evaluate backtesting methodologies. For example, consider regimes: low dimensional curve calibration lacks the mathematical sophistication to explicitly model regimes; in contrast, high-dimensional models can explicitly model regimes using numerous branches of probability and statistical theory.
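
      As one minimal illustration (among the many approaches alluded to above, with synthetic data as an assumption), a two-component Gaussian mixture makes regimes explicit by labeling each observation with its most likely state; a Markov-switching model would add persistence on top of this:

      # Illustrative regime labeling via a two-component Gaussian mixture on
      # synthetic returns spliced from a calm and a volatile sample.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      returns = np.concatenate([rng.normal(0.0005, 0.007, 750),
                                rng.normal(-0.001, 0.025, 250)])
      gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))
      labels = gm.predict(returns.reshape(-1, 1))
      for k in range(2):
          vol = float(np.sqrt(gm.covariances_[k].ravel()[0]))
          print(f"regime {k}: mean {gm.means_[k, 0]:+.4f}, vol {vol:.4f}, "
                f"share {np.mean(labels == k):.0%}")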

      One subtlety in this conjecture is the role of the dimensional curse. Classically, the curse of dimensionality was seen as a bad thing, for which the machine learning field has undertaken tremendous effort to build sophisticated heuristics and “shortcuts” to facilitate algorithmic feasibility. Yet, manifold learning suggests how and why this thinking may be incorrect: both tractability and insight arise from models which distinctly consider partitioned subsets of the full data space. For example, volatility modeling has made tremendous strides over the past 15 years precisely because it reduces the problem to just volatility (eliminating from consideration returns, correlation, and all the other important dimensions that characterize a data set). Return modeling may make similar strides by separating sign and magnitude. In more formal language, one of the most important aspects of high-dimensional model building is the correct choice of submanifold from the larger universe of data measuring the phenomenon.
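
      As a minimal sketch of the sign/magnitude separation (the component models below are deliberately naive rolling estimates and the data is synthetic; both are assumptions chosen only to show the decomposition):

      # Illustrative sign/magnitude split: forecast P(up) and E[|r|] separately,
      # then recombine as E[r] ~= E[|r|] * (2 * P(up) - 1). Data is synthetic.
      import numpy as np
      import pandas as pd

      def sign_magnitude_forecast(returns: pd.Series, window: int = 60, span: int = 20) -> pd.Series:
          p_up = (returns > 0).astype(float).rolling(window).mean()  # naive sign model
          magnitude = returns.abs().ewm(span=span).mean()            # naive magnitude model
          return (magnitude * (2 * p_up - 1)).shift(1)               # use only past data

      rng = np.random.default_rng(5)
      rets = pd.Series(rng.normal(0.0003, 0.012, 1500),
                       index=pd.bdate_range("2006-01-02", periods=1500))
      forecast = sign_magnitude_forecast(rets).dropna()
      hit_rate = (np.sign(forecast) == np.sign(rets.loc[forecast.index])).mean()
      print(f"directional hit rate: {hit_rate:.1%}")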

      Finally, what makes backtesting particularly interesting in practice is that the dimensionality of problems is rapidly evolving over time as more data becomes available. Thus, even the notion of backtesting for a given instrument class is evolving; for example, HFT moving into FX will invariably increase its intrinsic data dimensionality.

      Perhaps this topic is worth a follow-up post.

  5. Kreg
    September 6, 2016 12:29 am

    I’m very late getting to this excellent post. Thanks. The good news is it is never too late to understand the lesson from Robert Lucas’s 1976 critique of macroeconomics. For systems with causality running both forwards and backwards through time, that is anything where expectations of the future affect outcomes today, he offers the only known (to me) path to a time invariant model. A lot of folks don’t like it, and it might not backtest really well, but it will not break down and in my experience (20+ years) these models are getting better all of the time.

Trackbacks

  1. Weekender | The Trader
  2. Proxy / Cross Hedging « Quantivity
  3. BullseyeMicrocaps.com » Cochrane On Alpha-Beta
  4. Links 30 Mar « Pink Iguana
