
Three Horsemen

July 25, 2009

Conventional wisdom tells us quantitative trading is hard. Yet, few know specifically why it is difficult. Moreover, different types of quantitative trading suffer from differing complexities, such as mathematical modeling, information access (e.g. tick-level data), computational facilities, execution facilities (e.g. low latency), risk management (e.g. real-time VaR), and leverage (e.g. RegT vs portfolio margin).

Amongst all this complexity, much is due to the three horsemen of quantitative trading: bias, stationarity, and ergodicity.

Human psychology informs us humans are particularly susceptible to attributional bias:

Cognitive bias that affects the way we determine who or what was responsible for an event or action.

CNBC is literally in the business of attributional bias: talking heads trying to explain why the market did what it did, despite everyone knowing full well such explanations are incomplete at best (and completely fallacious, at worst). This bias motivates the hypothesis that computers, via system and program trading, can trade better than humans.

This difficulty is further compounded by a fundamentally mistaken assumption about randomness in the financial markets (black swans aside). Specifically, the vast majority of statistical analysis techniques make the following (often unstated) assumption:

A random process will not change its statistical properties with time, and such properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process. (Wikipedia)

This fallacy is inherited from probability theory, and formally means an assumption that randomness can be modeled as a stationary ergodic process. Stationarity refers to statistical properties not changing over time. Ergodicity refers to such properties being deducible from a single, sufficiently long sample of the random process. Unfortunately, both are bogus for the vast majority of quantitative finance (otherwise, every first-year econometrics grad student would be rich).
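To make the ergodicity assumption concrete, here is a minimal sketch (mine, not from the original post; the process and parameters are purely illustrative) of a toy process that is stationary but not ergodic: each realization is noise around a hidden mean drawn once per path, so a single long sample tells you about that path's mean, not the ensemble's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 500, 5_000

# Toy process: X_t = M + eps_t, where the "hidden" mean M is drawn once per
# realization. The process is stationary (its distribution does not change
# over time) but NOT ergodic: the time average of any single path converges
# to that path's M, not to the theoretical ensemble mean of 0.
M = rng.normal(0.0, 1.0, size=(n_paths, 1))
eps = rng.normal(0.0, 0.1, size=(n_paths, n_steps))
X = M + eps

print(f"time average of one long path: {X[0].mean():+.3f}")
print(f"that path's hidden mean M    : {M[0, 0]:+.3f}")
print(f"ensemble average at t = 0    : {X[:, 0].mean():+.3f}")
# However long the single sample grows, its time average recovers M[0], not
# the ensemble mean -- exactly the property the ergodicity assumption buys.
```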

Examples of this fallacy litter the financial landscape, not just the quantitative world:

  • Past performance: many people make investing decisions based upon past performance (aka “chasing the hot hand”), hoping the future will be similar to the past
  • Credibility: many people assign credibility to those individuals who have prognosticated successfully in the past, compounded by our culture of celebrity
  • Asset allocation: many people allocate their portfolio according to simplified techniques derived from modern portfolio theory, ranging from classic 60/40 to new “Endowment” models
  • Option pricing: from Black–Scholes onward, the vast majority of mathematical finance is based upon mathematical formalism which assumes stationary ergodicity (e.g. martingales, Brownian motion, etc.); see the pricing sketch after this list
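As a small illustration of the assumption flagged in the option-pricing example, below is a minimal Black–Scholes call pricer (a sketch of the textbook formula, not code from the post); the single constant sigma is where stationarity enters: volatility is presumed unchanging over the option's life.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """European call under Black-Scholes.

    The lone constant `sigma` encodes the stationarity assumption: the
    statistical character of returns is presumed unchanging over [0, T].
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative (made-up) inputs: spot 100, strike 100, 1 year, 2% rate, 20% vol.
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.02, 0.20), 2))
```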

And, arguably the most egregious, which seduces even the most rigorous statistical minds: time-series analysis (from regression analysis to principal component analysis) whose data spans many decades. This line of thinking is usually justified by appealing to the law of large numbers: “the more data, the better”. Unfortunately, lack of stationarity is far more statistically damaging than lack of large numbers.
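To see the point, consider a toy sketch (assumptions and numbers mine, not the post's): daily returns drawn from two regimes with different mean and volatility. The statistics estimated from the pooled “multi-decade” sample describe neither regime the trader will actually face.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "multi-decade" panel: two regimes with different mean and volatility.
regime_a = rng.normal(0.0005, 0.01, size=2_500)   # ~10 years of calm daily returns
regime_b = rng.normal(-0.0002, 0.03, size=2_500)  # ~10 years of turbulent daily returns
pooled = np.concatenate([regime_a, regime_b])

for name, x in [("regime A", regime_a), ("regime B", regime_b), ("pooled", pooled)]:
    print(f"{name:>8}: mean = {x.mean():+.5f}   std = {x.std():.4f}")
# The pooled estimates fall between the two regimes and describe neither:
# adding more (non-stationary) data merely sharpens the estimate of a
# quantity that no longer corresponds to any period being traded.
```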

These three factors often combine to confound quantitative trading: statistical techniques assuming stationary ergodicity are used to build systems, to which their authors attribute explanatory power for the behavior of a set of instruments. Unfortunately, in practice, the resulting statistical models tend to be insufficiently stable and thus have limited success in generating consistent profit.

Given these three horsemen, the challenge is to trade in ways which minimize these fallacies.

10 Comments
  1. Dharma permalink
    August 19, 2009 12:36 am

    Thanks for this. Really helps for someone like me who is not so mathematically inclined.

  2. Soham permalink
    September 3, 2009 10:32 pm

    And, arguably the most egregious, which seduces even the most rigorous statistical minds: time-series analysis (from regression analysis to principal component analysis) whose data spans many decades.

    Time series analysis need not always have an in-built assumption of stationarity “factored” inside it. There are models which try to capture the time-varying nature of returns and variance (i.e. heteroscedasticity). The latter, the time-varying nature of variance, is, I think, a far more credible “factor”.


    • quantivity permalink
      September 4, 2009 9:16 am

      @Soham: thanks for your comment; absolutely, there are many modern techniques, from GARCH to robust statistics; as you note, non-stationary variance has had particularly extensive treatment by the derivatives pricing literature, among others, via stochastic volatility and related techniques; to clarify, my intent with that comment was to question the suitability of long data panels, rather than whether the corresponding analysis technique assumed stationarity or not.

  3. qbit permalink
    October 28, 2009 4:35 am

    An ergodic process is necessarily stationary, no? So why do people use the term stationary ergodic process?

  4. Patrick permalink
    February 3, 2010 7:59 pm

    “Unfortunately, lack of stationarity is far more statistically damaging than lack of large numbers.”

    Unless you have a sufficiently large time-series of the Vanna/Vomma ratio.


  1. Stability by Quantile « Quantivity
  2. Why Moving Averages « Quantivity
  3. Generational Regimes « Quantivity
  4. Backtesting: You’re Doing it Wrong | Flirting With Models
