Conventional wisdom tells us quantitative trading is hard. Yet few know specifically why it is difficult. Moreover, different types of quantitative trading suffer from differing complexities: mathematical modeling, information access (e.g. tick-level data), computational facilities, execution facilities (e.g. low latency), risk management (e.g. real-time VaR), and leverage (e.g. Reg T vs portfolio margin).
Amongst all this complexity, much is due to the three horsemen of quantitative trading: bias, stationarity, and ergodicity.
Human psychology informs us humans are particularly susceptible to attributional bias:
> A cognitive bias that affects the way we determine who or what was responsible for an event or action.
CNBC is literally in the business of attributional bias: talking heads trying to explain why the market did what it did, despite everyone knowing full well that such explanations are incomplete at best (and completely fallacious at worst). This bias motivates the hypothesis that computers, via system and program trading, can do better than humans.
This difficulty is further compounded by a fundamentally mistaken assumption about randomness in the financial markets (black swans aside). Specifically, the vast majority of statistical analysis techniques make the following (often unstated) assumption:
> A random process will not change its statistical properties with time, and such properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process. (Wikipedia)
This fallacy is inherited from probability theory: formally, it is the assumption that randomness can be modeled as a stationary ergodic process. Stationarity means the statistical properties do not change over time. Ergodicity means those properties can be deduced from a single, sufficiently long sample (realization) of the process. Unfortunately, both assumptions are bogus for the vast majority of quantitative finance (otherwise, every first-year econometrics grad student would be rich).
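A minimal sketch of the stationarity point, using NumPy and synthetic data rather than market prices: white noise is stationary, so its sample statistics agree across stretches of one realization; a random walk (the textbook toy model of a price level) is not, so statistics computed on different halves of the same sample diverge.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Stationary process: i.i.d. Gaussian noise -- mean and variance stable over time.
noise = rng.normal(0.0, 1.0, size=n)

# Non-stationary process: a random walk (cumulative sum of that same noise),
# the standard toy model of a price level.
walk = np.cumsum(noise)

def halves_gap(x):
    """|mean(first half) - mean(second half)|: near zero iff stats are stable."""
    half = len(x) // 2
    return abs(x[:half].mean() - x[half:].mean())

print(f"noise mean gap between halves: {halves_gap(noise):.3f}")  # small
print(f"walk  mean gap between halves: {halves_gap(walk):.3f}")   # large: stats drift
```

For the noise, the gap shrinks like 1/sqrt(n); for the walk, it grows with the sample, so no amount of "more data" makes the estimates settle down.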
Examples of this fallacy litter the financial landscape, not just the quantitative world:
- Past performance: many people make investing decisions based upon past performance (aka “chasing the hot hand”), hoping the future will be similar to the past
- Credibility: many people assign credibility to those individuals who have prognosticated successfully in the past, compounded by our culture of celebrity
- Asset allocation: many people allocate their portfolio according to simplified techniques derived from modern portfolio theory, ranging from classic 60/40 to new “Endowment” models
- Option pricing: from Black–Scholes onward, the vast majority of mathematical finance is based upon mathematical formalism which assumes stationary ergodicity (e.g. martingales, Brownian motion, etc.)
And, arguably the most egregious, a fallacy that seduces even the most rigorous statistical minds: time-series analysis (from regression analysis to principal component analysis) whose data spans many decades. This line of thinking is usually justified by appealing to the law of large numbers ("the more data, the better"). Unfortunately, lack of stationarity is far more statistically damaging than lack of data.
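This trap can be demonstrated without any market data at all: correlate two completely independent random walks and you will typically find a large correlation, the classic "spurious regression" of Granger and Newbold. A sketch using only NumPy, averaging over many trials to make the effect visible:

```python
import numpy as np

def spurious_correlation_demo(n_steps=2_000, n_trials=200):
    """Compare |corr| of pairs of INDEPENDENT random walks vs their increments."""
    rng = np.random.default_rng(0)
    level_corrs, incr_corrs = [], []
    for _ in range(n_trials):
        dx = rng.normal(size=n_steps)
        dy = rng.normal(size=n_steps)   # independent of dx by construction
        x, y = np.cumsum(dx), np.cumsum(dy)
        level_corrs.append(abs(np.corrcoef(x, y)[0, 1]))
        incr_corrs.append(abs(np.corrcoef(dx, dy)[0, 1]))
    return float(np.mean(level_corrs)), float(np.mean(incr_corrs))

levels, increments = spurious_correlation_demo()
print(f"mean |corr| of walk levels:     {levels:.2f}")      # large despite independence
print(f"mean |corr| of walk increments: {increments:.2f}")  # near zero, as it should be
```

The increments, which are stationary, show the near-zero correlation one would naively expect; the non-stationary levels do not, which is exactly why decades-long level regressions mislead.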
These three factors often combine to confound quantitative trading: statistical techniques assuming stationary ergodicity are used to build systems, to which their authors attribute explanatory power over the behavior of a set of instruments. Unfortunately, in practice, the resulting statistical models tend to be insufficiently stable and thus have limited success in generating consistent profit.
Given these three horsemen, the challenge is to trade in ways that minimize these fallacies.
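The ergodicity horseman, in particular, can be made concrete with a toy multiplicative bet (the payoffs here are hypothetical, chosen only for illustration): each round, a fair coin multiplies wealth by 1.5 or 0.6. The per-round expected multiplier is 0.5·1.5 + 0.5·0.6 = 1.05 > 1, so the ensemble average grows; but the time-average growth factor is sqrt(1.5·0.6) ≈ 0.95 < 1, so almost every individual path decays. A trader lives along one path, not across the ensemble:

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps = 20_000, 100

# Each entry multiplies wealth by 1.5 (heads) or 0.6 (tails), fair coin.
mults = rng.choice([1.5, 0.6], size=(n_paths, n_steps))
wealth = mults.prod(axis=1)  # terminal wealth of each path, starting from 1

ensemble_mean = wealth.mean()    # tracks 1.05**100: looks wonderful
median_path = np.median(wealth)  # tracks ~0.949**100: near ruin

print(f"ensemble mean terminal wealth: {ensemble_mean:.3g}")
print(f"median terminal wealth:        {median_path:.3g}")
```

The ensemble mean is propped up by a vanishingly small fraction of lucky paths, while the typical path loses nearly everything, one illustration of why assuming ergodicity (time average equals ensemble average) is dangerous for a single account.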