An astute reader suggested reproducing the results from a recent article on regime analysis by Kritzman et al., "Regime Shifts: Implications for Dynamic Strategies," FAJ (May/June 2012). This is a fun exercise to be conducted over a series of posts, as doing so illustrates several important economic principles and some elegant mathematics.
This post begins by identifying macroeconomic market regimes arising from multi-asset economic activity.
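To make the idea concrete before the series proper, here is a minimal sketch in R of the kind of regime machinery involved: a two-state Gaussian hidden Markov model fit via EM to a simulated return series using the depmixS4 package. The simulated data, the two-state choice, and the package are illustrative assumptions, not a reproduction of the paper's inputs.

```r
## hedged sketch: two-state HMM on simulated regime-switching returns
library(depmixS4)

set.seed(1)
n     <- 500
state <- numeric(n); state[1] <- 1
for (t in 2:n)   # persistent two-state Markov chain (95% self-transition)
  state[t] <- ifelse(runif(1) < 0.95, state[t - 1], 3 - state[t - 1])
ret <- rnorm(n, mean = c(0.001, -0.002)[state],
                sd   = c(0.01,   0.03)[state])

# fit a two-state Gaussian HMM by EM (Baum-Welch)
mod <- depmix(ret ~ 1, data = data.frame(ret = ret),
              nstates = 2, family = gaussian())
fit <- fit(mod)
summary(fit)          # per-state means/sds and the transition matrix
head(posterior(fit))  # inferred regime and state probabilities per period
```

The smoothed state probabilities are the raw material for regime classification; mapping them onto macroeconomic regimes is the subject of the posts to follow.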
Tadas asked an interesting question in his recent post: Where did all the finance bloggers go? A variety of folks gave thoughtful replies: Josh Brown, Felix Salmon, David Merkel, Scott Bell, Macro Man, and a bunch of anonymous professional traders. Undoubtedly, there is truth in all their observations.
Yet, perhaps there is a common root cause at work, not yet stated: implicit momentum bias.
GOOG unexpectedly disclosed its Q3 earnings early last week, on October 18th. While the earnings were marginally interesting, much more amusing was the corresponding hiccup in intraday trading. This event provides an opportunity to dig into TAQ data, view it through an HFT lens, and build intuition from some elegant ideas due to Mandelbrot, Clark, and Ané.
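Before digging into actual TAQ prints, the subordination idea itself is easy to demonstrate. The sketch below is a toy illustration in R (simulated ticks, not GOOG data) of the Clark-style result: returns aggregated per fixed number of trades look far closer to Gaussian than returns aggregated per calendar bar, where the trade count is random.

```r
## hedged illustration of subordination: calendar clock vs trade clock
set.seed(3)
n.bars <- 5000
lambda <- rexp(n.bars, 1 / 5)            # bar-level trade intensity
trades <- rpois(n.bars, lambda) + 1      # random trades per calendar bar
tick   <- rnorm(sum(trades), 0, 1e-4)    # iid per-trade returns

# calendar clock: each bar sums a *random* number of trade returns
cal <- tapply(tick, rep(seq_len(n.bars), trades), sum)

# trade clock: each bar sums a *fixed* number (5) of trade returns
k   <- 5
m   <- floor(length(tick) / k)
trd <- colSums(matrix(tick[1:(m * k)], nrow = k))

# excess kurtosis: strongly positive in calendar time, near zero in trade time
kurt <- function(x) mean((x - mean(x))^4) / var(x)^2 - 3
c(calendar = kurt(cal), trade.time = kurt(trd))
```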
Few folks could be blamed for such flippancy, as it was mostly harmless throughout the Great Moderation. In fact, traders took apparent pride in their ignorance of macro (except the global macro guys, obviously). Then along came a credit crisis.
With that swan, Quantivity concluded it was high time to formulate a systematic macro perspective: a "top down" complement to calibrate "bottom up" quant models. Quantivity brought great humility to this effort, due to both the intrinsic complexity and a comparatively weaker background in macro.
This post kicks off a few thoughts derived from this effort, hopefully a welcome addition alongside micro analysis. Two caveats are worth noting. First, confirmation bias is particularly dangerous with macro, and thus emphasis is placed on broadly considering diverse viewpoints. Second, these thoughts are posted with a bit of trepidation, given an intense desire to avoid politics and policymaking.
Index Return Decomposition prompted several readers to inquire about forecasting the sign of returns, as implied by the sign variable in the decomposition. This is an interesting topic worthy of review: a quick survey of intuition from the literature, along with some R code for exploratory analysis.
This topic is known as direction-of-change forecasting in the literature. Needless to say, successful prediction of the sign of future returns is quite interesting from a trading perspective. Traditionally, only univariate return series were considered; Anatolyev (2008) is an exception, modeling two or more interrelated markets via dependence ratios. This literature tends to be a bit obtuse, due to commonly unstated stylized assumptions regarding conditional return dynamics.
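As a concrete starting point, below is a hedged R sketch of a classic intuition from this literature (per Christoffersen and Diebold): with a nonzero mean and persistent volatility, the sign of the next-period return is partially predictable from current volatility, even when returns themselves are nearly unpredictable. The GARCH-style simulation and logit specification here are illustrative assumptions, not a recommended model.

```r
## hedged sketch: sign predictability induced by volatility dynamics
set.seed(7)
n   <- 2000
sig <- numeric(n); sig[1] <- 0.01
r   <- numeric(n)
for (t in 2:n) {
  sig[t] <- sqrt(1e-6 + 0.85 * sig[t - 1]^2 + 0.10 * r[t - 1]^2)
  r[t]   <- 0.001 + sig[t] * rnorm(1)    # persistent vol, positive drift
}

up  <- as.integer(r[-1] > 0)    # sign of next-period return
vol <- sig[-n]                  # volatility known at forecast time
fit <- glm(up ~ vol, family = binomial())
summary(fit)$coefficients       # expect a negative coefficient on vol:
                                # higher vol pushes Pr(up) toward 1/2
```

The design point is that sign forecastability here comes entirely from the interaction of drift and time-varying volatility, which is exactly the kind of unstated stylized assumption noted above.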
Quantivity is fortunate to be acquainted with numerous folks who have earned consistent returns over multiple decades without significant drawdown. Although they have varying trading strategies, there is a common theme which unifies them: top-down systematic focus on the sociology of market participants.
This focus is not behavioral finance, in search of anomalies driven by cognitive biases divergent from equilibrium (although the majority do that too). Rather, it is asking inferential sociological questions, such as: was the market "efficient," in the Fama sense, during the post-war decades prior to 2000 because people expected it to be (blissfully ignoring a few hiccups)? That is in contrast to how efficiency is commonly understood and formalized, with the causality reversed: the market is assumed to be efficient, and thus people understand it as such.
Similarly, have the past 15 years been "inefficient," in the bubble-and-anomaly sense, because cultural faith among investors in such "efficiency" was lost; or did they lose faith because the market became inefficient? Big difference.
In other words: is finance governed by physics, biology, or Peltzman?
A variety of techniques exist for estimating parameters of the return decomposition model previously introduced in Index Return Decomposition. This post considers estimation of an independent mixture model via maximum likelihood estimation (MLE), a workhorse of frequentist statistics and always a nice place to begin.
Recall $\varepsilon$ is unobserved, and thus the model cannot be directly estimated via MLE. Hence, a decision is needed on how to approach estimation of this latent variable. One way is to be naive, and simply assume $\varepsilon$ is the deterministic difference in return between stock and index (technically, this generates a profile likelihood, as formalized by Severini and Wong (1992), which Murphy and van der Vaart (2000) verify is well behaved and consistent with the exact likelihood):

$$\hat{\varepsilon}_t = r_t - i_t$$

where $r_t$ is the stock return and $i_t$ is the corresponding index return.
This assumption permits focus on estimating the mixing parameter $\lambda$, providing insight into the mixing behavior of the return being decomposed: if a stock return behaves like its index, then mixing is low with small $\lambda$ (in the limit, $\lambda = 0$ when a stock behaves identically to its index, as no mixing is required); in contrast, if the stock return regularly behaves independently of its index, then mixing is high with a large $\lambda$.
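To ground the above, here is a minimal sketch of this profile-style MLE in R under stated assumptions: simulate stock and index returns, form the naive latent series $\hat{\varepsilon}_t = r_t - i_t$, and fit an independent two-component Gaussian mixture by directly maximizing the likelihood with optim. The simulated data and the Gaussian component forms are illustrative; the decomposition model's exact parameterization may differ.

```r
## hedged sketch: profile MLE of the mixing weight lambda
set.seed(42)
n     <- 1000
index <- rnorm(n, 0, 0.01)
mix   <- rbinom(n, 1, 0.3)   # true mixing: 30% idiosyncratic moves
stock <- index + rnorm(n, 0, 0.004) + mix * rnorm(n, 0, 0.03)
eps   <- stock - index       # naive latent: deterministic stock-index difference

# negative log-likelihood of an independent two-component Gaussian mixture;
# lambda is mapped through plogis() to keep it in (0, 1), sds through exp()
nll <- function(p) {
  lambda <- plogis(p[1])
  -sum(log((1 - lambda) * dnorm(eps, p[2], exp(p[3])) +
                lambda  * dnorm(eps, p[4], exp(p[5]))))
}

fit <- optim(c(0, 0, log(0.005), 0, log(0.02)), nll, method = "BFGS")
plogis(fit$par[1])        # estimated mixing weight; truth is 0.3
exp(fit$par[c(3, 5)])     # component standard deviations
```

Note the usual mixture caveats apply: the likelihood is invariant to label switching and can degenerate as a component variance collapses, so starting values (here, deliberately separating the two scales) matter.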