Risk is deeply underappreciated.

Moreover, it is misunderstood, even by many who have smelled it up close via big trading losses on hedged positions. Aaron Brown’s most recent text, Red-Blooded Risk, explains why.

In doing so, it is simultaneously brilliant and flawed. For the former, Brown deserves credit; for the latter, the publisher presumably deserves most of the blame.

Unmasking a phenomenon $f$ into its constituent parts $\textbf{g}$, via functional decomposition $\phi$, is one of the great beauties of mathematics:

$f(\textbf{x}) = \phi(g_1(\textbf{x}), g_2(\textbf{x}), \dots, g_n(\textbf{x}))$

This technique finds surprisingly frequent use in quant models.
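As a toy illustration of the decomposition above, consider a minimal sketch in Python; the component functions and recombination rule here are purely hypothetical, chosen only to make the pattern concrete:

```python
import math

# Toy functional decomposition: f(x) = phi(g1(x), g2(x)).
# The components and the recombination rule are illustrative only.

def g1(x):
    return math.sin(x)      # oscillatory component

def g2(x):
    return 0.5 * x          # linear trend component

def phi(*parts):
    return sum(parts)       # recombination rule: simple additive mixing

def f(x):
    return phi(g1(x), g2(x))

print(f(2.0))  # sin(2.0) + 1.0
```

The point is structural: once $f$ is expressed via $\phi$ over the $g_i$, each constituent can be modeled, estimated, or hedged separately.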

Ongoing analysis and trading based on proxy hedging, exemplified by the series beginning with Proxy / Cross Hedging, suggests potential for an equity decomposition model based on the relationship between the returns of a stock $r_t$ and its corresponding index $i_t$:

$r_t = s_t \left[ \alpha_t | z_t | + (1 - \alpha_t) \beta | i_t | \right] + \epsilon_t$

To explain this model, let’s build it up from intuition.
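Before doing so, a minimal simulation sketch may help fix intuition for the functional form. All parameter values below are illustrative assumptions (and $\alpha_t$ is held constant for simplicity, though the model allows it to vary in time):

```python
import random

random.seed(42)

# Sketch simulation of r_t = s_t * [alpha * |z_t| + (1 - alpha) * beta * |i_t|] + e_t.
# Parameter values are illustrative assumptions, not fitted to data.
beta, alpha, n = 1.2, 0.4, 1000

returns = []
for _ in range(n):
    i_t = random.gauss(0.0, 0.01)       # index return
    z_t = random.gauss(0.0, 0.015)      # idiosyncratic magnitude driver
    s_t = random.choice([-1.0, 1.0])    # sign of the stock return
    e_t = random.gauss(0.0, 0.001)      # residual noise
    r_t = s_t * (alpha * abs(z_t) + (1 - alpha) * beta * abs(i_t)) + e_t
    returns.append(r_t)

mean_r = sum(returns) / n
print(len(returns), round(mean_r, 4))
```

Note how the sign $s_t$ is separated from the magnitude, which is itself a convex mix of idiosyncratic and index-driven components weighted by $\alpha_t$.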

Quantivity is pleasantly surprised to discover that an increasing number of folks are deriving value from the Curated Quant Research Feed on @Quantivity. Indeed, the combo of a daily curated feed with single-source retrospective search has become indispensable for personal research. Towards understanding why, Kedrosky provides a nice explanation in his Curation is the New Search is the New Curation post earlier this year:

Head back to curation and watch new algos emerge on top of that next-gen curation again. Think of Twitter as a new stab at curation. Curated sites will re-seed a new generation of algorithmic search sites. In short, curation is the new search.

Indeed, the intent of curation here is to maintain a high signal-to-noise ratio for a mix of preprints and classics in a highly specialized literature (i.e. a combo of retail $\mathbb{P}$ and prop $\mathbb{Q}$) for which strong motivation exists elsewhere to obfuscate; and search over the stream provides the ability both to rewind time and to integrate conceptual connectivity spanning time.

Algebraic geometry and topology traditionally focused on fairly pure math considerations. With the rise of high-dimensional machine learning, these fields are increasingly being pulled into interesting computational applications such as manifold learning. Algebraic statistics and information geometry offer potential to help bridge these fields with modern statistics, especially time-series and random matrices.

Early evidence suggests potential for significant intellectual cross-fertilization with finance, both mathematical and computational. Geometrically, these fields offer richer modeling and analysis of latent geometric structure than is available from classical linear-algebraic decomposition (e.g. PCA, one of the main workhorses of modern $\mathbb{P}$ finance); for example, cumulant component analysis. Topologically, they enable more effective qualitative analysis of data sampled from manifolds or singular algebraic varieties; for example, persistent homology (see CompTop).
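For reference, the linear-algebraic baseline these geometric methods aim to enrich can be sketched in a few lines: PCA on two return series via closed-form eigen-decomposition of the 2x2 sample covariance matrix. The data is a toy sample, illustrative only:

```python
import math

# PCA baseline: eigen-decomposition of the 2x2 sample covariance matrix
# of two toy return series (illustrative data, not real prices).
x = [0.01, -0.02, 0.015, 0.03, -0.01]
y = [0.012, -0.018, 0.02, 0.025, -0.008]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((v - mx) ** 2 for v in x) / n
syy = sum((v - my) ** 2 for v in y) / n
sxy = sum((x[i] - mx) * (y[i] - my) for i in range(n)) / n

# Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula;
# the discriminant form below is guaranteed non-negative.
tr = sxx + syy
disc = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc

explained = lam1 / (lam1 + lam2)   # variance share of first principal component
print(round(explained, 3))
```

PCA captures only this linear second-moment structure; the geometric and topological tools above target structure (curvature, holes, singularities) that such a decomposition cannot see.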

Lag Dynamics with Autocopulas investigated autocopulas for underlying and hedge instruments as applied to proxy / cross hedging, concluding that large-magnitude temporal volatility clustering exists. This is indeed a known stylized fact of financial returns (see Tsay 2010, Chapters 2 and 3).
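This stylized fact is easy to reproduce: simulate a GARCH(1,1)-style series and compare the lag-1 autocorrelation of raw returns (near zero) against that of absolute returns (positive). The parameters below are illustrative, not fitted:

```python
import random

random.seed(7)

# Simulate a GARCH(1,1) return series: sigma2_t = omega + a1*r_{t-1}^2 + b1*sigma2_{t-1}.
# Parameters are illustrative assumptions with persistence a1 + b1 = 0.95.
omega, a1, b1, n = 1e-6, 0.10, 0.85, 4000

sigma2 = omega / (1 - a1 - b1)   # start at the unconditional variance
returns = []
for _ in range(n):
    r = random.gauss(0.0, sigma2 ** 0.5)
    returns.append(r)
    sigma2 = omega + a1 * r * r + b1 * sigma2

def acf1(x):
    # lag-1 sample autocorrelation
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

print(round(acf1(returns), 3), round(acf1([abs(r) for r in returns]), 3))
```

Raw returns are serially uncorrelated by construction, yet their magnitudes cluster: precisely the conditional dependence the autocopula analysis detects.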

The classic discrete-time models for capturing such statistical conditionality are ARMA (see Box et al. (1994)) and GARCH (see Engle (1982) and Bollerslev (1986)), for returns and volatility respectively. Yet, therein lies a practical problem faced by hedge analysis: the necessity of selecting a model with optimal parameters and error distribution for both underlying and hedge. This post describes and implements such model selection, choosing a model from the universe of standard parameters and non-normal error distributions.
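The error-distribution half of this selection problem can be sketched simply: given standardized returns, compare the log-likelihood of a Gaussian density against a heavier-tailed Student-t. The sample below is simulated t-distributed data (a hypothetical stand-in for real residuals), so the t density should win:

```python
import math
import random

random.seed(11)

# Simulate heavy-tailed data: standard Student-t(nu) via z / sqrt(chi2_nu / nu).
n, nu = 2000, 4
xs = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    xs.append(z / math.sqrt(chi2 / nu))

def ll_normal(x):
    # Gaussian log-likelihood at the MLE (sample mean and variance)
    m = sum(x) / len(x)
    s2 = sum((u - m) ** 2 for u in x) / len(x)
    return sum(-0.5 * math.log(2 * math.pi * s2) - (u - m) ** 2 / (2 * s2) for u in x)

def ll_student_t(x, nu):
    # log-likelihood under a standard Student-t density with nu d.o.f.
    c = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) - 0.5 * math.log(nu * math.pi)
    return sum(c - (nu + 1) / 2 * math.log(1 + u ** 2 / nu) for u in x)

print(round(ll_normal(xs), 1), round(ll_student_t(xs, nu), 1))
```

Full model selection extends the same comparison (via AIC/BIC to penalize parameter counts) across the grid of ARMA/GARCH orders and candidate error distributions, separately for underlying and hedge.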

When asked to summarize their approach to proxy / cross hedging, senior folks from numerous big banks reduced it to correlation: hedge using an instrument whose correlation is close to -1. This perspective matches the popular practitioner literature, such as the recently published text Hedging Market Exposures (Bychuk and Haughey, 2011). Moreover, this perspective is at the heart of much of the research literature, going back to the original definition of the optimal hedge ratio $\hat{\beta}$ (e.g. Hull, p. 57):

$\hat{\beta} = \rho ( \frac{\sigma_u}{\sigma_h} )$

Yet, while true, this wisdom is not terribly helpful in practice for hedging well-known equities: as described in previous posts, no instrument exists with correlation that strong. This motivated revisiting the role of dependence in hedging, uncovering what may be an interesting result: multi-period asymptotically perfect hedges exist with $\rho \gg -1$.
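For concreteness, here is a sketch estimating $\hat{\beta}$ from sample data, using the identity $\rho (\sigma_u / \sigma_h) = \mathrm{cov}(u, h) / \mathrm{var}(h)$. The underlying and hedge returns are simulated with an assumed true loading of 0.8 plus idiosyncratic noise:

```python
import random

random.seed(3)

# Simulate paired returns: hedge instrument h, underlying u = beta*h + noise.
# true_beta and the volatilities are illustrative assumptions.
n, true_beta = 5000, 0.8
h = [random.gauss(0.0, 0.01) for _ in range(n)]                  # hedge returns
u = [true_beta * h_t + random.gauss(0.0, 0.004) for h_t in h]    # underlying returns

mu_u, mu_h = sum(u) / n, sum(h) / n
cov = sum((u[t] - mu_u) * (h[t] - mu_h) for t in range(n)) / n
var_u = sum((x - mu_u) ** 2 for x in u) / n
var_h = sum((x - mu_h) ** 2 for x in h) / n

rho = cov / (var_u ** 0.5 * var_h ** 0.5)
beta_hat = rho * (var_u ** 0.5 / var_h ** 0.5)   # Hull's form: rho * sigma_u / sigma_h
print(round(beta_hat, 3))
```

The idiosyncratic noise is exactly why $\rho$ stays well away from $\pm 1$ here; the single-period minimum-variance hedge cannot eliminate it, which is what motivates looking beyond correlation to multi-period dependence.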

Previous posts on empirical quantiles and copulas for proxy / cross hedge illustrate the potential insight from graphical visualization. This post continues the theme, illustrating exploratory data analysis for proxy hedging using classical statistical techniques.

In a world awash with symbolic models, there is ample room for graphical exploratory analysis in finance—as the fine texture of the real world differs from both mathematical formalisms and standard mental models. Indeed, alpha hides in the divergence between model and reality.