
Return Decomposition via Mixing

December 28, 2011
2 Comments
  1. sameersoi
    January 3, 2012 1:24 pm

    hi quantivity, nice post! the skew-t has more parameters than the normal; did you consider the possibility of over-fitting? perhaps using the Akaike or Bayesian information criterion, imperfect as they are?

    • quantivity
      January 3, 2012 8:44 pm

      @sameersoi: thanks for the compliment. Intent here is more exploratory analysis than pure statistical inference, and thus preference is given to gaining insight from parsimonious models over model selection considerations (i.e. AIC / BIC).

      Separately, overfitting is a pervasive concern with ML, worth discussing for this model (e.g. cross-validation is natural in this context), and deliberately not considered in this post for brevity.
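
To make the exchange concrete, below is a minimal sketch of both checks: the AIC/BIC comparison sameersoi suggests and the cross-validated out-of-sample likelihood mentioned in the reply. It is not code from the post; it uses Python with scipy, synthetic returns in place of the post's data, and a skew-normal as a stand-in for the post's skew-t fit, so treat the specific numbers as illustrative only.

```python
import numpy as np
from scipy import stats

# Synthetic daily returns standing in for the post's data (hypothetical).
rng = np.random.default_rng(0)
returns = stats.skewnorm.rvs(a=-4, loc=0.0005, scale=0.01, size=1000,
                             random_state=rng)

def information_criteria(dist, data):
    """Fit a scipy distribution by MLE and return (AIC, BIC); lower is better."""
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    k, n = len(params), len(data)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

def cv_loglik(dist, data, folds=5, seed=0):
    """k-fold cross-validated out-of-sample log-likelihood; higher is better."""
    idx = np.random.default_rng(seed).permutation(len(data))
    total = 0.0
    for test_idx in np.array_split(idx, folds):
        mask = np.ones(len(data), dtype=bool)
        mask[test_idx] = False
        params = dist.fit(data[mask])          # fit on training folds only
        total += np.sum(dist.logpdf(data[test_idx], *params))
    return total

# Normal (2 parameters) vs skew-normal (3 parameters, stand-in for the skew-t):
# the extra shape parameter must earn its keep under both penalized-likelihood
# criteria and out-of-sample likelihood to argue it is not over-fitting.
for name, dist in [("normal", stats.norm), ("skew-normal", stats.skewnorm)]:
    aic, bic = information_criteria(dist, returns)
    cv = cv_loglik(dist, returns)
    print(f"{name:12s} AIC={aic:8.1f} BIC={bic:8.1f} CV-loglik={cv:8.1f}")
```

On data with genuine skew the heavier parameterization typically wins on all three measures; on symmetric data the BIC penalty and the held-out likelihood tend to favor the normal, which is the over-fitting signal sameersoi is asking about.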
