Hidden Markov Models – Model Description Part 1 of 4


This post develops a general framework for classification tasks using hidden Markov models (HMMs). The tutorial series will cover how to build and train hidden Markov models in R. First the maths will be explained, then an example in R will be provided, and finally an application to financial data will be explored.

General Pattern Recognition Framework

A set of features \mathbf{o} is derived from a data set \mathbf{d}, and a class \omega is identified by finding the most likely class given the data \mathbf{o}:

\hat{\omega}=argmax_{\omega}P(\omega|\mathbf{o})

However, P(\omega|\mathbf{o}) is unknown, so Bayes’ rule must be used:

\hat{\omega}=argmax_{\omega}\frac{P(\mathbf{o}|\omega)P(\omega)}{P(\mathbf{o})}=argmax_{\omega}P(\mathbf{o}|\omega)P(\omega)

Since the maximisation does not depend upon P(\mathbf{o}), we can ignore it. The terms P(\mathbf{o}|\omega) and P(\omega) are the likelihood of the data given the class and the prior probability of the class respectively; both terms are defined by a model. The likelihood P(\mathbf{o}|\omega) will be described by a hidden Markov model (HMM), and each class will have its own HMM.
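As a toy illustration, the decision rule itself is only a few lines of R. This is just a sketch: the class names, log-likelihoods and priors below are made-up placeholders, not the output of a trained model.

# Pick the class with the highest likelihood times prior (working in logs).
# The values below are illustrative placeholders, not fitted quantities.
logLikelihood <- c(trending = -152.3, meanReverting = -148.7)  # log P(o | class)
prior         <- c(trending = 0.5,    meanReverting = 0.5)     # P(class)

logScore <- logLikelihood + log(prior)   # maximising this maximises P(class | o)
classHat <- names(which.max(logScore))
print(classHat)                          # the chosen class \hat{\omega}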

The Task at Hand

First we need to generate a set of features \mathbf{o} from the raw data \mathbf{d}. I will skip this step for now because it is specific to the application of your hidden Markov model; for example, in finance \mathbf{d} may be various stock prices and \mathbf{o} could be a set of technical indicators / volatility calculations applied to the data \mathbf{d}. HMMs are popular in speech recognition, where typically \mathbf{o} is a vector describing the characteristics of the frequency spectrum of the speech.

Secondly the feature vector \mathbf{o} must be assigned a class by the HMM. This is done via maximum likelihood: the HMM is a generative model, so we choose the class that is most likely to have generated the feature vector \mathbf{o}.
In finance the class might be a market regime (trending/mean reverting); in speech recognition the class is a word.

Example HMM Specification

[Figure: example hidden Markov model]

N The number of states in the HMM

a_{ij} The probability of transitioning from state i to state j

b_{j}(\mathbf{o}) The probability of generating feature vector \mathbf{o} upon entering state j (provided j is not the entry or exit state)

The HMM \lambda may be written as \lambda=[N,a_{ij},b_{j}]

\mathbf{O}=[\mathbf{o}_{1},\mathbf{o}_{2},\ldots,\mathbf{o}_{T}] the sequence of observed feature vectors

X=[x_{1},x_{2},\ldots,x_{T}] the specified state sequence

The joint probability is the probability of jumping from one state to the next multiplied by the probability of generating the feature vector in that state:

p(\mathbf{O},X|\lambda)=a_{x_{0},x_{1}}\prod_{t=1}^{T}b_{x_{t}}(\mathbf{o_{t}})a_{x_{t},x_{t+1}}

where x_{0} is always the entry state 1, and x_{T+1} is always the exit state N.
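To make this concrete, here is a minimal R sketch that evaluates p(\mathbf{O},X|\lambda) for a toy model matching the specification above (state 1 is the entry state, state N the exit state). All the numbers (transition probabilities, Gaussian output parameters, observations and the state sequence) are made up purely for illustration.

# lambda = [N, a_ij, b_j] for a toy HMM: state 1 is the entry state and
# state N = 4 is the exit state; only states 2 and 3 emit observations.
lambda <- list(
  N = 4,
  A = matrix(c(0.0, 0.6, 0.4, 0.0,    # a_ij: row = from state i, column = to state j
               0.0, 0.7, 0.2, 0.1,
               0.0, 0.3, 0.5, 0.2,
               0.0, 0.0, 0.0, 0.0), nrow = 4, byrow = TRUE),
  mu    = c(NA, 0, 3, NA),            # Gaussian output means (univariate case for simplicity)
  sigma = c(NA, 1, 1, NA)             # Gaussian output standard deviations
)

b <- function(j, o) dnorm(o, mean = lambda$mu[j], sd = lambda$sigma[j])   # b_j(o)

jointProb <- function(O, X) {
  p <- lambda$A[1, X[1]]                                    # a_{x0,x1}: leave the entry state
  for (t in seq_along(O)) {
    nextState <- if (t < length(O)) X[t + 1] else lambda$N  # x_{T+1} is the exit state N
    p <- p * b(X[t], O[t]) * lambda$A[X[t], nextState]
  }
  p
}

O <- c(0.2, 2.8, 3.1)    # observed features o_1, ..., o_T
X <- c(2, 3, 3)          # a specified state sequence x_1, ..., x_T
jointProb(O, X)          # p(O, X | lambda)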

Likelihood Calculation

In the above joint probability calculation we have assumed a state sequence X. However, X is a latent variable: we do not know it, it is hidden (hence the name HIDDEN Markov model)! If we sum over all possible state sequences we can marginalise it out.

p(\mathbf{O}|\lambda)=\sum_{X}p(\mathbf{O},X|\lambda)

This can be problematic because of the sheer number of possible state sequences (especially in a real-time application). Luckily, algorithms exist to perform the calculation efficiently without explicitly exploring every state sequence; one such algorithm is the forward algorithm.
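Continuing the toy model above, the marginalisation can be done by brute force when T is tiny, which also shows why this quickly becomes infeasible: the number of sequences grows exponentially with T. This is only a sketch; the forward algorithm in the next post computes the same quantity efficiently.

# Enumerate every possible emitting-state sequence and sum the joint
# probabilities (reuses lambda, b(), O and jointProb() from the sketch above).
allX <- as.matrix(expand.grid(rep(list(2:3), length(O))))   # 2^T sequences here
pO   <- sum(apply(allX, 1, function(X) jointProb(O, X)))
pO                                                          # p(O | lambda)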

What is b_{j}(\mathbf{o})?

This is the output distribution for a given state j. The distribution can be anything you like; however, it should hopefully match the distribution of the data at state j, and it must be mathematically tractable. The most natural choice at this stage is to assume \mathbf{o} can be described by a multivariate Gaussian. As a word of caution, if the elements of your feature vector are highly correlated then the covariance matrix \Sigma has a lot of parameters to estimate; see if you can decorrelate the features and collapse \Sigma to a diagonal matrix.

E.g. b_{j}(\mathbf{o})=N(\mathbf{o};\mu_{j},\Sigma_{j})
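In R, one way to evaluate a multivariate Gaussian output density is via the mvtnorm package (a suggestion only, not necessarily what the rest of the series will use); the mean vector and covariance below are placeholder values.

# Evaluate b_j(o) for a single feature vector o under a multivariate Gaussian.
library(mvtnorm)

mu_j    <- c(0.1, 0.0)            # placeholder mean vector for state j
Sigma_j <- diag(c(0.5, 2.0))      # placeholder diagonal covariance matrix

o  <- c(0.3, -0.4)                # a feature vector
bj <- dmvnorm(o, mean = mu_j, sigma = Sigma_j)
print(bj)                         # b_j(o)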

How to train b_{j}(\mathbf{o}) / Viterbi Parameter Estimation

We already know how to fit a normal distribution: the MLE for \mu is the sample mean, and for \Sigma the sample covariance of the feature vectors. However, we must only calculate the mean and covariance over the feature vectors that came from state j; this is known as Viterbi segmentation. Viterbi segmentation means there is a hard assignment between each feature vector and the state that generated it. An alternative method, called Baum-Welch, probabilistically assigns feature vectors to multiple states.

Suppose state j generated the observations from time t_{j} up to t_{j+1}-1:

\hat{\mu}_{j}=\frac{1}{t_{j+1}-t_{j}}\sum_{t=t_{j}}^{t_{j+1}-1}\mathbf{o_{t}}

\hat{\mathbf{\Sigma}}_{j}=\frac{1}{t_{j+1}-t_{j}}\sum_{t=t_{j}}^{t_{j+1}-1}(\mathbf{o}_{t}-\hat{\mu}_{j})(\mathbf{o}_{t}-\hat{\mu}_{j})^{T}
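Given a hard assignment of observations to states, these estimates are just the per-state sample mean and (maximum likelihood) covariance. A minimal R sketch, using simulated data and a made-up hard assignment in place of a real Viterbi segmentation:

set.seed(1)
O     <- matrix(rnorm(200), ncol = 2)           # 100 feature vectors o_t (2-dimensional)
state <- sample(2:3, nrow(O), replace = TRUE)   # hard assignment of each o_t to a state

j       <- 2                                    # estimate the parameters of state j
O_j     <- O[state == j, , drop = FALSE]        # observations assigned to state j
mu_j    <- colMeans(O_j)                                  # hat mu_j
Sigma_j <- crossprod(sweep(O_j, 2, mu_j)) / nrow(O_j)     # hat Sigma_j (MLE, divides by the count)
print(mu_j)
print(Sigma_j)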

It is not known in advance which state generated which observation vector; fortunately there is an algorithm, the Viterbi algorithm, to approximately solve this problem.

[Figure: HMM training outline]

The forward algorithm for efficient calculation of p(\mathbf{O}|\lambda) and the Viterbi algorithm will be explored in my next post.
