In the previous post the Hidden Markov Model was defined; however, efficient algorithms are needed to calculate some of the probabilities, i.e. to perform the marginalisation over hidden state sequences. Two algorithms that can be used are the forward algorithm and the Viterbi algorithm.
The forward algorithm calculates the likelihood of the data given the model by summing over all possible state sequences.
The Viterbi algorithm calculates the likelihood of the data given the model using only the single most likely state sequence.
The Forward Algorithm
The forward algorithm allows for efficient calculation of the likelihood function $p(X \mid \lambda)$, where $X = x_1, \ldots, x_T$ is the sequence of observed feature vectors and $\lambda$ denotes the model parameters.
The forward variable $\alpha_t(j)$ is the likelihood of the HMM producing all the observations up to time $t$ and occupying state $j$ at time $t$; it is defined as:

$\alpha_t(j) = p(x_1, \ldots, x_t, s(t) = j \mid \lambda)$
It is calculated recursively, by taking the forward variable for time $t-1$ in state $i$ and then calculating the probability of moving to state $j$ at time $t$:

$\alpha_t(j) = \left[ \sum_{i=1}^{N} \alpha_{t-1}(i)\, a_{ij} \right] b_j(x_t)$

Where $a_{ij}$ is the probability of a jump from state $i$ to state $j$, and $b_j(x_t)$ is the probability of generating feature vector $x_t$ from state $j$.
Forward Algorithm Initialisation

$\alpha_1(j) = \pi_j\, b_j(x_1)$ for $1 \le j \le N$, where $\pi_j$ is the probability of starting in state $j$. The recursion is then applied for $2 \le t \le T$ and for $1 \le j \le N$, and the likelihood is obtained from the termination step $p(X \mid \lambda) = \sum_{j=1}^{N} \alpha_T(j)$.
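The initialisation, recursion and termination steps can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a discrete-observation HMM; the names `A`, `B`, `pi` and `forward` are my own, not from the post.

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward algorithm: p(X | lambda), summed over all state sequences.

    A[i, j] - probability of a jump from state i to state j
    B[j, k] - probability of generating symbol k from state j
    pi[j]   - probability of starting in state j
    obs     - list of observed symbol indices x_1..x_T
    """
    N, T = A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # alpha_1(j) = pi_j b_j(x_1)
    for t in range(1, T):                             # recursion for 2 <= t <= T
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha[-1].sum()                            # termination: sum_j alpha_T(j)
```

Each step of the recursion costs $O(N^2)$, so the whole likelihood costs $O(N^2 T)$ rather than the $O(N^T)$ of enumerating every state sequence directly.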
The Viterbi Algorithm
The forward algorithm calculates $p(X \mid \lambda)$ by summing over all state sequences. It is sometimes preferable to approximate this quantity, which uses all state sequences, with one which uses only the single most likely state sequence. This is known as the Viterbi algorithm; the algorithm finds the most likely state sequence.
The probability of the best partial path of length $t$ through the HMM ending at state $j$ is defined as: $\phi_t(j) = \max_{s_1, \ldots, s_{t-1}} p(x_1, \ldots, x_t, s(t) = j \mid \lambda)$. Where $s_1, \ldots, s_{t-1}$ is the best partial path / state sequence.
As with the forward variable, $\phi_t(j)$ can be calculated recursively:

$\phi_t(j) = \max_i \left[ \phi_{t-1}(i)\, a_{ij} \right] b_j(x_t)$
Viterbi Algorithm Initialisation

$\phi_1(j) = \pi_j\, b_j(x_1)$ for $1 \le j \le N$; the recursion is then applied for $2 \le t \le T$ and for $1 \le j \le N$.
At each step of the recursion the preceding node is stored: $\psi_t(j) = \arg\max_i \left[ \phi_{t-1}(i)\, a_{ij} \right]$.
The most likely path is found by following the preceding node information stored in $\psi$ backwards from the best final state: $s^*_T = \arg\max_j \phi_T(j)$, then $s^*_t = \psi_{t+1}(s^*_{t+1})$ for $t = T-1, \ldots, 1$.
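The recursion, the stored preceding nodes and the backwards trace can be put together as follows. This is a minimal sketch for a discrete-observation HMM; the names `A`, `B`, `pi` and `viterbi` are illustrative, not from the post.

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most likely state sequence and its probability.

    A[i, j] - probability of a jump from state i to state j
    B[j, k] - probability of generating symbol k from state j
    pi[j]   - probability of starting in state j
    """
    N, T = A.shape[0], len(obs)
    phi = np.zeros((T, N))                # phi_t(j): best partial-path probability
    psi = np.zeros((T, N), dtype=int)     # psi_t(j): preceding node
    phi[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = phi[t - 1][:, None] * A  # phi_{t-1}(i) * a_ij for every i, j
        psi[t] = scores.argmax(axis=0)    # store the preceding node
        phi[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(phi[-1].argmax())]        # best final state
    for t in range(T - 1, 0, -1):         # follow preceding nodes backwards
        path.append(int(psi[t][path[-1]]))
    path.reverse()
    return path, phi[-1].max()
```

The only changes from the forward algorithm are the sum being replaced by a max and the extra bookkeeping in `psi` needed to recover the winning path.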
The direct calculation of $\phi_t(j)$ will most likely cause arithmetic underflow errors: the probabilities can become so small that the computer is unable to represent them correctly. The log likelihood should be calculated instead, e.g.

$\log \phi_t(j) = \max_i \left[ \log \phi_{t-1}(i) + \log a_{ij} \right] + \log b_j(x_t)$
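Working in the log domain replaces the products with sums, so the quantities stay in a representable range however long the observation sequence is. A sketch, under the same illustrative naming assumptions, with the log-domain parameters passed in directly:

```python
import numpy as np

def viterbi_log(logA, logB, logpi, obs):
    """Viterbi recursion carried out entirely in log probabilities."""
    logphi = logpi + logB[:, obs[0]]
    for t in range(1, len(obs)):
        # log phi_t(j) = max_i [ log phi_{t-1}(i) + log a_ij ] + log b_j(x_t)
        logphi = (logphi[:, None] + logA).max(axis=0) + logB[:, obs[t]]
    return logphi.max()                   # log probability of the best path
```

The forward algorithm is less convenient to rewrite this way because its sum does not factor through the logarithm; there it is common to use scaling factors or log-sum-exp instead.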