The Mathematical Garden

Hidden Markov Models

We introduce a simple HMM through an example. Consider the process of choosing a four-faced die (a tetrahedron), throwing it, and recording the number of dots obtained. Suppose we have two dice, A and B, each with four faces labeled 1, 2, 3 and 4. Die A is fair and Die B is biased. The probability distributions of the dots obtained by throwing Dice A and B are given in the table below.

Die     1      2      3      4
 A     1/4    1/4    1/4    1/4
 B     1/6    1/6    1/3    1/3

Table 1: Probability of each number of dots for Die A and Die B.

Each time a die is to be thrown, Die A is chosen with probability a and Die B is chosen with probability (1 - a). This choice is hidden because we do not know which die is chosen. The value of a is to be determined, and it characterizes how fair the game is: the closer a is to 1, the "fairer" the game. The chosen die is then thrown and the number of dots obtained (which is observable) is recorded. The following is a possible realization of the process:

A, 1, A, 2, B, 1, B, 2, A, 1, B, 2.
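
To make the mechanics concrete, here is a minimal simulation sketch in Python (not part of the original article; the value a = 0.7 and the function name simulate_hmm are chosen only for illustration):

    import random

    # Dot distributions from Table 1.
    P_A = {1: 1/4, 2: 1/4, 3: 1/4, 4: 1/4}   # Die A (fair)
    P_B = {1: 1/6, 2: 1/6, 3: 1/3, 4: 1/3}   # Die B (biased)

    def simulate_hmm(a, n_throws, seed=None):
        """Pick Die A with probability a (Die B otherwise), throw it,
        and record (hidden die, observed dots) for n_throws throws."""
        rng = random.Random(seed)
        realization = []
        for _ in range(n_throws):
            die = "A" if rng.random() < a else "B"
            dist = P_A if die == "A" else P_B
            dots = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
            realization.append((die, dots))
        return realization

    # Example: six throws with a = 0.7, e.g. [('A', 1), ('B', 3), ...]
    print(simulate_hmm(0.7, 6, seed=1))

In practice only the second component of each pair (the dots) would be observed; the die labels are printed here only to mirror the realization above.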

We note that the whole process of the HMM can be modeled by a classical Markov chain with six states A, B, 1, 2, 3, 4 and transition probability matrix P given by

P =
\begin{array}{c|cccccc}
  & A & B & 1   & 2   & 3   & 4   \\ \hline
A & 0 & 0 & 1/4 & 1/4 & 1/4 & 1/4 \\
B & 0 & 0 & 1/6 & 1/6 & 1/3 & 1/3 \\
1 & a & 1-a & 0 & 0 & 0 & 0 \\
2 & a & 1-a & 0 & 0 & 0 & 0 \\
3 & a & 1-a & 0 & 0 & 0 & 0 \\
4 & a & 1-a & 0 & 0 & 0 & 0
\end{array}
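
As a quick check of this construction, the following sketch (again not from the original text; it assumes the state ordering A, B, 1, 2, 3, 4 and an illustrative value a = 0.7) builds P numerically and verifies that every row is a probability distribution:

    import numpy as np

    STATES = ["A", "B", 1, 2, 3, 4]   # row/column ordering used below

    def transition_matrix(a):
        """6 x 6 transition matrix of the equivalent Markov chain: each die
        state moves to a dot state according to Table 1, and each dot state
        moves back to Die A (probability a) or Die B (probability 1 - a)."""
        P = np.zeros((6, 6))
        P[0, 2:] = [1/4, 1/4, 1/4, 1/4]   # row A
        P[1, 2:] = [1/6, 1/6, 1/3, 1/3]   # row B
        P[2:, 0] = a                      # dot states -> A
        P[2:, 1] = 1 - a                  # dot states -> B
        return P

    P = transition_matrix(0.7)
    assert np.allclose(P.sum(axis=1), 1.0)   # every row sums to one

Simulating the chain from any starting state then reproduces the alternating die/dots realizations described above.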

It is clear that each state of this Markov chain has period two, since the chain alternates between the die states {A, B} and the dot states {1, 2, 3, 4}. If the value of a is given, one can simulate the whole process easily because it is just a simple Markov chain. Very often, however, the value of a is unknown and we are only given a sequence of observable states. An important question is therefore: how can we estimate a from a given sequence of observable states?
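
One simple way to attack this question (an illustrative sketch, not the only or necessarily the best estimator): under the model, the probability that a throw shows 1 or 2 dots is a*(1/2) + (1 - a)*(1/3) = 1/3 + a/6, so matching this to the observed proportion f of throws showing 1 or 2 gives the moment estimate a ≈ 6(f - 1/3). A small Python sketch (the function name and the test value a = 0.7 are assumptions for illustration):

    import random

    def estimate_a(observed_dots):
        """Moment-matching estimate of a from the observable dots only:
        P(dots in {1, 2}) = a*(1/2) + (1 - a)*(1/3) = 1/3 + a/6, so
        a_hat = 6*(f - 1/3), where f is the observed proportion of 1s and 2s.
        Clip to [0, 1] because sampling noise can push the raw value outside."""
        f = sum(1 for d in observed_dots if d in (1, 2)) / len(observed_dots)
        return min(max(6 * (f - 1/3), 0.0), 1.0)

    # Quick check against data simulated with a = 0.7.
    rng = random.Random(0)
    dots = [rng.choices([1, 2, 3, 4],
                        weights=[1/4] * 4 if rng.random() < 0.7
                        else [1/6, 1/6, 1/3, 1/3])[0]
            for _ in range(10_000)]
    print(estimate_a(dots))   # typically close to 0.7

Estimators that use the whole observed sequence (for example, likelihood-based methods) can do better, but even this crude summary statistic shows that a can be recovered from the observable states alone.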

Department of Mathematics, HKU, 2010