MARKOV PROCESSES
Suppose a system has a finite number of states and that the system undergoes changes from state to state, with a probability for each distinct state transition that depends solely upon the current state. Then the process of change is termed a Markov chain or Markov process.
Definition: If a system featuring "n" distinct states undergoes state changes which are strictly Markov in nature, then the probability that its current state is $s_i$ given that its previous state was $s_j$ is the transition probability $p_{ij}$. The $n \times n$ matrix $P$ whose $ij$th element is $p_{ij}$ is termed the transition matrix of the Markov chain.
Each column vector of the transition matrix is thus associated with the preceding state. Since there are a total of "n" distinct transitions from this state, the sum of the components of each column of $P$ must add to "1", because it is a certainty that the new state will be among the "n" distinct states.
Definition: The state vector for an observation of a Markov chain featuring "n" distinct states is a column vector, $\mathbf{x}$, whose $k$th component, $x_k$, is the probability that the system is in state $s_k$ at that time.
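As a quick illustration of the two definitions, one observation-to-observation update of the state vector is just a matrix-vector product. The 2-state matrix below is hypothetical, chosen only for this sketch:

```python
# One step of a Markov chain is x_next = P x, where P is
# column-stochastic (each column sums to 1).
# Hypothetical 2-state transition matrix -- not one from these notes.
P = [[0.9, 0.5],
     [0.1, 0.5]]
x = [1.0, 0.0]  # the system starts in state 1 with certainty

def step(P, x):
    """Apply the transition matrix: columns index the previous state."""
    n = len(x)
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

x1 = step(P, x)
print(x1)  # a new probability vector; its components still sum to 1
```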
Example # 1: Drexel finds that 70% of its alumni who contribute to the annual fund will also contribute the next year, and that 20% of its alumni who do not contribute one year will contribute the next year. Determine the probability that a newly graduated student will be a contributor to the annual fund 10 years after she graduates.

Taking state 1 to be "contributor" and state 2 to be "non-contributor", the given percentages fill in the columns of the transition matrix:

$P = \begin{bmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{bmatrix}$
Now that we have the transition matrix, we need a state vector. In fact, we need a particular state vector, namely the initial state vector. Our newly minted graduate became an alumna immediately upon graduation. While she was a student, she was not an alumna and thus did not contribute to the annual fund. Her initial state vector reflects that: $\mathbf{x}_0 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.
Computing $\mathbf{x}_{10} = P^{10}\mathbf{x}_0 \approx \begin{bmatrix} 0.40 \\ 0.60 \end{bmatrix}$: therefore, 10 years after graduation, only about 40% of those celebrating their 10th reunion are likely to be contributors.
What happens after 20 years?
We obtain the same result! In other words, the state vector converged to a steady-state vector. In this case that steady-state vector is $\mathbf{q} = \begin{bmatrix} 0.4 \\ 0.6 \end{bmatrix}$.
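The convergence claimed above is easy to check numerically. A minimal sketch using the Example # 1 matrix:

```python
# Iterate x_{k+1} = P x_k for the alumni-contribution chain of Example 1.
# Column 1 = contributed last year, column 2 = did not contribute.
P = [[0.7, 0.2],
     [0.3, 0.8]]
x = [0.0, 1.0]  # the new graduate has never contributed

def step(P, x):
    n = len(x)
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

for year in range(1, 21):
    x = step(P, x)
    if year in (10, 20):
        # both lines print a vector very close to (0.4, 0.6)
        print(year, [round(c, 4) for c in x])
```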
Theorem: The steady-state vector $\mathbf{q}$ of the transition matrix "P" is the unique probability vector that satisfies this equation: $P\mathbf{q} = \mathbf{q}$.
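The theorem can also be used directly instead of iterating. For a two-state chain, $P\mathbf{q} = \mathbf{q}$ together with $q_1 + q_2 = 1$ reduces to one linear equation; a sketch specialized to two states, using the Example # 1 matrix:

```python
# Solve P q = q with q1 + q2 = 1 for a 2-state column-stochastic P.
# The first row of (P - I) q = 0 gives (p11 - 1) q1 + p12 q2 = 0,
# i.e. -p21 q1 + p12 (1 - q1) = 0, so q1 = p12 / (p12 + p21).
P = [[0.7, 0.2],
     [0.3, 0.8]]

def steady_state_2x2(P):
    """Closed-form steady state of a 2-state chain."""
    p12, p21 = P[0][1], P[1][0]
    q1 = p12 / (p12 + p21)
    return [q1, 1.0 - q1]

q = steady_state_2x2(P)
print(q)  # ≈ (0.4, 0.6), matching the iterated result
```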
That is true because, irrespective of the starting state, equilibrium must eventually be achieved. The transient, or sorting-out, phase takes a different number of iterations for different transition matrices, but eventually the state vector's components are precisely what the transition matrix calls for. Subsequent applications of "P" then leave the matured state vector unchanged.
Theorem: State transition matrices all have $\lambda = 1$ as an eigenvalue.
We will just consider a 2×2 matrix here, but the result can be extended to an n×n matrix.
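A sketch of the 2×2 argument, writing the columns of $P$ as $(a,\,1-a)$ and $(b,\,1-b)$ so that each column sums to 1 (this parametrization is mine, not notation from the notes):

```latex
P = \begin{bmatrix} a & b \\ 1-a & 1-b \end{bmatrix},
\qquad
P - I = \begin{bmatrix} a-1 & b \\ 1-a & -b \end{bmatrix}.
% Each column of P sums to 1, so each column of P - I sums to 0;
% equivalently, row 1 + row 2 = 0, a linear dependence among the rows.
\det(P - I) = (a-1)(-b) - b(1-a) = b(1-a) - b(1-a) = 0.
```

Since $\det(P - I) = 0$, $\lambda = 1$ satisfies the characteristic equation and is an eigenvalue of $P$.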
That proves the theorem for the 2×2 case. Now, let's find the other eigenvalue. Writing the columns of $P$ as $(a,\,1-a)$ and $(b,\,1-b)$, the two eigenvalues sum to the trace, $a + (1-b)$; since one eigenvalue is $\lambda_1 = 1$, the other is $\lambda_2 = a - b$. Since $0 \le a \le 1$ and $0 \le b \le 1$, $|\lambda_2| = |a - b| \le 1$. Therefore, $\lambda_1 = 1$ is the dominant eigenvalue. This fact will manifest itself when we demonstrate that the corresponding eigenvector is indeed the steady-state vector, $\mathbf{q}$. Let's find that corresponding eigenvector.
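The computation can be sketched as follows, writing the columns of $P$ as $(a,\,1-a)$ and $(b,\,1-b)$ (my parametrization, with each column summing to 1):

```latex
(P - I)\mathbf{v} = \mathbf{0}:
\qquad
\begin{bmatrix} a-1 & b \\ 1-a & -b \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\;\Longrightarrow\;
(1-a)\,v_1 = b\,v_2
\;\Longrightarrow\;
\mathbf{v} = t \begin{bmatrix} b \\ 1-a \end{bmatrix}.
```

The two rows are dependent, so one equation determines the eigenvector up to the scale factor $t$.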
But the components of the eigenvector must add to "1". Otherwise, it cannot be a state vector. Scaling so the components sum to "1" (with the columns of $P$ written as $(a,\,1-a)$ and $(b,\,1-b)$) gives $\mathbf{q} = \dfrac{1}{1-a+b}\begin{bmatrix} b \\ 1-a \end{bmatrix}$.
Example # 2: Show that the steady-state vector obtained in Example # 1 is the eigenvector of the transition matrix corresponding to the eigenvalue $\lambda = 1$.
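A numeric check of Example # 2 (a sketch, reusing the Example # 1 matrix): applying $P$ to $\mathbf{q} = (0.4,\,0.6)$ should return $\mathbf{q}$ unchanged, i.e. $P\mathbf{q} = 1 \cdot \mathbf{q}$.

```python
# Verify that q = (0.4, 0.6) is an eigenvector of P with eigenvalue 1.
P = [[0.7, 0.2],
     [0.3, 0.8]]
q = [0.4, 0.6]

Pq = [sum(P[i][j] * q[j] for j in range(2)) for i in range(2)]
print(Pq)  # ≈ (0.4, 0.6): P leaves q fixed, so P q = 1 * q
```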
If the steady-state vector is the eigenvector corresponding to $\lambda = 1$, and the steady-state vector can also be found by applying "P" to any initial state vector a sufficiently large number of times, "m", then $P^m$ must approach a specialized matrix.
Example # 3: Find $P^N$ for the matrix "P", where "N" is a very large positive integer.
For both equations above to be true for all values of $x_1$, each column of $P^N$ must approach the steady-state vector $\mathbf{q}$. Then we obtain these results.
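Assuming Example # 3 refers to the Example # 1 matrix (an assumption on my part; the matrix itself did not survive extraction), the limiting form of $P^N$ can be checked by repeated multiplication: every column approaches the steady-state vector.

```python
# Compute P^N for large N by repeated matrix multiplication.
P = [[0.7, 0.2],
     [0.3, 0.8]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

PN = P
for _ in range(50):  # P^51, far past the transient phase
    PN = matmul(PN, P)

print([[round(e, 6) for e in row] for row in PN])
# each column ≈ the steady-state vector (0.4, 0.6)
```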
Example # 4: A rental car agency has three locations. Cars can be picked up at any one of the three locations and returned to any location, including the one where the car was picked up. The matrix, "P", below is the transition matrix of this Markov process. Determine the eigenvalues and eigenvectors; find the steady-state vector; and express the state vector in terms of the eigenvectors of "P".
Here the columns correspond to the pick-up locations and the rows correspond to the return locations. For example, entry $p_{32}$ is the probability that a car rented at Location # 2 will be returned to Location # 3.
Note that rows 1 and 2 are identical in "P". Therefore, "P" is singular and $\lambda = 0$ is an eigenvalue. Also, the sum of columns 1 and 2 of $P - \lambda I$ is twice column 3 at the second eigenvalue, making $P - \lambda I$ singular there as well.
Finally, Markov processes always have $\lambda = 1$ as an eigenvalue. The corresponding eigenvectors are found in the usual way.
After a sufficient number of iterations, the state vector will essentially equal its steady-state vector. The eigenvector associated with the eigenvalue $\lambda = 1$, scaled so that its components sum to "1", is the steady-state vector.
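Since the notes' actual 3×3 matrix did not survive extraction, here is a sketch with a hypothetical transition matrix chosen to be consistent with the stated properties (identical rows 1 and 2, and the column relation at the second eigenvalue); the steady-state vector is found by iterating until the transient dies out.

```python
# Hypothetical 3-location transition matrix -- NOT the one from the notes.
# Columns = pick-up location, rows = return location; columns sum to 1.
# Rows 1 and 2 are identical, so det(P) = 0 and lambda = 0 is an eigenvalue.
P = [[0.3, 0.3, 0.4],
     [0.3, 0.3, 0.4],
     [0.4, 0.4, 0.2]]

def step(P, x):
    n = len(x)
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

x = [1.0, 0.0, 0.0]   # all cars start at Location 1
for _ in range(100):  # iterate well past the transient phase
    x = step(P, x)

print([round(c, 6) for c in x])
# ≈ (1/3, 1/3, 1/3): the steady-state vector for this particular matrix
```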