MATH 126A: Stochastic Processes - Spring 2023
Markov Chains - Exercises
Exercise 1. Consider the two-state Markov chain (X0, X1, X2, ...) with Xi ∈ {1, 2}, initial distribution
Φ = (Φ1, Φ2) and transition matrix

P = ( 1 − p    p
       q    1 − q ),    p ∈ (0, 1), q ∈ (0, 1).
We will show using 3 different methods the convergence of the distribution of Xn to an invariant measure.
First method: the big theorems
(a) Compute the invariant probability distribution π and discuss briefly why a theorem shown in class applies to give the convergence of the probability distribution of Xn to the invariant distribution.
Solution: The transition matrix is

P = ( 1 − p    p
       q    1 − q ).

We are looking for a row vector π = (π1, π2) such that πP = π, meaning:

(1 − p)π1 + qπ2 = π1
pπ1 + (1 − q)π2 = π2

Both equations lead to the same relationship pπ1 = qπ2, or π1 = qπ2/p, and using the normalization condition π1 + π2 = 1 we get ((p + q)/p)π2 = 1, so that

π = ( q/(p + q), p/(p + q) ).

Because we assume that p and q are neither 0 nor 1, all the coefficients of the matrix are strictly positive. The Perron–Frobenius theorem thus ensures that for any initial condition Φ we have ΦP^n → π as n → ∞.
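As a numerical sanity check, here is a minimal sketch (assuming illustrative values p = 0.3 and q = 0.5, which are not part of the exercise) verifying both that πP = π and that an arbitrary initial distribution converges to π:

```python
# Illustrative values (assumption: any p, q in (0, 1) would do).
p, q = 0.3, 0.5

# Transition matrix of the two-state chain.
P = [[1 - p, p],
     [q, 1 - q]]

def step(dist, P):
    """One step of the chain: returns the row vector dist * P."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

# Invariant distribution derived above: pi = (q, p) / (p + q).
pi = [q / (p + q), p / (p + q)]

# pi is indeed invariant: pi P = pi.
assert all(abs(a - b) < 1e-12 for a, b in zip(step(pi, P), pi))

# An arbitrary initial distribution converges to pi under iteration.
dist = [0.9, 0.1]
for _ in range(100):
    dist = step(dist, P)
print(all(abs(a - b) < 1e-9 for a, b in zip(dist, pi)))  # True
```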
Second Method: the recursive sequences
(b) Using the Markov property, show that the probability yn = qn(1, 1) = P[Xn = 1|X0 = 1] satisfies the recursion relationship:

yn+1 = (1 − p)yn + q(1 − yn)

with y1 = 1 − p.
Solution:
yn+1 = P[Xn+1 = 1|X0 = 1]
= P[Xn+1 = 1|Xn = 1, X0 = 1] P[Xn = 1|X0 = 1] + P[Xn+1 = 1|Xn = 2, X0 = 1] P[Xn = 2|X0 = 1]

by the law of total probability. The Markov property ensures that P[Xn+1 = 1|Xn = 1, X0 = 1] = P[Xn+1 = 1|Xn = 1] (conditional on the present, the past does not matter), and time-homogeneity ensures that P[Xn+1 = 1|Xn = 1] = 1 − p. Similarly, P[Xn+1 = 1|Xn = 2, X0 = 1] = q. Finally, noting that P[Xn = 2|X0 = 1] = 1 − yn, we conclude that:

yn+1 = (1 − p)yn + q(1 − yn).
For the initial condition we note that y0 = 1. Also, we can certainly write that y1 = P[X1 = 1|X0 = 1] = 1 − p per the transition probability.
(c) Compute the solution of this recursion relationship (look for a particular solution of type xn = C ∈ R and for a homogeneous solution of type a^n for some a).
Solution: Expanding the recursion relationship we get:
yn+1 = (1 − p − q)yn + q
which is a linear recursion relationship with an inhomogeneity. Let us first look for a constant particular solution, i.e. of the form yn = C. We have

C = (1 − p − q)C + q

whose unique solution is C = q/(p + q). Homogeneous solutions of the equation are of the form xn = D(1 − p − q)^n for some real value D needed to match the initial condition. The general solution is thus of the form:

yn = D(1 − p − q)^n + q/(p + q).
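The closed form can be checked against a direct iteration of the recursion. The sketch below assumes illustrative values of p and q, and fixes D by the initial condition y0 = 1, which gives D = p/(p + q):

```python
# Illustrative values (assumption: any p, q in (0, 1) would do).
p, q = 0.3, 0.5
lam = 1 - p - q

# Closed form y_n = D * lam**n + q/(p + q), with D fixed by y_0 = 1.
D = 1 - q / (p + q)   # = p / (p + q)
closed = lambda n: D * lam**n + q / (p + q)

# Direct iteration of y_{n+1} = (1 - p) y_n + q (1 - y_n), y_0 = 1.
y = 1.0
for n in range(20):
    assert abs(y - closed(n)) < 1e-12   # closed form matches at every step
    y = (1 - p) * y + q * (1 - y)
print("closed form matches the recursion")
```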
(d) Do the same for zn = P[Xn = 1|X0 = 2] and conclude that for any initial condition, the probability distribution of Xn converges to the invariant probability.
Solution:
zn+1 = P[Xn+1 = 1|X0 = 2]
= P[Xn+1 = 1|Xn = 1] P[Xn = 1|X0 = 2] + P[Xn+1 = 1|Xn = 2] P[Xn = 2|X0 = 2]
= (1 − p)zn + q(1 − zn)

which is the same equation as the one for yn but with a different initial condition; either way, the solution is of the form zn = D′(1 − p − q)^n + q/(p + q). Now, with an arbitrary initial condition, we have:
P[Xn = 1] = P[Xn = 1|X0 = 1] P[X0 = 1] + P[Xn = 1|X0 = 2] P[X0 = 2]

and because both P[Xn = 1|X0 = 1] and P[Xn = 1|X0 = 2] converge to q/(p + q), we get

P[Xn = 1] → (q/(p + q)) P[X0 = 1] + (q/(p + q)) P[X0 = 2] = q/(p + q),

which is π1.
A third approach: Linear algebra
(e) Show that the eigenvalues of the transition matrix are 1 and λ = 1 − p − q. Explain why |λ| < 1.

Solution: We know that 1 is an eigenvalue, and the trace is the sum of the eigenvalues, so we have 1 + λ = (1 − p) + (1 − q), thus λ = 1 − p − q. Because 0 < p, q < 1, we have 0 < p + q < 2 and −2 < −(p + q) < 0, so −1 < 1 − p − q < 1 and hence the absolute value is strictly less than 1.
(f) We already know that the left eigenvector associated to 1 is the invariant distribution π
computed above. Show that e2 = (−1, 1) is a left eigenvector associated with λ.

Solution: We have (−1, 1)P = (−1 · (1 − p) + 1 · q, −1 · p + 1 · (1 − q)) = (−1 + p + q, 1 − p − q) = (1 − p − q)(−1, 1).
(g) Show that if the decomposition of the initial probability on these vectors is Φ = c1 π+c2 e2 ,
the normalization condition on probability distributions readily implies that c1 = 1.
Solution: Writing the components of e2 as (e2(1), e2(2)), we get Φ1 + Φ2 = (c1π1 + c2e2(1)) + (c1π2 + c2e2(2)) = c1(π1 + π2) + c2(e2(1) + e2(2)) = c1, because the sum of the components of e2 is zero and π1 + π2 = 1. By normalization of Φ, we get c1 = 1.
(h) Recall why the probability distribution of Xn is
(P[Xn = 1], P[Xn = 2]) = ΦP^n.
Using the decomposition on the eigenvectors above, conclude that this probability converges to the invariant distribution π for any initial condition Φ.
Solution: Because πP = π, we get by induction πP^n = (πP)P^(n−1) = πP^(n−1) = ... = π. Similarly, e2P^n = (e2P)P^(n−1) = λe2P^(n−1), and by immediate induction e2P^n = λ^n e2. Based on this,

ΦP^n = (π + c2e2)P^n = πP^n + c2e2P^n = π + c2λ^n e2,

which converges to π as n → ∞ because |λ| < 1.
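The eigendecomposition argument can be verified numerically. This sketch (again with illustrative p and q) checks that ΦP^n = π + c2 λ^n e2 holds at every step of the chain:

```python
# Illustrative values (assumption: any p, q in (0, 1) would do).
p, q = 0.3, 0.5
lam = 1 - p - q

pi = [q / (p + q), p / (p + q)]   # left eigenvector for eigenvalue 1
e2 = [-1, 1]                      # left eigenvector for eigenvalue lam

# Any probability vector phi decomposes as phi = pi + c2 * e2 (c1 = 1).
phi = [0.9, 0.1]
c2 = phi[1] - pi[1]               # solve phi = pi + c2 * e2 componentwise

# phi P^n should equal pi + lam**n * c2 * e2; compare with iteration.
P = [[1 - p, p], [q, 1 - q]]
dist = phi[:]
for n in range(30):
    predicted = [pi[j] + lam**n * c2 * e2[j] for j in range(2)]
    assert all(abs(dist[j] - predicted[j]) < 1e-12 for j in range(2))
    # advance the chain one step: dist <- dist * P
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
print("eigendecomposition matches the chain")
```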
Exercise 2. Your pet mouse
When you’re not studying, you like to watch your mouse play in the cardboard house you built for it. The house floorplan is shown below. Your mouse switches rooms by taking any door at random, and you track the sequence of rooms visited.
(a) Write the transition probability matrix of rooms visited and plot a diagram. Briefly explain why the room occupied by your mouse forms a time-homogeneous Markov Chain.
Solution: Because the mouse changes rooms at random by taking any door accessible from the room it is currently in, independently of which rooms it visited in the past and independently of the time, we are dealing with a time-homogeneous Markov chain. With the rooms ordered (A, B, C), the transition matrix is

P = (  0    2/3  1/3
      2/3    0   1/3
      1/2   1/2   0  ).

The diagram should just have three circles (A, B, C) with arrows going from any room to any other room (wherever the probabilities of the matrix are non-zero).
(b) Compute the invariant distribution.

Solution: Let π = (πA, πB, πC) be the invariant distribution. The relationship πP = π is telling us that:

(2/3)πB + (1/2)πC = πA
(2/3)πA + (1/2)πC = πB
(1/3)πA + (1/3)πB = πC

which, by substitution, leads to πA = πB and πC = (2/3)πA. The normalization condition πA + πB + πC = πA(1 + 1 + 2/3) = (8/3)πA = 1 implies πA = πB = 3/8 and πC = 1/4 after simplification.
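A quick sketch can confirm this result, assuming the door layout implied by the computation above (two doors between A and B, one between A and C, one between B and C; the floorplan figure itself is an assumption here):

```python
# Transition matrix of the mouse's walk, rooms ordered (A, B, C).
# Assumption: two doors A-B, one door A-C, one door B-C.
P = [[0,   2/3, 1/3],
     [2/3, 0,   1/3],
     [1/2, 1/2, 0  ]]

# Invariant distribution found above.
pi = [3/8, 3/8, 1/4]

# Check pi P = pi and normalization.
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(piP, pi))
assert abs(sum(pi) - 1) < 1e-12
print("pi is invariant")
```

Note that π is proportional to the number of doors in each room (3, 3, 2), as expected for a random walk on a graph.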
Exercise 3. Newspaper piling up:
The Smiths receive the paper every morning and place it on a pile after reading it. Each afternoon, with probability p = 1/3, someone trashes all the papers in the pile. Also, if there are at least 5 papers in the pile, Mr. or Mrs. Smith trashes the whole pile with probability 1. Let Xn be the number of papers in the pile in the evening of day n.
(a) Explain why Xn is a time-homogeneous Markov chain
Solution: The number of papers in the pile on a given day only depends on the number of papers the day before, independently of the earlier history of the pile (Markov property) and independently of the day considered (time-homogeneity), so we are dealing with a time-homogeneous Markov chain.
(b) What is the state space? What is the transition matrix?
Solution: The state space is {0, 1, 2, 3, 4} (or {1, 2, 3, 4, 5}, depending on whether you count the papers before or after the time the pile is possibly trashed). Counting after the possible trashing: from a state k < 4, the morning delivery brings the pile to k + 1 papers, which is then trashed with probability 1/3; from state 4 the pile reaches 5 papers and is trashed with probability 1. The transition matrix is:

P = ( 1/3  2/3   0    0    0
      1/3   0   2/3   0    0
      1/3   0    0   2/3   0
      1/3   0    0    0   2/3
       1    0    0    0    0  ).
(c) What is the invariant measure?
Solution: Denote by π = (π0, π1, π2, π3, π4) the invariant probability. The formula π = πP gives the system of equations:

(1/3)(π0 + π1 + π2 + π3) + π4 = π0
(2/3)π0 = π1
(2/3)π1 = π2
(2/3)π2 = π3
(2/3)π3 = π4

Let's write everything in terms of, e.g., π4. We get π3 = (3/2)π4, and then π2 = (9/4)π4, π1 = (27/8)π4 and π0 = (81/16)π4. Now, using the normalization condition, we get

1 = π4 (81/16 + 27/8 + 9/4 + 3/2 + 1) = (211/16)π4
from which we conclude that the invariant measure is:

π = ( 81/211, 54/211, 36/211, 24/211, 16/211 ).
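Exact arithmetic with fractions confirms this invariant measure; the sketch below rebuilds the transition matrix from the problem statement:

```python
from fractions import Fraction as F

# Transition matrix of the newspaper pile: from state k < 4 the morning
# paper makes k + 1, then the pile is trashed with probability 1/3;
# from state 4 the pile reaches 5 and is trashed with probability 1.
P = [[F(1, 3), F(2, 3), 0,       0,       0      ],
     [F(1, 3), 0,       F(2, 3), 0,       0      ],
     [F(1, 3), 0,       0,       F(2, 3), 0      ],
     [F(1, 3), 0,       0,       0,       F(2, 3)],
     [1,       0,       0,       0,       0      ]]

# Invariant measure found above.
pi = [F(n, 211) for n in (81, 54, 36, 24, 16)]

# Check pi P = pi and normalization, exactly.
piP = [sum(pi[i] * P[i][j] for i in range(5)) for j in range(5)]
assert piP == pi
assert sum(pi) == 1
print("invariant measure checked exactly")
```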
2023-05-12