
January 2022 Exam

ST302

Stochastic processes

1. Consider a discrete-time Markov chain $(X_n)_{n \ge 0}$ with infinite state space $S = \mathbb{Z}$. Let the chain be determined by the transition probabilities

$$p(i, i+1) = p \quad \text{and} \quad p(i, i-1) = q \qquad \text{for all } i \in \mathbb{Z},$$

where $p + q = 1$ with $q < 1/2$. Furthermore, let the starting point $X_0 = i_0 < 0$ be a negative integer and consider the first time that the Markov chain becomes non-negative, namely

$$T_+ = \inf\{n > 0 : X_n \ge 0\}.$$

(a)  [5 Marks] First, argue that $X_n = i_0 + \sum_{k=1}^{n} Y_k$, where the $Y_k$'s are suitable i.i.d. random variables. Next, argue that $X_n \to +\infty$ as $n \to \infty$ (with probability 1) and deduce that the stopping time $T_+$ is finite (with probability 1).

Hint: the strong law of large numbers says that if $(Y_k)_{k \ge 1}$ are i.i.d. random variables with $E[|Y_1|] < \infty$, then

$$P\Big( \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} Y_k = E[Y_1] \Big) = 1.$$

You may use this fact without proof.

(b)  [3 Marks] Show that $X_n - n(p - q)$ is a martingale and use a martingale argument to show that $T_+$ has expectation

$$E[T_+] = \frac{|i_0|}{p - q}.$$

You do not need to worry about the assumptions of any results you apply.
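For intuition, the expectation in (b) can be checked by simulation. The sketch below is illustrative only and not part of the question: the parameter values $p = 0.7$ and $i_0 = -5$ and the function name `estimate_mean_T_plus` are assumptions made for the example.

```python
import random

def estimate_mean_T_plus(p, i0, n_paths=10_000):
    """Monte Carlo estimate of E[T+] for the random walk started at i0 < 0."""
    total_steps = 0
    for _ in range(n_paths):
        x, n = i0, 0
        while x < 0:                               # run until the walk first becomes non-negative
            x += 1 if random.random() < p else -1  # step +1 w.p. p, -1 w.p. q = 1 - p
            n += 1
        total_steps += n
    return total_steps / n_paths

p, i0 = 0.7, -5                      # illustrative values with q = 1 - p < 1/2
print(estimate_mean_T_plus(p, i0))   # should be close to |i0| / (p - q) = 12.5
```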

(c)  [5 Marks] Show that

$$M_n = \left(\frac{q}{p}\right)^{X_n}$$

is a martingale in the filtration generated by $(X_n)_{n \ge 1}$. Now consider the stopping time $T_j = \inf\{n > 0 : X_n = j\}$ for any given integer $j < i_0$. Explain why $(M_{n \wedge T_j})_{n \ge 0}$ is bounded by the constant $(q/p)^j$.

(d)  [5 Marks] Let $T_j$ be as in (c). Justifying your steps (which includes mentioning any theorems you apply and briefly justifying why you can use them), show that

$$\lim_{n \to \infty} P(T_j \le n) < 1.$$

Deduce that we can have $T_j = \infty$ with strictly positive probability, and briefly justify why this also implies $E[T_j] = \infty$.

(e)  [2 Marks] Why is it intuitive that $T_+$ is finite while $T_j$ is not?
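The contrast asked about in (e) can also be seen empirically: with upward drift the walk essentially always reaches 0, but it reaches a level $j$ below its starting point only with probability strictly less than 1. The sketch below truncates each path at a fixed number of steps, so the reported fractions are rough, illustrative estimates; all parameter values and the function name `fraction_hitting` are assumptions for the example.

```python
import random

def fraction_hitting(p, start, target, n_paths=5_000, max_steps=2_000):
    """Fraction of (truncated) paths that visit `target` within `max_steps` steps."""
    hits = 0
    for _ in range(n_paths):
        x = start
        for _ in range(max_steps):
            if x == target:
                hits += 1
                break
            x += 1 if random.random() < p else -1
    return hits / n_paths

p, i0, j = 0.7, -5, -10             # illustrative values with j < i0 < 0
print(fraction_hitting(p, i0, 0))   # close to 1: T+ is finite with probability 1
print(fraction_hitting(p, i0, j))   # well below 1: T_j can be infinite with positive probability
```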

2.  Consider a Markov chain with a finite state space $S = \{s_1, s_2, s_3, s_4\}$, modelling four different states that your insurance company can be in at the end of the fiscal year. We refer to $s_i$ as state $i$. In state 1, things are not so good and your company has made a loss for the year of 1 million pounds. In state 2, your company breaks even, earning precisely 0 pounds. In state 3, things are amazing, earning you 10 million pounds for the year. In state 4, things are also quite good with earnings of 4 million pounds for the year. The transitions between the states are modelled by the transition matrix

$$P = \begin{pmatrix}
0.4 & \ast & p(1,3) & p(1,4) \\
0.2 & \ast & \ast & \ast \\
0.6 & \ast & \ast & \ast \\
p(4,1) & \ast & p(4,3) & p(4,4)
\end{pmatrix},$$

where the entries marked $\ast$ are fixed numerical probabilities (each row of $P$ sums to 1).

If you find it helpful, you are welcome to use drawings to justify your answers. However, it is also fine to just refer to the entries of the transition matrix.

(a)  [1 mark] Suppose $p(1,4) = 0$ and $p(4,1) = p(4,4) = 0.4$. What are then the values of $p(1,3)$ and $p(4,3)$?

(b)  [5 marks] In the setting of (a), identify the closed communication classes, the transient states, and the recurrent states. Briefly explain your reasoning in deducing these classifications. Is the Markov chain irreducible?

(c)  [5 marks] Now suppose instead that $p(1,4) = 0.2$ while $p(4,1) = p(4,3) = 0$. How do your answers to (b) change? Again, you should briefly explain how you arrive at the different classifications.

(d)  [5 marks] Finally, consider $P$ with $p(1,4) = 0.2$ and $p(4,1) = p(4,4) = 0.4$. Recalling that every finite Markov chain has a stationary distribution $\pi$, you may use that

$$\pi = (0.3,\ 0.55,\ 0.05,\ 0.1)$$

is a stationary distribution for this transition matrix $P$. Write down what it means that this $\pi$ is a stationary distribution. Explain, by appealing to a theorem from lectures, why these probabilities can in fact be interpreted as the long-run probabilities of being in states 1, 2, 3, and 4, respectively (you need to briefly justify why the theorem can be applied). Explain whether or not the theorem applies in (b) and (c).

(e)  [4 marks] Let P be as in (d). In the long run, how often do you expect to be making 10 million pounds for the year, and what are the average yearly earnings that you expect of your insurance company?
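As an illustrative numeric check of the kind of computation (e) asks for: granted the long-run interpretation from (d), the long-run average of the yearly earnings is the $\pi$-weighted average of the earnings in the four states. The earnings vector below is read off from the problem statement; treating $\pi$ as the vector of long-run proportions is exactly what the theorem in (d) justifies.

```python
# Stationary distribution from (d) and yearly earnings (in millions of pounds) for states 1-4
pi = [0.3, 0.55, 0.05, 0.1]
earnings = [-1, 0, 10, 4]    # loss of 1m, break even, 10m, 4m

share_of_years_in_state_3 = pi[2]                                  # years earning 10 million
average_yearly_earnings = sum(p * e for p, e in zip(pi, earnings))

print(share_of_years_in_state_3)   # 0.05, i.e. roughly one year in twenty
print(average_yearly_earnings)     # -0.3 + 0.0 + 0.5 + 0.4 = 0.6 million pounds per year
```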

3.  Suppose claims are arriving at your insurance company at a rate of $\lambda > 0$, meaning that the total number of claims $N_t$ that have arrived by time $t$ is a Poisson process with intensity $\lambda$. These claims are not handled immediately, but are instead collected in a queue. At random times $T_1, T_2, \ldots$ all the claims currently in the queue are dealt with and so the queue is reset to zero. The waiting times between resets of the queue, namely $S_k = T_k - T_{k-1}$, are assumed to be i.i.d. with $S_k \sim \mathrm{Exp}(\tau)$ for some $\tau > 0$. We are interested in understanding the continuous-time Markov chain $(X_t)_{t \ge 0}$, where $X_t$ is the number of claims in the queue at time $t$ (currently waiting to be dealt with), starting from $X_0 = 0$.

(a)  [3 marks] Identify the $Q$-matrix of $(X_t)_{t \ge 0}$ by specifying the entries $Q(i, j)$ for all $i, j$ in the state space. Briefly explain why the entries are the way they are.

(b)  [5 marks] Let $p_t(0)$ be the probability of having $X_t = 0$. Using (a), show that $t \mapsto p_t(0)$ satisfies the differential equation

$$\frac{\mathrm{d}}{\mathrm{d}t}\, p_t(0) = \tau - (\lambda + \tau)\, p_t(0).$$

You may freely rely on results from lectures, but you need to carefully explain your steps.

Hint: think of the forward Kolmogorov equation.
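Before solving the equation analytically in (e), its form can be sanity-checked numerically: a simple forward-Euler integration of the differential equation in (b) shows $p_t(0)$ decreasing from 1 and levelling off as $t$ grows, consistent with the long-run behaviour studied in (d) and (g). The rate values $\lambda = 2$ and $\tau = 3$ below are illustrative assumptions.

```python
# Forward-Euler integration of the ODE in (b): p'(t) = tau - (lam + tau) * p(t), with p(0) = 1.
lam, tau = 2.0, 3.0       # illustrative rates
p, h = 1.0, 0.001         # initial value p_0(0) = 1 and step size
trajectory = []
for step in range(int(10.0 / h)):   # integrate up to t = 10
    p += h * (tau - (lam + tau) * p)
    trajectory.append(p)

print(trajectory[999], trajectory[-1])   # value near t = 1 and at t = 10 (close to its limit)
```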

(c)  [4 marks] Let $p_t(n)$ be the probability of having $X_t = n$. Similarly to (b), derive a differential equation for $t \mapsto p_t(n)$, carefully explaining your steps.

(d)  [6 marks] Using a theorem from lectures (which you should state), find the stationary distribution $\pi = (\pi_n)_{n \ge 0}$ of $(X_t)_{t \ge 0}$ and verify that the values you have found indeed give a probability distribution.

Hint: you may want to use (a). First identify $\pi_0$, then $\pi_1$, and so on, thus deducing a general expression for $\pi_n$.

(e)  [4 marks] First, explain why $p_0(0) = 1$. Next, solve the differential equation in (b) to find $p_t(0)$ for all $t \ge 0$.

Hint: solve the equation by showing that $\frac{\mathrm{d}}{\mathrm{d}s}\big(e^{(\lambda+\tau)s}\, p_s(0)\big) = \tau\, e^{(\lambda+\tau)s}$ and then integrating this from $s = 0$ to $s = t$.

(f)  [3 marks] Explain why $p_0(1) = 0$. Now use (e) to show that

$$\frac{\mathrm{d}}{\mathrm{d}t}\, p_t(1) = \frac{\lambda \tau}{\lambda + \tau} + \frac{\lambda^2}{\lambda + \tau}\, e^{-(\lambda+\tau)t} - (\lambda + \tau)\, p_t(1)$$

and then find $p_t(1)$ by solving this equation.

Hint: to solve the equation, show that $\frac{\mathrm{d}}{\mathrm{d}s}\big(e^{(\lambda+\tau)s}\, p_s(1)\big) = \frac{\lambda \tau}{\lambda + \tau}\, e^{(\lambda+\tau)s} + \frac{\lambda^2}{\lambda + \tau}$ and then integrate from $s = 0$ to $s = t$.

(g)  [3 marks] Derive the limits of $p_t(0)$ and $p_t(1)$ as $t \to \infty$. Explain how this relates to (d), by referring to a theorem from lectures (very briefly explaining why the assumptions are satisfied). In the long run, what proportion of the time do we expect there to be precisely 1 claim waiting to be dealt with?
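For intuition on (d) and (g), the queue itself can be simulated: claims arrive at rate $\lambda$ and the whole queue is cleared at rate $\tau$, so from any state the time to the next event is exponential with rate $\lambda + \tau$. The sketch below estimates the long-run proportions of time with 0 and with 1 claims waiting; the rates $\lambda = 2$, $\tau = 3$ and the function name `simulate_queue` are illustrative assumptions.

```python
import random

def simulate_queue(lam, tau, horizon=50_000.0):
    """Estimate long-run time fractions spent in states 0 and 1 for the claims queue."""
    t, x = 0.0, 0                       # current time and queue length, X_0 = 0
    time_in = {0: 0.0, 1: 0.0}
    while t < horizon:
        hold = random.expovariate(lam + tau)        # time until the next arrival or reset
        time_in[x] = time_in.get(x, 0.0) + hold
        t += hold
        if random.random() < lam / (lam + tau):
            x += 1                      # a new claim joins the queue
        else:
            x = 0                       # the queue is cleared (a reset from 0 leaves it at 0)
    return time_in[0] / t, time_in[1] / t

lam, tau = 2.0, 3.0                     # illustrative rates
print(simulate_queue(lam, tau))         # compare with the limits obtained in (d) and (g)
```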

4.  Let $(B_t)_{t \ge 0}$ be a standard Brownian motion, starting at zero. For given constants $b, x \in \mathbb{R}$ and $\sigma \ge 0$, let $(X_t)_{t \ge 0}$ be the unique solution to the SDE

$$\mathrm{d}X_t = -b X_t\, \mathrm{d}t + \sigma\, \mathrm{d}B_t, \qquad X_0 = x.$$

Moreover, let $(\mathcal{F}_t)_{t \ge 0}$ denote the filtration generated by this stochastic process.

(a)  [4 marks] Show that the process $(X_t)_{t \ge 0}$ is given by

$$X_t = x e^{-bt} + \int_0^t \sigma e^{-b(t-s)}\, \mathrm{d}B_s.$$

(b)  [3 marks] Referring to the definition of a stochastic integral, explain very briefly why we know that the stochastic integral in (a) is normally distributed. Then compute its mean and variance.

(c)  [6 marks] Let $s < t$. First, explain why $\mathcal{F}_s$ is independent of $(B_u - B_s)_{u \ge s}$. Using this, argue carefully that the conditional distribution of $X_t$ given $\mathcal{F}_s$ is

$$X_t \mid \mathcal{F}_s \sim \mathcal{N}\Big(e^{-b(t-s)} X_s,\ \frac{\sigma^2}{2b}\big(1 - e^{-2b(t-s)}\big)\Big).$$

Hint: rewrite $X_t$ in terms of $X_s$ and a stochastic integral from $s$ to $t$.
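The conditional distribution displayed in (c) can be checked empirically with an Euler-Maruyama discretisation of the SDE. The sketch below compares the sample mean and variance of $X_t$, started from $X_s = x_s$, with the two formulas above; the parameter values and the function name `simulate_X` are illustrative assumptions, and the discretisation introduces a small bias.

```python
import math
import random

def simulate_X(x_s, b, sigma, span, n_steps=500):
    """One Euler-Maruyama path of dX = -b X dt + sigma dB over a time span of length `span`."""
    x, h = x_s, span / n_steps
    for _ in range(n_steps):
        x += -b * x * h + sigma * math.sqrt(h) * random.gauss(0.0, 1.0)
    return x

b, sigma, x_s, span = 1.5, 0.8, 2.0, 1.0    # illustrative values of b, sigma, X_s and t - s
samples = [simulate_X(x_s, b, sigma, span) for _ in range(10_000)]

mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)

print(mean, math.exp(-b * span) * x_s)                            # ~ e^{-b(t-s)} X_s
print(var, sigma**2 / (2 * b) * (1 - math.exp(-2 * b * span)))    # ~ (sigma^2 / 2b)(1 - e^{-2b(t-s)})
```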

(d)  [6 marks] Suppose $b = \sigma^2$. You may use (without proof) that, for $c^2 < 1/2$, a normal random variable $Z \sim \mathcal{N}(\mu, c^2)$ satisfies

$$E\big[e^{Z^2}\big] = \frac{1}{\sqrt{1 - 2c^2}}\, \exp\Big(\frac{\mu^2}{1 - 2c^2}\Big).$$

Based on this, verify directly that $M_t = \exp\big(X_t^2 - bt\big)$ satisfies the martingale property for the filtration $(\mathcal{F}_t)_{t \ge 0}$ (you are not allowed to use Itô's formula).

(e)  [5 marks] Using Itô's formula, express $M_t = \exp\big(X_t^2 - bt\big)$ as an Itô process (for general $b$ and $\sigma$). Next, show what is special about the case $b = \sigma^2$ and explain how this relates to (d).

5.  Consider the problem of finding a solution $f(t, x)$ to the partial differential equation

$$\frac{\partial f(t, x)}{\partial t} = -\frac{\sigma^2}{2}\, \frac{\partial^2 f(t, x)}{\partial x^2} + b x\, \frac{\partial f(t, x)}{\partial x},$$

with terminal condition $f(T, x) = h(x)$, for a given continuous function $h : \mathbb{R} \to \mathbb{R}$ and given constants $b \in \mathbb{R}$ and $\sigma \ge 0$.

(a)  [3 marks] State the Feynman-Kac representation of the solution $f(t, x)$, expressing it as a conditional expectation of the solution $(X_t)_{t \ge 0}$ to a certain SDE (no justification necessary, just write down the SDE).

(b)  [2 marks] Justifying why it is true, write down the conditional distribution of $X_T$ given $X_t = x$ for $t < T$. You may freely use results from the above problems.

Hint: you may want to look at problem 4(c).

(c)  [3 marks] Now suppose the terminal condition is $h(x) = \exp(\theta x)$ for some constant $\theta \in \mathbb{R}$. Using (a) and (b), compute explicitly the solution $f(t, x)$.

Hint: you may wish to recall that the moment generating function of a normal random variable $Z \sim \mathcal{N}(\mu, c^2)$ is $M_Z(\theta) = E\big[e^{\theta Z}\big] = \exp\big(\theta \mu + \tfrac{1}{2}\theta^2 c^2\big)$.
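As a closing illustration, the Feynman-Kac representation asked for in 5(a) can be approximated by Monte Carlo: simulate the SDE from Problem 4 from time $t$ to $T$ starting at $x$ and average $h(X_T)$. The sketch below does this for the exponential terminal condition of (c); all parameter values and the function name `feynman_kac_mc` are illustrative assumptions, and the output can be compared with the closed-form answer obtained in (c).

```python
import math
import random

def feynman_kac_mc(x, t, T, b, sigma, theta, n_paths=20_000, n_steps=400):
    """Monte Carlo estimate of f(t, x) = E[exp(theta * X_T) | X_t = x] for dX = -b X dt + sigma dB."""
    h = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        y = x
        for _ in range(n_steps):                   # Euler-Maruyama step from t to T
            y += -b * y * h + sigma * math.sqrt(h) * random.gauss(0.0, 1.0)
        total += math.exp(theta * y)
    return total / n_paths

b, sigma, theta = 1.0, 0.5, 0.3        # illustrative constants
x, t, T = 1.0, 0.0, 2.0
print(feynman_kac_mc(x, t, T, b, sigma, theta))
```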