
Lecture 5: Stats 217

Summer 2022

This lecture introduces the branching process, a model used in studying the survival of family surnames, neutron chain reactions, the survival of mutant genes, and so on.

1    Setup

Suppose an organism at the end of its lifetime produces a random number ξ of offspring with probability distribution

P(ξ = k) = pk ,          k = 0, 1, 2, ...

We assume that all offspring act independently of each other and at every time point, each generates a random number of progeny, the random number being an i.i.d. copy of ξ . The parents then die.

Let Xn be the population size at generation n; then {Xn}_{n≥0} is called the branching process. The Xn individuals of generation n produce ξ_1^{(n)}, ξ_2^{(n)}, . . . , ξ_{Xn}^{(n)} offspring respectively, where the ξ_i^{(n)} are i.i.d. copies of ξ. The parents then die. So the population size at time n + 1 is

Xn+1 = ξ_1^{(n)} + ξ_2^{(n)} + · · · + ξ_{Xn}^{(n)}.
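This recursion is easy to simulate directly. A minimal sketch (the function names and the illustrative progeny law, 0, 1 or 2 offspring each with probability 1/3, are hypothetical choices, not from the notes):

```python
import random

def next_generation(x_n, sample_progeny):
    # X_{n+1} is the sum of X_n i.i.d. progeny counts; an empty sum is 0
    return sum(sample_progeny() for _ in range(x_n))

def simulate(n_generations, sample_progeny, x0=1):
    # Return the trajectory [X_0, X_1, ..., X_n] of the branching process
    traj = [x0]
    for _ in range(n_generations):
        traj.append(next_generation(traj[-1], sample_progeny))
    return traj

random.seed(0)
progeny = lambda: random.choice([0, 1, 2])  # each value with probability 1/3
print(simulate(10, progeny))
```

Once the trajectory hits 0 it stays at 0, since `next_generation` of an empty population is the empty sum.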

2    Examples of branching processes

2.1    Electron multipliers

A series of plates are set up in the path of electrons emitted by a source. Each electron, as it strikes the first plate, generates a random number of electrons, which in turn strike the next plate and generate more electrons, and so on. If we denote by Xn  the number of electrons emitted from the n’th plate due to electrons originating from the (n − 1)th plate, then Xn  can be modelled as a branching process.

2.2    Survival of family names

Family names are inherited by sons only. Suppose that each individual has probability pk of having k male offspring. Each such offspring will in turn produce more offspring; if there is at least one male in every generation, the family name continues, and otherwise it goes extinct.

3    Mean of a branching process

Let µ = E(ξ) be the mean of the progeny distribution. Suppose we start with one individual, X0 = 1. Let Mn = E(Xn) be the mean size of the nth generation. We want to derive an expression for Mn. Since X0 = 1, M0 = E(X0) = 1.

We will use the law of total expectation, which is as follows.  Let A and B be any random variables, then

E(A) = E(E(A|B)).

Of course, for a given application, we will be interested to find E(A), and we will have to come up with a random variable B which will make our life easier in that E(A|B) will be easy to compute.

Recall that

Xn+1 = Σ_{i=1}^{Xn} ξ_i^{(n)},

where the sum Σ_{i=1}^{0} is interpreted as 0: if Xn = 0, then there is nobody alive in generation n, so there cannot be any offspring in generation n + 1, which implies Xn+1 = 0.

Taking expectation on both sides and using the law of total expectation, we get

Mn+1 = E( Σ_{i=1}^{Xn} ξ_i^{(n)} )

= E( E( Σ_{i=1}^{Xn} ξ_i^{(n)} | Xn ) )

= E( Σ_{i=1}^{Xn} E( ξ_i^{(n)} | Xn ) )

Since the ξ_i^{(n)} are independent of Xn, it follows that E(ξ_i^{(n)} | Xn) = E(ξ_i^{(n)}) = E(ξ) = µ. Hence, Mn+1 = E(Xn × µ) = µE(Xn) = µMn.

Iterating this, for any n ≥ 0,

Mn = µMn−1 = µ^2 Mn−2 = · · · = µ^n M0 = µ^n

since M0  = 1 as written before.

So, we get three situations.

• If µ = 1, then for every n, Mn = µ^n = 1. So the population size remains constant and stable.

• If µ < 1, then Mn = µ^n → 0 exponentially fast, which indicates population extinction.

• If µ > 1, then Mn = µ^n → ∞ exponentially fast, which indicates population explosion.
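The relation Mn = µ^n can be checked by simulation. A sketch using the progeny law p0 = 3/4, p2 = 1/4 that appears in the examples later in these notes (so µ = 2 × 1/4 = 1/2):

```python
import random

def generation_size(n, sample_progeny, x0=1):
    # Evolve the branching process for n generations and return X_n
    x = x0
    for _ in range(n):
        x = sum(sample_progeny() for _ in range(x))
    return x

random.seed(1)
progeny = lambda: random.choices([0, 2], weights=[3, 1])[0]  # mu = 1/2
n, reps = 3, 20000
estimate = sum(generation_size(n, progeny) for _ in range(reps)) / reps
print(estimate)  # should be close to mu^n = 1/8
```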

4    Variance of a branching process

Let σ^2 = Var(ξ) denote the variance of the progeny distribution. Let Vn = Var(Xn) denote the variance of the branching process. Again, we will develop a recursion to solve for Vn. First, note that V0 = Var(X0) = Var(1) = 0.

We will use the following representation of the variance: for any random variables A and B ,

Var(A) = Var(E(A|B)) + E(Var(A|B)).

We write

Vn+1  = Var(Xn+1)

= Var(E(Xn+1|Xn )) + E(Var(Xn+1|Xn ))

Now,

E(Xn+1 | Xn) = E( Σ_{i=1}^{Xn} ξ_i^{(n)} | Xn ) = µXn

by the logic explained in the previous section. Hence,

Var(E(Xn+1 | Xn)) = Var(µXn) = µ^2 Var(Xn) = µ^2 Vn.

Next,

Var(Xn+1 | Xn) = Var( Σ_{i=1}^{Xn} ξ_i^{(n)} | Xn ) = Xn σ^2

because the ξ_i^{(n)} are i.i.d. copies of ξ and independent of Xn, so each term has conditional variance Var(ξ_i^{(n)} | Xn) = Var(ξ_i^{(n)}) = Var(ξ) = σ^2, and, conditionally on Xn, the variances of the Xn independent terms add. Hence,

E(Var(Xn+1 | Xn)) = E(σ^2 Xn) = σ^2 E(Xn) = σ^2 Mn = σ^2 µ^n. This establishes that for every n ≥ 0,

Vn+1 = µ^2 Vn + σ^2 µ^n.

Written another way, for every n ≥ 1,

Vn = µ^2 Vn−1 + σ^2 µ^(n−1).

There are two cases here.

•  (µ = 1) In this case, the recursion becomes

Vn = Vn−1 + σ^2

which, with the boundary condition V0 = 0, yields Vn = nσ^2. So the variance grows linearly in this case.


• (µ ≠ 1) Writing out a few terms, it becomes evident that

Vn = σ^2 (µ^(n−1) + µ^n + · · · + µ^(2n−2)) = σ^2 µ^(n−1) × (µ^n − 1)/(µ − 1).

This can be proven by induction.

If µ > 1, then this variance explodes exponentially. If µ < 1, this variance decays exponentially to 0. In the latter case, recall that we have already shown Mn → 0; now we see that Vn → 0 as well. So we have generation sizes whose mean and variance both tend to 0, which shows that the generation size itself must converge to 0, indicating extinction.
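The recursion and the closed-form expression for Vn can be cross-checked numerically. A quick sketch (σ^2 = 3/4 is an arbitrary test value):

```python
def variance_recursion(mu, sigma2, n):
    # Iterate V_k = mu^2 * V_{k-1} + sigma2 * mu^(k-1) starting from V_0 = 0
    v = 0.0
    for k in range(1, n + 1):
        v = mu**2 * v + sigma2 * mu**(k - 1)
    return v

def variance_closed_form(mu, sigma2, n):
    # V_n = n * sigma2 when mu = 1, else sigma2 * mu^(n-1) * (mu^n - 1)/(mu - 1)
    if mu == 1:
        return n * sigma2
    return sigma2 * mu**(n - 1) * (mu**n - 1) / (mu - 1)

for mu in (0.5, 1.0, 2.0):
    for n in range(6):
        assert abs(variance_recursion(mu, 0.75, n) - variance_closed_form(mu, 0.75, n)) < 1e-9
print("recursion matches closed form")
```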

5    Extinction probabilities

Assume X0 = 1 for simplicity. Let un = P(Xn = 0), that is, the probability that the population is extinct at time n (note: we are not saying n is the first time of extinction).

Then, we can do a first step decomposition:

un = P(Xn = 0)

= Σ_k P(Xn = 0, X1 = k)

= Σ_k P(Xn = 0 | X1 = k) P(X1 = k)

= Σ_k P(Xn = 0 | X1 = k) P(ξ = k)

= Σ_k P(Xn = 0 | X1 = k) pk ,

where we used the fact that X1 is really a copy of ξ (here we are using X0 = 1). Now, given X1 = k, to get Xn = 0, each of the k subpopulations generated by the k first-generation individuals must go extinct by time n − 1. These subpopulations are also independent. Hence, we can say

P(Xn = 0 | X1 = k) = P(Xn−1 = 0 | X0 = k) = u_{n−1}^k ,

which finally gives the recursion

un = Σ_k u_{n−1}^k pk ;      u0 = 0.


5.1    Example

Suppose the progeny distribution is P(ξ = 0) = 3/4 and P(ξ = 2) = 1/4 (the same example is revisited in Sections 7.1 and 8.1). Then the recursion becomes

un = 3/4 + (1/4) u_{n−1}^2

and one can numerically calculate un .
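Iterating the recursion numerically is straightforward. A minimal sketch for a general progeny law {pk}, illustrated with the distribution p0 = 3/4, p2 = 1/4 used in Section 7.1:

```python
def extinction_by_time_n(p, n):
    # Iterate u_m = sum_k p_k * u_{m-1}^k starting from u_0 = 0.
    # p maps k to p_k = P(xi = k).
    u = 0.0
    for _ in range(n):
        u = sum(pk * u**k for k, pk in p.items())
    return u

p = {0: 0.75, 2: 0.25}
for n in (1, 2, 5, 20):
    print(n, extinction_by_time_n(p, n))
```

The values increase with n and approach 1, the eventual extinction probability found for this example in Section 7.1.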

6    Connection with probability generating function

Define the probability generating function (pgf) of the distribution of ξ as

ϕ(s) = E(s^ξ) = Σ_k pk s^k ,      0 ≤ s ≤ 1.

Note that ϕ(0) = p0, i.e. the probability that no child is born, and ϕ(1) = Σ_k pk = 1.

Then, we can write the recursion

un = Σ_k u_{n−1}^k pk

as simply

un  = ϕ(un1).

Hence, if we know the pgf of the distribution of ξ, we can compute un  successively.

7    Limiting extinction probability

In this section, we answer the following question:  what is the probability that the population eventually becomes extinct? We will see that the answer is crucially linked to the pgf.

Theorem 7.1. The eventual extinction probability of a branching process starting with 1 individual and with progeny pgf ϕ is the smallest non-negative solution to the equation ϕ(s) = s.

We will not prove the theorem, but a few comments should be made here.

•  Suppose p0 = 0, i.e. the progeny size can never be 0. Then of course, the population can never go extinct. In this case, ϕ(0) = p0 = 0, so s = 0 solves ϕ(s) = s; the smallest non-negative solution is 0, matching the extinction probability.

•  1 is always a solution to ϕ(s) = s. In certain cases, the eventual probability of extinction equals 1.  This would mean that almost surely, the population becomes extinct.  In other cases, the extinction probability is strictly between 0 and 1.

7.1    Example

Consider the example from before: p0 = 3/4, p2 = 1/4. Then ϕ(s) = (3 + s^2)/4. Let us solve

(3 + s^2)/4 = s,    i.e.    s^2 − 4s + 3 = 0.

This equation has two roots, 1 and 3, so the smallest non-negative solution is 1. This means the population will become extinct almost surely.
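For this example the fixed-point equation is just a quadratic, so the roots can be checked directly (a small sketch):

```python
import math

# pgf for p0 = 3/4, p2 = 1/4
phi = lambda s: (3 + s * s) / 4

# phi(s) = s is equivalent to s^2 - 4s + 3 = 0; solve by the quadratic formula
disc = math.sqrt(4**2 - 4 * 3)
roots = sorted([(4 - disc) / 2, (4 + disc) / 2])
print(roots)  # the roots 1 and 3; the smallest non-negative one is 1
```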

7.2    Showing extinction probability satisfies ϕ(s) = s

Define u∞ = P(eventual extinction). Then one can again do a first-step decomposition:

u∞ = P(eventual extinction)

= Σ_k P(eventual extinction | X1 = k) pk

= Σ_k u∞^k pk

= ϕ(u∞),

showing that u∞ ∈ [0, 1] is a solution to ϕ(s) = s.

8    Extinction probability based on mean progeny size

In this final section, we connect u∞, the extinction probability, to µ, the mean progeny size.

Define f(s) = ϕ(s) − s. Looking for solutions to ϕ(s) = s is the same as looking for roots of f(s) = 0. So we will try to understand some properties of f.

f′(s) = ϕ′(s) − 1 = Σ_k k pk s^(k−1) − 1

and in particular,

f′(1) = Σ_k k pk − 1 = µ − 1.

f′′(s) = ϕ′′(s) = Σ_k k(k − 1) pk s^(k−2) ≥ 0.

So f′ is non-decreasing on [0, 1]. In fact, if p0 + p1 < 1, then pk > 0 for some k > 1, and hence f′′(s) > 0 on (0, 1]. In this case, f′ is strictly increasing on [0, 1].

•  Since f′ is non-decreasing, f′(s) ≤ f′(1) for all s ∈ [0, 1], implying f′(s) ≤ µ − 1.

Now we draw conclusions about the smallest solution u∞ to ϕ(s) = s.

•  Suppose µ < 1. Then f′(1) = µ − 1 < 0 and so f′(s) < 0 for every s ∈ [0, 1]. This shows that f is strictly decreasing. Since f(1) = 0, we get that for any s ∈ [0, 1), f(s) > f(1) = 0, implying ϕ(s) > s. Since ϕ(1) = 1, we conclude that the smallest non-negative solution to ϕ(s) = s is s = 1. Hence, u∞ = 1 and the population eventually dies out.

•  Next, suppose µ = 1. Here, we have two cases.

– (p0 + p1 = 1) In such a case, 1 = µ = p1, so each individual generates exactly one progeny. Thus the population never goes extinct and u∞ = 0.

– (p0 + p1 < 1) In such a case, pk > 0 for some k > 1. Hence f′′(s) > 0, so that f′ is strictly increasing on [0, 1]. Hence for any s ∈ [0, 1), f′(s) < f′(1) = 1 − 1 = 0, implying f is strictly decreasing on [0, 1]. Thus, proceeding as in the case µ < 1, f(s) > f(1) = 0 for every s ∈ [0, 1), implying ϕ(s) > s. Since ϕ(1) = 1, s = 1 is the smallest non-negative solution to ϕ(s) = s. Hence, u∞ = 1 and the population goes extinct almost surely.

• For µ > 1, one really has to solve the equation ϕ(s) = s; the smallest solution will necessarily be smaller than 1, which shows that P(eventual extinction) < 1, implying P(eventual survival) > 0. The population survives with positive probability.
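In practice the smallest root can be found by iterating ϕ from 0, since un = ϕ(un−1) increases to the smallest non-negative fixed point. A sketch with a hypothetical supercritical law p0 = 1/4, p2 = 3/4 (so µ = 3/2 > 1):

```python
def extinction_probability(phi, iters=200):
    # u_n = phi(u_{n-1}) with u_0 = 0 converges upward to the
    # smallest non-negative fixed point of phi, i.e. u_infinity
    u = 0.0
    for _ in range(iters):
        u = phi(u)
    return u

phi = lambda s: 0.25 + 0.75 * s * s  # pgf of p0 = 1/4, p2 = 3/4
u = extinction_probability(phi)
print(u)  # approaches 1/3: extinction probability 1/3, survival probability 2/3
```

Here ϕ(s) = s reduces to 3s^2 − 4s + 1 = 0, whose roots are 1/3 and 1, and the iteration picks out the smaller root.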

8.1    Verifying the example

Recall the example: p0  = 3/4, p2  = 1/4. We showed previously that the extinction probability is 1 by directly solving ϕ(s) = s. Now, we conclude the same by noticing that µ = 2 × (1/4) = 1/2 < 1. Hence the population becomes extinct almost surely.