
STA261H1S Term Test  1  (Wed Evening)

2022

Question 1

Let’s assume that X1, X2 , ..., Xn  ∼ f (x; θ), where the density is given by,

f(x; θ) = θe^(−θx),   for x > 0

(a)  [6 Marks] Derive the maximum likelihood estimator of θ. Assuming that the true value of θ is θ0, calculate the Fisher information.

 

Solution:

L(θ) = θ^n exp(−θ Σ x_i)

ℓ(θ) = n ln(θ) − θ Σ x_i

S(θ) = ℓ′(θ) = n/θ − Σ x_i = 0  ⟹  θ̂ = n / Σ x_i

Now the second derivative,

ℓ″(θ) = −n/θ^2

E(ℓ″(θ))|θ=θ0 = E(−n/θ^2)|θ=θ0 = −n/θ0^2

Thus, the Fisher information = −E(ℓ″(θ))|θ=θ0 = n/θ0^2.
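As a quick numerical sanity check (not part of the test), the MLE θ̂ = n/Σ x_i derived above can be verified by simulation. The true value θ0 = 2 and the sample size are arbitrary illustrative choices:

```python
import random

random.seed(0)
theta0 = 2.0          # true rate parameter (arbitrary illustrative value)
n = 100_000

# draw an i.i.d. sample from f(x; theta0) = theta0 * exp(-theta0 * x)
sample = [random.expovariate(theta0) for _ in range(n)]

# MLE derived above: theta_hat = n / sum(x_i)
theta_hat = n / sum(sample)
print(theta_hat)      # should be close to theta0
```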

(b)  [3 Marks] Show that E(S(θ))|θ=θ0 = 0, where S(θ) is the score function from (a).

 

Solution:

E(S(θ))|θ=θ0 = E(n/θ − Σ X_i)|θ=θ0

= n/θ0 − E(Σ X_i)

= n/θ0 − n(1/θ0)    (since E(X_i) = 1/θ0)

= 0     ■

(c)  [1 Mark] What are the mean and the variance of the asymptotic distribution of the MLE?

Solution: as n → ∞ we have θ̂ ∼ N(θ0, θ0^2/n) approximately, so the asymptotic mean is θ0 and the asymptotic variance is θ0^2/n.
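The asymptotic variance θ0^2/n can be illustrated with a small Monte Carlo experiment: repeatedly draw a sample, compute θ̂ = n/Σ x_i, and compare the empirical variance of the estimates with θ0^2/n. The values θ0 = 2, n = 500, and 2000 replications are arbitrary choices:

```python
import random
import statistics

random.seed(1)
theta0, n, reps = 2.0, 500, 2000

# sampling distribution of theta_hat = n / sum(x_i) over many replications
estimates = []
for _ in range(reps):
    total = sum(random.expovariate(theta0) for _ in range(n))
    estimates.append(n / total)

mc_var = statistics.variance(estimates)
asymptotic_var = theta0 ** 2 / n   # theta0^2 / n from part (c)
print(mc_var, asymptotic_var)      # the two should roughly agree
```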


Question 2

Let X1, X2, . . . , X25 ∼ N(µ, σ1^2 = 9) and, independently of X1, X2, . . . , X25, let Y1, Y2, . . . , Y20 ∼ N(µ, σ2^2 = 16), where µ ∈ R is unknown. Let X̄ = (1/25) Σ X_i and Ȳ = (1/20) Σ Y_i. We consider estimating µ by T = aX̄ + (1 − a)Ȳ, where a ∈ (0, 1).

(a)  [5 Marks] Is T an unbiased estimator of µ? If yes, prove your answer. If no, calculate the bias.

Solution: E(T) = E(aX̄ + (1 − a)Ȳ) = aE(X̄) + (1 − a)E(Ȳ) = aµ + (1 − a)µ = µ(a + 1 − a) = µ, and so T is an unbiased estimator of µ.     ■

 

 

 

 

(b)  [5 Marks] Find the value of a that minimizes the variance of T (i.e. Var(T)).

Note: You don’t have to do any second derivative tests.

Solution: V(T) = V(aX̄ + (1 − a)Ȳ) = a^2 V(X̄) + (1 − a)^2 V(Ȳ) (by independence) = a^2 × 9/25 + (1 − a)^2 × 16/20

d/da V(T) = 2a × 9/25 − 2(1 − a) × 16/20

Setting this equal to zero and solving for a,

(9/25)a = (16/20)(1 − a)  =⇒  a = 20/29 = 0.6896552     ■
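The minimizer a = 20/29 can be double-checked numerically by evaluating Var(T) = a^2(9/25) + (1 − a)^2(16/20) on a fine grid over (0, 1):

```python
# Var(T) as a function of the weight a, using the two known variances
def var_T(a):
    return a ** 2 * (9 / 25) + (1 - a) ** 2 * (16 / 20)

a_star = 20 / 29                       # the minimizer found above
grid = [k / 1000 for k in range(1, 1000)]
best = min(grid, key=var_T)            # grid point with the smallest variance
print(a_star, best, var_T(a_star))
```

The grid minimum lands next to a = 20/29 ≈ 0.6897, confirming the calculus answer.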


Question 3

Suppose we have a finite population Π and a measurement X : Π → {0, 1}, where |Π| = 20 and |{π : X(π) = 0}| = 13.

(a)  [2 Marks] Determine fX (0) and fX (1).

Solutions: Here, fX (0) = 13/20 and fX (1) = 7/20.

 

 

(b)  [4 Marks] For an i.i.d. sample of size 5 (this means sampling with replacement), determine the probability that 5fˆX (1) = 4.

Solution

Note that 5f̂X(1) is the number of 1's in the sample of size 5, and under iid sampling 5f̂X(1) ∼ Bin(n = 5, θ = 7/20), and so P(5f̂X(1) = 4) = C(5, 4) (7/20)^4 (13/20)^1 = 0.04877031     ■
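The binomial probability can be reproduced with Python's standard library:

```python
from math import comb

# P(exactly 4 ones in 5 iid draws), with success probability 7/20
n, k, theta = 5, 4, 7 / 20
p = comb(n, k) * theta ** k * (1 - theta) ** (n - k)
print(p)   # 0.04877031...
```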

 

(c)  [4 Marks] For a simple random sample of size 5 (without replacement), determine the probability that 5f̂X (1) = 2.

Solution

Let X be the number of elements with measurement 1 in a sample of size 5. Then 5f̂X(1) = X, so we need P(X = 2) under sampling without replacement. Applying the hypergeometric distribution (choose 2 of the 7 ones and 3 of the 13 zeros),

P(X = 2) = C(7, 2) C(13, 3) / C(20, 5) = 6006/15504 ≈ 0.3874     ■
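Likewise, the hypergeometric probability for sampling without replacement can be computed directly:

```python
from math import comb

# choose 2 of the 7 ones and 3 of the 13 zeros, out of all samples of size 5
p = comb(7, 2) * comb(13, 3) / comb(20, 5)
print(p)
```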


Question 4

X1, X2, ..., Xn are random variables from a N(0, θ) distribution. Here, θ = σ^2 > 0 is the variance of the distribution, and is an unknown parameter.

(a)  [4 Marks] Find the sufficient statistic for θ

Solutions: Here we have,

L(θ) = (1/(2πθ))^(n/2) exp(−(1/(2θ)) Σ x_i^2) = g_θ(T(s)) h(s)

here, h(s) = 1 and T(s) = Σ x_i^2, and thus Σ x_i^2 is a sufficient statistic for θ.

 

 

 

 

(b)  [6 Marks] Find the MLE of θ (no need to calculate the second derivative). Is the obtained MLE a consistent estimator?

Solution: Here,

 


L(θ) = (1/(2πθ))^(n/2) exp(−(1/(2θ)) Σ x_i^2)

ℓ(θ) = −(n/2) log(θ) − (n/2) log(2π) − (1/(2θ)) Σ x_i^2

ℓ′(θ) = −n/(2θ) + (1/(2θ^2)) Σ x_i^2 = 0

θ̂ = (1/n) Σ x_i^2

Since the mean parameter µ = 0 here, V(X) = E(X^2) = θ. By the WLLN, (1/n) Σ X_i^2 → E(X^2) = θ in probability, and thus θ̂ is a consistent estimator.
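Consistency of θ̂ = (1/n) Σ x_i^2 can be illustrated by simulation; θ0 = 3 and n = 200,000 are arbitrary illustrative choices:

```python
import random

random.seed(2)
theta0 = 3.0                       # true variance (arbitrary choice)
n = 200_000

# sample from N(0, theta0); gauss takes the standard deviation
sample = [random.gauss(0, theta0 ** 0.5) for _ in range(n)]

# MLE derived above: the average of the squared observations
theta_hat = sum(x * x for x in sample) / n
print(theta_hat)                   # close to theta0 for large n
```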


Question 5

Suppose that a statistical model is given by the family of Bernoulli (θ) distributions where θ ∈ Ω = [0, 1].

(a)  [4 Marks] If the interest is in making inference about the probability that two independent observations from this model are both equal to zero, then determine ψ(θ).

Solution

ψ(θ) = P(X1 = 0, X2 = 0) = (1 − θ)^2        ■

 


(b)  [6 Marks] If {X1, X2, . . . , Xn} are iid random variables from this model, find the maximum likelihood estimator for the aforementioned ψ(θ) and calculate the bias in your estimator.

Solution

X̄ is the MLE of θ, and ψ(θ) = (1 − θ)^2 is a 1-1 function of θ on Ω = [0, 1], and so T = (1 − X̄)^2 is the MLE of ψ(θ).

E(T) = E((1 − X̄)^2) = (E(1 − X̄))^2 + V(1 − X̄) = (1 − θ)^2 + θ(1 − θ)/n = ψ(θ) + θ(1 − θ)/n

=⇒ Bias(T) = E(T) − ψ(θ) = θ(1 − θ)/n     ■
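The bias θ(1 − θ)/n can be confirmed exactly for a small case by summing over the distribution of S = nX̄ ∼ Bin(n, θ); here n = 10 and θ = 0.3 are arbitrary choices:

```python
from math import comb

n, theta = 10, 0.3

# exact E[(1 - Xbar)^2], summing over the Bin(n, theta) pmf of S = n * Xbar
ET = sum(comb(n, s) * theta ** s * (1 - theta) ** (n - s) * (1 - s / n) ** 2
         for s in range(n + 1))

bias = ET - (1 - theta) ** 2          # E(T) - psi(theta)
print(bias, theta * (1 - theta) / n)  # the two expressions agree
```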