

Math 170S Homework 2

In exercises 1-4, x1, x2, · · · , xn are i.i.d. sample outcomes of a random variable X.

Exercise 1

Let X ∼ Exp(λ), an exponential distribution with parameter λ > 0. This is a continuous probability distribution on the non-negative real numbers, with p.d.f. given by

$$f_X(t) = \lambda e^{-\lambda t}\,\mathbf{1}_{\{t \ge 0\}}.$$

(i) Show that E X = 1/λ.

(ii) Show that the log-likelihood function for the parameter λ is given by

$$l(x_1, x_2, \dots, x_n; \lambda) = n \log \lambda - \lambda \sum_{i=1}^{n} x_i.$$

(iii) Show that the MLE for λ is 1/x̄.
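As a numerical sanity check (a sketch in Python, assuming numpy is available; the parameter λ = 2, the sample size, and the seed are arbitrary choices), one can simulate exponential samples and confirm that the log-likelihood from (ii) peaks at 1/x̄:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 2.0  # arbitrary choice for the simulation
x = rng.exponential(scale=1.0 / lam_true, size=10_000)  # numpy parametrizes Exp by 1/lambda

def loglik(lam):
    # l(x_1, ..., x_n; lambda) = n log(lambda) - lambda * sum(x_i), as in part (ii)
    return len(x) * np.log(lam) - lam * x.sum()

lam_hat = 1.0 / x.mean()  # the closed-form MLE from part (iii)
grid = np.linspace(0.5 * lam_hat, 1.5 * lam_hat, 1001)
print(lam_hat)                        # should be close to lam_true
print(grid[np.argmax(loglik(grid))])  # grid maximizer agrees with 1/x-bar
```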

Exercise 2

Let X ∼ Geom(p), a geometric distribution with parameter p ∈ (0, 1). This is a discrete probability distribution on the positive integers, with p.m.f. given by

$$f_X(k) = (1-p)^{k-1}\,p, \qquad k = 1, 2, \dots$$

(i) Show that E X = 1/p.

(ii) Show that the log-likelihood function for the parameter p is given by



$$l(x_1, x_2, \dots, x_n; p) = n \log p + \Bigl(\sum_{i=1}^{n} x_i - n\Bigr) \log(1-p).$$

(iii) Show that the MLE for p is 1/x̄.
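The same check works for the geometric case (again a sketch assuming numpy; note that numpy's geometric sampler uses the same support {1, 2, · · · } as the p.m.f. above):

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.3  # arbitrary choice for the simulation
x = rng.geometric(p_true, size=10_000)  # support {1, 2, ...}, matching f_X above

def loglik(p):
    # l(x_1, ..., x_n; p) = n log p + (sum(x_i) - n) log(1 - p), as in part (ii)
    n = len(x)
    return n * np.log(p) + (x.sum() - n) * np.log(1.0 - p)

p_hat = 1.0 / x.mean()  # the closed-form MLE from part (iii)
grid = np.linspace(0.001, 0.999, 999)
print(p_hat, grid[np.argmax(loglik(grid))])  # both should be near p_true
```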

Exercise 3

Let X ∼ Poi(λ), a Poisson distribution with parameter λ > 0. This is a discrete probability distribution on the non-negative integers, with p.m.f. given by


 

$$f_X(k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \dots$$


(i) Show that E X = λ.

(ii) Show that the log-likelihood function for the parameter λ is given by


$$l(x_1, x_2, \dots, x_n; \lambda) = \Bigl(\sum_{i=1}^{n} x_i\Bigr) \log \lambda - n\lambda - \log(x_1!\,x_2! \cdots x_n!).$$

(iii) Show that the MLE for λ is x̄.
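Here is the analogous check (a sketch assuming numpy and scipy are available), this time maximizing the log-likelihood numerically and using gammaln(k + 1) = log k! for the factorial term:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(2)
lam_true = 4.0  # arbitrary choice for the simulation
x = rng.poisson(lam_true, size=10_000)

def negloglik(lam):
    # -l(x_1, ..., x_n; lambda) from part (ii); gammaln(k + 1) = log(k!)
    return -(x.sum() * np.log(lam) - len(x) * lam - gammaln(x + 1).sum())

res = minimize_scalar(negloglik, bounds=(1e-6, 20.0), method="bounded")
print(res.x, x.mean())  # numerical maximizer agrees with x-bar from part (iii)
```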

Exercise 4

Let X ∼ Binom(N, p), a binomial distribution with parameters N and p. This is a discrete probability distribution on the integers between 0 and N, with p.m.f. given by

$$f_X(k) = \binom{N}{k} p^k (1-p)^{N-k}, \qquad k = 0, 1, 2, \dots, N.$$

(i) Show that E X = Np.

(ii) Suppose the parameter N is already known. Show that the log-likelihood function for the parameter p is given by


$$l(x_1, x_2, \dots, x_n; p) = \Bigl(\sum_{i=1}^{n} x_i\Bigr) \log p + \Bigl(nN - \sum_{i=1}^{n} x_i\Bigr) \log(1-p) + \log\Bigl(\binom{N}{x_1}\binom{N}{x_2} \cdots \binom{N}{x_n}\Bigr).$$

(iii) Show that the MLE for p is x̄/N.
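And the binomial check (a sketch assuming numpy and scipy; N = 12 and p = 0.4 are arbitrary choices). The binomial-coefficient term in (ii) does not involve p, so it shifts the log-likelihood without moving its maximizer:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
N, p_true = 12, 0.4  # arbitrary choices for the simulation
x = rng.binomial(N, p_true, size=10_000)

def loglik(p):
    # l(x_1, ..., x_n; p) from part (ii); the binomial-coefficient term is constant in p
    n, s = len(x), x.sum()
    log_coeff = (gammaln(N + 1) - gammaln(x + 1) - gammaln(N - x + 1)).sum()
    return s * np.log(p) + (n * N - s) * np.log(1.0 - p) + log_coeff

p_hat = x.mean() / N  # the closed-form MLE from part (iii)
grid = np.linspace(0.001, 0.999, 999)
print(p_hat, grid[np.argmax(loglik(grid))])  # both should be near p_true
```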

Exercise 5

Let X, Y be random variables satisfying the relation Y = α + βX + ε, where ε ∼ N(0, σ²) is independent of X. This models a linear relationship between Y and X, subject to some random error ε. Suppose (x1, y1), (x2, y2), · · · , (xn, yn) are i.i.d. sample outcomes of (X, Y).

In lecture, we computed the log-likelihood function to be

$$l(\vec{x}, \vec{y};\, \alpha, \beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \alpha - \beta x_i)^2 = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\left\|\vec{y} - \alpha\vec{1} - \beta\vec{x}\right\|^2,$$

where x⃗ := (x1, x2, · · · , xn), y⃗ := (y1, y2, · · · , yn), and 1⃗ := (1, 1, · · · , 1) ∈ ℝⁿ. We further computed the MLE for α and β to be

$$\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}, \qquad \hat{\beta} = \frac{(\vec{x} - \bar{x}\vec{1}) \cdot (\vec{y} - \bar{y}\vec{1})}{\left\|\vec{x} - \bar{x}\vec{1}\right\|^2} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}.$$


We concluded that the line y = α̂ + β̂x is the least squares regression line for the sample points (x1, y1), (x2, y2), · · · , (xn, yn).


(i) Using the likelihood function given above, show that the MLE for σ² is


$$\hat{\sigma}^2 = \frac{1}{n}\left\|\vec{y} - \hat{\alpha}\vec{1} - \hat{\beta}\vec{x}\right\|^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{\alpha} - \hat{\beta} x_i)^2.$$

(ii) Suppose the midterm and final scores of 10 students in a statistics course are as follows:

 

Midterm: 85 76 87 73 92 85 93 66 85 68

Final: 83 78 85 75 93 88 92 72 85 73

Calculate the least squares regression line for these data. Use the midterm score as the x-variable and the final score as the y-variable.

 

(iii) Plot this line and the data points on the same graph, and draw the vertical distances from the line to the data points.

(iv) Compute σ̂² for these data. How does its square root compare to the vertical distances that you've drawn?
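To check your answers to (ii)-(iv), the following sketch (assuming numpy and matplotlib are available) computes α̂, β̂, and σ̂² from the formulas above and draws the line with its vertical residuals; note that √σ̂² is precisely the root-mean-square of the vertical distances from (iii):

```python
import numpy as np
import matplotlib.pyplot as plt

# Midterm (x) and final (y) scores from part (ii).
x = np.array([85, 76, 87, 73, 92, 85, 93, 66, 85, 68], dtype=float)
y = np.array([83, 78, 85, 75, 93, 88, 92, 72, 85, 73], dtype=float)

# Least squares / MLE coefficients from the formulas above.
beta_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
alpha_hat = y.mean() - beta_hat * x.mean()

# Part (iv): sigma^2-hat is the mean squared vertical distance (residual).
resid = y - (alpha_hat + beta_hat * x)
sigma2_hat = (resid ** 2).mean()
print(alpha_hat, beta_hat, sigma2_hat, np.sqrt(sigma2_hat))

# Part (iii): scatter plot, fitted line, and vertical distances.
plt.scatter(x, y, zorder=3)
xs = np.array([x.min(), x.max()])
plt.plot(xs, alpha_hat + beta_hat * xs)
plt.vlines(x, alpha_hat + beta_hat * x, y, linestyles="dotted")
plt.xlabel("Midterm")
plt.ylabel("Final")
plt.show()
```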