STATS 310 Introduction to Statistical Inference SEMESTER 1, 2018
1. Let X, Y ∼ Exponential(λ) independently, each with probability density function f(x; λ) = λe^(−λx), x ≥ 0, λ > 0.
(a) Write down an expression for the joint probability density function fX,Y (x,y). [5 marks]
(b) Consider the transformation Z = X + Y . Show that the joint probability
density function of X and Z is fX,Z (x,z) = fX (x)fY (z − x).
Be careful to state what the values of X and Z can be.
[5 marks]
(c) Show that the marginal probability density function of Z is given by
fZ(z) = λ² z e^(−λz), z ≥ 0,
i.e., Z ∼ Gamma(2,λ).
[5 marks]
(d) Show that the conditional distribution of X | Z = z is Uniform(0, z).
[5 marks]
[20 marks]
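As a quick numerical sanity check of parts (c) and (d) (a simulation sketch, not part of the exam; the seed, rate, and sample size are arbitrary choices):

```python
import random
import statistics

random.seed(1)
lam = 2.0
n = 200_000

# X, Y independent Exponential(lam); Z = X + Y.
x = [random.expovariate(lam) for _ in range(n)]
y = [random.expovariate(lam) for _ in range(n)]
z = [a + b for a, b in zip(x, y)]

# If Z ~ Gamma(2, lam), then E(Z) = 2/lam and Var(Z) = 2/lam^2.
mean_z = statistics.fmean(z)
var_z = statistics.pvariance(z)

# If X | Z = z ~ Uniform(0, z), then X/Z ~ Uniform(0, 1), so E(X/Z) = 1/2.
mean_ratio = statistics.fmean(a / c for a, c in zip(x, z))

print(round(mean_z, 2), round(var_z, 2), round(mean_ratio, 2))
```

With λ = 2 the simulated moments should sit near E(Z) = 1, Var(Z) = 0.5, and E(X/Z) = 0.5, matching the Gamma(2, λ) and Uniform(0, z) claims.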
2. Let X = (X1, ..., Xn)^T be a random sample taken from a one-parameter regular distribution with joint probability density function f(x; θ), and let θ̂ = θ̂(X) be an unbiased estimator of θ.
(a) Write down an expression for the log-likelihood function ℓ(θ).
[3 marks]
(b) Hence write down an expression for the score statistic U = U(X; θ).
[3 marks]
(c) Using the properties of a regular distribution, show that E(U) = 0.
[3 marks]
(d) Using the properties of a regular distribution, show that an alternative calculation for the Fisher information, which is defined as I(θ) = Var(U), is
I(θ) = −E(∂²ℓ(θ)/∂θ²).
[3 marks]
(e) Using the properties of a regular distribution, show that
E(θ̂ · U) = 1.
(f) Using these properties, show that
Cov(θ̂, U) = 1.
[3 marks]
(g) Hence show that the variance of any unbiased estimator θ̂ of θ satisfies the lower bound
Var(θ̂) ≥ 1/I(θ).
[2 marks] [20 marks]
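The score properties in parts (c) and (d) can be illustrated numerically. A simulation sketch for the Exponential(λ) family (chosen here only as a convenient regular example; the seed and sizes are arbitrary), where U(λ) = n/λ − Σxi and I(λ) = n/λ²:

```python
import random
import statistics

random.seed(2)
lam, n, reps = 1.5, 50, 20_000

# For X1..Xn iid Exponential(lam), the score is U(lam) = n/lam - sum(x_i).
scores = []
for _ in range(reps):
    s = sum(random.expovariate(lam) for _ in range(n))
    scores.append(n / lam - s)

mean_u = statistics.fmean(scores)      # should be near E(U) = 0
var_u = statistics.pvariance(scores)   # should be near Var(U) = I(lam)
fisher = n / lam**2                    # I(lam) = -E(d^2 l / d lam^2) = n/lam^2

print(round(mean_u, 1), round(var_u, 1), round(fisher, 1))
```

The simulated mean of U sits near 0 and its variance near n/λ², illustrating E(U) = 0 and I(θ) = Var(U) = −E(∂²ℓ/∂θ²).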
3. Let X1, X2, ..., Xn be a random sample drawn independently from a common distribution which has p.d.f.
f(x; θ) = x²/(2θ³) e^(−x/θ), x > 0, θ > 0.
Note: Γ(k) = ∫₀^∞ t^(k−1) e^(−t) dt, for k > 0.
(a) Show that E(X^k) = (1/2)(k + 2)! θ^k, for k = 1, 2, ... .
[3 marks]
(b) Find the method of moments estimator θ̂1 of θ. [3 marks]
[3 marks]
[3 marks]
[3 marks]
(f) Is θ̂2 an MVUE of θ? Justify your answer. [3 marks]
interval for θ .
[2 marks] [20 marks]
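A simulation sketch for parts (a)–(b) (not part of the exam; the true θ, seed, and sample size are arbitrary). The stated density is the Gamma distribution with shape 3 and scale θ, so each X can be simulated as a sum of three independent Exponential draws with mean θ, and part (a) with k = 1 gives E(X) = 3θ:

```python
import random
import statistics

random.seed(3)
theta, n = 2.0, 100_000

# f(x; theta) = x^2/(2 theta^3) e^(-x/theta) is Gamma(shape 3, scale theta):
# each X is a sum of three independent Exponential(rate 1/theta) draws.
xs = [sum(random.expovariate(1 / theta) for _ in range(3)) for _ in range(n)]

# E(X) = (1/2) * 3! * theta = 3 theta, so the method of moments
# estimator is theta1_hat = Xbar / 3.
theta1_hat = statistics.fmean(xs) / 3

print(round(theta1_hat, 1))
```

With θ = 2 the estimator should land very close to 2, confirming the first-moment calculation.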
4. Consider the multinomial distribution with 4 categories, where the random variables X1 ,X2 ,X3 and X4 have the joint probability function
f(x; θ) = n!/(x1! x2! x3! x4!) p1^(x1) p2^(x2) p3^(x3) p4^(x4),
where p1, p2, p3, p4 are the category probabilities (functions of θ1 and θ2), x1, x2, x3, x4 ≥ 0, 0 < θ1, θ2 < 1, n = x1 + x2 + x3 + x4, x = (x1, x2, x3, x4)^T, and θ = (θ1, θ2)^T.
(a) Write down an expression for the log-likelihood function ℓ(θ).
[3 marks]
(b) Write down an expression for the score statistic vector U(x;θ).
[3 marks]
(c) Find the maximum likelihood estimator θ̂ of θ.
(d) Find the Fisher information matrix I(θ).
[3 marks]
(e) Find the asymptotic distribution of θ̂1 − θ̂2, as n → ∞. [3 marks]
(f) Consider using −2 log(LR) to test the null hypothesis H0: θ1 = θ2 against the alternative H1: θ1 ≠ θ2. Write down an expression for −2 log(LR) and the criterion that is used to reject H0 at the significance level α. [3 marks]
(g) Consider using −2 log(LR) to test the goodness-of-fit of the MLE θ̂0 of θ0.
[2 marks] [20 marks]
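The mechanics of a multinomial −2 log(LR) statistic can be sketched with made-up counts and hypothesised cell probabilities (illustrative numbers only; this does not use the exam's particular parametrisation in θ1, θ2):

```python
import math

# Hypothetical observed counts over the 4 categories, and hypothesised
# cell probabilities under H0 (made-up numbers for illustration).
counts = [38, 22, 25, 15]
p0 = [0.4, 0.2, 0.25, 0.15]
n = sum(counts)

# The unrestricted MLE of each cell probability is x_i / n, so
# -2 log(LR) = 2 * sum x_i * log( (x_i/n) / p0_i ).
lr_stat = 2 * sum(x * math.log((x / n) / p)
                  for x, p in zip(counts, p0) if x > 0)

# Under H0 this is asymptotically chi-squared with 3 degrees of freedom;
# reject at level alpha if lr_stat exceeds the chi-squared critical value.
print(round(lr_stat, 3))
```

Here the observed counts sit close to the hypothesised probabilities, so the statistic is small and H0 would not be rejected at any usual level.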
5. An experimenter observes independent observations
Y11 ,Y12 , . . . ,Y1,n1
Y21 ,Y22 , . . . ,Y2,n2
Y31 ,Y32 , . . . ,Y3,n3
where E(Yij) = βi. Denote by ǫij = Yij − βi the errors, and assume ǫij ∼ N(0, σ²) independently.
Further, let Yi = (Yi1, Yi2, ..., Yi,ni)^T and ǫi = (ǫi1, ǫi2, ..., ǫi,ni)^T, for i = 1, 2, 3. Also, 0n and 1n are vectors of length n with all elements equal to 0 and 1, respectively.
(a) Show that this model can be expressed as
    [Y1]   [1n1  0n1  0n1] [β1]   [ǫ1]
Y = [Y2] = [0n2  1n2  0n2] [β2] + [ǫ2] = Xβ + ǫ.
    [Y3]   [0n3  0n3  1n3] [β3]   [ǫ3]
[3 marks]
(b) Show that the least squares estimator of β = (β1, β2, β3)^T is β̂ = (Ȳ1, Ȳ2, Ȳ3)^T, where Ȳi is the i-th group sample mean.
(c) Show that the covariance matrix of β̂ is Var(β̂) = σ² diag(1/n1, 1/n2, 1/n3).
[3 marks]
(d) Verify that the estimate of σ² is
s² = [(n1 − 1)s1² + (n2 − 1)s2² + (n3 − 1)s3²] / (n1 + n2 + n3 − 3),
where si² = (ni − 1)^(−1) Σ_{j=1}^{ni} (Yij − Ȳi)², for i = 1, 2, 3.
(e) We wish to test the hypothesis H0 : β1 = β2 + β3 . Show that this hypothesis can be written in the form Aβ = c for matrix A and vector c. Find A and c. [3 marks]
(f) Hence, show that the Wald F-statistic for testing the null hypothesis H0: β1 = β2 + β3 against the alternative hypothesis H1: β1 ≠ β2 + β3 is given by
FW = (β̂1 − β̂2 − β̂3)² / [s²(1/n1 + 1/n2 + 1/n3)], where FW ∼ F_{1, n1+n2+n3−3} = t²_{n1+n2+n3−3}.
[3 marks]
(g) If the null hypothesis is true, then we can say that β1 = β2 + β3 and rewrite the model using only two parameters, e.g., β ∗ = (β1 ,β2 )T , in the form
Y = X∗β ∗ + ǫ ,
where ǫ = (ǫ1^T, ǫ2^T, ǫ3^T)^T. Write down the new design matrix X*.
[2 marks] [20 marks]
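The estimators in parts (b), (d), and (f) can be checked on a small made-up data set (illustrative numbers only; the group values and sizes are arbitrary):

```python
import statistics

# Small made-up data set with three groups.
groups = [
    [4.1, 3.9, 4.3, 4.0],          # Y_1j, n1 = 4
    [5.2, 5.0, 5.5],               # Y_2j, n2 = 3
    [2.8, 3.1, 2.9, 3.0, 3.2],     # Y_3j, n3 = 5
]

# Part (b): with the indicator design matrix, the least squares
# estimator of beta_i is simply the i-th group mean.
beta_hat = [statistics.fmean(g) for g in groups]

# Part (d): pooled estimate of sigma^2.
num = sum((len(g) - 1) * statistics.variance(g) for g in groups)
den = sum(len(g) for g in groups) - 3
s2 = num / den

# Part (f): Wald statistic for H0: beta1 = beta2 + beta3.
contrast = beta_hat[0] - beta_hat[1] - beta_hat[2]
fw = contrast**2 / (s2 * sum(1 / len(g) for g in groups))

print([round(b, 3) for b in beta_hat], round(s2, 4), round(fw, 2))
```

The group means recover β̂, the pooled sample variances give s², and the large F-value here reflects that for these invented data β1 is far from β2 + β3.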
6. Consider the subject of decision theory.
(a) Consider a decision-making problem that has 2 states of nature and needs to take one of 3 potential actions based on a random variable that takes one of 4 possible values. How many candidate decision rules are there in total?
[3 marks]
(b) For a decision-making problem, the following table gives R(di ,θj ), the risk of each candidate decision rule di for each state of nature θj .
θ1 θ2
d1 0 20
d2 5 17
d3 15 16
d4 20 13
d5 30 17
d6 35 14
d7 45 13
d8 50 10
Find all admissible rules.
[3 marks]
(c) Still with the table given in part (b), find the minimax rule(s). Justify your answer.
[3 marks]
(d) If a Bayesian has a prior belief that P(θ = θ1) = 0.4 and P(θ = θ2) = 0.6, will the Bayesian prefer d1 to d2? Justify your answer.
[3 marks]
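Parts (a)–(d) can be checked mechanically; a minimal sketch using the risk table above (for part (a), a rule assigns one of the 3 actions to each of the 4 observable values, giving 3⁴ rules):

```python
# Risk table from part (b): R(d_i, theta_1), R(d_i, theta_2).
risks = {
    "d1": (0, 20), "d2": (5, 17), "d3": (15, 16), "d4": (20, 13),
    "d5": (30, 17), "d6": (35, 14), "d7": (45, 13), "d8": (50, 10),
}

# Part (a): each of the 4 observed values maps to one of 3 actions.
n_rules = 3 ** 4

# Part (b): d is inadmissible if some other rule is at least as good in
# every state and strictly better in at least one.
def dominated(d):
    rd = risks[d]
    return any(all(r <= s for r, s in zip(risks[e], rd))
               and any(r < s for r, s in zip(risks[e], rd))
               for e in risks if e != d)

admissible = [d for d in risks if not dominated(d)]

# Part (c): the minimax rule minimises the worst-case risk.
minimax = min(risks, key=lambda d: max(risks[d]))

# Part (d): Bayes risk under the prior (0.4, 0.6).
bayes = {d: 0.4 * r1 + 0.6 * r2 for d, (r1, r2) in risks.items()}

print(n_rules, admissible, minimax,
      round(bayes["d1"], 1), round(bayes["d2"], 1))
```

This reproduces the pencil-and-paper answers: d5, d6, d7 are each dominated, d3 has the smallest maximum risk, and d1 has the smaller Bayes risk of the pair d1, d2.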
(e) Show that a conjugate prior for the Binomial distribution is the Beta distribution, and obtain the posterior distribution.
Note: The probability function of a Binomial(n,p) distribution is given by
f(x) = (n choose x) p^x (1 − p)^(n−x), x = 0, 1, ..., n,
and the probability density function of a Beta(α,β) distribution by
f(x) = [1/B(α, β)] x^(α−1) (1 − x)^(β−1), 0 < x < 1, α > 0, β > 0,
where B(α,β) is the Beta function.
[3 marks]
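The conjugate update in part (e) has a closed form: multiplying the Binomial likelihood by the Beta prior leaves a Beta kernel with updated parameters. A sketch, with made-up prior parameters and data:

```python
def beta_binomial_update(alpha, beta, x, n):
    """Posterior for p after observing x successes in n Binomial trials
    with a Beta(alpha, beta) prior: the likelihood p^x (1-p)^(n-x) times
    the prior kernel p^(alpha-1) (1-p)^(beta-1) is again a Beta kernel,
    so the posterior is Beta(alpha + x, beta + n - x)."""
    return alpha + x, beta + n - x

# Illustrative (made-up) numbers: Beta(2, 3) prior, 7 successes in 10 trials.
a1, b1 = beta_binomial_update(2, 3, 7, 10)
post_mean = a1 / (a1 + b1)
print(a1, b1, round(post_mean, 2))
```

Here the posterior is Beta(9, 6) with mean 9/15 = 0.6, pulled between the prior mean 2/5 and the sample proportion 7/10.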
(f) Let x be a random observation taken from N(µ, σ²) with σ² known, and suppose the prior distribution for µ is N(µ0, σ0²). Show that the posterior distribution for µ is N(µ1, σ1²), where
µ1 = σ1² (µ0/σ0² + x/σ²) and σ1² = (1/σ² + 1/σ0²)^(−1).
[3 marks]
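Part (f) amounts to "precisions add, and the posterior mean is a precision-weighted average"; a sketch with made-up values:

```python
# Posterior for mu when x ~ N(mu, s2) with s2 known and prior mu ~ N(mu0, s02):
# posterior precision is the sum of the data and prior precisions, and the
# posterior mean is the precision-weighted average of x and mu0.
def normal_posterior(x, s2, mu0, s02):
    s12 = 1 / (1 / s2 + 1 / s02)        # sigma_1^2 = (1/s2 + 1/s02)^(-1)
    mu1 = s12 * (x / s2 + mu0 / s02)    # mu_1 = sigma_1^2 (x/s2 + mu0/s02)
    return mu1, s12

# Illustrative numbers: observation x = 4 with sigma^2 = 1, prior N(0, 1).
mu1, s12 = normal_posterior(4.0, 1.0, 0.0, 1.0)
print(mu1, s12)
```

With equal data and prior precisions the posterior mean lands halfway between x = 4 and µ0 = 0, and the posterior variance halves to 0.5.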
(g) For the problem of estimating µ in part (f), is a central credible interval also the narrowest such interval? Justify your answer.
[2 marks] [20 marks]