
MATH6174W1

1.   [25 marks] Assume that X1, . . . , Xn are independent, identically distributed random variables with probability density function

f_X(x; θ)  =  3θ³ / (x + θ)⁴,   x > 0,  θ > 0.

Also assume that x1, . . . , xn are observations of X1, . . . , Xn.

(a)  [5 marks] Show that E(Xi) = θ/2 and hence that θ̃ = 2X̄ is an unbiased estimator of θ, where X̄ = (1/n) Σ_{i=1}^{n} Xi.

(b)  [1 mark] Write down the log-likelihood function for θ given x = (x1, . . . , xn).

(c)  [2 marks] Find the score for θ for a sample of size n.

(d)  [5 marks] Show that the Fisher information for θ, for a sample of size n, is

I(θ)  =  3n / (5θ²).

(e)  [3 marks] Give a lower bound for the variance of unbiased estimators of θ and show that this cannot be attained.

(f)  [3 marks] Given that Var(Xi) = 3θ²/4, find the efficiency of θ̃.

(g)  [6 marks] Use the Central Limit Theorem to find the asymptotic distribution of θ̃ and hence an approximate 95% confidence interval for θ.
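A minimal simulation sketch of parts (a) and (g), assuming the density and estimator θ̃ = 2X̄ as written above; sampling uses inversion of the CDF F(x) = 1 − θ³/(x + θ)³, and the confidence interval plugs a sample estimate of Var(Xi) into the CLT approximation.

```python
import numpy as np

# Sketch: Monte Carlo check of part (a) and a CLT-based interval as in part (g),
# assuming f_X(x; theta) = 3*theta^3 / (x + theta)^4 and theta_tilde = 2 * xbar.
rng = np.random.default_rng(0)
theta, n = 2.0, 200

u = rng.uniform(size=n)
x = theta * ((1.0 - u) ** (-1.0 / 3.0) - 1.0)   # inverse-CDF draw from f_X

theta_tilde = 2.0 * x.mean()                     # unbiased estimator of theta

# Approximate 95% CI from the CLT: Var(theta_tilde) = 4 Var(X_i) / n,
# with Var(X_i) estimated from the sample for this illustration.
se = 2.0 * x.std(ddof=1) / np.sqrt(n)
ci = (theta_tilde - 1.96 * se, theta_tilde + 1.96 * se)

print(theta_tilde, ci)
```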

 

2.   [25 marks]

(a)  [3 marks] Consider the following two linear models, both with Y = Xβ + ε, but where under M0

ε ∼ N(0, σ²I_n)

and under M1

ε ∼ N(0, Σ),

where Σ is an invertible n × n covariance matrix.

Show that the ordinary least squares estimator of β,

β̂  =  (XᵀX)⁻¹XᵀY,

is unbiased under both models and find its covariance matrix under both models.

(b)  [2 marks] Consider from now on the following special cases of M0 and M1:

Yi = β0 + εi,  i = 1, . . . , m,     and     Yi = β0 + β1 + εi,  i = m + 1, . . . , 2m,

where under M0

εi ∼ N(0, σ²),  i = 1, . . . , 2m,

and under M1

εi ∼ N(0, σ1²),  i = 1, . . . , m,     and     εi ∼ N(0, σ2²),  i = m + 1, . . . , 2m.

Further assume that the εi, i = 1, . . . , 2m, are independent.

Write out the vectors and matrices Y, X, β and Σ in terms of Y1, . . . , Y2m, β0, β1, σ1² and σ2², when m = 2.

(c)  [6 marks] Show that under M1 the maximum likelihood estimates of β0, β1, σ1² and σ2² are given by

β̂0  =  ȳ1,

β̂1  =  ȳ2 − ȳ1,

σ̂1²  =  (1/m) Σ_{i=1}^{m} (yi − β̂0)²,

σ̂2²  =  (1/m) Σ_{i=m+1}^{2m} (yi − β̂0 − β̂1)²,

where ȳ1 = (1/m) Σ_{i=1}^{m} yi and ȳ2 = (1/m) Σ_{i=m+1}^{2m} yi. There is no need to check that the Hessian is negative definite.

(d)  [3 marks] Show that under model M0, the maximum likelihood estimate of σ² can be written as

σ̂²  =  (σ̂1² + σ̂2²)/2.

(e)  [5 marks] Show that the generalised likelihood ratio test for testing M0 against M1 rejects M0 when

(1 + F)(1 + 1/F) > k,

where

F  =  σ̂1² / σ̂2².

(f)  [3 marks] Explain why the generalised likelihood ratio test for testing M0 against M1 rejects M0 when F < 1/c or when F > c.

(g)  [3 marks] Explain why F follows an F distribution under M0 and give its degrees of freedom. You may state any standard results from distribution theory without proof.
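A short numerical sketch of this question, assuming the two-group model written out in part (b); it computes the maximum likelihood estimates from parts (c)-(d) and the statistic F = σ̂1²/σ̂2² from part (e) for simulated data.

```python
import numpy as np

# Sketch for Question 2, assuming Y_i = b0 + e_i (i = 1..m) and
# Y_i = b0 + b1 + e_i (i = m+1..2m), with group-specific variances under M1.
rng = np.random.default_rng(1)
m, b0, b1, s1, s2 = 50, 1.0, 0.5, 1.0, 2.0

y1 = b0 + s1 * rng.standard_normal(m)           # group 1 observations
y2 = b0 + b1 + s2 * rng.standard_normal(m)      # group 2 observations

b0_hat = y1.mean()                               # = ybar_1
b1_hat = y2.mean() - y1.mean()                   # = ybar_2 - ybar_1
s1_hat2 = np.mean((y1 - b0_hat) ** 2)            # MLE of sigma_1^2 (divides by m)
s2_hat2 = np.mean((y2 - b0_hat - b1_hat) ** 2)   # MLE of sigma_2^2
s_hat2 = 0.5 * (s1_hat2 + s2_hat2)               # MLE of sigma^2 under M0

F = s1_hat2 / s2_hat2                            # statistic from part (e)
print(b0_hat, b1_hat, s_hat2, F)
```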

 

3.   [25 marks]

Assume that X1, X2, . . . , Xn are independent, identically distributed N(θ, σ²) observations. Suppose that the joint prior distribution for θ and σ² is

π(θ, σ²)  ∝  1/σ²,    θ ∈ (−∞, ∞),  σ² ∈ (0, ∞).

(a)  [3 marks] Derive, up to a constant of proportionality, the joint posterior density of θ and σ².

(b)  [4 marks] Derive the conditional posterior distributions of θ given σ² and of σ² given θ. Name those distributions.

(c)  [8 marks] Derive the marginal posterior density of θ. Name that distribution and write down its mean and variance for n > 3.

(d)  [10 marks] Now assume that σ² is a known constant. Find the Bayes factor where one model corresponds to θ = 0 and the other model has θ following a N(0, τ²) distribution a priori.
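A Gibbs-sampling sketch related to parts (a)-(b); it assumes the prior above and that the conditional posteriors take the normal and inverse-gamma forms stated in the comments (the derivation itself is what the question asks for).

```python
import numpy as np

# Sketch for Question 3: Gibbs sampler for (theta, sigma^2), assuming
#   theta   | sigma^2, x ~ N(xbar, sigma^2 / n)
#   sigma^2 | theta,   x ~ Inverse-Gamma(n/2, sum_i (x_i - theta)^2 / 2)
# under the prior pi(theta, sigma^2) proportional to 1/sigma^2.
rng = np.random.default_rng(2)
x = rng.normal(1.0, 2.0, size=30)          # toy data for illustration
n, xbar = x.size, x.mean()

theta, sig2 = xbar, x.var()                # starting values
draws = []
for _ in range(5000):
    theta = rng.normal(xbar, np.sqrt(sig2 / n))
    scale = 0.5 * np.sum((x - theta) ** 2)
    sig2 = scale / rng.gamma(n / 2.0)      # Inverse-Gamma draw via a Gamma draw
    draws.append((theta, sig2))

draws = np.array(draws[500:])              # discard burn-in
print(draws.mean(axis=0))                  # posterior means of (theta, sigma^2)
```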

 


4.  Consider a dose response study where the aim of the experiment is to model the toxicity of a drug. Let xi = 0 if the drug does not produce substantial side effects, and xi = 1 if it does, when the dose yi is applied to a subject (i = 1, . . . , n). Each dose is applied to a different subject, and we assume that the xi are realisations from independent Bernoulli random variables with corresponding success probabilities pi.

A logistic regression model will be used to describe the relationship between xi and yi with

log( pi / (1 − pi) )  =  α + βyi.

(a)  [8 marks] Find the likelihood for x = (x1, . . . , xn) and hence show that, when α and β have independent standard Normal prior distributions, the posterior density (up to a normalising constant) is given by

π(α, β | x)  ∝  exp[ Σ_{i=1}^{n} xi(α + βyi) − 0.5α² − 0.5β² ] / Π_{i=1}^{n} (1 + exp(α + βyi)).

(b)  [7 marks] The joint posterior distribution for α and β can be explored using the Metropolis-Hastings algorithm. For the logistic regression example above, find expressions for the acceptance probability of a new proposed combination (α*, β*):

(i) using Normal distributions centred around the current iteration and with variance 1 as the proposal distributions,

(ii) using the prior distributions from part (a) as the proposal distributions.

(c)  [10 marks] If π(θ|x)/π(θ) < M < ∞ for all values of θ, show that the acceptance rate of the Metropolis-Hastings independence sampler is at least as high as that of the corresponding rejection sampler. Use the fact that the acceptance rate for any rejection sampler is less than or equal to 1/M and that the probability of accepting the next step proposed by π(θ) at any point θ^(t) is:

a(θ^(t))  =  ∫_{−∞}^{∞} π(θ) min{ 1, [π(θ|x) π(θ^(t))] / [π(θ^(t)|x) π(θ)] } dθ.

Describe the advantages and disadvantages of using rejection sampling and the Metropolis-Hastings independence sampler in this case.
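A sketch of a Metropolis-Hastings run for the posterior in part (a), using proposal choice (i) from part (b) (Normal proposals centred at the current value with variance 1); the data and the helper name log_post are illustrative assumptions, not part of the question.

```python
import numpy as np

# Sketch for Question 4: random-walk Metropolis-Hastings for the posterior of
# (alpha, beta) under the logistic regression model with N(0,1) priors.
rng = np.random.default_rng(3)
n = 100
y = rng.uniform(0.0, 4.0, size=n)                        # doses (illustrative)
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.2 * y)))              # true success probabilities
x = rng.binomial(1, p)                                    # side-effect indicators

def log_post(a, b):
    """Log posterior up to a constant: logistic log-likelihood + N(0,1) priors."""
    eta = a + b * y
    return np.sum(x * eta - np.log1p(np.exp(eta))) - 0.5 * a**2 - 0.5 * b**2

a_cur, b_cur = 0.0, 0.0
samples = []
for _ in range(10000):
    a_prop = a_cur + rng.standard_normal()                # symmetric proposal, sd 1
    b_prop = b_cur + rng.standard_normal()
    # Symmetric proposal, so the acceptance probability is min{1, posterior ratio}.
    if np.log(rng.uniform()) < log_post(a_prop, b_prop) - log_post(a_cur, b_cur):
        a_cur, b_cur = a_prop, b_prop
    samples.append((a_cur, b_cur))

samples = np.array(samples[2000:])                        # discard burn-in
print(samples.mean(axis=0))                               # posterior means of (alpha, beta)
```

With the independence proposal of choice (ii), the proposal density no longer cancels, so the acceptance ratio would also involve the prior evaluated at the current and proposed values.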