ECON61001 ECONOMETRIC METHODS 2020
SECTION A
1. Let $A$ denote an $m \times n$ matrix with $rank(A) = n$. Show that $A'A$ is a positive definite matrix. [8 marks]
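The argument the question is after is the standard quadratic-form one; a brief sketch:

```latex
% For any nonzero c \in \mathbb{R}^n:
c'(A'A)c = (Ac)'(Ac) = \|Ac\|^2 \ge 0,
% with equality only if Ac = 0.  Since rank(A) = n, the n columns of A are
% linearly independent, so Ac = 0 forces c = 0.  Hence c'(A'A)c > 0 for
% every c \neq 0, i.e. A'A is positive definite.
```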
2. Let $\{(y_i, x_i')\}$ be a sequence of independently and identically distributed (i.i.d.) random vectors. Suppose that $y_i$ is a dummy variable and so has a sample space of $\{0, 1\}$ with $P(y_i = 1 \mid x_i) = \Phi(x_i'\theta_0)$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. Derive the (conditional) likelihood function based on $\{(y_i, x_i')\}$. [8 marks]
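The likelihood in question is $L(\theta) = \prod_i \Phi(x_i'\theta)^{y_i}[1-\Phi(x_i'\theta)]^{1-y_i}$. A minimal Python sketch of the corresponding log-likelihood follows; the data and the parameter value are invented purely for illustration:

```python
import numpy as np
from scipy.stats import norm

def probit_loglik(theta, y, X):
    """Probit conditional log-likelihood:
    sum_i [ y_i*ln Phi(x_i'theta) + (1-y_i)*ln(1 - Phi(x_i'theta)) ]."""
    p = norm.cdf(X @ theta)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Tiny made-up dataset: intercept plus one regressor, five observations.
X = np.array([[1.0, 0.5], [1.0, -1.0], [1.0, 2.0], [1.0, 0.0], [1.0, 1.5]])
y = np.array([1, 0, 1, 0, 1])
ll = probit_loglik(np.array([0.0, 1.0]), y, X)
```

At $\theta = 0$ every fitted probability is $\Phi(0) = 0.5$, so the log-likelihood collapses to $T\ln(0.5)$, which is a handy check on the implementation.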
3. Let {VT,T = 1, 2,...} be a sequence of scalar random variables and V be a scalar random variable.
(a) Provide a formal definition of what is meant by the statement that $V_T$ converges in distribution to $V$. [3 marks]
(b) Provide a formal definition of what is meant by the statement that $V_T$ converges in probability to $V$. [3 marks]
(c) What is the relationship between the two forms of convergence? Specifically, does convergence in distribution imply convergence in probability and/or vice versa? [2 marks]
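A standard counterexample (not part of the question itself) for part (c) takes $V \sim N(0,1)$ and $V_T = -V$ for every $T$: by symmetry $V_T$ has the same law as $V$, so convergence in distribution holds trivially, yet $V_T - V = -2V$ never shrinks. The sketch below quantifies this:

```python
from scipy.stats import norm

# V ~ N(0,1), V_T = -V for all T.  Then V_T -> V in distribution (same law),
# but |V_T - V| = |2V|, so for eps = 1:
eps = 1.0
prob_far = 2 * (1 - norm.cdf(eps / 2.0))  # P(|V_T - V| > eps) = P(|V| > eps/2)
# prob_far is a positive constant (about 0.617) for every T, so V_T does not
# converge in probability to V.
```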
4. A researcher wishes to estimate the following model:
$\ln(wage) = \beta_0 + \beta_1 educ + \beta_2 exper + \beta_3 expersq + u$
where $\ln(wage)$ is the log of the hourly wage, $educ$ is the number of years of education, $exper$ is the number of years of experience, $expersq = (exper)^2$, and $u$ is the error. The researcher believes that $educ$ may be an endogenous regressor and so elects to estimate the model via Two Stage Least Squares (2SLS) using $motheduc$ and $fatheduc$ as instruments, where these two variables are defined to be the number of years of education for the individual's mother and father respectively.
(a) Explain why $educ$ may be an endogenous regressor in this model. [3 marks]
(b) Explain how to implement 2SLS estimation of this model. [5 marks]
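The two-stage recipe in part (b) can be sketched numerically. All data below are simulated under made-up coefficients (ability is the unobserved source of endogeneity); the point is that regressing on the first-stage fitted values reproduces the one-shot 2SLS formula $\hat{\beta} = (X'P_Z X)^{-1}X'P_Z y$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data loosely mimicking the wage equation; all numbers invented.
ability = rng.normal(size=n)
motheduc = rng.normal(12, 2, size=n)
fatheduc = rng.normal(12, 2, size=n)
educ = 4 + 0.3 * motheduc + 0.3 * fatheduc + 0.5 * ability + rng.normal(size=n)
exper = rng.uniform(0, 20, size=n)
u = 0.5 * ability + rng.normal(size=n)          # error correlated with educ
lwage = 0.5 + 0.08 * educ + 0.04 * exper - 0.001 * exper**2 + u

X = np.column_stack([np.ones(n), educ, exper, exper**2])
# Instrument matrix: all exogenous regressors plus the excluded instruments.
Z = np.column_stack([np.ones(n), motheduc, fatheduc, exper, exper**2])

# Stage 1: regress educ on Z, keep fitted values.
educ_hat = Z @ np.linalg.lstsq(Z, educ, rcond=None)[0]
# Stage 2: OLS of lwage on (1, educ_hat, exper, expersq).
X_hat = np.column_stack([np.ones(n), educ_hat, exper, exper**2])
beta_2sls = np.linalg.lstsq(X_hat, lwage, rcond=None)[0]

# One-shot equivalent: beta = (X' P_Z X)^{-1} X' P_Z y.
Pz = Z @ np.linalg.pinv(Z)
beta_direct = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ lwage)
```

The equivalence holds because the exogenous columns of $X$ lie in the column space of $Z$, so projecting them changes nothing.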
5.(a) If $\{v_t\}$ is a scalar covariance (weakly) stationary univariate time series process then what restrictions does the process satisfy? [3 marks]
5.(b) Consider the time series regression model
$y_t = x_t'\beta_0 + u_t$   (1)
where $u_t$ follows an Autoregressive model of order one (an AR(1) process) with autoregressive parameter $\rho$. The Generalized Least Squares estimator of $\beta_0$ can be obtained for any given value of $\rho$ via Ordinary Least Squares based on the quasi-differenced version of (1), that is, from the regression of $\ddot{y}_t = y_t - \rho y_{t-1}$ on $\ddot{x}_t = x_t - \rho x_{t-1}$. Show that the quasi-differenced equation can be viewed as a more general linear time series regression model subject to the so-called common factor restriction. [5 marks]
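For reference, the derivation being asked for runs along these lines (a sketch, writing $w_t$ for the AR(1) innovation):

```latex
% Quasi-differencing (1) using u_t = \rho u_{t-1} + w_t gives
y_t - \rho y_{t-1} = (x_t - \rho x_{t-1})'\beta_0 + w_t ,
% i.e.
y_t = \rho y_{t-1} + x_t'\beta_0 + x_{t-1}'(-\rho\beta_0) + w_t .
% This is the unrestricted dynamic regression
y_t = \rho y_{t-1} + x_t'\gamma_1 + x_{t-1}'\gamma_2 + w_t
% subject to the common factor restriction \gamma_2 = -\rho\,\gamma_1 .
```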
SECTION B
6. Consider the linear regression model
$y_i = x_i'\beta_0 + u_i$   (2)
where $x_i$ is a $k \times 1$ vector of explanatory variables, $\beta_0$ is the $k \times 1$ vector of unknown regression coefficients and $u_i$ is the error term. Assume the following conditions hold: (i) Equation (2) is the true data generation process; (ii) $\{(u_i, x_i'),\ i = 1, 2, \ldots, N\}$ is an independently and identically distributed sequence of random vectors; (iii) $E[x_i x_i'] = Q$, a finite, positive definite matrix of constants; (iv) $E[u_i \mid x_i] = 0$; (v) $Var[u_i \mid x_i] = h(x_i)$ and $h(\cdot)$ is a positive function of unknown form. The OLS estimator of $\beta_0$ is $\hat{\beta}_N = (X'X)^{-1}X'y$ where $y$ is $N \times 1$ with $i$th element $y_i$, $X$ is $N \times k$ with $i$th row $x_i'$, and $X$ is assumed to have rank equal to $k$.
(a) Is $\hat{\beta}_N$ an unbiased estimator of $\beta_0$ in this model? Provide a formal justification for your answer. [8 marks]
(b) Assume $N^{1/2}(\hat{\beta}_N - \beta_0) \xrightarrow{d} N(0, V)$ and $\hat{V}_N \xrightarrow{p} V$. Define $R$ to be a $n_r \times k$ matrix of specified constants such that $rank\{R\} = n_r$, and $r$ to be a $n_r \times 1$ vector of specified constants. Consider the test statistic
$W_N = N(R\hat{\beta}_N - r)'(R\hat{V}_N R')^{-1}(R\hat{\beta}_N - r)$.
Assuming also that $R\beta_0 = r$, show that $W_N \xrightarrow{d} \chi^2_{n_r}$, being sure to clearly state any large sample distributional results used in your answer. [12 marks]
(c) A colleague assumes that $h(x_i) = \sqrt{x_i'x_i}$ and then bases inference about $\beta_0$ on the resulting Generalized Least Squares (GLS) estimator. Evaluate the advantages and disadvantages of this approach relative to inferences based on the OLS estimator of $\beta_0$. [10 marks]
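To make the statistic $W_N$ of part (b) concrete, the sketch below computes it on simulated data, using White's heteroskedasticity-robust estimator as one admissible choice of $\hat{V}_N$ under condition (v). The data-generating numbers and the restriction tested are invented for illustration:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
N, k = 400, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
beta0 = np.array([1.0, 0.5, 0.0])
# Heteroskedastic errors: Var[u_i | x_i] = h(x_i), unknown to the analyst.
u = rng.normal(size=N) * np.sqrt(0.5 + X[:, 1] ** 2)
y = X @ beta0 + u

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat

# White's estimator of V = Q^{-1} E[u^2 x x'] Q^{-1}.
Qinv = np.linalg.inv(X.T @ X / N)
meat = (X * e[:, None] ** 2).T @ X / N
V_hat = Qinv @ meat @ Qinv

# Test H0: R beta0 = r, here the (true) restriction beta_3 = 0, so n_r = 1.
R = np.array([[0.0, 0.0, 1.0]])
r = np.array([0.0])
diff = R @ beta_hat - r
W = N * diff @ np.linalg.solve(R @ V_hat @ R.T, diff)
p_value = 1 - chi2.cdf(W, df=R.shape[0])   # compare W with chi^2_{n_r}
```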
7.(a) Consider the model
$u_t = w_t + \theta w_{t-1}$,  $t = 1, 2, \ldots, T$,   (3)
where $\theta \neq 0$, and $\{w_t\}$ is a white noise process with variance $\sigma_w^2$. Let $u$ denote the $T \times 1$ vector with $t$th element $u_t$. Show that $Var[u] = \Sigma$, where $\Sigma$ is a $T \times T$ matrix with $(t,s)$th element $\sigma_{t,s}$ and the non-zero elements of $\Sigma$ are given by:
$\sigma_{t,t} = \sigma_w^2(1 + \theta^2)$ for $t = 1, 2, \ldots, T$;
$\sigma_{t,s} = \sigma_w^2\theta$ for $s = t + 1$, $t = 1, 2, \ldots, T-1$ and $t = s + 1$, $s = 1, 2, \ldots, T-1$. [14 marks]
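The stated covariance structure can be verified numerically: writing $u = Cw$ for the matrix $C$ that maps the white-noise vector into the MA(1) process, $Var[u] = \sigma_w^2 CC'$ must equal the claimed $\Sigma$. The parameter values below are arbitrary illustrations:

```python
import numpy as np

T, theta, sigma2_w = 6, 0.4, 1.5  # illustrative values only

# u_t = w_t + theta*w_{t-1} means u = C w with w = (w_0, w_1, ..., w_T)'.
C = np.zeros((T, T + 1))
for t in range(T):
    C[t, t] = theta      # loads w_{t-1}
    C[t, t + 1] = 1.0    # loads w_t
var_u = sigma2_w * C @ C.T

# The claimed Sigma: diagonal sigma_w^2*(1+theta^2), first off-diagonals
# sigma_w^2*theta, zeros elsewhere.
Sigma = np.zeros((T, T))
np.fill_diagonal(Sigma, sigma2_w * (1 + theta**2))
for t in range(T - 1):
    Sigma[t, t + 1] = Sigma[t + 1, t] = sigma2_w * theta
```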
7.(b) A researcher estimates the following model for monthly wage growth in an industry based on monthly data from February 1947 to December 1997,
$y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \varepsilon_t$
where $y_t$ is the wage growth in the industry in period $t$, $x_{1,t}$ is the growth in the minimum wage in period $t$, $x_{2,t}$ is the growth in the consumer price index in period $t$, and $\varepsilon_t$ denotes the error. If $x_t = (x_{1,t}, x_{2,t})'$ is strictly exogenous in this model then what restrictions must $\{x_t, \varepsilon_t\}$ satisfy? [6 marks]
(c) OLS estimation of the model in part (b) yields the following results:
$\hat{y}_t = 0.002 + 0.151\,x_{1,t} - 0.244\,x_{2,t}$
[0.001] [0.050] [0.075]
where conventional OLS standard errors are in parentheses ( ) and Newey-West standard errors are in square brackets [ ]. The p-value of the Breusch-Godfrey test for serial correlation in the errors at order 1 is 0.005. Test whether these regression results suggest that an increase in the growth rate of the minimum wage leads to an increase in the growth rate of the industry wage, being sure to explain clearly: the null and alternative hypotheses in terms of the relevant regression parameters; the test statistic (including a justification for your choice of standard errors); the decision rule; and the outcome of the test. [10 marks]
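One way to organize the computation (a sketch, not the model answer; the numbers come from the output above, and the 5% one-sided normal critical value is an assumed convention):

```python
from scipy.stats import norm

beta1_hat, se_nw = 0.151, 0.050  # coefficient on minimum-wage growth; Newey-West SE

# H0: beta_1 = 0 versus H1: beta_1 > 0 (one-sided).  The Breusch-Godfrey
# p-value of 0.005 points to serially correlated errors, which motivates
# basing the test on the Newey-West rather than the conventional OLS SE.
t_stat = beta1_hat / se_nw                 # 0.151 / 0.050 = 3.02
p_one_sided = 1 - norm.cdf(t_stat)
reject_at_5pct = t_stat > norm.ppf(0.95)   # critical value about 1.645
```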
8. Consider the Instrumental Variables (IV) estimator of the $k \times 1$ parameter vector $\beta_0$ in the linear regression model
$y_i = x_i'\beta_0 + u_i$,   (4)
based on the population moment condition
$E[z_i u_i(\beta_0)] = 0$,   (5)
where $u_i(\beta) = y_i - x_i'\beta$, $x_i = (1, x_{2,i}')'$ is a $k \times 1$ vector and $z_i$ is a $q \times 1$ vector. It is assumed that the instruments satisfy the relevance condition and so,
$E[z_i u_i(\beta)] \neq 0$, for all $\beta \neq \beta_0$.   (6)
(a) Assuming equation (5) holds, show that the relevance condition in (6) can be equivalently stated as $rank\{E[z_i x_i']\} = k$. [8 marks]
(b) Suppose that $q = k = 2$ and $z_i = (1, z_{2,i})'$. Show that the relevance condition is satisfied if and only if $Corr(x_{2,i}, z_{2,i}) \neq 0$. [8 marks]
(c) Now assume that $q = k$ so that the IV estimator is given by:
$\hat{\beta}_{IV} = (Z'X)^{-1}Z'y$,   (7)
where $Z$ is the $N \times k$ matrix with $i$th row $z_i'$, $X$ is the $N \times k$ matrix with $i$th row $x_i'$, and $y$ is the $N \times 1$ vector with $i$th element $y_i$. Further assume that $\{(u_i, x_i', z_i')';\ i = 1, 2, \ldots, N\}$ is a sequence of independently and identically distributed (i.i.d.) random vectors that satisfy: (i) equation (4); (ii) $E[u_i \mid z_i] = 0$; (iii) $Var[u_i \mid z_i] = \sigma_0^2$, a positive, finite constant; (iv) $E[z_i x_i'] = Q_{zx}$, a nonsingular finite matrix; and (v) $E[z_i z_i'] = Q_{zz}$, a nonsingular finite matrix.
Show that
$N^{1/2}(\hat{\beta}_{IV} - \beta_0) \xrightarrow{d} N(0, V_{IV})$
where $V_{IV} = \sigma_0^2 Q_{zx}^{-1} Q_{zz} \{Q_{zx}^{-1}\}'$, being sure to clearly state any large sample distributional results used in your answer. [14 marks]
Hint: you may quote the generic form of the Weak Law of Large Numbers, $N^{-1}\sum_{i=1}^{N} w_i \xrightarrow{p} \mu_w$, but must verify $\mu_w$ for the specific choices of $w_i$ relevant to your answer; you may also quote the generic form of the Central Limit Theorem, $N^{-1/2}\sum_{i=1}^{N}(w_i - \mu_w) \xrightarrow{d} N(0, \Omega_w)$, but must verify $\mu_w$ and $\Omega_w$ for the specific choices of $w_i$ relevant to your answer.
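The estimator (7) and a plug-in estimate of $V_{IV}$ can be sketched on simulated data. The just-identified design below (one endogenous regressor, one instrument) and all its coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000

# Synthetic just-identified setup (q = k = 2): x_2 endogenous, z_2 its instrument.
z2 = rng.normal(size=N)
v = rng.normal(size=N)
u = 0.6 * v + rng.normal(size=N)   # error correlated with x_2 through v
x2 = 1.0 + 0.8 * z2 + v            # relevance: Corr(x_2, z_2) != 0
beta0 = np.array([2.0, -1.0])

Z = np.column_stack([np.ones(N), z2])
X = np.column_stack([np.ones(N), x2])
y = X @ beta0 + u

# IV estimator (7): beta_hat = (Z'X)^{-1} Z'y.
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

# Plug-in estimate of V_IV = sigma0^2 Qzx^{-1} Qzz (Qzx^{-1})'.
e = y - X @ beta_iv
sigma2_hat = e @ e / N
Qzx_inv = np.linalg.inv(Z.T @ X / N)
Qzz = Z.T @ Z / N
V_iv = sigma2_hat * Qzx_inv @ Qzz @ Qzx_inv.T
```

With $N = 1000$ and a strong instrument, $\hat{\beta}_{IV}$ lands close to the true $(2, -1)'$, consistent with the asymptotic result above.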
9. Consider the linear regression model
$y = X\beta_0 + u$
where $y$ and $u$ are $T \times 1$ vectors, and $X$ is a $T \times k$ matrix. Suppose the following conditions hold: (i) $X$ is fixed in repeated samples with $rank(X) = k$; (ii) $u \sim N(0, \sigma_0^2 I_T)$. Let $\theta$ be the $(k + 1) \times 1$ vector consisting of the parameters of this model, that is, $\theta = (\beta_0', \sigma_0^2)'$, and let $\hat{\theta}_T = (\hat{\beta}_T', \hat{\sigma}_T^2)'$ denote the Maximum Likelihood estimator (MLE) of $\theta$.
(a) Show that the log likelihood function is given by
$-\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln(\sigma^2) - \frac{(y - X\beta)'(y - X\beta)}{2\sigma^2}$,
being sure to justify each step of your derivation. [10 marks]
Hint: if $v$ is an $n \times 1$ random vector and $v \sim N(\mu, \Omega)$ then its probability density function is given by:
$f(v; \mu, \Omega) = (2\pi)^{-n/2}\,|\Omega|^{-1/2}\exp\{-(v - \mu)'\Omega^{-1}(v - \mu)/2\}$.
(b) By considering the score equations associated with maximum likelihood estimation of this model, show that
$\hat{\beta}_T = (X'X)^{-1}X'y$,
$\hat{\sigma}_T^2 = (y - X\hat{\beta}_T)'(y - X\hat{\beta}_T)/T$,
being sure to clearly explain the steps in your derivation. Note: you do not need to consider the second order conditions. [14 marks]
Hint: for $A$, $a$ and $\beta$ of dimensions $p \times p$, $p \times 1$ and $p \times 1$ respectively we have: (i) if $h(\beta) = a'\beta$ then $\partial h(\beta)/\partial \beta = a$; (ii) if $g(\beta) = \beta'A\beta$ then $\partial g(\beta)/\partial \beta = (A + A')\beta$.
(c) Assuming the conditions given in the question hold, $\hat{\beta}_T$ can be shown to be optimal in the sense that it is efficient relative to two appropriately defined classes of estimators. State these two optimality properties. [6 marks]
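The closed forms in part (b) are easy to check numerically: compute $\hat{\beta}_T$ and $\hat{\sigma}_T^2$ from the formulas and confirm that no nearby parameter value achieves a higher log likelihood. The simulated design below (sample size, coefficients, perturbations) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta0, sigma2_0 = np.array([1.0, 2.0]), 0.5
y = X @ beta0 + rng.normal(scale=np.sqrt(sigma2_0), size=T)

def loglik(beta, sigma2):
    """Log likelihood from part (a)."""
    e = y - X @ beta
    return (-T / 2 * np.log(2 * np.pi) - T / 2 * np.log(sigma2)
            - e @ e / (2 * sigma2))

# Closed-form MLEs from the score equations:
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # identical to OLS
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / T                 # divisor T, not T - k

ll_star = loglik(beta_hat, sigma2_hat)         # value at the maximum
```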