
Econ 410 Practice Problems

Day 9

Today we began updating our formal regression assumptions for multiple regression, and we will continue on Day 11. Together, the following assumptions are known as the Classical Linear Regression Model:

MLR.1 (Linear in Parameters) The population model (in other words, the true model) can be written as:

y = β0 + β1x1 + · · · + βkxk + u

where β0, . . . , βk are the population parameters and u is the unobservable random error.

MLR.2 (Random Sampling) We have a simple random sample of size n, {(yi, x1i, . . . , xki) : i = 1, 2, . . . , n}, following the population model defined in MLR.1.

MLR.3 (No Perfect Collinearity) In the sample, there are no exact linear relationships among the independent variables (including the constant term); see the sketch following these assumptions.

MLR.4 (Zero Conditional Mean) The error term (u) has an expected value of zero given any value of the explanatory variables. In other words, E(u|x1, . . . , xk) = 0.

MLR.5 (Homoskedasticity) The error term (u) has the same variance given any value of the explanatory variables. In other words, Var(u|x1, . . . , xk) = σ².
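To see why MLR.3 matters mechanically, here is a minimal sketch in Python with NumPy (simulated data; the variable names and numbers are illustrative, not from the course). When one regressor is an exact linear function of the other regressors and the constant, the design matrix loses a rank and X'X becomes singular, so the OLS estimates are not uniquely defined.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

x1 = rng.normal(size=n)
x2 = 3 * x1 - 2              # exact linear function of x1 and the constant
X = np.column_stack([np.ones(n), x1, x2])

# X has rank 2 instead of 3, so X'X is singular and cannot be inverted.
print(np.linalg.matrix_rank(X))    # 2
print(np.linalg.cond(X.T @ X))     # astronomically large: numerically singular
```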

These assumptions are useful because of the following key results:

1. MLR.1 - MLR.4 ⇒ OLS estimators of β0 through βk are unbiased (see the simulation sketch below)

2. MLR.1 - MLR.5 ⇒ OLS estimators of β0 through βk are BLUE (Gauss-Markov Theorem)
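The first result can be checked by simulation. Below is a minimal Monte Carlo sketch in Python (data generated to satisfy MLR.1-MLR.4; the true parameter values are made up for illustration): across many random samples, the average of the OLS estimates should land very close to the true parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
beta = np.array([1.0, 2.0, -0.5])      # true (beta0, beta1, beta2), made up
n, reps = 50, 5_000

estimates = np.empty((reps, 3))
for r in range(reps):
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    u = rng.normal(size=n)             # E(u | x1, x2) = 0 holds by construction
    y = beta[0] + beta[1] * x1 + beta[2] * x2 + u
    X = np.column_stack([np.ones(n), x1, x2])
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0]

# Averaging over replications approximates E(beta_hat); it should be ~beta.
print(estimates.mean(axis=0))          # approximately [1.0, 2.0, -0.5]
```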

Exercise 1: True or False: The following model violates MLR.1:

yi = β0 + β1xi + β2xi² + ui
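As a hint for Exercise 1, it may help to see how such a model is estimated in practice. The sketch below (illustrative, with simulated data and made-up coefficients) simply includes x² as another regressor column and runs ordinary OLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-2, 2, size=n)
# Made-up true parameters, just for illustration.
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(size=n)

# x**2 enters as an ordinary regressor column; ask yourself whether the
# model is still linear in the parameters (the betas).
X = np.column_stack([np.ones(n), x, x**2])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)                        # roughly [1.0, 0.5, -0.8]
```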

Exercise 2: True or False: The Gauss-Markov theorem states that the ordinary least squares estimators always have the smallest variance among all unbiased estimators.
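When weighing Exercise 2, it can help to simulate the kind of comparison the Gauss-Markov theorem makes. The sketch below (simulated data; the "endpoint" estimator is just one illustrative linear unbiased competitor, not from the course) pits the OLS slope against another estimator of β1 that is also linear in y and unbiased.

```python
import numpy as np

rng = np.random.default_rng(7)
beta0, beta1 = 1.0, 2.0                    # made-up true parameters
x = np.sort(rng.uniform(0, 10, size=30))   # regressors held fixed across samples
reps = 10_000

ols = np.empty(reps)
endpoint = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(size=x.size)
    ols[r] = np.cov(x, y, bias=True)[0, 1] / x.var()   # OLS slope
    endpoint[r] = (y[-1] - y[0]) / (x[-1] - x[0])      # linear unbiased competitor

# Both estimators are unbiased for beta1...
print(ols.mean(), endpoint.mean())     # both near 2.0
# ...but the OLS slope has the smaller sampling variance.
print(ols.var(), endpoint.var())
```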