STAT 153 - Introduction to Time Series
Spring 2023
Homework 1
1. (White noise) We say {Xt} is white noise, written WN(0, σ^2), if E[Xt] = 0, Var(Xt) = σ^2, and {Xt} is uncorrelated, i.e., Cov(Xt1, Xt2) = 0 for any t1 ≠ t2. Based on this definition, we see that not every white noise model consists of i.i.d. variables. This question provides one such counterexample.
Suppose that {Wt} and {Zt} are independent and identically distributed sequences, with P(Wt = 0) = P(Wt = 1) = 1/2 and P(Zt = −1) = P(Zt = 1) = 1/2. We define the time series model
Xt = Wt (1 − Wt−1) Zt.
Prove that {Xt} satisfies the above definition of white noise but that X1, X2, . . . are not i.i.d.
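The following R sketch is an optional numerical illustration of the process above (it is not part of the requested proof); the sample size n = 10000 and the seed are arbitrary choices for this example.

set.seed(153)
n <- 10000
W <- rbinom(n, size = 1, prob = 0.5)            # W_t: i.i.d. Bernoulli(1/2)
Z <- sample(c(-1, 1), size = n, replace = TRUE) # Z_t: i.i.d. uniform on {-1, 1}
X <- W[-1] * (1 - W[-n]) * Z[-1]                # X_t = W_t (1 - W_{t-1}) Z_t, for t = 2, ..., n
mean(X)                                         # sample mean: close to 0
var(X)                                          # sample variance: roughly constant in t (here 1/4)
acf(X, lag.max = 10, plot = FALSE)              # sample autocorrelations at lags >= 1: close to 0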
2. (Fundamentals of the covariance function) This question reviews some fundamental properties of the covariance function, which are essential for computing autocovariance functions.
(a) Cov(X,Y) = Cov(Y,X)
(b) Cov(X,X) = Var(X)
(c) For any constant a, Cov(aX,Y) = aCov(X,Y)
(d) For any constant a, Cov(a + X,Y) = Cov(X,Y)
(e) If X and Y are independent, Cov(X,Y) = 0
(f) Cov(X,Y) = 0 does not imply that X and Y are independent
(g) Cov(Σi aiXi, Σj bjYj) = Σi Σj ai bj Cov(Xi,Yj), where the ai, bj are constants
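As an optional sanity check (not a proof), the R snippet below verifies property (g) numerically on simulated data; the dimensions and coefficient values are arbitrary choices for this example.

set.seed(1)
X <- matrix(rnorm(3 * 1000), ncol = 3)   # columns play the role of X1, X2, X3
Y <- matrix(rnorm(2 * 1000), ncol = 2)   # columns play the role of Y1, Y2
a <- c(2, -1, 0.5); b <- c(3, 4)
lhs <- cov(X %*% a, Y %*% b)             # Cov(sum_i ai Xi, sum_j bj Yj)
rhs <- sum(outer(a, b) * cov(X, Y))      # sum_i sum_j ai bj Cov(Xi, Yj)
c(lhs = lhs, rhs = rhs)                  # the two agree up to numerical error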
3. (Stationarity) For each of the following time series, determine whether it is a stationary process. If it is, compute the mean and autocovariance function. Here, we assume that {Wt} is i.i.d. N(0, 1).
(a) Xt = t + W5
(b) Xt = t + Wt
(c) Xt = Wt^2
(d) Xt = Wt Wt−2
(e) Xt = Wt − Wt−1
(f) Let W1 and W2 be independent random variables, each with mean 0 and variance σ^2, and let ψ be a constant. Consider the following process:
Xt = W1 cos(ψt) + W2 sin(ψt)
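For intuition only (this does not replace the calculations), the R sketch below simulates several of the processes in this question so you can eyeball which sample paths look stationary; the sample size n = 200 and the seed are arbitrary choices.

set.seed(153)
n <- 200
W <- rnorm(n)                                 # W_t i.i.d. N(0, 1)
t <- 1:n
par(mfrow = c(2, 2))
plot.ts(t + W, main = "(b) Xt = t + Wt")
plot.ts(W^2, main = "(c) Xt = Wt^2")
plot.ts(W[3:n] * W[1:(n - 2)], main = "(d) Xt = Wt Wt-2")
plot.ts(diff(W), main = "(e) Xt = Wt - Wt-1")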
4. (ACF for prediction) Suppose we would like to predict a single stationary time series Xt with zero mean and autocorrelation function ρ(h) at some future time t + l, where l > 0.
(a) If we predict using only a constant multiple of Xt, i.e., cXt for some constant c, prove that the mean squared prediction error
MSE(c) = E[(Xt+l − cXt)^2]
is minimized by the value
c∗ = ρ(l).
(b) Show that the minimum mean squared prediction error is
MSE(c∗) = γ(0)[1 − ρ^2(l)].
(c) Show that if Xt+l = cXt indeed holds, then ρ(l) = 1 if c > 0, and ρ(l) = −1 if c < 0.
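As an optional numerical illustration of part (a) (not a proof), the R sketch below evaluates MSE(c) on a simulated AR(1) series and shows that it is minimized near ρ(l); the AR coefficient 0.6, the lag l = 2, and the sample size are arbitrary choices for this example.

set.seed(153)
x <- arima.sim(model = list(ar = 0.6), n = 5000)   # simulated AR(1) with phi = 0.6
l <- 2
mse <- function(c) mean((x[(l + 1):length(x)] - c * x[1:(length(x) - l)])^2)
cs <- seq(-1, 1, by = 0.01)
plot(cs, sapply(cs, mse), type = "l", xlab = "c", ylab = "MSE(c)")
abline(v = 0.6^l, lty = 2)                         # rho(l) = phi^l for an AR(1)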
5. (R simulation) Please include both the plots and the results in your answers. We recommend using R Markdown.
(a) Simulate a series of n = 500 AR(1) observations with σ = 1, and compute both the sample ACF ρ̂(h) and the theoretical ACF ρ(h) for h ≤ 20. Compare the differences between the sample ACF and the theoretical ACF. Hint: use relevant functions in R instead of hardcoding the ACF.
(b) Repeat part (a) using only n = 50. How does changing n affect the results?
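A minimal starting sketch for part (a) is given below; the AR(1) coefficient phi = 0.8 is an arbitrary placeholder (the problem leaves it unspecified), and acf() / ARMAacf() are the relevant built-in R functions.

set.seed(153)
n <- 500
phi <- 0.8                                            # placeholder AR(1) coefficient
x <- arima.sim(model = list(ar = phi), n = n, sd = 1) # sd = 1 corresponds to sigma = 1
sample_acf <- acf(x, lag.max = 20)                    # sample ACF rho-hat(h), with plot
theo_acf <- ARMAacf(ar = phi, lag.max = 20)           # theoretical ACF rho(h)
round(cbind(lag = 0:20, sample = sample_acf$acf[, , 1], theoretical = theo_acf), 3)
# For part (b), rerun the same code with n <- 50 and compare the two ACFs again.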
2023-02-08