Chapter 2 Time Series Processes
The time series models we discuss are:
White noise (WN)
Moving Average (MA)
Autoregressive (AR)
Random Walk (RW)
Autoregressive Moving Average (ARMA)

White Noise
White noise (WN) process: \(\epsilon_t\) is a time series with the following characteristics \[\begin{align} \mathbb{E}(\epsilon_t)&=0\\ \text{$\mathbb{V}$ar}(\epsilon_t)&=\mathbb{E}(\epsilon_t^2)=\sigma^2 < \infty\\ \text{$\mathbb{C}$ov}(\epsilon_t, \epsilon_{t-k})&=\gamma(k) =0 \;\;\;\;\;\;\;\; \text{for } k>0 \end{align}\]
The autocovariance \(\gamma(k)\) of a white noise is 0 for every lag \(k>0\). This indicates that the time series has no memory of itself, so there is no persistence in the white noise: \(\epsilon_t\) is neither correlated with nor influenced by \(\epsilon_{t-k}\).
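A minimal sketch in Python/NumPy (the parameter values and the helper `sample_autocov` are illustrative, not part of the text): simulate a Gaussian white noise and check that the sample autocovariances at lags \(k>0\) are close to 0 while the lag-0 autocovariance is close to \(\sigma^2\).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T = 1.0, 10_000
eps = rng.normal(0.0, sigma, size=T)          # epsilon_t ~ N(0, sigma^2)

def sample_autocov(x, k):
    """Sample autocovariance gamma(k) of the series x."""
    x = x - x.mean()
    return np.mean(x[k:] * x[:len(x) - k])

print(sample_autocov(eps, 0))                 # close to sigma^2 = 1
print(sample_autocov(eps, 1))                 # close to 0
print(sample_autocov(eps, 5))                 # close to 0
```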
2.1 Moving Average
Moving Average (MA) process: the time series \(y_t\) depends on the current and the past values of the error term \(\epsilon_t\).
Consider that \(y_t\) depends on the first lag of \(\epsilon_t\), \(\epsilon_{t-1}\). Then, the time series follows a moving average process of order 1, denoted MA(1): \[\begin{equation*} y_t=\mu+\epsilon_t+\theta \epsilon_{t-1}, \;\;\;\;\;\; \epsilon_t\sim \mathcal{N}(0,\sigma^2) \end{equation*}\] where \(\mu\) is a constant and \(\sigma^2\) is the variance of \(\epsilon_t\).
Consider that \(y_t\) depends on the first \(q\) lags of \(\epsilon_t\). Then, the moving average process is of order \(q\), MA(q): \[\begin{equation*} y_t=\mu+\epsilon_t+\theta_1 \epsilon_{t-1}+\ldots+\theta_q \epsilon_{t-q} \end{equation*}\] The stationarity condition always holds, since \(y_t\) is a finite weighted sum of white noise terms. For the MA(1), the first autocovariance is \[\gamma(1) = \text{$\mathbb{C}$ov}(y_t, y_{t-1})=\text{$\mathbb{C}$ov}(\mu+\epsilon_t+\theta \epsilon_{t-1}, y_{t-1})\]
Note: \(y_{t-1} = \mu + \epsilon_{t-1}+ \theta \epsilon_{t-2}\). Assume that the error terms are serially independent, i.e. \(\epsilon_t \perp \epsilon_{t-k} \;\; \forall k\geq1\).
\[\begin{align*} \text{$\mathbb{C}$ov}(\mu&+\epsilon_t+\theta \epsilon_{t-1}, y_{t-1}) =\\ \text{$\mathbb{C}$ov}(\mu, y_{t-1})+&\text{$\mathbb{C}$ov}(\epsilon_t, y_{t-1})+ \text{$\mathbb{C}$ov}(\theta\epsilon_{t-1}, y_{t-1}) \end{align*}\] The first two terms are zero: \(\mu\) is a constant, and \(\epsilon_t\) is independent of \(y_{t-1}\), which contains only past shocks. Hence
\[\text{$\mathbb{C}$ov}(y_t, y_{t-1})=\gamma(1)=\theta\text{$\mathbb{V}$ar}(\epsilon_{t-1})= \theta \sigma^2\] \[\text{$\mathbb{C}$ov}(y_t, y_{t-k})=\gamma(k)=0 \;\;\;\; \text{for} \;\; k>1\]
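A minimal sketch (with illustrative values \(\mu=0.5\), \(\theta=0.7\), \(\sigma=1\), not taken from the text): simulate an MA(1) and compare the sample autocovariances with the theoretical \(\gamma(1)=\theta\sigma^2\) and \(\gamma(k)=0\) for \(k>1\).

```python
import numpy as np

rng = np.random.default_rng(1)
mu, theta, sigma, T = 0.5, 0.7, 1.0, 100_000
eps = rng.normal(0.0, sigma, size=T + 1)
y = mu + eps[1:] + theta * eps[:-1]           # y_t = mu + eps_t + theta * eps_{t-1}

def sample_autocov(x, k):
    x = x - x.mean()
    return np.mean(x[k:] * x[:len(x) - k])

print(sample_autocov(y, 1), theta * sigma**2) # both close to 0.7
print(sample_autocov(y, 2))                   # close to 0
```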
2.2 Autoregressive Process
Autoregressive (AR) process: the time series \(y_t\) depends on its previous values and on an error term \(\epsilon_t\).
Consider that \(y_t\) depends only on its first lag and on \(\epsilon_t\). Then, the time series follows an autoregressive process of order 1, denoted AR(1): \[\begin{equation*} y_t=\mu+\phi y_{t-1}+\epsilon_t, \;\;\;\;\;\; \epsilon_t\sim \mathcal{N}(0,\sigma^2) \end{equation*}\] where \(\mu\) is an intercept and \(\sigma^2\) is the variance of \(\epsilon_t\).
Consider that \(y_t\) depends on its \(p\) past values. Then, the autoregressive process is of order \(p\), AR(p).
\[y_t=\mu+\phi_1 y_{t-1}+\ldots+\phi_p y_{t-p}+\epsilon_t\]
The stationarity condition holds if \(|\phi|\) is lower than 1.
Let’s consider a stationary AR(1) process. Its statistical properties are: \[\begin{align*} \mathbb{E}(y_t)&=\frac{\mu}{1-\phi}\\ \text{$\mathbb{V}$ar}(y_t)&=\frac{\sigma^2}{1-\phi^2}\\ \gamma(k)&=\phi^k\,\frac{\sigma^2}{1-\phi^2} \end{align*}\]
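A minimal sketch (with illustrative values \(\mu=1\), \(\phi=0.8\), \(\sigma=1\)): simulate a stationary AR(1) and compare the sample mean and variance with the theoretical \(\mu/(1-\phi)\) and \(\sigma^2/(1-\phi^2)\).

```python
import numpy as np

rng = np.random.default_rng(2)
mu, phi, sigma, T = 1.0, 0.8, 1.0, 200_000
y = np.empty(T)
y[0] = mu / (1 - phi)                          # start at the unconditional mean
for t in range(1, T):
    y[t] = mu + phi * y[t - 1] + rng.normal(0.0, sigma)

print(y.mean(), mu / (1 - phi))                # both close to 5
print(y.var(), sigma**2 / (1 - phi**2))        # both close to 2.78
```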
2.3 Wold Representation Theorem
A (covariance) stationary time series \(y_t\) can be represented as a Moving Average process of order infinity, MA(\(\infty\)): \[\begin{align*} y_t&=\sum_{j=0}^\infty\psi_j\epsilon_{t-j} \end{align*}\] To see how, consider an AR(1) process with \(|\phi|<1\): \[\begin{align*} y_t&=\phi y_{t-1}+\epsilon_t \end{align*}\] Substitute \(y_{t-1}\): \[\begin{align*} y_t&=\phi^2 y_{t-2}+ \phi \epsilon_{t-1} +\epsilon_t \end{align*}\] Recursively substituting \(y_{t-2}\), \(y_{t-3}\), and so on gives: \[\begin{align*} y_t&=\phi^{j+1} y_{t-j-1}+\phi^j \epsilon_{t-j}+ \ldots + \phi \epsilon_{t-1} +\epsilon_t \end{align*}\] Since \(|\phi|<1\), the term \(\phi^{j+1} y_{t-j-1}\) vanishes as \(j\to\infty\), so \[\begin{align*} y_t&= \sum_{j=0}^\infty \psi_j \epsilon_{t-j} \end{align*}\] where \(\psi_j =\phi^j\) and \(\sum_{j=0}^\infty|\psi_j|<\infty\).
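A minimal sketch (assumed value \(\phi=0.6\) and a truncation lag `J` chosen for illustration): build \(y_t\) from the AR(1) recursion and from a truncated MA(\(\infty\)) sum with \(\psi_j=\phi^j\), using the same shocks, and check that the two representations coincide up to a negligible truncation error.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, T, J = 0.6, 500, 50                       # J = truncation lag for the MA(inf) sum
eps = rng.normal(size=T)

# AR(1) recursion: y_t = phi * y_{t-1} + eps_t
y_ar = np.zeros(T)
y_ar[0] = eps[0]
for t in range(1, T):
    y_ar[t] = phi * y_ar[t - 1] + eps[t]

# Truncated Wold / MA(inf) representation: y_t ≈ sum_{j=0}^{J} phi^j * eps_{t-j}
y_ma = np.array([sum(phi**j * eps[t - j] for j in range(min(J, t) + 1))
                 for t in range(T)])

print(np.max(np.abs(y_ar - y_ma)))             # tiny: phi^j dies out quickly
```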
2.4 Autoregressive Moving Average Process
Autoregressive Moving Average (ARMA) process: the time series \(y_t\) is formed by an autoregressive component and a moving average one.
The representation of ARMA(1,1) combines AR(1) and MA(1): \[ y_t=\phi y_{t-1}+\epsilon_t+\theta \epsilon_{t-1}\]
The ARMA process of order (p,q) is:
\[y_t=\phi_1 y_{t-1}+\ldots+\phi_p y_{t-p}+\epsilon_t+\theta_1 \epsilon_{t-1}+\ldots+\theta_q \epsilon_{t-q}\]
The stationarity condition holds if \(|\phi|\) is lower than 1: it depends only on the autoregressive part, not on the moving average part.
Notice that ARMA(1,0) is an AR(1) and ARMA(0,1) is an MA(1).
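A minimal sketch (with illustrative values \(\phi=0.5\), \(\theta=0.4\), \(\sigma=1\)): simulate an ARMA(1,1). Setting `theta = 0` reduces it to an AR(1) and `phi = 0` to an MA(1), as noted above.

```python
import numpy as np

rng = np.random.default_rng(4)
phi, theta, sigma, T = 0.5, 0.4, 1.0, 1_000
eps = rng.normal(0.0, sigma, size=T)
y = np.zeros(T)
for t in range(1, T):
    # y_t = phi * y_{t-1} + eps_t + theta * eps_{t-1}
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

print(y[:5])
```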
2.5 Random Walk
If the parameter \(\phi=1\), the process is a random walk (RW): \[\begin{align*} y_t&=y_{t-1}+\epsilon_t \end{align*}\]
This time series is a sum of the random shocks \(\epsilon_t\) from time 1 up to time \(t\). Substituting \(y_{t-1}, y_{t-2}, \ldots\) recursively back to time 1 (with \(y_0=0\)) gives: \[y_t=\epsilon_1+\ldots+\epsilon_t\]
The stationarity condition does not hold (see below).
Note that \(\epsilon_t \; \forall t\) is a white noise process, \(\epsilon_t \sim \mathcal{N}(0, \sigma^2)\).
Since the shocks are independent, \(\text{$\mathbb{V}$ar}(y_t)=\text{$\mathbb{V}$ar}(\epsilon_1+\ldots+\epsilon_t)=t\,\sigma^2\). The variance increases with \(t\) \(\implies\) the variance depends on time \(\implies\) non-stationary process.
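A minimal sketch (the number of simulated paths is an illustrative choice): simulate many random-walk paths and check that the cross-sectional variance at time \(t\) grows like \(t\sigma^2\), confirming non-stationarity.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, T, n_paths = 1.0, 1_000, 5_000
eps = rng.normal(0.0, sigma, size=(n_paths, T))
y = eps.cumsum(axis=1)                         # y_t = eps_1 + ... + eps_t

for t in [100, 500, 1000]:
    print(t, y[:, t - 1].var())                # roughly 100, 500, 1000
```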