Autocorrelation

The Durbin-Watson Test

Durbin and Watson have suggested a test to detect the presence of autocorrelation which is applicable to small samples. However, the test is appropriate only for the first-order autoregressive scheme ($u_t =  \rho u_{t-1} + \varepsilon_t$). The testing procedure for the Durbin-Watson test is:

Step 1: Null and Alternative Hypothesis

The null hypothesis is $H_0:\rho=0$ (that is, $u$’s are not autocorrelated with a first-order scheme)

The alternative hypothesis is $H_1: \rho \ne 0$ (that is, $u$’s are autocorrelated with a first-order scheme)

Step 2: Level of Significance

Choose an appropriate level of significance, such as 1%, 5%, or 10%.

Step 3: Test Statistic

To test the null hypothesis, the Durbin-Watson statistic is

$$d = \frac{\sum\limits_{t=2}^n (u_t - u_{t-1})^2}{\sum\limits_{t=1}^n u_t^2}$$

The value of $d$ lies between 0 and 4; when $d=2$, then $\rho=0$. It means that testing $H_0:\rho=0$ is equivalent to testing $H_0:d=2$.

\begin{align*}
d&= \frac{\sum\limits_{t=2}^n (u_t - u_{t-1})^2}{\sum\limits_{t=1}^n u_t^2}\\
&=\frac{ \sum\limits_{t=2}^n (u_t^2 + u_{t-1}^2 - 2u_t u_{t-1} ) }{\sum\limits_{t=1}^n u_t^2} \\
&=\frac{ \sum\limits_{t=2}^n u_t^2 + \sum\limits_{t=2}^n u_{t-1}^2 - 2 \sum\limits_{t=2}^n u_t u_{t-1} }{\sum\limits_{t=1}^n u_t^2}
\end{align*}

The Durbin-Watson statistic is simply the ratio of the sum of squared differences in the successive residuals to the residual sum of squares. The sums in the numerator run over $n-1$ terms (from $t=2$ to $t=n$) because of the lagged values.
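
In code, the statistic is straightforward to compute from a residual series; a minimal numpy sketch (the regression data below are synthetic and purely illustrative):

```python
import numpy as np

def durbin_watson(residuals):
    """d = sum of squared successive differences of the residuals
    divided by the residual sum of squares."""
    e = np.asarray(residuals, dtype=float)
    num = np.sum(np.diff(e) ** 2)   # sum_{t=2}^{n} (e_t - e_{t-1})^2
    den = np.sum(e ** 2)            # sum_{t=1}^{n} e_t^2
    return num / den

# Illustrative use: residuals from a simple OLS fit on synthetic data
rng = np.random.default_rng(0)
x = np.arange(50, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(size=50)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
d = durbin_watson(resid)  # close to 2 when the errors are white noise
```

A ready-made implementation is also available as `statsmodels.stats.stattools.durbin_watson`.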

For large samples $\sum\limits_{t=2}^n u_t^2$, $\sum\limits_{t=2}^n u_{t-1}^2$ and $\sum\limits_{t=1}^n u_t^2$ are all approximately equal. Therefore,

\begin{align*}
d &\approx \frac{2 \sum\limits_{t=2}^n u_t^2}{\sum\limits_{t=2}^n u_{t-1}^2} - \frac{2 \sum\limits_{t=2}^n u_tu_{t-1} }{ \sum\limits_{t=2}^n u_{t-1}^2 }\\
& \approx 2 \left[ 1- \frac{\sum u_t u_{t-1} }{ \sum u_{t-1}^2 }\right]\\
\text{but }\,\,\, \hat{\rho} &= \frac{\sum u_t u_{t-1}}{\sum u_{t-1}^2}
\end{align*}

Therefore $d\approx 2(1-\hat{\rho})$
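
The approximation $d \approx 2(1-\hat{\rho})$ can be checked numerically; a sketch that simulates an AR(1) error series (the sample size and $\rho$ below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 5000, 0.6
# Simulate u_t = rho * u_{t-1} + eps_t
eps = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

d = np.sum(np.diff(u) ** 2) / np.sum(u ** 2)            # Durbin-Watson statistic
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)  # first-order rho estimate
# For large n, d and 2 * (1 - rho_hat) agree closely
```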

Since $\hat{\rho}$ lies between $-1$ and $1$, it is obvious that the value of $d$ lies between 0 and 4.

Firstly: If there is no autocorrelation, then $\hat{\rho}=0$ and $d=2$; it means that from the sample data $d^*\approx 2$. We accept that there is no autocorrelation.

Secondly: If $\hat{\rho}=+1$, then $d=0$ and we have perfect positive autocorrelation. Therefore, if $0<d^*<2$, there is some degree of positive autocorrelation (which is stronger the closer $d^*$ is to zero).

Thirdly: If $\hat{\rho}=-1$, then $d=4$ and we have perfect negative autocorrelation. Therefore, if $2<d^*<4$, there is some degree of negative autocorrelation (which is stronger the closer $d^*$ is to 4).

The next step is to use the sample residuals ($\hat{u}_t$'s) and compute the empirical value of the Durbin-Watson statistic $d^*$.

Finally, the empirical $d^*$ must be compared with the theoretical values of $d$, that is, the values of $d$ which define the critical region of the test.

The problem with this test is that the exact distribution of $d$ is not known. However, Durbin and Watson established upper ($d_u$) and lower ($d_l$) limits for the significance levels of $d$ which are appropriate to the hypothesis of zero first-order autocorrelation against the alternative hypothesis of positive first-order autocorrelation. Durbin and Watson tabulated these upper and lower values at the 5% and 1% levels of significance.

Critical Region of $d$ test

Durbin-Watson Test
  • If $d^* < d_l$, we reject the null hypothesis of no autocorrelation and accept that there is positive autocorrelation of the first order.
  • If $d^* > (4-d_l)$, we reject the null hypothesis of no autocorrelation and accept that there is negative autocorrelation of the first order.
  • If $d_u < d^* < (4-d_u)$, we accept the null hypothesis of no autocorrelation.
  • If $d_l < d^* < d_u$ or $(4-d_u) < d^* < (4-d_l)$, the test is inconclusive.
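
The decision rule above can be written as a small helper; note that the $d_l$ and $d_u$ values in the example call are illustrative only and must be taken from the Durbin-Watson tables for the actual $n$, number of regressors, and significance level:

```python
def dw_decision(d_star, d_l, d_u):
    """Durbin-Watson decision rule for the empirical statistic d_star,
    given the tabulated lower (d_l) and upper (d_u) bounds."""
    if d_star < d_l:
        return "reject H0: positive first-order autocorrelation"
    if d_star > 4 - d_l:
        return "reject H0: negative first-order autocorrelation"
    if d_u < d_star < 4 - d_u:
        return "accept H0: no autocorrelation"
    return "inconclusive"

# Illustrative bounds (look up the real ones in the DW tables)
print(dw_decision(1.20, d_l=1.35, d_u=1.49))
print(dw_decision(2.00, d_l=1.35, d_u=1.49))
```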

Assumptions underlying the $d$ Statistics

  • The regression model includes an intercept term. If it is not present, as in the case of regression through the origin, it is essential to rerun the regression including the intercept term to obtain the RSS.
  • The explanatory variables, $X$’s are non-stochastic or fixed in repeated sampling.
  • The disturbances $u_t$ are generated by the first-order autoregressive scheme: $u_t=\rho u_{t-1} +\varepsilon_t$ (it cannot be used to detect higher-order autoregressive schemes).
  • The error term $u_t$ is assumed to be normally distributed.
  • The regression model does not include the lagged value(s) of the dependent variable as one of the explanatory variables. The Durbin-Watson test is inappropriate for a model of the type $$Y_t=\beta_1 + \beta_2X_{2t} + \beta_3 X_{3t} + \cdots+ \beta_k X_{kt} + \gamma Y_{t-1}+u_t,$$ where $Y_{t-1}$ is the one-period lagged value of $Y$.
  • There are no missing observations in the data.

Limitations or Shortcomings of the Durbin-Watson Test Statistic

The Durbin-Watson test has several shortcomings:

  • The $d$ statistic is not an appropriate measure of autocorrelation if the explanatory variables include lagged values of the endogenous variable.
  • The Durbin-Watson test is inconclusive if the computed value lies between $d_l$ and $d_u$.
  • It is inappropriate for testing higher-order serial correlation or for other forms of autocorrelation.

An Asymptotic or Large Sample Test

Under the null hypothesis that $\rho=0$ and assuming that the sample size $n$ is large, it can be shown that $\sqrt{n}\hat{\rho}$ follows the normal distribution with 0 mean and variance 1, i.e. asymptotically,

$$\sqrt{n}\,\, \hat{\rho} \sim N(0, 1)$$
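
A sketch of this large-sample test, using only numpy and the standard normal CDF via `math.erf` (the AR(1) residual series below is simulated for illustration):

```python
import numpy as np
from math import sqrt, erf

def large_sample_rho_test(u):
    """Test H0: rho = 0 using the asymptotic result sqrt(n)*rho_hat ~ N(0, 1)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
    z = sqrt(n) * rho_hat
    # Two-sided p-value; Phi(x) = (1 + erf(x / sqrt(2))) / 2
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Strongly autocorrelated residuals should be decisively rejected
rng = np.random.default_rng(5)
eps = rng.normal(size=500)
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.7 * u[t - 1] + eps[t]
z, p = large_sample_rho_test(u)  # large z, tiny p
```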

Residuals plot for Detection of Autocorrelation

The existence and pattern of autocorrelation may be detected using a graphical representation of the residuals obtained from ordinary least squares (OLS) regression. One can draw the following residual plots for the detection of autocorrelation:

  • A plot of the residuals against time.
  • A plot of $\hat{u}_t$ against $\hat{u}_{t-1}$.
  • A plot of standardized residuals against time.

Note that since the population disturbances $u_t$ are not directly observable, we use their proxies, the residuals $\hat{u}_t$.
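
The series needed for these three plots can be prepared from an OLS fit as follows; a sketch with synthetic data (plotting each array with matplotlib is then a one-liner):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 40)
y = 1.0 + 2.0 * x + rng.normal(size=40)
slope, intercept = np.polyfit(x, y, 1)
u_hat = y - (intercept + slope * x)   # OLS residuals: proxies for u_t

# Standard error of regression (two estimated parameters)
sigma_hat = np.sqrt(np.sum(u_hat ** 2) / (len(u_hat) - 2))
std_resid = u_hat / sigma_hat         # standardized residuals, plotted against time

# Pairs (u_hat[t-1], u_hat[t]) for the lag plot
lag_pairs = np.column_stack([u_hat[:-1], u_hat[1:]])
```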

[Figure: residual plots showing patterns of positive and negative autocorrelation]
  • A random pattern of residuals indicates the absence of autocorrelation.
  • Visual examination of $\hat{u}_t$ or $\hat{u}_t^2$ can provide useful information not only about the presence of autocorrelation but also about the presence of heteroscedasticity. Similarly, the examination of $\hat{u}_t$ and $\hat{u}_t^2$ provides useful information about model inadequacy or specification bias.
  • The standardized residuals are computed as $\frac{\hat{u}_t}{\hat{\sigma}}$, where $\hat{\sigma}$ is the standard error of the regression.

Note: The plot of residuals against time is called a time sequence plot. For time-series data, the researcher can plot (graphically draw) the residuals versus time; one may expect to observe a random pattern in such a plot, indicating that the data are not autocorrelated. However, if some pattern (other than random) is observed in the graphical representation of the data, it means that the data are autocorrelated.


First Order Autocorrelation

Consider the multiple regression model

$$Y_t=\beta_1+\beta_2 X_{2t}+\beta_3 X_{3t}+\cdots+\beta_k X_{kt}+u_t,$$

in which the current observation of the error term ($u_t$) is a function of the previous (lagged) observation of the error term ($u_{t-1}$). That is,

\begin{align*}
u_t = \rho u_{t-1} + \varepsilon_t, \tag*{eq 1}
\end{align*}

where $\rho$ is the parameter depicting the functional relationship among observations of the error term $u_t$, and $\varepsilon_t$ is a stochastic error term which is iid (independently and identically distributed). It satisfies the standard OLS assumptions:

\begin{align*}
E(\varepsilon_t) &=0\\
Var(\varepsilon_t) &=\sigma_t^2\\
Cov(\varepsilon_t, \varepsilon_{t+s} ) &=0 \qquad (s \ne 0)
\end{align*}

Note that if $\rho=1$, the variance and covariances of $u_t$ derived below are undefined.

The scheme (eq 1) is known as a Markov first-order autoregressive scheme, usually denoted by AR(1). Equation (eq 1) is interpreted as the regression of $u_t$ on itself lagged one period. It is first-order because $u_t$ and its immediate past value are involved. Note that $Var(u_t)$ is still homoscedastic under the AR(1) scheme.

The coefficient $\rho$ is called the first-order autocorrelation coefficient (also called the coefficient of autocovariance) and, for a stationary scheme, takes values strictly between $-1$ and $1$ ($|\rho|<1$). The size of $\rho$ determines the strength of autocorrelation (serial correlation). There are three different cases:

  1. If $\rho$ is zero, then there is no autocorrelation because $u_t=\varepsilon_t$.
  2. If $\rho$ approaches 1, the value of the previous observation of the error ($u_{t-1}$) becomes more important in determining the value of the current error term ($u_t$), and therefore greater positive autocorrelation exists. A negative error term tends to be followed by a negative one, and a positive error term by a positive one.
  3. If $\rho$ approaches -1, there is a very high degree of negative autocorrelation. The error terms have a tendency to switch signs from negative to positive and vice versa in consecutive observations.
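
The three cases can be illustrated by simulation: with $\rho$ near $-1$ the simulated errors switch sign in most consecutive periods, while with $\rho$ near $+1$ the sign tends to persist. A sketch (the sign-switch rate is an informal summary, not a formal test):

```python
import numpy as np

def simulate_ar1(rho, n, seed=0):
    """Generate u_t = rho * u_{t-1} + eps_t with standard normal eps_t."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + eps[t]
    return u

def sign_switch_rate(u):
    """Fraction of consecutive observations whose signs differ."""
    return float(np.mean(np.sign(u[1:]) != np.sign(u[:-1])))

rate_pos = sign_switch_rate(simulate_ar1(0.9, 2000))   # low: signs persist
rate_neg = sign_switch_rate(simulate_ar1(-0.9, 2000))  # high: signs alternate
```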

For first order autocorrelation AR(1)

\begin{align*}
u_t &= \rho u_{t-1}+\varepsilon_t\\
E(u_t) &= \rho E(u_{t-1})+ E(\varepsilon_t)=0\\
Var(u_t)&=\rho^2 Var(u_{t-1})+Var(\varepsilon_t)\\
&\qquad \text{because the $u$'s and $\varepsilon$'s are uncorrelated.}\\
\text{By stationarity, }\, Var(u_t)&=Var(u_{t-1})=\sigma^2 \text{ and } Var(\varepsilon_t)=\sigma_t^2\\
\Rightarrow \sigma^2 &=\rho^2 \sigma^2+\sigma_t^2\\
\Rightarrow \sigma^2-\rho^2\sigma^2 &=\sigma_t^2\\
\Rightarrow \sigma^2(1-\rho^2)&=\sigma_t^2\\
\Rightarrow Var(u_t)&=\sigma^2=\frac{\sigma_t^2}{1-\rho^2}
\end{align*}
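
The result $Var(u_t)=\sigma_t^2/(1-\rho^2)$ can be verified by simulation; a sketch with $\rho=0.5$ and $\sigma_t^2=1$, so the theoretical variance is $1/0.75 \approx 1.33$:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.5, 200_000
eps = rng.normal(size=n)          # Var(eps_t) = 1
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

var_sample = float(np.var(u[1000:]))   # drop burn-in observations
var_theory = 1.0 / (1.0 - rho ** 2)    # sigma_t^2 / (1 - rho^2)
```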

For the covariance, multiply equation (eq 1) by $u_{t-1}$ and take expectations on both sides:

\begin{align*}
u_t u_{t-1} &= \rho u_{t-1}^2 + \varepsilon_t u_{t-1}\\
E(u_t u_{t-1}) &= \rho E(u_{t-1}^2) + E(u_{t-1}\varepsilon_t) = \rho\, Var(u_{t-1})\\
cov(u_t, u_{t-1}) &= E(u_t u_{t-1})\\
&=\rho \frac{\sigma_t^2}{1-\rho^2}\tag*{$\because Var(u_t) = \frac{\sigma_t^2}{1-\rho^2}$}
\end{align*}

Similarly,
\begin{align*}
cov(u_t,u_{t-2}) &=\rho^2 \frac{\sigma_t^2}{1-\rho^2}\\
cov(u_t,u_{t-3}) &= \rho^3 \frac{\sigma_t^2}{1-\rho^2}\\
cov(u_t, u_{t-s}) &= \rho^s \frac{\sigma_t^2}{1-\rho^2}
\end{align*}
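
These covariances can also be checked numerically; a sketch comparing sample covariances at lags 1 to 3 against $\rho^s \sigma_t^2/(1-\rho^2)$ for a simulated AR(1) series:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, n = 0.6, 200_000
eps = rng.normal(size=n)          # Var(eps_t) = 1
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
u = u[1000:]                      # discard burn-in

var_theory = 1.0 / (1.0 - rho ** 2)
cov_sample = {s: float(np.mean(u[s:] * u[:-s])) for s in (1, 2, 3)}
cov_theory = {s: rho ** s * var_theory for s in (1, 2, 3)}
```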
