
The Durbin-Watson Test

Durbin and Watson suggested a test to detect the presence of autocorrelation that is applicable to small samples. However, the test is appropriate only for the first-order autoregressive scheme ($u_t = \rho u_{t-1} + \varepsilon_t$). The testing procedure for the Durbin-Watson test is:

Step 1: Null and Alternative Hypothesis

The null hypothesis is $H_0:\rho=0$ (that is, $u$’s are not autocorrelated with a first-order scheme)

The alternative hypothesis is $H_1: \rho \ne 0$ (that is, $u$’s are autocorrelated with a first-order scheme)

Step 2: Level of Significance

Choose an appropriate level of significance, such as 5%, 1%, or 10%.

Step 3: Test Statistic

To test the null hypothesis, the Durbin-Watson statistic is

$$d = \frac{\sum\limits_{t=2}^n (u_t - u_{t-1})^2}{\sum\limits_{t=1}^n u_t^2}$$

The value of $d$ lies between 0 and 4; when $d=2$, $\rho=0$. This means that testing $H_0:\rho=0$ is equivalent to testing $H_0:d=2$.

\begin{align*}
d&= \frac{\sum\limits_{t=2}^n (u_t - u_{t-1})^2}{\sum\limits_{t=1}^n u_t^2}\\
&=\frac{ \sum\limits_{t=2}^n (u_t^2 + u_{t-1}^2 - 2u_t u_{t-1} ) }{\sum\limits_{t=1}^n u_t^2} \\
&=\frac{ \sum\limits_{t=2}^n u_t^2 + \sum\limits_{t=2}^n u_{t-1}^2 - 2 \sum\limits_{t=2}^n u_t u_{t-1} }{\sum\limits_{t=1}^n u_t^2}
\end{align*}

The Durbin-Watson statistic is simply the ratio of the sum of squared differences of the successive residuals to the residual sum of squares. In the numerator there are only $n-1$ terms, because one observation is lost in taking the lagged differences.
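To make this computation concrete, here is a minimal sketch (assuming only NumPy) that computes $d$ directly from a vector of OLS residuals as the ratio just described:

```python
# A minimal sketch, assuming numpy: compute the Durbin-Watson d as the
# ratio of the sum of squared successive differences of the residuals
# to the residual sum of squares.
import numpy as np

def durbin_watson_d(residuals):
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Residuals that alternate in sign point to negative autocorrelation,
# so d should come out well above 2.
e = np.array([1.2, -0.9, 1.1, -1.3, 0.8, -1.0])
print(durbin_watson_d(e))  # about 3.21 here, i.e. well above 2
```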

For large samples, $\sum\limits_{t=2}^n u_t^2$, $\sum\limits_{t=2}^n u_{t-1}^2$, and $\sum\limits_{t=1}^n u_t^2$ are all approximately equal. Therefore,

\begin{align*}
d &\approx \frac{2 \sum u_{t-1}^2 - 2 \sum\limits_{t=2}^n u_t u_{t-1} }{ \sum u_{t-1}^2 }\\
& \approx 2 \left[ 1- \frac{\sum u_t u_{t-1} }{ \sum u_{t-1}^2 }\right]\\
\text{but }\,\,\, \hat{\rho} &= \frac{\sum u_t u_{t-1}}{\sum u_{t-1}^2}
\end{align*}

Therefore $d\approx 2(1-\hat{\rho})$

Since $-1 \le \hat{\rho} \le 1$, the values of $d$ lie between 0 and 4.

Firstly: If there is no autocorrelation, $\hat{\rho}=0$ and $d=2$. Thus, if from the sample data $d^*\approx 2$, we accept that there is no autocorrelation.

Secondly: If $\hat{\rho}=+1$, then $d=0$ and we have perfect positive autocorrelation. Therefore, if $0<d^*<2$, there is some degree of positive autocorrelation (which is stronger the closer $d^*$ is to zero).

Thirdly: If $\hat{\rho}=-1$, then $d=4$ and we have perfect negative autocorrelation. Therefore, if $2<d^*<4$, there is some degree of negative autocorrelation (which is stronger the closer $d^*$ is to 4).

The next step is to use the sample residuals ($\hat{u}_t$'s) and compute the empirical value of the Durbin-Watson statistic $d^*$.

Finally, the empirical $d^*$ must be compared with the theoretical values of $d$, that is, the values of $d$ which define the critical region of the test.

The problem with this test is that the exact distribution of $d$ is not known. However, Durbin and Watson established upper ($d_u$) and lower ($d_l$) limits for the significance levels of $d$ that are appropriate for testing the hypothesis of zero first-order autocorrelation against the alternative of positive first-order autocorrelation. Durbin and Watson tabulated these upper and lower values at the 5% and 1% levels of significance.

Critical Region of $d$ test

  • If $d^*<d_l$, we reject the null hypothesis of no autocorrelation and accept that there is positive autocorrelation of the first order.
  • If $d^* > (4-d_l)$, we reject the null hypothesis of no autocorrelation and accept that there is negative autocorrelation of the first order.
  • If $d_u < d^* < (4-d_u)$, we accept the null hypothesis of no autocorrelation.
  • If $d_l < d^* < d_u$ or $(4-d_u)<d^*<(4-d_l)$, the test is inconclusive.
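These decision rules translate directly into code. The following sketch assumes the statsmodels library is available (its `durbin_watson` function computes the same $d$ as above); the bounds `d_L` and `d_U` are placeholders to be read from the Durbin-Watson tables for the given sample size, number of regressors, and significance level:

```python
# A sketch of the decision rule, assuming statsmodels is installed.
# d_L and d_U are placeholders: look them up in the Durbin-Watson
# tables for your n, number of regressors, and significance level.
from statsmodels.stats.stattools import durbin_watson

def dw_decision(residuals, d_L, d_U):
    d = durbin_watson(residuals)
    if d < d_L:
        return d, "reject H0: positive first-order autocorrelation"
    if d > 4 - d_L:
        return d, "reject H0: negative first-order autocorrelation"
    if d_U < d < 4 - d_U:
        return d, "do not reject H0: no autocorrelation"
    return d, "test inconclusive"
```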

Assumptions underlying the $d$ Statistics

  • The regression model includes an intercept term. If the intercept is not present, as in the case of regression through the origin, it is essential to re-run the regression including the intercept term to obtain the RSS.
  • The explanatory variables, $X$’s are non-stochastic or fixed in repeated sampling.
  • The disturbances $u_t$ are generated by the first-order autoregressive scheme $u_t=\rho u_{t-1} +\varepsilon_t$ (the test cannot be used to detect higher-order autoregressive schemes).
  • The error term $u_t$ is assumed to be normally distributed.
  • The regression model does not include the lagged value(s) of the dependent variable as one of the explanatory variables. The Durbin-Watson test is inappropriate for models of the type $$Y_t=\beta_1 + \beta_2X_{2t} + \beta_3 X_{3t} + \cdots+ \beta_k X_{kt} + \gamma Y_{t-1}+u_t,$$ where $Y_{t-1}$ is the one-period lagged value of $Y$.
  • There are no missing observations in the data.

Limitations or Shortcomings of the Durbin-Watson Test Statistic

The Durbin-Watson test has several shortcomings:

  • The $d$ statistic is not an appropriate measure of autocorrelation if, among the explanatory variables, there are lagged values of the endogenous variables.
  • The Durbin-Watson test is inconclusive if the computed value lies between $d_l$ and $d_u$.
  • It is inappropriate for testing higher-order serial correlation or for other forms of autocorrelation.

An Asymptotic or Large Sample Test

Under the null hypothesis that $\rho=0$ and assuming that the sample size $n$ is large, it can be shown that $\sqrt{n}\hat{\rho}$ follows the normal distribution with 0 mean and variance 1, i.e. asymptotically,

$$\sqrt{n}\,\, \hat{\rho} \sim N(0, 1)$$
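As a sketch (assuming NumPy; `z_crit = 1.96` corresponds to a two-sided 5% test), the large-sample test can be carried out as follows:

```python
# A minimal sketch, assuming numpy: the asymptotic test of H0: rho = 0.
import numpy as np

def asymptotic_rho_test(residuals, z_crit=1.96):
    e = np.asarray(residuals, dtype=float)
    n = len(e)
    # rho_hat = sum(u_t * u_{t-1}) / sum(u_{t-1}^2), as defined above
    rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
    z = np.sqrt(n) * rho_hat      # asymptotically N(0, 1) under H0
    return z, abs(z) > z_crit     # True means reject H0: rho = 0
```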

Residual Plots for Detection of Autocorrelation

The existence and pattern of autocorrelation may be detected using a graphical representation of the residuals obtained from ordinary least squares (OLS) regression. One can draw the following residual plots for the detection of autocorrelation:

  • A plot of the residuals against time.
  • A plot of $\hat{u}_t$ against $\hat{u}_{t-1}$.
  • A plot of standardized residuals against time.

Note that since the population disturbances $u_t$ are not directly observable, we use their proxies, the residuals $\hat{u}_t$.

[Figure: residual plots for autocorrelation, illustrating positive and negative patterns]
  • A random pattern of residuals indicates the absence of autocorrelation.
  • Visual examination of $\hat{u}_t$ or $\hat{u}_t^2$ can provide useful information not only about the presence of autocorrelation but also about the presence of heteroscedasticity. Similarly, the examination of $\hat{u}_t$ and $\hat{u}_t^2$ provides useful information about model inadequacy or specification bias.
  • The standardized residuals are computed as $\frac{\hat{u}_t}{\hat{\sigma}}$, where $\hat{\sigma}$ is the standard error of the regression.

Note: The plot of residuals against time is called the sequence plot. For time-series data, the researcher can plot (graphically draw) the residuals versus time (called a time sequence plot); a random pattern in this plot indicates that the data are not autocorrelated, whereas any systematic (non-random) pattern indicates that the data are autocorrelated.
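The three plots listed above can be produced with a few lines of code. A minimal sketch, assuming NumPy and matplotlib, and using the sample standard deviation of the residuals as a stand-in for the standard error of the regression:

```python
# A sketch of the three residual plots, assuming numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

def residual_plots(residuals):
    e = np.asarray(residuals, dtype=float)
    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    axes[0].plot(e, marker="o")            # sequence plot: residuals vs time
    axes[0].set_title("Residuals vs time")
    axes[1].scatter(e[:-1], e[1:])         # u_t-hat against u_{t-1}-hat
    axes[1].set_title("u(t) vs u(t-1)")
    # Standardized residuals: here the sample standard deviation of the
    # residuals approximates the standard error of the regression.
    axes[2].plot(e / e.std(ddof=1), marker="o")
    axes[2].set_title("Standardized residuals vs time")
    plt.tight_layout()
    plt.show()
```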

See more on: Autocorrelation

Seasonal Variations: Estimation

We have to find a way of isolating and measuring the seasonal variations. There are two reasons for isolating and measuring the effect of seasonal variation.

  • To study the changes brought by seasons in the values of the given variable in a time series
  • To remove it from the time series in order to determine the value of the variable free of seasonal effects (deseasonalization)

By summing the values of a particular season over a number of years, the irregular variations tend to cancel each other out, owing to independent random disturbances. If we also eliminate the effect of trend and cyclical variations, the seasonal variations are left over, and these are expressed as a percentage of their average.

A study of seasonal variation leads to more realistic planning of production and purchases etc.

Seasonal Index

When the effect of the trend has been eliminated, we can calculate a measure of seasonal variation known as the seasonal index. A seasonal index is simply an average of the monthly or quarterly values of different years, expressed as a percentage of the average of all the monthly or quarterly values of the year.

The following methods are used to estimate seasonal variations.

  • Average percentage method (simple average method)
  • Link relative method
  • Ratio to the trend of short time values
  • Ratio to the trend of long time averages projected to short times
  • Ratio to moving average

The Simple Average Method

Assume the series is expressed as

$$Y=TSCI$$

Consider the long-time averages as trend values and eliminate the trend element by expressing each short-time observed value as a percentage of the corresponding long-time average. In the multiplicative model, we obtain

\begin{align*}
\frac{\text{short time observed value} }{\text{long time average}}\times 100 &= \frac{TSCI}{T}\times 100\\
&=SCI\times 100
\end{align*}

This percentage of the long-time average represents the seasonal ($S$), cyclical ($C$), and irregular ($I$) components.

Once $SCI$ is obtained, we try to remove $CI$ as far as possible from $SCI$. This is done by arranging these percentages season-wise for all the long times (say, years) and taking the modified arithmetic mean for each season, ignoring both the smallest and the largest percentages. These are the seasonal indices.

If the average of these indices is not 100, an adjustment can be made by expressing the seasonal indices as percentages of their arithmetic mean. The adjustment factor is

\begin{align*}
\frac{100}{\text{Mean of Seasonal Indices}} = \frac{400}{\text{sum of the quarterly indices}} \,\, \text{ or } \,\, \frac{1200}{\text{sum of the monthly indices}}
\end{align*}

Question: The following data show the number of automobiles sold.

| Year | Quarter 1 | Quarter 2 | Quarter 3 | Quarter 4 |
|------|-----------|-----------|-----------|-----------|
| 1981 | 250 | 278 | 315 | 288 |
| 1982 | 247 | 265 | 301 | 285 |
| 1983 | 261 | 285 | 353 | 373 |
| 1984 | 300 | 325 | 370 | 343 |
| 1985 | 281 | 317 | 381 | 374 |

Calculate the seasonal indices by the average percentage method.

Solution:

First, we obtain the yearly (long-time) averages:

| Year | 1981 | 1982 | 1983 | 1984 | 1985 |
|------|------|------|------|------|------|
| Yearly Total | 1131 | 1098 | 1272 | 1338 | 1353 |
| Yearly Average | $1131/4=282.75$ | 274.50 | 318.00 | 334.50 | 338.25 |

Next, we divide each quarterly value by the corresponding yearly average and express the results as percentages. That is,

| Year | Quarter 1 | Quarter 2 | Quarter 3 | Quarter 4 | Total |
|------|-----------|-----------|-----------|-----------|-------|
| 1981 | $\frac{250}{282.75}\times 100=88.42$ | $\frac{278}{282.75}\times 100=98.32^*$ | $\frac{315}{282.75}\times 100=111.41$ | $\frac{288}{282.75}\times 100=101.86^*$ | |
| 1982 | $\frac{247}{274.50}\times 100=89.98^*$ | $\frac{265}{274.50}\times 100=96.54$ | $\frac{301}{274.50}\times 100=109.65^*$ | $\frac{285}{274.50}\times 100=103.83$ | |
| 1983 | $\frac{261}{318.00}\times 100=82.08^*$ | $\frac{285}{318.00}\times 100=89.62^*$ | $\frac{353}{318.00}\times 100=111.01$ | $\frac{373}{318.00}\times 100=117.30^*$ | |
| 1984 | $\frac{300}{334.50}\times 100=89.69$ | $\frac{325}{334.50}\times 100=97.16$ | $\frac{370}{334.50}\times 100=110.61$ | $\frac{343}{334.50}\times 100=102.54$ | |
| 1985 | $\frac{281}{338.25}\times 100=83.07$ | $\frac{317}{338.25}\times 100=93.72$ | $\frac{381}{338.25}\times 100=112.64^*$ | $\frac{374}{338.25}\times 100=110.57$ | |
| Total (modified) | 261.18 | 287.42 | 333.03 | 316.94 | |
| Mean (modified) | $\frac{261.18}{3}=87.06$ | $\frac{287.42}{3}=95.81$ | $\frac{333.03}{3}=111.01$ | $\frac{316.94}{3}=105.65$ | 399.52 |

* marks the smallest and the largest value in each quarter; these are excluded from the modified total.

The mean of these four indices is $399.52/4=99.88$, slightly below 100, so each index is adjusted by the factor $400/399.52\approx 1.0012$ so that the indices average 100.
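The whole calculation can be checked with a short script. A sketch assuming only NumPy, with the sales data above arranged as a years-by-quarters array:

```python
# A sketch of the simple average method, assuming numpy.
import numpy as np

sales = np.array([
    [250, 278, 315, 288],   # 1981
    [247, 265, 301, 285],   # 1982
    [261, 285, 353, 373],   # 1983
    [300, 325, 370, 343],   # 1984
    [281, 317, 381, 374],   # 1985
], dtype=float)

yearly_avg = sales.mean(axis=1, keepdims=True)  # long-time (yearly) averages
pct = sales / yearly_avg * 100                  # quarterly value as % of yearly average

# Modified mean: drop the smallest and largest percentage in each quarter.
pct_sorted = np.sort(pct, axis=0)
modified_mean = pct_sorted[1:-1].mean(axis=0)
print(np.round(modified_mean, 2))   # [ 87.06  95.81 111.01 105.65]

# Adjust so the four indices sum to 400 (i.e. average to 100).
seasonal_index = modified_mean * 400 / modified_mean.sum()
print(np.round(seasonal_index, 2))
```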

Read about Component of Time Series

Detrending of Time Series

Detrending is the process of eliminating the trend component from a time series, where a trend refers to a change in the mean over time (a continuous increase or decrease over time). In other words, when data are detrended, an aspect believed to be causing some kind of distortion has been removed from the data.

Assuming the multiplicative model:

$$\text{Detrended value} = \frac{Y}{T} = \frac{TSCI}{T}=SCI$$

Assuming the additive model:
$$\text{Detrended value} = Y-T=T+S+C+I-T = S+C+I$$
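As an illustration, a minimal sketch (assuming NumPy, and using a least-squares straight line as a simple stand-in for the trend component $T$):

```python
# A sketch of detrending, assuming numpy. A fitted straight line
# stands in for the trend component T.
import numpy as np

def detrend(y, model="additive"):
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    trend = np.polyval(np.polyfit(t, y, 1), t)   # fitted linear trend
    if model == "multiplicative":
        return y / trend      # ratio to trend: leaves SCI
    return y - trend          # deviation from trend: leaves S + C + I
```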

Stationary Time Series:

The detrending of a time series is the process of removing the trend from a non-stationary time series. A detrended time series is known as a stationary time series, while a time series with a trend is a non-stationary time series. A stationary time series oscillates about a horizontal line. If a series does not have a trend, or we remove the trend successfully, the series is said to be trend stationary. Elimination of the trend component may be thought of as rotating the trend line to a horizontal position. The trend component can be eliminated from the observed time series by computing either the ratios to the trend, if the multiplicative model is assumed, or the deviations from the trend, if the additive model is assumed.

Read about Secular Trend in Time Series

Read more about Detrending of Time Series
