# Reasons for Autocorrelation

There are several reasons why autocorrelation arises in a regression model; the main ones are:

**i) Inertia**

Inertia, or sluggishness, in economic time series is a common cause of autocorrelation. For example, GNP, production, price indices, employment, and unemployment exhibit business cycles. Starting at the bottom of a recession, when economic recovery begins, most of these series start moving upward. In this upswing, the value of a series at one point in time is greater than its previous values, so successive observations are likely to be interdependent.
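This inertia can be mimicked with a first-order autoregressive process, where each value is a fraction of the previous one plus a fresh shock. The sketch below is illustrative only: the AR coefficient 0.9 and the sample size are assumptions, not values from any actual economic series.

```python
import numpy as np

# Illustrative AR(1) process y_t = 0.9*y_{t-1} + e_t, a stand-in for a
# sluggish series such as GNP (the coefficient 0.9 is an assumption).
rng = np.random.default_rng(0)
n = 500
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + e[t]

# Lag-1 sample autocorrelation: successive observations are clearly dependent.
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
print(round(r1, 2))
```

The lag-1 correlation comes out close to the AR coefficient, confirming that each observation carries information about the next.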

**ii) Omitted Variables Specification Bias**

The residuals (which are proxies for $u_t$) may suggest that some variables that were originally candidates but were not included in the model (for a variety of reasons) should be included. This is the case of excluded-variable specification bias. Often, including such variables removes the correlation pattern observed among the residuals. For example, suppose the model

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 X_{4t} + u_t,$$

is correct. However, running

$$Y_t=\beta_1 + \beta_2 X_{2t} + \beta_3X_{3t}+v_t,\quad \text{where } v_t=\beta_4X_{4t}+u_t,$$

the error or disturbance term will reflect a systematic pattern, creating spurious autocorrelation due to the exclusion of $X_{4t}$ from the model: the effect of $X_{4t}$ is captured by the disturbance $v_t$.
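A small simulation makes this visible. Below, the true model includes a smoothly trending regressor $X_{4t}$, but we regress $Y_t$ on $X_{2t}$ alone; the residuals inherit the omitted trend and are strongly autocorrelated. All coefficients and the trend in $X_{4t}$ are illustrative assumptions.

```python
import numpy as np

# Sketch: omit a trending regressor x4 and the residuals inherit its pattern.
rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
x2 = rng.standard_normal(n)
x4 = 0.05 * t                        # smooth (trending) variable we will omit
u = rng.standard_normal(n)
y = 1.0 + 2.0 * x2 + 3.0 * x4 + u    # true model includes x4

# Misspecified model: regress y on x2 only
b2, b1 = np.polyfit(x2, y, 1)        # polyfit returns [slope, intercept]
v = y - (b1 + b2 * x2)               # residuals proxy v_t = beta4*x4 + u_t

# The omitted trend shows up as strong positive serial correlation.
r1 = np.corrcoef(v[:-1], v[1:])[0, 1]
print(round(r1, 2))
```

Adding `x4` back into the regression would remove this residual pattern, which is exactly the diagnostic use described above.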

**iii) Model Specification: Incorrect Functional Form**

Autocorrelation can also occur due to misspecification of the model's functional form. Suppose that $Y_t$ is related to $X_{2t}$ by the quadratic relation

$$Y_t=\beta_1 + \beta_2 X_{2t}^2+u_t,$$

but we wrongly estimate the straight-line relationship $Y_t=\beta_1 + \beta_2X_{2t}+u_t$. In this case, the error term obtained from the straight-line specification will depend on $X_{2t}^2$. If $X_{2t}$ is increasing or decreasing over time, the errors will also increase or decrease systematically over time.
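The effect is easy to reproduce: fit a straight line to quadratic data and the residuals trace out a systematic U-shaped pattern rather than random scatter. The coefficients and sample below are illustrative assumptions.

```python
import numpy as np

# Sketch: true relation is quadratic, but we fit a straight line;
# the residuals then follow x^2 systematically.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 100)                   # regressor increasing over time
y = 1.0 + 0.5 * x**2 + rng.standard_normal(100)

slope, intercept = np.polyfit(x, y, 1)        # wrong: straight-line fit
resid = y - (intercept + slope * x)

# Successive residuals move together because of the leftover curvature.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(r1, 2))
```

Fitting the correct quadratic specification instead would leave residuals with no such pattern.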

**iv) Effect of the Cobweb Phenomenon**

The quantity supplied in period $t$ of many agricultural commodities depends on their price in period $t-1$; this is called the cobweb phenomenon. The decision to plant a crop in period $t$ is influenced by the price of the commodity in that period, but the actual supply of the commodity becomes available in period $t+1$:

\begin{align*}
QS_{t+1} &= \alpha + \beta P_t + \varepsilon_{t+1}\\
\text{or}\quad QS_t &= \alpha + \beta P_{t-1} + \varepsilon_t
\end{align*}

This supply model indicates that if the price in period $t$ is high, the farmer will decide to produce more in period $t+1$. Because of the increased supply in period $t+1$, $P_{t+1}$ will be lower than $P_t$. As a result of the lower price in period $t+1$, the farmer will produce less in period $t+2$ than in period $t+1$. Thus, the disturbances in the case of the cobweb phenomenon are not expected to be random; rather, they exhibit a systematic pattern and thereby cause autocorrelation.
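The alternating overshoot can be simulated by closing the model with a downward-sloping demand curve, so that each period's price clears the market given last period's planting decision. All supply and demand parameters below are illustrative assumptions, not estimates.

```python
import numpy as np

# Sketch of cobweb dynamics: supply reacts to last period's price, and the
# price clears an assumed linear demand curve QD_t = c - d * P_t.
rng = np.random.default_rng(3)
n = 100
a, b = 2.0, 0.8        # supply: QS_t = a + b * P_{t-1} + shock
c, d = 20.0, 1.0       # demand: QD_t = c - d * P_t  =>  P_t = (c - QS_t) / d
p = np.zeros(n)
p[0] = 5.0
for t in range(1, n):
    qs = a + b * p[t - 1] + 0.5 * rng.standard_normal()
    p[t] = (c - qs) / d

# High price -> more supply -> lower price next period: the alternation
# appears as strong negative lag-1 autocorrelation.
r1 = np.corrcoef(p[:-1], p[1:])[0, 1]
print(round(r1, 2))
```

The negative lag-1 correlation is the signature of the cobweb: a high value in one period systematically produces a low value in the next.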

**v) Effect of Lagged Relationship**

Many times in business and economic research, lagged values of the dependent variable are used as explanatory variables. For example, to study the effect of tastes and habits on consumption in period $t$, consumption in period $t-1$ is used as an explanatory variable, since consumers do not change their consumption habits readily for psychological, technological, or institutional reasons. The consumption function will be

$$C_t = \alpha + \beta Y_t + \gamma C_{t-1} + \varepsilon_t,$$

where $C$ is consumption and $Y$ is income.

If the lagged term $C_{t-1}$ is not included in the above consumption function, the resulting error term will reflect a systematic pattern due to the impact of habits and tastes on current consumption, and autocorrelation will be present.
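This can be checked with a simulation: generate consumption from a model with a lagged term, then estimate it with income alone. The residuals carry the persistence of the omitted habit term. The income process and all coefficients are illustrative assumptions.

```python
import numpy as np

# Sketch: consumption depends on its own lag (habits), but we omit C_{t-1};
# the residuals then inherit that persistence.
rng = np.random.default_rng(4)
n = 300
income = 10 + np.cumsum(0.1 * rng.standard_normal(n))   # slowly moving income
c = np.zeros(n)
for t in range(1, n):
    c[t] = 1.0 + 0.3 * income[t] + 0.6 * c[t - 1] + 0.2 * rng.standard_normal()

# Misspecified model: consumption on income only, no lagged consumption
slope, intercept = np.polyfit(income[1:], c[1:], 1)
resid = c[1:] - (intercept + slope * income[1:])

# The omitted habit term shows up as positive serial correlation.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(r1, 2))
```

Including `c[t-1]` as a regressor would absorb the habit effect and leave approximately uncorrelated residuals.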

**vi) Data Manipulation**

Raw data are often manipulated in empirical analysis. For example, in time-series regressions involving quarterly data, the data are usually derived from monthly data by adding three monthly observations and dividing the sum by 3. This averaging introduces smoothness into the data by dampening the fluctuations in the monthly data. This smoothness may itself lead to a systematic pattern in the disturbances, thereby introducing autocorrelation.
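The smoothing effect is easiest to see with a moving average: a 3-month moving average of completely independent shocks has a theoretical lag-1 autocorrelation of 2/3, because adjacent averages share two of their three months. The data below are simulated white noise, used purely for illustration.

```python
import numpy as np

# Sketch: averaging independent monthly shocks over a 3-month window
# manufactures serial correlation (theoretical lag-1 value is 2/3).
rng = np.random.default_rng(5)
monthly = rng.standard_normal(3000)              # independent monthly "data"
smoothed = np.convolve(monthly, np.ones(3) / 3, mode="valid")

r_raw = np.corrcoef(monthly[:-1], monthly[1:])[0, 1]
r_smooth = np.corrcoef(smoothed[:-1], smoothed[1:])[0, 1]
print(round(r_raw, 2), round(r_smooth, 2))       # raw near 0, smoothed near 0.67
```

The raw series shows essentially no serial correlation, while the averaged series is strongly autocorrelated even though nothing in the underlying data was.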

Interpolation or extrapolation of data is another such source of data manipulation.

**vii) Non-Stationarity**

It is quite possible that both $Y$ and $X$ are non-stationary, and therefore the error $u$ is also non-stationary. In this case, the error term will exhibit autocorrelation.
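A classic illustration is regressing one random walk on another, independent random walk: the residuals are themselves close to a random walk and hence extremely autocorrelated. The simulated walks below are illustrative, not real data.

```python
import numpy as np

# Sketch: two independent random walks (non-stationary Y and X). Regressing
# one on the other leaves residuals that are themselves non-stationary and
# highly autocorrelated -- the spurious-regression symptom.
rng = np.random.default_rng(6)
n = 500
y = np.cumsum(rng.standard_normal(n))
x = np.cumsum(rng.standard_normal(n))

slope, intercept = np.polyfit(x, y, 1)
u = y - (intercept + slope * x)

# Residual lag-1 autocorrelation is close to 1.
r1 = np.corrcoef(u[:-1], u[1:])[0, 1]
print(round(r1, 2))
```

A very high residual autocorrelation in such a regression is a warning that the series may need differencing or a cointegration analysis before inference.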

These are the main reasons why autocorrelation arises in regression models.
