Autocorrelation Reasons

This post is about the reasons for autocorrelation that may occur in time series data. To learn what autocorrelation is, see the post Introduction to Autocorrelation.

There are several reasons for autocorrelation. Some of the most important are:

i) Inertia

Inertia or sluggishness in economic time series is a common reason for autocorrelation. For example, GNP, production, the price index, employment, and unemployment exhibit business cycles. Starting at the bottom of a recession, when economic recovery begins, most of these series start moving upward. In this upswing, the value of a series at one point in time is greater than its previous values, so successive observations are likely to be interdependent.

ii) Omitted Variables Specification Bias

The residuals (which are proxies for $u_t$) may suggest that some variables that were originally candidates but were not included in the model (for a variety of reasons) should be included. This is the case of omitted-variable specification bias. Often, the inclusion of such variables removes the correlation pattern observed among the residuals. For example, suppose the model

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 X_{4t} + u_t,$$

is correct. However, running

$$Y_t=\beta_1 + \beta_2 X_{2t} + \beta_3X_{3t}+v_t,\quad \text{where } v_t=\beta_4X_{4t}+u_t,$$

the error or disturbance term will reflect a systematic pattern, creating false autocorrelation due to the exclusion of the $X_{4t}$ variable from the model: the effect of $X_{4t}$ is captured by the disturbances $v_t$.
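
To see this numerically, here is a minimal simulation sketch in Python with NumPy (the coefficient values and the smooth sinusoidal form chosen for $X_{4t}$ are illustrative assumptions, not part of the original example). Data are generated from the correct four-variable model, but the regression omits $X_{4t}$, and the residuals of the short model inherit its smooth movement:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
time = np.arange(T)
X2 = rng.normal(size=T)
X3 = rng.normal(size=T)
X4 = np.sin(2 * np.pi * time / 50)        # smooth, slowly evolving regressor (assumed)
u = rng.normal(scale=0.5, size=T)
Y = 1 + 2 * X2 + 3 * X3 + 4 * X4 + u      # true model includes X4

# Misspecified regression: X4 is omitted, so the residual plays the role of v_t
X_short = np.column_stack([np.ones(T), X2, X3])
b, *_ = np.linalg.lstsq(X_short, Y, rcond=None)
v = Y - X_short @ b

d = v - v.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(f"lag-1 autocorrelation of residuals: {r1:.2f}")  # strongly positive
```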

iii) Model Specification: Incorrect Functional Form

Autocorrelation can also occur due to misspecification of the model. Suppose that $Y_t$ is related to $X_{2t}$ by the quadratic relation

$$Y_t=\beta_1 + \beta_2 X_{2t}^2+u_t,$$

but we wrongly estimate a straight-line relationship ($Y_t=\beta_1 + \beta_2X_{2t}+u_t$). In this case, the error term obtained from the straight-line specification will depend on $X_{2t}^2$. If $X_{2t}$ is increasing or decreasing over time, $u_t$ will also be increasing or decreasing over time. Therefore, an incorrect functional form is another important reason for autocorrelation.
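
A short sketch of the same point (Python with NumPy; all parameter values are assumed for illustration): data follow the quadratic model, a straight line is fitted instead, and the residuals trace a smooth U-shape, so successive residuals are highly correlated.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
X2 = np.linspace(0, 10, T)                  # X2 increases over time
Y = 1 + 0.5 * X2 ** 2 + rng.normal(size=T)  # true relation is quadratic

# Wrong specification: fit a straight line in X2
X = np.column_stack([np.ones(T), X2])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b                           # U-shaped residual path

d = resid - resid.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(f"lag-1 residual autocorrelation: {r1:.2f}")  # close to 1
```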

iv) Effect of Cobweb Phenomenon

The quantity supplied in period $t$ of many agricultural commodities depends on their price in period $t-1$. This is called the Cobweb phenomenon: the decision to plant a crop in period $t$ is influenced by the commodity's price in that period, but the actual supply becomes available only in period $t+1$.

\begin{align*}
QS_{t+1} &= \alpha + \beta P_t + \varepsilon_{t+1}\\
\text{or }\quad QS_t &= \alpha + \beta P_{t-1} + \varepsilon_t
\end{align*}

This supply model indicates that if the price in period $t$ is higher, the farmer will decide to produce more in period $t+1$. Because of the increased supply in period $t+1$, $P_{t+1}$ will be lower than $P_t$. As a result of the lower price in period $t+1$, the farmer will produce less in period $t+2$ than in period $t+1$. Thus, the disturbances in the Cobweb phenomenon are not expected to be random; rather, they exhibit a systematic pattern, causing autocorrelation.
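
The overshoot-and-correct cycle can be mimicked with a small simulation (a sketch only; the linear demand curve and all parameter values are assumptions added for illustration): supply responds to last period's price, demand clears the market each period, and the resulting price series alternates around its equilibrium, giving negative lag-1 autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
alpha, beta = 2.0, 0.8      # supply: QS_t = alpha + beta * P_{t-1} + e_t
a, b = 10.0, 1.0            # demand: QD_t = a - b * P_t (assumed for illustration)

P = np.empty(T)
P[0] = 3.0
for t in range(1, T):
    qs = alpha + beta * P[t - 1] + rng.normal(scale=0.1)
    P[t] = (a - qs) / b     # price at which demand absorbs the supply

d = P - P.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(f"lag-1 autocorrelation of prices: {r1:.2f}")  # negative: overshoot cycle
```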

v) Effect of Lagged Relationship

Many times in business and economic research, the lagged values of the dependent variable are used as explanatory variables. For example, to study the effect of tastes and habits on consumption in period $t$, consumption in period $t-1$ is used as an explanatory variable, since consumers do not change their consumption habits readily for psychological, technological, or institutional reasons. The consumption function will be

$$C_t = \alpha + \beta Y_t + \gamma C_{t-1} + \varepsilon_t,$$
where $C$ is consumption and $Y$ is income.

If the lagged term ($C_{t-1}$) is not included in the above consumption function, the resulting error term will reflect a systematic pattern due to the impact of habits and tastes on current consumption, and thereby autocorrelation will be present.
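
The point can be checked with a quick simulation (a sketch with assumed coefficients, not estimates from real data): consumption is generated with a lagged term, but the fitted regression drops $C_{t-1}$, and the residuals of the misspecified model come out strongly autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
Y = 50 + np.cumsum(rng.normal(0.2, 1.0, size=T))  # slowly trending income
C = np.empty(T)
C[0] = 40.0
for t in range(1, T):
    # True model: habits matter, so lagged consumption enters
    C[t] = 5 + 0.4 * Y[t] + 0.5 * C[t - 1] + rng.normal()

# Misspecified regression: C_t on Y_t only, with C_{t-1} omitted
X = np.column_stack([np.ones(T), Y])
b, *_ = np.linalg.lstsq(X, C, rcond=None)
resid = C - X @ b

d = resid - resid.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(f"lag-1 residual autocorrelation: {r1:.2f}")  # noticeably positive
```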

vi) Data Manipulation

Data manipulation is another important reason for autocorrelation. Raw data are often manipulated in empirical analysis. For example, in time-series regressions involving quarterly data, such data are usually derived from monthly data by adding the three monthly observations and dividing the sum by 3. This averaging smooths the data by dampening the fluctuations in the monthly series, and the smoothness may itself impart a systematic pattern to the disturbances, thereby introducing autocorrelation.

Interpolation or extrapolation of data is another source of data manipulation.
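
The smoothing effect is easy to demonstrate (a sketch; a 3-month rolling average stands in here for the averaging described above, since overlapping averages show the effect most directly): white noise has essentially zero lag-1 autocorrelation, while its 3-term moving average has lag-1 autocorrelation near 2/3 purely as an artifact of the averaging.

```python
import numpy as np

rng = np.random.default_rng(7)
monthly = rng.normal(size=600)                                 # independent by construction
smoothed = np.convolve(monthly, np.ones(3) / 3, mode="valid")  # 3-month average

def lag1(x):
    """Lag-1 autocorrelation coefficient of a series."""
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d ** 2).sum()

print(f"raw monthly series: r1 = {lag1(monthly):.2f}")   # near 0
print(f"smoothed series:    r1 = {lag1(smoothed):.2f}")  # near 2/3
```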

vii) Non-Stationarity

Both $Y$ and $X$ may be non-stationary, and therefore the error $u$ will also be non-stationary. In this case, the error term will exhibit autocorrelation.
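
A classic illustration (a sketch under assumed parameters) is the spurious-regression setup: regress one independent random walk on another. Because both series are non-stationary, the residuals are themselves highly persistent.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 300
Y = np.cumsum(rng.normal(size=T))   # random walk: non-stationary
X = np.cumsum(rng.normal(size=T))   # independent random walk

Z = np.column_stack([np.ones(T), X])
b, *_ = np.linalg.lstsq(Z, Y, rcond=None)
resid = Y - Z @ b

d = resid - resid.mean()
r1 = (d[:-1] * d[1:]).sum() / (d ** 2).sum()
print(f"lag-1 residual autocorrelation: {r1:.2f}")  # typically close to 1
```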

This is all about autocorrelation reasons.

Read more about autocorrelation.

Autocorrelation: An Introduction (2020)

The term autocorrelation may be defined as a “correlation between members of a series of observations ordered in time (as in time series data) or space (as in cross-sectional data)”. Autocorrelation is most likely to occur in time-series data. In the regression context, the classical linear regression model (CLRM) assumes that such covariances and correlations do not exist among the disturbances $u_i$. Symbolically,

$$Cov(u_i, u_j | x_i, x_j)=E(u_i u_j)=0, \quad i\ne j$$

In simple words, the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation. In other words, the error terms $u_i$ and $u_j$ are independently distributed (serially independent). If there are dependencies among disturbance terms, then there is a problem of autocorrelation. Symbolically,

$$ Cov(u_i,u_j|x_i, x_j) = E(u_i u_j) \ne 0,\quad i\ne j$$

Suppose we have disturbance terms from two different time series, say $u$ and $v$, such as $u_1, u_2, \cdots, u_{10}$ and $v_2, v_3, \cdots, v_{11}$; then the correlation between these two different time series is called serial correlation (that is, the lag correlation between two different series).

Suppose we have a time series $u$ ($u_1, u_2, \cdots, u_{10}$), and its values lagged by one period are $u_2, u_3, \cdots, u_{11}$; the correlation between these two series is called autocorrelation (that is, the lag correlation of a given series with itself, lagged by a number of time units).
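
In code, this pairing is simply a series correlated with a shifted copy of itself. A minimal sketch (Python with NumPy; the random-walk series is an assumed example): note that np.corrcoef uses a separate mean for each argument, which mirrors the separate means $\overline{x}_{(1)}$ and $\overline{x}_{(2)}$ used in the exact lag-1 formula later in this post.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=100))   # a persistent (random-walk) series

# Correlate u_t with u_{t+1}: the series against its own lagged copy
r1 = np.corrcoef(u[:-1], u[1:])[0, 1]
print(f"lag-1 autocorrelation: {r1:.2f}")
```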

The use of OLS to estimate a regression model yields BLUE (best linear unbiased) estimates of the parameters only when all the assumptions of the CLRM are satisfied. After performing a regression analysis, one may plot the residuals to look for patterns when the results are not in line with prior expectations.

Plausible Patterns of Autocorrelation

Some plausible patterns of autocorrelation and non-autocorrelation are:

Patterns of Autocorrelation

Figures $(a)$–$(d)$ show that there is a discernible pattern among the $u$’s:

  • Figure (a) shows a cyclical pattern.
  • Figure (b) suggests an upward linear trend in the disturbances.
  • Figure (c) suggests a downward linear trend in the disturbances.
  • Figure (d) indicates that both linear and quadratic trend terms are present in the disturbances.
  • Figure (e) shows no systematic pattern, supporting the CLRM assumption of no autocorrelation.

The importance of autocorrelation can be described as follows:

  • Identifying Patterns: Autocorrelation measures the correlation between a variable and its lagged versions, essentially checking how similar past values are to present values. It therefore helps identify trends or seasonality within the data. For instance, positive autocorrelation in stock prices might suggest momentum, where recent gains could indicate a continued increase.
  • Validating Models: Many statistical models, especially in time series forecasting, assume independence between error terms. Autocorrelation helps to assess this assumption. If the data exhibit autocorrelation, it can mislead the model and lead to inaccurate forecasts. Accounting for autocorrelation through appropriate techniques improves model accuracy.
  • Understanding Dynamic Systems: The presence of autocorrelation indicates that a system depends on its past states. This is valuable in fields such as finance or engineering, where system behavior is influenced by its history.

Autocorrelation in Time Series Data (2015)

This post is about autocorrelation in time series data. The autocorrelation (serial correlation, or cross-autocorrelation) function is a diagnostic tool that helps describe the evolution of a process through time. Inference based on the autocorrelation function is often called analysis in the time domain.

Autocorrelation of a random process is the measure of correlation (relationship) between observations at different distances apart. These coefficients (correlation or autocorrelation) often provide insight into the probability model which generated the data. One can say that autocorrelation is a mathematical tool for finding repeating patterns in the data series.

The detection of autocorrelation in time series data is usually used for the following two purposes:

  1. To help detect non-randomness in the data (the first, i.e., lag 1, autocorrelation is computed)
  2. To help identify an appropriate time series model if the data are not random (the autocorrelation is usually plotted for many lags; see the sketch below)
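
As an example of purpose 2, here is a sketch using statsmodels (assuming it is installed; the AR(1) coefficient of 0.7 is an illustrative choice): the estimated autocorrelations decay roughly geometrically across lags, which is the signature of an AR(1) model.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(3)
T = 300
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal()   # AR(1): clearly non-random

r = acf(x, nlags=10)                       # autocorrelations for lags 0..10
print(np.round(r, 2))                      # r[1] near 0.7, decaying thereafter
```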

For simple correlation, suppose there are $n$ pairs of observations on two variables $x$ and $y$; then the usual correlation coefficient (Pearson’s coefficient of correlation) is

\[r=\frac{\sum(x_i-\overline{x})(y_i-\overline{y})}{\sqrt{\sum (x_i-\overline{x})^2 \sum (y_i-\overline{y})^2 }}\]

A similar idea can be used in time series to see whether successive observations are correlated or not. Given $n$ observations $x_1, x_2, \cdots, x_n$ on a discrete time series, we can form $(n-1)$ pairs of observations $(x_1, x_2), (x_2, x_3), \cdots, (x_{n-1}, x_n)$, where the first observation in each pair serves as one variable ($x_t$) and the second as the other variable ($x_{t+1}$). The correlation coefficient between $x_t$ and $x_{t+1}$ is then

\[r_1=\frac{\sum_{t=1}^{n-1} (x_t-\overline{x}_{(1)})(x_{t+1}-\overline{x}_{(2)})}{\sqrt{\left[\sum_{t=1}^{n-1} (x_t-\overline{x}_{(1)})^2\right]\left[\sum_{t=1}^{n-1} (x_{t+1}-\overline{x}_{(2)})^2\right]}}\]

where

$\overline{x}_{(1)}=\sum_{t=1}^{n-1} \frac{x_t}{n-1}$ is the mean of first $n-1$ observations

$\overline{x}_{(2)}=\sum_{t=2}^{n} \frac{x_t}{n-1}$ is the mean of last $n-1$ observations

Note: autocorrelation assumes that the observations are equally spaced (equi-spaced).

This is called the autocorrelation or serial correlation coefficient. For large $n$, $r_1$ is approximately

\[r_1=\frac{\frac{\sum_{t=1}^{n-1} (x_t-\overline{x})(x_{t+1}-\overline{x}) }{n-1}}{ \frac{\sum_{t=1}^n (x_t-\overline{x})^2}{n}}\]

or

\[r_1=\frac{\sum_{t=1}^{n-1} (x_t-\overline{x})(x_{t+1}-\overline{x}) } { \sum_{t=1}^n (x_t-\overline{x})^2}\]

For observations $k$ distances apart, i.e., at lag $k$,

\[r_k=\frac{\sum_{t=1}^{n-k} (x_t-\overline{x})(x_{t+k}-\overline{x}) } { \sum_{t=1}^n (x_t-\overline{x})^2}\]

An $r_k$ value outside $\pm \frac{2}{\sqrt{n}}$ indicates a significant difference from zero and signals autocorrelation at lag $k$; these are approximate 95% limits under the hypothesis of no autocorrelation.
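
A direct implementation of $r_k$ and the $\pm 2/\sqrt{n}$ check (a sketch; the white-noise input is an assumed example, so the flags should mostly read "not significant"):

```python
import numpy as np

def autocorr(x, k):
    """Lag-k autocorrelation coefficient r_k, as defined above."""
    d = np.asarray(x, dtype=float) - np.mean(x)
    return (d[:-k] * d[k:]).sum() / (d ** 2).sum()

rng = np.random.default_rng(5)
x = rng.normal(size=200)        # white noise: no true autocorrelation
bound = 2 / np.sqrt(len(x))

for k in (1, 2, 3):
    rk = autocorr(x, k)
    verdict = "significant" if abs(rk) > bound else "not significant"
    print(f"r_{k} = {rk:+.3f} ({verdict}; bound = {bound:.3f})")
```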

Applications of Autocorrelation in Time Series

There are several applications of autocorrelation in Time Series Data. Some of them are described below.

  • Autocorrelation analysis is widely used in fluorescence correlation spectroscopy.
  • Autocorrelation is used to measure optical spectra and the very short-duration light pulses produced by lasers.
  • Autocorrelation is used to analyze dynamic light scattering data for the determination of the particle size distributions of nanometer-sized particles in a fluid. A laser shining into the mixture produces a speckle pattern. The autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the fluid viscosity, the sizes of the particles can be calculated using Autocorrelation.
  • The small-angle X-ray scattering intensity of a nano-structured system is the Fourier transform of the spatial autocorrelation function of the electron density.
  • In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
  • In signal processing, autocorrelation can provide information about repeating events such as musical beats or pulsar frequencies, but it cannot tell the position in time of the beat. It can also be used to estimate the pitch of a musical tone.
  • In music recording, autocorrelation is used as a pitch detection algorithm before vocal processing, as a distortion effect, or to correct pitch inaccuracies.
  • In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population.
  • In astrophysics, auto-correlation is used to study and characterize the spatial distribution of galaxies in the Universe and multi-wavelength observations of Low Mass X-ray Binaries.
  • In an analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination.

Further Reading: Autocorrelation in time series