Goldfeld-Quandt Test Example (2020)

Data is taken from the Economic Survey of Pakistan 1991-1992. The data file link is at the end of the post “Goldfeld-Quandt Test Example for the Detection of Heteroscedasticity”.

Read about the Goldfeld-Quandt Test in detail by clicking the link “Goldfeld-Quandt Test: Comparison of Variances of Error Terms“.

Goldfeld-Quandt Test Example

For an illustration of the Goldfeld-Quandt Test Example, the data given in the file should be divided into two sub-samples after dropping (removing/deleting) the middle five observations.

Sub-sample 1 consists of data from 1959-60 to 1970-71.

Sub-sample 2 consists of data from 1976-77 to 1987-88.

Sub-sample 1 is highlighted in green, sub-sample 2 is highlighted in blue, and the middle five observations that are to be deleted are highlighted in red.


The Step-by-Step Procedure to Conduct the Goldfeld-Quandt Test

Step 1: Order or Rank the observations according to the value of $X_i$. (Note that observations are already ranked.)

Step 2: Omit $c$ central observations. We removed roughly one-sixth of the observations (the five middle ones) from the middle of the data.

Step 3: Fit OLS regression on both samples separately and obtain the Residual Sum of Squares (RSS) for each sub-sample.

The estimated regressions for the two sub-samples are:

Sub-sample 1: $\hat{C}_1 = 1010.096 + 0.849 \text{Income}$

Sub-sample 2: $\hat{C}_2 = -244.003 + 0.88067 \text{Income}$

Now compute the Residual Sum of Squares for both sub-samples.

The Residual Sum of Squares for sub-sample 1 is $RSS_1=2532224$.

The Residual Sum of Squares for sub-sample 2 is $RSS_2=10339356$.

The F-statistic is $\lambda=\frac{RSS_2/df_2}{RSS_1/df_1}=\frac{10339356}{2532224}=4.083$, where $df_1=df_2=10$ (each sub-sample has 12 observations and 2 estimated parameters).

The critical value of $F(10, 10)$ at a 5% level of significance is 2.98.

Since the computed F value is greater than the critical value, heteroscedasticity exists in this case; that is, the variance of the error term is not constant but depends on the independent variable, GNP.

Your assignment is to perform the Goldfeld-Quandt Test using any statistical software and confirm the results.
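As a starting point, the test statistic reported above can be reproduced with a few lines of code. This is a minimal sketch: the RSS values and degrees of freedom are taken from this post, and the helper function name is ours.

```python
# Sketch: Goldfeld-Quandt test statistic from the two sub-sample RSS values
# reported in the post (RSS1 = 2532224, RSS2 = 10339356, 10 df each).

def goldfeld_quandt(rss1, rss2, df1, df2):
    """Return the GQ test statistic (larger RSS in the numerator)."""
    return (rss2 / df2) / (rss1 / df1)

lam = goldfeld_quandt(rss1=2532224, rss2=10339356, df1=10, df2=10)
print(round(lam, 3))   # 4.083
print(lam > 2.98)      # True: reject the null of homoscedasticity at the 5% level
```

Because both sub-samples have the same degrees of freedom here, the statistic reduces to the simple ratio $RSS_2/RSS_1$.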

Download the data file by clicking the link “GNP and consumption expenditure data“.

Learn about White’s Test of Heteroscedasticity


Learn R Programming

Online Test Preparation MCQS with Answers

First Order Autocorrelation (2020)

To understand first order autocorrelation, consider the multiple regression model described below:

$$Y_t=\beta_1+\beta_2 X_{2t}+\beta_3 X_{3t}+\cdots+\beta_k X_{kt}+u_t,$$

Assume that in the model above, the current observation of the error term ($u_t$) is a function of the previous (lagged) observation of the error term ($u_{t-1}$). That is,

\begin{align*}
u_t = \rho u_{t-1} + \varepsilon_t, \tag*{eq 1}
\end{align*}

where $\rho$ is the parameter describing the relationship between successive observations of the error term $u_t$, and $\varepsilon_t$ is a stochastic error term that is iid (independently and identically distributed). It satisfies the standard OLS assumptions:

\begin{align*}
E(\varepsilon_t) &=0\\
Var(\varepsilon_t) &=\sigma_\varepsilon^2\\
Cov(\varepsilon_t, \varepsilon_{t+s} ) &=0 \quad \text{for } s\neq 0
\end{align*}

Note that if $|\rho|=1$, the variance and covariances of $u_t$ derived below are undefined.

The scheme (eq 1) is known as a Markov first-order autoregressive scheme, usually denoted by AR(1). Equation (eq 1) can be interpreted as the regression of $u_t$ on itself lagged one period. It is first-order because only $u_t$ and its immediate past value are involved. Note that $Var(u_t)$ is still homoscedastic under the AR(1) scheme.

The coefficient $\rho$ is called the first order autocorrelation coefficient (also called the coefficient of autocovariance) and takes values between $-1$ and $1$ ($|\rho|<1$). The size of $\rho$ determines the strength of the autocorrelation (serial correlation). There are three different cases:

  1. If $\rho$ is zero, then there is no autocorrelation because $u_t=\varepsilon_t$.
  2. If $\rho$ approaches $1$, the previous observation of the error term ($u_{t-1}$) becomes more important in determining the value of the current error term ($u_t$), and therefore, greater positive autocorrelation exists. A negative error tends to be followed by a negative error, and a positive error by a positive one.
  3. If $\rho$ approaches -1, there is a very high degree of negative autocorrelation. The signs of the error term tend to switch signs from negative to positive and vice versa in consecutive observations.
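These three cases can be seen by simulating an AR(1) error process. The sketch below uses assumed sample sizes and seeds, and the helper names `simulate_ar1` and `lag1_autocorr` are ours:

```python
import random

def simulate_ar1(rho, n, seed=1):
    """Simulate u_t = rho * u_{t-1} + e_t with standard-normal shocks e_t."""
    rng = random.Random(seed)
    u, series = 0.0, []
    for _ in range(n):
        u = rho * u + rng.gauss(0, 1)
        series.append(u)
    return series

def lag1_autocorr(x):
    """Sample first order autocorrelation coefficient of a series."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

pos = simulate_ar1(rho=0.9, n=2000)    # errors cluster: same sign persists
neg = simulate_ar1(rho=-0.9, n=2000)   # errors flip sign period to period
print(round(lag1_autocorr(pos), 2))    # close to 0.9
print(round(lag1_autocorr(neg), 2))    # close to -0.9
```

With $\rho=0.9$ the sample lag-1 autocorrelation is strongly positive; with $\rho=-0.9$ it is strongly negative, matching cases 2 and 3 above.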

First Order Autocorrelation AR(1)

\begin{align*}
u_t &= \rho u_{t-1}+\varepsilon_t\\
E(u_t) &= \rho E(u_{t-1})+ E(\varepsilon_t)=0\\
Var(u_t) &= \rho^2\, Var(u_{t-1})+Var(\varepsilon_t)
\end{align*}

because the $u$’s and $\varepsilon$’s are uncorrelated. Under stationarity, $Var(u_t)=Var(u_{t-1})=\sigma^2$ and $Var(\varepsilon_t)=\sigma_\varepsilon^2$, so

\begin{align*}
\sigma^2 &= \rho^2 \sigma^2+\sigma_\varepsilon^2\\
\Rightarrow \sigma^2(1-\rho^2) &= \sigma_\varepsilon^2\\
\Rightarrow Var(u_t) &= \sigma^2=\frac{\sigma_\varepsilon^2}{1-\rho^2}
\end{align*}

For the covariance, multiply equation (eq 1) by $u_{t-1}$ and take expectations on both sides:

\begin{align*}
u_t\cdot u_{t-1} &= \rho u_{t-1} \cdot u_{t-1} + \varepsilon_t \cdot u_{t-1}\\
cov(u_t, u_{t-1}) = E(u_t u_{t-1}) &= E[\rho u_{t-1}^2 + u_{t-1}\varepsilon_t ]\\
&=\rho \frac{\sigma_\varepsilon^2}{1-\rho^2}\tag*{$\because E(u_{t-1}^2)=Var(u_{t-1})=\frac{\sigma_\varepsilon^2}{1-\rho^2}$ and $E(u_{t-1}\varepsilon_t)=0$}
\end{align*}

Similarly,
\begin{align*}
cov(u_t,u_{t-2}) &=\rho^2 \frac{\sigma_\varepsilon^2}{1-\rho^2}\\
cov(u_t, u_{t-s}) &= \rho^s \frac{\sigma_\varepsilon^2}{1-\rho^2}
\end{align*}
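The variance and covariance results above can be checked numerically. The following is a Monte Carlo sketch with assumed values $\rho=0.5$ and $\sigma_\varepsilon=1$:

```python
import random

# Monte Carlo check of the AR(1) results, with rho = 0.5 and sigma_e = 1:
#   Var(u_t)          = sigma_e^2 / (1 - rho^2) = 1 / 0.75 ~ 1.3333
#   Cov(u_t, u_{t-1}) = rho   * Var(u_t)                   ~ 0.6667
#   Cov(u_t, u_{t-2}) = rho^2 * Var(u_t)                   ~ 0.3333

rho, n = 0.5, 200_000
rng = random.Random(42)
u, series = 0.0, []
for _ in range(n):
    u = rho * u + rng.gauss(0, 1)   # u_t = rho*u_{t-1} + epsilon_t
    series.append(u)

m = sum(series) / n

def autocov(s):
    """Sample autocovariance of the simulated series at lag s."""
    return sum((series[t] - m) * (series[t - s] - m) for t in range(s, n)) / n

var_u, cov1, cov2 = autocov(0), autocov(1), autocov(2)
print(round(var_u, 2), round(cov1, 2), round(cov2, 2))
```

With a sample this large, the three estimates should land close to the theoretical values $1.33$, $0.67$, and $0.33$.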

The strength and direction of the correlation (positive or negative) and its distance from zero determine the significance of the first-order autocorrelation. Values close to $+1$ or $-1$ indicate strong positive or negative autocorrelation, respectively. A value close to zero suggests little to no autocorrelation.

Software such as R, Python, and MS Excel has built-in functions to calculate autocorrelation. Visualizing the ACF (autocorrelation function) is often the preferred way to assess autocorrelation across different lags, not just the first order.

In summary, first order autocorrelation refers to the correlation between a time series and its own values lagged by one time period. It measures how strongly a variable in a time series is related to its immediate past value.
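As a minimal illustration of this definition, the lag-1 autocorrelation can be computed by hand for a short series (the series and the helper name below are ours):

```python
def first_order_autocorr(x):
    """Lag-1 sample autocorrelation: sum of cross-products of deviations
    one period apart, divided by the sum of squared deviations."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

print(first_order_autocorr([1, 2, 3, 4, 5]))   # 0.4
```

For the series $1,\dots,5$ the deviations from the mean are $-2,-1,0,1,2$, so the numerator is $4$ and the denominator is $10$, giving $0.4$.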

https://itfeature.com


What are the Consequences of Autocorrelation (2020)

Autocorrelation, when ignored, can lead to several issues in analyzing data, particularly in statistical models. In this post, we discuss some important consequences of the existence of autocorrelation in the data. The consequences for the OLS estimators in the presence of autocorrelation can be summarized as follows:


Consequences of Autocorrelation on OLS Estimators

  • When the disturbance terms are serially correlated, the OLS estimators of the $\hat{\beta}$s are still unbiased and consistent, but the optimality property (minimum variance) is no longer satisfied. This makes it harder to determine whether the estimated effect of a variable is truly significant.
  • The OLS estimators will be inefficient and, therefore, no longer BLUE. Inefficiency means there are better ways to estimate the model parameters that would produce more precise results with lower variance.
  • The estimated variances of the regression coefficients will be biased and inconsistent; with positive autocorrelation the usual OLS formulas typically understate the true variances, so hypothesis testing is no longer valid. In most cases, $R^2$ will be overestimated (indicating a better fit than the one that truly exists), and the t- and F-statistics will tend to be inflated. One might therefore reject a true null hypothesis (concluding that a relationship exists when it does not) or fail to reject a false one (missing a relationship that does exist).
  • The variance of the random term $u$ may be underestimated if the $u$’s are autocorrelated. That is, the estimate $\hat{\sigma}^2=\frac{\sum \hat{u}_i^2}{n-2}$ is likely to underestimate the true $\sigma^2$.
  • Among the consequences of autocorrelation, another is that if the disturbance terms are autocorrelated, then the OLS estimators are not asymptotically efficient.

Therefore, autocorrelation may lead to misleading results and unreliable statistical tests. If autocorrelation is suspected in the data being analyzed, then use different statistical techniques to address it and improve the validity of your analysis.
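The understatement of standard errors described above can be demonstrated by simulation. This is an illustrative sketch with assumed parameter values (AR(1) errors with $\rho=0.9$, a trending regressor, true slope $\beta=2$), not an analysis from the original post:

```python
import random

# Simulation: regress y on a trending x when the true errors follow AR(1)
# with rho = 0.9. The usual OLS standard error of the slope then understates
# the slope's true sampling variability across repeated samples.

def one_draw(rng, n=50, rho=0.9, beta=2.0):
    x = list(range(n))
    u, y = 0.0, []
    for t in range(n):
        u = rho * u + rng.gauss(0, 1)          # AR(1) disturbance
        y.append(1.0 + beta * x[t] + u)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((x[t] - xbar) * (y[t] - ybar) for t in range(n)) / sxx
    a = ybar - b * xbar
    rss = sum((y[t] - a - b * x[t]) ** 2 for t in range(n))
    se_b = (rss / (n - 2) / sxx) ** 0.5        # textbook OLS standard error
    return b, se_b

rng = random.Random(7)
draws = [one_draw(rng) for _ in range(500)]
slopes = [b for b, _ in draws]
mean_b = sum(slopes) / len(slopes)
emp_sd = (sum((b - mean_b) ** 2 for b in slopes) / (len(slopes) - 1)) ** 0.5
avg_se = sum(se for _, se in draws) / len(draws)
print(avg_se < emp_sd)   # True: the reported SE is too small on average
```

The slope estimate stays centered on the true value (unbiasedness survives), but the average reported standard error is far smaller than the empirical spread of the estimates, which is exactly why t-statistics become inflated.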


Learn about Autocorrelation and Reasons for Autocorrelations

Learn more about Autocorrelation on Wikipedia


Autocorrelation Reasons

This post is about the reasons for autocorrelation that may occur in time series data. To learn and understand what autocorrelation is, see the post Introduction to Autocorrelation.


There are several reasons for Autocorrelation. Some of the most important autocorrelation reasons are:

i) Inertia

Inertia or sluggishness in economic time series is a major cause of autocorrelation. For example, GNP, production, price indices, employment, and unemployment exhibit business cycles. Starting at the bottom of a recession, when the economic recovery starts, most of these series start moving upward. In this upswing, the value of a series at one point in time is greater than its previous values. These successive periods (observations) are likely to be interdependent.

ii) Omitted Variables Specification Bias

The residuals (which are proxies for $u_i$) may suggest that some variables that were originally candidates but were not included in the model (for a variety of reasons) should be included. This is the case of excluded variable specification bias. Often the inclusion of such variables may remove the correlation pattern observed among the residuals. For example, the model

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 X_{4t} + u_t,$$

is correct. However, running

$$Y_t=\beta_1 + \beta_2 X_{2t} + \beta_3X_{3t}+v_t,\quad \text{where $v_t=\beta_4X_{4t}+u_t$},$$

the error or disturbance term will reflect a systematic pattern, creating apparent autocorrelation due to the exclusion of the $X_{4t}$ variable from the model. The effect of $X_{4t}$ is captured by the disturbances $v_t$.

iii) Model Specification: Incorrect Functional Form

Autocorrelation can also occur due to the misspecification of the model. Suppose that $Y_t$ is related to $X_{2t}$ by the quadratic relationship

$$Y_t=\beta_1 + \beta_2 X_{2t}^2+u_t,$$

but we wrongly estimate a straight-line relationship ($Y_t=\beta_1 + \beta_2X_{2t}+u_t$). In this case, the error term obtained from the straight-line specification will depend on $X_{2t}^2$. If $X_{2t}$ is increasing or decreasing over time, $u_t$ will also be increasing or decreasing over time. Therefore, an incorrect functional form is another important reason for autocorrelation.
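This effect is easy to demonstrate. The sketch below uses assumed data ($t=1,\dots,20$ with $y=t^2$ and no random noise at all): fitting a straight line to a truly quadratic relationship leaves residuals that trace a smooth curve, so the ordered residuals are strongly positively autocorrelated.

```python
# Fit a straight line to data that are truly quadratic in t; the ordered
# residuals then form a parabola and show strong positive first-order
# autocorrelation even though no random noise was added.

n = 20
t = list(range(1, n + 1))
y = [ti ** 2 for ti in t]                     # true relation is quadratic

tbar, ybar = sum(t) / n, sum(y) / n
sxx = sum((ti - tbar) ** 2 for ti in t)
b = sum((t[i] - tbar) * (y[i] - ybar) for i in range(n)) / sxx
a = ybar - b * tbar
resid = [y[i] - a - b * t[i] for i in range(n)]   # residuals trace a parabola

m = sum(resid) / n                            # zero by construction
num = sum((resid[i] - m) * (resid[i - 1] - m) for i in range(1, n))
den = sum((r - m) ** 2 for r in resid)
print(round(num / den, 2))                    # 0.75: strong positive autocorrelation
```

The lag-1 autocorrelation of the residuals comes out at $0.75$ purely from the wrong functional form, with no stochastic disturbance involved.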

iv) Effect of Cobweb Phenomenon

The quantity supplied in period $t$ of many agricultural commodities depends on their price in period $t-1$. This is called the Cobweb phenomenon, because the decision to plant a crop in period $t$ is influenced by the price of the commodity in that period, while the actual supply of the commodity becomes available in period $t+1$.

\begin{align*}
QS_{t+1} &= \alpha + \beta P_t + \varepsilon_{t+1}\\
\text{or }\quad QS_t &= \alpha + \beta P_{t-1} + \varepsilon_t
\end{align*}

This supply model indicates that if the price in period $t$ is higher, the farmer will decide to produce more in period $t+1$. Because of the increased supply in period $t+1$, $P_{t+1}$ will be lower than $P_t$. As a result of the lower price in period $t+1$, the farmer will produce less in period $t+2$ than in period $t+1$. Thus, the disturbances in the case of the Cobweb phenomenon are not expected to be random; rather, they exhibit a systematic pattern and thus cause a problem of autocorrelation.
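The alternating pattern described above can be traced directly from the supply and demand equations. The sketch below uses illustrative parameter values of our own choosing (a linear demand curve is an added assumption, not part of the post):

```python
# Cobweb dynamics. Demand: Qd_t = a - b*P_t. Supply: Qs_t = c + d*P_{t-1}.
# Market clearing Qd_t = Qs_t gives P_t = (a - c - d*P_{t-1}) / b, so the
# deviation of price from equilibrium satisfies
#   (P_t - P*) = -(d/b) * (P_{t-1} - P*)
# and flips sign every period -- a negative-autocorrelation pattern.

a, b, c, d = 100.0, 1.0, 10.0, 0.8
p_star = (a - c) / (b + d)              # equilibrium price (here 50)
p = p_star + 10.0                       # start above equilibrium
deviations = []
for _ in range(10):
    p = (a - c - d * p) / b
    deviations.append(p - p_star)

signs = [dv > 0 for dv in deviations]
print(all(signs[i] != signs[i + 1] for i in range(len(signs) - 1)))   # True
```

Because $d/b<1$ here, the oscillations shrink toward equilibrium, but every consecutive pair of deviations has opposite signs, which is the systematic (negatively autocorrelated) disturbance pattern the Cobweb phenomenon produces.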

v) Effect of Lagged Relationship

Many times in business and economic research, the lagged values of the dependent variable are used as explanatory variables. For example, to study the effect of tastes and habits on consumption in period $t$, consumption in period $t-1$ is used as an explanatory variable, since consumers do not change their consumption habits readily for psychological, technological, or institutional reasons. The consumption function will be

$$C_t = \alpha + \beta Y_t + \gamma C_{t-1} + \varepsilon_t,$$
where $C$ is consumption and $Y$ is income.

If the lagged terms ($C_{t-1}$) are not included in the above consumption function, the resulting error term will reflect a systematic pattern due to the impact of habits and tastes on current consumption and thereby autocorrelation will be present.

vi) Data Manipulation

Data manipulation is another important reason for autocorrelation. Often, raw data are manipulated in empirical analysis. For example, in time-series regression involving quarterly data, such data are usually derived from the monthly data by simply adding three monthly observations and dividing the sum by 3. This averaging introduces smoothness into the data by dampening the fluctuations in the monthly data. This smoothness may itself lead to a systematic pattern in the disturbances, thereby introducing autocorrelation.

Interpolation or extrapolation of data is also another source of data manipulation.
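The smoothing effect can be made concrete with a small simulation. The smoothing scheme below (an overlapping 3-month moving average) is an assumption for illustration, not a procedure from the post:

```python
import random

# Averaging purely random, uncorrelated monthly data over a 3-month moving
# window produces a smoothed series with substantial first-order
# autocorrelation (theoretically 2/3 for a 3-term moving average).

rng = random.Random(0)
monthly = [rng.gauss(0, 1) for _ in range(5000)]
smoothed = [(monthly[i] + monthly[i + 1] + monthly[i + 2]) / 3
            for i in range(len(monthly) - 2)]

def lag1(x):
    """Sample first-order autocorrelation."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    return num / sum((v - m) ** 2 for v in x)

print(round(lag1(monthly), 2))    # near 0: raw series is uncorrelated
print(round(lag1(smoothed), 2))   # near 0.67: smoothing induced autocorrelation
```

The raw monthly series has essentially zero lag-1 autocorrelation, while the smoothed series shows strong positive autocorrelation created purely by the manipulation of the data.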

vii) Non-Stationarity

Both $Y$ and $X$ may be non-stationary, and therefore the error $u$ will also be non-stationary. In this case, the error term will exhibit autocorrelation.


This is all about autocorrelation reasons.


Read more about autocorrelation.