
Park Glejser Test: Numerical Example

To detect the presence of heteroscedasticity using the Park Glejser test, consider the following data.

| Year | 1992 | 1993 | 1994 | 1995 | 1996 | 1997 | 1998 |
|------|------|------|------|------|------|------|------|
| $Y_t$ | 37 | 48 | 45 | 36 | 25 | 55 | 63 |
| $X_t$ | 4.5 | 6.5 | 3.5 | 3 | 2.5 | 8.5 | 7.5 |

The step-by-step procedure for conducting the Park-Glejser test is:

Step 1: Estimate the regression equation

$$\hat{Y}_i = 19.8822 + 4.7173X_i$$

Obtain the residuals from this estimated regression equation:

Residuals: $-4.1103$, $-2.5450$, $8.6071$, $1.9657$, $-6.6756$, $-4.9797$, $7.7377$

Take the absolute values of these residuals and use them as the dependent variable in the different functional forms suggested by Glejser.

Step 2: Regress the absolute values of the residuals, $|\hat{u}_i|$, on the $X$ variable that is thought to be closely associated with $\sigma_i^2$. We will use the following functional forms.

1) Functional form: $|\hat{u}_i| = \beta_1 + \beta_2 X_i + v_i$
   Result: $|\hat{u}_i| = 5.2666 - 0.00681X_i$, $\quad R^2=0.00004$, $\quad t_{cal} = -0.014$

2) Functional form: $|\hat{u}_i| = \beta_1 + \beta_2 \sqrt{X_i} + v_i$
   Result: $|\hat{u}_i| = 5.445 - 0.0962\sqrt{X_i}$, $\quad R^2=0.000389$, $\quad t_{cal} = -0.04414$

3) Functional form: $|\hat{u}_i| = \beta_1 + \beta_2 \frac{1}{X_i} + v_i$
   Result: $|\hat{u}_i| = 4.9124 + 1.3571\frac{1}{X_i}$, $\quad R^2=0.00332$, $\quad t_{cal} = 0.12914$

4) Functional form: $|\hat{u}_i| = \beta_1 + \beta_2 \frac{1}{\sqrt{X_i}} + v_i$
   Result: $|\hat{u}_i| = 4.7375 + 1.0428\frac{1}{\sqrt{X_i}}$, $\quad R^2=0.00209$, $\quad t_{cal} = 0.10252$

Since none of the residual regressions is statistically significant, the hypothesis of heteroscedasticity is rejected. Therefore, we can say that there is no relationship between the absolute values of the residuals ($|\hat{u}_i|$) and the explanatory variable $X$.
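The two steps above can be reproduced with a short numpy sketch using the example data. The `ols` helper is written here for illustration; it is not a library routine.

```python
import numpy as np

# Example data (Yt and Xt for 1992-1998)
Y = np.array([37, 48, 45, 36, 25, 55, 63], dtype=float)
X = np.array([4.5, 6.5, 3.5, 3.0, 2.5, 8.5, 7.5])

def ols(x, y):
    """Two-variable OLS: returns intercept, slope, residuals, and slope t-value."""
    n = len(x)
    xd = x - x.mean()
    b2 = np.sum(xd * (y - y.mean())) / np.sum(xd**2)
    b1 = y.mean() - b2 * x.mean()
    resid = y - (b1 + b2 * x)
    s2 = np.sum(resid**2) / (n - 2)           # residual variance
    t = b2 / np.sqrt(s2 / np.sum(xd**2))      # t-value of the slope
    return b1, b2, resid, t

# Step 1: estimate Y on X and keep the residuals
b1, b2, resid, _ = ols(X, Y)                  # b1 ~ 19.8822, b2 ~ 4.7173

# Step 2: regress |u| on each Glejser regressor
abs_u = np.abs(resid)
for z in (X, np.sqrt(X), 1 / X, 1 / np.sqrt(X)):
    a1, a2, _, t = ols(z, abs_u)
    print(f"intercept={a1:.4f}, slope={a2:.5f}, t={t:.3f}")
```

None of the four slope t-values is significant, matching the table above.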


Heteroscedasticity-Corrected Standard Errors ($\sigma_i^2 $ unknown)

The true $\sigma_i^2$ are rarely known. However, there is a way of obtaining consistent estimates of the variances and covariances of the OLS estimators even in the presence of heteroscedasticity.

White’s Heteroscedasticity-Consistent Variances and Standard Errors:
White’s heteroscedasticity-corrected standard errors are also known as robust standard errors. They may be larger (or smaller) than the OLS standard errors, and therefore the estimated $t$-values may be smaller (or larger) than those obtained by OLS.

Comparing the OLS output with White’s heteroscedasticity-corrected standard errors may be useful to see whether heteroscedasticity is a serious problem in a particular set of data.
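As a minimal sketch of such a comparison (simulated data, not from the article), the conventional OLS variance $s^2(X'X)^{-1}$ can be set against White's HC0 sandwich estimator $(X'X)^{-1}X'\,\mathrm{diag}(\hat{u}_i^2)\,X(X'X)^{-1}$:

```python
import numpy as np

# Simulated heteroscedastic data: error sd grows with X (assumption for illustration)
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, x)        # sd of the error proportional to X

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Conventional OLS standard errors: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - 2)
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# White HC0 sandwich: (X'X)^{-1} X' diag(u_hat^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(se_ols, se_white)                     # the two sets of standard errors differ
```

A large gap between the two sets of standard errors is a practical warning that heteroscedasticity matters for inference in that data set.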

Plausible Assumptions about Heteroscedasticity Patterns:

Assumption 1: The error variance is proportional to $X_i^2$

Error Variance

$$E(u_i^2)=\sigma^2 X_i^2$$
If it is believed (on the basis of graphical methods or the Park and Glejser approaches) that the variance of $u_i$ is proportional to the square of $X_i$, one may transform the original model as follows:

\begin{align}\label{assump1}
\frac{Y_i}{X_i} &=\frac{\beta_1}{X_i} + \beta_2 + \frac{u_i}{X_i} \nonumber \\
&=\beta_1 \frac{1}{X_i} + \beta_2 + v_i,\qquad \qquad (1)
\end{align}

where $v_i$ is the transformed disturbance term, equal to $\frac{u_i}{X_i}$. It can be verified that

\begin{align*}
E(v_i^2) &=E\left(\frac{u_i}{X_i}\right)^2\\
&=\frac{1}{X_i^2}E(u_i^2)=\sigma^2
\end{align*}

Hence, the variance of $v_i$ is now homoscedastic, and one may apply OLS to the transformed equation by regressing $\frac{Y_i}{X_i}$ on $\frac{1}{X_i}$.

Notice that in the transformed regression the intercept term $\beta_2$ is the slope coefficient in the original equation and the slope coefficient $\beta_1$ is the intercept term in the original model. Therefore, to get back to the original model multiply the estimated equation (1) by $X_i$.
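A simulated check of this transformation (hypothetical data; the true values $\beta_1 = 3$ and $\beta_2 = 2$ are made up for illustration): dividing through by $X_i$ turns an error with sd proportional to $X_i$ into a homoscedastic one, and OLS on the transformed variables recovers the original coefficients with the roles of slope and intercept swapped.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 500)
y = 3.0 + 2.0 * x + x * rng.normal(0, 1, 500)   # Var(u_i) = sigma^2 * X_i^2

# Regress Y/X on 1/X: the slope estimates beta_1, the intercept estimates beta_2
slope, intercept = np.polyfit(1 / x, y / x, 1)
print(slope, intercept)   # slope ~ beta_1 = 3, intercept ~ beta_2 = 2
```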

Assumption 2: The Error Variance is Proportional to $X_i$

The square root transformation: $E(u_i^2) = \sigma^2 X_i$

heteroscedasticity-corrected standard errors

If it is believed that the variance of $u_i$ is proportional to $X_i$, then the original model can be transformed as

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \frac{\beta_1}{\sqrt{X_i}} + \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
&=\beta_1 \frac{1}{\sqrt{X_i}} + \beta_2\sqrt{X_i}+v_i,\quad\quad (a)
\end{align*}

where $v_i=\frac{u_i}{\sqrt{X_i}}$ and $X_i>0$

$E(v_i^2)=\sigma^2$ (a homoscedastic situation)

One may proceed to apply OLS on equation (a), regressing $\frac{Y_i}{\sqrt{X_i}}$ on $\frac{1}{\sqrt{X_i}}$ and $\sqrt{X_i}$.

Note that the transformed model (a) has no intercept term. Therefore, use the regression through the origin model to estimate $\beta_1$ and $\beta_2$. To get back the original model simply multiply the equation (a) by $\sqrt{X_i}$.

Consider the case of a zero intercept, that is, $Y_i=\beta_2X_i+u_i$. The transformed model becomes

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
\hat{\beta}_2 &=\frac{\overline{Y}}{\overline{X}}
\end{align*}

Here, the WLS estimator is simply the ratio of the means of the dependent and explanatory variables.
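A quick numerical check with made-up numbers confirms this: regression through the origin of $Y_i/\sqrt{X_i}$ on $\sqrt{X_i}$ gives exactly $\sum Y_i / \sum X_i$, the ratio of the sample means.

```python
import numpy as np

# Hypothetical data for the zero-intercept model
x = np.array([2.0, 4.0, 5.0, 8.0, 10.0])
y = np.array([5.0, 9.0, 11.0, 17.0, 20.0])

ys, xs = y / np.sqrt(x), np.sqrt(x)          # transformed variables
b2_wls = np.sum(xs * ys) / np.sum(xs**2)     # regression through the origin
print(b2_wls, y.mean() / x.mean())           # the two values coincide
```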

Assumption 3: The Error Variance is proportional to the Square of the Mean value of $Y$

$$E(u_i^2)=\sigma^2[E(Y_i)]^2$$

The original model is $Y_i=\beta_1 + \beta_2 X_i + u_i$ and $E(Y_i)=\beta_1 + \beta_2X_i$

The transformed model

\begin{align*}
\frac{Y_i}{E(Y_i)}&=\frac{\beta_1}{E(Y_i)} + \beta_2 \frac{X_i}{E(Y_i)} + \frac{u_i}{E(Y_i)}\\
&=\beta_1\left(\frac{1}{E(Y_i)}\right) + \beta_2 \frac{X_i}{E(Y_i)} + v_i, \quad \quad (b)
\end{align*}

where $v_i=\frac{u_i}{E(Y_i)}$, and $E(v_i^2)=\sigma^2$ (a situation of homoscedasticity).

Note that the transformed model (b) is infeasible, as $E(Y_i)$ depends on $\beta_1$ and $\beta_2$, which are unknown. However, $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2X_i$ is an estimator of $E(Y_i)$. Therefore, we proceed in two steps.

Step 1: Run the usual OLS regression ignoring the presence of heteroscedasticity problem and obtain $\hat{Y}_i$.

Step 2: Use the estimate of $\hat{Y}_i$ to transform the model as

\begin{align*}
\frac{Y_i}{\hat{Y}_i}&=\frac{\beta_1}{\hat{Y}_i} + \beta_2 \frac{X_i}{\hat{Y}_i} + \frac{u_i}{\hat{Y}_i}\\
&=\beta_1\left(\frac{1}{\hat{Y}_i}\right) + \beta_2 \frac{X_i}{\hat{Y}_i} + v_i, \quad \quad (c)
\end{align*}

where $v_i=\frac{u_i}{\hat{Y}_i}$.

Although $\hat{Y}_i$ is not exactly $E(Y_i)$, it is a consistent estimator (as the sample size increases indefinitely, $\hat{Y}_i$ converges to the true $E(Y_i)$). Therefore, the transformed model (c) will perform well if the sample size is reasonably large.
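The two-step procedure can be sketched on simulated data (the coefficients $\beta_1 = 5$ and $\beta_2 = 1.5$ and the 20% error scale are assumptions for this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 200)
mean_y = 5.0 + 1.5 * x
y = mean_y * (1 + rng.normal(0, 0.2, 200))   # error sd proportional to E(Y_i)

# Step 1: plain OLS, ignoring heteroscedasticity, to get fitted values
b2, b1 = np.polyfit(x, y, 1)
y_hat = b1 + b2 * x

# Step 2: OLS of Y/Y_hat on 1/Y_hat and X/Y_hat (no intercept in model (c))
Z = np.column_stack([1 / y_hat, x / y_hat])
coef, *_ = np.linalg.lstsq(Z, y / y_hat, rcond=None)
print(coef)   # roughly recovers beta_1 = 5 and beta_2 = 1.5
```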

Assumption 4: Log Transformation

A log transformation

$$\ln Y_i = \beta_1 + \beta_2 \ln X_i + u_i \tag*{log model-1}$$ usually reduces heteroscedasticity when compared to the regression $$Y_i=\beta_1+\beta_2X_i + u_i $$

This is because the log transformation compresses the scales in which the variables are measured, reducing a tenfold difference between two values to about a twofold difference. For example, 80 is 10 times the number 8, but ln(80) = 4.3820 is only about twice as large as ln(8) = 2.0794.

By taking the log transformation, the slope coefficient $\beta_2$ measures the elasticity of $Y$ with respect to $X$ (that is, the percentage change in $Y$ for the percentage change in $X$).

If $Y$ is consumption and $X$ is income in the model (log model-1) then $\beta_2$ measures income elasticity, while in the original model (model without any transformation: OLS model), $\beta_2$ measures only the rate of change of mean consumption for a unit change in income.

Note that the log transformation is not applicable if some of the $Y$ and $X$ values are zero or negative.
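The elasticity interpretation can be verified on simulated data (the constant elasticity 0.7 and the lognormal error are assumptions for this sketch): the slope of the log-log regression recovers the elasticity directly.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 100, 500)
y = 2.0 * x**0.7 * np.exp(rng.normal(0, 0.1, 500))  # elasticity of Y w.r.t. X is 0.7

# Slope of ln(Y) on ln(X) estimates the elasticity
b2, b1 = np.polyfit(np.log(x), np.log(y), 1)
print(b2)   # close to 0.7
```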

Note that in all of these assumptions about the nature of heteroscedasticity, we are essentially speculating about the nature of $\sigma_i^2$.

  • There may be a problem of spurious correlation. For example, in the model $$Y_i = \beta_1+\beta_2X_i + u_i,$$ the $Y$ and $X$ variables may not be correlated, but in the transformed model $$\frac{Y_i}{X_i}=\beta_1\left(\frac{1}{X_i}\right) + \beta_2 + v_i,$$ $\frac{Y_i}{X_i}$ and $\frac{1}{X_i}$ are often found to be correlated.
  • Since $\sigma_i^2$ are not directly known, we estimate them from one or more of the transformations. All testing procedures are valid only in large samples; therefore, be careful when interpreting results based on the various transformations in small or finite samples.
  • For a model with more than one explanatory variable, one may not know in advance which of the $X$ variables should be chosen for transforming the data.

In case of Heteroscedasticity, proof of $E(\hat{\sigma}^2)\ne \sigma^2$

In this post, we will prove mathematically that $E(\hat{\sigma}^2)\ne \sigma^2$ when there is heteroscedasticity in the data.

For the proof of $E(\hat{\sigma}^2)\ne \sigma^2$, consider the two-variable linear regression model in the presence of heteroscedasticity,

\begin{align}
Y_i=\beta_1 + \beta_2 X_i + u_i, \quad\quad (eq1)
\end{align}

where $Var(u_i)=\sigma_i^2$ (Case of heteroscedasticity)

as

\begin{align}
\hat{\sigma}^2 &= \frac{\sum \hat{u}_i^2 }{n-2}\\
&= \frac{\sum (Y_i - \hat{Y}_i)^2 }{n-2}\\
&=\frac{\sum(\beta_1 + \beta_2 X_i + u_i - \hat{\beta}_1 -\hat{\beta}_2 X_i )^2}{n-2}\\
&=\frac{\sum \left( -(\hat{\beta}_1-\beta_1) - (\hat{\beta}_2 - \beta_2)X_i + u_i \right)^2 }{n-2}\quad\quad (eq2)
\end{align}

Noting that the OLS estimator of the intercept satisfies $\hat{\beta}_1 = \overline{Y} - \hat{\beta}_2\overline{X}$, and that averaging (eq1) over the sample gives $\overline{Y} = \beta_1 + \beta_2\overline{X} + \overline{u}$, we obtain

\begin{align*}
\hat{\beta}_1 - \beta_1 &= \overline{Y} - \hat{\beta}_2\overline{X} - \beta_1\\
&= (\beta_1 + \beta_2\overline{X} + \overline{u}) - \hat{\beta}_2\overline{X} - \beta_1\\
&= -(\hat{\beta}_2-\beta_2)\overline{X}+\overline{u}
\end{align*}

Substituting this into (eq2) and taking expectations on both sides:

\begin{align}
E(\hat{\sigma}^2) &= \frac{1}{n-2} E\left[\sum\left( -\left(-(\hat{\beta}_2 - \beta_2) \overline{X} + \overline{u}\right) - (\hat{\beta}_2-\beta_2)X_i + u_i \right)^2\right]\\
&=\frac{1}{n-2}E\left[\sum\left((\hat{\beta}_2-\beta_2)\overline{X} -\overline{u} - (\hat{\beta}_2-\beta_2)X_i + u_i \right)^2\right]\\
&=\frac{1}{n-2} E\left[\sum\left( -(\hat{\beta}_2 - \beta_2)(X_i-\overline{X}) + (u_i-\overline{u})\right)^2\right]\\
&= \frac{1}{n-2}\left[-\sum x_i^2\, Var(\hat{\beta}_2) + E\sum(u_i-\overline{u})^2 \right]\\
&=\frac{1}{n-2} \left[ -\frac{\sum x_i^2 \sigma_i^2}{\sum x_i^2} + \frac{(n-1)\sum \sigma_i^2}{n} \right]
\end{align}

where $x_i = X_i - \overline{X}$. In the fourth line, expanding the square gives $\sum x_i^2\,Var(\hat{\beta}_2)$ from the squared first term and $-2\sum x_i^2\,Var(\hat{\beta}_2)$ from the cross term (since $\hat{\beta}_2-\beta_2=\sum x_iu_i/\sum x_i^2$), which combine to $-\sum x_i^2\,Var(\hat{\beta}_2)$; under heteroscedasticity, $Var(\hat{\beta}_2)=\frac{\sum x_i^2\sigma_i^2}{(\sum x_i^2)^2}$ and $E\sum(u_i-\overline{u})^2=\frac{n-1}{n}\sum\sigma_i^2$.

If there is homoscedasticity, so that $\sigma_i^2=\sigma^2$ for each $i$, the expression above reduces to $E(\hat{\sigma}^2)=\sigma^2$.

In the presence of heteroscedasticity, however, the expected value of $\hat{\sigma}^2=\frac{\sum \hat{u}_i^2}{n-2}$ will not equal the true $\sigma^2$.
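The final expression can be evaluated numerically on made-up $X$ values and variances (both are assumptions for this sketch): with heteroscedastic $\sigma_i^2$ the result differs from the average variance, while with constant variance it collapses to $\sigma^2$ exactly.

```python
import numpy as np

def expected_sigma2_hat(X, sig2):
    """Evaluate E(sigma_hat^2) from the final expression in the proof."""
    n = len(X)
    xd = X - X.mean()                       # deviations x_i = X_i - X bar
    return (-np.sum(xd**2 * sig2) / np.sum(xd**2)
            + (n - 1) * np.sum(sig2) / n) / (n - 2)

X = np.arange(1.0, 9.0)                     # X = 1, 2, ..., 8
hetero = expected_sigma2_hat(X, 0.5 * X**2)             # sigma_i^2 = 0.5 X_i^2
homo = expected_sigma2_hat(X, np.full(len(X), 2.0))     # sigma_i^2 = 2 for all i
print(hetero, homo)   # hetero differs from the mean variance; homo equals 2.0
```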



Read more about Remedies for Heteroscedasticity

More on heteroscedasticity on Wikipedia

Numerical Example: Goldfeld-Quandt Test

Data is taken from the Economic Survey of Pakistan 1991-1992. The data file link is at the end of this numerical example of the Goldfeld-Quandt Test.

For an illustration of the Goldfeld-Quandt test, the data given in the file should be divided into two sub-samples after dropping the middle five observations.

Sub-sample 1 consists of data from 1959-60 to 1970-71.

Sub-sample 2 consists of data from 1976-77 to 1987-88.

In the data file, sub-sample 1 is highlighted in green, sub-sample 2 in blue, and the middle observations to be deleted in red.

The Step by Step procedure to conduct the Goldfeld-Quandt test is:

Step 1: Order or Rank the observations according to the value of $X_i$. (Note that observations are already ranked.)

Step 2: Omit $c$ central observations. Here $c = 5$ (about one-sixth of the observations) is removed from the middle of the ordered data.

Step 3: Fit OLS regression on both samples separately and obtain the Residual Sum of Squares (RSS) for each sub-sample.

The Estimated regression for the two sub-samples are:

Sub-sample 1: $\hat{C}_1 = 1010.096 + 0.849 \text{Income}$

Sub-sample 2: $\hat{C}_2 = -244.003 + 0.88067 \text{Income}$

Now compute the Residual Sum of Squares for both sub-samples.

Residual Sum of Squares for Sub-Sample 1 is $RSS_1=2532224$

Residual Sum of Squares for Sub-Sample 2 is $RSS_2=10339356$

The F-statistic is $ \lambda=\frac{RSS_2/df_2}{RSS_1/df_1}=\frac{10339356/10}{2532224/10}=4.083$

The critical value of $F(10, 10)$ at the 5% level of significance is 2.98.

Since the computed F value (4.083) is greater than the critical value (2.98), heteroscedasticity exists in this case; that is, the variance of the error term is not constant but depends on the explanatory variable, GNP.

Your assignment is to perform this Numerical Example of the Goldfeld-Quandt test using any statistical software and confirm the results.
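The steps above can be sketched as a generic numpy function. Since the article's data file is not reproduced here, simulated heteroscedastic data (an assumption for this illustration) is used instead:

```python
import numpy as np

def goldfeld_quandt(x, y, drop=5):
    """Order by x, drop `drop` middle observations, fit OLS on each half,
    and return the ratio (RSS2/df2)/(RSS1/df1)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    k = (len(x) - drop) // 2                # size of each sub-sample
    def rss(xs, ys):
        b2, b1 = np.polyfit(xs, ys, 1)
        e = ys - (b1 + b2 * xs)
        return e @ e, len(xs) - 2           # RSS and residual df
    rss1, df1 = rss(x[:k], y[:k])           # low-X sub-sample
    rss2, df2 = rss(x[-k:], y[-k:])         # high-X sub-sample
    return (rss2 / df2) / (rss1 / df1)

# Simulated data with error variance strongly increasing in X
rng = np.random.default_rng(4)
x = rng.uniform(1, 10, 29)
y = 10 + 0.9 * x + rng.normal(0, x**2)
lam = goldfeld_quandt(x, y)                 # compare with the F(df2, df1) critical value
print(lam)
```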

Download the data file by clicking the link gnp and consumption expenditure data
