Heteroscedasticity Residual Plot (2020)

This post is about heteroscedasticity and its detection using residual plots and the Goldfeld-Quandt test.

Heteroscedasticity and Heteroscedasticity Residual Plot

One of the assumptions of the classical linear regression model is that there is no heteroscedasticity, that is, the error terms have constant variance. Under this assumption (together with the other classical assumptions), the ordinary least squares (OLS) estimators are BLUE (best linear unbiased estimators): their variances are the lowest among all linear unbiased estimators (Gauss-Markov Theorem).

If the assumption of constant variance does not hold, the Gauss-Markov Theorem no longer applies. For heteroscedastic data, regression analysis still provides an unbiased estimate of the relationship between the predictors and the outcome variable, but the OLS estimator is no longer the best one.

As discussed, heteroscedasticity occurs when the error term has non-constant variance. In this case, we can think of the disturbance for each observation as being drawn from a different distribution with a different variance. Stated equivalently, the variance of the observed values of the dependent variable around the regression line is non-constant.

We can think of each observed value of the dependent variable as being drawn from a different conditional probability distribution with a different conditional variance. A general linear regression model with heteroscedastic errors can be expressed as follows:

\begin{align*}
y_i & = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i\\
Var(\varepsilon_i) &= E(\varepsilon_i^2) = \sigma_i^2, \quad i=1,2,\cdots, n
\end{align*}

Note that we have an $i$ subscript attached to sigma squared. This indicates that the disturbance for each of the $n$ units is drawn from a probability distribution that has a different variance.
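As a minimal sketch of this setup (assuming, purely for illustration, a single regressor and an error standard deviation proportional to $X$), heteroscedastic data of this kind can be simulated in R:

```r
# Sketch: simulate a simple linear model with heteroscedastic errors,
# where the error standard deviation grows with the regressor x
# (the coefficients 2 and 3 and sigma_i = 0.5 * x_i are assumed values).
set.seed(123)

n     <- 200
x     <- runif(n, 1, 10)
sigma <- 0.5 * x                # sigma_i differs across observations
e     <- rnorm(n, mean = 0, sd = sigma)
y     <- 2 + 3 * x + e

fit <- lm(y ~ x)
summary(fit)                    # OLS estimates remain unbiased, but are not BLUE
```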

If the error term has non-constant variance, but all other assumptions of the classical linear regression model are satisfied, then the consequences of using the OLS estimator to obtain estimates of the population parameters are:

  • The OLS estimator is still unbiased
  • The OLS estimator is inefficient; that is, it is not BLUE
  • The estimated variances and covariances of the OLS estimates are biased and inconsistent
  • Hypothesis tests are not valid

Detection of Heteroscedasticity: Residual Plot

The residual for the $i$th observation, $\hat{\varepsilon}_i$, is an unbiased estimate of the unknown and unobservable error for that observation, $\varepsilon_i$. Thus the squared residual, $\hat{\varepsilon}_i^2$, can be used as an estimate of the unknown and unobservable error variance, $\sigma_i^2=E(\varepsilon_i^2)$.

One can calculate the squared residuals and then plot them against an explanatory variable that is believed to be related to the error variance. If the error variance may be related to more than one of the explanatory variables, the squared residuals can be plotted against each of these variables. Alternatively, one could plot the squared residuals against the fitted values of the dependent variable obtained from the OLS estimates. Most statistical software has commands to produce these residual plots. It must be emphasized that this is not a formal test for heteroscedasticity; it only suggests whether heteroscedasticity may exist.
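A minimal R sketch of this informal check, using R's built-in `mtcars` data purely as an illustrative (assumed) example, might look like this:

```r
# Sketch: informal residual-plot check for heteroscedasticity.
# mtcars and the mpg ~ wt model are illustrative choices only.
fit  <- lm(mpg ~ wt, data = mtcars)
res2 <- resid(fit)^2            # squared residuals estimate sigma_i^2

# Squared residuals against a suspected explanatory variable ...
plot(mtcars$wt, res2, xlab = "wt", ylab = "Squared residuals")

# ... and against the fitted values of the dependent variable.
plot(fitted(fit), res2, xlab = "Fitted values", ylab = "Squared residuals")
```

A funnel or other systematic pattern in either plot would suggest (but not formally establish) that the error variance is not constant.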

Below are residual plots showing the three typical patterns. The first plot shows a random pattern, which indicates a good fit for a linear model. The other two residual patterns are non-random (U-shaped and inverted U), suggesting a better fit for a non-linear model than for a linear regression model.

[Figures: Heteroscedasticity Residual Plots 1, 2, and 3, illustrating the three patterns described above.]


Goldfeld Quandt Test: Comparison of Variances of Error Terms

The Goldfeld Quandt test is one of two tests proposed in a 1965 paper by Stephen Goldfeld and Richard Quandt. Both parametric and nonparametric tests are described in the paper, but the term “Goldfeld–Quandt test” is usually associated only with the parametric test.
The Goldfeld-Quandt test is frequently used because it is easy to apply when one of the regressors (or another random variable) is considered the proportionality factor of heteroscedasticity. The test is applicable to large samples: the observations must be at least twice as many as the parameters to be estimated. The test also assumes normality and serially independent error terms $u_i$.

The Goldfeld-Quandt test compares the variance of the error terms across discrete subgroups: the data are divided into $h$ subgroups. Usually, the data set is divided into two parts or groups, and hence the test is sometimes called a two-group test.


Before looking at how to perform the Goldfeld-Quandt Test, you may read more about the term Heteroscedasticity, the remedial measures of heteroscedasticity, Tests of Heteroscedasticity, and Generalized Least Square Methods.

Goldfeld Quandt Test Procedure:

The procedure for conducting the Goldfeld-Quandt Test is as follows:

  1. Order the observations according to the magnitude of $X$ (the independent variable considered to be the proportionality factor).
  2. Select arbitrarily a certain number ($c$) of central observations and omit them from the analysis (for example, for $n=30$, about 8 central observations, roughly a quarter to a third of the sample, may be omitted). The remaining $n-c$ observations are divided into two sub-groups of equal size, $\frac{n-c}{2}$: one sub-group contains the small values of $X$ and the other the large values of $X$, the data having been arranged according to the magnitude of $X$.
  3. Fit a separate regression to each of the sub-groups, and obtain the residual sum of squares from each of them.
    Let $RSS_1=\sum e_1^2$ denote the residual sum of squares from the sub-sample of low values of $X$, with $\frac{n-c}{2}-k$ degrees of freedom, where $k$ is the total number of parameters estimated. Similarly, let $RSS_2=\sum e_2^2$ denote the residual sum of squares from the sub-sample of large values of $X$, also with $\frac{n-c}{2}-k$ degrees of freedom.
  4. Compute the ratio $F^* = \frac{RSS_2/df}{RSS_1/df}=\frac{\sum e_2^2/\left(\frac{n-c}{2}-k\right)}{\sum e_1^2/\left(\frac{n-c}{2}-k\right)}$

If the variances differ, $F^*$ will be large; the higher the observed value of the $F^*$ ratio, the stronger the heteroscedasticity of the $u_i$. Under the null hypothesis of homoscedasticity (with normal, serially independent errors), $F^*$ follows an $F$ distribution with $\left(\frac{n-c}{2}-k,\ \frac{n-c}{2}-k\right)$ degrees of freedom, so the observed value can be compared with the corresponding critical value of $F$.
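A possible R sketch of these steps, using simulated data and illustrative values for $n$, $c$, and $k$ (these choices are assumptions, not taken from the text), together with the `gqtest()` function from the `lmtest` package for comparison, is given below:

```r
# Sketch: Goldfeld-Quandt test, both "by hand" and via lmtest::gqtest().
# install.packages("lmtest")    # if the package is not already installed
library(lmtest)

set.seed(42)
n <- 30
x <- sort(runif(n, 1, 10))                # step 1: observations ordered by X
y <- 2 + 3 * x + rnorm(n, sd = 0.5 * x)   # assumed error variance proportional to X

c_omit <- 8                               # step 2: central observations to omit
k      <- 2                               # parameters in each sub-regression
m      <- (n - c_omit) / 2                # size of each sub-group

low  <- 1:m                               # small values of X
high <- (n - m + 1):n                     # large values of X

fit1 <- lm(y[low]  ~ x[low])              # step 3: separate regressions
fit2 <- lm(y[high] ~ x[high])
rss1 <- sum(resid(fit1)^2)
rss2 <- sum(resid(fit2)^2)

F_star <- (rss2 / (m - k)) / (rss1 / (m - k))   # step 4
p_val  <- pf(F_star, df1 = m - k, df2 = m - k, lower.tail = FALSE)
c(F_star = F_star, p_value = p_val)

# The same test via lmtest: order by x and omit c_omit central observations.
gqtest(y ~ x, order.by = x, fraction = c_omit)
```

A small p-value (equivalently, an $F^*$ exceeding the critical value of $F$) points toward heteroscedasticity related to $X$.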


References

  • Goldfeld, Stephen M.; Quandt, R. E. (June 1965). “Some Tests for Homoscedasticity”. Journal of the American Statistical Association, 60(310), 539–547.
  • Kennedy, Peter (2008). A Guide to Econometrics (6th ed.). Blackwell. p. 116.

See also: Numerical Example of the Goldfeld-Quandt Test.

