Remedial Measures of Heteroscedasticity

Heteroscedasticity-Corrected Standard Errors ($\sigma_i^2$ unknown)

The true error variances $\sigma_i^2$ are rarely known. However, there is a way of obtaining consistent estimates of the variances and covariances of the OLS estimators even if there is heteroscedasticity.

White’s Heteroscedasticity-Consistent Variances and Standard Errors:
White’s heteroscedasticity-corrected standard errors are also known as robust standard errors. They can be larger (or smaller) than the OLS standard errors; when they are larger, the corresponding estimated $t$-values are smaller than those obtained from OLS, and vice versa.

Comparing the OLS output with White’s heteroscedasticity-corrected standard errors may be useful to see whether heteroscedasticity is a serious problem in a particular set of data.
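
As an illustration, here is a minimal Python sketch of such a comparison using statsmodels (the data are simulated and all variable names are illustrative, not taken from any particular study):

```python
# Compare ordinary OLS standard errors with White's heteroscedasticity-robust ones.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.uniform(1, 10, 200)
u = rng.normal(0, x)              # error s.d. grows with x -> heteroscedasticity
y = 2 + 0.5 * x + u

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # usual (non-robust) standard errors
robust = sm.OLS(y, X).fit(cov_type="HC0")   # White's heteroscedasticity-consistent SEs

print(ols.bse)     # OLS standard errors
print(robust.bse)  # robust SEs; a large gap hints that heteroscedasticity matters
```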

Plausible Assumptions about Heteroscedasticity Patterns:

Assumption 1: The error variance is proportional to $X_i^2$

Error Variance

$$E(u_i^2)=\sigma^2 X_i^2$$
Suppose, on the basis of graphical inspection or the Park and Glejser approaches, it is believed that the variance of $u_i$ is proportional to the square of $X_i$.

One may transform the original model as follows:

\begin{align}\label{assump1}
\frac{Y_i}{X_i} &=\frac{\beta_1}{X_i} + \beta_2 + \frac{u_i}{X_i} \nonumber \\
&=\beta_1 \frac{1}{X_i} + \beta_2 + v_i,\qquad \qquad (1)
\end{align}

where $v_i$ is the transformed disturbance term, equal to $\frac{u_i}{X_i}$. It can be verified that

\begin{align*}
E(v_i^2) &=E\left[\left(\frac{u_i}{X_i}\right)^2\right]\\
&=\frac{1}{X_i^2}E(u_i^2)=\sigma^2
\end{align*}

Hence, the variance of $v_i$ is now homoscedastic, and one may apply OLS to the transformed equation by regressing $\frac{Y_i}{X_i}$ on $\frac{1}{X_i}$.

Notice that in the transformed regression the intercept term $\beta_2$ is the slope coefficient in the original equation and the slope coefficient $\beta_1$ is the intercept term in the original model. Therefore, to get back to the original model multiply the estimated equation (1) by $X_i$.
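
A short Python sketch of this transformation (simulated data, illustrative names); note that regressing $Y_i/X_i$ on $1/X_i$ by OLS is the same as WLS on the original model with weights $1/X_i^2$:

```python
# Assumption-1 remedy: regress Y/X on 1/X; the transformed intercept estimates
# beta_2 (the original slope) and the transformed slope estimates beta_1.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 200)
u = rng.normal(0, 1.0, 200) * x            # Var(u_i) = sigma^2 * X_i^2
y = 3 + 0.8 * x + u

Z = sm.add_constant(1 / x)                 # columns: [1, 1/X]
fit = sm.OLS(y / x, Z).fit()
beta2_hat, beta1_hat = fit.params          # const -> beta_2, slope on 1/X -> beta_1

# Equivalently, WLS on the original model with weights 1/X^2:
wls = sm.WLS(y, sm.add_constant(x), weights=1 / x**2).fit()
```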

Assumption 2: The Error Variance is Proportional to $X_i$

The square root transformation: $E(u_i^2) = \sigma^2 X_i$

If it is believed that the variance of $u_i$ is proportional to $X_i$, then the original model can be transformed as

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \frac{\beta_1}{\sqrt{X_i}} + \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
&=\beta_1 \frac{1}{\sqrt{X_i}} + \beta_2\sqrt{X_i}+v_i,\quad\quad (a)
\end{align*}

where $v_i=\frac{u_i}{\sqrt{X_i}}$ and $X_i>0$. It can be verified that

$E(v_i^2)=\sigma^2$ (a homoscedastic situation)

One may proceed to apply OLS on equation (a), regressing $\frac{Y_i}{\sqrt{X_i}}$ on $\frac{1}{\sqrt{X_i}}$ and $\sqrt{X_i}$.

Note that the transformed model (a) has no intercept term. Therefore, use the regression through the origin model to estimate $\beta_1$ and $\beta_2$. To get back the original model simply multiply the equation (a) by $\sqrt{X_i}$.
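
A sketch of the square-root transformation under the same kind of simulated setup (illustrative names); the transformed regression is run through the origin:

```python
# Assumption-2 remedy: regress Y/sqrt(X) on 1/sqrt(X) and sqrt(X), no intercept.
# This is equivalent to WLS on the original model with weights 1/X.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, 200)
u = rng.normal(0, 1.0, 200) * np.sqrt(x)    # Var(u_i) = sigma^2 * X_i
y = 3 + 0.8 * x + u

Z = np.column_stack([1 / np.sqrt(x), np.sqrt(x)])   # no constant column
fit = sm.OLS(y / np.sqrt(x), Z).fit()               # regression through the origin
beta1_hat, beta2_hat = fit.params           # coefficients on 1/sqrt(X) and sqrt(X)
```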

Consider the case of a zero intercept, that is, $Y_i=\beta_2X_i+u_i$. The transformed model will be

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
\hat{\beta}_2 &=\frac{\overline{Y}}{\overline{X}}
\end{align*}

Here, the WLS estimator is simply the ratio of the means of the dependent and explanatory variable.
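
A quick numeric check of this identity (simulated data, illustrative names): with weights $w_i=1/X_i$, the WLS slope $\sum w_i x_i y_i / \sum w_i x_i^2$ collapses to $\overline{Y}/\overline{X}$:

```python
# Zero-intercept model with Var(u_i) proportional to X_i: WLS slope = ybar / xbar.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 200)
y = 0.8 * x + rng.normal(0, 1.0, 200) * np.sqrt(x)

w = 1 / x                                            # weights = 1 / X_i
beta2_wls = np.sum(w * x * y) / np.sum(w * x * x)    # = sum(y) / sum(x)
assert np.isclose(beta2_wls, y.mean() / x.mean())
```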

Assumption 3: The Error Variance is proportional to the Square of the Mean value of $Y$

$$E(u_i^2)=\sigma^2[E(Y_i)]^2$$

The original model is $Y_i=\beta_1 + \beta_2 X_i + u_i$ and $E(Y_i)=\beta_1 + \beta_2X_i$.

The transformed model

\begin{align*}
\frac{Y_i}{E(Y_i)}&=\frac{\beta_1}{E(Y_i)} + \beta_2 \frac{X_i}{E(Y_i)} + \frac{u_i}{E(Y_i)}\\
&=\beta_1\left(\frac{1}{E(Y_i)}\right) + \beta_2 \frac{X_i}{E(Y_i)} + v_i, \quad \quad (b)
\end{align*}

where $v_i=\frac{u_i}{E(Y_i)}$, and $E(v_i^2)=\sigma^2$ (a situation of homoscedasticity).

Note that the transformed model (b) is inoperable, as $E(Y_i)$ depends on $\beta_1$ and $\beta_2$, which are unknown. However, $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2X_i$ is an estimator of $E(Y_i)$. Therefore, we proceed in two steps.

Step 1: Run the usual OLS regression, ignoring the heteroscedasticity problem, and obtain $\hat{Y}_i$.

Step 2: Use the estimated $\hat{Y}_i$ to transform the model as

\begin{align*}
\frac{Y_i}{\hat{Y}_i}&=\frac{\beta_1}{\hat{Y}_i} + \beta_2 \frac{X_i}{\hat{Y}_i} + \frac{u_i}{\hat{Y}_i}\\
&=\beta_1\left(\frac{1}{\hat{Y}_i}\right) + \beta_2 \frac{X_i}{\hat{Y}_i} + v_i, \quad \quad (c)
\end{align*}

where $v_i=\frac{u_i}{\hat{Y}_i}$.

Although $\hat{Y}_i$ is not exactly $E(Y_i)$, it is a consistent estimate (as the sample size increases indefinitely, $\hat{Y}_i$ converges to the true $E(Y_i)$). Therefore, the transformed model (c) will perform well if the sample size is reasonably large.
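
A sketch of the two-step procedure in Python (simulated data, illustrative names):

```python
# Step 1 gets fitted values from plain OLS; step 2 re-estimates after dividing
# the model through by those fitted values, as in transformed model (c).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(1, 10, 200)
ey = 3 + 0.8 * x                           # E(Y_i)
y = ey + rng.normal(0, 0.3, 200) * ey      # Var(u_i) proportional to [E(Y_i)]^2

X = sm.add_constant(x)
yhat = sm.OLS(y, X).fit().fittedvalues     # Step 1: ignore heteroscedasticity

Z = np.column_stack([1 / yhat, x / yhat])  # Step 2: transformed regressors
step2 = sm.OLS(y / yhat, Z).fit()          # coefficients estimate beta_1, beta_2
```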

Assumption 4: Log Transformation

A log transformation

$$\ln Y_i = \beta_1 + \beta_2 \ln X_i + u_i \tag*{log model-1}$$ usually reduces heteroscedasticity when compared to the regression $$Y_i=\beta_1+\beta_2X_i + u_i$$

This is because the log transformation compresses the scales in which the variables are measured, reducing a tenfold difference between two values to about a twofold difference. For example, 80 is 10 times the number 8, but $\ln(80) = 4.3820$ is only about twice as large as $\ln(8) = 2.0794$.

By taking the log transformation, the slope coefficient $\beta_2$ measures the elasticity of $Y$ with respect to $X$, that is, the percentage change in $Y$ for a given percentage change in $X$.

If $Y$ is consumption and $X$ is income in the model (log model-1) then $\beta_2$ measures income elasticity, while in the original model (model without any transformation: OLS model), $\beta_2$ measures only the rate of change of mean consumption for a unit change in income.

Note that the log transformation is not applicable if some of the $Y$ and $X$ values are zero or negative.
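
A sketch of the log transformation in Python (simulated data with all values positive; names are illustrative); the slope of the log-log fit is the estimated elasticity:

```python
# Fit the log-log model; its slope estimates the elasticity of Y w.r.t. X.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.uniform(1, 10, 200)
y = np.exp(0.5 + 1.2 * np.log(x) + rng.normal(0, 0.2, 200))   # all Y, X > 0

loglog = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()
elasticity = loglog.params[1]   # % change in Y per 1% change in X
```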

Note that in all of these assumptions about the nature of heteroscedasticity, we are essentially speculating about the nature of $\sigma_i^2$.

  • There may be a problem of spurious correlation. For example, in the model $$Y_i = \beta_1+\beta_2X_i + u_i,$$ the $Y$ and $X$ variables may not be correlated, but in the transformed model $$\frac{Y_i}{X_i}=\beta_1\left(\frac{1}{X_i}\right) + \beta_2 + v_i,$$ the variables $\frac{Y_i}{X_i}$ and $\frac{1}{X_i}$ are often found to be correlated.
  • Since $\sigma_i^2$ are not directly known, we estimate them from one or more of the transformations. All testing procedures are valid only in large samples. Therefore, be careful in interpreting the results based on the various transformations in small or finite samples.
  • For a model with more than one explanatory variable, one may not know in advance which of the $X$ variables should be chosen for transforming the data.

Heteroscedasticity Tests and Remedies

There is a set of heteroscedasticity tests and remedies that require an assumption about the structure of the heteroscedasticity, if it exists. That is, to use these tests you must choose a specific functional form for the relationship between the error variance and the variables that you believe determine the error variance. The major difference between these tests is the functional form that each test assumes.

Breusch-Pagan Test

The Breusch-Pagan test assumes the error variance is a linear function of one or more variables.
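
A minimal sketch using the het_breuschpagan function from statsmodels (simulated data, illustrative names):

```python
# Breusch-Pagan test: auxiliary regression of squared residuals on the regressors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(6)
x = rng.uniform(1, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, x)         # variance rises with x

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
# A small p-value rejects the null of homoscedasticity.
```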

Harvey-Godfrey Test

The Harvey-Godfrey test assumes the error variance is an exponential function of one or more variables. The variables are usually assumed to be one or more of the explanatory variables in the regression equation.
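
statsmodels does not ship a dedicated Harvey-Godfrey routine, so the sketch below hand-rolls its auxiliary regression of $\ln \hat{u}_i^2$ on the explanatory variable (simulated data, illustrative names):

```python
# Harvey-Godfrey idea: ln(e_i^2) = gamma_0 + gamma_1 * X_i + v_i.
# A significant slope in this auxiliary regression suggests heteroscedasticity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(1, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, np.exp(0.2 * x))   # exponential variance pattern

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
aux = sm.OLS(np.log(resid**2), X).fit()    # auxiliary regression on ln(e^2)
print(aux.pvalues[1])                      # small p-value: evidence of heteroscedasticity
```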

The White Test

The White test of heteroscedasticity is a general test for detecting the existence of heteroscedasticity in a data set. It has the following advantages:

  1. It does not require you to specify a model of the structure of the heteroscedasticity, if it exists.
  2. It does not depend on the assumption that the errors are normally distributed.
  3. It specifically tests if the presence of heteroscedasticity causes the OLS formula for the variances and the covariances of the estimates to be incorrect.
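
A minimal sketch using the het_white function from statsmodels (simulated data, illustrative names):

```python
# White test: auxiliary regression of squared residuals on the regressors,
# their squares, and cross products (handled internally by het_white).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(8)
x = rng.uniform(1, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, x)

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, X)
# A small p-value suggests heteroscedasticity (or some other misspecification).
```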

Remedies for Heteroscedasticity

Suppose that you find evidence of heteroscedasticity. If you use the OLS estimator, you will get unbiased but inefficient estimates of the parameters of the model. Also, the estimates of the variances and covariances of the parameter estimates will be biased and inconsistent, and as a result, hypothesis tests will not be valid. When there is evidence of heteroscedasticity, econometricians do one of two things:

  • Use the OLS estimator to estimate the parameters of the model, but correct the estimates of the variances and covariances of the OLS estimates so that they are consistent.
  • Use an estimator other than the OLS estimator to estimate the parameters of the model.

Many econometricians choose the first alternative. This is because the most serious consequence of using the OLS estimator when there is heteroscedasticity is that the estimates of the variances and covariances of the parameter estimates are biased and inconsistent. If this problem is corrected, then the only shortcoming of using OLS is that you lose some precision relative to some other estimator that you could have used. However, to get more precise estimates with an alternative estimator, you must know the approximate structure of the heteroscedasticity. If you specify the wrong model of heteroscedasticity, then this alternative estimator can yield estimates that are worse than those from OLS.

Heteroscedasticity Regression Residual Plot

Heteroscedasticity

One of the assumptions of the classical linear regression model is that there is no heteroscedasticity, i.e., the error term has constant variance. Under this assumption, the ordinary least squares (OLS) estimators are BLUE (best linear unbiased estimators), and their variances are the lowest of all unbiased estimators (the Gauss-Markov Theorem). If the assumption of constant variance does not hold, then the Gauss-Markov Theorem no longer applies. For heteroscedastic data, regression analysis still provides an unbiased estimate of the relationship between the predictors and the outcome variable.

As we have discussed, heteroscedasticity occurs when the error term has non-constant variance. In this case, we can think of the disturbance for each observation as being drawn from a different distribution with a different variance. Stated equivalently, the variance of the observed value of the dependent variable around the regression line is non-constant. We can think of each observed value of the dependent variable as being drawn from a different conditional probability distribution with a different conditional variance. A general linear regression model with heteroscedasticity can be expressed as follows:

\begin{align*}
y_i & = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i\\
Var(\varepsilon_i)&=E(\varepsilon_i^2)\\
&=\sigma_i^2, \quad i=1,2,\cdots, n
\end{align*}

Note that we have an $i$ subscript attached to sigma squared. This indicates that the disturbance for each of the $n$ units is drawn from a probability distribution that has a different variance.

If the error term has non-constant variance, but all other assumptions of the classical linear regression model are satisfied, then the consequences of using the OLS estimator to obtain estimates of the population parameters are:

  • The OLS estimator is still unbiased
  • The OLS estimator is inefficient; that is, it is not BLUE
  • The estimated variances and covariances of the OLS estimates are biased and inconsistent
  • Hypothesis tests are not valid

Detection of Heteroscedasticity: Regression Residual Plot

The residual for the $i$th observation, $\hat{\varepsilon}_i$, is an unbiased estimate of the unknown and unobservable error for that observation, $\varepsilon_i$. Thus the squared residuals, $\hat{\varepsilon}_i^2$, can be used as estimates of the unknown and unobservable error variances, $\sigma_i^2=E(\varepsilon_i^2)$. You can calculate the squared residuals and then plot them against an explanatory variable that you believe might be related to the error variance. If you believe that the error variance may be related to more than one of the explanatory variables, you can plot the squared residuals against each of these variables. Alternatively, you could plot the squared residuals against the fitted values of the dependent variable obtained from the OLS estimates. Most statistical programs (software) have a command to produce these residual plots. It must be emphasized that this is not a formal test for heteroscedasticity; it only suggests whether heteroscedasticity may exist.
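
A sketch of this informal graphical check in Python (simulated data, illustrative names), plotting the squared residuals against a suspect regressor and against the fitted values:

```python
# Plot squared OLS residuals against X and against the fitted values;
# a fan or funnel shape suggests heteroscedasticity.
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

rng = np.random.default_rng(9)
x = rng.uniform(1, 10, 200)
y = 2 + 0.5 * x + rng.normal(0, x)

fit = sm.OLS(y, sm.add_constant(x)).fit()
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].scatter(x, fit.resid**2)
axes[0].set(xlabel="X", ylabel="squared residuals")
axes[1].scatter(fit.fittedvalues, fit.resid**2)
axes[1].set(xlabel="fitted values", ylabel="squared residuals")
plt.show()
```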

Below are residual plots showing three typical patterns. The first plot shows a random pattern, indicating a good fit for a linear model. The other two residual-plot patterns are non-random (U-shaped and inverted U), suggesting a better fit for a non-linear model than for a linear regression model.

[Figures: three residual plots, showing a random pattern, a U-shaped pattern, and an inverted-U pattern]

Remedial Measures for Heteroscedasticity

Here we will learn about Remedial Measures for Heteroscedasticity.

Heteroscedasticity does not destroy the unbiasedness and consistency properties of the OLS estimators (they remain unbiased and consistent in the presence of heteroscedasticity), but they are no longer efficient, not even asymptotically. This lack of efficiency makes the usual hypothesis testing procedures dubious. Therefore, some remedial measures for heteroscedasticity are needed. There are two approaches to remediation: (i) when $\sigma_i^2$ is known, and (ii) when $\sigma_i^2$ is unknown.

(i) $\sigma_i^2$ is known

Consider the simple linear regression model $Y_i=\alpha+\beta X_i+\mu_i$.

If $V(\mu_i)=\sigma_i^2$, then heteroscedasticity is present. Given the values of $\sigma_i^2$, heteroscedasticity can be corrected by using weighted least squares (WLS), a special case of generalized least squares (GLS). Weighted least squares is simply the OLS method of estimation applied to the transformed model.

When heteroscedasticity is detected by an appropriate statistical test, the appropriate solution is to transform the original model in such a way that the transformed disturbance term has constant variance. The transformation amounts to an adjustment (re-weighting) of the original data. The transformed error term $\mu_i^*=\frac{\mu_i}{\sigma_i}$ then has constant variance, i.e., it is homoscedastic. Mathematically,

\begin{eqnarray*}
V(\mu_i^*)&=&V\left(\frac{\mu_i}{\sigma_i}\right)\\
&=&\frac{1}{\sigma_i^2}Var(\mu_i)\\
&=&\frac{1}{\sigma_i^2}\sigma_i^2=1
\end{eqnarray*}

This approach has limited use, as the individual error variances are rarely known a priori. However, when there is significant sample information, reasonable guesses of the true error variances can be made and used for $\sigma_i^2$.
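
A sketch of the known-$\sigma_i$ case in Python (here the $\sigma_i$ are simulated, since in practice they are rarely known; names are illustrative). WLS with weights $1/\sigma_i^2$ is exactly OLS applied to the $\sigma_i$-divided data:

```python
# WLS with known error standard deviations: weight each observation by 1/sigma_i^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
x = rng.uniform(1, 10, 200)
sigma = 0.5 * x                            # pretend the true sigma_i are known
y = 2 + 0.5 * x + rng.normal(0, sigma)

wls = sm.WLS(y, sm.add_constant(x), weights=1 / sigma**2).fit()
# Equivalent by hand: regress y/sigma on [1/sigma, x/sigma] with no intercept.
```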

(ii) $\sigma_i^2$ is unknown

If $\sigma_i^2$ is not known a priori, then heteroscedasticity is corrected by hypothesizing a relationship between the error variance and one of the explanatory variables. There can be several versions of the hypothesized relationship. Suppose the hypothesized relationship is $Var(\mu_i)=\sigma^2 X_i^2$ (the error variance is proportional to $X_i^2$). For this hypothesized relation, we use the following transformation to correct for heteroscedasticity in the simple linear regression model $Y_i=\alpha+\beta X_i+\mu_i$:
\begin{eqnarray*}
\frac{Y_i}{X_i}&=&\frac{\alpha}{X_i}+\beta+\frac{\mu_i}{X_i}\\
\Rightarrow \quad Y_i^*&=&\beta +\alpha X_i^*+\mu_i^*\\
\mbox{where } Y_i^*&=&\frac{Y_i}{X_i}, \quad X_i^*=\frac{1}{X_i} \mbox{ and } \mu_i^*=\frac{\mu_i}{X_i}
\end{eqnarray*}

Now OLS estimation of the above transformed model will yield efficient parameter estimates, as the $\mu_i^*$ have constant variance, i.e.,

\begin{eqnarray*}
V(\mu_i^*)&=&V\left(\frac{\mu_i}{X_i}\right)\\
&=&\frac{1}{X_i^2} V(\mu_i)\\
&=&\frac{1}{X_i^2}\sigma^2X_i^2\\
&=&\sigma^2=\mbox{ Constant}
\end{eqnarray*}

Some other hypothesized relations for the correction of heteroscedasticity are:

  • Error variance is proportional to $X_i$ (the square root transformation), i.e., $E(\mu_i^2)=\sigma^2X_i$
    The transformed model is
    \[\frac{Y_i}{\sqrt{X_i}}=\frac{\alpha}{\sqrt{X_i}}+\beta\sqrt{X_i}+\frac{\mu_i}{\sqrt{X_i}}\]
    The transformed model has no intercept term. Therefore, we have to use the regression-through-the-origin model to estimate $\alpha$ and $\beta$. To get back the original model, multiply the transformed model by $\sqrt{X_i}$.
  • Error variance is proportional to the square of the mean value of $Y$, i.e., $E(\mu_i^2)=\sigma^2[E(Y_i)]^2$
    Here the variance of $\mu_i$ is proportional to the square of the expected value of $Y$, and $E(Y_i)=\alpha+\beta X_i$.
    The transformed model will be
    \[\frac{Y_i}{E(Y_i)}=\frac{\alpha}{E(Y_i)}+\beta\frac{X_i}{E(Y_i)}+\frac{\mu_i}{E(Y_i)}\]
    This transformation is inoperable because $E(Y_i)$ depends upon $\alpha$ and $\beta$, which are unknown parameters. Since $\hat{Y}_i=\hat{\alpha}+\hat{\beta}X_i$ is an estimator of $E(Y_i)$, we proceed in two steps:

    1. We run the usual OLS regression, disregarding the heteroscedasticity problem, and obtain $\hat{Y}_i$.
    2. We transform the model using the estimated $\hat{Y}_i$, i.e., $\frac{Y_i}{\hat{Y}_i}=\alpha\frac{1}{\hat{Y}_i}+\beta\frac{X_i}{\hat{Y}_i}+\frac{\mu_i}{\hat{Y}_i}$, and run the regression on the transformed model.

    This transformation will yield satisfactory results only if the sample size is reasonably large.

  • Log transformation, such as $\ln Y_i=\alpha+\beta \ln X_i+\mu_i$
    The log transformation compresses the scales in which the variables are measured. However, this transformation is not applicable if some of the $Y$ and $X$ values are zero or negative.