
Heteroscedasticity-Corrected Standard Errors ($\sigma_i^2$ unknown)

The error variances $\sigma_i^2$ are rarely known. However, there is a way of obtaining consistent estimates of the variances and covariances of the OLS estimators even if there is heteroscedasticity.

White’s Heteroscedasticity-Consistent Variances and Standard Errors:
White’s heteroscedasticity-corrected standard errors are known as robust standard errors. They are usually, though not always, larger than the OLS standard errors, and therefore the estimated $t$-values are correspondingly smaller (or larger) than those obtained by OLS.

Comparing the OLS output with White’s heteroscedasticity-corrected standard errors may be useful to see whether heteroscedasticity is a serious problem in a particular set of data.
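As a quick illustration, here is a minimal sketch (using simulated, hypothetical data) of this comparison in Python with statsmodels; the `cov_type="HC1"` option requests one of White's heteroscedasticity-consistent covariance estimators.

```python
# Compare OLS standard errors with White's robust standard errors
# on simulated heteroscedastic data (all values hypothetical).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.uniform(1, 10, 200)
u = rng.normal(0, x)              # error sd grows with x: heteroscedastic
y = 2 + 0.5 * x + u

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                   # usual OLS standard errors
robust_fit = sm.OLS(y, X).fit(cov_type="HC1")  # White's robust standard errors

print(ols_fit.bse)     # OLS standard errors
print(robust_fit.bse)  # heteroscedasticity-corrected standard errors
```

A large divergence between the two sets of standard errors suggests that heteroscedasticity is a serious problem in the data.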

Plausible Assumptions about Heteroscedasticity Patterns:

Assumption 1: The error variance is proportional to $X_i^2$

If, on the basis of graphical methods or the Park and Glejser approaches, it is believed that the variance of $u_i$ is proportional to the square of $X_i$, that is,

$$E(u_i^2)=\sigma^2 X_i^2$$

One may transform the original model as follows:

\begin{align}\label{assump1}
\frac{Y_i}{X_i} &=\frac{\beta_1}{X_i} + \beta_2 + \frac{u_i}{X_i} \nonumber \\
&=\beta_1 \frac{1}{X_i} + \beta_2 + v_i,\qquad \qquad (1)
\end{align}

where $v_i$ is the transformed disturbance term, equal to $\frac{u_i}{X_i}$. It can be verified that

\begin{align*}
E(v_i^2) &=E\left(\frac{u_i}{X_i}\right)^2\\
&=\frac{1}{X_i^2}E(u_i^2)=\sigma^2
\end{align*}

Hence, the variance of $v_i$ is now homoscedastic, and one may apply OLS to the transformed equation by regressing $\frac{Y_i}{X_i}$ on $\frac{1}{X_i}$.

Notice that in the transformed regression, the intercept term $\beta_2$ is the slope coefficient of the original equation, and the slope coefficient $\beta_1$ is the intercept term of the original model. Therefore, to get back to the original model, multiply the estimated equation (1) by $X_i$.
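As a sketch (with simulated data), the statsmodels `WLS` class takes weights equal to the inverse error variances, so $E(u_i^2)=\sigma^2X_i^2$ implies weights $1/X_i^2$; this is numerically identical to applying OLS to the transformed regression of $Y_i/X_i$ on $1/X_i$.

```python
# Assumption 1 correction: WLS with weights 1/x^2 versus the
# transformed OLS regression (simulated, hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.uniform(1, 10, 200)
u = rng.normal(0, x)          # sd proportional to x => Var(u_i) = sigma^2 * x_i^2
y = 2 + 0.5 * x + u

wls_fit = sm.WLS(y, sm.add_constant(x), weights=1.0 / x**2).fit()

# Transformed regression: its intercept estimates the original slope beta_2
# and its slope on 1/x estimates the original intercept beta_1.
transformed_fit = sm.OLS(y / x, sm.add_constant(1.0 / x)).fit()
print(wls_fit.params)          # [beta_1_hat, beta_2_hat]
print(transformed_fit.params)  # [beta_2_hat, beta_1_hat] -- roles swapped
```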

Assumption 2: The Error Variance is Proportional to $X_i$

The square root transformation: $E(u_i^2) = \sigma^2 X_i$


If it is believed that the variance of $u_i$ is proportional to $X_i$, then the original model can be transformed as

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \frac{\beta_1}{\sqrt{X_i}} + \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
&=\beta_1 \frac{1}{\sqrt{X_i}} + \beta_2\sqrt{X_i}+v_i,\quad\quad (a)
\end{align*}

where $v_i=\frac{u_i}{\sqrt{X_i}}$ and $X_i>0$.

$E(v_i^2)=\sigma^2$ (a homoscedastic situation)

One may proceed to apply OLS on equation (a), regressing $\frac{Y_i}{\sqrt{X_i}}$ on $\frac{1}{\sqrt{X_i}}$ and $\sqrt{X_i}$.

Note that the transformed model (a) has no intercept term. Therefore, use the regression-through-the-origin model to estimate $\beta_1$ and $\beta_2$. To get back to the original model, simply multiply equation (a) by $\sqrt{X_i}$.

Consider the case of a zero intercept, that is, $Y_i=\beta_2X_i+u_i$. The transformed model will be

\begin{align*}
\frac{Y_i}{\sqrt{X_i}} &= \beta_2 \sqrt{X_i} + \frac{u_i}{\sqrt{X_i}}\\
\hat{\beta}_2 &= \frac{\sum\left(\frac{Y_i}{\sqrt{X_i}}\right)\sqrt{X_i}}{\sum\left(\sqrt{X_i}\right)^2} = \frac{\sum Y_i}{\sum X_i} = \frac{\overline{Y}}{\overline{X}}
\end{align*}

Here, the WLS estimator is simply the ratio of the means of the dependent and explanatory variables.
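A minimal sketch (simulated data) confirming this for the zero-intercept case: WLS with weights $1/X_i$ reproduces the ratio of the sample means exactly.

```python
# Assumption 2, zero-intercept case: the WLS estimate of beta_2
# equals mean(Y)/mean(X) (simulated, hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 500)
u = rng.normal(0, np.sqrt(x))      # sd proportional to sqrt(x) => Var(u_i) = sigma^2 * x_i
y0 = 0.5 * x + u                   # zero-intercept model Y_i = beta_2 X_i + u_i

wls_fit = sm.WLS(y0, x, weights=1.0 / x).fit()   # no intercept column
print(wls_fit.params[0], y0.mean() / x.mean())   # identical up to rounding
```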

Assumption 3: The Error Variance is Proportional to the Square of the Mean Value of $Y$

$$E(u_i^2)=\sigma^2[E(Y_i)]^2$$

The original model is $Y_i=\beta_1 + \beta_2 X_i + u_i$, and $E(Y_i)=\beta_1 + \beta_2X_i$.

The transformed model is

\begin{align*}
\frac{Y_i}{E(Y_i)}&=\frac{\beta_1}{E(Y_i)} + \beta_2 \frac{X_i}{E(Y_i)} + \frac{u_i}{E(Y_i)}\\
&=\beta_1\left(\frac{1}{E(Y_i)}\right) + \beta_2 \frac{X_i}{E(Y_i)} + v_i, \quad \quad (b)
\end{align*}

where $v_i=\frac{u_i}{E(Y_i)}$, and $E(v_i^2)=\sigma^2$ (a situation of homoscedasticity).

Note that the transformed model (b) is not operational, since $E(Y_i)$ depends on $\beta_1$ and $\beta_2$, which are unknown. However, $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2X_i$ is an estimator of $E(Y_i)$. Therefore, we proceed in two steps.

Step 1: Run the usual OLS regression, ignoring the heteroscedasticity problem, and obtain $\hat{Y}_i$.

Step 2: Use the estimated $\hat{Y}_i$ to transform the model as

\begin{align*}
\frac{Y_i}{\hat{Y}_i}&=\frac{\beta_1}{\hat{Y}_i} + \beta_2 \frac{X_i}{\hat{Y}_i} + \frac{u_i}{\hat{Y}_i}\\
&=\beta_1\left(\frac{1}{\hat{Y}_i}\right) + \beta_2 \frac{X_i}{\hat{Y}_i} + v_i, \quad \quad (c)
\end{align*}

where $v_i=\frac{u_i}{\hat{Y}_i}$.

Although $\hat{Y}_i$ is not exactly $E(Y_i)$, it is a consistent estimator (as the sample size increases indefinitely, $\hat{Y}_i$ converges to the true $E(Y_i)$). Therefore, the transformed model (c) will perform well if the sample size is reasonably large.
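A minimal sketch (simulated data) of the two-step procedure in statsmodels; it assumes the fitted values $\hat{Y}_i$ are positive so that the weights $1/\hat{Y}_i^2$ are well defined.

```python
# Two-step FGLS under Assumption 3: Var(u_i) = sigma^2 * [E(Y_i)]^2
# (simulated, hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(1, 10, 500)
mean_y = 2 + 0.5 * x
u = rng.normal(0, 0.3 * mean_y)      # error sd proportional to E(Y_i)
y = mean_y + u

X = sm.add_constant(x)

# Step 1: usual OLS, ignoring heteroscedasticity, to obtain the fitted values.
y_hat = sm.OLS(y, X).fit().fittedvalues

# Step 2: WLS with weights 1/y_hat^2, i.e., the transformed model (c).
fgls_fit = sm.WLS(y, X, weights=1.0 / y_hat**2).fit()
print(fgls_fit.params)
```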

Assumption 4: Log Transformation

A log transformation

$$\ln Y_i = \beta_1 + \beta_2 \ln X_i + u_i \tag*{log model-1}$$ usually reduces heteroscedasticity when compared to the regression $$Y_i=\beta_1+\beta_2X_i + u_i$$

It is because the log transformation compresses the scales in which the variables are measured, reducing a tenfold difference between two values to a twofold difference. For example, 80 is 10 times the number 8, but $\ln(80) = 4.3820$ is only about twice as large as $\ln(8) = 2.0794$.

After the log transformation, the slope coefficient $\beta_2$ measures the elasticity of $Y$ with respect to $X$ (that is, the percentage change in $Y$ for a given percentage change in $X$).

If $Y$ is consumption and $X$ is income in (log model-1), then $\beta_2$ measures the income elasticity, while in the original (untransformed) model, $\beta_2$ measures only the rate of change of mean consumption for a unit change in income.

Note that the log transformation is not applicable if some of the $Y$ and $X$ values are zero or negative.
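A minimal sketch (simulated data with a true elasticity of 0.8) of the log-log regression, in which the estimated slope is the elasticity of $Y$ with respect to $X$:

```python
# Log-log regression: the slope estimates the elasticity
# (simulated, hypothetical data; true elasticity is 0.8).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 300)
y = 5 * x**0.8 * np.exp(rng.normal(0, 0.1, 300))   # all values positive

log_fit = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()
print(log_fit.params[1])   # close to 0.8, the estimated elasticity
```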

Note that in all of these assumptions about the nature of heteroscedasticity, we are essentially speculating about the nature of $\sigma_i^2$.

  • There may be a problem of spurious correlation. For example, in the model $$Y_i = \beta_1+\beta_2X_i + u_i,$$ the $Y$ and $X$ variables may not be correlated, but in the transformed model $$\frac{Y_i}{X_i}=\beta_1\left(\frac{1}{X_i}\right) + \beta_2 + v_i,$$ the variables $\frac{Y_i}{X_i}$ and $\frac{1}{X_i}$ are often found to be correlated.
  • Since the $\sigma_i^2$ are not directly known, we estimate them from one or more of the transformations. All testing procedures are valid only in large samples; therefore, be careful in interpreting results based on the various transformations in small or finite samples.
  • For a model with more than one explanatory variable, one may not know in advance which of the $X$ variables should be chosen for transforming the data.

Proof of $E(\hat{\sigma}^2)\ne \sigma^2$ in the Case of Heteroscedasticity

In this post, we will prove mathematically that $E(\hat{\sigma}^2)\ne \sigma^2$ when there is heteroscedasticity in the data.

For the proof of $E(\hat{\sigma}^2)\ne \sigma^2$, consider the two-variable linear regression model in the presence of heteroscedasticity,

\begin{align}
Y_i=\beta_1 + \beta_2 X_i + u_i, \quad\quad (eq1)
\end{align}

where $Var(u_i)=\sigma_i^2$ (Case of heteroscedasticity)

The usual estimator of the error variance is

\begin{align}
\hat{\sigma}^2 &= \frac{\sum \hat{u}_i^2 }{n-2}\\
&= \frac{\sum (Y_i - \hat{Y}_i)^2 }{n-2}\\
&=\frac{\sum(\beta_1 + \beta_2 X_i + u_i - \hat{\beta}_1 -\hat{\beta}_2 X_i )^2}{n-2}\\
&=\frac{\sum \left( -(\hat{\beta}_1-\beta_1) - (\hat{\beta}_2 - \beta_2)X_i + u_i \right)^2 }{n-2}\quad\quad (eq2)
\end{align}

Noting that the OLS residuals sum to zero, $\sum \hat{u}_i = \sum (Y_i-\hat{Y}_i)=0$, we have

\begin{align*}
\sum\left[-(\hat{\beta}_1 -\beta_1) - (\hat{\beta}_2-\beta_2)X_i + u_i\right] & =0\\
n(\hat{\beta}_1 -\beta_1) &= -(\hat{\beta}_2-\beta_2)\sum X_i + \sum u_i\\
(\hat{\beta}_1 - \beta_1) &= -(\hat{\beta}_2-\beta_2)\overline{X}+\overline{u}
\end{align*}

Substituting this into (eq2) and taking expectations on both sides (with $x_i = X_i - \overline{X}$):

\begin{align}
E(\hat{\sigma}^2) &= \frac{1}{n-2} E\left[\sum\left( -\left(-(\hat{\beta}_2 - \beta_2) \overline{X} + \overline{u}\right) - (\hat{\beta}_2-\beta_2)X_i + u_i \right)^2\right]\\
&=\frac{1}{n-2} E\left[\sum\left( -(\hat{\beta}_2 - \beta_2)(X_i-\overline{X}) + (u_i-\overline{u})\right)^2\right]\\
&= \frac{1}{n-2}\left[\sum x_i^2\, Var(\hat{\beta}_2) - 2E\left((\hat{\beta}_2-\beta_2)\sum x_i u_i\right) + E\sum(u_i-\overline{u})^2 \right]\\
&=\frac{1}{n-2} \left[ \frac{\sum x_i^2 \sigma_i^2}{\sum x_i^2} - \frac{2\sum x_i^2 \sigma_i^2}{\sum x_i^2} + \frac{(n-1)\sum \sigma_i^2}{n} \right]\\
&=\frac{1}{n-2} \left[ -\frac{\sum x_i^2 \sigma_i^2}{\sum x_i^2} + \frac{(n-1)\sum \sigma_i^2}{n} \right]
\end{align}

using $\sum x_i(u_i-\overline{u})=\sum x_i u_i$ (since $\sum x_i=0$), $Var(\hat{\beta}_2)=\frac{\sum x_i^2\sigma_i^2}{(\sum x_i^2)^2}$, $E\left((\hat{\beta}_2-\beta_2)\sum x_i u_i\right)=\frac{\sum x_i^2\sigma_i^2}{\sum x_i^2}$, and $E\sum(u_i-\overline{u})^2=\frac{n-1}{n}\sum\sigma_i^2$.

If there is homoscedasticity, then $\sigma_i^2=\sigma^2$ for each $i$, and the expression reduces to $E(\hat{\sigma}^2)=\frac{1}{n-2}\left[-\sigma^2+(n-1)\sigma^2\right]=\sigma^2$.

In the presence of heteroscedasticity, however, the expected value of $\hat{\sigma}^2=\frac{\sum \hat{u}_i^2}{n-2}$ will not equal the true $\sigma^2$.
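A small Monte Carlo sketch (hypothetical setup) checks this numerically: the average of $\hat{\sigma}^2$ across replications matches the expression derived above, not any single $\sigma^2$.

```python
# Monte Carlo check of E(sigma_hat^2) under heteroscedasticity
# (hypothetical design: Var(u_i) = X_i^2).
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.linspace(1, 10, n)
sigma2_i = X**2                  # heteroscedastic error variances
x = X - X.mean()                 # deviations from the mean

est = []
for _ in range(20000):
    u = rng.normal(0, np.sqrt(sigma2_i))
    y = 1 + 2 * X + u
    b2 = (x * y).sum() / (x**2).sum()      # OLS slope (x sums to zero)
    b1 = y.mean() - b2 * X.mean()          # OLS intercept
    resid = y - b1 - b2 * X
    est.append((resid**2).sum() / (n - 2))

theory = (-(x**2 * sigma2_i).sum() / (x**2).sum()
          + (n - 1) / n * sigma2_i.sum()) / (n - 2)
print(np.mean(est), theory)      # the two agree closely
```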



Read more about the Remedy of Heteroscedasticity

More on heteroscedasticity on Wikipedia

Consequences of Heteroscedasticity

The following are consequences of heteroscedasticity when it exists in the data.

  • The OLS estimators and regression predictions based on them remain unbiased and consistent.
  • The OLS estimators are no longer the BLUE (Best Linear Unbiased Estimators) because they are no longer efficient, so the regression predictions will be inefficient too.
  • Because the usual estimator of the covariance matrix of the estimated regression coefficients is biased and inconsistent, tests of hypotheses ($t$-test, $F$-test) are no longer valid.


Learn about Remedial Measures of Heteroscedasticity

OLS Estimation in the Presence of Heteroscedasticity

For the OLS Estimation in the presence of heteroscedasticity, consider the two-variable model

\begin{align*}
Y_i &= \beta_1 +\beta_2X_i + u_i\\
\hat{\beta}_2&=\frac{\sum x_i y_i}{\sum x_i^2}\\
Var(\hat{\beta}_2)&= \frac{\sum x_i^2\, \sigma_i^2}{(\sum x_i^2)^2}
\end{align*}

Under heteroscedasticity, the variance of the OLS estimator $\hat{\beta}_2$ is given by the last expression above, where $x_i=X_i-\overline{X}$ and $y_i=Y_i-\overline{Y}$. Under the assumption of homoscedasticity, $Var(\hat{\beta}_2)=\frac{\sigma^2}{\sum x_i^2}$. If $\sigma_i^2=\sigma^2$ for all $i$, the two variances coincide.
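A minimal numerical sketch (hypothetical values of $X_i$ and $\sigma_i^2$) comparing the two variance formulas:

```python
# Correct variance of the OLS slope under heteroscedasticity versus
# the usual homoscedastic formula (hypothetical numbers).
import numpy as np

X = np.linspace(1, 10, 50)
x = X - X.mean()                  # deviations from the mean
sigma2_i = X**2                   # heteroscedastic variances
sigma2 = sigma2_i.mean()          # a single "average" variance for comparison

var_het = (x**2 * sigma2_i).sum() / (x**2).sum()**2  # sum(x_i^2 sigma_i^2) / (sum x_i^2)^2
var_hom = sigma2 / (x**2).sum()                      # sigma^2 / sum(x_i^2)
print(var_het, var_hom)           # the usual formula misstates the true variance
```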

Note that in the case of heteroscedasticity, the OLS estimators

  • $\hat{\beta}_2$ is BLUE if the assumptions of the classical model, including homoscedasticity, hold.
  • To establish the unbiasedness of $\hat{\beta}_2$, it is not necessary for the disturbances ($u_i$) to be homoscedastic.
  • In fact, the variance of $u_i$, homoscedasticity, or heteroscedasticity plays no part in the determination of the unbiasedness property.
  • $\hat{\beta}_2$ will be a consistent estimator despite heteroscedasticity.
  • As the sample size increases indefinitely, $\hat{\beta}_2$ (the estimated $\beta_2$) converges to its true value.
  • $\hat{\beta}_2$ is asymptotically normally distributed.

Under the AR(1) scheme, $u_t=\rho u_{t-1}+\varepsilon_t$, the two-variable model will be $Y_t=\beta_1+\beta_2 X_t+u_t$.

The variance of $\hat{\beta}_2$ for AR(1) scheme is

$$Var(\hat{\beta}_2)_{AR(1)} = \frac{\sigma^2}{\sum x_t^2}\left[ 1+ 2 \rho \frac{\sum x_t x_{t-1}}{\sum x_t^2} +2\rho^2 \frac{\sum x_t x_{t-2}}{\sum x_t^2} +\cdots + 2\rho^{n-1} \frac{x_1 x_n}{\sum x_t^2} \right]$$

If $\rho=0$ then $Var(\hat{\beta}_2)_{AR(1)} = Var(\hat{\beta}_2)_{OLS}$.

Assume that the regressor $X$ also follows the AR(1) scheme with coefficient of autocorrelation $r$; then

\begin{align*}
Var(\hat{\beta}_2)_{AR(1)} &= \frac{\sigma^2}{\sum x_t^2}\left(\frac{1+r\rho}{1-r \rho} \right)\\
&=Var(\hat{\beta}_2)_{OLS}\left(\frac{1+r\rho}{1-r \rho} \right)
\end{align*}

That is, the usual OLS formula for the variance of $\hat{\beta}_2$ will underestimate the variance of $\hat{\beta}_2$ under the AR(1) scheme.
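A minimal sketch (with assumed values of $\rho$ and $r$) of the size of this underestimation factor:

```python
# Underestimation factor (1 + r*rho) / (1 - r*rho) for assumed
# autocorrelations of the disturbances (rho) and the regressor (r).
rho, r = 0.7, 0.6
factor = (1 + r * rho) / (1 - r * rho)
print(factor)   # about 2.45: OLS understates Var(beta_2_hat) roughly 2.4-fold
```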

Note that $\hat{\beta}_2$, although linear and unbiased, is not efficient.

In general, in economics, negative autocorrelation is much less likely to occur than positive autocorrelation.

Higher-Order Autocorrelation

Autocorrelation can take many forms. For example,

$$u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \cdots + \rho_p u_{t-p} + \varepsilon_t$$

This is $p$th-order autocorrelation.

If we have quarterly data, and we omit seasonal effects, we might expect to find that a 4th-order autocorrelation is present. Similarly, monthly data might exhibit 12th-order autocorrelation.
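A minimal sketch (hypothetical coefficient) simulating 4th-order autocorrelated errors of the kind quarterly data might exhibit:

```python
# Simulate errors with pure 4th-order autocorrelation:
# u_t = 0.8 * u_{t-4} + eps_t (hypothetical coefficient).
import numpy as np

rng = np.random.default_rng(11)
T, rho4 = 400, 0.8
eps = rng.normal(0, 1, T)
u = np.zeros(T)
for t in range(4, T):
    u[t] = rho4 * u[t - 4] + eps[t]

# The sample autocorrelation at lag 4 should be near rho4 = 0.8.
print(np.corrcoef(u[4:], u[:-4])[0, 1])
```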

Learn about Heteroscedasticity Tests and Remedies
