Generalized Least Squares (GLS vs OLS) (2022)

The usual Ordinary Least Squares (OLS) method assigns equal weight (or importance) to each observation. Generalized Least Squares (GLS), by contrast, takes the information about unequal variability of observations into account explicitly and is therefore capable of producing BLUE (best linear unbiased) estimators. Both GLS and OLS are regression techniques used to fit a line to data points and estimate the relationship between a dependent variable ($y$) and one or more independent variables ($X$).

Consider the following two-variable model:

\begin{align}
Y_i &= \beta_1 + \beta_2 X_i + u_i \nonumber\\
\text{or} \nonumber\\
Y_i &= \beta_1 X_{0i} + \beta_2 X_i + u_i, \tag*{(eq1)}
\end{align}

where $X_{0i}=1$ for each $i$.

Generalized Least Squares (GLS)

Assume that the heteroscedastic variance $\sigma_i^2$ is known. Dividing both sides of (eq1) by $\sigma_i$ gives

\begin{align}
\frac{Y_i}{\sigma_i} &= \beta_1\left(\frac{X_{0i}}{\sigma_i}\right) + \beta_2\left(\frac{X_i}{\sigma_i}\right) + \left(\frac{u_i}{\sigma_i}\right) \nonumber\\
Y_i^* &= \beta_1^* X_{0i}^* + \beta_2^* X_i^* + u_i^*, \tag*{(eq2)}
\end{align}

where the starred variables are the original variables divided by the (known) $\sigma_i$. The starred coefficients are the parameters of the transformed model, which distinguishes them from the OLS parameters $\beta_1$ and $\beta_2$.

\begin{align*}
Var(u_i^*) &= E(u_i^{*2}) = E\left(\frac{u_i}{\sigma_i}\right)^2 \tag*{$\because E(u_i)=0$}\\
&= \frac{1}{\sigma_i^2}E(u_i^2)\\
&= \frac{1}{\sigma_i^2}\sigma_i^2 \tag*{$\because E(u_i^2)=\sigma_i^2$}\\
&= 1, \text{ which is a constant.}
\end{align*}

The variance of the transformed disturbance $u_i^*$ is now homoscedastic. Applying OLS to the transformed model (eq2) therefore produces estimators that are BLUE; that is, $\hat{\beta}_1^*$ and $\hat{\beta}_2^*$ are BLUE, while the OLS estimators $\hat{\beta}_1$ and $\hat{\beta}_2$ are not.
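As a quick illustration of this point, here is a minimal simulation sketch (made-up data; the variable names are illustrative, not from the original post) showing that dividing heteroscedastic disturbances by their known $\sigma_i$ yields a transformed error with constant variance:

```python
# Minimal sketch: transforming heteroscedastic errors by a known sigma_i.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(1, 10, size=n)
sigma = 0.5 * X                        # known sigma_i: error spread grows with X
u = rng.normal(0, sigma)               # heteroscedastic: Var(u_i) = sigma_i^2
u_star = u / sigma                     # transformed disturbance u_i* = u_i / sigma_i

# Var(u_i) differs across observations, while Var(u_i*) is constant (= 1):
print(np.var(u[X < 5]), np.var(u[X >= 5]))            # clearly different
print(np.var(u_star[X < 5]), np.var(u_star[X >= 5]))  # both close to 1
```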

Generalized Least Squares (GLS) Method

The procedure of transforming the original variables in such a way that the transformed variables satisfy the assumptions of the classical model, and then applying OLS to them, is known as the Generalized Least Squares (GLS) method.

In other words, Generalized Least Squares (GLS) is Ordinary Least Squares (OLS) applied to transformed variables that satisfy the standard least-squares assumptions. The estimators thus obtained are known as GLS estimators, and they are BLUE.

To obtain the Generalized Least Squares estimators, we minimize

\begin{align}
\sum \hat{u}_i^{*2} &= \sum \left(Y_i^* - \hat{\beta}_1^* X_{0i}^* - \hat{\beta}_2^* X_i^* \right)^2, \nonumber\\
\text{that is,} \nonumber\\
\sum \left(\frac{\hat{u}_i}{\sigma_i}\right)^2 &= \sum \left[\frac{Y_i}{\sigma_i} - \hat{\beta}_1^* \left(\frac{X_{0i}}{\sigma_i}\right) - \hat{\beta}_2^*\left(\frac{X_i}{\sigma_i}\right) \right]^2 \tag*{(eq3)}\\
\sum w_i \hat{u}_i^2 &= \sum w_i\left(Y_i-\hat{\beta}_1^* X_{0i} - \hat{\beta}_2^* X_i\right)^2 \tag*{(eq4)}
\end{align}

The GLS estimator $\hat{\beta}_2^*$ of $\beta_2$ is

\begin{align*}
\hat{\beta}_2^* &= \frac{\left(\sum w_i\right)\left(\sum w_i X_iY_i\right)-\left(\sum w_i X_i\right)\left(\sum w_iY_i\right)}{\left(\sum w_i\right)\left(\sum w_iX_i^2\right)-\left(\sum w_iX_i\right)^2} \\
Var(\hat{\beta}_2^*) &= \frac{\sum w_i}{\left(\sum w_i\right)\left(\sum w_iX_i^2\right)-\left(\sum w_iX_i\right)^2},
\end{align*}

where $w_i=\frac{1}{\sigma_i^2}$.
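The closed-form expression above is straightforward to compute directly. The following is a minimal sketch, assuming the $\sigma_i$ are known and using simulated data with illustrative names:

```python
# Sketch: closed-form GLS/WLS estimator of beta_2 with known sigma_i.
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = rng.uniform(1, 10, size=n)
sigma = 0.3 * X                            # known sigma_i
Y = 2.0 + 1.5 * X + rng.normal(0, sigma)   # true beta_1 = 2.0, beta_2 = 1.5

w = 1.0 / sigma**2                         # weights w_i = 1 / sigma_i^2

# GLS estimator of beta_2 from the closed-form expression above:
num = np.sum(w) * np.sum(w * X * Y) - np.sum(w * X) * np.sum(w * Y)
den = np.sum(w) * np.sum(w * X**2) - np.sum(w * X) ** 2
beta2_gls = num / den
# beta_1 follows from the first weighted normal equation:
beta1_gls = (np.sum(w * Y) - beta2_gls * np.sum(w * X)) / np.sum(w)
print(beta1_gls, beta2_gls)                # should be close to (2.0, 1.5)
```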

Difference between GLS and OLS

In GLS, a weighted residual sum of squares is minimized, with $w_i=\frac{1}{\sigma_i^2}$ acting as the weights, whereas in OLS an unweighted (or equally weighted) residual sum of squares is minimized. From equation (eq3), in GLS the weight assigned to each observation is inversely proportional to its $\sigma_i$; that is, observations coming from a population with a larger $\sigma_i$ get relatively smaller weight, and those from a population with a smaller $\sigma_i$ get proportionately larger weight in minimizing the RSS (eq4).

Since equation (eq4) minimizes a weighted RSS, the procedure is known as weighted least squares (WLS), and the estimators obtained are known as WLS estimators.
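In practice, this weighted minimization need not be coded by hand. For instance, statsmodels' WLS class minimizes the weighted RSS in (eq4) when supplied the weights $w_i=1/\sigma_i^2$; the sketch below (same simulated setup as above) should reproduce the closed-form estimates:

```python
# Sketch: the same WLS/GLS fit via statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = rng.uniform(1, 10, size=n)
sigma = 0.3 * X
Y = 2.0 + 1.5 * X + rng.normal(0, sigma)

w = 1.0 / sigma**2                         # w_i = 1 / sigma_i^2
exog = sm.add_constant(X)                  # columns: X_0i (= 1) and X_i
wls_fit = sm.WLS(Y, exog, weights=w).fit()
print(wls_fit.params)                      # matches the closed-form GLS estimates
```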

GLS Method

The Generalized Least Squares method is a powerful tool for handling correlated and heteroscedastic errors. It is widely used in econometrics, finance, and other fields where regression analysis is applied to real-world data with complex error structures.

The key differences between the GLS and OLS methods are summarized below:

| Feature     | GLS Method                                                     | OLS Method                                          |
|-------------|----------------------------------------------------------------|-----------------------------------------------------|
| Assumptions | Can handle heteroscedasticity and autocorrelation              | Assumes homoscedasticity and independent error terms |
| Method      | Minimizes the weighted sum of squared residuals                | Minimizes the (unweighted) sum of squared residuals |
| Benefits    | More efficient estimates (if assumptions are met)              | Simpler to implement                                |
| Drawbacks   | More complex; requires estimating the error covariance matrix  | Can be inefficient when its assumptions are violated |

Remember, diagnosing violations of assumptions such as heteroscedasticity and autocorrelation is often done after an initial OLS fit. This can help decide whether GLS or other robust regression techniques are necessary. Therefore, the choice between OLS and GLS depends on the data characteristics and the sample size.
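As a sketch of that workflow (simulated data; the names are illustrative), one can fit OLS first and then apply a heteroscedasticity diagnostic, here the Breusch-Pagan test from statsmodels, to the residuals:

```python
# Sketch: fit OLS first, then diagnose heteroscedasticity before choosing GLS/WLS.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
X = rng.uniform(1, 10, size=300)
Y = 2.0 + 1.5 * X + rng.normal(0, 0.3 * X)   # heteroscedastic by construction

exog = sm.add_constant(X)
ols_fit = sm.OLS(Y, exog).fit()              # initial OLS fit
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols_fit.resid, exog)
print(lm_pvalue)   # small p-value suggests heteroscedasticity, i.e. consider GLS/WLS
```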

Read about the Residual Plot


White Test of Heteroscedasticity Detection (2022)

This post is about the White test of heteroscedasticity.

One important assumption of regression is that the variance of the error term is constant across observations. If the errors have a constant variance, they are called homoscedastic; otherwise, they are heteroscedastic. In the case of heteroscedastic errors (non-constant variance), the standard estimation methods become inefficient. Typically, the residuals are plotted to assess the assumption of homoscedasticity.

White test of Heteroscedasticity

Halbert White (1980) proposed a test that is very similar to the Breusch-Pagan test. The White test of heteroscedasticity is general because it does not rely on the normality assumption, and it is also easy to implement. Because of this generality, the White test may identify specification bias too. Both the White test of heteroscedasticity and the Breusch-Pagan test are based on the residuals of the fitted model.

To test the assumption of homoscedasticity, one can use auxiliary regression analysis by regressing the squared residuals from the original model on the set of original regressors, the cross-products of the regressors, and the squared regressors.

The step-by-step procedure for performing the White test of Heteroscedasticity is as follows:

Consider the following linear regression model (assume there are two independent variables):
\[Y_i=\beta_0+\beta_1X_{1i}+\beta_2X_{2i}+e_i \tag{1} \]

For the given data, estimate the regression model, and obtain the residuals $e_i$’s.

Note that the regression of residuals can take linear or non-linear functional forms.

  1. Now run the following regression to obtain the squared residuals from the original regression on the original set of independent variables, the squared values of the independent variables, and the cross-product(s) of the independent variable(s), such as
    \[e_i^2=\beta_0+\beta_1X_1+\beta_2X_2+\beta_3X_1^2+\beta_4X_2^2+\beta_5X_1X_2+v_i \tag{2}\]
  2. Find the $R^2$ statistic from the auxiliary regression (2).
    You can also use higher powers of the regressors, such as cubes. Also, note that there will be a constant term in equation (2) even though the original regression model (1) may or may not have a constant term.
  3. Test the statistical significance of \[n \times R^2\sim\chi^2_{df},\tag{3}\] under the null hypothesis of homoscedasticity (no heteroscedasticity), where $df$ is the number of regressors in equation (2).
  4. If the calculated chi-square value obtained in (3) is greater than the critical chi-square value at the chosen level of significance, reject the hypothesis of homoscedasticity in favor of heteroscedasticity.
[Figure: Heteroscedasticity patterns — White test of heteroscedasticity]
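The whole procedure above is also available in statistical software; for example, statsmodels implements it as het_white, which builds the auxiliary regression (squares and cross products included) and returns $n \times R^2$ along with its chi-square p-value. A minimal sketch on simulated data:

```python
# Sketch: the White test via statsmodels on illustrative simulated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(4)
n = 300
X1 = rng.uniform(1, 10, size=n)
X2 = rng.uniform(1, 10, size=n)
Y = 1.0 + 0.5 * X1 + 0.8 * X2 + rng.normal(0, 0.4 * X1)  # variance grows with X1

exog = sm.add_constant(np.column_stack([X1, X2]))  # constant, X1, X2
resid = sm.OLS(Y, exog).fit().resid                # step 1: obtain OLS residuals
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, exog)
print(lm_stat, lm_pvalue)  # n*R^2 and its chi-square p-value; small p => reject homoscedasticity
```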

For a model with several independent variables (regressors), introducing all the regressors, their squares or higher powers, and their cross products consumes many degrees of freedom.

In cases where the White test statistic is statistically significant, the cause may not necessarily be heteroscedasticity but rather specification error, or both. In other words, the White test can be a test of heteroscedasticity, of specification error, or of both. If no cross-product terms are introduced in the White test procedure, it is a test of pure heteroscedasticity; if cross products are introduced, it is a test of both heteroscedasticity and specification bias.


By employing the White test of heteroscedasticity, one can gain valuable insights into the presence of heteroscedasticity and decide on appropriate corrective measures (like Weighted Least Squares (WLS)), if necessary, to ensure reliable standard errors and hypothesis tests in a regression analysis.

Summary

The White test of heteroscedasticity is a flexible approach that can be used to detect various patterns of heteroscedasticity. This test indicates the presence of heteroscedasticity but it does not pinpoint the specific cause (like model misspecification). The White test is relatively easy to implement in statistical software.

References

  • White, H. (1980). “A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity”. Econometrica, 48(4), 817–838.
  • https://en.wikipedia.org/wiki/White_test

Click the links to learn more about tests of heteroscedasticity: Regression Residuals Plot, Breusch-Pagan Test, Goldfeld-Quandt Test.

See the Numerical Example of the White Test of Heteroscedasticity

Visit: https://gmstat.com

Online Estimation Quiz 7

This Online Estimation Quiz from Statistical Inference covers the topics of estimation and hypothesis testing for the preparation of exams and different statistical job tests in Government/Semi-Government or Private Organization sectors. These tests are also helpful in getting admission to different colleges and universities. The online MCQs estimation quiz will help the learner understand the related concepts and enhance their knowledge.

The MCQs about statistical inference cover the topics of estimation, estimators, point estimates, interval estimates, properties of a good estimator, unbiasedness, efficiency, sufficiency, large samples, and sample estimation.

1. The width of the confidence interval decreases if the confidence coefficient is
2. Criteria to check a point estimator to be good are
3. If $\mu=130$, $\overline{X}=150$, $\sigma=5$, and $n=10$, what statistic is appropriate?
4. The consistency of an estimator can be checked by comparing
5. Interval estimation and confidence interval are:
6. A large sample contains more than
7. A statistician calculates a 95% confidence interval for $\mu$ when $\sigma$ is known. If the confidence interval is Rs 18000 to Rs 22000, then the value of the sample mean $\overline{X}$ is:
8. For a biased estimator $\hat{\theta}$ of $\theta$, which one of the following is correct?
9. For $\alpha=0.05$, the critical value of $Z_{0.05}$ is equal to
10. If $1-\alpha=0.90$, then the value of $Z_{\frac{\alpha}{2}}$ is
11. If $Var(T_2)<Var(T_1)$, then $T_2$ is
12. The t-distribution is used when
13. A sample is considered a small sample if the size is
14. In applying the t-test
15. A confidence interval will be widened if:
16. By decreasing $\overline{X}$, the length of the confidence interval for $\mu$
17. The best estimator of the population proportion ($\pi$) is:
18. If the population standard deviation is unknown and the sample size is less than 30, then the confidence interval for the population mean ($\mu$) is
19. Which is NOT a property of a point estimator?
20. In a $Z$-test, the number of degrees of freedom is

Statistical inference is a branch of statistics in which we draw conclusions (make decisions) about a population parameter using sample information. Statistical inference can be further divided into the estimation of population parameters and hypothesis testing.

Estimation is a way of finding the unknown value of a population parameter from sample information by using an estimator (a statistical formula) to estimate the parameter. One can estimate a population parameter using two approaches: (i) point estimation and (ii) interval estimation.


In point estimation, a single numerical value is computed for each parameter, while in interval estimation, a set of values (an interval) for the parameter is constructed. The width of the confidence interval depends on the sample size and the confidence coefficient; it can be decreased by increasing the sample size. An estimator is a formula used to estimate a population parameter by making use of sample information.

Online Estimation Quiz

gmstat.com online MCQs test Website