Model Selection Criteria (2019)

All models are wrong, but some are useful. Model selection criteria are rules used to select a (statistical) model among competing models, based on given data.

Several model selection criteria are used to choose among a set of candidate models and/or to compare models for forecasting purposes.

All model selection criteria aim at minimizing the residual sum of squares (or, equivalently, increasing the coefficient of determination). Criteria such as the adjusted $R^2$, Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), Schwarz's Information Criterion (SIC), and Mallows's $C_p$ impose a penalty for including an increasingly large number of regressors. Therefore, there is a trade-off between the goodness of fit of the model and its complexity, where complexity refers to the number of parameters in the model.

Model Selection Criteria

Model Selection Criteria: Coefficient of Determination ($R^2$)

$$R^2=\frac{\text{Explained Sum of Squares}}{\text{Total Sum of Squares}}=1-\frac{\text{Residual Sum of Squares}}{\text{Total Sum of Squares}}$$

Adding more variables to the model may increase $R^2$, but it may also increase the variance of the forecast error.
There are some problems with $R^2$:

  • It measures in-sample goodness of fit, i.e., how close the estimated $Y$ values are to their actual values in the given sample. There is no guarantee that a model with a high $R^2$ will forecast out-of-sample observations well.
  • In comparing two or more $R^2$’s, the dependent variable must be the same.
  • $R^2$ cannot fall when more variables are added to the model.
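
As a quick illustration of the definition above, the following minimal Python sketch (the NumPy-based simulation and variable names are assumptions made only for this example) fits a simple regression and computes $R^2$ from the residual and total sums of squares.

```python
# A minimal R^2 computation in Python with NumPy; the simulated data are
# assumptions made only for this illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 2 + 3 * x + rng.normal(size=n)        # hypothetical linear relationship plus noise

X = np.column_stack([np.ones(n), x])      # design matrix with an intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

rss = np.sum((y - y_hat) ** 2)            # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)         # total sum of squares
r_squared = 1 - rss / tss
print(f"R^2 = {r_squared:.4f}")
```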

Model Selection Criteria: Adjusted Coefficient of Determination ($\overline{R}^2$)

$$\overline{R}^2=1-\frac{RSS/(n-k)}{TSS/(n-1)}$$

Since $\overline{R}^2 \le R^2$, the adjusted $R^2$ penalizes the addition of more regressors (explanatory variables). Unlike $R^2$, the adjusted $R^2$ will increase only if the absolute $t$-value of the added variable is greater than 1. For comparative purposes, $\overline{R}^2$ is therefore a better measure than $R^2$. As with $R^2$, the regressand (dependent variable) must be the same for the comparison of models to be valid.
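
A minimal sketch of the formula, using hypothetical values of RSS, TSS, $n$, and $k$ chosen only for illustration:

```python
# Adjusted R^2 from the formula above; the RSS, TSS, n, and k values are hypothetical.
def adjusted_r_squared(rss, tss, n, k):
    """Adjusted R^2 for a model with k parameters (including the intercept)."""
    return 1 - (rss / (n - k)) / (tss / (n - 1))

print(adjusted_r_squared(rss=120.0, tss=500.0, n=50, k=3))  # ~0.7498 (plain R^2 = 0.76)
```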

Model Selection Criteria: Akaike’s Information Criterion (AIC)

$$AIC=e^{\frac{2k}{n}}\frac{\sum \hat{u}^2_i}{n}=e^{\frac{2k}{n}}\frac{RSS}{n}$$
where $k$ is the number of regressors including the intercept and $n$ is the number of observations. In practice, AIC is examined in its log form:

$$\ln AIC = \left(\frac{2k}{n}\right) + \ln \left(\frac{RSS}{n}\right)$$
where $\ln AIC$ is the natural log of AIC and $\frac{2k}{n}$ is the penalty factor.

AIC imposes a harsher penalty than the adjusted coefficient of determination for adding more regressors. In comparing two or more models, the model with the lowest value of AIC is preferred. AIC is useful for assessing both the in-sample and out-of-sample forecasting performance of a regression model. AIC is also used to determine the lag length of an AR($p$) model.
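
A minimal sketch of the log form, using hypothetical RSS values and sample size to illustrate the trade-off between fit and the penalty:

```python
# log form of AIC; the RSS values and sample size are hypothetical.
import numpy as np

def ln_aic(rss, n, k):
    """ln(AIC) = 2k/n + ln(RSS/n), with k counting the intercept."""
    return 2 * k / n + np.log(rss / n)

print(ln_aic(rss=120.0, n=50, k=3))  # ~0.995
print(ln_aic(rss=118.0, n=50, k=4))  # ~1.019 -> lower RSS, but the extra regressor is not worth its penalty
```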

Model Selection Criteria: Schwarz’s Information Criterion (SIC)

\begin{align*}
SIC &=n^{\frac{k}{n}}\frac{\sum \hat{u}_i^2}{n}=n^{\frac{k}{n}}\frac{RSS}{n}\\
\ln SIC &= \frac{k}{n} \ln n + \ln \left(\frac{RSS}{n}\right)
\end{align*}
where $\frac{k}{n}\ln\,n$ is the penalty factor. SIC imposes a harsher penalty than AIC.

Like AIC, SIC is used to compare the in-sample or out-of-sample forecasting performance of a model. The lower the value of SIC, the better the model.
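
A minimal sketch of the log form of SIC, reusing the same hypothetical fits as in the AIC sketch:

```python
# log form of SIC; the same hypothetical fits as in the AIC sketch.
import numpy as np

def ln_sic(rss, n, k):
    """ln(SIC) = (k/n)*ln(n) + ln(RSS/n), with k counting the intercept."""
    return (k / n) * np.log(n) + np.log(rss / n)

print(ln_sic(rss=120.0, n=50, k=3))  # ~1.110
print(ln_sic(rss=118.0, n=50, k=4))  # ~1.172 -> the penalty for the extra regressor is harsher than under AIC
```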

Model Selection Criteria: Mallows's $C_p$ Criterion

For model selection, Mallows's criterion is
$$C_p=\frac{RSS_p}{\hat{\sigma}^2}-(n-2p)$$
where $RSS_p$ is the residual sum of squares of a model containing $p$ parameters (regressors including the intercept), and $\hat{\sigma}^2$ is the estimate of $\sigma^2$ from the full model.
If the model with $p$ parameters is adequate (does not suffer from lack of fit), then
\begin{align*}
E(RSS_p)&=(n-p)\sigma^2\\
E(C_p)&\approx \frac{(n-p)\sigma^2}{\sigma^2}-(n-2p)= p
\end{align*}
A model that has a low $C_p$ value, approximately equal to $p$, is preferable.
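
A minimal sketch of the computation, using hypothetical values of $RSS_p$ and $\hat{\sigma}^2$ chosen only for illustration:

```python
# Mallows's C_p; the RSS_p values and sigma^2 estimate are hypothetical.
def mallows_cp(rss_p, sigma2_full, n, p):
    """C_p = RSS_p / sigma^2 - (n - 2p), with sigma^2 estimated from the full model."""
    return rss_p / sigma2_full - (n - 2 * p)

print(mallows_cp(rss_p=130.0, sigma2_full=2.5, n=50, p=3))  # 8.0 -> well above p = 3, suggests bias
print(mallows_cp(rss_p=115.0, sigma2_full=2.5, n=50, p=4))  # 4.0 -> approximately equal to p, preferable
```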

Model Selection Criteria: Bayesian Information Criterion (BIC)

The Bayesian Information Criterion is based on the likelihood function and is closely related to the AIC; however, the penalty term in BIC is larger than in AIC.
$$BIC=\ln(n)k-2\ln(\hat{L})$$
where $\hat{L}$ is the maximized value of the likelihood function of the regression model, $n$ is the number of observations, and $k$ is the number of estimated parameters.
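
A minimal sketch of the computation, assuming the maximized log-likelihoods of two hypothetical competing models are already available:

```python
# BIC from the maximized log-likelihood; the log-likelihood values are hypothetical.
import numpy as np

def bic(log_likelihood, n, k):
    """BIC = ln(n) * k - 2 * ln(L-hat)."""
    return np.log(n) * k - 2 * log_likelihood

print(bic(log_likelihood=-85.0, n=50, k=3))  # ~181.74
print(bic(log_likelihood=-84.0, n=50, k=4))  # ~183.65 -> higher BIC, so the smaller model is preferred
```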

Cross-Validation

Cross-validation is a technique in which the data are split into training and testing sets. The model is fitted on the training set and then evaluated on the unseen testing set. This assesses how well the model generalizes to new data and helps guard against overfitting.
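
As a minimal sketch (scikit-learn and simulated data are assumptions made only for this example), 5-fold cross-validation of a linear regression model can be carried out as follows:

```python
# 5-fold cross-validation of a linear regression with scikit-learn; the simulated
# data and the choice of 5 folds are assumptions made only for this illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2 + X @ np.array([3.0, 0.0, -1.0]) + rng.normal(size=100)

# Each fold is held out once as the test set; scores are out-of-sample R^2 values.
scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(scores.mean())
```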

Note that none of these criteria is necessarily superior to the others.

