Akaike Information Criteria: A Comprehensive Guide

The Akaike Information Criterion (AIC) is a method used in statistics and machine learning to compare the relative quality of different models for a given dataset. AIC helps in selecting the best model from a set of candidates by penalizing models that are overly complex. In other words, the Akaike Information Criterion provides a means for comparing models, i.e., a tool for model selection.

  • A too-simple model leads to a large approximation error.
  • A too-complex model leads to a large estimation error.

AIC is a measure of the goodness of fit of a statistical model developed by Hirotugu Akaike, who introduced it under the name "an information criterion" (AIC) and first published it in 1974. It is grounded in the concept of information entropy and offers a trade-off between bias and variance in model construction, or equivalently between the accuracy and the complexity of the model.

The Formula of Akaike Information Criteria

Given a data set, several candidate models can be ranked according to their AIC values. From the AIC values one may infer, for example, that the top two models are roughly tied and the rest are far worse.

$$AIC = 2k - 2\ln(L)$$

where $k$ is the number of parameters in the model, and $L$ is the maximized value of the likelihood function for the estimated model.
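
As a minimal sketch of how the formula is applied (the data and the two candidate models here are hypothetical, and NumPy/SciPy are assumed to be available), one can compute and compare AIC values in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
y = rng.normal(loc=5.0, scale=2.0, size=100)    # hypothetical data for the example

def aic(log_likelihood, k):
    """AIC = 2k - 2 ln(L), where k is the number of estimated parameters."""
    return 2 * k - 2 * log_likelihood

# Model 1: Normal with the mean fixed at 0; only the variance is estimated (k = 1)
scale1 = np.sqrt(np.mean(y**2))                 # MLE of sigma when the mean is fixed at 0
ll_m1 = np.sum(stats.norm.logpdf(y, loc=0.0, scale=scale1))

# Model 2: Normal with both mean and variance estimated (k = 2)
ll_m2 = np.sum(stats.norm.logpdf(y, loc=y.mean(), scale=y.std(ddof=0)))

print("AIC model 1:", round(aic(ll_m1, k=1), 2))
print("AIC model 2:", round(aic(ll_m2, k=2), 2))  # the lower AIC identifies the preferred model
```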

Akaike Information Criterion (AIC)

For a set of candidate models for the data, the preferred model is the one with the minimum AIC value. AIC estimates the relative support for a model, which means that AIC scores are not very meaningful by themselves.

The Akaike Information Criterion focuses on:

  • Balances fit and complexity: A model that perfectly fits the data might not be the best because it might be memorizing the data instead of capturing the underlying trend. AIC considers both how well a model fits the data (goodness of fit) and how complex it is (number of variables).
  • A lower score is better: Models having lower AIC scores are preferred as they achieve a good balance between fitting the data and avoiding overfitting.
  • Comparison tool: AIC scores are most meaningful when comparing models for the same dataset. The model with the lowest AIC score is considered the best relative to the other models being evaluated.

Summary

The AIC score is a single number that is used as a model selection criterion. One cannot interpret an AIC score in isolation. However, one can compare the AIC scores of different models fitted to the same data. The model with the lowest AIC is generally considered the best choice.

The AIC is most useful as a model selection criterion when there are multiple candidate models to choose from. It works well for larger datasets. However, for smaller datasets, the corrected AIC (AICc, shown below) should be preferred. AIC is not perfect, and there can be situations where it fails to choose the optimal model.
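
For reference, the commonly used small-sample correction (not derived here) is

$$AICc = AIC + \frac{2k(k+1)}{n - k - 1},$$

where $n$ is the sample size.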

There are many other model selection criteria. For more detail read the article: Model Selection Criteria


Multiple Regression Analysis

Introduction to Multiple Regression Analysis

Francis Galton (a biometrician) examined the relationship between fathers' and sons' heights. He also analyzed the similarities between the parent and offspring generations of 700 sweet peas. Galton found that the offspring of tall parents tended to be shorter than their parents, and the offspring of short parents tended to be taller. Here the height of the children ($Y$) depends upon the height of the parents ($X$). When there is more than one independent variable (IV), we need multiple regression analysis (MRA), also called multiple linear regression (MLR).

Multiple Linear Regression Model

The linear regression model (equation) for two independent variables (regressors) is

$$Y_i = \alpha + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i$$

The general linear regression model (equation) for $k$ independent variables is

$$Y_i = \alpha + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + \cdots + \beta_k X_{ki} + \varepsilon_i$$

The $\beta$s are all regression coefficients (partial slopes) and the $\alpha$ is the intercept.

The sample linear regression model is

$$\hat{y}_i = \hat{\alpha} + \hat{\beta}_1 x_{1i} + \hat{\beta}_2 x_{2i}$$

Multiple Regression Coefficients Formula

To fit the MLR equation for two regressors, one needs to compute the values of $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\alpha}$.

The formula for the first partial regression coefficient is

$$\hat{\beta}_1 = \frac{(S_{X_1 Y})(S_{X_2^2}) - (S_{X_2 Y})(S_{X_1 X_2})}{(S_{X_1^2})(S_{X_2^2}) - (S_{X_1 X_2})^2}$$

The first term of the numerator is the ("sum of products of the 1st independent variable and the dependent variable") multiplied by the ("sum of squares of the 2nd independent variable").

The second term of the numerator is the ("sum of products of the 2nd independent variable and the dependent variable") multiplied by the ("sum of products of the two independent variables").

The first term of the denominator is the ("sum of squares of the 1st independent variable") multiplied by the ("sum of squares of the 2nd independent variable").

The second term of the denominator is the ("square of the sum of products of the two independent variables").

The formula for the 2nd regression coefficient is

$$\hat{\beta}_2 = \frac{(S_{X_2 Y})(S_{X_1^2}) - (S_{X_1 Y})(S_{X_1 X_2})}{(S_{X_1^2})(S_{X_2^2}) - (S_{X_1 X_2})^2}$$

In short, note that $S$ stands for the corrected sums of squares and sums of products (deviations taken from the respective means).

Multiple Linear Regression Example

Consider the following data about two regressors ($X_1, X_2$) and one regressand variable ($Y$).

| $Y$ | $X_1$ | $X_2$ | $X_1 Y$ | $X_2 Y$ | $X_1 X_2$ | $X_1^2$ | $X_2^2$ |
|-----|-------|-------|---------|---------|-----------|---------|---------|
| 30  | 10    | 15    | 300     | 450     | 150       | 100     | 225     |
| 22  | 5     | 8     | 110     | 176     | 40        | 25      | 64      |
| 16  | 10    | 12    | 160     | 192     | 120       | 100     | 144     |
| 7   | 3     | 7     | 21      | 49      | 21        | 9       | 49      |
| 14  | 2     | 10    | 28      | 140     | 20        | 4       | 100     |
| 89  | 30    | 52    | 619     | 1007    | 351       | 238     | 582     |

\begin{align*}
S_{X_1 Y} &= \sum X_1 Y - \frac{\sum X_1 \sum Y}{n} = 619 - \frac{30\times 89}{5} = 85\\
S_{X_1 X_2} &= \sum X_1 X_2 - \frac{\sum X_1 \sum X_2}{n} = 351 - \frac{30 \times 52}{5} = 39\\
S_{X_1^2} &= \sum X_1^2 - \frac{(\sum X_1)^2}{n} = 238 - \frac{30^2}{5} = 58\\
S_{X_2^2} &= \sum X_2^2 - \frac{(\sum X_2)^2}{n} = 582 - \frac{52^2}{5} = 41.2\\
S_{X_2 Y} &= \sum X_2 Y - \frac{\sum X_2 \sum Y}{n} = 1007 - \frac{52 \times 89}{5} = 81.4
\end{align*}

\begin{align*}
\hat{\beta}_1 &= \frac{(S_{X_1 Y})(S_{X_2^2}) - (S_{X_2 Y})(S_{X_1 X_2}) }{(S_{X_1^2})(S_{X_2^2}) - (S_{X_1 X_2})^2} = \frac{(85)(41.2) - (81.4)(39)}{(58)(41.2) - (39)^2} = 0.3769\\
\hat{\beta}_2 &= \frac{(S_{X_2 Y})(S_{X_1^2}) - (S_{X_1 Y})(S_{X_1 X_2}) }{(S_{X_1^2})(S_{X_2^2}) - (S_{X_1 X_2})^2} = \frac{(81.4)(58) - (85)(39)}{(58)(41.2) - (39)^2} = 1.6189\\
\hat{\alpha} &= \overline{Y} - \hat{\beta}_1 \overline{X}_1 - \hat{\beta}_2 \overline{X}_2 = 17.8 - (0.3769)(6) - (1.6189)(10.4) = -1.298
\end{align*}

The fitted regression equation is therefore
$$\hat{y} = -1.298 + 0.3769X_1 + 1.6189X_2$$
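
As a quick cross-check of the computations above (a sketch assuming NumPy is available), the following Python code reproduces the corrected sums of squares and cross-products and the partial regression coefficients, and compares them with an ordinary least-squares fit:

```python
import numpy as np

# Data from the worked example (two regressors, one response)
X1 = np.array([10, 5, 10, 3, 2], dtype=float)
X2 = np.array([15, 8, 12, 7, 10], dtype=float)
Y  = np.array([30, 22, 16, 7, 14], dtype=float)

n = len(Y)

# Corrected sums of squares and cross-products (deviations from the means)
S_x1y  = np.sum(X1 * Y)  - X1.sum() * Y.sum()  / n   # 85
S_x2y  = np.sum(X2 * Y)  - X2.sum() * Y.sum()  / n   # 81.4
S_x1x2 = np.sum(X1 * X2) - X1.sum() * X2.sum() / n   # 39
S_x1x1 = np.sum(X1**2)   - X1.sum()**2 / n           # 58
S_x2x2 = np.sum(X2**2)   - X2.sum()**2 / n           # 41.2

den = S_x1x1 * S_x2x2 - S_x1x2**2
b1 = (S_x1y * S_x2x2 - S_x2y * S_x1x2) / den         # ~0.3769
b2 = (S_x2y * S_x1x1 - S_x1y * S_x1x2) / den         # ~1.6189
a  = Y.mean() - b1 * X1.mean() - b2 * X2.mean()      # ~ -1.298

print(f"y-hat = {a:.3f} + {b1:.4f}*X1 + {b2:.4f}*X2")

# Cross-check with ordinary least squares via the normal equations
X = np.column_stack([np.ones(n), X1, X2])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("lstsq coefficients:", np.round(coef, 4))
```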

Important Key Points of Multiple Regression

  • Independent variables (predictors, regressors): These are the variables that one believes influence the dependent variable. One can have two or more independent variables in a multiple regression model.
  • Dependent variable (outcome, response): This is the variable one is trying to predict or explain using the independent variables.
  • Linear relationship: The core assumption is that the relationship between the independent variables and the dependent variable is linear. This means the dependent variable changes at a constant rate for a unit change in an independent variable, holding all other variables constant.

The main goal of multiple regression analysis is to find a linear equation that best fits the data. The multiple regression analysis also allows one to:

  • Predict the value of the dependent variable based on the values of the independent variables.
  • Understand how changes in the independent variables affect the dependent variable while considering the influence of other independent variables.

Read more about Interpreting the Multiple Regression Coefficients


Simple Linear Regression Model

Frequently, we measure two or more variables on each individual and try to express the nature of the relationship between these variables (for example, in the simple linear regression model and correlation analysis). Using the regression technique, we estimate the relationship of one variable with another by expressing one as a linear (or more complex) function of the other. We can also predict the values of one variable in terms of the other. The variables involved in regression and correlation analysis are continuous. In this post, we will learn about the Simple Linear Regression Model.

We are interested in establishing significant functional relationships between two (or more) variables. For example, the function $Y=f(X)=a+bX$ (read as $Y$ is a function of $X$) establishes a relationship to predict the values of the variable $Y$ for given values of the variable $X$. In statistics (biostatistics), this function is called the simple linear regression model, or simply the regression equation.

The variable $Y$ is called the dependent (response) variable, and $X$ is called the independent (regressor or explanatory) variable.

In biology, many relationships can be appropriate over only a limited range of values of $X$. Negative values are meaningless in many cases, such as age, height, weight, and body temperature.

The method of linear regression is used to estimate the best-fitting straight line describing the relationship between the variables. Linear regression gives the equation of the straight line that best describes how the outcome variable $Y$ increases/decreases with an increase/decrease in the explanatory variable $X$. The equation of the regression line is
$$Y=\beta_0 + \beta_1 X,$$
where $\beta_0$ is the intercept (value of $Y$ when $X=0$) and $\beta_1$ is the slope of the line. Both $\beta_0$ and $\beta_1$ are the parameters (or regression coefficients) of the linear equation.

Estimation of Regression Coefficients in Simple Linear Regression Model

The best-fitting line is derived using the method of \textit{Least Squares}, by finding the values of the parameters $\beta_0$ and $\beta_1$ that minimize the sum of the squared vertical distances of the points from the regression line.

The best-fit line passes through the point ($\overline{X}, \overline{Y}$).

The regression line $Y=\beta_0+\beta_1X$ is fitted by the least-squares method. The regression coefficients $\beta_0$ and $\beta_1$ are both calculated to minimize the sum of squares of the vertical deviations of the points about the regression line. Each deviation equals the difference between the observed value of $Y$ and the estimated value of $Y$ (the corresponding point on the regression line).

The following table shows the \textit{body weight} and \textit{plasma volume} of eight healthy men.

| Subject | Body Weight (kg) | Plasma Volume (liters) |
|---------|------------------|------------------------|
| 1       | 58.0             | 2.75                   |
| 2       | 70.0             | 2.86                   |
| 3       | 74.0             | 3.37                   |
| 4       | 63.5             | 2.76                   |
| 5       | 62.0             | 2.62                   |
| 6       | 70.5             | 3.49                   |
| 7       | 71.0             | 3.05                   |
| 8       | 66.0             | 3.12                   |
Simple Linear Regression Models: Scatter plot with regression line

The parameters $\beta_0$ and $\beta_1$ are estimated using the following formula (for simple linear regression model):

\begin{align}
\beta_1 &= \frac{n\sum\limits_{i=1}^{n} x_iy_i - \sum\limits_{i=1}^{n} x_i \sum\limits_{i=1}^{n} y_i} {n \sum\limits_{i=1}^{n} x_i^2 - \left(\sum\limits_{i=1}^{n} x_i \right)^2}\\
\beta_0 &= \overline{Y} – \beta_1 \overline{X}
\end{align}

Regression coefficients are sometimes known as "beta coefficients". When the slope $\beta_1=0$, there is no linear relationship between the $X$ and $Y$ variables. For the data above, the best-fitting straight line describing the relationship of plasma volume with body weight is
$$Plasma\, Volume = 0.0857 + 0.0436\times Weight$$
Note that the calculated values of $\beta_0$ and $\beta_1$ are estimates of the population values and are, therefore, subject to sampling variation.
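
As a quick check (a sketch assuming NumPy is available), the following Python code reproduces the slope and intercept estimates for the body weight and plasma volume data using the formulas above:

```python
import numpy as np

# Body weight (kg) and plasma volume (liters) for the eight subjects above
weight = np.array([58.0, 70.0, 74.0, 63.5, 62.0, 70.5, 71.0, 66.0])
plasma = np.array([2.75, 2.86, 3.37, 2.76, 2.62, 3.49, 3.05, 3.12])

n = len(weight)

# Least-squares estimates of the slope and intercept
b1 = (n * np.sum(weight * plasma) - weight.sum() * plasma.sum()) / \
     (n * np.sum(weight**2) - weight.sum()**2)          # ~0.0436
b0 = plasma.mean() - b1 * weight.mean()                 # ~0.0857

print(f"Plasma Volume = {b0:.4f} + {b1:.4f} * Weight")
```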


Leverage Influential Point and Outlier: Diagnostics (2024)

In this post, we discuss diagnostics for leverage points, influential points, and outliers. In a regression analysis, certain observations may influence the fitted model and its estimates. Such observations may be classified as outliers, leverage points, and influential points.

Outliers, Leverage Points, and Influential Points

Each of these types of observations is described below:

  • Outliers: An outlier is an extreme observation that differs considerably from the other observations. An outlier may be due to a recording error, and the model may not be able to explain it. However, outliers may contain important information. An outlier may be extreme in $x$-space, in $y$-space, or in both.
  • Leverage: An observation with an unusual $x$ value is called a leverage point. A leverage point affects the model summary statistics (such as $R^2$, standard errors, etc.), but has little impact on the estimates of the regression coefficients. A leverage point has an unusual predictor value and is distant from the bulk of the observations.
  • Influence: An observation with an unusual $y$ value (and possibly an extreme $x$ value as well) is called an influential point. An influential point has a noticeable impact on the estimated regression coefficients and may change the direction of the slope.

Diagnostics for Outlier, Leverage, and Influential Points

There are several methods to detect/identify outliers, leverage points, and influential points.

Outliers

Outliers must be treated very carefully. Outliers may be detected by examining the:

  • Normal quantile plots (departure from normality)
  • Residual plots (magnitude of the residuals)
  • Scaled residuals (an observation is a potential outlier if the magnitude of its scaled residual exceeds 3)
  • Box plots (outlier detection using a box plot)

Leverage Point

The diagonal elements of the “hat matrix” play an important role in detecting influential observations: $$h_{ii} = x’_i (X’X)^{-1}x_i,$$ where $X$ is the matrix of regressors and $x’_i$ is the $i$th row of the $X$ matrix.

A large diagonal element is an indicator of an influential observation, as such points are remote in $x$-space. The average size of a diagonal element of the hat matrix is $\overline{h} = \frac{p}{n}$, where $p$ is the number of parameters in the model; any observation with $h_{ii} > \frac{2p}{n}$ (twice the average) is considered a leverage point.
It is also useful to observe the studentized residuals in conjunction with $h_{ii}$ (that is, look for large hat diagonals together with large residual values).

Note that not all leverage points are influential unless they have large residuals. Therefore, observations having large $h_{ii}$ values and large residuals are likely to be influential.
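
As a small illustration (a sketch assuming NumPy is available), the following Python code computes the hat-matrix diagonals for the body weight data from the simple linear regression example and flags any observation exceeding the $2p/n$ rule of thumb:

```python
import numpy as np

# Hat-matrix diagonals for the body weight / plasma volume data used earlier
weight = np.array([58.0, 70.0, 74.0, 63.5, 62.0, 70.5, 71.0, 66.0])

n = len(weight)
X = np.column_stack([np.ones(n), weight])       # design matrix (intercept + one regressor)
p = X.shape[1]                                  # number of parameters (here p = 2)

H = X @ np.linalg.inv(X.T @ X) @ X.T            # hat matrix
h = np.diag(H)                                  # leverages h_ii

cutoff = 2 * p / n                              # rule of thumb: h_ii > 2p/n
print("leverages:", np.round(h, 3))
print("cutoff 2p/n =", cutoff)
print("flagged as leverage points (subject numbers):", np.where(h > cutoff)[0] + 1)
```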

Influential Points

  • Cook’s Distance: Cook’s distance is a deletion diagnostic that measures the influence of the $i$th observation by removing it from the regression analysis. It compares the estimate based on all $n$ points, $\hat{\beta}$, with the estimate based on deletion of the $i$th point, $\hat{\beta}_{(i)}$.
  • DFBETAS is another deletion diagnostic, used to measure how much each $\hat{\beta}_j$ changes due to an influential observation. A large value of DFBETAS indicates that the $i$th observation has considerable influence on the $j$th regression coefficient. If $|DFBETAS_{j,i}| > \frac{2}{\sqrt{n}}$, then the $i$th observation warrants further examination.
  • DFFITS is another deletion diagnostic, used to measure the influence of deleting the $i$th observation on the predicted (fitted) values. DFFITS is the number of standard deviations by which the fitted value changes if the $i$th observation is removed. If $|DFFITS_i| > 2\sqrt{\frac{p}{n}}$, then the $i$th observation warrants further examination (these measures are illustrated in the sketch below).
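
As an illustration (a sketch assuming the statsmodels package is available; its OLSInfluence helper exposes these measures), the following Python code computes Cook's distance, DFBETAS, and DFFITS for the body weight and plasma volume data used earlier:

```python
import numpy as np
import statsmodels.api as sm

# Deletion diagnostics for the body weight / plasma volume data used earlier
weight = np.array([58.0, 70.0, 74.0, 63.5, 62.0, 70.5, 71.0, 66.0])
plasma = np.array([2.75, 2.86, 3.37, 2.76, 2.62, 3.49, 3.05, 3.12])

X = sm.add_constant(weight)          # design matrix with intercept
fit = sm.OLS(plasma, X).fit()
infl = fit.get_influence()

n, p = X.shape
cooks_d, _ = infl.cooks_distance     # Cook's distance for each observation
dfbetas = infl.dfbetas               # change in each coefficient when observation i is deleted
dffits, _ = infl.dffits              # change in the fitted value when observation i is deleted

print("Cook's D:", np.round(cooks_d, 3))
print("DFBETAS :", np.round(dfbetas, 3))
print("observations with |DFFITS| > 2*sqrt(p/n):", np.where(np.abs(dffits) > 2 * np.sqrt(p / n))[0])
print("entries with |DFBETAS| > 2/sqrt(n):", np.where(np.abs(dfbetas) > 2 / np.sqrt(n)))
```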

Note that the case deletion diagnostics do not provide any information about the overall precision of estimation. However, this aspect of the model's performance can be measured by using the Generalized Variance (GV) and the Covariance Ratio (COVRATIO).

In summary, outliers, leverage points, and influential observations are data points that deviate from the expected pattern. Outliers are extreme values that lie far away from the other data points, leverage points have unusual predictor values, and influential observations exert a strong influence on the fitted regression model.

Read more about Regression Diagnostics
