Outliers and Influential Observations

Here we will focus on the difference between outliers and influential observations.

Outliers

Cases (observations or data points) that do not follow the model as well as the rest of the data are called outliers. In regression, cases with large residuals are candidates for outliers. In other words, an outlier is a data point that diverges from the overall pattern in a sample. An outlier can therefore distort the apparent relationship between the variables and may also exert an influence on the slope of the regression line.

An outlier can be created by a shift in the location (mean) or in the scale (variability) of the process. An outlier may be due to recording errors (which may be correctable), or due to the sample not being drawn entirely from the same population. Outliers may also arise when the values do come from the same population but that population is non-normal (heavy-tailed); that is, outliers may reflect a model specification built on the wrong distributional assumptions.

Influential Observations

An influential observation is often an outlier in the x-direction. Influential observations may arise from:

  1. observations that are unusually large or otherwise deviate in unusually extreme ways from the center of a reference distribution;
  2. observations associated with a unit that has a low probability of selection and thus a high probability weight;
  3. observations with a very large weight (relative to the weights of other units in the specified sub-population) due to problems with stratum jumping; sampling of birth units or highly seasonal units; large nonresponse adjustment factors arising from unusually low response rates within a given adjustment cell; unusual calibration-weighting effects; or other factors.

Importance of Outliers and Influential Observations

Outliers and influential observations are important because:

  • Both outliers and influential observations can potentially mislead the interpretation of the regression model.
  • Outliers might indicate errors in the data or a non-linear relationship that the model cannot capture.
  • Influential observations can make the model seem more accurate than it is, masking underlying issues.

How to Identify Outliers and Influential Observations

Both outliers and influential observations can be identified by using:

  • Visual inspection: Scatterplots can reveal outliers as distant points.
  • Residual plots: Plotting residuals against predicted values or independent variables can show patterns indicative of influential observations.
  • Statistical diagnostics: Measures like Cook’s distance or leverage can quantify the influence of each data point.

By being aware of outliers and influential observations, one can ensure that the regression analysis provides a more reliable picture of the relationship between variables.
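
As a concrete illustration, here is a minimal Python sketch (using statsmodels and simulated data of our own choosing, not from the original text) that computes leverage, studentized residuals, and Cook's distance, and flags points using common rules of thumb:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data with one point made extreme in the x-direction (hypothetical example)
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 30)
y = 2 + 0.5 * x + rng.normal(0, 1, 30)
x[0], y[0] = 20.0, 1.0                  # a candidate influential observation / outlier

X = sm.add_constant(x)                   # design matrix with intercept
fit = sm.OLS(y, X).fit()
influence = fit.get_influence()

cooks_d = influence.cooks_distance[0]              # Cook's distance per observation
leverage = influence.hat_matrix_diag               # leverage h_ii (extremeness in x)
rstudent = influence.resid_studentized_external    # studentized residuals (outlier check)

n, p = X.shape
# Common rules of thumb: Cook's D > 4/n, leverage > 2p/n, |studentized residual| > 2
flagged = (cooks_d > 4 / n) | (leverage > 2 * p / n) | (np.abs(rstudent) > 2)
print("Flagged observations:", np.where(flagged)[0])
```

The thresholds used above are only conventional rules of thumb; flagged points should always be inspected rather than deleted automatically.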

Learn R Programming Language

Error and Residual in Regression

In statistics and optimization, statistical errors and residuals are two closely related and easily confused measures of the deviation of an observation from its expected or fitted value.

The term "error" is something of a misnomer, since it does not mean a mistake: an error is the amount by which an observation differs from its expected value. The errors $e$ are unobservable random variables, assumed to have zero mean and to be uncorrelated, each with common variance $\sigma^2$.

A residual, on the other hand, is an observable estimate of the unobservable error. The residuals $\hat{e}$ are computed quantities with mean $E(\hat{e})=0$ and variance $V(\hat{e})=\sigma^2 (I-H)$, where $H$ is the hat matrix.

Like the errors, each of the residuals has zero mean, but the residuals may have different variances. Unlike the errors, the residuals are correlated. The residuals are linear combinations of the errors, so if the errors are normally distributed, so are the residuals.

Note that the sum of the residuals is necessarily zero, and thus the residuals are necessarily not independent. The sum of the errors need not be zero; the errors are independent random variables if the individuals are chosen from the population independently.
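
To make these properties concrete, the following minimal Python sketch (with simulated data chosen purely for illustration) computes the hat matrix $H$, checks that the residuals sum to zero, and shows that the residual variances $\hat{\sigma}^2(1-h_{ii})$ differ across observations:

```python
import numpy as np

# Hypothetical simple linear regression data
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 1.5, 20)

X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
y_hat = H @ y                                  # fitted values
resid = y - y_hat                              # residuals

print("Sum of residuals:", resid.sum())        # ~0 up to rounding error
sigma2_hat = resid @ resid / (len(y) - 2)      # estimate of sigma^2 for two parameters
print("Residual variances:", sigma2_hat * (1 - np.diag(H)))  # differ across points
```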

The differences between errors and residuals in Regression are:

Sr. No. | Errors | Residuals
1) | Error represents the unobservable difference between an actual value $y$ of the dependent variable and its true population mean. | Residuals represent the observable difference between an actual value $y$ of the dependent variable and its predicted value according to the regression model.
2) | Error is a theoretical concept because the true population mean is usually unknown. | One can calculate residuals because we have the data and the fitted model.
3) | Errors are assumed to be random and independent, with a mean of zero. | Residuals are considered estimates of the errors for each data point.

Residuals are used in various ways to evaluate the regression model, including:

  • Residual plots: Residual plots visualize the residuals against the independent variable(s) or the predicted values.
  • Mean Squared Error (MSE): The MSE is the average of the squared residuals and summarizes how far, on average, the observations fall from the fitted values (see the sketch below).
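
A brief Python sketch of both diagnostics (the data are simulated and chosen only for illustration), computing the MSE as the average squared residual and drawing a residual plot:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data and a fitted simple linear regression
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 50)
y = 3.0 + 1.2 * x + rng.normal(0, 2.0, 50)

slope, intercept = np.polyfit(x, y, 1)   # least-squares fit
y_hat = intercept + slope * x
resid = y - y_hat

# MSE here is taken simply as the average of the squared residuals
mse = np.mean(resid ** 2)
print("MSE:", mse)

# Residual plot: residuals versus fitted (predicted) values
plt.scatter(y_hat, resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()
```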

In essence, understanding errors and residuals helps the researcher gauge how well the regression model captures the underlying relationship between variables, despite the inherent randomness or “noise” in real-world data.

FAQs about Errors and Residuals

  1. What is an Error?
  2. What are residuals in regression?
  3. What is the purpose of residual plots?
  4. What is a mean squared error (MSE)?
  5. Differentiate between error and residual.
  6. Discuss the sum of residuals and the sum of errors.
Statistics Help: https://itfeature.com

Learn about Simple Linear Regression Models

Statistical Models in R Language

Inverse Regression Analysis or Calibration (2012)

In most regression problems, we have to determine the value of $Y$ corresponding to a given value of $X$. The inverse of this problem, determining the value of $X$ that corresponds to an observed value of $Y$, is called inverse regression analysis or calibration.

Inverse Regression Analysis

For inverse regression analysis, let the known values be represented by the matrix $X$ and their corresponding responses by the vector $Y$, which together form a simple linear regression model. Suppose there is an unknown value of $X$, say $X_0$, which cannot be measured, but we observe the corresponding value of $Y$, say $Y_0$. Then $X_0$ can be estimated and a confidence interval for $X_0$ can be obtained.

In regression analysis, we want to investigate the relationship between variables. Regression has many applications, which occur in many fields: engineering, economics, the physical and chemical sciences, management, biological sciences, and social sciences. We only consider the simple linear regression model, which is a model with one regressor $X$ that has a linear relationship with a response $Y$. It is not always easy to measure the regressor $X$ or the response $Y$.

Let us consider a typical example of this problem. If $X$ is the concentration of glucose in a certain substance, then a spectrophotometric method is used to measure the absorbance. This absorbance depends on the concentration $X$. The response $Y$ (the absorbance) is easy to measure with the spectrophotometric method, but the concentration, on the other hand, is not easy to measure. If we have $n$ known concentrations, then the corresponding absorbances can be measured.

If there is a linear relationship between $Y$ and $X$, then a simple linear regression model can be fitted to these data. Suppose we have an unknown concentration that is difficult to measure directly, but we can measure the absorbance of this concentration. Is it possible to estimate the concentration from the measured absorbance? This is called the calibration problem or inverse regression analysis.

Suppose we have a linear model $Y=\beta_0+\beta_1X+e$ and an observed value of the response $Y$, but we do not have the corresponding value of $X$. How can we estimate this value of $X$? The two most important methods for estimating $X$ are the classical method and the inverse method.

The classical method of inverse regression analysis is based on the simple linear regression model

$Y=\beta_0+\beta_1X+\varepsilon,$   where $\varepsilon \sim N(0, \, \sigma^2)$

where the parameters $\beta_0$ and $\beta_1$ are estimated by least squares as $\hat{\beta}_0$ and $\hat{\beta}_1$. At least two of the $n$ values of $X$ have to be distinct; otherwise, we cannot fit a reliable regression line. For a given value of $X$, say $X_0$ (unknown), a $Y$ value, say $Y_0$ (or a random sample of $k$ values of $Y$), is observed at the $X_0$ value. For inverse regression analysis, the problem is to estimate $X_0$. The classical method uses the $Y_0$ value (or the mean of the $k$ values of $Y_0$) to estimate $X_0$ as $\hat{X}_0=\frac{Y_0-\hat{\beta}_0}{\hat{\beta}_1}$.
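
The following minimal Python sketch illustrates the classical estimator; the calibration data and the observed value $Y_0$ are invented purely for illustration:

```python
import numpy as np

# Hypothetical calibration data: known concentrations x and measured absorbances y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.11, 0.19, 0.32, 0.40, 0.52, 0.61])

b1, b0 = np.polyfit(x, y, 1)        # least-squares slope and intercept of y on x

# Observed response at an unknown concentration x0
y0 = 0.45
x0_classical = (y0 - b0) / b1       # classical (inverse-prediction) estimate of x0
print("Classical estimate of x0:", x0_classical)
```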

The inverse estimator is the simple linear regression of $X$ on $Y$. In this case, we have to fit the model

\[X=a_0+a_1Y+e, \quad \text{where }\, e \sim N(0, \sigma^2)\]

to obtain the estimates $\hat{a}_0$ and $\hat{a}_1$. The inverse estimator of $X_0$ is then

\[\hat{X}_0=\hat{a}_0+\hat{a}_1Y_0\]
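
For comparison, a minimal sketch of the inverse estimator using the same invented calibration data as in the classical sketch above, so the two estimates of $X_0$ can be compared:

```python
import numpy as np

# Same hypothetical calibration data as in the classical sketch
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.11, 0.19, 0.32, 0.40, 0.52, 0.61])
y0 = 0.45                            # observed response at the unknown concentration

a1, a0 = np.polyfit(y, x, 1)         # least-squares fit of x on y
x0_inverse = a0 + a1 * y0            # inverse estimate of x0
print("Inverse estimate of x0:", x0_inverse)
```

The two estimators generally give similar but not identical answers; they coincide only when the calibration fit is perfect.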

Important Considerations when performing Inverse Regression

  • Inverse regression can be statistically challenging, especially when the errors are mainly in the independent variables (which become the dependent variables in the inverse model).
  • It is not a perfect replacement for traditional regression, and the assumptions underlying the analysis may differ.
  • In some cases, reverse regression, which treats both variables as having errors, might be a more suitable approach.

In summary, inverse regression is a statistical technique that flips the roles of the independent and dependent variables in a regression model.

Learn R Language Programming

Coefficient of Determination: Model Selection (2012)

$R^2$, pronounced R-squared (the coefficient of determination), is a useful statistic for checking the fit of a regression. $R^2$ measures the proportion of the total variation about the mean $\bar{Y}$ that is explained by the regression. $R$ is the correlation between $Y$ and $\hat{Y}$ and, in multiple regression, is the multiple correlation coefficient. The coefficient of determination ($R^2$) can take values as high as 1 (or 100%) when all the $X$ values are distinct; in general, $0\le R^2\le 1$.

Coefficient of Determination

When repeat runs exist in the data, the value of $R^2$ cannot attain 1 no matter how well the model fits, because no model can explain the variation in the data due to pure error. For a perfect fit, in which $\hat{Y}_i=Y_i$ for all $i$, $R^2=1$. If $\hat{Y}_i=\bar{Y}$, that is, if $\beta_1=\beta_2=\cdots=\beta_{p-1}=0$, or if the model $Y=\beta_0 +\varepsilon$ alone has been fitted, then $R^2=0$. Therefore, $R^2$ is a measure of the usefulness of the terms other than $\beta_0$ in the model.

Note that we must be sure that an increase in $R^2$ due to adding a new term (variable) to the model has some real significance and is not simply because the number of parameters in the model is approaching the saturation point. If there is no pure error, $R^2$ can be made equal to unity.

\begin{align*}
R^2 &= \frac{\text{SS due to regression given}\, b_0}{\text{Total SS corrected for mean}\, \bar{Y}} \\
&= \frac{SS\,(b_1 | b_0)}{S_{YY}} \\
&= \frac{\sum(\hat{Y}_i-\bar{Y})^2}{\sum(Y_i-\bar{Y})^2} \\
&= \frac{S_{XY}^2}{S_{XX}\, S_{YY}}
\end{align*}

where the summations are over $i=1,2,\cdots, n$.
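
As a quick numerical check (with simulated data chosen only for illustration), the following Python sketch computes $R^2$ both as the ratio of sums of squares and as $S_{XY}^2/(S_{XX}S_{YY})$; for simple linear regression the two forms agree:

```python
import numpy as np

# Hypothetical data for a simple linear regression
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 40)
y = 5.0 + 1.5 * x + rng.normal(0, 2.0, 40)

slope, intercept = np.polyfit(x, y, 1)   # least-squares fit
y_hat = intercept + slope * x

# R^2 as explained sum of squares over total sum of squares
ss_reg = np.sum((y_hat - y.mean()) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2_ss = ss_reg / ss_tot

# Equivalent form S_XY^2 / (S_XX * S_YY)
s_xy = np.sum((x - x.mean()) * (y - y.mean()))
s_xx = np.sum((x - x.mean()) ** 2)
s_yy = np.sum((y - y.mean()) ** 2)
r2_s = s_xy ** 2 / (s_xx * s_yy)

print(r2_ss, r2_s)   # the two forms agree for simple linear regression
```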


Interpreting R-Squared

$R^2$ does not indicate whether:

  • the independent variables (explanatory variables) are a cause of the changes in the dependent variable;
  • omitted-variable bias exists;
  • the correct regression was used;
  • the most appropriate set of explanatory variables has been selected;
  • there is collinearity (or multicollinearity) present in the data;
  • the model might be improved using transformed versions of the existing explanatory variables.

Learn more: https://itfeature.com