Heteroscedasticity Regression Residual Plot


One of the assumptions of the classical linear regression model is that there is no heteroscedasticity (the error terms have constant variance), which ensures that the ordinary least squares (OLS) estimators are BLUE (best linear unbiased estimators) and that their variances are the lowest of all other linear unbiased estimators (Gauss-Markov Theorem). If the assumption of constant variance does not hold, the Gauss-Markov Theorem no longer applies. For heteroscedastic data, regression analysis still provides unbiased estimates of the relationship between the predictors and the outcome variable, but those estimates are no longer efficient.

As we have discussed, heteroscedasticity occurs when the error term has non-constant variance. In this case, we can think of the disturbance for each observation as being drawn from a different distribution with a different variance. Stated equivalently, the variance of the observed value of the dependent variable around the regression line is non-constant. We can think of each observed value of the dependent variable as being drawn from a different conditional probability distribution with a different conditional variance. A general linear regression model with the assumption of heteroscedasticity can be expressed as follows:

\begin{align*}
y_i &= \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i\\
\operatorname{Var}(\varepsilon_i) &= \sigma_i^2; \quad i=1,2,\cdots, n
\end{align*}

Note that we have an $i$ subscript attached to sigma squared. This indicates that the disturbance for each of the $n$ units is drawn from a probability distribution that has a different variance.

If the error term has non-constant variance, but all other assumptions of the classical linear regression model are satisfied, then the consequences of using the OLS estimator to obtain estimates of the population parameters are:

  • The OLS estimator is still unbiased
  • The OLS estimator is inefficient; that is, it is not BLUE
  • The estimated variances and covariances of the OLS estimates are biased and inconsistent
  • Hypothesis tests are not valid

Detection of Heteroscedasticity: Regression Residual Plot

The residual for the $i$th observation, $\hat{\varepsilon}_i$, is an unbiased estimate of the unknown and unobservable error for that observation, $\varepsilon_i$. Thus the squared residual, $\hat{\varepsilon}_i^2$, can be used as an estimate of the unknown and unobservable error variance, $\sigma_i^2=E(\varepsilon_i^2)$. You can calculate the squared residuals and then plot them against an explanatory variable that you believe might be related to the error variance. If you believe that the error variance may be related to more than one of the explanatory variables, you can plot the squared residuals against each of these variables. Alternatively, you could plot the squared residuals against the fitted value of the dependent variable obtained from the OLS estimates. Most statistical software has a command to produce these residual plots. It must be emphasized that this is not a formal test for heteroscedasticity; it only suggests whether heteroscedasticity may exist.
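This diagnostic can be sketched in a few lines. The following is an illustrative simulation using NumPy only; the coefficients, sample size, and seed are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.linspace(0.1, 10, n)
# Simulated heteroscedastic data: the error standard deviation grows with x
y = 2 + 3 * x + rng.normal(0, 0.5 * x, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
sq_resid = (y - X @ beta) ** 2                # squared residuals

# Plot sq_resid against x (or against the fitted values X @ beta); a clear
# positive association suggests heteroscedasticity, though this is not a
# formal test.
print(np.corrcoef(x, sq_resid)[0, 1])
```

With the simulated data above, the printed correlation between the regressor and the squared residuals is clearly positive, which is the visual pattern the residual plot would reveal.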

Below are residual plots showing the three typical patterns. The first plot shows a random pattern, which indicates a good fit for a linear model. The other two residual plot patterns are non-random (U-shaped and inverted U), suggesting a better fit for a non-linear model than for a linear regression model.

[Residual plots: (1) random pattern, (2) U-shaped pattern, (3) inverted-U pattern]




Matrix in Matlab: Creating and Manipulating Matrices

A matrix (a two-dimensional, rectangular data structure used to store multiple elements of data in an easily accessible format) is the most basic data structure in Matlab. The elements of a matrix can be numbers, characters, logical values (true or false), or other Matlab structure types. Matlab also supports data structures with more than two dimensions, referred to as arrays in Matlab. Matlab is a matrix-based computing environment in which all of the data entered is stored as a matrix.

This Matlab tutorial assumes that you know some of the basics of how to define and manipulate vectors in Matlab. Here we will discuss:

  1. Defining Matrices
  2. Matrix Operations
  3. Matrix Functions

1)  Defining/ Creating Matrices

Defining a matrix in Matlab is similar to defining a vector in Matlab. To define a matrix, treat it as a column of row vectors.
>> A=[1 2 3; 4 5 6; 7 8 9]

Note that spaces between numbers are used to separate the elements within a row, and a semicolon is used to separate the rows of matrix A. The square brackets are used to construct matrices. Individual matrix and vector entries can be referenced within parentheses. For example, A(2,3) refers to the element in the second row and third column of matrix A.


Some examples of creating matrices and extracting elements:
>> A=rand(6, 6)
>> B=rand(6, 4)

>> A(1:4, 3) is a column vector consisting of the first four entries of the third column of A
>> A(:, 3) is the third column of A
>> A(1:4, :) contains the first four rows of matrix A

Convenient matrix building Functions

eye –> identity
zeros –> matrix of zeros
ones –> matrix of ones
diag –> create or extract diagonal elements of matrix
triu –> upper triangular part of matrix
tril –> lower triangular part of matrix
rand –> randomly generated matrix
hilb –> Hilbert matrix
magic –> magic square

2)  Matrix Operations

Many mathematical operations can be applied to matrices and vectors in Matlab, such as addition, subtraction, multiplication, and division of matrices.

Matrix or Vector Multiplication

If x and y are both column vectors, then x'*y is their inner (or dot) product and x*y' is their outer product.

Matrix division

Let A be an invertible square matrix and b a compatible vector; then
x = A\b is the solution of A * x = b
x = b/A is the solution of x * A = b

These are called the backslash (\) and slash (/) operators, also referred to as mldivide and mrdivide.

3)  Matrix Functions

Matlab has many functions used to create and analyze different kinds of matrices. Some important matrix functions in Matlab are:

eig –> eigenvalues and eigenvectors
eigs –> like eig, for large sparse matrices
chol –> Cholesky factorization
svd –> singular value decomposition
svds –> like svd, for large sparse matrices
inv –> inverse of matrix
lu –> LU factorization
qr –> QR factorization
hess –> Hessenberg form
schur –> Schur decomposition
rref –> reduced row echelon form
expm –> matrix exponential
sqrtm –> matrix square root
poly –> characteristic polynomial
det –> determinant of matrix
size –> size of an array
length –> length of a vector
rank –> rank of matrix

Sufficient Statistics and Sufficient Estimators

An estimator $\hat{\theta}$ is sufficient if it makes so much use of the information in the sample that no other estimator could extract from the sample any additional information about the population parameter being estimated.

The sample mean $\overline{X}$ utilizes all the values included in the sample, so it is a sufficient estimator of the population mean $\mu$.

Sufficient estimators are often used to develop estimators that have minimum variance among all unbiased estimators (MVUE).

If a sufficient estimator exists, no other estimator computed from the sample can provide additional information about the population parameter being estimated.

If there is a sufficient estimator, then there is no need to consider any non-sufficient estimator. Good estimators are functions of sufficient statistics.

Let $X_1,X_2,\cdots,X_n$ be a random sample from a probability distribution with unknown parameter $\theta$. The statistic (estimator) $U=g(X_1,X_2,\cdots,X_n)$ is sufficient for $\theta$ if the conditional distribution of the sample given $U$ does not depend upon the population parameter $\theta$.

Sufficient Statistic Example

The sample mean $\overline{X}$ is sufficient for the population mean $\mu$ of a normal distribution with known variance. Once the sample mean is known, no further information about the population mean $\mu$ can be obtained from the sample itself. The sample median, by contrast, is not sufficient for the mean: even if the median of the sample is known, knowing the sample itself would provide further information about the population mean $\mu$.

Mathematical Definition of Sufficiency

Suppose that $X_1,X_2,\cdots,X_n \sim p(x;\theta)$. $T$ is sufficient for $\theta$ if the conditional distribution of $X_1,X_2,\cdots, X_n \mid T$ does not depend upon $\theta$. This means that we can replace $X_1,X_2,\cdots,X_n$ with $T(X_1,X_2,\cdots,X_n)$ without losing information.
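A small numerical sketch of this definition, assuming a normal model with known variance $\sigma^2 = 1$ (the sample sizes and seed are arbitrary): two samples that share the same sample mean produce log-likelihoods differing only by a term free of $\mu$, so the data enter inference about $\mu$ only through $\overline{X}$.

```python
import numpy as np

def log_lik(mu, x):
    # Gaussian log-likelihood with known variance sigma^2 = 1
    return -0.5 * len(x) * np.log(2 * np.pi) - 0.5 * np.sum((x - mu) ** 2)

rng = np.random.default_rng(1)
x1 = rng.normal(5.0, 1.0, size=8)
x2 = rng.normal(5.0, 1.0, size=8)
x2 = x2 - x2.mean() + x1.mean()   # force both samples to share one sample mean

mus = np.linspace(3.0, 7.0, 41)
diff = np.array([log_lik(m, x1) - log_lik(m, x2) for m in mus])

# The difference is the same for every mu: only the sample mean matters
# for inference about mu, which is exactly what sufficiency says.
print(np.ptp(diff))  # essentially zero
```

This works because $\sum(x_i-\mu)^2 = \sum(x_i-\overline{x})^2 + n(\overline{x}-\mu)^2$: the only $\mu$-dependent term involves the data solely through $\overline{x}$.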

For further reading visit: https://en.wikipedia.org/wiki/Sufficient_statistic



Components of Time Series Data

Traditional methods of time series analysis are concerned with decomposing a series into a trend, a seasonal variation, and other irregular fluctuations. Although this approach is not always the best, it is still useful (Kendall and Stuart, 1996).

The components of which a time series is composed are called the components of time series data. There are four basic components of time series data, described below.

Different Sources of Variation are:

  1. Seasonal effect (Seasonal Variation or Seasonal Fluctuations)
    Many time series exhibit a seasonal variation with an annual period, such as sales and temperature readings. This type of variation is easy to understand and can be easily measured or removed from the data to give de-seasonalized data. Seasonal fluctuations describe any regular variation with a period of less than one year; for example, the prices of various types of fruits and vegetables, clothing sales, unemployment figures, average daily rainfall, the increase in sales of tea in winter, and the increase in sales of ice cream in summer all show seasonal variations. Changes that repeat themselves within a fixed period are also called seasonal variations, for example, traffic on roads in morning and evening hours, sales at festivals like Eid, and the increase in the number of passengers at weekends. Seasonal variations are caused by climate, social customs, religious activities, etc.
  2. Other Cyclic Changes (Cyclical Variation or Cyclic Fluctuations)
    Time series exhibit cyclical variations at a fixed period due to some other physical cause, such as the daily variation in temperature. Cyclical variation is a non-seasonal component that varies in a recognizable cycle. Some time series exhibit oscillations that do not have a fixed period but are predictable to some extent; for example, economic data are affected by business cycles with a period varying between about 5 and 7 years. In weekly or monthly data, the cyclical component may describe any regular variation in the time series. Cyclical variations are periodic in nature and repeat themselves like the business cycle, which has four phases: (i) peak, (ii) recession, (iii) trough/depression, and (iv) expansion.
  3. Trend (Secular Trend or Long Term Variation)
    This is a longer-term change. Here we take into account the number of observations available and make a subjective assessment of what is long term. To understand the meaning of long term, consider, for example, climate variables that sometimes exhibit cyclic variation over a very long period, such as 50 years. If one had only 20 years of data, this long-term oscillation would appear to be a trend, but if several hundred years of data were available, the long-term oscillations would be visible. These movements are systematic in nature: they are broad and steady, showing a slow rise or fall in the same direction. The trend may be linear or non-linear (curvilinear). Some examples of secular trend are: increases in prices, increases in pollution, increases in demand for wheat, increases in literacy rates, and decreases in deaths due to advances in science. Taking averages over a certain period is a simple way of detecting trend in seasonal data; a change in the averages over time is evidence of a trend in the given series, though there are more formal tests for detecting trend in time series.
  4. Other Irregular Variation (Irregular Fluctuations)
    When trend and cyclical variations are removed from a set of time series data, the residual that is left may or may not be random. Various techniques for analyzing series of this type examine whether the irregular variation can be explained in terms of probability models such as moving-average or autoregressive models, i.e. we can see whether any cyclical variation is still left in the residuals. Variations that occur due to sudden causes are called residual variations (irregular, accidental, or erratic fluctuations) and are unpredictable, for example a rise in the price of steel due to a strike in the factory, an accident due to brake failure, floods, earthquakes, war, etc.
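The averaging idea mentioned under the trend component can be sketched as follows. This is an illustrative NumPy example on a made-up, noise-free monthly series; the centered 12-point moving average is a standard trend estimate for monthly data:

```python
import numpy as np

t = np.arange(120, dtype=float)               # 10 years of monthly data
seasonal = 10 * np.sin(2 * np.pi * t / 12)    # seasonal cycle, period 12
series = 0.5 * t + seasonal                   # linear trend + seasonality

# Centered 12-point moving average with half weights at the two ends:
# averaging over one full seasonal period cancels the seasonal component.
weights = np.r_[0.5, np.ones(11), 0.5] / 12
trend = np.convolve(series, weights, mode="valid")  # loses 6 points per end

err = np.max(np.abs(trend - 0.5 * t[6:-6]))
print(err)  # essentially zero: the moving average recovers the trend
```

Because the averaging window spans exactly one seasonal period, the seasonal component sums to zero inside it, leaving only the trend (plus any irregular variation, absent in this noise-free sketch).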



Frequency Distribution Table

Using descriptive statistics, we can organize the data to see its general pattern, check where data values tend to concentrate, and try to expose extreme or unusual data values.

A frequency distribution is a compact form of data in a table which displays the categories of observations according to their magnitudes and frequencies, such that similar or identical numerical values are grouped together. The categories are also known as groups, class intervals, or simply classes. The classes must be mutually exclusive, showing the number of observations in each class. The number of values falling in a particular category is called the frequency of that category, denoted by f.

A frequency distribution shows a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in each class. It is a way of organizing raw (ungrouped) data into grouped data to show results such as sales, production, income, loans, death rates, height, weight, temperature, etc.

The relative frequency of a category is the proportion of the observed frequency to the total frequency, obtained by dividing the observed frequency by the total frequency, and denoted by r.f. The sum of the r.f. column should be one, except for rounding error. Multiplying each relative frequency by 100 gives the percentage occurrence of a class. A relative frequency captures the relationship between a class total and the total number of observations.

A frequency distribution may be made for continuous data, discrete data, or categorical data (both qualitative and quantitative). It can also be used to draw graphs such as the histogram, line chart, bar chart, pie chart, frequency polygon, etc.

Steps to make a Frequency Distribution of data are:

  1. Decide on the number of classes. The number of classes is usually between 5 and 20. Too many or too few classes might not reveal the basic shape of the data set, and it would be difficult to interpret such a frequency distribution. The number of classes may be determined by the formula:
    \[\text{Number of Classes} = C = 1 + 3.3 \log (n)\]
    \[\text{or} \quad C = \sqrt{n} \quad \text{(approximately)}\]where $n$ is the total number of observations in the data.
  2. Calculate the range of the data (Range = Max – Min) by finding the minimum and maximum data values. The range will be used to determine the class interval or class width.
  3. Decide on the width of the class, denoted by h and obtained by
    \[h = \frac{\text{Range}}{\text{Number of Classes}}= \frac{R}{C} \]
    Generally the class interval or class width is the same for all classes. The classes, taken together, must cover at least the distance from the lowest (minimum) value in the data set up to the highest (maximum) value. Note also that equal class intervals are preferred in a frequency distribution, while unequal class intervals may be necessary in certain situations to avoid a large number of empty, or almost empty, classes.
  4. Decide the individual class limits and select a suitable starting point for the first class. The starting point is arbitrary; it may be less than or equal to the minimum value. Usually it is chosen before the minimum value in such a way that the midpoint (the average of the lower and upper class limits of the first class) is properly placed.
  5. Take an observation and mark a vertical bar (|) for the class it belongs to. A running tally is kept until the last observation; a diagonal stroke through four vertical bars indicates a count of five.
  6. Find the frequencies, relative frequencies, cumulative frequencies, etc., as required.
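The steps above can be sketched with NumPy on simulated data (the sample and seed are arbitrary; `np.histogram` handles the tallying of step 5 for us):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(50, 10, size=100)   # made-up sample of n = 100 values

# Step 1: number of classes, C = 1 + 3.3 log10(n)
n = len(data)
C = math.ceil(1 + 3.3 * math.log10(n))   # 8 classes for n = 100

# Steps 2-3: range and class width h = R / C
R = data.max() - data.min()
h = R / C

# Steps 5-6: frequencies, relative frequencies, cumulative frequencies
freq, edges = np.histogram(data, bins=C)
rel_freq = freq / n
cum_freq = np.cumsum(freq)

print(C, freq.sum(), round(rel_freq.sum(), 10))  # 8 100 1.0
```

Every observation falls in exactly one class (the frequencies sum to n), and the relative frequencies sum to one, as the text describes.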

A frequency distribution is said to be skewed when its mean and median are different. The kurtosis of a frequency distribution is the concentration of scores at the mean, or how peaked the distribution appears if depicted graphically, for example in a histogram. If the distribution is more peaked than the normal distribution, it is said to be leptokurtic; if less peaked, it is said to be platykurtic.
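A quick way to check skewness and peakedness numerically is via `scipy.stats` (a sketch with a small made-up symmetric data set; note that `kurtosis` returns excess kurtosis by default, so 0 corresponds to the normal distribution):

```python
from scipy.stats import skew, kurtosis

# A small symmetric data set: mean = median = 3, so skewness is zero
data = [1, 2, 2, 3, 3, 3, 4, 4, 5]

print(skew(data))      # 0.0 for a perfectly symmetric sample
print(kurtosis(data))  # negative -> flatter than normal (platykurtic)
```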

Further Reading: Frequency Distribution Table

