## Absolute Measure of Dispersion

An **absolute measure of dispersion** gives an idea of the amount of dispersion/spread in a set of observations. Such measures express the dispersion in the same units as the original data. For this reason, an absolute measure of dispersion cannot be used to compare the variation of two or more series/data sets, and it does not, by itself, tell whether the variation is large or small.

The absolute measures of dispersion are:

- Range
- Quartile Deviation
- Mean Deviation
- Variance or Standard Deviation

**Range**

The range is the difference between the largest value and the smallest value in the data set. For ungrouped data, let $X_0$ be the smallest value and $X_n$ the largest value in a data set; then the range ($R$) is defined as

$R=X_n-X_0$.

For grouped data, the range can be calculated in three different ways:

- $R$ = Midpoint of the highest class − Midpoint of the lowest class
- $R$ = Upper class limit of the highest class − Lower class limit of the lowest class
- $R$ = Upper class boundary of the highest class − Lower class boundary of the lowest class
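As a quick sketch, the range for ungrouped data and for grouped data (via midpoints or class boundaries) can be computed as follows; the data and class boundaries here are made up for illustration:

```python
import numpy as np

# Hypothetical ungrouped data
data = np.array([12, 7, 3, 18, 9, 15])
r_ungrouped = data.max() - data.min()  # largest value minus smallest value
print(r_ungrouped)  # 15

# Hypothetical grouped data: class boundaries 10-20, 20-30, 30-40
boundaries = [(10, 20), (20, 30), (30, 40)]
midpoints = [(lo + hi) / 2 for lo, hi in boundaries]

r_midpoints = midpoints[-1] - midpoints[0]           # midpoint method
r_boundaries = boundaries[-1][1] - boundaries[0][0]  # boundary method
print(r_midpoints, r_boundaries)  # 20.0 30
```

The midpoint and boundary methods generally give different values, as seen here, since midpoints sit half a class width inside the boundaries at each end.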

**Quartile Deviation (Semi-Interquartile Range)**

The interquartile range (an absolute measure of dispersion) is the difference between the third and first quartiles; half of this range is called the semi-interquartile range (SIQR) or simply the quartile deviation (QD). $$QD=\frac{Q_3-Q_1}{2}$$

The quartile deviation is superior to the range because it is not affected by extremely large or small observations; however, it gives no information about the positions of observations lying outside the two quartiles. It is not amenable to mathematical treatment and is greatly affected by sampling variability. Although the quartile deviation is not widely used as a measure of dispersion, it is useful in situations in which extreme observations are thought to be unrepresentative or misleading. Because it is based only on the middle 50% of the observations, it ignores the information in the rest of the data.

Note: The range “Median ± QD” contains approximately 50% of the data.
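A minimal sketch in Python with made-up data, using NumPy's default quartile convention (quartile definitions vary between textbooks and software, so hand-computed values may differ slightly). It also illustrates the robustness to extremes noted above:

```python
import numpy as np

# Hypothetical data with one extreme value (45)
data = np.array([5, 7, 8, 9, 10, 12, 13, 14, 18, 45])

q1, q3 = np.percentile(data, [25, 75])  # first and third quartiles
qd = (q3 - q1) / 2                      # quartile deviation (SIQR)
print(qd)  # 2.75

# Replacing the extreme value barely changes QD, unlike the range
data2 = np.array([5, 7, 8, 9, 10, 12, 13, 14, 18, 19])
q1b, q3b = np.percentile(data2, [25, 75])
print((q3b - q1b) / 2)  # 2.75
```

The range drops from 40 to 14 between the two data sets, while the quartile deviation is unchanged.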

**Mean Deviation (Average Deviation)**

The Mean Deviation is another absolute measure of dispersion and is defined as the arithmetic mean of the deviations measured either from the mean or from the median. All these deviations are counted as positive to avoid the difficulty arising from the property that the sum of deviations of observations from their mean is zero.

$MD=\frac{\sum|X-\overline{X}|}{n}\quad$ about the mean, for ungrouped data

$MD=\frac{\sum f|X-\overline{X}|}{\sum f}\quad$ about the mean, for grouped data

$MD=\frac{\sum|X-\tilde{X}|}{n}\quad$ about the median, for ungrouped data

$MD=\frac{\sum f|X-\tilde{X}|}{\sum f}\quad$ about the median, for grouped data

The mean deviation can be calculated about other measures of central tendency, but it is smallest when the deviations are taken from the median.

The mean deviation gives more information than the range or the quartile deviation because it is based on all the observed values. It does not give undue weight to occasional large deviations, so it is preferred in situations where such deviations are likely to occur.
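The formulas above can be sketched for ungrouped data as follows (the data are hypothetical); note that the mean deviation about the median comes out no larger than the one about the mean, as stated above:

```python
import numpy as np

data = np.array([1, 2, 4, 10])  # hypothetical ungrouped data

mean = data.mean()        # 4.25
median = np.median(data)  # 3.0

md_mean = np.abs(data - mean).mean()      # MD about the mean
md_median = np.abs(data - median).mean()  # MD about the median

print(md_mean, md_median)  # 2.875 2.75
```

For grouped data the same pattern applies with the class frequencies `f` as weights, e.g. `np.average(np.abs(mid - mean), weights=f)` for class midpoints `mid`.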

**Variance and Standard Deviation**

This **absolute measure of dispersion** is defined as the mean of the squared deviations of all the observations from their mean. Traditionally, the population variance is denoted by $\sigma^2$ (sigma squared) and the sample variance by $S^2$ or $s^2$.

Symbolically

$\sigma^2=\frac{\sum(X_i-\mu)^2}{N}\quad$ Population variance for ungrouped data

$S^2=\frac{\sum(X_i-\overline{X})^2}{n}\quad$ Sample variance for ungrouped data

$\sigma^2=\frac{\sum f(X_i-\mu)^2}{\sum f}\quad$ Population variance for grouped data

$S^2=\frac{\sum f (X_i-\overline{X})^2}{\sum f}\quad$ Sample variance for grouped data

The variance of a random variable $X$ is denoted by $Var(X)$. The term variance was introduced by R. A. Fisher (1890–1962) in 1918. The variance is expressed in squared units, so it is a large number compared to the observations themselves. **Note** that there are alternative formulas to compute the variance or standard deviation.

The positive square root of the variance is called Standard Deviation (SD) to express the deviation in the same units as the original observation. It is a measure of the average spread about the mean and is symbolically defined as

$\sigma=\sqrt{\frac{\sum(X_i-\mu)^2}{N}}\quad$ Population standard deviation for ungrouped data

$S=\sqrt{\frac{\sum(X_i-\overline{X})^2}{n}}\quad$ Sample standard deviation for ungrouped data

$\sigma=\sqrt{\frac{\sum f(X_i-\mu)^2}{\sum f}}\quad$ Population standard deviation for grouped data

$S=\sqrt{\frac{\sum f (X_i-\overline{X})^2}{\sum f}}\quad$ Sample standard deviation for grouped data

The standard deviation is the most useful measure of dispersion; the name "standard deviation" is credited to Karl Pearson (1857–1936).

In some texts, the sample variance is defined as $S^2=\frac{\sum (X_i-\overline{X})^2}{n-1}$, based on the argument that knowledge of any $n-1$ deviations determines the remaining deviation, since the sum of the $n$ deviations must be zero. With the divisor $n-1$, $S^2$ is an unbiased estimator of the population variance $\sigma^2$. The standard deviation has a definite mathematical formulation; it utilizes all the observed values and is amenable to mathematical treatment, but it is affected by extreme values.
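The choice between the $n$ and $n-1$ divisors corresponds directly to NumPy's `ddof` argument. A minimal sketch with made-up data:

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9])  # hypothetical data, mean = 5

pop_var = data.var()         # divides by N (ddof=0, NumPy's default)
pop_sd = data.std()          # population standard deviation
samp_var = data.var(ddof=1)  # divides by n-1: unbiased estimator of sigma^2
samp_sd = data.std(ddof=1)

print(pop_var, pop_sd)     # 4.0 2.0
print(round(samp_var, 4))  # 4.5714
```

The $n-1$ version is always slightly larger; the difference shrinks as the sample size grows.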
