Basic Statistics and Data Analysis

Lecture notes, MCQS of Statistics

Category: Measure of Dispersion

Skewness: Measure of Asymmetry

The words skewed and askew are widely used to describe something that is out of order or distorted on one side. Similarly, when referring to the shape of a frequency distribution or probability distribution, the term skewness refers to the asymmetry of that distribution. A distribution with an asymmetric tail extending out to the right is referred to as “positively skewed” or “skewed to the right”, while a distribution with an asymmetric tail extending out to the left is referred to as “negatively skewed” or “skewed to the left”. Skewness can range from minus infinity ($-\infty$) to positive infinity ($+\infty$). In simple words, skewness (asymmetry) is a measure of the lack of symmetry.

Karl Pearson (1857-1936) first suggested measuring skewness by standardizing the difference between the mean and the mode: $skewness=\frac{\mu-\text{mode}}{\text{standard deviation}}$. Since population modes are not well estimated from sample modes, Stuart and Ord (1994) suggested estimating the difference between the mean and the mode as three times the difference between the mean and the median. The estimate of skewness then becomes $skewness=\frac{3(\text{mean}-\text{median})}{\text{standard deviation}}$. Many statisticians use this measure after dropping the ‘3’, that is, $skewness=\frac{\text{mean}-\text{median}}{\text{standard deviation}}$. This statistic ranges from $-1$ to $+1$. According to Hildebrand (1986), absolute values of skewness above 0.2 indicate great skewness.
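As a rough illustration, the following Python sketch (assuming NumPy is available; the data values and the function name are made up for the example) computes the median-based version of Pearson's coefficient:

```python
import numpy as np

def pearson_median_skewness(x):
    """Pearson's median-based coefficient: (mean - median) / standard deviation."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - np.median(x)) / x.std(ddof=1)

# Hypothetical sample with a longer right tail
data = [2, 3, 3, 4, 4, 4, 5, 9, 12]
print(pearson_median_skewness(data))  # positive value, i.e. skewed to the right
```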

Skewness has also been defined with respect to the third moment about the mean, that is $\gamma_1=\frac{\sum(X-\mu)^3}{n\sigma^3}$, which is simply the expected value of the distribution of cubed $Z$ scores. Skewness measured in this way is also sometimes referred to as “Fisher’s skewness”. When the deviations from the mean are greater in one direction than in the other direction, this statistic will deviate from zero in the direction of the larger deviations. From sample data, Fisher’s skewness is most often estimated by: $g_1=\frac{n\sum z^3}{(n-1)(n-2)}$. For large sample sizes ($n > 150$), $g_1$ may be distributed approximately normally, with a standard error of approximately $\sqrt{\frac{6}{n}}$. While one could use this sampling distribution to construct confidence intervals for or tests of hypotheses about $\gamma_1$, there is rarely any value in doing so.
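A minimal sketch of this estimate, assuming NumPy and an illustrative right-skewed sample drawn from an exponential distribution (both assumptions, not part of the original text):

```python
import numpy as np

def fisher_skewness(x):
    """Estimate Fisher's skewness g1 = n * sum(z^3) / ((n - 1)(n - 2)),
    where z are the standardized scores based on the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    g1 = n * np.sum(z**3) / ((n - 1) * (n - 2))
    se = np.sqrt(6 / n)  # large-sample approximation of the standard error
    return g1, se

data = np.random.default_rng(1).exponential(scale=2.0, size=200)  # right-skewed sample
g1, se = fisher_skewness(data)
print(round(g1, 3), round(se, 3))
```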

Arthur Lyon Bowley (1869-1957) also proposed a measure of skewness based on the median and the two quartiles. In a symmetrical distribution, the two quartiles are equidistant from the median, but in an asymmetrical distribution this will not be the case. Bowley’s coefficient of skewness is $skewness=\frac{Q_1+Q_3-2\,\text{median}}{Q_3-Q_1}$. Its value lies between $-1$ and $+1$.
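For example, a short Python sketch of Bowley's coefficient (the quartiles are obtained with NumPy's percentile function; the data are hypothetical):

```python
import numpy as np

def bowley_skewness(x):
    """Bowley's quartile-based coefficient: (Q1 + Q3 - 2*median) / (Q3 - Q1)."""
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + q3 - 2 * q2) / (q3 - q1)

data = [5, 6, 7, 8, 15, 25, 40]   # hypothetical right-skewed values
print(bowley_skewness(data))       # lies between -1 and +1, here positive
```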

The most commonly used measures of skewness (those discussed here) may produce some surprising results, such as a negative value when the shape of the distribution appears skewed to the right.

It is important for researchers in the behavioral and business sciences to measure skewness when it appears in their data. A great amount of skewness may motivate the researcher to investigate the existence of outliers. When deciding which measure of location to report and which inferential statistic to employ, one should take the estimated skewness of the population into consideration. Normal distributions have zero skewness. Of course, a distribution can be perfectly symmetric yet still be far from normal. Transformations of the variable under study are commonly employed to reduce (positive) skewness; these include the square root, the logarithm, and the reciprocal of the variable, as sketched below.
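A minimal sketch of these transformations, assuming NumPy and a hypothetical positive, right-skewed variable (the log and reciprocal require strictly positive values):

```python
import numpy as np

# Hypothetical right-skewed, strictly positive variable
x = np.random.default_rng(7).lognormal(mean=0.0, sigma=1.0, size=500)

sqrt_x = np.sqrt(x)    # mild reduction of positive skewness
log_x = np.log(x)      # stronger reduction (requires x > 0)
recip_x = 1.0 / x      # strongest, but reverses the ordering of the values
```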

For more about skewness, see Skewness

Standard Error of Estimate

Standard error (SE) is a statistical term used to measure the accuracy of estimates obtained from a sample taken from the population of interest. The standard error of the mean measures the variation in the sampling distribution of the sample mean. It is usually denoted by $\sigma_\overline{x}$ and is calculated as

\[\sigma_\overline{x}=\frac{\sigma}{\sqrt{n}}\]

Drawing (obtaining) different samples from the same population of interest usually results in different values of the sample mean, indicating that there is a distribution of sample means having its own mean and variance. The standard error of the mean is the standard deviation of the sample means over all possible samples drawn from the same population.

The size of the standard error is affected by the standard deviation of the population and by the number of observations in a sample, called the sample size. The larger the standard deviation of the population ($\sigma$), the larger the standard error will be, indicating more variability in the sample means. However, the larger the number of observations in a sample, the smaller the standard error will be, indicating less variability in the sample means; by less variability we mean that the sample is more representative of the population of interest.

If the sampled population is not very large, we need to make an adjustment in computing the SE of the sample mean. For a finite population in which the total number of objects (observations) is $N$ and the number of objects (observations) in a sample is $n$, the adjustment factor is $\sqrt{\frac{N-n}{N-1}}$. This adjustment is called the finite population correction factor. The adjusted standard error is then

\[\frac{\sigma}{\sqrt{n}} \sqrt{\frac{N-n}{N-1}}\]
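A small Python sketch of the adjusted standard error, using hypothetical values of $\sigma$, $n$, and $N$:

```python
import math

def se_mean_fpc(sigma, n, N):
    """Standard error of the sample mean with the finite population correction."""
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

# Hypothetical values: population SD 12, sample of 40 from a population of 500
print(se_mean_fpc(sigma=12, n=40, N=500))
```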

The SE is used to:

  1. measure the spread of the values of a statistic about the expected value of that statistic
  2. construct confidence intervals
  3. test the null hypothesis about population parameter(s)

The standard error is computed from sample statistics. The formulas below apply to simple random samples, assuming that the size of the population ($N$) is at least 20 times larger than the sample size ($n$).
\begin{align*}
\text{Sample mean, } \overline{x} & \Rightarrow SE_{\overline{x}} = \frac{s}{\sqrt{n}}\\
\text{Sample proportion, } p &\Rightarrow SE_{p} = \sqrt{\frac{p(1-p)}{n}}\\
\text{Difference between means, } \overline{x}_1 - \overline{x}_2 &\Rightarrow SE_{\overline{x}_1-\overline{x}_2}=\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}\\
\text{Difference between proportions, } p_1-p_2 &\Rightarrow SE_{p_1-p_2}=\sqrt{\frac{p_1(1-p_1)}{n_1}+\frac{p_2(1-p_2)}{n_2}}
\end{align*}
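The formulas above translate directly into code; the following Python sketch (the function names and example inputs are illustrative only) implements each one:

```python
import math

def se_mean(s, n):
    """SE of a sample mean, using the sample standard deviation s."""
    return s / math.sqrt(n)

def se_proportion(p, n):
    """SE of a sample proportion p."""
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(s1, n1, s2, n2):
    """SE of the difference between two independent sample means."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """SE of the difference between two independent sample proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical inputs
print(se_mean(s=4.2, n=50))
print(se_diff_proportions(p1=0.30, n1=200, p2=0.25, n2=180))
```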

The formula for the standard error parallels that for the standard deviation, except that it is computed from sample statistics rather than from population parameters.

 

For more about SE follow the link Standard Error of Estimate

 

Sum of Squares


In statistics, the sum of squares is a measure of the total variability (spread, variation) within a data set. In other words, the sum of squares measures the deviation or variation from the mean value of the given data set. The sum of squares is calculated by first computing the difference between each data point (observation) and the mean of the data set, i.e. $x=X-\overline{X}$. The computed $x$ is the deviation score. Squaring each of these deviation scores and then adding them gives the sum of squares (SS), represented mathematically as

\[SS=\sum(x^2)=\sum(X-\overline{X})^2\]

Note that the small letter $x$ usually represents the deviation of each observation from the mean value, while the capital letter $X$ represents the variable of interest in statistics.

Sum of Squares Example

Consider the following data set {5, 6, 7, 10, 12}. To compute the sum of squares of this data set, follow these steps (a short code sketch follows the list):

  • Calculate the average of the given data by summing all the values in the data set and then dividing this sum by the total number of observations in the data set. Mathematically, it is $\frac{\sum X_i}{n}=\frac{40}{5}=8$, where 40 is the sum of all the numbers ($5+6+7+10+12$) and there are 5 observations.
  • Calculate the difference of each observation in the data set from the average computed in step 1. The differences are
    5 – 8 = –3; 6 – 8 = –2; 7 – 8 = –1; 10 – 8 = 2 and 12 – 8 = 4
    Note that the sum of these differences should be zero. (–3 + –2 + –1 + 2 + 4 = 0)
  • Now square each of the differences obtained in step 2. The squares of these differences are
    9, 4, 1, 4 and 16
  • Now add the squared numbers obtained in step 3. The sum of these squared quantities is 9 + 4 + 1 + 4 + 16 = 34, which is the sum of squares of the given data set.
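The steps above can be reproduced with a few lines of Python (a sketch for this particular data set only):

```python
# Sum of squares for the data set {5, 6, 7, 10, 12}
data = [5, 6, 7, 10, 12]
mean = sum(data) / len(data)              # step 1: 40 / 5 = 8
deviations = [x - mean for x in data]     # step 2: -3, -2, -1, 2, 4 (sum to zero)
squared = [d ** 2 for d in deviations]    # step 3: 9, 4, 1, 4, 16
ss = sum(squared)                         # step 4: sum of squares = 34
print(ss)
```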

In statistics, sum of squares occurs in different contexts such as

  • Partitioning of Variance (Partition of Sums of Squares)
  • Sum of Squared Deviations (Least Squares)
  • Sum of Squared Differences (Mean Squared Error)
  • Sum of Squared Error (Residual Sum of Squares)
  • Sum of Squares due to Lack of Fit (Lack of Fit Sum of Squares)
  • Sum of Squares for Model Predictions (Explained Sum of Squares)
  • Sum of Squares for Observations (Total Sum of Squares)
  • Sum of Squared Deviation (Squared Deviations)
  • Modeling involving Sum of Squares (Analysis of Variance)
  • Multivariate Generalization of Sum of Square (Multivariate Analysis of Variance)

As previously discussed, the Sum of Squares is a measure of the total variability of a set of scores around a specific value.

 

Range: An Absolute Measure of Dispersion

A measure of central tendency provides a typical value of the data set, but it does not tell the whole story about the data; that is, the mean, median, and mode are not enough to summarize the data, even though they tell us about its center. In other words, we can measure the center of the data by looking at averages (mean, median, mode), but these measures tell nothing about the spread of the data. For more information about the data we need some other measure, such as a measure of dispersion or spread.

The spread of data can be measured by calculating the range of the data; the range tells us over how many values the data extend. The range (an absolute measure of dispersion) is found by subtracting the smallest value (called the lower bound) in the data from the largest value (called the upper bound), i.e.

Range = Upper Bound – Lower Bound
OR
Range = Largest Value – Smallest Value

This absolute measure of dispersion has disadvantages: the range only describes the width of the data set (i.e. how spread out it is), measured in the same units as the data, but it does not give the real picture of how the data are distributed. If the data contain outliers, using the range to describe the spread can be very misleading, as the range is sensitive to outliers. We therefore need to be careful in using the range, because it does not give the full picture of what goes on between the highest and lowest values. It may give a misleading picture of the spread of the data because it is based only on the two extreme values, and for this reason it is often an unsatisfactory measure of dispersion.

However, the range is widely used in applications such as statistical process control (for example, control charts of manufactured products, daily temperatures, stock prices, etc.), as it is very easy to calculate. It is an absolute measure of dispersion; its relative measure, known as the coefficient of dispersion, is defined by the relation

\[Coefficient\,\, of\,\, Dispersion = \frac{x_m-x_0}{x_m+x_0}\]

The coefficient of dispersion is a pure (dimensionless) number and is used for comparison purposes.
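As a quick illustration, a Python sketch of the range and the coefficient of dispersion for a small, hypothetical data set:

```python
def data_range(x):
    """Range: largest value minus smallest value."""
    return max(x) - min(x)

def coefficient_of_dispersion(x):
    """Relative measure based on the range: (x_m - x_0) / (x_m + x_0)."""
    return (max(x) - min(x)) / (max(x) + min(x))

data = [5, 6, 7, 10, 12]                # hypothetical data
print(data_range(data))                 # 7
print(coefficient_of_dispersion(data))  # 7 / 17, roughly 0.41
```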


 

Absolute Measure of Dispersion

An Absolute Measure of Dispersion gives an idea about the amount of dispersion (spread) in a set of observations. These quantities measure the dispersion in the same units as the original data. Absolute measures cannot be used to compare the variation of two or more series or data sets. A measure of absolute dispersion does not, in itself, tell whether the variation is large or small.

Commonly used Absolute Measures of Dispersion are:

  1. Range
  2. Quartile Deviation
  3. Mean Deviation
  4. Variance or Standard Deviation

The details about these Absolute Measures of Dispersion (spread) are:

Range

The Range is the difference between the largest value and the smallest value in the data set. For ungrouped data, let $X_0$ be the smallest value and $X_n$ the largest value in a data set; then the range (R) is defined as
$R=X_n-X_0$.

For grouped data, the Range can be calculated in three different ways:
R = Mid point of highest class – Mid point of lowest class
R = Upper class limit of highest class – Lower class limit of lowest class
R = Upper class boundary of highest class – Lower class boundary of lowest class

Quartile Deviation (Semi-Interquartile Range)

The interquartile range is defined as the difference between the third and first quartiles; half of this range is called the semi-interquartile range (SIQR) or simply the quartile deviation (QD): $QD=\frac{Q_3-Q_1}{2}$
The Quartile Deviation is superior to the range as it is not affected by extremely large or small observations; however, it does not give any information about the position of observations lying outside the two quartiles. It is not amenable to mathematical treatment and is greatly affected by sampling variability. Although the Quartile Deviation is not widely used as a measure of dispersion, it is used in situations in which extreme observations are thought to be unrepresentative or misleading. Since the Quartile Deviation is not based on all the observations, it ignores the information contained in the extreme values.

Note: The range “Median ± QD” contains approximately 50% of the data.
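A brief Python sketch of the quartile deviation and the interval Median ± QD, with quartiles computed by NumPy's percentile function on hypothetical ungrouped data:

```python
import numpy as np

def quartile_deviation(x):
    """Semi-interquartile range: (Q3 - Q1) / 2."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / 2

data = [5, 6, 7, 10, 12, 15, 21]   # hypothetical ungrouped data
qd = quartile_deviation(data)
med = np.median(data)
print(qd, (med - qd, med + qd))    # interval covering roughly the middle 50% of the data
```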

Mean Deviation (Average Deviation)

The Mean Deviation is defined as the arithmetic mean of the deviations measured either from the mean or from the median. All deviations are counted as positive to avoid the difficulty arising from the fact that the sum of the deviations of observations from their mean is zero.
$MD=\frac{\sum|X-\overline{X}|}{n}\quad$ for ungrouped data, about the mean
$MD=\frac{\sum f|X-\overline{X}|}{\sum f}\quad$ for grouped data, about the mean
$MD=\frac{\sum|X-\tilde{X}|}{n}\quad$ for ungrouped data, about the median
$MD=\frac{\sum f|X-\tilde{X}|}{\sum f}\quad$ for grouped data, about the median
The Mean Deviation can be calculated about other measures of central tendency, but it is smallest when the deviations are taken from the median.
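A minimal sketch of the mean deviation for ungrouped data, about either the mean or the median (NumPy assumed; the data are hypothetical):

```python
import numpy as np

def mean_deviation(x, about="mean"):
    """Mean (average) deviation of ungrouped data about the mean or the median."""
    x = np.asarray(x, dtype=float)
    center = x.mean() if about == "mean" else np.median(x)
    return np.mean(np.abs(x - center))

data = [5, 6, 7, 10, 12]               # hypothetical data
print(mean_deviation(data, "mean"))    # 2.4
print(mean_deviation(data, "median"))  # 2.2, smaller than the deviation about the mean
```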

The Mean Deviation gives more information than the Range or the Quartile Deviation, as it is based on all the observed values. The Mean Deviation does not give undue weight to occasional large deviations, so it is likely to be of use in situations where such deviations are likely to occur.

Variance and Standard Deviation

This absolute measure of dispersion is defined as the mean of the squares of the deviations of all the observations from their mean. Traditionally, the population variance is denoted by $\sigma^2$ (sigma squared) and the sample variance by $S^2$ or $s^2$.
Symbolically
$\sigma^2=\frac{\sum(X_i-\mu)^2}{N}\quad$ Population Variance for ungrouped data
$S^2=\frac{\sum(X_i-\overline{X})^2}{n}\quad$ Sample Variance for ungrouped data
$\sigma^2=\frac{\sum f(X_i-\mu)^2}{\sum f}\quad$ Population Variance for grouped data
$S^2=\frac{\sum f (X_i-\overline{X})^2}{\sum f}\quad$ Sample Variance for grouped data

For a random variable $X$, the variance is denoted by Var(X). The term variance was introduced by R. A. Fisher (1890-1962) in 1918. The variance is expressed in squared units, so it is a large number compared with the observations themselves.
Note that there are alternative formulas to compute Variance or Standard Deviations.

The positive square root of the variance is called the Standard Deviation (SD); it expresses the deviation in the same units as the original observations themselves. It is a measure of the average spread about the mean and is symbolically defined as
$\sigma=\sqrt{\frac{\sum(X_i-\mu)^2}{N}}\quad$ Population Standard Deviation for ungrouped data
$S=\sqrt{\frac{\sum(X_i-\overline{X})^2}{n}}\quad$ Sample Standard Deviation for ungrouped data
$\sigma=\sqrt{\frac{\sum f(X_i-\mu)^2}{\sum f}}\quad$ Population Standard Deviation for grouped data
$S=\sqrt{\frac{\sum f (X_i-\overline{X})^2}{\sum f}}\quad$ Sample Standard Deviation for grouped data
The Standard Deviation, the most useful measure of dispersion, owes its name to Karl Pearson (1857-1936).
In some texts the sample variance is defined as $S^2=\frac{\sum (X_i-\overline{X})^2}{n-1}$, on the basis of the argument that knowledge of any $n-1$ deviations determines the remaining deviation, since the sum of the $n$ deviations must be zero. In fact, this is an unbiased estimator of the population variance $\sigma^2$. The Standard Deviation has a definite mathematical meaning, utilizes all the observed values, and is amenable to mathematical treatment, but it is affected by extreme values.
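The divisor $N$ versus $n-1$ distinction maps directly onto the ddof argument of NumPy's variance routine; a short sketch with hypothetical data:

```python
import numpy as np

data = np.array([5, 6, 7, 10, 12], dtype=float)   # hypothetical ungrouped data

pop_var = data.var(ddof=0)       # divides by N: population-style variance
sample_var = data.var(ddof=1)    # divides by n - 1: unbiased estimator of sigma^2
pop_sd = np.sqrt(pop_var)        # population standard deviation
sample_sd = np.sqrt(sample_var)  # sample standard deviation

print(pop_var, sample_var)       # 6.8 and 8.5
print(pop_sd, sample_sd)
```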



 
