Category: Measure of Dispersion

Variance: A Measure of Dispersion

Variance is a measure of the dispersion of the distribution of a random variable. The term variance was introduced by R. A. Fisher in 1918. The variance of a set of observations (data set) is defined as the mean of the squares of the deviations of all the observations from their mean. When it is computed for the entire population, it is called the population variance and is usually denoted by $\sigma^2$; for sample data, it is called the sample variance and is denoted by $S^2$ to distinguish it from the population variance. Variance is also denoted by $Var(X)$ when we speak about the variance of a random variable. The symbolic definitions of the population and sample variance are

$\sigma^2=\frac{\sum (X_i - \mu)^2}{N}; \quad \text{for population data}$

$S^2=\frac{\sum (X_i - \overline{X})^2}{n-1}; \quad \text{for sample data}$

It should be noted that the variance is expressed in the square of the units in which the observations are measured, and its numerical value is therefore often large compared to the observations themselves. Because of its convenient mathematical properties, the variance plays an extremely important role in statistical theory.

Variance can be computed from the standard deviation, since the variance is the square of the standard deviation, i.e. Variance = (Standard Deviation)$^2$.

Variance can be used to compare the dispersion in two or more sets of observations. Variance can never be negative, since every term in its sum is a squared quantity, which is either positive or zero.
To calculate the standard deviation one has to follow these steps (a short Python sketch of these steps follows the list):

  1. First find the mean of the data.
  2. Take the difference of each observation from the mean of the given data set. The sum of these differences should be zero, or near zero due to the rounding of numbers.
  3. Square the values obtained in step 2; each squared value is greater than or equal to zero, i.e. a non-negative quantity.
  4. Sum all the squared quantities obtained in step 3. We call it the sum of squares of differences.
  5. Divide this sum of squares of differences by the total number of observations if we have to calculate the population standard deviation ($\sigma$). For the sample standard deviation ($S$), divide the sum of squares of differences by the total number of observations minus one, i.e. the degrees of freedom.
  6. Find the square root of the quantity obtained in step 5. The resultant quantity will be the standard deviation for the given data set.
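To make the steps concrete, here is a minimal Python sketch, assuming a small hypothetical data set; it follows the numbered steps and ends with both the variance and the standard deviation.

```python
# A minimal sketch of the steps above (the data set is hypothetical).
data = [4, 8, 6, 5, 3, 7]

n = len(data)
mean = sum(data) / n                    # step 1: mean of the data
deviations = [x - mean for x in data]   # step 2: deviations from the mean (sum to ~0)
squared = [d ** 2 for d in deviations]  # step 3: squared deviations (non-negative)
ss = sum(squared)                       # step 4: sum of squares of differences

pop_variance = ss / n                   # step 5: divide by N for the population
sample_variance = ss / (n - 1)          #         or by n - 1 for the sample

pop_sd = pop_variance ** 0.5            # step 6: the square root gives the standard deviation
sample_sd = sample_variance ** 0.5

print(pop_variance, sample_variance, pop_sd, sample_sd)
```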

The major characteristics of the variance are:
a)    All of the observations are used in the calculation.
b)    Since the deviations are squared, the variance can be strongly influenced by extreme observations.
c)    The variance is not in the same units as the observations; it is expressed in the square of the units in which the observations are measured.

Read more about Measure of Dispersion

Standard Deviation: A Measure of Dispersion

The standard deviation is a widely used concept in statistics; it tells how much variation (spread or dispersion) there is in the data set. It can be defined as the positive square root of the mean (average) of the squared deviations of the values from their mean.
To calculate the standard deviation one has to follow these steps:

  1. First, find the mean of the data.
  2. Take the difference of each data point from the mean of the given data set (computed in step 1). Note that the sum of these differences should be zero, or near zero due to the rounding of numbers.
  3. Now square the differences obtained in step 2. Each squared difference will be greater than or equal to zero, that is, a non-negative quantity.
  4. Now add up all the squared quantities obtained in step 3. We call it the sum of squares of differences.
  5. Divide this sum of squares of differences (obtained in step 4) by the total number of observations in the data if you have to calculate the population standard deviation ($\sigma$). If you want to compute the sample standard deviation ($S$), then divide the sum of squares of differences (obtained in step 4) by the total number of observations minus one ($n-1$), i.e. the degrees of freedom. Note that $n$ is the number of observations available in the data set.
  6. Find the square root of the quantity obtained in step 5. The resultant quantity is known as the standard deviation for the given data set.

For a set of observations $X_1, X_2, \cdots$, the population standard deviation ($\sigma$, based on all $N$ population values) and the sample standard deviation ($S$, based on $n$ sampled values) are
\begin{aligned}
\sigma &=\sqrt{\frac{\sum_{i=1}^N (X_i-\mu)^2}{N}}; \quad \text{Population Standard Deviation}\\
S&=\sqrt{\frac{\sum_{i=1}^n (X_i-\overline{X})^2}{n-1}}; \quad \text{Sample Standard Deviation}
\end{aligned}
The standard deviation can also be computed from the variance, since $S=\sqrt{\text{Variance}}$.
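As a quick check on these formulas, the following sketch (with a hypothetical data set) computes both standard deviations by hand and compares them with Python's `statistics.pstdev` and `statistics.stdev`.

```python
import statistics

data = [4, 8, 6, 5, 3, 7]   # hypothetical observations
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)   # sum of squared deviations

pop_sd = (ss / n) ** 0.5           # population standard deviation (divide by N)
sample_sd = (ss / (n - 1)) ** 0.5  # sample standard deviation (divide by n - 1)

# Cross-check against the standard library
assert abs(pop_sd - statistics.pstdev(data)) < 1e-9
assert abs(sample_sd - statistics.stdev(data)) < 1e-9
print(pop_sd, sample_sd)
```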

A practical interpretation of the standard deviation is that, for data following an approximately normal (bell-shaped) distribution, about 68% of the data values will lie within the range $\overline{X} \pm \sigma$, i.e. within one standard deviation of the mean, or simply within one $\sigma$. Similarly, about 95% of the data values will lie within the range $\overline{X} \pm 2 \sigma$ and about 99.7% within $\overline{X} \pm 3 \sigma$.

Examples of Standard Deviation and Variance

A large value of the standard deviation indicates more spread in the data set, which can be interpreted as inconsistent behaviour of the data collected. It means that the data points tend to be far away from the mean value. For a smaller standard deviation, the data points tend to be close (very close) to the mean, indicating consistent behaviour of the data set.
The standard deviation and the variance are both used to measure the risk of a particular investment in finance. A mean of 15% and a standard deviation of 2% indicate that the investment is expected to earn a 15% return, and there is about a 68% chance that the return will actually be between 13% and 17%. Similarly, there is about a 95% chance that the investment will yield a return between 11% and 19%.
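The interval arithmetic behind this example is straightforward; the short sketch below reproduces it, taking the 15% mean and 2% standard deviation as given.

```python
# Interval arithmetic for the investment example above (figures taken from the text).
mean_return = 15.0   # expected return, in percent
sd_return = 2.0      # standard deviation, in percent

one_sigma = (mean_return - sd_return, mean_return + sd_return)          # ~68% of returns
two_sigma = (mean_return - 2 * sd_return, mean_return + 2 * sd_return)  # ~95% of returns

print(one_sigma)  # (13.0, 17.0)
print(two_sigma)  # (11.0, 19.0)
```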

Skewness: Measure of Asymmetry

The words skewed and askew are widely used terms that refer to something that is out of order or distorted on one side. Similarly, when referring to the shape of frequency distributions or probability distributions, the term skewness refers to the asymmetry of the distribution. A distribution with an asymmetric tail extending out to the right is referred to as “positively skewed” or “skewed to the right”, while a distribution with an asymmetric tail extending out to the left is referred to as “negatively skewed” or “skewed to the left”. Skewness can range from minus infinity ($-\infty$) to positive infinity ($+\infty$). In simple words, skewness (asymmetry) is a lack of symmetry.

Karl Pearson (1857-1936) first suggested measuring skewness by standardizing the difference between the mean and the mode, that is, $\frac{\mu-\text{mode}}{\text{standard deviation}}$. Since population modes are not well estimated from sample modes, Stuart and Ord (1994) suggested estimating the difference between the mean and the mode as three times the difference between the mean and the median. The estimate of skewness then becomes $\frac{3(\text{mean}-\text{median})}{\text{standard deviation}}$. Many statisticians use this measure after eliminating the ‘3’, that is, $\frac{\text{mean}-\text{median}}{\text{standard deviation}}$, a statistic that ranges from $-1$ to $+1$. According to Hildebrand (1986), absolute values of skewness above 0.2 indicate great skewness.
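A small sketch of this median-based coefficient, using a hypothetical data set and Python's statistics module, might look as follows; both the version with and without the factor of 3 are shown.

```python
import statistics

def pearson_median_skewness(data, with_factor_3=True):
    """Pearson's median-based skewness: 3 * (mean - median) / sd,
    or (mean - median) / sd if the factor of 3 is dropped."""
    mean = statistics.mean(data)
    median = statistics.median(data)
    sd = statistics.stdev(data)
    factor = 3 if with_factor_3 else 1
    return factor * (mean - median) / sd

# Hypothetical right-skewed data: the mean is pulled above the median
data = [1, 2, 2, 3, 3, 4, 5, 9, 15]
print(pearson_median_skewness(data))         # positive => skewed to the right
print(pearson_median_skewness(data, False))  # version bounded between -1 and +1
```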

Skewness has also been defined with respect to the third moment about the mean, that is $\gamma_1=\frac{\sum(X-\mu)^3}{n\sigma^3}$, which is simply the expected value of the distribution of cubed $Z$ scores. Skewness measured in this way is also sometimes referred to as “Fisher’s skewness”. When the deviations from the mean are greater in one direction than in the other direction, this statistic will deviate from zero in the direction of the larger deviations. From sample data, Fisher’s skewness is most often estimated by: $g_1=\frac{n\sum z^3}{(n-1)(n-2)}$. For large sample sizes ($n > 150$), $g_1$ may be distributed approximately normally, with a standard error of approximately $\sqrt{\frac{6}{n}}$. While one could use this sampling distribution to construct confidence intervals for or tests of hypotheses about $\gamma_1$, there is rarely any value in doing so.
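A possible implementation of this sample estimate of Fisher's skewness, together with the approximate standard error $\sqrt{6/n}$, is sketched below; the data are hypothetical.

```python
import math
import statistics

def fisher_skewness(data):
    """Sample estimate of Fisher's skewness: g1 = n * sum(z^3) / ((n-1)(n-2)),
    where z are standardized scores using the sample standard deviation."""
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # sample standard deviation (n - 1 in the denominator)
    z_cubed = sum(((x - mean) / sd) ** 3 for x in data)
    return n * z_cubed / ((n - 1) * (n - 2))

data = [1, 2, 2, 3, 3, 4, 5, 9, 15]   # hypothetical right-skewed data
g1 = fisher_skewness(data)
se_g1 = math.sqrt(6 / len(data))      # approximate standard error (large samples)
print(g1, se_g1)
```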

Arthur Lyon Bowley (1869-1957) also proposed a measure of skewness based on the median and the two quartiles. In a symmetrical distribution, the two quartiles are equidistant from the median, but in an asymmetrical distribution this will not be the case. Bowley's coefficient of skewness is $\frac{Q_1+Q_3-2\,\text{Median}}{Q_3-Q_1}$. Its value lies between $-1$ and $+1$.
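Bowley's coefficient can be sketched in a few lines; note that quartile definitions vary between packages, so the result below (which relies on statistics.quantiles with its default exclusive method) is only illustrative.

```python
import statistics

def bowley_skewness(data):
    """Bowley's coefficient: (Q1 + Q3 - 2 * Median) / (Q3 - Q1).
    Quartile definitions vary; statistics.quantiles (exclusive method) is assumed here."""
    q1, q2, q3 = statistics.quantiles(data, n=4)
    return (q1 + q3 - 2 * q2) / (q3 - q1)

data = [1, 2, 2, 3, 3, 4, 5, 9, 15]   # hypothetical right-skewed data
print(bowley_skewness(data))          # positive, and always between -1 and +1
```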

The most commonly used measures of skewness (those discussed here) may produce some surprising results, such as a negative value when the shape of the distribution appears skewed to the right.

It is important for researchers from the behavioral and business sciences to measure skewness when it appears in their data. A great amount of skewness may motivate the researcher to investigate the existence of outliers. When making decisions about which measure of location to report and which inferential statistic to employ, one should take into consideration the estimated skewness of the population. Normal distributions have zero skewness. Of course, a distribution can be perfectly symmetric and yet be far from normal. Transformations of the variables under study are commonly employed to reduce (positive) skewness. These transformations may include the square root, the logarithm, and the reciprocal of a variable.
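As an illustration of reducing positive skewness by transformation, the sketch below (assuming NumPy and SciPy are available) generates right-skewed lognormal data and shows that the log transform removes most of the skew.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # right-skewed by construction

print(skew(data))          # clearly positive (strong right skew)
print(skew(np.log(data)))  # close to zero: the log transform removes the skew
```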

For more about skewness (asymmetry), see Skewness

Standard Error of Estimate

The standard error (SE) is a statistical term that measures the accuracy with which a sample statistic estimates the corresponding parameter of the population of interest. The standard error of the mean measures the variation in the sampling distribution of the sample mean; it is usually denoted by $\sigma_\overline{x}$ and is calculated as

\[\sigma_\overline{x}=\frac{\sigma}{\sqrt{n}}\]
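For example, a minimal sketch of this formula in Python, using the sample standard deviation as an estimate of $\sigma$ for a hypothetical data set, is:

```python
import math
import statistics

# Standard error of the mean: sigma / sqrt(n).
# The sample standard deviation is used here as an estimate of sigma (data are hypothetical).
data = [12, 15, 11, 14, 13, 16, 12, 15]
se_mean = statistics.stdev(data) / math.sqrt(len(data))
print(se_mean)
```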

Drawing (obtaining) different samples from the same population of interest usually results in different values of the sample mean, indicating that there is a distribution of sample means having its own mean and variance. The standard error of the mean can be regarded as the standard deviation of all the possible sample means drawn from the same population.

The size of the standard error is affected by the standard deviation of the population and the number of observations in a sample, called the sample size. The larger the standard deviation of the population ($\sigma$), the larger the standard error will be, indicating that there is more variability in the sample means. However, the larger the number of observations in a sample, the smaller the standard error will be, indicating that there is less variability in the sample means; by less variability we mean that the sample is more representative of the population of interest.

If the sampled population is not very large, we need to make an adjustment in computing the SE of the sample mean. For a finite population, in which the total number of objects (observations) is $N$ and the number of objects (observations) in a sample is $n$, the adjustment factor is $\sqrt{\frac{N-n}{N-1}}$. This adjustment is called the finite population correction factor. The adjusted standard error is then

\[\frac{\sigma}{\sqrt{n}} \sqrt{\frac{N-n}{N-1}}\]
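A small sketch of the adjusted standard error, with hypothetical values for $\sigma$, $n$, and $N$, is given below.

```python
import math

def se_mean_fpc(sigma, n, N):
    """Standard error of the mean with the finite population correction:
    (sigma / sqrt(n)) * sqrt((N - n) / (N - 1))."""
    return (sigma / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

# Hypothetical values: population of 500, sample of 50, population SD of 10.
print(se_mean_fpc(sigma=10, n=50, N=500))
```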

The SE is used to:

  1. measure the spread of values of statistic about the expected value of that statistic
  2. construct confidence intervals
  3. test the null hypothesis about population parameter(s)

The standard error is computed from sample statistics. The formulas below apply to simple random samples, assuming that the size of the population ($N$) is at least 20 times larger than the sample size ($n$).
\begin{align*}
\text{Sample mean}, \overline{x} & \Rightarrow SE_{\overline{x}} = \frac{s}{\sqrt{n}}\\
\text{Sample proportion}, p &\Rightarrow SE_{p} = \sqrt{\frac{p(1-p)}{n}}\\
\text{Difference b/w means}, \overline{x}_1 - \overline{x}_2 &\Rightarrow SE_{\overline{x}_1-\overline{x}_2}=\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}\\
\text{Difference b/w proportions}, p_1-p_2 &\Rightarrow SE_{p_1-p_2}=\sqrt{\frac{p_1(1-p_1)}{n_1}+\frac{p_2(1-p_2)}{n_2}}
\end{align*}
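These four formulas can be translated almost directly into code; the sketch below uses hypothetical inputs purely to illustrate them.

```python
import math

def se_mean(s, n):
    """SE of a sample mean: s / sqrt(n)."""
    return s / math.sqrt(n)

def se_proportion(p, n):
    """SE of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(s1, n1, s2, n2):
    """SE of a difference between two sample means."""
    return math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    """SE of a difference between two sample proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical inputs, purely to illustrate the formulas
print(se_mean(s=4.2, n=36))
print(se_proportion(p=0.4, n=100))
print(se_diff_means(s1=4.2, n1=36, s2=5.1, n2=49))
print(se_diff_proportions(p1=0.4, n1=100, p2=0.5, n2=120))
```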

The standard error is computed in the same way as the corresponding standard deviation, except that it uses sample statistics whereas the standard deviation uses population parameters.

 

For more about the SE, follow the link Standard Error of Estimate
