What is the Measure of Kurtosis (2012)

Introduction to Kurtosis

In statistics, a measure of kurtosis is a measure of the “tailedness” of the probability distribution of a real-valued random variable. The standard measure of kurtosis is based on a scaled version of the fourth moment of the data or population. Therefore, the measure of kurtosis is related to the tails of the distribution, not its peak.

Measure of Kurtosis

Sometimes, kurtosis is characterized as a measure of peakedness, but this is mistaken. A distribution with a relatively high peak is called leptokurtic. A distribution that is flat-topped is called platykurtic. The normal distribution, which is neither very peaked nor very flat-topped, is called mesokurtic. In some cases, a histogram can be an effective graphical technique for showing the skewness and kurtosis of a data set.


Data sets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Data sets with low kurtosis tend to have a flat top near the mean rather than a sharp peak.

Two measures are commonly used: the moment coefficient of kurtosis and the percentile coefficient of kurtosis.

Moment Coefficient of Kurtosis = $b_2 = \frac{m_4}{S^4} = \frac{m_4}{m_2^2}$, where $m_2 = S^2$ is the second central moment (the variance) and $m_4$ is the fourth central moment.
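
As a quick illustration, here is a minimal NumPy sketch of the moment coefficient (the function name and the simulated data are my own, for demonstration only):

```python
import numpy as np

def moment_kurtosis(x):
    """Moment coefficient of kurtosis: b2 = m4 / m2^2."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d**2)  # second central moment (the variance)
    m4 = np.mean(d**4)  # fourth central moment
    return m4 / m2**2

# For a large sample from a normal distribution, b2 should be close to 3.
rng = np.random.default_rng(1)
print(moment_kurtosis(rng.normal(size=100_000)))  # ~3.0
```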

Percentile Coefficient of Kurtosis = $k=\frac{Q.D}{P_{90}-P_{10}}$
where Q.D = $\frac{1}{2}(Q_3 - Q_1)$ is the semi-interquartile range. For a normal distribution, $k$ has a value of 0.263.
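
A companion sketch for the percentile coefficient (again a hypothetical example; NumPy's percentile function supplies the quartiles and percentiles):

```python
import numpy as np

def percentile_kurtosis(x):
    """Percentile coefficient of kurtosis: k = Q.D / (P90 - P10)."""
    q1, q3 = np.percentile(x, [25, 75])
    p10, p90 = np.percentile(x, [10, 90])
    qd = (q3 - q1) / 2  # semi-interquartile range
    return qd / (p90 - p10)

# For a large normal sample, k should be close to 0.263.
rng = np.random.default_rng(2)
print(percentile_kurtosis(rng.normal(size=100_000)))  # ~0.263
```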

Dr. Wheeler defines kurtosis as:

The kurtosis parameter is a measure of the combined weight of the tails relative to the rest of the distribution.

So, kurtosis is all about the tails of the distribution – not the peakedness or flatness.

A normal random variable has a kurtosis of 3, irrespective of its mean or standard deviation. If a random variable’s kurtosis is greater than 3, it is considered leptokurtic; if its kurtosis is less than 3, it is considered platykurtic.
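
The following simulation sketch (the distributions and sample size are arbitrary choices) classifies three familiar distributions by their moment coefficient of kurtosis:

```python
import numpy as np

rng = np.random.default_rng(3)
samples = {
    "normal":  rng.normal(size=100_000),   # theoretical kurtosis 3.0
    "Laplace": rng.laplace(size=100_000),  # theoretical kurtosis 6.0
    "uniform": rng.uniform(size=100_000),  # theoretical kurtosis 1.8
}

for name, x in samples.items():
    d = x - x.mean()
    b2 = np.mean(d**4) / np.mean(d**2)**2  # moment coefficient of kurtosis
    # Sample kurtosis fluctuates around the theoretical value,
    # so a normal sample will land just above or below 3.
    label = "leptokurtic" if b2 > 3 else "platykurtic"
    print(f"{name:8s} b2 = {b2:5.2f} -> {label}")
```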

A large value of kurtosis indicates a more serious outlier issue and hence may lead the researcher to choose alternative statistical methods.


Some Examples of Kurtosis

  • In finance and insurance, risk analysis needs to focus on the tails of the distribution rather than assuming normality.
  • In ecology, kurtosis helps determine whether resource use within a guild is truly neutral or whether it differs among species.
  • The accuracy of the sample variance as an estimate of the population variance $\sigma^2$ depends heavily on kurtosis, as the simulation sketch below illustrates.
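
To illustrate the last point, here is a small simulation sketch (sample size and replication count are arbitrary choices): both populations below have unit variance, but the heavier-tailed one yields a much noisier sample variance.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 30, 20_000

# Both populations are scaled to unit variance; only the kurtosis differs.
uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))  # kurtosis 1.8
laplace = rng.laplace(0, 1 / np.sqrt(2), size=(reps, n))        # kurtosis 6.0

for name, x in [("uniform (platykurtic)", uniform),
                ("Laplace (leptokurtic)", laplace)]:
    s2 = x.var(axis=1, ddof=1)  # sample variance of each replicate
    print(f"{name}: std. dev. of s^2 = {s2.std():.3f}")
# The leptokurtic population gives a far less precise variance estimate.
```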

For further reading, see Moments in Statistics.

FAQs about Kurtosis

  1. Define Kurtosis.
  2. What is the moment coefficient of Kurtosis?
  3. What is the definition of kurtosis by Dr. Wheeler?
  4. Give examples of kurtosis from real life.


Sampling Error Definition, Example, Formula

In statistics, sampling error (also called estimation error) is the amount of inaccuracy in estimating some value that is caused by using only a portion of a population (i.e., a sample) rather than the whole population. The difference between the statistic (a value computed from the sample, such as the sample mean) and the corresponding parameter (the value for the population, such as the population mean) is called the sampling error. If $\bar{x}$ is the sample statistic and $\mu$ is the corresponding population parameter, then the sampling error is defined as $\bar{x} - \mu$.
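
As a minimal sketch of this definition (the population below is a made-up NumPy array), the sampling error of a sample mean can be computed directly whenever the population is known:

```python
import numpy as np

rng = np.random.default_rng(5)
population = rng.normal(loc=50, scale=10, size=10_000)  # hypothetical population
mu = population.mean()                                  # population parameter

sample = rng.choice(population, size=100, replace=False)
xbar = sample.mean()                                    # sample statistic

print(f"sampling error = xbar - mu = {xbar - mu:.3f}")
```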

Exact calculation of the sampling error is generally not feasible because the true value of the population is usually unknown; however, it can often be estimated by probabilistic modeling of the sample.


Causes of Sampling Error

  • One cause of sampling error is a biased sampling procedure. Every researcher should select samples that are free from bias and representative of the entire population of interest.
  • Another cause of sampling error is chance. Randomization and probability sampling are used to minimize it, but it is still possible that the randomized subjects/objects are not representative of the population.

Eliminating/Reducing the Sampling Error

Sampling error can be eliminated or reduced when the researcher uses a proper, unbiased probability sampling technique and the sample size is large enough.

  • Increasing the sample size
    The sampling error can be reduced by increasing the sample size; see the simulation sketch after this list. If the sample size $n$ is equal to the population size $N$, then the sampling error is zero.
  • Improving the sample design, e.g., by using stratification
    The population is divided into different groups (strata) containing similar units.
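
A short simulation sketch (population and sample sizes are arbitrary) illustrates the first point: the typical sampling error of the mean shrinks as $n$ grows and vanishes when $n = N$:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 10_000
population = rng.normal(loc=50, scale=10, size=N)  # hypothetical population
mu = population.mean()

for n in [10, 100, 1_000, N]:
    errors = [rng.choice(population, size=n, replace=False).mean() - mu
              for _ in range(200)]
    print(f"n = {n:>6}: mean |sampling error| = {np.mean(np.abs(errors)):.4f}")
# When n = N, every "sample" is the whole population, so the error is zero.
```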

The potential sources of error are summarized in the figure: Potential Sources of Sampling and Non-Sampling Errors.

Also Read: Sampling and Non-Sampling Errors

Read more about Sampling Error on Wikipedia

R Frequently Asked Questions: https://rfaqs.com

Truth about Bias in Statistics

Bias in statistics is defined as the difference between the expected value of a statistic and the true value of the corresponding parameter. Therefore, bias is a measure of the systematic error of an estimator. The bias indicates how far the estimator is, on average, from the true value of the parameter. For example, if we average a large number of estimates from an unbiased estimator, we will get close to the correct value.

Bias in Statistics: The Difference between Expected and True Value

In other words, bias is a systematic error in measurement or sampling, and it tells how far off, on average, the model is from the truth.

Gauss, C.F. (1821), during his work on the least-squares method, introduced the concept of an unbiased estimator.

The bias of an estimator of a parameter should not be confused with its degree of precision, as the degree of precision is a measure of the sampling error. Bias is the favoring of one group or outcome, intentionally or unintentionally, over other groups or outcomes in the population under study. Unlike random error, bias is a serious problem because it cannot be reduced simply by increasing the sample size and averaging the outcomes.


There are several types of bias, which should not be considered mutually exclusive:

  • Selection Bias (arises due to systematic differences between the groups compared)
  • Exclusion Bias (arises due to the systematic exclusion of certain individuals from the study)
  • Analytical Bias (arises due to the way the results are evaluated)

Mathematically, bias can be defined as follows:

Let the statistic $T$ be used to estimate a parameter $\theta$. If $E(T) = \theta + \text{bias}(\theta)$, then $\text{bias}(\theta)$ is called the bias of the statistic $T$, where $E(T)$ represents the expected value of the statistic $T$.

Note that if $\text{bias}(\theta) = 0$, then $E(T) = \theta$, so $T$ is an unbiased estimator of the true parameter $\theta$.
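
The classic example is the sample variance: dividing by $n$ gives a biased estimator of $\sigma^2$, while dividing by $n - 1$ gives an unbiased one. A simulation sketch (sample size and replication count are arbitrary choices) estimates $E(T)$ for both:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2, n, reps = 4.0, 10, 100_000

samples = rng.normal(0, np.sqrt(sigma2), size=(reps, n))
biased = samples.var(axis=1, ddof=0)    # divide by n
unbiased = samples.var(axis=1, ddof=1)  # divide by n - 1

print(f"E(T), divide by n:     {biased.mean():.3f} "
      f"(bias ~ {biased.mean() - sigma2:+.3f})")
print(f"E(T), divide by n - 1: {unbiased.mean():.3f} "
      f"(bias ~ {unbiased.mean() - sigma2:+.3f})")
# Theoretical bias of the divide-by-n estimator is -sigma2/n = -0.4.
```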


Reference:
Gauss, C.F. (1821, 1823, 1826). Theoria Combinationis Observationum Erroribus Minimis Obnoxiae, Parts 1, 2 and suppl. Werke 4, 1-108.

For further reading about Statistical Bias visit: Bias in Statistics.

Learn about Estimation and Types of Estimation