Quartile Deviation (2025)

Quartile deviation, denoted by QD, is an absolute measure of dispersion defined as half of the difference between the upper quartile ($Q_3$) and the lower quartile ($Q_1$).

The quartile deviation, also known as the semi-interquartile range (semi-IQR), focuses on the middle 50% of the data. It is calculated as half the difference between the third quartile ($Q_3$) and the first quartile ($Q_1$). One can write it mathematically as

$$QD = \frac{Q_3-Q_1}{2}$$

Note that the interquartile range is only the difference between the upper quartile ($Q_3$) and the lower quartile ($Q_1$), that is,

$$Interquartile\,\, Range = IQR = Q_3 - Q_1$$

The Relative Measure of Quartile Deviation is the Coefficient of Quartile Deviation and is given as

$$Coefficient\,\,of\,\,QD = \frac{Q_3 - Q_1}{Q_3 + Q_1}\times 100$$


When to Use QD

  • When dealing with skewed data or data with outliers.
  • When a quick and easy measure of dispersion is needed.

Interpretation of QD

  • Spread: A larger quartile deviation indicates greater variability in the middle portion of the data.
  • Outliers: QD is less sensitive to extreme values (outliers) than the standard deviation.

Quartile Deviation for Ungrouped Data

22, 22, 25, 25, 30, 30, 30, 31, 31, 33, 36, 39
40, 40, 42, 42, 48, 48, 50, 51, 52, 55, 57, 59
81, 86, 89, 89, 90, 91, 91, 91, 92, 93, 93, 93
93, 94, 94, 94, 95, 96, 96, 96, 97, 97, 98, 98
99, 99, 99, 100, 100, 100, 101, 101, 102, 102, 102, 102
102, 103, 103, 104, 104, 104, 105, 106, 106, 106, 107, 108
108, 108, 109, 109, 109, 110, 111, 112, 112, 113, 113, 113
113, 114, 115, 116, 116, 117, 117, 117, 118, 118, 119, 121

The above data is already sorted and there are a total of 96 observations. The first and third quartiles of the data can be computed as follows:

$Q_1 = \left(\frac{n}{4}\right)th$ value $= \left(\frac{96}{4}\right)th$ value $= 24th$ value. The 24th observation is 59, therefore, $Q_1=59$.

$Q_3 = \left(\frac{3n}{4}\right)th$ value $= \left(\frac{3\times 96}{4}\right)th$ value $= 72nd$ value. The 72nd observation is 108, therefore, $Q_3=108$.

The quartile deviation will be

$$QD=\frac{Q_3 - Q_1}{2} = \frac{108-59}{2} = 24.5$$

The Interquartile Range $= IQR = Q_3 - Q_1 = 108 - 59 = 49$

The coefficient of Quartile Deviation will be

$$Coefficient\,\, of\,\, QD = \frac{Q_3 - Q_1}{Q_3 + Q_1}\times 100 = \frac{108-59}{108+59}\times 100 = 29.34\%$$

Quartile Deviation for Grouped Data

Consider the following example for grouped data to compute the quartile deviation.

Classes | Frequencies | Class Boundaries | CF
11-14.9 | 11 | 10.95-14.95 | 11
15-20.9 | 19 | 14.95-20.95 | 30
21-24.9 | 21 | 20.95-24.95 | 51
25-30.9 | 34 | 24.95-30.95 | 85
31-34.9 | 16 | 30.95-34.95 | 101
35-40.9 | 9 | 34.95-40.95 | 110
41-44.9 | 4 | 40.95-44.95 | 114
Total | 114 | |

The first and third quartiles for the grouped data above are computed from the quartile class, where $l$, $h$, $f$, and $C$ are the lower boundary, width, frequency, and preceding cumulative frequency of the class that contains the quartile:

\begin{align*}
Q_1 &= l + \frac{h}{f}\left(\frac{n}{4} - C\right)\\
&= 14.95 + \frac{6}{19}\left(\frac{114}{4} - 11\right)\\
&= 14.95 + \frac{6}{19}(28.5 - 11) = 20.48\\
Q_3 &= l + \frac{h}{f}\left(\frac{3n}{4} - C\right)\\
&= 30.95 + \frac{4}{16}\left(\frac{3\times 114}{4} - 85\right)\\
&= 30.95 + \frac{4}{16}(85.5 - 85) = 30.95 + 0.125 = 31.08
\end{align*}

The QD is

$$QD = \frac{Q_3 - Q_1}{2} = \frac{31.08 - 20.48}{2} = \frac{10.60}{2} = 5.30$$

The Interquartile Range will be

$$IQR = Q_3 - Q_1 = 31.08 - 20.48 = 10.60$$

The coefficient of quartile deviation is

$$Coefficient\,\,of\,\, QD = \frac{Q_3 - Q_1}{Q_3 + Q_1}\times 100 = \frac{31.08 - 20.48}{31.08+20.48}\times 100 = 20.56\%$$

Advantages of QD

  • Less affected by outliers: Makes it suitable for skewed data.
  • Easy to calculate: Relatively simple compared to standard deviation.

Disadvantages of QD

  • Ignores extreme values: This may not provide a complete picture of the data’s spread.
  • Less sensitive to changes in data: Compared to standard deviation.

In summary, quartile deviation is a useful tool for understanding the spread of data, particularly when outliers are present. By focusing on the middle 50% of the data, it provides a robust measure of dispersion that is less sensitive to extreme values. However, it is important to consider its limitations: because it ignores the extreme values, it may not reflect the full spread of the data, and it is less sensitive to changes in the data than the standard deviation.

Frequently Asked Questions about Quartile Deviation

  1. What is quartile deviation?
  2. What are the advantages of QD?
  3. What are the disadvantages of QD?
  4. What is IQR?
  5. What is Semi-IQR?
  6. How is QD interpreted?
  7. How is QD computed for grouped and ungrouped data?
  8. When should QD be used?


MCQs Cluster Analysis Quiz 6

This post is an MCQs Cluster Analysis Quiz. There are 20 multiple-choice questions on clustering, covering topics such as k-means, k-median, k-means++, cosine similarity, k-medoids, Manhattan distance, etc. Let us start with the MCQs Cluster Analysis Quiz.

Online Multiple-Choice Questions about Cluster Analysis

1. Which of the following statements is true?

 
 
 
 

2. The k-means++ algorithm is designed to better initialize K-means, which will take the farthest point from the currently selected centroids. Suppose $k = 2$ and we have chosen the first centroid as $(0, 0)$. Among the following points (these are all the remaining points), which one should we take for the second centroid?

 
 
 
 

3. What are some common considerations and requirements for cluster analysis?

 
 
 
 

4. Which of the following statements about the K-means algorithm are correct?

 
 
 
 

5. Given three vectors $A, B$, and $C$, suppose the cosine similarity between $A$ and $B$ is $cos(A, B) = 1.0$, and the similarity between $A$ and $C$ is $cos(A, C) = -1.0$. Can we determine the cosine similarity between $B$ and $C$?

 
 

6. In the figure below, map the figure to the type of link it illustrates.


 
 
 

7. Suppose $X$ is a random variable with $P(X = -1) = 0.5$ and $P(X = 1) = 0.5$. In addition, we have another random variable $Y=X * X$. What is the covariance between $X$ and $Y$?

 
 
 
 

8. In the figure below, map the figure to the type of link it illustrates.


 
 
 

9. Is K-means guaranteed to find K clusters that lead to the global minimum of the SSE?

 
 

10. Which of the following statements, if any, is FALSE?

 
 
 
 

11. Which of the following statements is true?

 
 
 
 

12. In the figure below, map the figure to the type of link it illustrates.


 
 
 

13. Which of the following statements is true?

 
 

14. Given the two-dimensional points (0, 3) and (4, 0), what is the Manhattan distance between those two points?

 
 
 
 

15. Which of the following statements is true?

 
 
 
 

16. In the k-medoids algorithm, after computing the new center for each cluster, is the center always guaranteed to be one of the data points in that cluster?

 
 

17. Considering the k-median algorithm, if points $(-1, 3), (-3, 1),$ and $(-2, -1)$ are the only points that are assigned to the first cluster now, what is the new centroid for this cluster?

 
 
 
 

18. In the k-median algorithm, after computing the new center for each cluster, is the center always guaranteed to be one of the data points in that cluster?

 
 

19. Which of the following statements about the K-means algorithm are correct?

 
 
 
 

20. For k-means, will different initializations always lead to different clustering results?

 
 



Use of t Distribution in Statistics

This post is about the use of the t distribution in statistics. The t distribution, also known as Student's t-distribution, is a probability distribution used to estimate population parameters when the sample size is small or when the population variance is unknown. The t distribution is similar to the normal bell-shaped distribution but has heavier tails: it assigns less probability to the center and more probability to the tails than the standard normal distribution.

The t distribution is particularly useful as it accounts for the extra variability that comes with small sample sizes, making it a more accurate tool for statistical analysis in such cases.

The following are common situations in which the t distribution is used:

Use of t Distribution: Confidence Intervals

The t distribution is widely used in constructing confidence intervals. The width of these intervals depends on the degrees of freedom (sample size minus 1):

  1. Confidence Interval for One Sample Mean
    $$\overline{X} \pm t_{\frac{\alpha}{2}} \left(\frac{s}{\sqrt{n}} \right)$$
    where $t_{\frac{\alpha}{2}}$ is the upper $\frac{\alpha}{2}$ point of the t distribution with $v=n-1$ degrees of freedom and $s^2$ is the unbiased estimate of the population variance obtained from the sample, $s^2 = \frac{\Sigma (X_i-\overline{X})^2}{n-1} = \frac{\Sigma X^2 - \frac{(\Sigma X)^2}{n}}{n-1}$
  2. Confidence Interval for the Difference between Two Independent Samples' Means
    Let $X_{11}, X_{12}, \cdots, X_{1n_1}$ and $X_{21}, X_{22}, \cdots, X_{2n_2}$ be random samples of sizes $n_1$ and $n_2$ from normal populations with variances $\sigma_1^2$ and $\sigma_2^2$, respectively. Let $\overline{X}_1$ and $\overline{X}_2$ be the respective sample means. The confidence interval for the difference between the two population means $\mu_1 - \mu_2$, when the population variances $\sigma_1^2$ and $\sigma_2^2$ are unknown and the sample sizes $n_1$ and $n_2$ are small (less than 30), is
    $$(\overline{X}_1 - \overline{X}_2) \pm t_{\frac{\alpha}{2}}\, S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$
    where $S_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}$ is the pooled variance, and $s_1^2$ and $s_2^2$ are the unbiased estimates of the population variances $\sigma_1^2$ and $\sigma_2^2$, respectively.
  3. Confidence Interval for Paired Observations
    The confidence interval for $\mu_d=\mu_1-\mu_2$ is
    $$\overline{d} \pm t_{\frac{\alpha}{2}} \frac{S_d}{\sqrt{n}}$$
    where $\overline{d}$ and $S_d$ are the mean and standard deviation of the differences of $n$ pairs of measurements and $t_{\frac{\alpha}{2}}$ is the upper $\frac{\alpha}{2}$ point of the t distribution with $n-1$ degrees of freedom.

Use of t Distribution: Testing of Hypotheses

The t-tests are used to compare means between two groups or to test if a sample mean is significantly different from a hypothesized population mean.

  1. Testing of Hypothesis for One Sample Mean
    It compares the mean of a single sample to a hypothesized population mean when the population standard deviation is unknown,
    $$t=\frac{\overline{X}-\mu}{\frac{s}{\sqrt{n}}}$$
  2. Testing of Hypothesis for Difference between Two Population Means
    For two random samples of sizes $n_1$ and $n_2$ drawn from two normal populations having equal variances ($\sigma_1^2 = \sigma_2^2 = \sigma^2$), the test statistic is
    $$t=\frac{\overline{X}_1 – \overline{X}_2}{S_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$$
    with $v=n_1+n_2-2$ degrees of freedom.
  3. Testing of Hypothesis for Paired/Dependent Observations
    To test the null hypothesis $H_0: \mu_d = d_o$, the test statistic is
    $$t=\frac{\overline{d} - d_o}{\frac{s_d}{\sqrt{n}}}$$
    with $v=n-1$ degrees of freedom.
  4. Testing the Coefficient of Correlation
    For $n$ pairs of observations $(X, Y)$ with sample correlation coefficient $r$, the test statistic for the significance of the correlation coefficient is
    $$t=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$
    with $v=n-2$ degrees of freedom.
  5. Testing the Regression Coefficients
    The t distribution is used to test the significance of regression coefficients in linear regression models. It helps determine whether a particular independent variable ($X$) has a significant effect on the dependent variable ($Y$). The regression coefficient can be tested using the statistic
    $$t=\frac{\hat{\beta} - \beta}{SE_{\hat{\beta}}}$$
    where $SE_{\hat{\beta}} = \frac{S_{Y\cdot X}}{\sqrt{\Sigma (X-\overline{X})^2}}=\frac{\sqrt{\frac{\Sigma Y^2 - \hat{\beta}_0 \Sigma Y - \hat{\beta}_1 \Sigma XY }{n-2} } }{S_X \sqrt{n-1}}$

The t distribution is a useful statistical tool for data analysis as it allows the user to make inferences/conclusions about population parameters even when there is limited information about the population.



Frequently Asked Questions about the Use of t Distribution

  • What is t distribution?
  • Discuss what type of confidence intervals can be constructed by using t distribution.
  • Discuss what type of hypothesis testing can be performed by using t distribution.
  • How does the t distribution resemble the normal distribution?
  • What is meant by small sample size and unknown population variance?

Sampling Distribution of Means

Suppose we have a population of size $N$ having mean $\mu$ and variance $\sigma^2$. We draw all possible samples of size $n$ from this population, with or without replacement, and compute the mean of each sample, denoted by $\overline{x}$. These means are classified into a frequency table, called the frequency distribution of means, and the corresponding probability distribution of means is called the sampling distribution of means.

Sampling Distribution

A sampling distribution is defined as the probability distribution of the values of a sample statistic such as mean, standard deviation, proportions, or difference between means, etc., computed from all possible samples of size $n$ from a population. Some of the important sampling distributions are:

  • Sampling Distribution of Means
  • Sampling Distribution of the Difference Between Means
  • Sampling Distribution of the Proportions
  • Sampling Distribution of the Difference between Proportions
  • Sampling Distribution of Variances

Notations of Sampling Distribution of Means

The following notations are used for sampling distribution of means:

$\mu$: Population mean
$\sigma^2$: Population Variance
$\sigma$: Population Standard Deviation
$\mu_{\overline{X}}$: Mean of the Sampling Distribution of Means
$\sigma^2_{\overline{X}}$: Variance of Sampling Distribution of Means
$\sigma_{\overline{X}}$: Standard Deviation of the Sampling Distribution of Means

Formulas for Sampling Distribution of Means

The following formulas are used to compute the mean, variance, and standard deviation of the sampling distribution of means:

\begin{align*}
\mu_{\overline{X}} &= E(\overline{X}) = \Sigma \overline{X}P(\overline{X})\\
\sigma^2_{\overline{X}} &= E(\overline{X}^2) - [E(\overline{X})]^2\\
\text{where}\\
E(\overline{X}^2) &= \Sigma \overline{X}^2P(\overline{X})\\
\sigma_{\overline{X}} &= \sqrt{E(\overline{X}^2) - [E(\overline{X})]^2}
\end{align*}

Numerical Example: Sampling Distribution of Means

A population of $(N=5)$ has values 2, 4, 6, 8, and 10. Draw all possible samples of size 2 from this population with and without replacement. Construct the sampling distribution of sample means. Find the mean, variance, and standard deviation of the population and verify the following:

Sr. No. | Sampling with Replacement | Sampling without Replacement
1) | $\mu_{\overline{X}} = \mu$ | $\mu_{\overline{X}} = \mu$
2) | $\sigma^2_{\overline{X}}=\frac{\sigma^2}{n}$ | $\sigma^2_{\overline{X}}=\frac{\sigma^2}{n}\cdot \frac{N-n}{N-1}$
3) | $\sigma_{\overline{X}} = \frac{\sigma}{\sqrt{n}}$ | $\sigma_{\overline{X}} = \frac{\sigma}{\sqrt{n}} \sqrt{\frac{N-n}{N-1}}$

Solution

The solution to the above example is as follows:

Sampling with Replacement (Mean, Variance, and Standard Deviation)

The number of possible samples is $N^n = 5^2 = 25$.

Samples | $\overline{X}$ | Samples | $\overline{X}$ | Samples | $\overline{X}$
2, 2 | 2 | 4, 10 | 7 | 8, 8 | 8
2, 4 | 3 | 6, 2 | 4 | 8, 10 | 9
2, 6 | 4 | 6, 4 | 5 | 10, 2 | 6
2, 8 | 5 | 6, 6 | 6 | 10, 4 | 7
2, 10 | 6 | 6, 8 | 7 | 10, 6 | 8
4, 2 | 3 | 6, 10 | 8 | 10, 8 | 9
4, 4 | 4 | 8, 2 | 5 | 10, 10 | 10
4, 6 | 5 | 8, 4 | 6 | |
4, 8 | 6 | 8, 6 | 7 | |

The sampling distribution of sample means will be

$\overline{X}$ | Freq | $P(\overline{X})$ | $\overline{X}P(\overline{X})$ | $\overline{X}^2$ | $\overline{X}^2P(\overline{X})$
2 | 1 | 1/25 | 2/25 | 4 | 4/25
3 | 2 | 2/25 | 6/25 | 9 | 18/25
4 | 3 | 3/25 | 12/25 | 16 | 48/25
5 | 4 | 4/25 | 20/25 | 25 | 100/25
6 | 5 | 5/25 | 30/25 | 36 | 180/25
7 | 4 | 4/25 | 28/25 | 49 | 196/25
8 | 3 | 3/25 | 24/25 | 64 | 192/25
9 | 2 | 2/25 | 18/25 | 81 | 162/25
10 | 1 | 1/25 | 10/25 | 100 | 100/25
Total | 25 | 25/25 = 1 | 150/25 = 6 | | 1000/25 = 40

\begin{align*}
\mu_{\overline{X}} &= E(\overline{X}) = \Sigma \left[\overline{X}P(\overline{X})\right] = \frac{150}{25}=6\\
\sigma^2_{\overline{X}} &= E(\overline{X}^2) - [E(\overline{X})]^2=\Sigma \left[\overline{X}^2P(\overline{X})\right] - \left[\Sigma \overline{X}P(\overline{X})\right]^2\\
&= 40 - 6^2 = 4\\
\sigma_{\overline{X}} &= \sqrt{4}=2
\end{align*}

Mean, Variance, and Standard Deviation for Population

The following are computations for population values.

$X$ | 2 | 4 | 6 | 8 | 10 | Total = 30
$X^2$ | 4 | 16 | 36 | 64 | 100 | Total = 220

\begin{align*}
\mu &= \frac{\Sigma X}{N} = \frac{30}{5} = 6\\
\sigma^2 &= \frac{\Sigma X^2}{N} - \left(\frac{\Sigma X}{N} \right)^2\\
&=\frac{220}{5} - (6)^2 = 8\\
\sigma&= \sqrt{8} = 2.83
\end{align*}

Verifications:

  1. Mean: $\mu_{\overline{X}} = \mu \Rightarrow 6=6$
  2. Variance: $\sigma^2_{\overline{X}} = \frac{\sigma^2}{n} \Rightarrow 4=\frac{8}{2}$
  3. Standard Deviation: $\sigma_{\overline{X}}=\frac{\sigma}{\sqrt{n}} \Rightarrow 2=\frac{\sqrt{8}}{\sqrt{2}}=2$

Sampling without Replacement

The number of possible samples for sampling without replacement is $\binom{5}{2}=10$.

Samples | $\overline{x}$ | Samples | $\overline{x}$
2, 4 | 3 | 4, 8 | 6
2, 6 | 4 | 4, 10 | 7
2, 8 | 5 | 6, 8 | 7
2, 10 | 6 | 6, 10 | 8
4, 6 | 5 | 8, 10 | 9

The sampling distribution of sample means for sampling without replacement is

$\overline{x}$ | Freq | $P(\overline{x})$ | $\overline{x}P(\overline{x})$ | $\overline{x}^2$ | $\overline{x}^2P(\overline{x})$
3 | 1 | 1/10 | 3/10 | 9 | 9/10
4 | 1 | 1/10 | 4/10 | 16 | 16/10
5 | 2 | 2/10 | 10/10 | 25 | 50/10
6 | 2 | 2/10 | 12/10 | 36 | 72/10
7 | 2 | 2/10 | 14/10 | 49 | 98/10
8 | 1 | 1/10 | 8/10 | 64 | 64/10
9 | 1 | 1/10 | 9/10 | 81 | 81/10
Total | 10 | 10/10 = 1 | 60/10 = 6 | | 390/10 = 39

\begin{align*}
\mu_{\overline{X}} &= E(\overline{X}) = \Sigma \left[\overline{X}P(\overline{X})\right] = \frac{60}{10}=6\\
\sigma^2_{\overline{X}} &= E(\overline{X}^2) - [E(\overline{X})]^2=\Sigma \left[\overline{X}^2P(\overline{X})\right] - \left[\Sigma \overline{X}P(\overline{X})\right]^2\\
&= 39 - 6^2 = 3\\
\sigma_{\overline{X}} &= \sqrt{3}=1.73
\end{align*}

Verifications:

  1. Mean: $\mu_{\overline{X}} = \mu \Rightarrow 6=6$
  2. Variance: $\sigma^2_{\overline{X}} = \frac{\sigma^2}{n}\cdot \left(\frac{N-n}{N-1}\right) \Rightarrow 3=\frac{8}{2}\cdot\left(\frac{5-2}{5-1}\right)=3$
  3. Standard Deviation: $\sigma_{\overline{X}}=\frac{\sigma}{\sqrt{n}}\sqrt{\frac{N-n}{N-1}} \Rightarrow 1.73=\frac{2.83}{\sqrt{2}}\sqrt{\frac{5-2}{5-1}}=1.73$

Why is Sampling Distribution Important?

  • Inference: Sampling distribution of means allows users to make inferences about the population mean based on sample data.
  • Hypothesis Testing: It is crucial for hypothesis testing, where the researcher compares sample statistics to population parameters.
  • Confidence Intervals: It helps construct confidence intervals, which provide a range of values likely to contain the population mean.

Note that the sampling distribution of means provides a framework for understanding how sample means vary from sample to sample and how they relate to the population mean. This understanding is fundamental to statistical inference and decision-making.
