The t Distribution (2024)

Introduction to t Distribution

The Student's t distribution, or simply the t distribution, is a probability distribution similar to the normal probability distribution but with heavier tails. The t distribution is more likely than the normal distribution to produce values that fall far from its mean. The t distribution is an important statistical tool for making inferences about population parameters when the population standard deviation is unknown.

The t-distribution is used when one needs to estimate population parameters (such as the mean) but the population standard deviation is unknown. When $n$ is small (less than 30), one must be careful in invoking the normal distribution for $\overline{X}$, because the distribution of $\overline{X}$ depends on the shape of the population distribution. Therefore, no single inferential procedure can be expected to work for all kinds of population distributions.


One Sample t-Test Formula

If $X_1, X_2, \cdots, X_n$ is a random sample from a normal population with mean $\mu$ and standard deviation $\sigma$, the sample mean $\overline{X}$ is exactly normally distributed with mean $\mu$ and standard deviation $\frac{\sigma}{\sqrt{n}}$, so $Z=\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}}$ is a standard normal variable. When $\sigma$ is unknown, the sample standard deviation $s$ is used in its place, giving
$$t=\frac{\overline{X} - \mu}{\frac{s}{\sqrt{n}}},$$
which is analogous to the Z-statistic.
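
As a quick illustration, the following R sketch (with made-up numbers) computes the t statistic by hand and checks it against the built-in t.test():

```r
# A minimal sketch (hypothetical data) of the one-sample t statistic:
x    <- c(12.1, 11.6, 12.9, 12.4, 11.8, 12.6)   # hypothetical sample
mu0  <- 12                                      # hypothesized population mean
tval <- (mean(x) - mu0) / (sd(x) / sqrt(length(x)))
tval
t.test(x, mu = mu0)$statistic                   # agrees with the manual value
```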

The Sampling Distribution for t

Consider samples of size $n$ drawn from a normal population with mean $\mu$. If, for each sample, we compute $t$ using the sample mean $\overline{X}$ and the sample standard deviation $S$ (or $s$), the sampling distribution for $t$ can be obtained:

$$Y=\frac{k}{\left(1 + \frac{t^2}{n-1}\right)^{\frac{n}{2}} } = \frac{k}{\left(1+ \frac{t^2}{v} \right)^{\frac{v+1}{2} }},$$
where $k$ is a constant depending on $n$ such that the total area under the curve is one, and $v=n-1$ is called the number of degrees of freedom.

The t distributions are symmetric around zero but have thicker tails (are more spread out) than the standard normal distribution. Note that for large values of $n$, the t-distribution approaches the standard normal distribution.
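
A short R sketch makes this visible by overlaying t densities on the standard normal curve (the degrees of freedom here are arbitrary):

```r
# Comparing t densities with the standard normal density:
x <- seq(-4, 4, length.out = 200)
plot(x, dnorm(x), type = "l", lwd = 2, ylab = "Density")
lines(x, dt(x, df = 3),  lty = 2)   # heavy tails for small df
lines(x, dt(x, df = 30), lty = 3)   # nearly indistinguishable from N(0,1)
legend("topright", legend = c("N(0,1)", "t, df = 3", "t, df = 30"), lty = 1:3)
```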

Properties of the t Distribution

  • The t distribution is bell-shaped, unimodal, and symmetrical around its mean of zero (like the standard normal distribution).
  • The variance of the t-distribution is always greater than 1 (it exists only for $v>2$).
  • The shape of the t-distribution changes as the number of degrees of freedom changes, so we have a family of $t$ distributions.
  • For small values of $n$, the distribution is considerably flatter around the center and more spread out than the normal distribution, but the t-distribution approaches the normal as the sample size increases without limit.
  • The mean and variance of the t distribution are $\mu=0$ and $\sigma^2 = \frac{v}{v-2}$, where $v>2$.

Common Applications of t Distribution

  • t-tests are used to compare means between two groups (see the R sketch after this list).
  • t-tests are used to assess whether a sample mean differs significantly from a hypothesized population mean.
  • t-values are used for constructing confidence intervals for population means when the population standard deviation is unknown.
  • Used to test the significance of correlation and regression coefficients.
  • Used to construct confidence intervals for correlation and regression coefficients.
  • Used to estimate the standard error of various statistical models.
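
The following R sketch (using simulated data, so the numbers are purely illustrative) demonstrates the first three uses:

```r
# Hypothetical data for illustrating common t-test uses:
set.seed(1)
g1 <- rnorm(15, mean = 50, sd = 5)   # group 1
g2 <- rnorm(15, mean = 53, sd = 5)   # group 2

t.test(g1, g2)          # compare the means of two groups
t.test(g1, mu = 52)     # compare a sample mean with a hypothesized mean
t.test(g1)$conf.int     # confidence interval for the population mean
```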

Assumptions of the t Distribution

The t-distribution relies on the following assumptions:

  • Independence: The observations in the sample must be independent of each other. This means that the value of one observation does not influence the value of another.
  • Normality: The population from which the sample is drawn should be normally distributed. However, the t-distribution is relatively robust to violations of this assumption, especially for larger sample sizes.
  • Homogeneity of Variance: If comparing two groups, the variances of the two populations should be equal. This assumption is important for accurate hypothesis testing.

Note that significant deviations from normality or unequal variances can affect the accuracy of the results. Therefore, it is always good practice to check the assumptions before conducting a t-test, and to consider alternative non-parametric tests if the assumptions are not met, as sketched below.
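
A minimal R sketch of such checks, using simulated data, might look like this (formal tests are best paired with graphical checks such as Q-Q plots):

```r
# Hypothetical two-group data for assumption checking:
set.seed(2)
g1 <- rnorm(20, mean = 100, sd = 10)
g2 <- rnorm(20, mean = 105, sd = 10)

shapiro.test(g1)        # normality check, group 1
shapiro.test(g2)        # normality check, group 2
var.test(g1, g2)        # equality of the two variances
wilcox.test(g1, g2)     # non-parametric alternative if assumptions fail
```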


Normal Probability Distribution

The Gaussian or normal probability distribution plays a very important role in statistics. It was first investigated by researchers interested in gambling and in the distribution of errors made by people observing astronomical events. The normal probability distribution is also important in other fields, such as the social sciences, behavioural statistics, business and management sciences, and engineering and technology.

Importance of Normal Distribution

Some of the important reasons for the prominence of the normal probability distribution are:

  • Many variables (such as weight, height, marks, measurement errors, and IQ) are distributed approximately as the symmetrical bell-shaped normal curve.
  • Many inferential procedures (parametric tests: confidence intervals, hypothesis testing, regression analysis, etc.) assume that the variables follow the normal distribution.
  • Many probability distributions (for example, the binomial and the Poisson) approach a normal distribution under certain conditions.
  • Even if a variable is not normally distributed, a distribution of sample sums or averages on that variable will be approximately normally distributed if the sample size is large enough.
  • The mathematics of a normal curve is well-known and relatively simple. One can find the probability that a score randomly sampled from a normal distribution falls between $a$ and $b$ by integrating the normal probability density function (PDF) from $a$ to $b$. This is equivalent to finding the area under the curve between $a$ and $b$, assuming a total area of one.
  • Due to the Central Limit Theorem, the average of many independent random variables tends to follow a normal probability distribution, regardless of the original distribution of the variables.

Probability Density Function of Normal Distribution

The probability density function (PDF) of the normal distribution is known as the normal curve. Here $f(X)$ is the probability density, that is, the height of the curve at the value $X$:

$$f(X) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(X-\mu)^2}{2\sigma^2} }$$

There are two parameters in the PDF of the normal distribution: (i) the mean and (ii) the standard deviation. Everything else on the right-hand side of the PDF is a constant. There is a family of normal probability distributions, one for each combination of mean and standard deviation.
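
For instance, the following R lines evaluate the PDF directly from the formula (with an arbitrary mean of 100 and standard deviation of 10) and confirm the result with the built-in dnorm():

```r
# Evaluating the normal PDF from the formula and with dnorm():
mu <- 100; sigma <- 10; x <- 95
(1 / (sigma * sqrt(2 * pi))) * exp(-(x - mu)^2 / (2 * sigma^2))
dnorm(x, mean = mu, sd = sigma)   # same height of the curve at x
```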

Standard Normal Probability Distribution

One can work with the normal curve even without knowing integral calculus: one can use a computer to compute the area under the normal curve, or make use of a normal curve table. The normal curve table (standard normal table) is based on the standard normal curve ($Z$), which has a mean of 0 and a variance of 1. To use a standard normal curve table, one needs to convert raw scores to $Z$-scores. A $Z$-score is the number of standard deviations ($\sigma$ or $s$) a score is above or below the mean of a reference distribution.

$$Z_X = \frac{X-\mu}{\sigma}$$

For example, suppose one wishes to know the percentile rank of a score of 90 on an IQ test with $\mu = 100$ and $\sigma=10$. The $Z$-score will be

$$Z=\frac{X-\mu}{\sigma} = \frac{90-100}{10} = -1$$

One can either integrate the normal curve from $-\infty$ to $-1$ or use the standard normal table. The probability or area under the curve to the left of $-1$ is 0.1587, or 15.87%.
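
In R, the same value comes from pnorm():

```r
# Reproducing the percentile-rank example with pnorm():
pnorm(-1)                         # area to the left of Z = -1: 0.1587
pnorm(90, mean = 100, sd = 10)    # same result directly from the raw score
```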

[Figure: Standard normal probability distribution curve]

Key Characteristics of Normal Probability Distribution

  • Symmetry: In a normal distribution, the mean, median, and mode are all equal and located at the center of the curve.
  • Spread: In a normal distribution, the spread of the data is determined by the standard deviation. A larger standard deviation means a wider curve, and a smaller standard deviation means a narrower curve.
  • Area under the Normal Curve: The total area under the normal curve is always equal to 1, or 100% (verified numerically in the sketch below).
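
The area property can be checked numerically in R with integrate():

```r
# Numerical check that the total area under a normal curve is one:
integrate(dnorm, -Inf, Inf)                         # standard normal
integrate(dnorm, -Inf, Inf, mean = 100, sd = 10)    # any mean and sd
```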

Real-Life Applications of Normal Distribution

The following are some real-life applications of normal probability distribution.

  • Natural Phenomena:
    • Biological Traits: Many biological traits, such as weight, height, and IQ scores, tend to follow a normal distribution. This helps us to understand the typical range of values for different biological traits and identify outliers.
    • Physical Measurements: Errors in measurements often follow a normal distribution. This knowledge is crucial in fields like engineering and physics for quality control and precision.
  • Statistical Inference:
    • Hypothesis Testing: The normal distribution is used extensively in hypothesis testing to determine the statistical significance of the results. By understanding the distribution of sample means, one can make inferences about population parameters.
    • Confidence Intervals: Normal distribution helps calculate confidence intervals, which provide a range of values within which a population parameter is likely to fall with a certain level of confidence.
  • Machine Learning and Artificial Intelligence:
    • Feature Distribution: Many machine learning (ML) algorithms assume that features in the data follow a normal distribution. This normality assumption can influence the choice of algorithms and the effectiveness of models.
    • Error Analysis: The normal distribution is used to analyze the distribution of errors in machine learning models, helping to identify potential biases and improve accuracy.
  • Finance and Economics:
    • Asset Returns: While not perfectly normal, returns on many financial assets, such as stocks, are approximately normally distributed over short time periods. The assumption of normality is used in various financial models and risk assessments.
    • Economic Indicators: Economic indicators such as GDP growth rates and inflation rates often exhibit a normal distribution, allowing economists to analyze trends and make predictions.
  • Quality Control:
    • Process Control Charts: In manufacturing and other industries, normal distribution is used to create control charts that monitor the quality of products or processes. By tracking the distribution of measurements, one can identify when a process is going out of control.
    • Product Quality: Manufacturers use statistical quality control methods based on the normal distribution to ensure that products meet quality standards.
  • Everyday Life:
    • Standardized Tests: Standardized test scores, such as SAT and GRE scores, are often normalized to a standard normal distribution, allowing comparisons between different test-takers.

Non Central Chi Square Distribution (2013)

The Non Central Chi Square Distribution is a generalization of the Chi-Square Distribution. If $Y_{1}, Y_{2}, \cdots, Y_{n} \sim N(0,1)$ independently, then each $Y_{i}^{2} \sim \chi_{(1)}^{2}$ and $\sum Y_{i}^{2} \sim \chi_{(n)}^{2}$.

If the means ($\mu_{i}$) are non-zero, then $y_{i} \sim N(\mu_{i}, 1)$, i.e., each $y_{i}$ has a different mean, and
\begin{align*}
\Rightarrow & \qquad y_i^2 \sim \chi^2_{1, \frac{\mu_i^2}{2}} \\
\Rightarrow & \qquad \sum y_i^2 \sim \chi^2_{(n, \frac{\sum \mu_i^2}{2})} = \chi_{(n,\lambda)}^{2}
\end{align*}

Note that if $\lambda = 0$, then we have the central $\chi^{2}$. If $\lambda \ne 0$, then it is a noncentral chi-squared distribution, because the underlying normal variables are not centered at zero (the distribution is not standard normal).

Central Chi Square Distribution: $f(x)=\frac{1}{2^{\frac{n}{2}}\, \Gamma\left(\frac{n}{2}\right)}\, x^{\frac{n}{2}-1} e^{-\frac{x}{2}}; \qquad 0<x<\infty$

Theorem:

If $Y_{1}, Y_{2}, \cdots, Y_{n}$ are independent normal random variables with $E(y_{i})=\mu_{i}$ and $V(y_{i})=1$, then $w=\sum y_{i}^{2}$ is distributed as noncentral chi-square with $n$ degrees of freedom and noncentrality parameter $\lambda$, where $\lambda=\frac{\sum \mu_{i}^{2}}{2}$, and has pdf

\begin{align*}
f(w)=e^{-\lambda} \sum_{i=0}^{\infty}\left[\frac{\lambda^{i}\, w^{\frac{n+2i}{2}-1}\, e^{-\frac{w}{2}}}{i!\; 2^{\frac{n+2i}{2}}\; \Gamma\left(\frac{n+2i}{2}\right)}\right] \qquad 0\le w<\infty
\end{align*}

Proof: Non Central Chi Square Distribution

Consider the moment generating function of $w=\sum y_{i}^{2}$.

\begin{align*}
M_{w}(t)=E(e^{wt})=E(e^{t\sum y_{i}^{2}}); \qquad \text{where } y_{i} \sim N(\mu_{i}, 1)
\end{align*}

By definition,
\begin{align*}
M_{w}(t) &= \int \cdots \int e^{t\sum y_{i}^{2}} \prod_{i} f(y_{i})\, dy_{1}\, dy_{2} \cdots dy_{n} \\
&= K_{1} \int \cdots \int e^{-\frac{1}{2}(1-2t)\left[\sum y_{i}^{2} - \frac{2\sum y_{i}\mu_{i}}{1-2t}\right]} dy_{1}\, dy_{2} \cdots dy_{n},
\end{align*}
where $K_{1}=\left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2}\sum \mu_{i}^{2}}$ collects the constants. By completing the square,
\begin{align*}
M_{w}(t) &= K_{1} \int \cdots \int e^{-\frac{1}{2}(1-2t)\sum\left[\left(y_{i}-\frac{\mu_{i}}{1-2t}\right)^{2} - \frac{\mu_{i}^{2}}{(1-2t)^{2}}\right]} dy_{1}\, dy_{2} \cdots dy_{n} \\
&= e^{-\frac{\sum \mu_{i}^{2}}{2}\left(1-\frac{1}{1-2t}\right)} \frac{1}{\left(\sqrt{1-2t}\right)^{n}} \int \cdots \int \left(\frac{1}{\sqrt{2\pi}}\right)^{n} \left(\sqrt{1-2t}\right)^{n}\, e^{-\frac{1-2t}{2}\sum\left(y_{i}-\frac{\mu_{i}}{1-2t}\right)^{2}} dy_{1}\, dy_{2} \cdots dy_{n}
\end{align*}
The remaining integral equals one, because the integrand is the joint density of $n$ independent normal variables, each with mean $\frac{\mu_{i}}{1-2t}$ and variance $\frac{1}{1-2t}$. Therefore

\begin{align*}
M_{w}(t) &= e^{-\frac{\sum \mu_i^2}{2}\left(1-\frac{1}{1-2t}\right)} \left(\frac{1}{\sqrt{1-2t}}\right)^{n} \\
&= \left(\frac{1}{\sqrt{1-2t}}\right)^{n} e^{-\lambda\left(1-\frac{1}{1-2t}\right)} \\
&= e^{-\lambda}\, e^{\frac{\lambda}{1-2t}}\, \frac{1}{(1-2t)^{\frac{n}{2}}} \\
&\text{Expanding } e^{\frac{\lambda}{1-2t}} \text{ as a power series,} \\
&= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\,(1-2t)^{i}\,(1-2t)^{n/2}} \\
M_{w=\sum y_{i}^{2}}(t) &= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\,(1-2t)^{\frac{n+2i}{2}}} \tag{A}
\end{align*}

Now the Moment Generating Function (MGF) for the noncentral chi-square distribution, computed from the given density function, is
\begin{align*}
M_{\omega}(t) &= E(e^{\omega t}) \\
&= \int_{0}^{\infty} e^{\omega t}\, e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}\, \omega^{\frac{n+2i}{2}-1}\, e^{-\frac{\omega}{2}}}{i!\, 2^{\frac{n+2i}{2}}\, \Gamma\left(\frac{n+2i}{2}\right)}\, d\omega \\
&= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\, 2^{\frac{n+2i}{2}}\, \Gamma\left(\frac{n+2i}{2}\right)} \int_{0}^{\infty} e^{-\frac{\omega}{2}(1-2t)}\, \omega^{\frac{n+2i}{2}-1}\, d\omega
\end{align*}
Let
\begin{align*}
\frac{\omega}{2}(1-2t) &= P \\
\Rightarrow \omega &= \frac{2P}{1-2t} \\
\Rightarrow d\omega &= \frac{2\,dP}{1-2t}
\end{align*}

\begin{align*}
&= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\, 2^{\frac{n+2i}{2}}\, \Gamma\left(\frac{n+2i}{2}\right)} \int_{0}^{\infty} e^{-P} \left(\frac{2P}{1-2t}\right)^{\frac{n+2i}{2}-1} \frac{2\,dP}{1-2t} \\
&= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}\, 2^{\frac{n+2i}{2}}}{i!\, 2^{\frac{n+2i}{2}}\, \Gamma\left(\frac{n+2i}{2}\right)\, (1-2t)^{\frac{n+2i}{2}}} \int_{0}^{\infty} e^{-P}\, P^{\frac{n+2i}{2}-1}\, dP \\
&= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\, \Gamma\left(\frac{n+2i}{2}\right)\, (1-2t)^{\frac{n+2i}{2}}}\, \Gamma\left(\frac{n+2i}{2}\right)
\end{align*}

since \[\int\limits_{0}^{\infty} e^{-P}\, P^{\frac{n+2i}{2}-1}\, dP = \Gamma\left(\frac{n+2i}{2}\right)\]

\[M_{\omega}(t) = e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}}{i!\,(1-2t)^{\frac{n+2i}{2}}} \tag{B}\]

Comparing ($A$) and ($B$),
\[M_{w=\sum y_{i}^{2}}(t) = M_{\omega}(t)\]


By the Uniqueness theorem,

\[f_{w}(w) = f_{\omega}(\omega)\]
\begin{align*}
\Rightarrow \qquad f_{w}(w) &= e^{-\lambda} \sum_{i=0}^{\infty} \frac{\lambda^{i}\, w^{\frac{n+2i}{2}-1}\, e^{-\frac{w}{2}}}{i!\, 2^{\frac{n+2i}{2}}\, \Gamma\left(\frac{n+2i}{2}\right)}; \qquad 0 \le w < \infty
\end{align*}
is the pdf of the noncentral chi-square with $n$ degrees of freedom and noncentrality parameter $\lambda = \frac{\sum \mu_{i}^{2}}{2}$. The noncentral chi-square distribution is also additive with the central chi-square distribution.
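
As a sanity check, the series form of this pdf can be evaluated numerically in R and compared with the built-in dchisq(). Note that R parameterizes the noncentrality as $ncp=\sum \mu_i^2$, which is twice the $\lambda$ used here:

```r
# Checking the derived pdf against R's dchisq() (arbitrary values):
n      <- 5      # degrees of freedom
lambda <- 1.5    # noncentrality parameter as defined in this post
w      <- 4.2    # point at which to evaluate the pdf

i <- 0:50        # truncate the infinite series (terms decay rapidly)
sum(exp(-lambda) * lambda^i * w^((n + 2 * i) / 2 - 1) * exp(-w / 2) /
    (factorial(i) * 2^((n + 2 * i) / 2) * gamma((n + 2 * i) / 2)))
dchisq(w, df = n, ncp = 2 * lambda)   # agrees with the series value
```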

Application of Non Central Chi Square Distribution

  • Power analysis: The Non Central Chi Square Distribution is useful in calculating the power of chi-squared tests (see the R sketch after this list).
  • Non-normal data: When the underlying data are not normally distributed, the noncentral chi-squared distribution can be used in certain tests that rely on chi-squared approximations.
  • Signal processing: In some areas, like radar systems, the noncentral chi-squared distribution arises when modeling signals with background noise.
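
A minimal power-analysis sketch in R, with purely hypothetical degrees of freedom and noncentrality, might look like this:

```r
# Under the alternative, a chi-squared test statistic is approximately
# noncentral chi-squared; power is the tail area beyond the H0 cut-off.
df_test <- 4      # degrees of freedom of the test (hypothetical)
ncp     <- 8      # assumed noncentrality under H1 (R's convention)
alpha   <- 0.05

crit  <- qchisq(1 - alpha, df_test)                           # H0 cut-off
power <- pchisq(crit, df_test, ncp = ncp, lower.tail = FALSE) # P(reject | H1)
power
```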

F Distribution: Ratios of two Independent Estimators (2013)

The F-distribution is a continuous probability distribution (also known as Snedecor's F distribution or the Fisher-Snedecor distribution), named in honor of R.A. Fisher and George W. Snedecor. This distribution arises frequently as the null distribution of a test statistic in hypothesis testing, is used to develop confidence intervals, and appears in the analysis of variance for the comparison of several population means.

If $s_1^2$ and $s_2^2$ are two unbiased estimates of the population variance $\sigma^2$ obtained from independent samples of sizes $n_1$ and $n_2$, respectively, from the same normal population, then the F-ratio is mathematically defined as
\[F=\frac{s_1^2}{s_2^2}=\frac{\frac{(n_1-1)\frac{s_1^2}{\sigma^2}}{v_1}}{\frac{(n_2-1)\frac{s_2^2}{\sigma^2}}{v_2}}\]
where $v_1=n_1-1$ and $v_2=n_2-1$. Since $\chi_1^2=(n_1-1)\frac{s_1^2}{\sigma^2}$ and $\chi_2^2=(n_2-1)\frac{s_2^2}{\sigma^2}$ are distributed independently as $\chi^2$ with $v_1$ and $v_2$ degrees of freedom, respectively, we have
\[F=\frac{\frac{\chi_1^2}{v_1}}{\frac{\chi_2^2}{v_2}}\]

So, the F Distribution is the ratio of two independent Chi-square ($\chi^2$) statistics, each divided by its respective degrees of freedom, as the simulation sketch below illustrates.
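
A short R simulation (degrees of freedom chosen arbitrarily) confirms this construction:

```r
# The scaled ratio of two independent chi-squares behaves like F(v1, v2):
set.seed(3)
v1 <- 5; v2 <- 10
Fsim <- (rchisq(1e5, v1) / v1) / (rchisq(1e5, v2) / v2)
quantile(Fsim, 0.95)   # simulated 95th percentile ...
qf(0.95, v1, v2)       # ... close to the theoretical one
```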

F Distribution Properties

  • The F distribution takes only non-negative values, since the numerator and denominator of the F-ratio are squared quantities.
  • The range of F values is from 0 to infinity.
  • The shape of the F-curve depends on the parameters $v_1$ and $v_2$ (its numerator and denominator df). It is non-symmetrical and skewed to the right (positively skewed). It tends to become more and more symmetric when one or both of the parameter values ($v_1$, $v_2$) increase, as shown in the following figure.

[Figure: F distribution curves for different degrees of freedom]

  • It is asymptotic. As F values increase, the F-curve approaches the horizontal axis but never crosses or touches it (behavior similar to the normal probability distribution).
  • F has a unique mode at the value \[\tilde{F}=\frac{v_2(v_1-2)}{v_1(v_2+2)},\quad (v_1>2)\] which is always less than unity.
  • The mean of F is $\mu=\frac{v_2}{v_2-2},\quad (v_2>2)$
  • The variance of F is \[\sigma^2=\frac{2v_2^2(v_1+v_2-2)}{v_1(v_2-2)^2(v_2-4)},\quad (v_2>4)\]

Assumptions of F Distribution

The statistical procedure of comparing the variances of two populations has the assumptions:

  • The two populations (from which the samples are drawn) follow the Normal distribution.
  • The two samples are random samples drawn independently from their respective populations.

The statistical procedure of comparing three or more population means has the assumptions:

  • The populations follow the Normal distribution.
  • The populations have equal standard deviations $\sigma$.
  • The populations are independent of each other.

Note

This distribution is relatively insensitive to violations of the assumption of normality of the parent population or the assumption of equal variances.

Use of F Distribution Table

For a given (specified) level of significance $\alpha$, the symbol $F_\alpha(v_1,v_2)$ is used to represent the upper (right-tail) $100\alpha\%$ point of an F distribution having $v_1$ and $v_2$ df.

The lower (left-tail) percentage point can be found by taking the reciprocal of the F-value corresponding to the upper (right-tail) percentage point, with the degrees of freedom interchanged, i.e., \[F_{1-\alpha}(v_1,v_2)=\frac{1}{F_\alpha(v_2,v_1)}\]
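
In R terms, the upper point $F_\alpha(v_1,v_2)$ is qf(1 - alpha, v1, v2), so the rule can be verified as follows (values chosen arbitrarily):

```r
# Verifying the reciprocal rule with qf():
alpha <- 0.05
qf(alpha, 5, 10)             # lower point F_{1-alpha}(v1, v2)
1 / qf(1 - alpha, 10, 5)     # reciprocal of the upper point, df swapped
```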

The distribution for the variable F is given by
\[Y=k\,F^{\frac{v_1}{2}-1}\left(1+\frac{v_1 F}{v_2}\right)^{-\frac{v_1+v_2}{2}}\]
where $k$ is a constant chosen so that the total area under the curve is one.
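
As a check, the following R sketch evaluates this curve against the built-in density df(); the normalizing constant $k$, which the text leaves implicit, is written out explicitly using the standard result $k=\left(\frac{v_1}{v_2}\right)^{v_1/2}\frac{\Gamma\left(\frac{v_1+v_2}{2}\right)}{\Gamma\left(\frac{v_1}{2}\right)\Gamma\left(\frac{v_2}{2}\right)}$:

```r
# Checking the F density formula against R's df() (arbitrary values):
v1 <- 5; v2 <- 10; x <- 2
k  <- (v1 / v2)^(v1 / 2) * gamma((v1 + v2) / 2) / (gamma(v1 / 2) * gamma(v2 / 2))
k * x^(v1 / 2 - 1) * (1 + v1 * x / v2)^(-(v1 + v2) / 2)
df(x, df1 = v1, df2 = v2)   # same density value
```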
