Inferential Statistics Tests List (2023)

The following is a list of parametric and non-parametric inferential statistics tests. A short description of each test is also provided.

Inferential Statistics Tests: Parametric Statistics

1) Z test: Large sample test for one mean/average when sigma ($\sigma$) is known (or $n$ is large); the population distribution is normal.
2) t test: Small sample test for one mean/average when sigma ($\sigma$) is unknown (and $n$ is small); the population distribution is normal.
3) Z test: Large sample test for one proportion.
4) Z test: Large sample test for two means/averages when sigmas ($\sigma_1$ and $\sigma_2$) are known (or both samples are large) and the samples are independent.
5) t test: Small sample test for two means/averages when sigmas ($\sigma_1$ and $\sigma_2$) are unknown, the samples are independent and come from normal populations, and the variances are NOT pooled (assumed unequal).
6) t test: Small sample test for two means/averages when sigmas ($\sigma_1$ and $\sigma_2$) are unknown, the samples are independent and come from normal populations, and the variances ARE pooled (assumed equal).
7) t test: A test for two means/averages for dependent (paired or related) samples where $d$ (the difference between pairs) is normally distributed.
8) Z test: Large sample test for two proportions.
9) $\chi^2$: Chi-square goodness of fit test for a multinomial distribution, where each expected value is at least 5.
10) $\chi^2$: Chi-square test for contingency tables (rows & columns) where each expected value is at least 5. Either a test of independence, a test of homogeneity, or a test of association.
11) $\chi^2$: Test for one variance or standard deviation.
12) F test: Test for two variances or standard deviations for independent samples from normal populations.
13) F (ANOVA): Test for three or more means for independent random samples from normal populations. The variances are assumed to be equal.
14) Tukey Q: A multiple comparison test for all pairs of means (usually for equal sample sizes).
15) Dunnett q: A multiple comparison test that compares a control mean to the other means.
16) Hartley H, Bartlett, Levene, Brown-Forsythe, O'Brien: Tests for homoscedasticity (homogeneity of variances).
17) Pearson $r$: Pearson product-moment correlation coefficient.
18) Slope: Test on the slope of the linear regression line.
19) Intercept: Test on the y-intercept of the linear regression line.
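As a quick illustration of test # 2 above, here is a minimal Python sketch; the sample data and the hypothesized mean of 50 are made up for the example:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: population mean = mu0
    (test # 2: sigma unknown, small n, normal population)."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (divisor n - 1)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical data: test whether the population mean differs from 50
t = one_sample_t([52.1, 48.3, 55.0, 51.2, 49.8, 53.4], 50)
```

The computed $t$ would then be compared with the critical $t$ value for $n - 1 = 5$ degrees of freedom.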

Inferential Statistics Tests: Non-Parametric Tests

The following is the list of non-parametric tests, each with a short description.

1) Runs Test: Used to determine whether a sequence of data is random.
2) Mann-Whitney U Test: Analogous to Test # 5 from the parametric test list.
3) Sign Test: Analogous to Test # 2 from the parametric test list (single-sample median test), or to Test # 7.
4) Wilcoxon Signed-Rank Test: Similar to the sign test, but more efficient; analogous to Parametric Test # 7.
5) Kruskal-Wallis Test: Analogous to Parametric Test # 13.
6) Multiple Comparison Test: Analogous to Parametric Test # 14.
7) Spearman $r_s$ Rank Correlation: Analogous to Parametric Test # 17 (Pearson $r$).
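As a quick illustration of the sign test (test # 3 above), here is a minimal Python sketch; the paired differences are made up for the example:

```python
import math

def sign_test_p(diffs):
    """Two-sided exact sign test for H0: median difference = 0.
    Zero differences are discarded; under H0 the number of positive
    signs follows a Binomial(n, 0.5) distribution."""
    signs = [d for d in diffs if d != 0]
    n = len(signs)
    k = sum(1 for d in signs if d > 0)
    m = min(k, n - k)
    # exact two-sided p-value: double the smaller tail, capped at 1
    tail = sum(math.comb(n, i) for i in range(m + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired differences (after - before)
p = sign_test_p([1.2, 0.8, -0.3, 2.1, 0.9, 1.5, -0.4, 0.7])
```

A small p-value would indicate that the median difference is not zero; here the evidence is weak.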

Advanced Inferential Statistics Tests

The following is a list of some advanced inferential statistics tests with a short description of each.

1) Two-factor ANOVA with Replication: Interaction between the two factors can be tested.
2) Two-factor ANOVA without Replication: Only one observation per cell; no interaction effect between the two factors can be estimated.
3) One Datum Stat: Used to compare one piece of data (datum) to a mean.
4) McNemar Stat: Used to test a 2 x 2 table of matched discordant pairs.
5) Two Poisson Counts: Used to compare two Poisson counts.
6) Two Regression Slopes: Used to compare two regression equation slopes.
7) Several Regression Slopes: Used to compare several regression equation slopes.
8) Multiple Regression: Used to test a linear relationship involving more than two variables.
9) Holgate Statistic: Used to determine spatial distribution.

Important Online Hypothesis and Testing Quizzes (2024)

The post contains a list of online Hypothesis Testing quizzes from statistical inference, for the preparation of exams and statistical job tests in government, semi-government, and private organizations. These quizzes are also helpful for gaining admission to colleges and universities. All these online Hypothesis Testing quizzes will help the learner understand the related concepts and enhance their knowledge.

Click the links below to get started with Online Hypothesis and Testing Quizzes

  • MCQs Hypothesis Testing 08
  • MCQs Hypothesis Testing 07
  • MCQs Hypothesis Testing 06
  • MCQs Hypothesis Testing 05
  • MCQs Hypothesis Testing 04
  • MCQs Hypothesis Testing 03
  • MCQs Hypothesis Testing 02
  • MCQs Hypothesis Testing 01

Most of the MCQs on this Post cover Estimate and Estimation, Testing of Hypothesis, Parametric and Non-Parametric tests, etc.


Contingency Tables

Introduction to Contingency Tables

Contingency tables, also called cross tables or two-way frequency tables, describe the relationship between two or more categorical (qualitative) variables. A bivariate relationship is defined by the joint distribution of the two associated random variables.


Let $X$ and $Y$ be two categorical response variables. Let variable $X$ have $I$ levels and variable $Y$ have $J$ levels. The possible combinations of classifications for both variables are $I\times J$. The response $(X, Y)$ of a subject randomly chosen from some population has a probability distribution, which can be shown in a rectangular table having $I$ rows (for categories of $X$) and $J$ columns (for categories of $Y$).

The cells of this rectangular table represent the $IJ$ possible outcomes. Their probabilities (say $\pi_{ij}$) denote the probability that $(X, Y)$ falls in the cell in row $i$ and column $j$. When these cells contain frequency counts of outcomes, the table is called a contingency or cross-classification table, and it is referred to as an $I$ by $J$ ($I \times J$) table.

Joint and Marginal Distribution

The probability distribution {$\pi_{ij}$} is the joint distribution of $X$ and $Y$. The marginal distributions are the row and column totals obtained by summing the joint probabilities. For the row variable ($X$) the marginal probability is denoted by $\pi_{i+}$ and for the column variable ($Y$) it is denoted by $\pi_{+j}$, where the subscript "+" denotes the sum over the index it replaces; that is, $\pi_{i+}=\sum_j \pi_{ij}$ and $\pi_{+j}=\sum_i \pi_{ij}$, satisfying

$\sum_{i} \pi_{i+} = \sum_{j} \pi_{+j} = \sum_i \sum_j \pi_{ij} = 1$

Note that the marginal distributions are single-variable information, and do not pertain to association linkages between the variables.
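The joint and marginal definitions above are easy to verify numerically. Here is a minimal Python sketch with a made-up $2 \times 3$ table of counts:

```python
# Hypothetical 2 x 3 table of counts for X (rows) and Y (columns)
counts = [[20, 30, 10],
          [15, 15, 10]]

n = sum(sum(row) for row in counts)                # grand total
joint = [[c / n for c in row] for row in counts]   # pi_ij
row_marg = [sum(row) for row in joint]             # pi_{i+}
col_marg = [sum(col) for col in zip(*joint)]       # pi_{+j}
# Each marginal distribution, like the joint distribution, sums to 1
```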


In (many) contingency tables, one variable (say, $Y$) is a response, and the other (say, $X$) is an explanatory variable. When $X$ is fixed rather than random, the notion of a joint distribution for $X$ and $Y$ is no longer meaningful. However, for a fixed level of $X$, the variable $Y$ has a probability distribution. It is germane to study how this probability distribution of $Y$ changes as the level of $X$ changes.

Contingency Table Uses

  • Identify relationships between categorical variables.
  • See if one variable is independent of the other (i.e. if the frequency of one category is the same regardless of the other variable’s category).
  • Calculate probabilities of specific combinations occurring.
  • Often used as a stepping stone for further statistical analysis, like chi-square tests, to determine if the observed relationship between the variables is statistically significant.
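For the chi-square test of independence mentioned above, the expected count for each cell is obtained from the marginal totals. A minimal Python sketch with made-up counts:

```python
# Hypothetical 2 x 3 table of observed counts
counts = [[20, 30, 10],
          [15, 15, 10]]

n = sum(sum(row) for row in counts)
row_tot = [sum(row) for row in counts]
col_tot = [sum(col) for col in zip(*counts)]

# Under independence, E_ij = (row total * column total) / n
expected = [[r * c / n for c in col_tot] for r in row_tot]
```

The observed counts would then be compared with these expected counts using the chi-square statistic.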

Read more about Contingency Tables: https://itfeature.com


Chi Square Goodness of Fit Test (2019)

The post is about the Chi Square Goodness of Fit Test.

One application of the $\chi^2$ distribution is the test of goodness of fit. Using the $\chi^2$ distribution, it is possible to test the hypothesis that a population has a specified theoretical distribution. The theoretical distribution may be Normal, Binomial, Poisson, or any other distribution.

The Chi-Square Goodness of Fit Test enables us to check whether there is a significant difference between an observed frequency distribution and a theoretical (expected) frequency distribution based on some theoretical model, that is, how well the model fits the distribution of the data we have observed. A goodness of fit test between observed and expected frequencies is based upon

$\chi^2 = \sum\limits_{i=1}^k \frac{(OF_i - EF_i)^2}{EF_i}$

where $OF_i$ represents the observed and $EF_i$ the expected frequency for the $i$th class, and $k$ is the number of possible outcomes (the number of different classes).
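The statistic is straightforward to compute directly. Here is a minimal Python sketch with made-up data (60 rolls of a die under the null hypothesis that the die is fair):

```python
def chi_square_stat(observed, expected):
    """Goodness-of-fit statistic: sum over classes of (OF_i - EF_i)^2 / EF_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 60 rolls of a die, H0: each face is equally likely
observed = [8, 12, 9, 11, 6, 14]
expected = [10] * 6          # EF_i = 60 * (1/6) = 10 for each face
stat = chi_square_stat(observed, expected)
```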

Degrees of Freedom (Chi Square Goodness of Fit Test)

For a goodness of fit test with $k$ classes, the degrees of freedom are $v = k - 1$; one additional degree of freedom is lost for each population parameter that must be estimated from the sample data.

It is also important to note that

  • The computed $\chi^2$ value will be small if the observed frequencies are close to the corresponding expected frequencies, indicating a good fit.
  • The computed $\chi^2$ value will be large if the observed and expected frequencies differ greatly, indicating a poor fit.
  • A good fit leads to the non-rejection (acceptance) of the null hypothesis that the sample distribution agrees with the hypothetical or theoretical distribution.
  • A bad fit leads to the rejection of the null hypothesis.

Critical Region (Chi Square Goodness of Fit Test)

The critical region under the $\chi^2$ curve will fall in the right tail of the distribution. We find the critical value of $\chi^2_{\alpha}$ from the table for a specified level of significance $\alpha$ and $v$ degrees of freedom.

Decision

If the computed $\chi^2$ value is greater than the critical value $\chi^2_{\alpha}$, the null hypothesis will be rejected. Thus $\chi^2 > \chi^2_{\alpha}$ constitutes the critical region.
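As a small worked example of this decision rule in Python (the die-roll counts below are made up; the tabled critical value $\chi^2_{0.05}$ for $v = 5$ degrees of freedom is about 11.07):

```python
def chi_square_stat(observed, expected):
    """Goodness-of-fit statistic: sum of (OF_i - EF_i)^2 / EF_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: 60 rolls of a die, H0: the die is fair
stat = chi_square_stat([8, 12, 9, 11, 6, 14], [10] * 6)

# k = 6 classes, so v = k - 1 = 5 degrees of freedom;
# tabled critical value chi^2_{0.05, 5} is approximately 11.07
critical = 11.07
reject = stat > critical   # False here: H0 is not rejected, the fit is acceptable
```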


Some Requirements

The Chi Square Goodness of Fit test should not be applied unless each of the expected frequencies is at least 5. When several classes have expected frequencies smaller than 5, these classes should be combined (merged). The total number of frequencies should not be less than fifty.

Note that we must look with suspicion upon circumstances where $\chi^2$ is too close to zero, since it is rare that observed frequencies agree too well with expected frequencies. To examine such situations, we can determine whether the computed value of $\chi^2$ is less than $\chi^2_{0.95}$, in which case the agreement is "too good" at the 0.05 level of significance.
