Cohen Effect Size and Statistical Significance

Statistical significance is important but not the most important consideration in evaluating results, because statistical significance tells only the likelihood (probability) that the observed results are due to chance alone. Therefore, it is important to also consider the effect size when statistically significant results are obtained.

Effect size is a quantitative measure of the magnitude of some phenomenon. For example:

  • Correlation between two variables
  • The regression coefficients ($\beta_1, \beta_2, \cdots$) in a regression model
  • The mean difference between two or more groups
  • The risk with which something happens

The effect size plays an important role in power analysis, sample size planning, and meta-analysis.

Effect size indicates how strong (or important) our results are. Therefore, when reporting the statistical significance of an inferential test, the effect size should also be reported.

For the difference in means, the pooled standard deviation (also called combined standard deviation, obtained from pooled variance) is used to indicate the effect size.

The Cohen Effect Size for the Difference in Means

The effect size ($d$) for the difference in means, as given by Cohen, is

$d=\frac{\text{mean of group 1} - \text{mean of group 2}}{SD_{pooled}}$

Cohen provided rough guidelines for interpreting the effect size:

  • If $d=0.2$, the effect size is considered small.
  • If $d=0.5$, the effect size is considered medium.
  • If $d=0.8$, the effect size is considered large.
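As a minimal sketch of this calculation in Python (the function name `cohens_d` and the sample data are illustrative, not from the original post):

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: standardized difference between two independent group means."""
    n1, n2 = len(group1), len(group2)
    # Pooled variance combines the two sample variances (ddof=1 gives n-1 denominators)
    s2_pooled = ((n1 - 1) * np.var(group1, ddof=1) +
                 (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(s2_pooled)

# Hypothetical data for illustration
group1 = [5, 8, 7, 6, 9, 7]
group2 = [8, 10, 7, 11, 9, 12, 14, 9]
print(round(cohens_d(group1, group2), 3))  # -1.532, a large effect by Cohen's guidelines
```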

Note that statistical significance is not the same as the effect size. The statistical significance tells how likely it is that the result is due to chance, while effect size tells how important the result is.

Also note that statistical significance is not equal to economic, human, or scientific significance.

For the effect size of the dependent sample $t$-test, see the post Effect Size for the Dependent Sample t-test.

See the short video on Effect Size and Statistical Significance

Performing Chi-Square Test from Crosstabs in SPSS

In this post, we will learn about performing the Chi-Square test in SPSS statistical software. For this purpose, the Crosstabs procedure (under Descriptive Statistics in the Analyze menu of SPSS) is used to create contingency tables, also known as two-way frequency tables or cross-tabulations, which describe the association between two categorical variables.

In a crosstab, the categories of one variable determine the rows of the contingency table, and the categories of the other variable determine the columns. The contingency table dimensions can be reported as $R\times C$, where $R$ is the number of categories of the row variable and $C$ is the number of categories of the column variable. Additionally, a “square” crosstab is one in which the row and column variables have the same number of categories. Tables of dimensions $2 \times 2$, $3\times 3$, $4\times 4$, etc., are all square crosstabs.

Performing Chi-Square Test in SPSS

To start performing the Chi-Square test on a cross-tabulation in SPSS, first click Analyze from the main menu, then Descriptive Statistics, and then Crosstabs, as shown in the figure below.

Performing Chi Square Test Crosstabs in SPSS

As an example, we use the “satisf.sav” data file that is already available in the SPSS installation folder. Suppose we are interested in finding the relationship between the “Shopping Frequency” and “Made Purchase” variables. For this purpose, move one of the variables from the left pane to the Row(s) box and the other to the Column(s) box. Here, we take “Shopping Frequency” as the row variable and “Made Purchase” as the column variable. Pressing OK at this point would give the contingency table only.

Crosstabs in SPSS

The ROW(S) box is used to enter one or more variables to be used in the cross-table and Chi-Square statistics; similarly, the COLUMN(S) box is used to enter one or more column variables. Note: at least one row and one column variable must be specified.

The LAYER box is used when you need to find the association between three or more variables. When a layer variable is specified, the crosstab between the row and column variables will be created at each level of the layer variable. You can have multiple layers of variables by specifying the first layer variable and then clicking Next to specify the second layer variable. Alternatively, you can try out multiple variables as single layers, one at a time, by putting them all in the Layer 1 of 1 box.

The STATISTICS button will lead to a dialog box that contains different inferential statistics for finding the association between categorical variables.

The CELLS button leads to a dialog box that controls which output is displayed in each cell of the crosstab, such as the observed frequency, expected frequency, percentages, residuals, etc., as shown below.

Crosstabs cell display

To perform the Chi-Square test on the selected variables, click the “Statistics” button and tick the “Chi-square” option at the top-left of the dialog box shown below. Note that the Chi-square check box must be ticked; otherwise, only a cross-table will be displayed.

Crosstabs Chi-Square Statistics in SPSS

Press the “Continue” button and then the OK button. The output window will contain the cross-tabulation results and the Chi-Square statistics, as shown below.

Crosstabs output SPSS windows

The Chi-Square results indicate an association between the categories of the “Shopping Frequency” variable and the “Made Purchase” variable, since the p-value is smaller than, say, a 0.01 level of significance.

For a video lecture on contingency tables and the Chi-Square statistic, see the video lectures.

See another video about the Contingency Table and Chi-Square Goodness of Fit Test

Measure of Association: Contingency Table (2019)

The contingency table (also called a two-way frequency table, crosstab, or cross-tabulation) is used to find the relationship (association or dependence, i.e., a measure of association) between two or more variables measured on the nominal or ordinal measurement scale.

Contingency Table: A Measure of Association

A contingency table contains $R$ rows and $C$ columns, and its order is $R \times C$. There should be a minimum of two categories in the row variable and two categories in the column variable (not counting the row and column headers).

A cross table is created by listing all the categories (groups or levels) of one variable as the rows of the table and the categories (groups or levels) of the other (second) variable as the columns, and then recording the joint (cell) frequency (or count) for each cell. The cell frequencies are totaled across both the rows and the columns; these totals (sums) are called marginal frequencies. The sum (total) of the column sums (or row sums) is called the Grand Total and must be equal to $N$. The frequency (count) in each cell is the observed frequency.

The next step in calculating the Chi-square statistic is the computation of the expected frequency for each cell of the contingency table. The expected value for each cell is computed by multiplying the marginal frequency of its row by the marginal frequency of its column (row total times column total) and then dividing by the total number of observations (Grand Total, $N$). It can be formulated as

$\text{Expected Frequency} = \frac{\text{Row Total} \times \text{Column Total}}{\text{Grand Total}}$

The same procedure is used to compute the expected frequencies for all the cells of the contingency table.
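As a small sketch in Python (the observed counts here are hypothetical), the expected frequencies for all cells fall out of the marginal totals at once:

```python
import numpy as np

# Hypothetical 2x3 contingency table of observed frequencies
observed = np.array([[20, 30, 10],
                     [30, 15, 45]])

row_totals = observed.sum(axis=1)   # marginal frequencies of the rows
col_totals = observed.sum(axis=0)   # marginal frequencies of the columns
grand_total = observed.sum()        # N, the Grand Total

# Expected frequency per cell = (row total * column total) / grand total;
# the outer product computes this for every cell in one step
expected = np.outer(row_totals, col_totals) / grand_total
print(expected)
```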

The next step is the computation of the amount of deviation (error) for each cell. For this purpose, subtract the expected cell frequency from the observed cell frequency for each cell. Each cell's Chi-square contribution is computed by squaring this difference and then dividing the squared difference by the expected frequency of that cell.

Contingency Table Measure of Association

Finally, the aggregate Chi-square statistic is computed by summing these contributions over all cells. The formula is

$$\chi^2=\sum_{i=1}^{R}\sum_{j=1}^{C} \frac{\left(O_{ij}-E_{ij}\right)^2}{E_{ij}}$$

To reach a decision, the critical (table) value of $\chi^2$ is needed, which requires the degrees of freedom and the level of significance. The degrees of freedom for a contingency table are computed as
$$df=(\text{number of rows} - 1)(\text{number of columns} - 1)$$
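Continuing the sketch above (same hypothetical counts), the statistic, its degrees of freedom, and the p-value can be computed step by step, or directly with scipy's chi2_contingency, which implements the same formula:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

observed = np.array([[20, 30, 10],
                     [30, 15, 45]])

# Expected frequencies from the marginal totals
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()

# Chi-square statistic: squared deviations scaled by the expected counts
chi_sq = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p_value = chi2.sf(chi_sq, df)  # upper-tail probability

# scipy performs the identical computation (correction=False disables
# Yates' continuity correction, which applies only to 2x2 tables)
chi_sq2, p2, df2, expected2 = chi2_contingency(observed, correction=False)
print(chi_sq, df, p_value)
```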

For further detail about the contingency table (as a measure of association) and its example about how to compute expected frequencies and Chi-Square statistics, see the video lecture

Student t-test Comparison Test (2015)

In 1908, William Sealy Gosset, publishing his work under the pseudonym “Student”, solved problems associated with inference based on samples drawn from a normally distributed population when the population standard deviation is unknown. He developed the Student t-test and the t-distribution, which can be used to compare two small sets of quantitative data collected independently of one another; in this case, the t-test is called the independent samples t-test (also called the unpaired samples t-test).

The Student t-test is one of the most commonly used statistical techniques for testing hypotheses about the difference between sample means. The Student t-test can be computed just by knowing the means, standard deviations, and numbers of data points in both samples, using the following formula

\[t=\frac{\overline{X}_1-\overline{X}_2 }{\sqrt{s_p^2 (\frac{1}{n_1}+\frac{1}{n_2})}}\]

where $s_p^2$ is the pooled (combined) variance and can be computed as

\[s_p^2=\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}\]

Using this test statistic, we test the null hypothesis $H_0:\mu_1=\mu_2$, which states that both samples come from the same population, at the given “level of significance” or “level of risk”.

If the computed t-statistic from the above formula is greater in absolute value than the critical value (the value from the t-table with $n_1+n_2-2$ degrees of freedom at a given level of significance, say $\alpha=0.05$), the null hypothesis will be rejected; otherwise, we fail to reject the null hypothesis.

Note that the t-distribution is a family of curves indexed by the degrees of freedom (the number of independent observations in the sample minus the number of estimated parameters). As the sample size increases, the t-distribution approaches the bell-shaped normal distribution.
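A quick sketch illustrates this convergence: the two-tailed 5% critical values of $t$ shrink toward the corresponding normal value of 1.96 as the degrees of freedom grow.

```python
from scipy.stats import t, norm

# Two-tailed 5% critical values of t for increasing degrees of freedom
for df in (5, 10, 30, 100):
    print(df, round(t.ppf(0.975, df), 3))    # 2.571, 2.228, 2.042, 1.984
print("normal:", round(norm.ppf(0.975), 3))  # 1.96
```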

Student t-test Example

The production manager wants to compare the number of defective products produced on the day shift with the number on the afternoon shift. A sample of the production from 6 day shifts and 8 afternoon shifts revealed the following numbers of defects. The production manager wants to check, at the 0.05 significance level, whether there is a significant difference in the mean number of defects per shift.

| Shift | Defects |
|---|---|
| Day shift | 5, 8, 7, 6, 9, 7 |
| Afternoon shift | 8, 10, 7, 11, 9, 12, 14, 9 |

Some required calculations for the Student t-test are:

The means of the samples:

$\overline{X}_1=7$, $\overline{X}_2=10$,

Standard deviations of the samples and the pooled variance:

$s_1=1.4142$, $s_2=2.2678$ and $s_p^2=\frac{(6-1) (1.4142)^2+(8-1)(2.2678)^2}{6+8-2}=3.8333$

Step 1: The null and alternative hypotheses are: $H_0:\mu_1=\mu_2$ vs $H_1:\mu_1 \ne \mu_2$

Step 2: Level of significance: $\alpha=0.05$

Step 3: Test Statistics

$\begin{aligned}
t&=\frac{\overline{X}_1-\overline{X}_2 }{\sqrt{s_p^2 (\frac{1}{n_1}+\frac{1}{n_2})}}\\
&=\frac{7-10}{\sqrt{3.8333(\frac{1}{6}+\frac{1}{8})}}=-2.837
\end{aligned}$

Step 4: Critical value or rejection region (reject $H_0$ if the absolute value of the t-statistic calculated in Step 3 is greater than the absolute table value, i.e., $|t_{calculated}|\ge |t_{tabulated}|$). In this example, the tabulated t value is 2.179 (two-tailed) with 12 degrees of freedom at a significance level of 5%.

Step 5: Conclusion: Since the computed value $|-2.837| = 2.837 > 2.179$, we reject $H_0$: the mean number of defects is not the same on the two shifts.
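As a check, the same test can be run with scipy (a sketch using the shift data above); ttest_ind with equal_var=True is exactly the pooled-variance test computed by hand here:

```python
from scipy import stats

day = [5, 8, 7, 6, 9, 7]
afternoon = [8, 10, 7, 11, 9, 12, 14, 9]

# Pooled-variance (equal variances assumed) independent samples t-test
result = stats.ttest_ind(day, afternoon, equal_var=True)
print(round(result.statistic, 3))  # -2.837, matching the hand calculation
print(round(result.pvalue, 3))     # about 0.015, below alpha = 0.05
```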

Different Types of Comparison Tests

  • Independent Samples t-test: This compares the means of two independent groups. For example, you might use this to see if a new fertilizer increases plant growth compared to a control group.
  • Paired Samples t-test: This compares the means from the same group at different times or under various conditions. Imagine testing the same group’s performance on a task before and after training.
  • One-Sample t-test: This compares the mean of a single group to a hypothesized value. For instance, you could use this to see if students’ average exam scores significantly differ from 75%.

The key differences between the comparison tests are summarized below:

| | Independent Samples | Paired Samples | One-Sample |
|---|---|---|---|
| Groups | Independent | Same group at different times | Single group |
| Hypothesis | Means are different | Means are different | Mean is different from a hypothesized value |
| Assumptions | Normally distributed data, equal variances (testable) | Normally distributed differences | Normally distributed data |

Regardless of the type of t-test, all the above comparison tests assess the significance of a difference between means. These tests tell the researcher whether the observed difference is likely due to random chance or reflects a true underlying difference in the populations.
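The three variants map directly onto scipy functions, as the sketch below shows (all data here are hypothetical):

```python
from scipy import stats

group_a = [5, 8, 7, 6, 9, 7]             # one independent group
group_b = [8, 10, 7, 11, 9, 12, 14, 9]   # another independent group
before = [72, 68, 75, 80, 64]            # same subjects, first measurement
after = [75, 70, 78, 83, 66]             # same subjects, second measurement

# Independent samples t-test: two separate groups
print(stats.ttest_ind(group_a, group_b, equal_var=True))
# Paired samples t-test: the same group measured twice
print(stats.ttest_rel(before, after))
# One-sample t-test: compare a single group's mean to a hypothesized value (75)
print(stats.ttest_1samp(before, popmean=75))
```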
