Cohen Effect Size and Statistical Significance

Statistical significance is important, but it is not the most important consideration when evaluating results, because statistical significance tells only the likelihood (probability) that the observed results are due to chance alone. It is therefore important to also consider the effect size when statistically significant results are obtained.

Effect size is a quantitative measure of some phenomenon. For example,

  • Correlation between two variables
  • The regression coefficients ($\beta_1, \beta_2, \cdots$) of a regression model
  • The mean difference between two or more groups
  • The risk with which something happens

The effect size plays an important role in power analysis, sample size planning, and meta-analysis.

Since the effect size indicates how strong (or important) the results are, the effect size should also be reported whenever the statistical significance of an inferential test is reported.

For the difference in means, the pooled standard deviation (also called the combined standard deviation, obtained from the pooled variance) is used to standardize the effect size.
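
One common form of the pooled standard deviation (not spelled out in the text above), for two groups of sizes $n_1$ and $n_2$ with sample standard deviations $SD_1$ and $SD_2$, is

\[SD_{pooled}=\sqrt{\frac{(n_1-1)SD_1^2+(n_2-1)SD_2^2}{n_1+n_2-2}}\]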

The Cohen Effect Size for the Difference in Means

The effect size ($d$) for the difference in means by Cohen is

\[d=\frac{\text{mean of group 1}-\text{mean of group 2}}{SD_{pooled}}\]

Cohen provided rough guidelines for interpreting the effect size:

  • $d = 0.2$ is considered a small effect,
  • $d = 0.5$ a medium effect, and
  • $d = 0.8$ a large effect.
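
As an illustration, the following is a minimal Python sketch of Cohen's $d$ for two independent groups using the pooled standard deviation; the function name and the example data are hypothetical.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    # Pooled variance: the two sample variances weighted by their degrees of freedom
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical example data for two groups
group_a = [23, 25, 28, 30, 27, 26]
group_b = [20, 22, 24, 25, 21, 23]
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")  # about 1.8 for these made-up data, a large effect
```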

Note that statistical significance is not the same as the effect size. The statistical significance tells how likely it is that the result is due to chance, while effect size tells how important the result is.

Also note that statistical significance is not the same as economic, human, or scientific significance.

For the effect size of the dependent sample $t$-test, see the section on the effect size for the dependent sample $t$-test below.



Effect Size Definition, Formula, Interpretation (2014)

Effect Size Definition

The Effect Size definition: an effect size is a measure of the strength of a phenomenon, conveying the estimated magnitude of a relationship without making any statement about the true relationship. Effect size measures play an important role in meta-analysis and statistical power analysis. Reporting the effect size in theses, reports, or research papers is therefore good practice, especially when presenting empirical results/findings, because it measures the practical importance of a significant finding. Simply put, the effect size is a way of quantifying the size of the difference between two groups.

Effect size is usually computed after rejecting the null hypothesis in a statistical hypothesis testing procedure, so if the null hypothesis is not rejected, the effect size has little meaning.

There are different formulas for different statistical tests to measure the effect size. In general, the effect size can be computed in two ways, as summarized in the sketch below:

  1. As the standardized difference between two means
  2. As the effect size correlation (the correlation between the independent variable classification and the individual scores on the dependent variable)
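
In symbols, the two approaches can be sketched as follows; the first form uses the pooled standard deviation introduced earlier, and the second writes the effect size correlation in terms of the $t$ statistic and its degrees of freedom, the same relationship used for $r^2$ later in this post:

\[d=\frac{\bar{x}_1-\bar{x}_2}{SD_{pooled}}, \qquad r=\sqrt{\frac{t^2}{t^2+df}}\]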

The Effect Size for the Dependent Sample T-test

The effect size for the paired sample $t$-test (dependent sample $t$-test), known as Cohen's $d$, ranges from $-\infty$ to $\infty$ and evaluates, in standard deviation units, the degree to which the mean of the difference scores departs from zero. If the value of $d$ equals 0, the mean of the difference scores is equal to zero. The farther $d$ is from 0, the larger the effect size.

Effect Size Formula for Dependent Sample T-test

The effect size for the dependent sample t-test can be computed by using

\[d=\frac{\overline{D}-\mu_D}{SD_D}\]

Note that both the mean of the differences ($\overline{D}$) and its standard deviation ($SD_D$) are reported in the SPSS output under Paired Differences.

For an effect size of $d = 2.56$, the sample mean difference and the population mean difference are 2.56 standard deviations apart. The sign does not affect the size of an effect, i.e., $-2.56$ and $2.56$ are equivalent effect sizes.

The $d$ statistic can also be computed from the obtained $t$ value and the number of paired observations (Ray & Shadish, 1996):

\[d=\frac{t}{\sqrt{N}}\]

The value of $d$ is usually categorized as small, medium, or large. With Cohen's $d$:

  • $d = 0.2$ to $0.5$: small effect
  • $d = 0.5$ to $0.8$: medium effect
  • $d = 0.8$ and higher: large effect
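
As a sketch, the following Python code (assuming hypothetical paired scores and using SciPy's ttest_rel for the paired $t$ statistic) computes $d$ both from the difference scores and from $t/\sqrt{N}$; the two routes give the same value.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores (e.g., the same subjects measured before and after)
pre = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0])
post = np.array([14.0, 18.0, 13.0, 17.0, 15.0, 19.0, 14.0, 18.0])

diff = post - pre
n = len(diff)

# d = (D_bar - mu_D) / SD_D, with mu_D = 0 under the usual null hypothesis
d_from_diff = diff.mean() / diff.std(ddof=1)

# Equivalent computation from the paired t statistic: d = t / sqrt(N)
t_stat, p_value = stats.ttest_rel(post, pre)
d_from_t = t_stat / np.sqrt(n)

print(f"d (from difference scores) = {d_from_diff:.3f}")
print(f"d (from t / sqrt(N))       = {d_from_t:.3f}")
```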

Calculating Effect Size from $r^2$

Another method of computing the effect size is with r-squared ($r^2$), i.e.

\[r^2=\frac{t^2}{t^2+df}\]

Effect size is categorized into small, medium, and large effects as

  • $r^2=0.01$, small effect
  • $r^2=0.09$, medium effect
  • $r^2=0.25$, large effect.
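
Continuing the hypothetical paired example above, here is a short Python sketch of the $r^2$ computation from the obtained $t$ value and its degrees of freedom.

```python
import numpy as np
from scipy import stats

# Reusing the hypothetical paired scores from the previous sketch
pre = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0])
post = np.array([14.0, 18.0, 13.0, 17.0, 15.0, 19.0, 14.0, 18.0])

t_stat, _ = stats.ttest_rel(post, pre)
df = len(pre) - 1  # degrees of freedom for a paired t-test

# r^2 = t^2 / (t^2 + df)
r_squared = t_stat**2 / (t_stat**2 + df)
print(f"r-squared = {r_squared:.3f}")  # close to 1 here, i.e., a large effect
```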

The non-significant results of the $t$-test indicate that we failed to reject the hypothesis that the two conditions have equal means in the population. A larger value of $r^2$ indicates a larger effect (effect size), while a large effect size with a non-significant result suggests that the study should be replicated with a larger sample size.

In short, a larger effect size computed from either method indicates a larger effect, meaning that the means are likely very different.

Choosing the Right Effect Size Measure

The appropriate effect size measure depends on the type of analysis being conducted (for example, correlation, group comparison, etc.) and the measurement scale of the data (continuous, binary, nominal, ordinal, interval, ratio, etc.). It is always good practice to report both the effect size and statistical significance (p-value) to provide a more complete picture of your findings.

In conclusion, effect size is a crucial concept in interpreting statistical results. By understanding and reporting effect size, one can gain a deeper understanding of the practical significance of the research findings and contribute to a more comprehensive understanding of the field of study.

References:

  • Ray, J. W., & Shadish, W. R. (1996). How interchangeable are different estimators of effect size? Journal of Consulting and Clinical Psychology, 64, 1316–1325. (See also the correction in Journal of Consulting and Clinical Psychology, 66, 532, 1998.)
  • Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17(2), 137–152. doi:10.1037/a0028086
