Type I and Type II Errors in Statistics

In hypothesis testing, two types of errors can be made: Type I and Type II errors.

Type I and Type II Errors

  • A Type I error occurs when you reject a null hypothesis that is actually true (remember that when the null hypothesis is true, you hope to retain it). A Type I error is a false positive.
    $\alpha = P(\text{Type I error}) = P(\text{rejecting the null hypothesis when it is true})$
    A Type I error is generally considered more serious than a Type II error, and is therefore the more important one to guard against.
  • A Type II error occurs when you fail to reject a false null hypothesis (remember that when the null hypothesis is false you hope to reject it). Type II error is a false negative error.
    $\beta = P(\text{Type II error}) = P(\text{failing to reject the null hypothesis when it is false})$
  • The best way to keep the alpha level low (i.e., to have a small chance of making a Type I error) while still having a good chance of rejecting the null hypothesis when it is false (i.e., a small chance of making a Type II error) is to increase the sample size; see the simulation sketch after this list.
  • The key to hypothesis testing is to use a large sample in your research study rather than a small sample!
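To make the interplay between $\alpha$, $\beta$, and sample size concrete, here is a minimal Monte Carlo sketch for a one-sample t-test. The true means (0 and 0.3), the $\alpha = 0.05$ level, and the two sample sizes are illustrative assumptions, not values taken from this article.

```python
# Minimal Monte Carlo sketch (illustrative assumptions throughout):
# estimate the Type I error rate (alpha) and Type II error rate (beta)
# of a one-sample t-test of H0: mu = 0 at two sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05        # chosen significance level
n_sims = 10_000     # number of simulated studies per scenario

def rejection_rate(true_mean, n):
    """Fraction of simulated studies in which H0: mu = 0 is rejected."""
    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=true_mean, scale=1.0, size=n)
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        rejections += p_value < alpha
    return rejections / n_sims

for n in (20, 100):
    type_1 = rejection_rate(true_mean=0.0, n=n)    # H0 true: rejection rate estimates alpha
    beta = 1 - rejection_rate(true_mean=0.3, n=n)  # H0 false: non-rejection rate estimates beta
    print(f"n={n:3d}  Type I rate ~ {type_1:.3f}  Type II rate (beta) ~ {beta:.3f}")
```

Increasing the sample size leaves the Type I rate near the chosen $\alpha$ but shrinks $\beta$, which is exactly the point made in the last two bullets above.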

If you do reject your null hypothesis, it is also essential to determine whether the size of the effect is practically significant; the sketch below illustrates one way to check this.
The hypothesis test procedure is therefore adjusted so that there is a guaranteed “low” probability of rejecting the null hypothesis wrongly; this probability is never zero.
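As a rough illustration (not from the original article), the sketch below reports an effect size, Cohen's d, alongside the p-value of a two-sample t-test: with a large sample, a tiny and practically unimportant difference can still yield p < 0.05. The simulated groups and their means are assumptions chosen for illustration.

```python
# Hedged sketch: after rejecting H0, also report an effect size (Cohen's d)
# so that statistical significance is not mistaken for practical significance.
# The simulated groups below are illustrative, not data from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=2000)  # hypothetical control group
group_b = rng.normal(loc=10.2, scale=2.0, size=2000)  # hypothetical treatment group

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d using a pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
# With samples this large, even d near 0.1 will often give p < 0.05,
# i.e. a result that is statistically significant but practically negligible.
```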

To summarize: falsely rejecting a true null hypothesis is a Type I error, and falsely retaining (accepting) a false null hypothesis is a Type II error.

Read more about Level of significance in Statistics
