This post covers Type I and Type II errors in hypothesis testing, with examples.

Hypothesis testing helps us determine whether results are statistically significant or occurred by chance. Because hypothesis testing is based on probability, there is always a chance of making the wrong decision about the null hypothesis (a hypothesis about the population). This means there are two types of errors, Type I and Type II, that can be made when drawing a conclusion.

### Errors in Statistical Decision-Making

To understand the errors in statistical decision-making, we first need to see the step-by-step process of hypothesis testing:

- State the null hypothesis and the alternative hypothesis.
- Choose a level of significance (the probability of a Type I error you are willing to accept).
- Compute the required test statistic.
- Find the critical value or p-value.
- Reject or fail to reject the null hypothesis.
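These steps can be sketched in code. Below is a minimal illustration using SciPy's one-sample t-test; all of the numbers here (the hypothesized mean, the sample) are made up for the sake of the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Step 1: H0: the population mean is 100; Ha: it is not 100.
mu_0 = 100.0

# Step 2: choose a level of significance.
alpha = 0.05

# Hypothetical sample drawn from a population whose true mean is 95.
sample = rng.normal(loc=95.0, scale=10.0, size=50)

# Steps 3-4: compute the test statistic and its p-value.
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)

# Step 5: reject or fail to reject the null hypothesis.
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"t = {t_stat:.2f}, p = {p_value:.4g}: {decision}")
```

The decision rule in the last step is exactly the p-value-versus-alpha comparison described above.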

When you decide to reject or fail to reject the null hypothesis, there are four possible outcomes: two represent correct decisions, and two represent errors. You can:

- Reject the null hypothesis when it is actually true (Type I error).
- Reject the null hypothesis when it is actually false (correct).
- Fail to reject the null hypothesis when it is actually true (correct).
- Fail to reject the null hypothesis when it is actually false (Type II error).

These four possibilities can be presented in a truth table:

| Decision | $H_0$ is true | $H_0$ is false |
| --- | --- | --- |
| Reject $H_0$ | Type I error (false positive) | Correct decision |
| Fail to reject $H_0$ | Correct decision | Type II error (false negative) |

### Type I and Type II Errors Examples: Clinical Trial

To understand Type I and Type II errors, consider an example from clinical trials. In clinical trials, hypothesis tests are often used to determine whether a new medicine leads to better outcomes in patients. Imagine you are a data professional working at a pharmaceutical company. The company invents a new medicine to treat the common cold and tests it on a random sample of 200 people with cold symptoms. Without medicine, the typical person experiences cold symptoms for 7.5 days. The average recovery time for people who take the medicine is 6.2 days.

You conduct a hypothesis test to determine if the effect of the medicine on recovery time is statistically significant, or due to chance.

In this case:

- Your null hypothesis ($H_0$) is that the medicine has no effect.
- Your alternative hypothesis ($H_a$) is that the medicine is effective.
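This test can be sketched in Python. The post only gives summary numbers (n = 200, average recovery of 6.2 days versus 7.5 without medicine), so the patient-level recovery times below are simulated for illustration, and the standard deviation of 2.5 days is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

baseline_mean = 7.5   # typical recovery time without the medicine (days)
n_patients = 200

# Simulated recovery times for the treated group, centered near 6.2 days.
# The spread (2.5 days) is an assumption made for this sketch.
recovery = rng.normal(loc=6.2, scale=2.5, size=n_patients)

# H0: the medicine has no effect (mean recovery is 7.5 days).
# Ha: the medicine is effective (mean recovery is less than 7.5 days),
# so a one-sided test is used here.
t_stat, p_value = stats.ttest_1samp(recovery, popmean=baseline_mean,
                                    alternative="less")

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```

With a difference this large relative to the assumed spread, the test rejects the null hypothesis, which matches the intuition that a 1.3-day improvement across 200 patients is unlikely to be chance.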

#### Type I Error

A Type I error (also known as a false positive) occurs when a true null hypothesis is rejected. In other words, you conclude that the result is statistically significant when in fact it occurred by chance. Suppose that in your clinical trial the null hypothesis is true, meaning the medicine has no effect. If you make a Type I error and reject the null hypothesis, you incorrectly conclude that the medicine relieves cold symptoms when it is actually ineffective.

The probability of making a Type I error is represented by $\alpha$ (the level of significance). Typically, a 0.05 (or 5%) significance level is used. A significance level of 5% means you are willing to accept a 5% chance of being wrong when you reject the null hypothesis.

#### Reducing the Risk of a Type I Error

To reduce your chances of making a Type I error, choose a lower significance level. For example, you can choose a significance level of 1% instead of the standard 5%, which reduces the chance of a Type I error from 5% to 1%.
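A quick simulation illustrates this: when the null hypothesis is true, the long-run false-positive rate tracks whichever significance level you choose. The data-generating setup below (a mean-zero population, samples of 30) is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n = 5000, 30

false_pos = {0.05: 0, 0.01: 0}
for _ in range(n_trials):
    # H0 is true here: the data really do come from a mean-zero population,
    # so every rejection is a Type I error.
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    p = stats.ttest_1samp(sample, popmean=0.0).pvalue
    for alpha in false_pos:
        if p < alpha:
            false_pos[alpha] += 1

for alpha, count in sorted(false_pos.items()):
    print(f"alpha = {alpha}: observed Type I error rate ~ {count / n_trials:.3f}")
```

The observed false-positive rates come out near 1% and 5%, matching the chosen significance levels.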

#### Type II Error

A Type II error (also known as a false negative) occurs when we fail to reject a null hypothesis that is false. In other words, you conclude that the result occurred by chance when in fact it didn't. For example, in the clinical trial, if the null hypothesis is false, the medicine is effective. If you make a Type II error and fail to reject the null hypothesis, you incorrectly conclude that the medicine is ineffective when in reality it relieves cold symptoms.

The probability of making a Type II error is represented by $\beta$, and it is related to the power of a hypothesis test (power $= 1 - \beta$). Power is the likelihood that a test correctly detects a real effect when there is one.
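The relationship power $= 1 - \beta$ can be illustrated with a Monte Carlo sketch: simulate many studies in which a real effect exists (here a made-up shift of 0.5 standard deviations, with samples of 30) and count how often the test misses it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n, alpha = 2000, 30, 0.05

misses = 0
for _ in range(n_trials):
    # Ha is true here: the population mean really is shifted by 0.5 SD,
    # so every failure to reject H0 is a Type II error.
    sample = rng.normal(loc=0.5, scale=1.0, size=n)
    if stats.ttest_1samp(sample, popmean=0.0).pvalue >= alpha:
        misses += 1

beta = misses / n_trials
print(f"beta ~ {beta:.3f}, power = 1 - beta ~ {1 - beta:.3f}")
```

The fraction of simulated studies that miss the real effect estimates $\beta$, and the fraction that detect it estimates the power.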

**Note** that reducing the risk of making a Type I error makes it more likely that you will make a Type II error (a false negative).

#### Reducing the Risk of a Type II Error

You can reduce the risk of making a Type II error by ensuring your test has enough power. In data work, power is usually set at 0.80 (80%). The higher the statistical power, the lower the probability of making a Type II error. To increase power, you can increase the sample size or the significance level.
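The effect of sample size on power can be sketched by simulation (again with a made-up effect of 0.5 standard deviations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n_trials, effect = 0.05, 2000, 0.5

# Estimate power at several sample sizes by counting how often the test
# detects the (real) 0.5-SD effect across simulated studies.
powers = {}
for n in (10, 30, 100):
    hits = sum(
        stats.ttest_1samp(rng.normal(loc=effect, scale=1.0, size=n),
                          popmean=0.0).pvalue < alpha
        for _ in range(n_trials)
    )
    powers[n] = hits / n_trials
    print(f"n = {n:>3}: estimated power ~ {powers[n]:.2f}")
```

Estimated power climbs steadily as the sample size grows, which is why underpowered (small-sample) studies are especially prone to Type II errors.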

### Potential Risks of Type I and Type II Errors

As a data professional, it is important to be aware of the potential risks involved in making the two types of errors.

- A **Type I error** means rejecting a true null hypothesis. In general, making a Type I error often leads to implementing changes that are unnecessary and ineffective, which waste valuable time and resources.

For example, if you make a Type I error in your clinical trial, the new medicine will be considered effective even though it is ineffective. Based on this incorrect conclusion, the ineffective medication may be prescribed to a large number of people, while other treatment options may be rejected in favor of the new medicine.

- A **Type II error** means failing to reject a false null hypothesis. In general, making a Type II error may result in missed opportunities for positive change and innovation. A lack of innovation can be costly for people and organizations.

For example, if you make a Type II error in your clinical trial, the new medicine will be considered ineffective even though it’s effective. This means that a useful medication may not reach a large number of people who could benefit from it.

In summary, as a data professional, it helps to be aware of the potential errors built into hypothesis testing and how they can affect final decisions. Depending on the situation, you may choose to minimize the risk of either a Type I or a Type II error. Ultimately, it is your responsibility to determine which type of error is riskier based on the goals of your analysis.