# Difference between a probability value and the significance level?

Basically, in hypothesis testing the goal is to see whether the probability value is less than or equal to the significance level (i.e., whether p ≤ alpha). The significance level is also called the size of the test or the size of the critical region. It is generally specified before any samples are drawn, so that the results obtained do not influence our choice.

• The probability value (also called the p-value) is the probability of obtaining the observed result in your research study, or an even more extreme result, under the assumption that the null hypothesis is true.
• In hypothesis testing, the researcher assumes that the null hypothesis is true and then sees how often the observed finding would occur if this assumption were true (i.e., the researcher determines the p-value).
• The significance level (also called the alpha level) is the cutoff value the researcher selects and then uses to decide when to reject the null hypothesis.
• Most researchers select the significance or alpha level of .05 to use in their research; hence, they reject the null hypothesis when the p-value is less than or equal to .05.
• The key idea of hypothesis testing is that you reject the null hypothesis when the p-value is less than or equal to the significance level of .05.
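The decision rule above can be sketched in code. The example below is a minimal illustration, not a prescribed method: it assumes a one-sample z-test with a known population standard deviation, and the sample values (mean 103, population mean 100, sd 15, n = 100) are made up for demonstration.

```python
import math

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Two-sided p-value for a one-sample z-test (population sd assumed known)."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # P(|Z| >= |z|) under the standard normal, via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05  # significance level, chosen before looking at the data
p = z_test_p_value(sample_mean=103.0, pop_mean=100.0, pop_sd=15.0, n=100)
decision = "reject H0" if p <= alpha else "fail to reject H0"
print(f"p = {p:.4f} -> {decision}")  # here z = 2.0, so p is about .0455 and H0 is rejected
```

The comparison `p <= alpha` is the entire decision rule; everything before it is just computing the p-value under the null.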

# Type I Error

The .05 level has become part of the statistical hypothesis testing culture.

• It is a longstanding convention.
• It reflects a concern over making type I errors (i.e., wanting to avoid the situation where you reject the null when it is true, that is, wanting to avoid “false positive” errors).
• If you set the significance level at .05, then you will only reject a true null hypothesis 5% of the time (i.e., you will only make a type I error 5% of the time) in the long run.
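The long-run 5% claim can be checked by simulation. The sketch below repeatedly draws samples from a population where the null hypothesis is actually true (mean 100, sd 15 — illustrative values), tests each sample at alpha = .05, and counts how often the true null is rejected; the rejection rate should settle near 0.05.

```python
import math
import random

random.seed(0)

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Two-sided p-value for a one-sample z-test (population sd assumed known)."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05
n, trials = 30, 20000
rejections = 0
for _ in range(trials):
    # Draw a sample with the null hypothesis true: the population mean really is 100.
    sample = [random.gauss(100.0, 15.0) for _ in range(n)]
    p = z_test_p_value(sum(sample) / n, 100.0, 15.0, n)
    if p <= alpha:
        rejections += 1  # a type I error: rejecting a true null

print(f"long-run type I error rate: {rejections / trials:.3f}")
```

Because the null is true in every trial, each rejection is by definition a type I error, so the printed rate is a direct estimate of the probability the bullet points describe.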