This post is a quick overview of probability. Probability theory is the branch of mathematics that deals with the occurrence of random events. It provides a framework for quantifying uncertainty and making predictions based on available information.
Overview of Probability
Probability can be viewed from two points of view: objective and subjective.
Objective probability requires a computational formula, while subjective probability is derived from experience, judgment, or prior knowledge about the event. In this post, I will focus on an overview of probability and the laws of probability.
Objective Probability
Objective probability has the following definitions:
Classical and a Priori Definition
$$P(A) = \frac{\text{Number of sample points favorable to the event }A}{\text{Number of sample points in the sample space}} = \frac{m}{n} = \frac{n(A)}{n(S)}$$
The Relative Frequency or a Posteriori Definition
$$P(A) = \lim\limits_{n\rightarrow \infty} \frac{m}{n}$$
This definition assumes that as $n$ increases to infinity, the ratio $\frac{m}{n}$ becomes stable, where $m$ is the number of times the favorable event occurs in $n$ trials.
When an experiment is repeated under the same conditions many times, the relative frequency of a favorable event becomes stable. For example, suppose we toss a coin 10 times, then 100 times, then 1,000 times, then 10,000 times, then 100,000 times, then 1,000,000 times, and so on, and record the number of heads each time. Let $H$ (the occurrence of a head) be our favorable event; the probability of a favorable event is also called the probability of success. The definition then says that approximately 50% of one million tosses will be heads. This definition is also called the empirical or statistical definition of probability, and it is more useful in practical problems, such as finding the winning percentage of a team.
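The stabilization of the relative frequency can be illustrated with a short simulation (a minimal sketch using Python's `random` module; the seed value is arbitrary, chosen only for reproducibility):

```python
import random

random.seed(42)  # arbitrary seed so the run is reproducible

# Estimate P(H) by the relative frequency m/n for increasing n
for n in [10, 100, 1000, 10000, 100000]:
    # count heads: each toss is heads when the uniform draw falls below 0.5
    m = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:6d}, m/n = {m / n:.4f}")
```

As $n$ grows, the printed ratio $m/n$ settles close to 0.5, matching the a posteriori definition.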
The Axiomatic Definition of Probability
An axiom is a statement that is accepted as true without proof and serves as a starting point for further reasoning.
The axiomatic definition of probability states that if a sample space $S$ consists of sample points $E_1, E_2, \cdots, E_n$, then a real number $P(E_i)$ is assigned to each sample point such that the following conditions are satisfied:
- For any event $E_i$, $0 \le P(E_i) \le 1$
- $P(S) = 1$, sure event
- If $A$ and $B$ are two mutually exclusive events, then $P(A\cup B) = P(A) + P(B)$
Laws of Probability
For computing the probability of two or more events, the following laws of probability may be used.
Law of Addition
- For mutually exclusive events: $P(A\cup B) = P(A) + P(B)$
- For non-mutually exclusive events: $P(A\cup B) = P(A) + P(B) - P(A\cap B)$
If $A$, $B$, and $C$ are any three events in a sample space $S$, then
$P(A\cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B\cap C) - P(A \cap C) + P(A\cap B\cap C)$
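The law of addition can be verified numerically. The sketch below uses a single die roll with two hypothetical events of my choosing (even numbers and numbers greater than 3), representing events as Python sets and probabilities as exact fractions:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # sample space for one die roll
A = {2, 4, 6}            # event: an even number
B = {4, 5, 6}            # event: a number greater than 3

def prob(event):
    """Classical probability: n(event) / n(S)."""
    return Fraction(len(event), len(S))

# Law of addition for non-mutually exclusive events:
# P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)
print(lhs, rhs)  # both sides equal 2/3
```

Because $A$ and $B$ overlap in $\{4, 6\}$, simply adding $P(A)$ and $P(B)$ would double-count those outcomes; subtracting $P(A\cap B)$ corrects for this.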
Law of Multiplication
For independent events $A$ and $B$: $P(A \text{ and } B) = P(A) \times P(B)$
For dependent events $A$ and $B$: $P(A \text { and } B) = P(A) \times P(B|A)$ (where $P(B|A)$ is the conditional probability of $B$ given $A$)
Law of Complementation
If $A$ is an event and $A'$ is the complement of that event, then
$P(A') = 1-P(A)$. Note that $P(A) + P(A') = 1$.
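The law of complementation is especially handy for "at least one" questions, where the complement ("none") is easier to compute. A small sketch for a hypothetical example, the probability of at least one six in four rolls of a die:

```python
from fractions import Fraction

# P(at least one six in four rolls) = 1 - P(no six in any of the four rolls)
p_no_six_one_roll = Fraction(5, 6)
p_no_six_four_rolls = p_no_six_one_roll ** 4  # rolls are independent
p_at_least_one_six = 1 - p_no_six_four_rolls
print(p_at_least_one_six)  # 671/1296, roughly 0.518
```

Computing the event directly would require summing over exactly one, two, three, or four sixes; the complement reduces it to a single subtraction.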
Probability of sub-event
If $A$ and $B$ are two events in such a way that $A \subset B$, then $P(A) \le P(B)$
If $A$ and $B$ are any two events defined in a sample space $S$, then
$P(A\cap B') = P(A) - P(A\cap B)$
Conditional Probability
$P(A|B) = \frac{P(A\cap B)}{P(B)}$ or $P(B|A) = \frac{P(A\cap B)}{P(A)}$.
Example of Conditional Probability
If we throw a die, what is the probability of getting a 6? It is $\frac{1}{6}$. Now, what is the probability of a 6 given that the outcome is an even number?
When a die is rolled, the sample space is $S=\{1, 2, 3, 4, 5, 6\}$. Let $A=\{6\}$ be the event of getting a 6, and let $B=\{2, 4, 6\}$ denote the event of an even number. Then
$P(A|B) = \frac{P(A\cap B)}{P(B)} = \frac{1/6}{3/6} = \frac{1}{3}$
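The die example above can be checked directly from the conditional probability formula, with events represented as Python sets (a minimal sketch; the helper `prob` is my own naming):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # sample space of a die roll
A = {6}                  # event: a six
B = {2, 4, 6}            # event: an even number

def prob(event):
    """Classical probability: n(event) / n(S)."""
    return Fraction(len(event), len(S))

# Conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_A_given_B = prob(A & B) / prob(B)
print(p_A_given_B)  # 1/3
```

Conditioning on $B$ effectively shrinks the sample space to the three even outcomes, of which exactly one is a six.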
Law of Total Probability
If events $B_1, B_2, \cdots, B_n$ are mutually exclusive and exhaustive, then for any event $A$: $P(A) = P(A|B_1) \times P(B_1) + P(A|B_2) \times P(B_2) + \cdots + P(A|B_n) \times P(B_n)$
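The law of total probability can be sketched with a hypothetical factory example (the machine shares and defect rates below are invented numbers for illustration): machines $B_1$ and $B_2$ produce 60% and 40% of the output, and 2% and 5% of their respective items are defective. Let $A$ be the event that a randomly chosen item is defective.

```python
from fractions import Fraction

# Hypothetical figures: P(B_i) are the machine shares (exhaustive, they sum to 1),
# p_A_given_B[i] is the defect rate P(A | B_i) of machine i.
p_B = [Fraction(60, 100), Fraction(40, 100)]
p_A_given_B = [Fraction(2, 100), Fraction(5, 100)]

# Law of total probability: P(A) = sum over i of P(A|B_i) * P(B_i)
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
print(p_A)  # 4/125, i.e. 0.032
```

The overall defect rate is a weighted average of the per-machine defect rates, weighted by how much each machine produces.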
Bayes' Theorem
Bayes' theorem is used to update probabilities based on new information.
If $A_1, A_2, \cdots, A_k$ are mutually exclusive and exhaustive events in a sample space $S$, and $B$ is any event with $P(B) > 0$, then
$P(A_i|B) = \frac{P(A_i)\, P(B|A_i)}{\sum_{j=1}^{k} P(A_j)\, P(B|A_j)}, \text{ for } i = 1, 2, 3, \cdots, k$
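A minimal sketch of Bayes' theorem, using a hypothetical factory scenario with invented numbers: machines $A_1$ and $A_2$ produce 60% and 40% of the output with defect rates 2% and 5%, and we observe that an item is defective (event $B$). Which machine most likely produced it?

```python
from fractions import Fraction

p_A = [Fraction(3, 5), Fraction(2, 5)]            # priors P(A_i): machine shares
p_B_given_A = [Fraction(1, 50), Fraction(1, 20)]  # likelihoods P(B|A_i): defect rates

# Denominator is P(B) by the law of total probability
denom = sum(pa * pb for pa, pb in zip(p_A, p_B_given_A))

# Bayes' theorem: posterior P(A_i|B) = P(A_i) P(B|A_i) / P(B)
posteriors = [pa * pb / denom for pa, pb in zip(p_A, p_B_given_A)]
print(posteriors)  # [Fraction(3, 8), Fraction(5, 8)]
```

Even though machine $A_2$ produces less of the output, its higher defect rate makes it the more likely source of a defective item: the posterior probabilities are $3/8$ and $5/8$.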
https://rfaqs.com, https://gmstat.com