Classical Probability: Example, Definition, and Uses (2017)

Classical probability is the statistical concept that measures the likelihood (probability) of something happening. In the classical sense, it applies to statistical experiments whose outcomes are all equally likely to occur (each outcome has an equal chance of happening). Classical probability is therefore the simplest form of probability: every outcome has the same odds of occurring.

Classical Probability Examples

Example 1: The typical example of classical probability would be rolling a fair die because it is equally probable that the top face of the die will be any of the 6 numbers on the die: 1, 2, 3, 4, 5, or 6.

Example 2: Another example of classical probability would be tossing an unbiased coin. There is an equal probability that your toss will yield either head or tail.

Example 3: In selecting bingo balls, each numbered ball has an equal chance of being chosen.

Example 4: Guessing on a multiple-choice (MCQ) test with (say) four possible answers A, B, C, or D. Each option (choice) has the same odds (equal chances) of being picked (assuming you pick randomly and do not follow any pattern).

Classical Probability Formula

The probability of a simple event is the number of ways the event can happen divided by the total number of possible outcomes.

Mathematically $P(A) = \frac{f}{N}$,

where $P(A)$ means “probability of event A” (event $A$ is whatever event you are looking for, like winning the lottery, that is, the event of interest), $f$ is the number of favorable outcomes (the number of ways the event can happen), and $N$ is the total number of possible outcomes.

For example, the probability of rolling a 2 on a fair die is one out of 6 (1/6). In other words, one favorable outcome (there is only one way to roll a 2 on a fair die) is divided by the six possible outcomes.
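As a quick illustration, the formula can be computed directly. The following is a minimal Python sketch (the function name classical_probability is purely illustrative), applied to the die and playing-card examples used in this post:

```python
# A minimal sketch of the classical probability formula P(A) = f / N.

from fractions import Fraction

def classical_probability(favorable, total):
    """Number of favorable outcomes divided by the total number of equally likely outcomes."""
    return Fraction(favorable, total)

# Rolling a 2 on a fair die: 1 favorable outcome out of 6 possible outcomes.
print(classical_probability(1, 6))   # 1/6

# Drawing one particular card from a standard 52-card deck.
print(classical_probability(1, 52))  # 1/52
```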

Classical probability can be used for very basic events, like rolling a die or tossing a coin; it can also be used whenever the occurrence of all events is equally likely. Choosing a card from a standard deck of cards gives you a 1/52 chance of getting any particular card. On the other hand, figuring out whether it will rain tomorrow or not isn’t something you can work out with this basic type of probability: there might be a 15% chance of rain (and therefore, an 85% chance of it not raining).

Other Examples of Classical Probability

There are many other examples of classical probability problems besides rolling dice. These examples include flipping coins, drawing cards from a deck, guessing on a multiple-choice test, selecting jellybeans from a bag, choosing people for a committee, etc.

When Classical Probability Cannot be Used

Dividing the number of favorable outcomes by the number of possible outcomes is very simplistic, and it isn’t suited to finding probabilities in a lot of situations. For example, natural measurements like weights, heights, and test scores need normal distribution probability charts to calculate probabilities. Most “real life” things aren’t simple events like coins, cards, or dice, so you’ll need something more sophisticated than classical probability theory to solve them.

It is important to note that classical probability is most applicable in situations where:

  • All possible outcomes can be clearly defined and listed.
  • Each outcome has an equal chance of happening.

In conclusion, classical probability provides a foundational understanding of probability concepts, and it has various applications in games of chance, simple random sampling, and other situations where clear, equally likely outcomes can be defined.

For further details, see Introduction to Probability Theory.


Probability Theory: An Introduction (2012)

This post is about probability theory. It will serve as an introduction to the theory of chances.

Probability Theory

Uncertainty is everywhere, i.e., nothing in this world is perfect or 100% certain except the Almighty Allah, the Creator of the Universe. For example, if someone buys 10 lottery tickets out of 500, and each of the 500 tickets is as likely as any other to be drawn for the first prize, then that person has 10 chances out of 500, i.e., a 2% chance, of winning the first prize.
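For the lottery example, the chance works out as
$$P(\text{first prize}) = \frac{10}{500} = 0.02 = 2\%.$$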

Similarly, a decision maker seldom has complete information to make a decision.
So, probability is a measure of the likelihood that something will happen; however, probability cannot predict the number of times that something will occur in the future, so all the known risks involved must be scientifically evaluated. The decisions that affect our daily life are based upon likelihood (probability or chance), not on absolute certainty. The use of probability theory allows a decision maker with only limited information to analyze the risks and minimize the inherent gamble, for example, in marketing a new product or accepting an incoming shipment that possibly contains defective parts.

Probability can be considered as the quantification of uncertainty or likelihood. Probabilities are usually expressed as fractions such as {1/6, 1/2, 8/9} or as decimals such as {0.167, 0.5, 0.889} and can also be presented as percentages such as {16.7%, 50%, 88.9%}.

Types of Probability

Suppose we want to compute the chances (note that we are not predicting here, just measuring the chances) that something will occur in the future. For this purpose, there are three types of probability:

1) Classical Approach or Prior Approach

In the classical probability approach, two assumptions are made:

  • Outcomes are mutually exclusive
  • Outcomes are equally likely

Classical probability is defined as “The number of outcomes favorable to the occurrence of an event divided by the total number of all possible outcomes”.
OR
An experiment results in $n$ equally likely, mutually exclusive, and collectively exhaustive outcomes, of which $m$ are favorable to the occurrence of an event $A$; then the probability of event $A$ is the ratio $\frac{m}{n}$ (P.S. Laplace, 1749-1827).

Symbolically we can write $$P(A) = \frac{m}{n} = \frac{\text{number of favorable outcomes}}{\text{total number of outcomes}}$$
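To make the counting concrete, here is a small Python sketch (the variable names are illustrative) that applies the $\frac{m}{n}$ rule by enumerating the 36 equally likely outcomes of rolling two fair dice and counting those whose faces sum to 7:

```python
# Classical m/n rule by enumeration: P(sum of two fair dice equals 7).

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))            # all 36 ordered pairs (i, j)
favorable = [pair for pair in outcomes if sum(pair) == 7]  # event of interest

print(Fraction(len(favorable), len(outcomes)))  # 1/6
```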

Some shortcomings of the classical approach

  • This approach to probability is useful only when one deals with card games, dice games, or coin tosses, i.e., situations where events are equally likely; it is not suitable for serious problems such as management decisions.
  • This approach assumes a world that does not exist, as some assumptions are imposed as described above.
  • This approach assumes symmetry about the world but there may be some disorder in a system.

2) Relative Frequency or Empirical Probability or A Posterior Approach

The relative frequency approach defines probability as the proportion of times that an event occurs in the long run when conditions are stable. The relative frequency becomes stable as the number of trials becomes large under uniform conditions.
To calculate the relative frequency, an experiment is repeated a large number of times, say $n$, under uniform/stable conditions. If an event A occurs $m$ times, then the probability of the occurrence of the event A is defined by
$$P(A)=\lim_{n\to\infty}\frac{m}{n}$$

If we say that the probability that a newborn child will be a boy is $\frac{1}{2}$, it means that over a large number of children born, about 50% of them will be boys.
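The limiting behaviour above can be illustrated with a small simulation; the following rough Python sketch (the seed and sample sizes are arbitrary choices) tosses a fair coin $n$ times and shows the relative frequency of heads settling near $\frac{1}{2}$ as $n$ grows:

```python
# Relative-frequency (empirical) approach: simulate n fair-coin tosses
# and report the proportion of heads, m / n.

import random

random.seed(1)  # for a reproducible run

for n in (100, 10_000, 1_000_000):
    m = sum(random.random() < 0.5 for _ in range(n))  # number of heads
    print(n, m / n)                                   # m / n approaches 0.5
```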

Some Criticisms

  • It is difficult to ensure that the experiment is repeated under stable/uniform conditions.
  • The experiment can be repeated only a finite number of times in the real world, not an infinite number of times.

3) Subjective Probability Approach

This is the probability based on the beliefs of the persons making the probability assessment.
Subjective probability assessments are often found when events occur only once or at most a very few times.
This approach is applicable in business, marketing, and economics for quick decisions without performing any mathematical calculations.
The disadvantage of subjective probability is that two or more persons facing the same evidence/problem may arrive at different probabilities, i.e., for the same problem there may be different decisions.

Real-Life Example of Subjective Probability

  • A firm must decide whether or not to market a new type of product. The decision will be based on prior information that the product will have high market acceptance.
  • The Sales Manager considers that there is a 40% chance of obtaining the order for which the firm has just quoted. This value (40% chance) cannot be tested by repeated trials.
  • Estimating the probability that you will be married before the age of 30 years.
  • Estimating the likelihood (probability, chances) that Pakistan’s budget deficit will be reduced by half in the next 5 years.

Note that because subjective probability concerns events that are not repeatable experiments, the relative frequency approach to probability is not applicable, nor can equally likely probabilities be assigned.

Important Terminologies of Probability Theory

Probability Terminology

The following probability terminology is helpful in understanding the concepts and rules of probability and in solving different probability-related real-life problems.

Sets: A set is a well-defined collection of distinct objects. The objects making up a set are called its elements. A set is usually denoted by capital letters, i.e., $A, B, C$, while its elements are denoted by small letters, i.e., $a, b, c$, etc.

Null Set: A set that contains no element is called a null set or simply an empty set. It is denoted by { } or $\varnothing$.

Subset: If every element of a set $A$ is also an element of a set $B$, then $A$ is said to be a subset of $B$, and it is denoted by $A \subseteq B$.

Proper Subset: If $A$ is a subset of $B$, and $B$ contains at least one element that is not an element of $A$, then $A$ is said to be a proper subset of $B$ and is denoted by; $A \subset B$.

Finite and Infinite Sets: A set is finite if it contains a specific number of elements, i.e., while counting the members of the set, the counting process comes to an end; otherwise, the set is infinite.

Universal Set: A set consisting of all the elements of the sets under consideration is called the universal set. It is denoted by $U$.

Disjoint Sets: Two sets $A$ and $B$ are said to be disjoint sets if they have no elements in common, i.e., if $A \cap B = \varnothing$, then $A$ and $B$ are said to be disjoint sets.

Overlapping Sets: Two sets $A$ and $B$ are said to be overlapping sets, if they have at least one element in common, i.e. if $A \cap B \ne \varnothing$ and none of them is the subset of the other set then $A$ and $B$ are overlapping sets.

Union of Sets: The union of two sets $A$ and $B$ is a set that contains the elements belonging either to $A$ or to $B$ or to both. It is denoted by $A \cup B$ and read as $A$ union $B$.

Intersection of Sets: The intersection of two sets $A$ and $B$ is a set that contains the elements belonging to both $A$ and $B$. It is denoted by $A \cap B$ and read as $A$ intersection $B$.

Difference of Sets: The difference between a set $A$ and a set $B$ is the set that contains the elements of $A$ that are not contained in $B$. The difference between sets $A$ and $B$ is denoted by $A-B$.

Complement of a Set: The complement of a set $A$, denoted by $\bar{A}$ or $A^c$, is defined as $\bar{A}=U-A$, i.e., all elements of the universal set that are not in $A$.
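The set operations above can be tried out directly. The short Python sketch below, with illustrative sets $U$, $A$, and $B$, uses Python's built-in set type:

```python
# Illustration of the set operations defined above; U plays the role of
# the universal set, and A, B are arbitrary example sets.

U = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {3, 4, 5}

print(A <= U)            # subset: True
print(A | B)             # union: {1, 2, 3, 4, 5}
print(A & B)             # intersection: {3}
print(A - B)             # difference: {1, 2}
print(U - A)             # complement of A with respect to U: {4, 5, 6}
print(A & {6} == set())  # A and {6} are disjoint: True
```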

Experiment: Any activity where we observe something or measure something. An activity that results in or produces an event is called an experiment.

Random Experiment: An experiment that, if repeated under identical conditions, may not give the same outcome; i.e., the outcome of a random experiment is uncertain, so a given outcome is just one sample of many possible outcomes. For a random experiment, all possible outcomes are known. A random experiment has the following properties:

  1. The experiment can be repeated any number of times.
  2. A random trial consists of at least two outcomes.

Sample Space: The set of all possible outcomes of a random experiment is called the sample space. In the coin-toss experiment, the sample space is $S=\{Head, Tail\}$; in the card-drawing experiment, the sample space has 52 members. Similarly, the sample space for a die is $S=\{1,2,3,4,5,6\}$.

Event: An event is simply a subset of the sample space. In a sample space, there can be two or more events consisting of sample points. For the coin-toss experiment, the total number of possible events is $2^n = 2^2 = 4$, namely i) $A_1 = \{H\}$, ii) $A_2=\{T\}$, iii) $A_3=\{H, T\}$, and iv) $A_4=\varnothing$.
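As a small check of the $2^n$ count, the following Python sketch (purely illustrative) lists every subset, i.e., every event, of the coin-toss sample space:

```python
# Generate all events (subsets) of the coin-toss sample space S = {H, T}.

from itertools import chain, combinations

S = ["H", "T"]
events = list(chain.from_iterable(combinations(S, r) for r in range(len(S) + 1)))

print(events)       # [(), ('H',), ('T',), ('H', 'T')]
print(len(events))  # 4, i.e. 2**2
```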

Simple Event: If an event consists of one sample point, then it is called a simple event. For example, when two coins are tossed, the event {TT} is simple.

Compound Event: If an event consists of more than one sample point, it is called a compound event. For example, when two dice are rolled, the event B that the sum of the two faces is 4, i.e., $B=\{(1,3), (2,2), (3,1)\}$, is a compound event.
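The compound event above can also be enumerated directly; this short Python sketch (illustrative only) lists the sample points of $B$ out of the 36 possible outcomes:

```python
# Enumerate the compound event B: "the sum of the two faces is 4".

from itertools import product

S = list(product(range(1, 7), repeat=2))    # sample space of 36 ordered pairs
B = [pair for pair in S if sum(pair) == 4]  # compound event

print(B)               # [(1, 3), (2, 2), (3, 1)]
print(len(S), len(B))  # 36 3
```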

Independent Events: Two events $A$ and $B$ are said to be independent if the occurrence of one does not affect the occurrence of the other. For example, in tossing two coins, the occurrence of a head on one coin does not affect in any way the occurrence of a head or tail on the other coin.

Dependent Events: Two events A and B are said to be dependent if the occurrence of one event affects the occurrence of the other event.

Mutually Exclusive Events: Two events $A$ and $B$ are said to be mutually exclusive if they cannot occur at the same time, i.e., $A\cap B=\varnothing$. For example, when a coin is tossed, we get either a head or a tail, but not both; since they have no common point, the two events (head and tail) are mutually exclusive. Similarly, when a die is thrown, the possible outcomes 1, 2, 3, 4, 5, 6 are mutually exclusive.

Equally Likely or Non-Mutually Exclusive Events: Two events $A$ and $B$ are said to be equally likely when one event is as likely to occur as the other, i.e., if the experiment is repeated a large number of times, all the events have the chance of occurring an equal number of times. For example, when a coin is tossed, a head is as likely to occur as a tail, and vice versa. Two events are non-mutually exclusive when they have at least one outcome in common, i.e., $A\cap B \ne\varnothing$.

Exhaustive Events: When a sample space $S$ is partitioned into mutually exclusive events such that their union is the sample space itself, the events are called exhaustive events. OR
Events are said to be collectively exhaustive when the union of mutually exclusive events is the entire sample space $S$.
Let a die be rolled; the sample space is $S=\{1,2,3,4,5,6\}$.
Let $A=\{1, 2\}$, $B=\{3, 4, 5\}$, and $C=\{6\}$.

$A, B$, and $C$ are mutually exclusive events and their union $(A\cup B \cup C = S)$ is the sample space, so the events $A, B$, and $C$ are exhaustive.
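A quick Python sketch (illustrative only) confirms that these three events are pairwise disjoint and that their union is the whole sample space:

```python
# Check that A, B, and C are mutually exclusive and collectively exhaustive
# for the die-rolling sample space S.

S = {1, 2, 3, 4, 5, 6}
A, B, C = {1, 2}, {3, 4, 5}, {6}

print(A & B == set(), A & C == set(), B & C == set())  # pairwise disjoint: True True True
print(A | B | C == S)                                  # union equals S: True
```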
