Content On This Page

- Conditional Probability: Definition and Formula $P(A|B) = P(A \cap B) / P(B)$
- Properties of Conditional Probability
Conditional Probability
Conditional Probability: Definition and Formula $P(A|B) = P(A \cap B) / P(B)$
The Concept of Conditional Probability
In many real-world scenarios, the likelihood of an event occurring is influenced by whether another event has already taken place. For example, the probability of getting good marks in an exam might increase if a student studies diligently. The probability of a car accident might increase if the road is wet.
Conditional probability is the probability of an event occurring **given that another event has already occurred**. It allows us to update our probability assessment based on new information. The key idea is that the occurrence of the conditioning event (the event that is known to have happened) effectively reduces the possible outcomes to just those within the conditioning event itself. We then consider the probability of the event of interest within this reduced set of outcomes.
Let $A$ and $B$ be two events associated with the same random experiment and sample space $S$. We are interested in the probability of event $A$ happening, knowing that event $B$ has already occurred.
Motivating Example (Illustrating the concept)
Let's consider rolling a fair standard six-sided die once. The sample space is $S = \{1, 2, 3, 4, 5, 6\}$. Since the die is fair, each outcome is equally likely, with a probability of $1/6$.
- Let $A$ be the event of rolling a number less than 4. The outcomes in $A$ are $\{1, 2, 3\}$. The probability of event $A$ is $P(A) = n(A)/n(S) = 3/6 = 1/2$.
- Let $B$ be the event of rolling an odd number. The outcomes in $B$ are $\{1, 3, 5\}$. The probability of event $B$ is $P(B) = n(B)/n(S) = 3/6 = 1/2$.
Now, suppose we are told that the result of the roll was an odd number. This means event $B$ has occurred. With this information, our possibilities are no longer all six outcomes in $S$. The possible outcomes are now limited to the outcomes in $B$, which are $\{1, 3, 5\}$. This set $B$ acts as our new, reduced sample space.
Within this reduced sample space $B=\{1, 3, 5\}$, we want to find the probability that the roll was less than 4 (event $A$). Which outcomes in $B$ are also in $A$ (i.e., are less than 4)? These are the outcomes $\{1, 3\}$.
The outcomes $\{1, 3\}$ are exactly the outcomes that are common to both event $A$ and event $B$. This is the intersection of $A$ and $B$, denoted $A \cap B$. So, $A \cap B = \{1, 3\}$. The number of outcomes in the intersection is $n(A \cap B) = 2$.
The conditional probability of $A$ given $B$ is the probability of $A$ occurring relative to the reduced sample space $B$. Using the concept from the classical definition within the space $B$:
Conditional Probability of A given B = $\frac{\text{Number of outcomes in } A \cap B}{\text{Number of outcomes in } B} = \frac{n(A \cap B)}{n(B)}$
From our example, this is $\frac{2}{3}$.
This conditional probability, denoted as $P(A|B)$, is $2/3$. Notice that this is different from the original probability of A, $P(A)=1/2$. The information that event B occurred changed the probability of A.
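The counting argument above can be reproduced with a short enumeration (a Python sketch; the set names mirror the events in the text):

```python
# Enumerate the equally likely outcomes of one fair die roll and compute
# P(A|B) by restricting attention to the reduced sample space B.
S = {1, 2, 3, 4, 5, 6}             # sample space
A = {x for x in S if x < 4}        # event A: number less than 4 -> {1, 2, 3}
B = {x for x in S if x % 2 == 1}   # event B: odd number -> {1, 3, 5}

A_and_B = A & B                    # intersection {1, 3}
p_A_given_B = len(A_and_B) / len(B)
print(p_A_given_B)                 # 0.6666666666666666, i.e. 2/3
```

Dividing the count of outcomes in $A \cap B$ by the count in $B$ is exactly the reduced-sample-space idea: only outcomes inside $B$ remain possible.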
Definition and Formula
Let A and B be two events associated with the same sample space $S$. The **conditional probability of event A occurring given that event B has already occurred** is formally defined using the probabilities of the intersection of A and B, and the probability of B.
The notation for the conditional probability of A given B is $P(A|B)$, read as "the probability of A given B".
Formula for Conditional Probability:
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
... (1)
This definition is valid **provided that $P(B) > 0$**. If $P(B) = 0$, event $B$ has zero probability of occurring, so conditioning on it is meaningless and $P(A|B)$ is left undefined.
Similarly, the conditional probability of event B occurring given that event A has already occurred is denoted by $P(B|A)$ and is defined as:
$$P(B|A) = \frac{P(B \cap A)}{P(A)}$$
... (2)
provided that $P(A) > 0$. Since the intersection is commutative ($A \cap B = B \cap A$), the numerator $P(A \cap B) = P(B \cap A)$.
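Formula (1) also covers sample spaces whose outcomes are *not* equally likely. A minimal sketch (the helper name `cond_prob` and the loaded-die probabilities are illustrative, not from the text):

```python
# Sketch of P(A|B) = P(A ∩ B) / P(B) for a finite sample space whose
# outcome probabilities are given as a dict (not necessarily uniform).
def cond_prob(A, B, prob):
    """Return P(A|B); raises if P(B) = 0, where P(A|B) is undefined."""
    p_B = sum(prob[w] for w in B)
    if p_B == 0:
        raise ValueError("P(B) = 0: P(A|B) is undefined")
    p_AB = sum(prob[w] for w in A & B)
    return p_AB / p_B

# A loaded die on which 6 is twice as likely as each other face.
prob = {1: 1/7, 2: 1/7, 3: 1/7, 4: 1/7, 5: 1/7, 6: 2/7}
print(cond_prob({1, 2, 3}, {1, 3, 5}, prob))  # still 2/3, since 6 is not in B
```

Here $P(B) = 3/7$ and $P(A \cap B) = 2/7$, so the ratio is again $2/3$; the extra weight on face 6 is irrelevant because 6 lies outside the conditioning event.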
Formula using Counts (for Equally Likely Outcomes)
If the random experiment has a finite sample space $S$ where all $n(S)$ outcomes are equally likely, we can use the classical definition of probability, $P(E) = n(E)/n(S)$. Substituting this into the conditional probability formula (Formula 1):
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{n(A \cap B) / n(S)}{n(B) / n(S)}$$
Provided $n(S) > 0$ and $n(B) > 0$, we can cancel $n(S)$ from the numerator and denominator:
$$P(A|B) = \frac{n(A \cap B)}{n(B)}$$
... (3)
This formula is very useful when dealing with equally likely outcomes, as it simplifies the calculation to counting the number of outcomes in the intersection and the number of outcomes in the conditioning event.
Applying this to our dice example: $A=\{1, 2, 3\}$, $B=\{1, 3, 5\}$, $A \cap B = \{1, 3\}$. $n(A \cap B) = 2$, $n(B) = 3$. $P(A|B) = \frac{2}{3}$. This confirms the intuitive understanding.
Example
Example 1. A family has two children. What is the conditional probability that both are girls, given that at least one is a girl?
Answer:
Given: A family has two children. Event B: at least one child is a girl. Event A: both children are girls.
To Find: The conditional probability $P(A|B)$.
Solution:
Assume that each child is equally likely to be a boy (B) or a girl (G), each with probability $1/2$, and that the genders of the two children are independent. The sample space for the genders of two children is:
$S = \{GG, GB, BG, BB\}$.
The total number of possible outcomes is $n(S) = 4$. Assuming each outcome is equally likely, the probability of each outcome is $1/4$.
Let A be the event that both children are girls.
Outcomes in A: $A = \{GG\}$.
Number of outcomes in A, $n(A) = 1$.
... (i)
Probability of A, $P(A) = \frac{n(A)}{n(S)} = \frac{1}{4}$.
Let B be the event that at least one child is a girl.
Outcomes in B: $B = \{GG, GB, BG\}$.
Number of outcomes in B, $n(B) = 3$.
... (ii)
Probability of B, $P(B) = \frac{n(B)}{n(S)} = \frac{3}{4}$.
We need to find $P(A|B)$, the probability of event A (both girls) given that event B (at least one girl) has occurred.
First, we find the intersection of events A and B, $A \cap B$. This is the event that both conditions are met: both are girls AND at least one is a girl. The outcomes common to A and B are:
$A \cap B = \{GG\} \cap \{GG, GB, BG\} = \{GG\}$.
Number of outcomes in $A \cap B$, $n(A \cap B) = 1$.
... (iii)
Probability of $A \cap B$, $P(A \cap B) = \frac{n(A \cap B)}{n(S)} = \frac{1}{4}$.
$$P(A \cap B) = \frac{1}{4}$$
... (iv)
Now, use the conditional probability formula $P(A|B) = \frac{P(A \cap B)}{P(B)}$ (Formula 1):
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
... (v)
Substitute the probabilities from (iv) and (ii):
$$P(A|B) = \frac{1/4}{3/4}$$
$$P(A|B) = \frac{1}{\cancel{4}} \times \frac{\cancel{4}}{3} = \frac{1}{3}$$
... (vi)
Alternatively, using the formula based on counts for equally likely outcomes $P(A|B) = \frac{n(A \cap B)}{n(B)}$ (Formula 3):
$$P(A|B) = \frac{n(A \cap B)}{n(B)} = \frac{1}{3}$$
... (vii)
The conditional probability that both children are girls, given that at least one is a girl, is $\frac{1}{3}$.
This is a classic example highlighting that $P(A|B)$ is often different from $P(A)$.
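The same answer falls out of a direct enumeration of the four equally likely gender sequences (a Python sketch of the counting in the solution above):

```python
from itertools import product

# Enumerate the four equally likely gender sequences for two children and
# compute P(both girls | at least one girl) by counting.
S = set(product("GB", repeat=2))        # {(G,G), (G,B), (B,G), (B,B)}
A = {s for s in S if s == ("G", "G")}   # event A: both girls
B = {s for s in S if "G" in s}          # event B: at least one girl

p = len(A & B) / len(B)
print(p)                                # 0.3333333333333333, i.e. 1/3
```

The common intuitive answer $1/2$ comes from wrongly conditioning on "the *first* child is a girl", which reduces the sample space to only two outcomes instead of three.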
Properties of Conditional Probability
Conditional Probability as a Probability Measure
Conditional probability, $P(\cdot | E)$, where $E$ is the conditioning event with $P(E) > 0$, satisfies all three of Kolmogorov's axioms of probability when considered on the reduced sample space $E$. This means that, for a fixed event $E$, the function $P_E(A) = P(A|E)$ behaves like a standard probability measure for any event $A$ defined within the original sample space $S$. The properties derived from the axioms therefore also hold for conditional probabilities.
Let $S$ be the sample space, and let $A$ and $B$ be any events, and $E$ be the conditioning event such that $P(E) > 0$.
- **Range of Conditional Probability:**
The conditional probability of any event $A$ given $E$ is a value between 0 and 1, inclusive.
$$0 \le P(A|E) \le 1$$
... (1)
Reason: By definition, $P(A|E) = \frac{P(A \cap E)}{P(E)}$. By Axiom 1, $P(A \cap E) \ge 0$ and $P(E) > 0$, so $P(A|E) \ge 0$. Since $A \cap E$ is a subset of $E$, the outcomes in $A \cap E$ are also in $E$. Thus $P(A \cap E) \le P(E)$. Dividing by $P(E)$ gives $\frac{P(A \cap E)}{P(E)} \le \frac{P(E)}{P(E)} = 1$. So $P(A|E) \le 1$.
- **Probability of the Sample Space / Sure Event (Given E):**
The conditional probability of the entire sample space $S$ given event $E$ is 1. This is because if $E$ has occurred, we are certain that the outcome is within $S$ (as $E \subseteq S$).
$$P(S|E) = 1$$
... (2)
Proof: Using the definition, $P(S|E) = \frac{P(S \cap E)}{P(E)}$. Since $E$ is a subset of $S$, the intersection of $S$ and $E$ is simply $E$ ($S \cap E = E$). So, $P(S|E) = \frac{P(E)}{P(E)} = 1$ (since $P(E)>0$).
- **Probability of the Impossible Event (Given E):**
The conditional probability of the impossible event ($\phi$) given $E$ is 0.
$$P(\phi|E) = 0$$
... (3)
Proof: Using the definition, $P(\phi|E) = \frac{P(\phi \cap E)}{P(E)}$. The intersection of any event $E$ with the impossible event $\phi$ is always $\phi$ ($\phi \cap E = \phi$). The probability of the impossible event is 0, $P(\phi)=0$. So, $P(\phi|E) = \frac{0}{P(E)} = 0$ (since $P(E)>0$).
- **Addition Law for Mutually Exclusive Events (Given E):**
If $A$ and $B$ are two mutually exclusive events ($A \cap B = \phi$), then the conditional probability of their union given $E$ is the sum of their individual conditional probabilities given $E$. This property extends Axiom 3 to conditional probabilities.
$$P(A \cup B | E) = P(A|E) + P(B|E) \quad \text{if } A \cap B = \phi$$
... (4)
This property holds for any finite or countably infinite sequence of pairwise mutually exclusive events.
Proof: Using the definition, $P(A \cup B | E) = \frac{P((A \cup B) \cap E)}{P(E)}$. Using the distributive property of set intersection over union, $(A \cup B) \cap E = (A \cap E) \cup (B \cap E)$.
$$P(A \cup B | E) = \frac{P((A \cap E) \cup (B \cap E))}{P(E)}$$
... (v)
Since $A$ and $B$ are mutually exclusive ($A \cap B = \phi$), the events $(A \cap E)$ and $(B \cap E)$ are also mutually exclusive. Their intersection is $(A \cap E) \cap (B \cap E) = A \cap B \cap E = \phi \cap E = \phi$.
Since $(A \cap E)$ and $(B \cap E)$ are mutually exclusive, by Axiom 3 (applied to ordinary probabilities):
$$P((A \cap E) \cup (B \cap E)) = P(A \cap E) + P(B \cap E)$$.
... (vi)
Substitute equation (vi) into (v):
$$P(A \cup B | E) = \frac{P(A \cap E) + P(B \cap E)}{P(E)}$$
... (vii)
Separating the terms on the right side:
$$P(A \cup B | E) = \frac{P(A \cap E)}{P(E)} + \frac{P(B \cap E)}{P(E)}$$
$$P(A \cup B | E) = P(A|E) + P(B|E)$$
... (viii)
- **General Addition Law (Given E):**
For any two events $A$ and $B$ (not necessarily mutually exclusive), and event $E$ with $P(E)>0$, the conditional probability of their union given $E$ is:
$$P(A \cup B | E) = P(A|E) + P(B|E) - P(A \cap B | E)$$
... (5)
This property can be derived by applying the ordinary addition law $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ to the events $(A \cap E)$ and $(B \cap E)$ and then dividing by $P(E)$.
- **Complement Rule (Given E):**
For any event $A$ and event $E$ with $P(E)>0$, the conditional probability of the complement of $A$ (i.e., not $A$) given $E$ is 1 minus the conditional probability of $A$ given $E$.
$$P(A'|E) = 1 - P(A|E)$$.
... (6)
This implies $P(A|E) + P(A'|E) = 1$.
Proof: We know that an event $A$ and its complement $A'$ are mutually exclusive ($A \cap A' = \phi$) and their union is the sample space ($A \cup A' = S$).
Using the Addition Law for mutually exclusive events given E (Property 4):
$$P(A \cup A' | E) = P(A|E) + P(A'|E)$$
... (ix)
Since $A \cup A' = S$, the left side is $P(S|E)$. By Property 2, $P(S|E) = 1$.
So, $1 = P(A|E) + P(A'|E)$, which rearranges to $P(A'|E) = 1 - P(A|E)$.
- **Monotonicity:**
If $A$ is a subset of $B$ ($A \subseteq B$), then the probability of $A$ given $E$ is less than or equal to the probability of $B$ given $E$, assuming $P(E)>0$.
$$P(A|E) \le P(B|E) \quad \text{if } A \subseteq B$$
... (7)
Proof: If $A \subseteq B$, then $A \cap E \subseteq B \cap E$. By a basic property of probability (derived from axioms), if $X \subseteq Y$, then $P(X) \le P(Y)$. So $P(A \cap E) \le P(B \cap E)$. Dividing by $P(E)$ gives $\frac{P(A \cap E)}{P(E)} \le \frac{P(B \cap E)}{P(E)}$, which is $P(A|E) \le P(B|E)$.
These properties demonstrate that conditional probability is a consistent and valid probability measure in its own right, allowing the application of standard probability rules within a reduced sample space context.
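These properties can be checked numerically on the fair-die example from earlier (a sketch; the events $A$, $B$, $E$ below are chosen for illustration):

```python
from math import isclose

# Check properties of P(·|E) on a fair die, conditioning on an odd roll.
S = set(range(1, 7))
E = {1, 3, 5}

def p_given_E(X):
    """P(X|E) computed by counting within the reduced sample space E."""
    return len(X & E) / len(E)

A, B = {1, 2, 3}, {3, 4, 5}

assert 0 <= p_given_E(A) <= 1                         # Property 1: range
assert p_given_E(S) == 1                              # Property 2: P(S|E) = 1
assert p_given_E(set()) == 0                          # Property 3: P(phi|E) = 0
assert isclose(p_given_E(S - A), 1 - p_given_E(A))    # Property 6: complement
assert isclose(p_given_E(A | B),                      # Property 5: addition law
               p_given_E(A) + p_given_E(B) - p_given_E(A & B))
print("all properties hold on this example")
```

Passing checks on one example do not prove the properties, of course; the proofs above do that, and the code merely illustrates them concretely.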
Example
Example 1. Given $P(A) = 0.6$, $P(B) = 0.7$, and $P(A \cup B) = 0.9$. Find $P(A|B)$ and $P(B|A)$.
Answer:
Given: $P(A) = 0.6$, $P(B) = 0.7$, $P(A \cup B) = 0.9$.
To Find: $P(A|B)$ and $P(B|A)$.
Solution:
To find $P(A|B)$ and $P(B|A)$, we need the probability of the intersection $P(A \cap B)$. We can find this using the general Addition Law of Probability:
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
... (i)
Substitute the given values into the formula:
$$0.9 = 0.6 + 0.7 - P(A \cap B)$$
... (ii)
$$0.9 = 1.3 - P(A \cap B)$$
... (iii)
Rearrange the equation to solve for $P(A \cap B)$:
$$P(A \cap B) = 1.3 - 0.9$$
$$P(A \cap B) = 0.4$$
... (iv)
Now we can calculate $P(A|B)$ using the conditional probability formula $P(A|B) = \frac{P(A \cap B)}{P(B)}$ (Formula 1):
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
... (v)
Substitute values from (iv) and the given $P(B)=0.7$:
$$P(A|B) = \frac{0.4}{0.7}$$
$$P(A|B) = \frac{4/10}{7/10} = \frac{4}{7}$$
... (vi)
Next, calculate $P(B|A)$ using the formula $P(B|A) = \frac{P(B \cap A)}{P(A)}$ (Formula 2):
$$P(B|A) = \frac{P(A \cap B)}{P(A)}$$
... (vii)
Substitute values from (iv) and the given $P(A)=0.6$:
$$P(B|A) = \frac{0.4}{0.6}$$
$$P(B|A) = \frac{4/10}{6/10} = \frac{4}{6} = \frac{2}{3}$$
... (viii)
The probabilities are $P(A|B) = \frac{4}{7}$ and $P(B|A) = \frac{2}{3}$.
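The arithmetic in this example can be verified exactly using rational numbers (a Python sketch using the standard-library `fractions` module):

```python
from fractions import Fraction

# Recover P(A ∩ B) from the addition law, then apply Formulas 1 and 2.
P_A, P_B, P_AuB = Fraction(6, 10), Fraction(7, 10), Fraction(9, 10)

P_AnB = P_A + P_B - P_AuB      # addition law rearranged: 0.4 = 2/5
P_A_given_B = P_AnB / P_B      # Formula 1
P_B_given_A = P_AnB / P_A      # Formula 2
print(P_AnB, P_A_given_B, P_B_given_A)  # 2/5 4/7 2/3
```

Working in `Fraction` avoids the floating-point rounding that `0.4 / 0.7` would introduce and reproduces the exact answers $\frac{4}{7}$ and $\frac{2}{3}$.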