When a probability depends on an upstream choice you can’t see, condition on that choice. The law of total probability says: if $B_1, \ldots, B_n$ partition the sample space, then $P(A) = \sum_i P(A \mid B_i)\, P(B_i)$ — the marginal is a weighted average of the conditionals, with the priors as weights. It’s the denominator of Bayes’ rule and the workhorse for any “case splits” problem.
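The weighted-average reading can be sketched in a few lines. The partition and the numbers below are made up for illustration; they are not from the exercise that follows:

```python
# Law of total probability: P(A) = sum_i P(A | B_i) * P(B_i).
# Hypothetical two-case partition with made-up numbers.
priors = [0.25, 0.75]        # P(B_1), P(B_2) -- must sum to 1
conditionals = [0.9, 0.1]    # P(A | B_1), P(A | B_2)

# The marginal is the prior-weighted average of the conditionals.
p_a = sum(c * p for c, p in zip(conditionals, priors))
# p_a = 0.9*0.25 + 0.1*0.75 = 0.3
```

Note that each term pairs a conditional with its own prior; forgetting the weights (averaging the conditionals directly) is the most common slip.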
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this.
A guess is fine: being briefly wrong about a problem makes the explanation
land harder when you read it. This appears once per tutorial; skip it
if you already know the trick.
Worked example
A drawer contains 4 coins: 3 are fair ($P(H) = 1/2$) and 1 is biased with $P(H) = 0.8$. You pick one coin uniformly at random and flip it once. What is the probability the flip comes up heads?
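The hidden choice here is which coin you drew, so condition on that. A short check of the arithmetic:

```python
# Partition on the hidden choice: which coin was picked.
p_fair, p_biased = 3/4, 1/4            # priors: 3 fair coins, 1 biased, drawn uniformly
p_heads_fair, p_heads_biased = 0.5, 0.8  # conditionals: P(H | coin type)

# LOTP: P(H) = P(H | fair) P(fair) + P(H | biased) P(biased)
p_heads = p_heads_fair * p_fair + p_heads_biased * p_biased
# = (1/2)(3/4) + (0.8)(1/4) = 0.375 + 0.2 = 0.575
```

Notice the answer lands between 0.5 and 0.8, much closer to 0.5 — a weighted average pulled toward the more likely case, which is a quick sanity check on any LOTP computation.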
Practice 1 of 3
Type a fraction, decimal, or expression; mathjs parses it.
Reflection
What’s the cue in a problem that tells you to condition on a hidden case rather than try to compute the probability directly? And how do you know LOTP is enough — that you don’t need to flip the conditioning with Bayes?