Coupling: one uniform, two processes, pointwise comparison
Two processes share the same uniform-noise input; now compare their outcomes pointwise. Magic for tail bounds and convergence proofs.
Method · Coupling
Intro
Sometimes you want to compare two random variables, say a fair coin vs. a biased one, not just compute each one. Coupling is a trick: instead of flipping each coin independently, drive both from the same uniform random number. Now you can say things like “whenever the fair coin comes up heads, the biased one does too,” which makes inequalities easy. We’ll do exactly this for two coins.
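A minimal sketch of the trick in Python (the function name and parameters are ours, not from the tutorial; it assumes head-probabilities `p_a <= p_b`). One uniform draw drives both coins, so the event “A is heads” is contained in “B is heads” by construction:

```python
import random

def coupled_flips(p_a, p_b, n=100_000, seed=0):
    """Drive two coins with head-probabilities p_a <= p_b from one uniform.

    Each trial draws a single U ~ Uniform(0, 1); coin i shows heads iff
    U < p_i.  Because p_a <= p_b, {A heads} is a subset of {B heads}.
    Returns the number of trials violating that inclusion.
    """
    rng = random.Random(seed)
    violations = 0  # trials where A is heads but B is tails
    for _ in range(n):
        u = rng.random()  # the shared source of randomness
        a_heads = u < p_a
        b_heads = u < p_b
        if a_heads and not b_heads:
            violations += 1
    return violations

# Under the coupling, "A heads but B tails" never happens:
print(coupled_flips(0.5, 0.75))  # 0
```

Independent flips would produce plenty of violations; the coupling rules them out deterministically, which is what makes pointwise inequalities available.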
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this. A guess is fine; being briefly wrong about a problem makes the explanation land harder when you read it. This appears once per tutorial; skip it if you already know the trick.
Worked example
Coin A is fair: $P(\text{H}) = 1/2$. Coin B is biased: $P(\text{H}) = 3/4$. Construct a coupling that makes “A is heads” imply “B is heads” almost surely. Under this coupling, what is $P(A = \text{T} \text{ and } B = \text{H})$?
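One construction: draw a single $U \sim \mathrm{Uniform}(0,1)$ and set $A = \text{H}$ iff $U < 1/2$, $B = \text{H}$ iff $U < 3/4$; then $\{A = \text{H}\} \subseteq \{B = \text{H}\}$. A Monte Carlo sketch to sanity-check the remaining probability (`estimate_t_and_h` is a name we introduce, not part of the page):

```python
import random

def estimate_t_and_h(n=200_000, seed=1):
    """Estimate P(A = T and B = H) under the coupling
    A = H iff U < 1/2 and B = H iff U < 3/4, for one U ~ Uniform(0, 1).

    The event {A = T, B = H} is exactly {1/2 <= U < 3/4}, whose
    probability is 3/4 - 1/2 = 1/4, so the estimate should sit near 0.25.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if 0.5 <= rng.random() < 0.75)
    return hits / n

print(estimate_t_and_h())
```

The exact answer $1/4$ falls out of the coupling directly: the only way $B$ can beat $A$ is for $U$ to land in the gap between the two thresholds.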
Practice 1 of 3
Type a fraction, decimal, or expression; mathjs parses it.
Reflection
When you see a problem comparing two distributions — “is $X$ stochastically smaller than $Y$?”, “how close are they in total variation?” — what tells you a coupling will be cleaner than a direct CDF comparison? And why is $X \le Y$ *almost surely* strictly stronger than $X \preceq Y$ in distribution?