Cyan (English Wikipedia user) · https://commons.wikimedia.org/wiki/File:Clt_in_action.gif · CC BY-SA 3.0 / GFDL · Wikimedia Commons
The Central Limit Theorem is the reason “Normal” shows up everywhere. When you sum many iid random variables — each with finite mean $\mu$ and variance $\sigma^2$ — the sum is approximately $N(n\mu, n\sigma^2)$ for large $n$, regardless of the original distribution. The technique is mechanical: get $\mu$ and $\sigma^2$ per term, scale by $n$, standardize to a $Z$-score, look up $\Phi$.
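The mechanical recipe above can be sketched in a few lines of Python, using `math.erf` to evaluate $\Phi$. The function name `clt_tail_prob` and the Exponential(1) example are illustrative choices, not part of the tutorial:

```python
import math

def clt_tail_prob(n, mu, sigma2, threshold):
    """CLT approximation to P(S_n > threshold), where S_n is the sum
    of n iid terms, each with mean mu and variance sigma2."""
    # Standardize: Z = (threshold - n*mu) / sqrt(n * sigma2)
    z = (threshold - n * mu) / math.sqrt(n * sigma2)
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Sum of 100 Exponential(1) terms (mu = 1, sigma^2 = 1) is roughly
# N(100, 100), so P(S > 120) ~ 1 - Phi(2) ~ 0.023 -- even though each
# individual term is heavily right-skewed.
print(round(clt_tail_prob(100, 1.0, 1.0, 120), 4))
```

The point of the Exponential example is the "regardless of the original distribution" clause: only $\mu$ and $\sigma^2$ enter the calculation.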
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this.
A guess is fine; being briefly wrong about a problem makes the explanation
land harder when you read it. This appears once per tutorial; skip it
if you already know the trick.
Worked example
100 fair six-sided dice are rolled. What is the approximate probability that their sum exceeds 380? Use the Central Limit Theorem. (Round to 2 decimal places.)
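One way to check the arithmetic is to run the recipe numerically. This is a sketch of the standard CLT calculation for the dice problem (the variable names are my own); per die, $\mu = 3.5$ and $\sigma^2 = 35/12$:

```python
import math

# Per-die moments for a fair d6
mu = 3.5
sigma2 = sum((k - mu) ** 2 for k in range(1, 7)) / 6   # 35/12, about 2.917

n = 100
mean_sum = n * mu                # 350
sd_sum = math.sqrt(n * sigma2)   # about 17.08

# Standardize the threshold and look up the upper tail of Phi
z = (380 - mean_sum) / sd_sum                    # about 1.757
p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))   # 1 - Phi(z), about 0.0395
print(round(p, 2))               # 0.04
```

A continuity correction (using 380.5 instead of 380, since the sum is integer-valued) gives $z \approx 1.79$ and $p \approx 0.037$; both versions round to 0.04.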
Practice 1 of 3
Reflection
When you read a problem with “many independent trials” and a question about the sum or the average, what’s the cue that tells you to reach for CLT rather than an exact binomial calculation? Once you’ve standardized, why does the original distribution stop mattering — what’s the theorem actually buying you?
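One way to ground the second reflection question is to compare the CLT answer against the exact binomial tail it replaces. The scenario below (100 fair coins, threshold 60) is my own illustration, not a tutorial problem:

```python
import math

n, p_coin = 100, 0.5

# Exact binomial tail: P(X > 60) = sum of C(n, k) p^k (1-p)^(n-k), k = 61..100
exact = sum(math.comb(n, k) * p_coin**k * (1 - p_coin)**(n - k)
            for k in range(61, n + 1))

# CLT approximation with continuity correction: X is roughly N(np, np(1-p))
mean = n * p_coin                                   # 50
sd = math.sqrt(n * p_coin * (1 - p_coin))           # 5
z = (60.5 - mean) / sd                              # 2.1
approx = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2))) # 1 - Phi(2.1)

print(round(exact, 4), round(approx, 4))
```

The two numbers agree to about three decimal places here, which is the trade the CLT offers: you give up exactness and a sum of 40 binomial terms, and in exchange only the mean and variance of a single trial matter.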