Maximum likelihood: write it, log it, differentiate, solve
Write the likelihood, take the log, differentiate, solve. Under standard regularity conditions, the estimator that maximizes the data's plausibility is consistent and asymptotically efficient.
Maximum likelihood is the workhorse estimator: given an iid sample from a parametric family $f(x; \theta)$, the MLE is the parameter value that makes the observed data most probable. The recipe is always the same — write the likelihood, take logs, differentiate, set to zero, solve. The common families collapse to one-line estimators: Bernoulli gives sample proportion, Poisson gives count-over-exposure, Exponential gives reciprocal-of-sample-mean. Recognize the family, write down the answer.
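To see the recipe land on those one-liners, here is a minimal numerical sketch (Python with NumPy and SciPy; the simulated data, seed, and parameter values are illustrative assumptions, not part of the tutorial). It maximizes the Bernoulli and Exponential log-likelihoods directly and checks that the optimizer agrees with the closed-form answers.

```python
# Minimal sketch: numerically maximize two log-likelihoods and compare
# against the closed-form MLEs quoted above. Data are simulated purely
# for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Bernoulli: closed-form MLE is the sample proportion.
x_bern = rng.binomial(1, 0.3, size=500)
nll_bern = lambda p: -np.sum(x_bern * np.log(p) + (1 - x_bern) * np.log(1 - p))
p_hat = minimize_scalar(nll_bern, bounds=(1e-6, 1 - 1e-6), method="bounded").x
print(p_hat, x_bern.mean())          # both ~ the sample proportion

# Exponential: closed-form MLE is 1 / sample mean, i.e. n / sum(x_i).
x_exp = rng.exponential(scale=2.0, size=500)   # true rate = 0.5
nll_exp = lambda lam: -np.sum(np.log(lam) - lam * x_exp)
lam_hat = minimize_scalar(nll_exp, bounds=(1e-6, 10.0), method="bounded").x
print(lam_hat, 1 / x_exp.mean())     # both ~ n / sum(x_i)
```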
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this.
A guess is fine; being briefly wrong about a problem makes the explanation land harder when you read it. This appears once per tutorial; skip it if you already know the trick.
Worked example
A factory’s defect rate is unknown. You inspect 200 widgets and find 12 defective. What is the maximum likelihood estimate of the defect rate $p$?
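Treating each inspection as an independent Bernoulli trial with defect probability $p$ (the standard reading of the prompt), the recipe runs:

$$L(p) = p^{12}(1-p)^{188}, \qquad \ell(p) = \log L(p) = 12\log p + 188\log(1-p)$$

$$\ell'(p) = \frac{12}{p} - \frac{188}{1-p} = 0 \;\Longrightarrow\; \hat{p} = \frac{12}{200} = 0.06$$

The maximizer is the sample proportion, exactly the Bernoulli one-liner from the intro.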
Practice 1 of 3
Type a fraction, decimal, or expression; mathjs parses it.
Reflection
Once you know the distribution family, the MLE recipe collapses to a one-liner: Bernoulli → sample proportion, Poisson → count-over-exposure, Exponential → $n / \sum x_i$. In your own words, why does the log step turn the problem from intractable to mechanical? And what real-world data feature would make MLE a bad fit?
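If you want to poke at the first question concretely, here is a small assumed Python illustration (simulated Exponential data, not part of the exercise): the raw likelihood is a product of hundreds of small densities and underflows in floating point, while the log-likelihood is a plain sum you can differentiate term by term.

```python
# Illustrative only: product of densities vs. sum of log-densities.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1000)
lam = 0.5

likelihood = np.prod(lam * np.exp(-lam * x))      # product of 1000 small numbers
log_likelihood = np.sum(np.log(lam) - lam * x)    # sum of moderate terms

print(likelihood)       # 0.0 -- underflows, useless to optimize directly
print(log_likelihood)   # finite, and its derivative splits term by term
```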