
Fisher information: how well can you know your parameter

Curvature of the log-likelihood at its peak. Sharp curvature = small standard error = parameter well-identified. Flat curvature = big SE = parameter the data refuses to pin down.

Method · Fisher Information
Intro

Every calibration output reports standard errors next to its parameters. Fisher information is the formal quantity those SEs come from: the *negative expected curvature* of the log-likelihood at the true parameter. Sharp curvature means the likelihood drops fast as you move away from the maximum, so the MLE is well-localised. Flat curvature means there's a whole region of parameters that fit the data nearly as well, so the MLE is wobbly. The Cramér-Rao bound makes this precise: $\mathrm{Var}(\hat\theta) \ge 1/I_n(\theta)$ for any unbiased estimator, with the MLE asymptotically achieving the bound. This tutorial covers the score function, the information matrix, the CR bound, the asymptotic normality of the MLE, and the practical identifiability story for two-parameter models like Heston's $\kappa, \theta$.
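The curvature-to-SE pipeline can be checked numerically in a few lines. A minimal sketch, assuming a Gaussian sample with known $\sigma$ and unknown mean $\mu$ (a toy stand-in for any one-parameter likelihood): take the observed information as the negative second derivative of the log-likelihood at the MLE, and compare $1/\sqrt{I_n}$ with the exact standard error $\sigma/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu_true, sigma = 400, 1.5, 2.0          # sigma known; we estimate mu
x = rng.normal(mu_true, sigma, size=n)

def loglik(mu):
    # Gaussian log-likelihood in mu (sigma known), additive constants dropped
    return -0.5 * np.sum((x - mu) ** 2) / sigma**2

mu_hat = x.mean()                          # MLE of mu

# Observed information = negative curvature of the log-likelihood at the MLE,
# here via a central finite difference (exact for this quadratic log-likelihood)
h = 1e-4
curv = (loglik(mu_hat + h) - 2 * loglik(mu_hat) + loglik(mu_hat - h)) / h**2
info_obs = -curv

se_numeric = 1.0 / np.sqrt(info_obs)       # SE implied by the curvature
se_analytic = sigma / np.sqrt(n)           # exact: I_n(mu) = n / sigma^2

print(se_numeric, se_analytic)
```

With $n=400$ and $\sigma=2$ both numbers come out near $0.1$: flatter curvature (smaller $n$ or larger $\sigma$) inflates the SE exactly as the Cramér-Rao bound predicts. The same finite-difference trick applied to a two-parameter log-likelihood yields the full information matrix, which is how the $\kappa,\theta$ identifiability story later in the tutorial is made quantitative.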
