GARCH: vol clustering and the persistence parameter
AR(1) on the conditional variance instead of the level. Big moves predict big moves; the persistence parameter $\alpha + \beta$ is one number that controls everything: long-run vol, shock half-life, calibration drama.
Returns themselves don't autocorrelate much in liquid equity markets, but their *magnitudes* do: a large move today predicts a larger-than-average move tomorrow. This is volatility clustering, and the cleanest model for it is GARCH(1,1) (Bollerslev 1986): write an AR(1) for the conditional variance instead of for the return. Three parameters $(\omega, \alpha, \beta)$ control everything; the persistence sum $\alpha + \beta$ is the load-bearing one. Below $1$ the process is stationary and has a well-defined long-run variance. Near $1$ shocks decay slowly (the integrated-GARCH limit). The model was the pre-stochastic-vol vol-of-vol workhorse and remains the standard empirical baseline.
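A minimal simulation sketch of the recursion $\sigma_t^2 = \omega + \alpha r_{t-1}^2 + \beta \sigma_{t-1}^2$, using the parameter values from the worked example below; it checks the clustering claim directly: raw returns barely autocorrelate, squared returns clearly do.

```python
import numpy as np

def simulate_garch(omega, alpha, beta, n, seed=0):
    """Simulate n returns from GARCH(1,1): sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the long-run variance
    r = np.empty(n)
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).mean() / (x * x).mean()

r = simulate_garch(1e-6, 0.05, 0.94, 50_000)
print(acf1(r))       # near zero: returns don't autocorrelate
print(acf1(r ** 2))  # clearly positive: magnitudes cluster
```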
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this.
A guess is fine; being briefly wrong about a problem makes the explanation
land harder when you read it. This appears once per tutorial; skip it
if you already know the trick.
Worked example
A GARCH(1,1) fit on SPX daily log-returns gives $\omega = 10^{-6}$, $\alpha = 0.05$, $\beta = 0.94$. (a) Verify stationarity. (b) Compute the long-run unconditional variance and the corresponding annualised volatility (use $252$ trading days per year). (c) Half-life in days of a volatility shock.
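The three parts can be worked through in a few lines. A sketch of the arithmetic: (a) stationarity needs $\alpha + \beta < 1$; (b) the long-run variance is $\omega / (1 - \alpha - \beta)$; (c) shocks decay geometrically at rate $\alpha + \beta$ per day, so the half-life solves $(\alpha+\beta)^h = 1/2$.

```python
import math

omega, alpha, beta = 1e-6, 0.05, 0.94

# (a) Stationarity: the persistence sum alpha + beta must be below 1.
persistence = alpha + beta            # 0.99 < 1, so covariance-stationary

# (b) Long-run unconditional variance and annualised volatility.
var_lr = omega / (1 - persistence)    # 1e-6 / 0.01 = 1e-4 (daily variance)
vol_daily = math.sqrt(var_lr)         # 0.01, i.e. 1% daily vol
vol_annual = vol_daily * math.sqrt(252)

# (c) Half-life in days: solve 0.99**h = 0.5.
half_life = math.log(0.5) / math.log(persistence)

print(f"persistence = {persistence:.2f}")          # 0.99
print(f"annualised vol = {vol_annual:.1%}")        # 15.9%
print(f"half-life = {half_life:.0f} days")         # 69 days
```

Note how persistence near $1$ makes the half-life explode: at $\alpha + \beta = 0.999$ the same calculation gives a half-life of roughly $693$ days.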
Reflection
Under stationarity, GARCH's conditional variance is a deterministic function of past returns; stochastic-vol models (Heston / Bergomi) give variance its own random driver. Why is this difference so important for option pricing: what does each model get right or wrong for OTM options?