Animation: a 1-D Kalman filter (Suki907, CC BY-SA 3.0, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Kalman_filter_animation,_1d.gif)
Strip away the matrices and a Kalman filter is just inverse-variance weighting applied step by step. Two Gaussian sources of information about the same quantity merge with weights proportional to their precisions, and the posterior variance shrinks. The Kalman gain $K$ answers exactly one question: "how much do I trust the measurement versus my prior?" The same idea scales up to full state-space models and underlies sensor fusion, GPS, and robotics.
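The one-dimensional update can be sketched in a few lines. This is a minimal illustration, not a full filter; the function name and the numeric values (a prior of $10$ with variance $4$, a measurement of $11$ with variance $1$) are chosen here for demonstration.

```python
def kalman_update(mu, var, z, r):
    """Fuse a Gaussian prior N(mu, var) with a measurement z of noise variance r."""
    K = var / (var + r)           # Kalman gain: fraction of trust in the measurement
    mu_post = mu + K * (z - mu)   # posterior mean: prior nudged toward z by K
    var_post = (1 - K) * var      # posterior variance always shrinks
    return mu_post, var_post

# Prior N(10, 4), measurement 11 with variance 1:
mu, var = kalman_update(10.0, 4.0, 11.0, 1.0)
print(mu, var)  # 10.8 0.8
```

Note how $K = 4/(4+1) = 0.8$: the measurement is four times as precise as the prior, so it gets four times the weight.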
Try first (productive failure)
Before the worked example, spend 60 seconds taking your best shot at this.
A guess is fine: being briefly wrong about a problem makes the explanation
land harder when you read it. This prompt appears once per tutorial; skip it
if you already know the trick.
Worked example
Two sensors measure an unknown temperature. Sensor A reads $10$ with noise std $\sigma_A = 2$; Sensor B reads $11$ with noise std $\sigma_B = 1$. What is the optimal combined estimate of the temperature?
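Worked through with inverse-variance weighting directly (each reading is weighted by its precision, $1/\sigma^2$), using the numbers from the example:

```python
# Sensor readings and noise standard deviations from the example.
x_a, sigma_a = 10.0, 2.0
x_b, sigma_b = 11.0, 1.0

# Precisions (inverse variances) serve as the weights.
w_a = 1 / sigma_a**2   # 0.25
w_b = 1 / sigma_b**2   # 1.0

estimate = (w_a * x_a + w_b * x_b) / (w_a + w_b)  # 10.8
variance = 1 / (w_a + w_b)                        # 0.8
```

Sensor B is four times as precise as Sensor A ($1$ vs $0.25$), so the combined estimate of $10.8$ lands much closer to B's reading of $11$, and the combined variance of $0.8$ is smaller than either sensor's alone.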
Practice 1 of 3. Type a fraction, decimal, or expression; mathjs parses it.
Reflection
Why does the Kalman gain $K$ approach $1$ when the measurement is much more precise than the prior, and approach $0$ when the prior is precise? Where else have you seen inverse-variance weighting show up (meta-analysis, ensemble forecasts)?