A process is stationary when its statistical properties stop drifting over time. An irreducible finite Markov chain has a unique stationary distribution you can compute in closed form; an AR(1) is stationary iff $|\phi| < 1$; cointegration is the trick that turns two non-stationary series into one stationary spread.
Half the assumptions in applied statistics quietly require the data to be stationary: OLS standard errors, the CLT for time averages, mean-reversion trading, hypothesis tests on returns. When stationarity fails (unit roots, regime shifts, trending drifts), the usual machinery gives confidently wrong answers. This tutorial nails down what stationarity means, when a Markov chain has a stationary distribution, when an AR(1) is stationary, and what cointegration buys you when the raw series isn't stationary.
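A quick way to feel the $|\phi| < 1$ condition is to simulate. The sketch below (assuming numpy; the simulation function and seed are illustrative choices, not from the tutorial) compares a stable AR(1) with $\phi = 0.8$, whose variance settles near $\sigma^2/(1-\phi^2)$, against the unit-root case $\phi = 1$, a random walk whose variance grows without bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n=10_000, sigma=1.0):
    """Simulate x_t = phi * x_{t-1} + eps_t with eps_t ~ N(0, sigma^2)."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

# |phi| < 1: after burn-in, the sample variance sits near the
# stationary variance sigma^2 / (1 - phi^2) = 1 / 0.36 ~ 2.78.
stable = simulate_ar1(0.8)
print(stable[5000:].var())

# phi = 1 is a unit root (random walk): no stationary variance exists,
# so the trajectory wanders far further from zero than the stable one.
walk = simulate_ar1(1.0)
print(np.abs(stable).max(), np.abs(walk).max())
```

The second print is the point: the stable path stays within a few standard deviations of zero, while the random walk drifts arbitrarily far, which is exactly why unit roots break variance-based inference.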
Try first (productive failure)
Before the worked example: spend 60 seconds taking your best shot at this.
A guess is fine; being briefly wrong about a problem makes the explanation
land harder when you read it. This appears once per tutorial; skip
if you already know the trick.
Worked example
A two-state weather model has a stationary Markov transition matrix $P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$ (row $i$ is the conditional distribution given state $i$: state 1 is sunny, state 2 is rainy). Compute the long-run fraction of sunny days.
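The stationary distribution $\pi$ solves $\pi P = \pi$ with $\sum_i \pi_i = 1$, i.e. $\pi$ is a left eigenvector of $P$ for eigenvalue 1. A minimal numerical check of the worked example (assuming numpy; for a 2-state chain the closed form is $\pi_{\text{sunny}} = q/(p+q)$ with $p = 0.3$, $q = 0.4$):

```python
import numpy as np

# Transition matrix from the worked example: row i is the conditional
# distribution given state i; state 0 = sunny, state 1 = rainy.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Left eigenvector of P for eigenvalue 1 = right eigenvector of P^T.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.isclose(np.real(evals), 1.0)].ravel())
pi = pi / pi.sum()  # normalize to a probability vector

print(pi[0])  # long-run fraction of sunny days = 4/7 ~ 0.5714
```

The eigenvector route generalizes to any number of states; for two states the answer is just the closed form $0.4 / (0.3 + 0.4) = 4/7$.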
Practice 1 of 3
Type a fraction, decimal, or expression; mathjs parses it.
Reflection
Stationarity is a statement about the process, not about any single trajectory. Why does this matter when you fit an AR(1) to a single time series, and what is the (often hidden) assumption you have to make to justify the fit?
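One way to probe this question numerically: the hidden assumption is ergodicity, that the time average along a single trajectory converges to the ensemble mean. A sketch (assuming numpy; the parameter values and seed are illustrative) for an AR(1) with mean $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, mu, sigma, n = 0.8, 2.0, 1.0, 20_000

# AR(1) around a mean: x_t - mu = phi * (x_{t-1} - mu) + eps_t
x = np.empty(n)
x[0] = mu
eps = rng.normal(0.0, sigma, n)
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + eps[t]

# Ergodicity is what lets the time average of ONE observed path stand
# in for the ensemble mean mu -- the hidden assumption behind fitting
# an AR(1) to a single time series.
print(x.mean())  # close to mu = 2.0
```

If the process had a unit root or a regime shift, this time average would not settle down, and a fit from one trajectory would tell you nothing about the ensemble.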