
Markov chains: stationary distributions and long-run fractions

What you can predict next depends only on where you are now. Long-run fractions live in the stationary distribution: find it once and you're done.

Method · Markov Chain
Intro
Directed graph of a Markov chain with mouse states, animating transitions between nodes
Nijdam (Dutch Wikipedia user), https://commons.wikimedia.org/wiki/File:NYW-Markovketen-Muis.gif · CC BY-SA 3.0 / GFDL · Wikimedia Commons

When a problem asks “in the long run, what fraction of time does the chain spend in state $X$?”, you want the stationary distribution $\pi$, the probability vector satisfying $\pi = \pi P$. For a finite irreducible chain $\pi$ is unique, and $\pi_i$ is exactly the long-run fraction of time the chain spends in state $i$ (aperiodicity is only needed for $P^n$ itself to converge). Recipe: write down $P$, write the balance equations $\pi = \pi P$, append the normalisation $\sum_i \pi_i = 1$, solve. This is distinct from first-step analysis, which anchors at absorbing boundaries to compute hitting probabilities.
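The recipe translates directly into linear algebra: the balance equations plus the normalisation form an overdetermined linear system. A minimal sketch, using an invented 3-state transition matrix (not from the text):

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

# Solve pi @ P = pi together with sum(pi) = 1:
# stack (P^T - I) with a row of ones and solve by least squares.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)        # stationary distribution
print(pi @ P)    # equals pi, so the balance equations hold
```

The least-squares stacking trick avoids hand-picking which balance equation to drop; for a chain this small, substituting and solving by hand works just as well.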

More examples

A handful of harder problems on the same theme. Click any prompt to reveal the solution sketch.

Recolouring marbles. A box holds $n$ marbles, one of each of $n$ colours. Repeat the following step: draw an ordered pair of distinct marbles uniformly at random, repaint the first to match the colour of the second, and put both back. What is the expected number of steps until every marble in the box shares one common colour?
$E[N] = (n - 1)^2$. Fix one colour and let $X_k$ be its count after $k$ steps. From state $i$ the chain moves to $i + 1$ with probability $p_i = \tfrac{i(n - i)}{n(n - 1)}$, to $i - 1$ with the same probability, and otherwise stays, so $X_k$ is a martingale and the fixed colour eventually takes over with probability $h_i = i/n$. Condition on that event: the Doob $h$-transform changes the up- and down-probabilities to $\hat p_i = p_i \tfrac{i+1}{i}$ and $\hat q_i = p_i \tfrac{i-1}{i}$. Writing $T_i$ for the conditioned expected time to go from $i$ to $i + 1$, the birth-death recursion $T_i = \tfrac{1}{\hat p_i} + \tfrac{\hat q_i}{\hat p_i} T_{i-1}$ (note $\hat q_1 = 0$) sums to $T_1 + \cdots + T_{n-1} = (n - 1)^2$. By symmetry over colours this conditional expectation equals the unconditional $E[N]$.
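The conditioned recursion can be checked mechanically. A sketch (function names like `expected_steps` are mine, not from the text): evaluate the recursion exactly with rational arithmetic and compare against $(n-1)^2$, then Monte Carlo the raw process as a sanity check.

```python
import random
from fractions import Fraction

def expected_steps(n):
    """Exact E[N] via the conditioned birth-death recursion."""
    p = lambda i: Fraction(i * (n - i), n * (n - 1))
    T_prev = Fraction(0)   # T_0 never contributes: the down-rate at i=1 is 0
    total = Fraction(0)
    for i in range(1, n):
        up = p(i) * Fraction(i + 1, i)     # conditioned up-probability
        down = p(i) * Fraction(i - 1, i)   # conditioned down-probability
        T = 1 / up + (down / up) * T_prev  # expected time from i to i+1
        total += T
        T_prev = T
    return total

# Exact check of the closed form for small n.
for n in range(2, 8):
    assert expected_steps(n) == (n - 1) ** 2

def simulate(n, rng):
    """One run of the raw marble process; steps until monochrome."""
    colours = list(range(n))               # one marble of each colour
    steps = 0
    while len(set(colours)) > 1:
        i, j = rng.sample(range(n), 2)     # ordered pair of distinct marbles
        colours[i] = colours[j]            # repaint the first to match the second
        steps += 1
    return steps

rng = random.Random(0)
mean = sum(simulate(4, rng) for _ in range(20000)) / 20000
print(mean)   # close to (4 - 1)**2 = 9
```

Using `Fraction` keeps the recursion exact, so the comparison with $(n-1)^2$ is an equality test rather than a floating-point tolerance.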