Markov Chains and Smoothing

December 13, 2025

Let \(\left\{ X_t \right\}_{t \geq 1}\) be a Markov chain, and write \(x_{i:j} = \left(x_i, x_{i+1}, \dots, x_j \right)\). For an interior index \(1 < k < T\), the chain rule and the Markov property give

\[ \begin{align*} \pi\left(x_{1:T}\right) &= \pi\left(x_{k+2:T} | x_{1:k+1} \right)\pi\left(x_{k+1}|x_{1:k} \right)\pi\left(x_k | x_{1:k-1} \right)\pi\left(x_{1:k-1}\right)\\ &= \pi\left(x_{k+2:T} | x_{k+1} \right)\pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right)\pi\left(x_{1:k-1}\right) \end{align*} \]
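As a quick numerical sanity check (not part of the derivation itself), this factorization can be verified on a small discrete-state chain. The three-state chain below, with initial distribution `mu` and transition matrix `P`, is an arbitrary illustrative choice, as are `T` and `k`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary (assumed) 3-state Markov chain: initial distribution mu, transition matrix P.
mu = np.array([0.2, 0.5, 0.3])
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

T, k = 5, 3  # chain length and an interior index 1 < k < T (1-based)

def joint(x):
    """pi(x_{1:T}) via the chain rule for a homogeneous Markov chain (x is a 0-based sequence)."""
    p = mu[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        p *= P[a, b]
    return p

x = list(rng.integers(0, 3, size=T))  # one arbitrary path x_1, ..., x_T
lhs = joint(x)

# pi(x_{1:k-1})
p_past = mu[x[0]]
for a, b in zip(x[:k - 2], x[1:k - 1]):
    p_past *= P[a, b]

# pi(x_k | x_{k-1}) and pi(x_{k+1} | x_k)
p_mid = P[x[k - 2], x[k - 1]] * P[x[k - 1], x[k]]

# pi(x_{k+2:T} | x_{k+1})
p_future = 1.0
for a, b in zip(x[k:T - 1], x[k + 1:T]):
    p_future *= P[a, b]

assert np.isclose(lhs, p_past * p_mid * p_future)
```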

Using this factorization,

\[ \begin{align*} \pi\left(x_k | x_{1:k-1}, x_{k+1:T}\right) &= \frac{\pi\left(x_{1:T}\right)}{\pi\left(x_{1:k-1}, x_{k+1:T} \right)}\\ &= \frac{\pi\left(x_{1:T}\right)}{\int\pi\left(x_{1:T}\right) dx_k}\\ &= \frac{\pi\left(x_{k+2:T} | x_{k+1} \right)\pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right)\pi\left(x_{1:k-1}\right)}{\int \pi\left(x_{k+2:T} | x_{k+1} \right)\pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right)\pi\left(x_{1:k-1}\right) dx_k} \\ &= \frac{\pi\left(x_{k+2:T} | x_{k+1} \right)\pi\left(x_{1:k-1}\right)}{\pi\left(x_{k+2:T} | x_{k+1} \right) \pi\left(x_{1:k-1}\right) } \times \frac{\pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right)}{\int \pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right) dx_k}\\ &= \frac{\pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right)}{\int \pi\left(x_{k+1}|x_{k} \right)\pi\left(x_k | x_{k-1} \right) dx_k} \\ &= \frac{\pi\left(x_{k+1}, x_k | x_{k-1} \right)}{\pi\left( x_{k+1} | x_{k-1} \right)}\\ &= \pi\left( x_k | x_{k-1}, x_{k+1} \right) \end{align*} \] In the fourth line, the factors \(\pi\left(x_{k+2:T} | x_{k+1} \right)\) and \(\pi\left(x_{1:k-1}\right)\) do not depend on \(x_k\), so they pull out of the integral and cancel. Conditioned on all past and future states, all information about \(x_k\) is therefore contained in its nearest neighbors \(x_{k-1}\) and \(x_{k+1}\).
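A similar brute-force check, again on an arbitrary three-state chain, confirms that the full conditional of \(x_k\) given every other state matches the nearest-neighbor conditional \(\pi\left( x_k | x_{k-1}, x_{k+1} \right)\). The values of `mu`, `P`, and the fixed states are illustrative assumptions, not taken from the note.

```python
import numpy as np

# Same arbitrary 3-state chain as in the previous sketch.
mu = np.array([0.2, 0.5, 0.3])
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
S, T, k = 3, 5, 3  # number of states, chain length, interior index (1-based)

def joint(x):
    """pi(x_{1:T}) for a homogeneous Markov chain (x is a 0-based sequence)."""
    p = mu[x[0]]
    for a, b in zip(x[:-1], x[1:]):
        p *= P[a, b]
    return p

# Fix arbitrary values for every state except x_k (position k-1 in 0-based terms).
x = [0, 2, None, 1, 2]

# Full conditional pi(x_k | x_{1:k-1}, x_{k+1:T}) by brute-force normalization over x_k.
weights = np.array([joint(x[:k - 1] + [s] + x[k:]) for s in range(S)])
full_conditional = weights / weights.sum()

# Nearest-neighbor conditional, proportional to pi(x_k | x_{k-1}) pi(x_{k+1} | x_k).
w = np.array([P[x[k - 2], s] * P[s, x[k]] for s in range(S)])
neighbor_conditional = w / w.sum()

assert np.allclose(full_conditional, neighbor_conditional)
print(full_conditional)
```

The brute-force conditional only needs the joint up to a constant, which is why normalizing the unnormalized weights over the \(S\) possible values of \(x_k\) suffices.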