Blocking Transformations

We now define blocking transformations. The idea is to take the mean of subsequent pairs of elements of \( \vec{X} \) and form a new vector \( \vec{X}_1 \). Continuing in the same way, taking the mean of subsequent pairs of elements of \( \vec{X}_1 \), we obtain \( \vec{X}_2 \), and so on. Define \( \vec{X}_i \) recursively by:

$$ \begin{align} (\vec{X}_0)_k &\equiv (\vec{X})_k \nonumber \\ (\vec{X}_{i+1})_k &\equiv \frac{1}{2}\Big( (\vec{X}_i)_{2k-1} + (\vec{X}_i)_{2k} \Big) \qquad \text{for all} \qquad 0 \leq i \leq d-2 \tag{22} \end{align} $$

The vector \( \vec{X}_k \) is the result of applying \( k \) blocking transformations to \( \vec{X} \). We now have \( d \) vectors \( \vec{X}_0, \vec{X}_1,\cdots,\vec X_{d-1} \) containing the successive averages of observations. It turns out that if the components of \( \vec{X} \) form a stationary time series, then the components of \( \vec{X}_i \) form a stationary time series for all \( 0 \leq i \leq d-1 \).
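To make the recursion in (22) concrete, here is a minimal sketch of how the \( d \) blocked vectors can be generated, assuming the number of observations is a power of two, \( n = 2^d \). The function name `blocking_transformations` and the NumPy-based implementation are illustrative choices, not prescribed by the text.

```python
import numpy as np

def blocking_transformations(X):
    """Return the list [X_0, X_1, ..., X_{d-1}], assuming len(X) = n = 2^d."""
    X = np.asarray(X, dtype=float)
    d = int(np.log2(X.size))
    levels = [X]                      # X_0 is the original data
    for _ in range(d - 1):
        Xi = levels[-1]
        # Average subsequent pairs: (X_{i+1})_k = ((X_i)_{2k-1} + (X_i)_{2k}) / 2
        levels.append(0.5 * (Xi[0::2] + Xi[1::2]))
    return levels
```

With \( n = 2^d \) observations the loop produces exactly \( d \) vectors, the last of which, \( \vec{X}_{d-1} \), contains two elements.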

We can then compute the autocovariance, variance, sample mean, and number of observations for each \( i \). Let \( \gamma_i \), \( \sigma_i^2 \), and \( \overline{X}_i \) denote the autocovariance, variance, and average of the elements of \( \vec{X}_i \), and let \( n_i \) be the number of elements of \( \vec{X}_i \). It follows by induction that \( n_i = n/2^i \).
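Continuing the sketch above (again an illustration under the same assumptions, not the text's prescribed implementation), the per-level quantities \( n_i \), \( \overline{X}_i \), \( \sigma_i^2 \), and \( \gamma_i \) can be collected as follows; the lag-one sample autocovariance is used for \( \gamma_i \), and `np.var` gives the biased sample variance.

```python
def blocking_statistics(levels):
    """Compute (n_i, mean_i, sigma2_i, gamma_i) for each blocked vector X_i."""
    stats = []
    for Xi in levels:
        n_i = Xi.size                  # number of elements, n_i = n / 2^i
        mean_i = np.mean(Xi)           # sample mean of X_i
        sigma2_i = np.var(Xi)          # variance sigma_i^2 (biased estimator)
        # Lag-one sample autocovariance gamma_i of X_i
        gamma_i = np.mean((Xi[:-1] - mean_i) * (Xi[1:] - mean_i))
        stats.append((n_i, mean_i, sigma2_i, gamma_i))
    return stats
```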