Wrapping it up

Thus, the central limit theorem states that the PDF \( \tilde{p}(z) \) of the average of \( m \) random values drawn from a PDF \( p(x) \) is a normal distribution whose mean is the mean value of \( p(x) \) and whose variance is the variance of \( p(x) \) divided by \( m \), the number of values used to compute \( z \).
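
As a minimal numerical sketch of this statement (the exponential PDF and all sample sizes below are illustrative choices made here, not taken from the text), we can average \( m \) values drawn from a clearly non-normal distribution and check that the averages indeed have mean \( \mu \) and variance \( \sigma^2/m \):

```python
import numpy as np

rng = np.random.default_rng(2024)

m = 50         # number of values in each average
n = 100_000    # number of averages z we compute

mu, sigma2 = 1.0, 1.0                        # mean and variance of the exponential PDF
x = rng.exponential(scale=1.0, size=(n, m))  # n independent samples of m values each
z = x.mean(axis=1)                           # one average per sample

print(f"mean of z:     {z.mean():.4f}   (expected {mu})")
print(f"variance of z: {z.var():.5f}   (expected sigma^2/m = {sigma2 / m})")
```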

The central limit theorem leads to the well-known expression for the standard deviation of the computed mean, given by

$$ \sigma_m= \frac{\sigma}{\sqrt{m}}. $$
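
A quick numerical check of this \( 1/\sqrt{m} \) scaling (again an illustrative sketch; the uniform PDF and the sample sizes are assumptions made here) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0 / np.sqrt(12.0)   # standard deviation of the uniform PDF on [0, 1]

for m in (10, 40, 160, 640):
    z = rng.random((20_000, m)).mean(axis=1)   # 20 000 averages of m values each
    print(f"m = {m:4d}:  std(z) = {z.std():.5f}   sigma/sqrt(m) = {sigma / np.sqrt(m):.5f}")
```

Each quadrupling of \( m \) should halve the spread of the averages.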

The latter is true only if the average value is known exactly, which is obtained only in the limit \( m\rightarrow \infty \). Because the mean and the variance are themselves quantities estimated from the data, we obtain the familiar expression from statistics (the so-called Bessel correction)

$$ \sigma_m\approx \frac{\sigma}{\sqrt{m-1}}. $$
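
As a sketch of how this is applied in practice (the NumPy calls and parameters below are illustrative choices, not from the text), note that dividing the biased sample standard deviation by \( \sqrt{m-1} \) gives the same number as dividing the Bessel-corrected one by \( \sqrt{m} \):

```python
import numpy as np

rng = np.random.default_rng(7)
m = 1000
x = rng.normal(loc=0.0, scale=2.0, size=m)   # one sample of m measurements

sigma_biased = x.std(ddof=0)                 # sqrt(sum((x - xbar)**2) / m)
error_of_mean = sigma_biased / np.sqrt(m - 1)

# Equivalent form using the Bessel-corrected standard deviation (ddof=1):
error_alt = x.std(ddof=1) / np.sqrt(m)

print(f"sigma_m = {error_of_mean:.5f}   (exact value: {2.0 / np.sqrt(m):.5f})")
print(f"same number via ddof=1: {error_alt:.5f}")
```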

In many cases, however, the above estimate for the standard deviation may be too simplistic, in particular if correlations are strong. Keep in mind that we have assumed that the variables \( x \) are independent and identically distributed. This is obviously not always the case. For example, the random numbers (or, more precisely, pseudorandom numbers) we generate in various calculations always exhibit some correlations; the sketch below illustrates the consequences.
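The following sketch makes the point with a toy AR(1) sequence and a simple blocking analysis; both the model and the block size are assumptions made for this example and do not come from the text. For correlated data the naive \( \sigma/\sqrt{m} \) estimate is too optimistic, while averaging over blocks longer than the correlation length recovers a more honest error:

```python
import numpy as np

rng = np.random.default_rng(1)
m, phi = 2**16, 0.9                 # sequence length and correlation strength
eps = rng.normal(size=m)
x = np.empty(m)
x[0] = eps[0]
for i in range(1, m):               # AR(1): each value remembers the previous one
    x[i] = phi * x[i - 1] + eps[i]

naive = x.std(ddof=1) / np.sqrt(m)  # pretends the samples are independent

blocks = x.reshape(-1, 256).mean(axis=1)   # averages over blocks of 256 values
blocked = blocks.std(ddof=1) / np.sqrt(blocks.size)

print(f"naive error estimate:   {naive:.5f}")
print(f"blocked error estimate: {blocked:.5f}   (larger, and closer to the truth)")
```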

The theorem is satisfied by a large class of PDFs. Note, however, that for a finite \( m \) it is not always possible to find a closed-form analytic expression for \( \tilde{p}(z) \).