Statistics, law of large numbers

The law of large numbers states that as the size of our sample grows to infinity, the sample mean approaches the true mean \( \mu_X^{\phantom X} \) of the chosen PDF:

$$ \lim_{n\to\infty}\bar{x}_n = \mu_X^{\phantom X} $$
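This convergence is easy to see numerically. The sketch below draws samples of increasing size from an exponential PDF with true mean \( \mu_X^{\phantom X} = 2 \); the distribution, the seed, and the sample sizes are illustrative choices, not part of the text.

```python
import numpy as np

# Illustrative setup (assumption): an exponential PDF with true mean mu = 2.0.
rng = np.random.default_rng(0)
mu = 2.0

def sample_mean(n):
    """Draw n samples from the exponential PDF and return their mean."""
    return rng.exponential(scale=mu, size=n).mean()

# As n grows, the sample mean drifts toward the true mean mu.
for n in (10, 1_000, 100_000):
    print(f"n = {n:>7}: sample mean = {sample_mean(n):.4f}")
```

For small \( n \) the printed values scatter noticeably around 2, while for the largest sample they agree with \( \mu_X^{\phantom X} \) to a few parts in a thousand.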

The sample mean \( \bar{x}_n \) therefore serves as an estimator of the true mean \( \mu_X^{\phantom X} \).

What we need to find out is how good an approximation \( \bar{x}_n \) is to \( \mu_X^{\phantom X} \). In any stochastic measurement, an estimated mean is of no use to us without a measure of its error: a quantity that tells us how well the result can be reproduced in another experiment. We are therefore interested in the PDF of the sample mean itself. Its standard deviation is a measure of the spread of sample means, and we will simply call it the error of the sample mean, or just the sample error, and denote it by \( \mathrm{err}_X^{\phantom X} \). In practice we can only produce an estimate of the sample error, since the exact value would require knowledge of the true underlying PDF, which we usually do not have.
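A common estimate of this kind, sketched below under the assumption of independent samples, is the standard error \( s/\sqrt{n} \), where \( s \) is the sample standard deviation. The normal PDF and all numerical parameters are illustrative assumptions, and the repeated-experiment cross-check is added here only to show that the estimate does measure the spread of sample means.

```python
import numpy as np

# Illustrative setup (assumption): a standard normal PDF, n = 10_000 samples.
rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(loc=0.0, scale=1.0, size=n)

# Estimate the error of the sample mean from the data itself:
# err ~ s / sqrt(n), with s the sample standard deviation (ddof=1).
xbar = x.mean()
err = x.std(ddof=1) / np.sqrt(n)
print(f"sample mean = {xbar:.4f} +/- {err:.4f}")

# Cross-check: repeat the "experiment" many times and measure the
# actual spread of the resulting sample means.
means = rng.normal(size=(1_000, n)).mean(axis=1)
print(f"observed spread of sample means = {means.std():.4f}")
```

The two printed error values should agree closely (both near \( 1/\sqrt{n} = 0.01 \) here), which is exactly the sense in which \( s/\sqrt{n} \) estimates \( \mathrm{err}_X^{\phantom X} \) from a single data set.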