**Theorem 1 – Central Limit Theorem**: If *x* has a distribution with mean *μ* and standard deviation *σ*, then for *n* sufficiently large, the variable

*z* = (*x̄* − *μ*) / (*σ*/√*n*)

has a distribution which is approximately the standard normal distribution.
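The theorem can be checked numerically. The sketch below (the function name `clt_demo` and the choice of an exponential population, which is clearly non-normal, are illustrative assumptions) draws repeated samples, standardizes each sample mean as above, and verifies that the resulting *z* values have mean near 0 and standard deviation near 1.

```python
import math
import random

def clt_demo(n=30, trials=2000, seed=42):
    """Standardize many sample means and summarize the z values.

    Population: exponential with rate 1, so mu = 1 and sigma = 1
    (a right-skewed, non-normal distribution).
    """
    rng = random.Random(seed)
    mu, sigma = 1.0, 1.0
    zs = []
    for _ in range(trials):
        xbar = sum(rng.expovariate(1.0) for _ in range(n)) / n
        # Central Limit Theorem: z = (xbar - mu) / (sigma / sqrt(n))
        # should be approximately N(0, 1) for large enough n.
        zs.append((xbar - mu) / (sigma / math.sqrt(n)))
    m = sum(zs) / trials
    sd = math.sqrt(sum((z - m) ** 2 for z in zs) / (trials - 1))
    return m, sd
```

Running `clt_demo()` should give a mean close to 0 and a standard deviation close to 1, even though the underlying population is heavily skewed.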

**Observation**: A proof of the Central Limit Theorem involves calculus and is not given here.

**Observation**: The larger the value of *n*, the better the approximation. For practical purposes, the approximation is quite good when *n* ≥ 30, and it is usually adequate even for smaller samples (say *n* ≥ 20).

**Corollary 1**: If *x* has a distribution with mean *μ* and standard deviation *σ*, then the distribution of the sample mean of *x* is approximately *N*(*μ*, *σ*²/*n*) for large enough *n*.

**Observation**: The standard deviation of the sample mean (i.e. the standard error of the mean), namely *σ*/√*n*, is smaller than the standard deviation of the population, namely *σ*. In fact, as *n* gets bigger and bigger, the standard error of the mean gets smaller and smaller, with a value that approaches zero, a relationship that is usually denoted

*σ*/√*n* → 0 as *n* → ∞
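The shrinking of the standard error can be seen directly by evaluating *σ*/√*n* for increasing *n* (the helper name `standard_error` is an illustrative assumption):

```python
import math

def standard_error(sigma, n):
    # Standard error of the mean: sigma / sqrt(n)
    return sigma / math.sqrt(n)

# With sigma = 10, quadrupling n halves the standard error:
ses = [standard_error(10, n) for n in (25, 100, 400, 10000)]
# -> [2.0, 1.0, 0.5, 0.1]
```

Note the pattern: to cut the standard error in half, the sample size must be multiplied by four, since *n* appears under a square root.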
A consequence of this observation is the Law of Large Numbers.

**Law of Large Numbers**: The larger the size of the sample, the more likely the mean of the sample will be close to the mean of the population.
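The Law of Large Numbers can likewise be illustrated by simulation. The sketch below (the function name `frac_within` and the choice of a uniform population are illustrative assumptions) estimates the probability that the sample mean lands within a small tolerance of the population mean, for two different sample sizes.

```python
import random

def frac_within(n, eps=0.05, trials=1000, seed=1):
    """Fraction of samples of size n whose mean falls within eps of the
    population mean. Population: uniform on [0, 1], so mu = 0.5."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xbar = sum(rng.random() for _ in range(n)) / n
        if abs(xbar - 0.5) <= eps:
            hits += 1
    return hits / trials
```

Comparing `frac_within(10)` with `frac_within(200)` shows that the larger sample is far more likely to produce a mean close to the population mean, which is exactly the statement of the law.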

**Observation**: The Central Limit Theorem is based on the hypothesis that sampling is done with replacement. When sampling is done without replacement, the Central Limit Theorem works just fine provided the population size is much larger than the sample size. When this is not the case, it is better to use the following standard error:
(*σ*/√*n*) · √((*n_p* − *n*)/(*n_p* − 1))

where *n_p* is the size of the population.
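The corrected standard error can be computed as follows (the function name `standard_error_fpc` is an illustrative assumption; the factor √((*n_p* − *n*)/(*n_p* − 1)) is the finite population correction described above):

```python
import math

def standard_error_fpc(sigma, n, n_p):
    """Standard error of the mean with the finite population correction,
    for sampling without replacement from a population of size n_p."""
    return (sigma / math.sqrt(n)) * math.sqrt((n_p - n) / (n_p - 1))
```

Two sanity checks: when *n_p* is much larger than *n*, the correction factor is close to 1 and the result is essentially *σ*/√*n*; and when the sample is the whole population (*n* = *n_p*), the standard error is 0, since the sample mean then equals the population mean exactly.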
