**Definition 1**: Let *x* be a random variable with normal distribution *N*(*μ, σ*). Now consider a random sample {*x₁*, *x₂*, …, *xₙ*} from this population. The mean of the sample (called the **sample mean**) is

*x̄* = (*x₁* + *x₂* + ⋯ + *xₙ*)/*n*

*x̄* can be considered to be a number representing the mean of the actual sample taken, but it can also be considered to be a random variable representing the mean of any sample of size *n* from the population.

**Observation**: By Property 1 of Estimators, the mean of *x̄* is *μ* (i.e. *x̄* is an unbiased estimator of *μ*) even if the population being sampled is not normal. By Property 2 of Estimators, the variance of *x̄* is *σ²*/*n*, and so the standard deviation of *x̄* is *σ*/√*n*.
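As a sketch, the sample mean and its standard error can be computed directly; the sample values below are illustrative, not taken from the text:

```python
import math
import statistics

# Hypothetical sample of n = 5 observations (illustrative values only)
sample = [198, 205, 201, 189, 212]
n = len(sample)

x_bar = statistics.mean(sample)     # sample mean, an unbiased estimate of mu
s = statistics.stdev(sample)        # sample standard deviation (n - 1 denominator)
std_err = s / math.sqrt(n)          # standard error of the mean, estimating sigma/sqrt(n)

print(x_bar, std_err)
```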

When the population is normal, we have the following stronger result.

**Theorem 1**: If *x* is a random variable with *N*(*μ, σ*) distribution and samples of size *n* are chosen, then the sample mean *x̄* has normal distribution *N*(*μ, σ*/√*n*).
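A quick Monte Carlo sketch of Theorem 1, with illustrative choices of *μ*, *σ*, *n*, and the number of trials: the means of repeated samples from *N*(*μ, σ*) should themselves look like draws from *N*(*μ, σ*/√*n*).

```python
import random
import statistics

# Illustrative parameters: mu = 200, sigma = 40, samples of size n = 16
random.seed(1)
mu, sigma, n, trials = 200, 40, 16, 20000

# Draw many samples of size n and record each sample mean
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

print(statistics.mean(means))   # close to mu = 200
print(statistics.stdev(means))  # close to sigma / sqrt(n) = 10
```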


**Definition 2**: The standard deviation of the sample mean is called the **standard error** of the mean.

**Observation**: As the sample size increases the standard error decreases, and so the precision of the sample mean as an estimator of the population mean improves.

**Observation**: See Special Charting Capabilities for how to graph the standard error of the mean.

**Example 1**: Test scores for a standardized test are distributed *N*(200, 40). If a random sample of 16 test papers is taken, what is the expected mean of the sample and what is the expected standard deviation of the sample mean (i.e. the standard error)? What if the sample has size 100?

The mean of the sample is expected to be 200 in either case. The standard error when *n* = 16 is 40/4 = 10, while the standard error when *n* = 100 is 40/10 = 4.
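The arithmetic of Example 1 can be checked directly:

```python
import math

sigma = 40  # population standard deviation from Example 1

for n in (16, 100):
    se = sigma / math.sqrt(n)
    print(f"n = {n}: standard error = {se}")  # 10.0 for n = 16, 4.0 for n = 100
```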

Dear Charles,

First of all, thank you very much for your extremely interesting website: I’m learning statistics again!

Regarding this page, I was wondering why Theorem 1 is a stronger result than those given above, since they already state that the mean of the sample mean x-bar equals µ and its standard deviation is sigma / sqrt(n). More precisely, if these rules apply generally, then they should also apply to a N(µ, sigma) population, and hence yield Theorem 1 directly. Why is it “stronger”?

Thanks in advance,

Best regards,

Gilles

Giles,

It is stronger because the theorem also asserts that x-bar is normally distributed.

Charles

Hi Charles,

Why is it necessary to use the standard error instead of just using STDEV.S?

Since STDEV.S returns the standard deviation of a sample, how is it that the standard error also returns the standard deviation of a sample but gives a different result?

Given the way they’re worded I’d think they’re different versions of the same thing.

Thanks,

Jonathan

Jonathan,

In this case, the standard error equals the sample standard deviation (what STDEV.S returns) divided by the square root of the sample size. STDEV.S estimates the spread of the individual observations, while the standard error is the standard deviation of the sample mean; it is the standard error that you use when applying the Central Limit Theorem.

Charles
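The distinction in this exchange can be sketched in Python, with `statistics.stdev` playing the role of Excel’s STDEV.S; the scores below are made up for illustration:

```python
import math
import statistics

# Hypothetical test scores (illustrative values only)
scores = [190, 210, 195, 205, 200, 215, 185, 208]
n = len(scores)

s = statistics.stdev(scores)   # sample standard deviation, as STDEV.S would compute
se = s / math.sqrt(n)          # standard error of the mean

print(s, se)  # the standard error is smaller by a factor of sqrt(n)
```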

I don’t understand one thing: why does the expected standard deviation of the sample mean decrease as n increases? If I consider all 200 test papers, the expected sd of the sample mean will be 40/sqrt(200) = 2.8. Shouldn’t the expected sd be the same as that of the population (i.e. 40), since I have included all the observations?

I apologise for the blunder. I thought 200 was the sample size, which is obviously not the case.

I no longer have any confusion.

How can we use these formulas with a sample whose size is less than 30, e.g. n = 16?

Sampath,

Use the formulas with whatever value of n you have. However, it is usually better to use the t distribution instead, especially with small samples. See One Sample t Test.

Charles