Basic Concepts of Sampling Distributions

Definition 1: Let x be a random variable with normal distribution N(μ, σ). Now consider a random sample {x1, x2,…, xn} from this population. The mean of the sample (called the sample mean) is

x̄ = (x1 + x2 + … + xn)/n

The sample mean x̄ can be considered to be a number representing the mean of the actual sample taken, but it can also be considered to be a random variable representing the mean of any sample of size n from the population.

Observation: By Property 1 of Estimators, the mean of x̄ is μ (i.e. x̄ is an unbiased estimator of μ) even if the population being sampled is not normal. By Property 2 of Estimators, the variance of x̄ is σ²/n, and so the standard deviation of x̄ is

σ/√n

When the population is normal, we have the following stronger result.

Theorem 1: If x is a random variable with N(μ, σ) distribution and samples of size n are chosen, then the sample mean x̄ has normal distribution

N(μ, σ/√n)
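Theorem 1 can be checked empirically with a small simulation. The sketch below (standard-library Python only; the population N(200, 40) and sample size 16 are taken from Example 1 below) draws many samples of size n and confirms that the sample means cluster around μ with spread close to σ/√n:

```python
import random
import statistics

random.seed(1)
mu, sigma, n, trials = 200, 40, 16, 10_000

# Draw many samples of size n from N(mu, sigma); record each sample mean
means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

print(round(statistics.mean(means), 1))   # close to mu = 200
print(round(statistics.stdev(means), 1))  # close to sigma/sqrt(n) = 10
```

With 10,000 trials the estimates land within a few tenths of the theoretical values; increasing `trials` tightens them further.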

Definition 2: The standard deviation of the sample mean is called the standard error of the mean.

Observation: As the sample size increases, the standard error decreases, and so the precision of the sample mean as an estimator of the population mean improves.

Observation: See Special Charting Capabilities for how to graph the standard error of the mean.

Example 1: Test scores for a standardized test are distributed N(200, 40). If a random sample of 16 test papers is taken, what is the expected mean of the sample and what is the expected standard deviation of the sample around the mean (i.e. the standard error)? What if the sample has size 100?

The mean of the sample is expected to be 200 in either case. The standard error when n = 16 is 40/4 = 10, while the standard error when n = 100 is 40/10 = 4.
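The arithmetic of Example 1 can be verified in a couple of lines (a minimal Python sketch):

```python
import math

sigma = 40                       # population standard deviation
for n in (16, 100):
    se = sigma / math.sqrt(n)    # standard error of the mean
    print(n, se)                 # prints 16 10.0, then 100 4.0
```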

6 Responses to Basic Concepts of Sampling Distributions

  1. Madhur Devkota says:

I don’t understand one thing: why does the expected standard deviation of the sample reduce as n increases? So if I consider all 200 test papers, the expected sd of the sample will be 40/sqrt(200) ≈ 2.8. Shouldn’t the expected sd of the sample be the same as that of the population (i.e. 40), as I have included all the observations?

    • Madhur Devkota says:

I apologise for the blunder. I mistook 200 for the sample size, which is obviously not the case.
I don’t have any confusion now.

  2. Jonathan Bechtel says:

    Hi Charles,

    Why is it necessary to use the standard error instead of just using STDEV.S?

    Since STDEV.S returns the standard deviation of a sample, how is it that the standard error also returns the standard deviation of a sample but gives a different result?

    Given the way they’re worded I’d think they’re different versions of the same thing.



    • Charles says:

In this case, the standard error is equal to the standard deviation divided by the square root of the sample size. STDEV.S estimates the spread of the individual data values, while the standard error measures the spread of the sample mean around the population mean. The standard error is what you use based on the Central Limit Theorem.
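The distinction can be illustrated in a few lines (a sketch with hypothetical test scores; Python's `statistics.stdev` plays the role of Excel's STDEV.S):

```python
import math
import statistics

scores = [210, 195, 180, 225, 200, 190, 205, 215]  # hypothetical sample

s = statistics.stdev(scores)      # sample standard deviation (what STDEV.S returns)
se = s / math.sqrt(len(scores))   # standard error of the mean

print(round(s, 2), round(se, 2))
```

The standard deviation describes how much individual scores vary; the standard error, smaller by a factor of √n, describes how much the sample mean itself would vary from sample to sample.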

  3. Gilles says:

    Dear Charles,

first of all, thank you very much for your extremely interesting website: I’m learning statistics again!
Regarding this page, I was wondering why Theorem 1 is a stronger result than the observations given above, since they already state that the mean of the sample mean x bar is µ and its standard deviation is sigma / sqrt(n). More precisely, if these rules apply generally, then they should also apply to N(µ, sigma), and hence yield Theorem 1 directly. Why is it “stronger”?

    Thanks in advance,

    Best regards,

