Single Sample Hypothesis Testing

Suppose we take a sample of size n from a normal population N(μ, σ) and ask whether the sample mean differs significantly from the overall population mean.

This is equivalent to testing the following null hypothesis H0:

$H_0: \mu_{\bar{x}} = \mu$

We use a two-tailed hypothesis, although sometimes a one-tailed hypothesis is preferred (see the examples below). By Theorem 1 of Basic Concepts of Sampling Distributions, the sample mean x̄ has the normal distribution

$\bar{x} \sim N(\mu, \sigma/\sqrt{n})$

We can use this fact directly to test the null hypothesis or employ the following test statistic (i.e. the z-score):

$z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}}$
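For readers working outside Excel, here is a minimal sketch of this statistic in Python, assuming scipy-style tools are available (the article itself relies on Excel's NORM* worksheet functions); the numbers used are those from Example 1 below.

```python
from math import sqrt

def one_sample_z(x_bar, mu, sigma, n):
    """z statistic for a single-sample z-test with known population sigma."""
    se = sigma / sqrt(n)          # standard error of the mean
    return (x_bar - mu) / se

# Values from Example 1 below: population N(80, 20), sample of n = 60 with mean 75
print(one_sample_z(75, 80, 20, 60))   # about -1.94
```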

Example 1: National norms for a school mathematics proficiency exam are distributed N(80, 20). A random sample of 60 students from New York City shows a mean proficiency score of 75. Do these sample scores differ significantly from the overall population mean?

We would like to determine whether the deviation of the sample mean from the expected value of 80 is due to chance or is significant. We consider three approaches, each based on a different initial hypothesis.

Approach 1: Suppose that before any data were collected we had postulated that a particular sample would have a mean lower than the population mean (one-tailed null hypothesis H0).

$H_0: \mu_{\bar{x}} \geq 80$

Note that we have stated the null hypothesis in a form that we want to reject; i.e. we are hoping to prove the alternative hypothesis H1:

$H_1: \mu_{\bar{x}} < 80$

The distribution of the sample mean x̄ is N(μ, σ/√n), where μ = 80, σ = 20 and n = 60. Since the standard error is σ/√n = 20/√60 = 2.58, the distribution of the sample mean is N(80, 2.58). The critical region is the left tail, representing α = 5% of the distribution. We now test whether x̄ is in the critical region.

critical value (left tail) = NORMINV(α, μ, σ/√n) = NORMINV(.05, 80, 2.58) = 75.75

Since x̄ = 75 < 75.75, the sample mean lies in the critical region, and so we reject the null hypothesis.

Alternatively, we can test to see whether the p-value is less than α, namely

p-value = NORMDIST(x̄, μ, σ/√n, TRUE) = NORMDIST(75, 80, 2.58, TRUE) = .0264

Since p-value = .0264 < .05 = α, we again reject the null hypothesis.

Another approach for arriving at the same conclusion is to use the z-score:

$z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}} = \dfrac{75 - 80}{2.58} = -1.94$

Based on either of the following tests, we again reject the null hypothesis:

p-value = NORMSDIST(z) = NORMSDIST(-1.94) = .0264 < .05 = α

zcrit = NORMSINV(α) = -1.64 > -1.94 = zobs

The conclusion from all these approaches is that the sample has significantly lower scores than the general population.
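The same left-tailed test can be sketched in Python, assuming scipy.stats is available (norm.ppf and norm.cdf play the roles of NORMINV and NORMDIST); this is only an illustration of the calculations above, not part of the original example.

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n, x_bar, alpha = 80, 20, 60, 75, 0.05
se = sigma / sqrt(n)                           # standard error, about 2.58

crit = norm.ppf(alpha, loc=mu, scale=se)       # left-tail critical value, about 75.75
p_value = norm.cdf(x_bar, loc=mu, scale=se)    # about .0264

print(x_bar < crit)        # True -> x-bar is in the critical region, reject H0
print(p_value < alpha)     # True -> reject H0
```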

Approach 2: Suppose that before any data were collected we had postulated that the sample mean would be higher than the population mean (one-tailed hypothesis H0).

$H_0: \mu_{\bar{x}} \leq 80$

This time, the critical region is the right tail, representing α = 5% of the distribution. We can now run any of the following four tests:

p-value = 1 – NORMDIST(75, 80, 2.58, TRUE) = 1 – .0264 = .9736 > .05 = α

x̄crit = NORMINV(.95, 80, 2.58) = 84.25 > 75 = x̄obs

p-value = 1 – NORMSDIST(-1.94) = 1 – .0264 = .9736 > .05 = α

zcrit = NORMSINV(.95) = 1.64 > -1.94 = zobs

We retain the null hypothesis and conclude that we do not have enough evidence to claim that the sample mean is higher than the population mean.
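A corresponding sketch for this right-tailed test, again assuming scipy.stats rather than Excel:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n, x_bar, alpha = 80, 20, 60, 75, 0.05
se = sigma / sqrt(n)

p_value = norm.sf(x_bar, loc=mu, scale=se)     # right-tail p-value = 1 - CDF, about .9736
crit = norm.ppf(1 - alpha, loc=mu, scale=se)   # right-tail critical value, about 84.25

print(x_bar > crit)        # False -> retain H0
print(p_value < alpha)     # False -> retain H0
```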

Approach 3: Suppose that before any data were collected we had postulated that a particular sample would have a mean different from the population mean (two-tailed hypothesis H0).

$H_0: \mu_{\bar{x}} = \mu$

Here we are testing to see whether the sample mean is significantly higher or lower than the population mean (alternative hypothesis H1).

$H_1: \mu_{\bar{x}} \neq \mu$

This time, the critical region is a combination of the left tail representing α/2 = 2.5% of the distribution, plus the right tail representing α/2 = 2.5% of the distribution. Once again we test to see whether x̄ is in the critical region, in which case we reject the null hypothesis.

Due to the symmetry of the normal distribution, the p-value =

$2 \cdot P(\bar{x} \leq 75)$

Thus testing whether p-value < α is equivalent to testing whether

P(x̄ < 75) = NORMDIST(75, 80, 2.58, TRUE) < α/2

Since

NORMDIST(75, 80, 2.58, TRUE) = .0264 > .025 = α/2

we cannot reject the null hypothesis. We can reach the same conclusion as follows:

x̄crit-left = NORMINV(.025, 80, 2.58) = 74.94 < 75 = x̄obs

If the sample mean had been x̄obs = 85 instead, then this test would become

x̄crit-right = NORMINV(.975, 80, 2.58) = 85.06 > 85 = x̄obs

In either case, x̄obs would lie just outside the critical region, and so we would retain the null hypothesis. Finally, we can reach the same conclusion by testing the z-score as follows:

NORMSDIST(-1.94) = .0264 > .025 = α/2

|zobs| = 1.94 < 1.96 = NORMSINV(.975) = |zcrit|
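A sketch of this two-tailed test under the same scipy.stats assumption:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n, x_bar, alpha = 80, 20, 60, 75, 0.05
se = sigma / sqrt(n)
z = (x_bar - mu) / se                   # about -1.94

p_value = 2 * norm.cdf(-abs(z))         # two-tailed p-value, about .053
z_crit = norm.ppf(1 - alpha / 2)        # about 1.96

print(p_value < alpha)                  # False -> retain H0
print(abs(z) > z_crit)                  # False -> retain H0
```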

Example 2: Suppose that in the previous example we took a larger sample of 100 students and once again the sample mean was 75. Repeat the two-tailed test.

This time the standard error is σ/√n = 20/√100 = 2, and so

NORMDIST(75, 80, 2, TRUE) = .006 < .025 = α/2.

This time we reject the null hypothesis.
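The effect of the larger sample can be checked the same way, again as a scipy.stats sketch rather than the article's Excel formulas:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n, x_bar, alpha = 80, 20, 100, 75, 0.05
se = sigma / sqrt(n)                    # now 2 rather than 2.58
z = (x_bar - mu) / se                   # -2.5

p_value = 2 * norm.cdf(-abs(z))         # about .012
print(p_value < alpha)                  # True -> reject H0
```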
