One Sample Hypothesis Testing for Correlation

As we do in Sampling Distributions, we can consider the distribution of r over repeated samples of x and y. The following theorem is analogous to the Central Limit Theorem, but for r instead of the sample mean x̄. This time we require that x and y have a joint bivariate normal distribution or that the samples are sufficiently large. You can think of a bivariate normal distribution as the three-dimensional version of the normal distribution, in which any vertical slice through the surface that graphs the distribution results in an ordinary bell curve.

The sampling distribution of r is symmetric only when ρ = 0 (which, for a bivariate normal population, is equivalent to x and y being independent). If ρ ≠ 0, then the sampling distribution is asymmetric, so the following theorem does not apply and other methods of inference must be used.
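To see this asymmetry concretely, here is a short simulation sketch in Python (not part of the Excel-based presentation; the function name simulate_r is our own). It draws repeated samples from bivariate normal populations and compares the sampling distribution of r when ρ = 0 and when ρ = .9; the mean and median separate noticeably in the skewed case.

```python
# Simulation sketch: the sampling distribution of r is roughly symmetric
# when rho = 0 but skewed when rho is far from 0.
import numpy as np

rng = np.random.default_rng(0)

def simulate_r(rho, n=15, reps=10_000):
    """Sample correlation coefficients from repeated bivariate normal samples."""
    cov = [[1.0, rho], [rho, 1.0]]
    rs = np.empty(reps)
    for i in range(reps):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        rs[i] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
    return rs

for rho in (0.0, 0.9):
    rs = simulate_r(rho)
    # For a symmetric distribution the mean and median roughly coincide.
    print(f"rho={rho}: mean={rs.mean():.3f}, median={np.median(rs):.3f}")
```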

Theorem 1: Suppose ρ = 0. If x and y have a bivariate normal distribution or if the sample size n is sufficiently large, then r has a normal distribution with mean 0, and t = r/sr ~ T(n – 2) where

$$s_r = \sqrt{\frac{1-r^2}{n-2}}$$

Here the numerator r of the test statistic t is the estimate of ρ = 0 and sr is the standard error of r.

Observation: If we solve the equation in Theorem 1 for r, we get

$$r = \frac{t}{\sqrt{n-2+t^2}}$$

Observation: The theorem can be used to test the hypothesis that the population random variables x and y are independent, i.e. that ρ = 0.
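For readers who want to reproduce the test outside Excel, here is a minimal sketch in Python (assuming scipy is available) that follows Theorem 1 directly; the function name corr_t_test is our own, not part of any standard library.

```python
# Sketch of the Theorem 1 test: t = r/s_r with s_r = sqrt((1 - r^2)/(n - 2)).
import math
from scipy import stats

def corr_t_test(r, n):
    """Return (t, two-tail p-value) for H0: rho = 0 given r and sample size n."""
    se = math.sqrt((1 - r**2) / (n - 2))  # standard error of r under H0
    t = r / se                            # test statistic, t ~ T(n - 2)
    p = 2 * stats.t.sf(abs(t), n - 2)     # two-tail p-value
    return t, p

print(corr_t_test(-0.713, 15))  # values from Example 1 below: about (-3.67, .0028)
```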

Example 1: A study is designed to check the relationship between smoking and longevity. A sample of 15 men aged 50 and older was taken, and for each man the average number of cigarettes smoked per day and the age at death were recorded, as summarized in the table in Figure 1. Can we conclude from the sample that longevity is independent of smoking?

Figure 1 – Data for Example 1

The scatter diagram for this data is as follows. We have also included the linear trend line that seems to best match the data. We will study this further in Linear Regression.

Figure 2 – Scatter diagram for Example 1

Next we calculate the correlation coefficient of the sample using the CORREL function, where R1 is the range containing the smoking data and R2 is the range containing the longevity data:

r = CORREL(R1, R2) = -.713

From the scatter diagram and the correlation coefficient, it appears that the population correlation is negative. The absolute value of the correlation coefficient looks high, but is it high enough? To determine this, we establish the following null hypothesis:

H0: ρ = 0

Recall that ρ = 0 would mean that the two population variables are independent. We use t = r/sr as the test statistic, where sr is as in Theorem 1. Based on the null hypothesis, ρ = 0, we can apply Theorem 1, provided x and y have a bivariate normal distribution. It is difficult to check for bivariate normality, but we can at least check that each variable is approximately normal via QQ plots.

Figure 3 – Testing for normality

Both samples appear to be normal, and so by Theorem 1, we know that t has approximately a t-distribution with n – 2 = 13 degrees of freedom. We now calculate

$$s_r = \sqrt{\frac{1-r^2}{n-2}} = \sqrt{\frac{1-(-.713)^2}{13}} = .1945 \qquad t = \frac{r}{s_r} = \frac{-.713}{.1945} = -3.67$$

Finally, we perform either one of the following tests:

p-value = TDIST(ABS(-3.67), 13, 2) = .00282 < .05 = α (two-tail)

tcrit = TINV(.05, 13) = 2.16 < 3.67 = |tobs| (two-tail)

And so we reject the null hypothesis and conclude that there is a non-zero correlation between smoking and longevity. In fact, it appears from the data that increased smoking is associated with reduced longevity.
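The two Excel computations above can be cross-checked with scipy (a sketch; TDIST's two-tail value corresponds to doubling the upper-tail probability, and TINV's value to the two-tail inverse):

```python
from scipy import stats

p_two_tail = 2 * stats.t.sf(3.67, 13)   # matches TDIST(ABS(-3.67), 13, 2) = .00282
t_crit = stats.t.ppf(1 - 0.05 / 2, 13)  # matches TINV(.05, 13) = 2.16
print(p_two_tail, t_crit)
```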

Example 2: The US Census Bureau collects statistics comparing the 50 states. The following table shows the poverty rate (% of the population below the poverty level) and the infant mortality rate (per 1,000 live births) by state. Based on this data, can we conclude that the poverty and infant mortality rates by state are correlated?

Figure 4 – Data for Example 2

The scatter diagram for this data is as follows.

Figure 5 – Scatter diagram for Example 2

The correlation coefficient of the sample is given by

r = CORREL(R1, R2) = .564

Here R1 is the range containing the poverty data and R2 is the range containing the infant mortality data. From the scatter diagram and the correlation coefficient, it appears that the population correlation is positive, and so this time we use the following one-tail null hypothesis:

         H0: ρ ≤ 0

Based on the null hypothesis, we assume that ρ = 0 (the boundary case), and so, as in Example 1,

$$s_r = \sqrt{\frac{1-r^2}{n-2}} = \sqrt{\frac{1-.564^2}{48}} \approx .119 \qquad t = \frac{r}{s_r} \approx 4.737$$

Finally, we perform either one of the following tests:

p-value = TDIST(4.737, 48, 1) = 9.8E-06 < .05 = α (one-tail)

tcrit = TINV(2*.05, 48) = 1.677 < 4.737 = tobs (one-tail)

And so we reject the null hypothesis and conclude that there is a positive correlation between poverty and infant mortality.

Since we were confident that the correlation coefficient wasn't negative, we chose to perform a one-tail test. It turns out that even if we had chosen a two-tail test (i.e. H0: ρ = 0), we would still have rejected the null hypothesis.
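A quick cross-check of the one-tail test in Python (a sketch assuming scipy; r and n are taken from the example):

```python
import math
from scipy import stats

r, n = 0.564, 50
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # same as r / s_r
p_one_tail = stats.t.sf(t, n - 2)               # right-tail p-value, about 1E-05
t_crit = stats.t.ppf(1 - 0.05, n - 2)           # one-tail critical value, about 1.68
print(t, p_one_tail, t_crit)
```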

Observation: For samples of any given size n it turns out that r is not normally distributed when ρ ≠ 0 (even when the population has a normal distribution), and so we can’t use Theorem 1.

There is a simple transformation of r, however, that gets around this problem, and allows us to test whether ρ = ρ0 for some value of ρ0 ≠ 0.

Definition 1: For any r define the Fisher transformation of r as follows:

$$r' = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right)$$

Theorem 2: If x and y have a joint bivariate normal distribution or n is sufficiently large, then the Fisher transformation r’ of the correlation coefficient r for samples of size n has distribution N(ρ′, sr′) where

$$\rho' = \frac{1}{2}\ln\left(\frac{1+\rho}{1-\rho}\right) \qquad s_{r'} = \frac{1}{\sqrt{n-3}}$$

Corollary 1: Suppose r1 and r2 are correlation coefficients as in the theorem, based on independent samples of sizes n1 and n2, and further suppose that ρ1 = ρ2. If z is defined as follows, then z ~ N(0, 1).

$$z = \frac{r_1' - r_2'}{s}$$

where
$$s = \sqrt{\frac{1}{n_1-3}+\frac{1}{n_2-3}}$$

Proof: From the theorem

$$r_i' \sim N\!\left(\rho_i',\ \frac{1}{\sqrt{n_i-3}}\right)$$

for i = 1, 2. By Properties 1 and 2 of Basic Characteristics of the Normal Distribution,

$$r_1' - r_2' \sim N(\rho_1' - \rho_2',\ s)$$

where s is as defined above. Since ρ1 = ρ2, it follows that ρ′1 = ρ′2, and so r′1 – r′2 ~ N(0, s), from which it follows that z ~ N(0, 1).
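In code, the corollary amounts to a few lines (a sketch in Python; compare_correlations is our own name, and the sample values in the last line are purely illustrative). Note that math.atanh is exactly the Fisher transformation defined above.

```python
# Sketch of the Corollary 1 test: compare two independent sample correlations.
import math
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Return (z, two-tail p-value) for H0: rho1 = rho2."""
    z1, z2 = math.atanh(r1), math.atanh(r2)  # Fisher transformations r1', r2'
    s = math.sqrt(1/(n1 - 3) + 1/(n2 - 3))   # standard deviation of r1' - r2'
    z = (z1 - z2) / s                        # z ~ N(0, 1) under H0
    return z, 2 * stats.norm.sf(abs(z))

print(compare_correlations(0.6, 100, 0.4, 80))  # illustrative values only
```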

Excel Functions: Excel provides functions that calculate the Fisher transformation and its inverse.

FISHER(r) = .5 * LN((1 + r) / (1 – r))

FISHERINV(z) = (EXP(2 * z) – 1) / (EXP(2 * z) + 1)
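These are just the hyperbolic functions atanh and tanh, which is handy to know when working outside Excel (a quick check in Python):

```python
import math

print(math.atanh(0.6))    # 0.6931..., same as FISHER(0.6)
print(math.tanh(0.6931))  # 0.5999..., same as FISHERINV(0.6931)
```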

Observation: We can use Theorem 2 to test the null hypothesis H0: ρ = ρ0. This test is very sensitive to outliers. If outliers are present it may be better to use the Spearman rank correlation test or Kendall’s tau test.

The corollary can be used to test whether two samples are drawn from populations with equal correlations.

Example 3: Suppose we calculate r = .6 for a sample of size n = 100. Test the following null hypothesis and find the 95% confidence interval.

H0: ρ = .7

Observe that

r′ = FISHER(r) = FISHER(.6) = 0.693

ρ′ = FISHER(ρ) = FISHER(.7) = 0.867

sr′ = 1 / SQRT(n – 3) = 1 / SQRT(100 – 3) = 0.102

Since r′ < ρ′, we are looking at the left tail of a two-tail test:

p-value = NORMDIST(r′, ρ′, sr′, TRUE) = NORMDIST(.693, .867, .102, TRUE) = .0432 > 0.025 = α/2

r′-crit = NORMINV(α/2, ρ′, sr′) = NORMINV(.025, .867, .102) = .668 < .693 = r′

In either case, we cannot reject the null hypothesis.

The 95% confidence interval for ρ′ is

r′ ± zcrit ∙ sr′ = 0.693 ± 1.96 ∙ 0.102 = (0.494, 0.892)

Here zcrit = ABS(NORMSINV(.025)) = 1.96. The 95% confidence interval for ρ is therefore (FISHERINV(0.494), FISHERINV(0.892)) = (.457, .712). Note that .7 lies in this interval, confirming our conclusion not to reject the null hypothesis.
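The whole of Example 3 can be packaged into a few lines of Python (a sketch assuming scipy; fisher_corr_test is our own name). Note that it reports the two-tail p-value directly, i.e. twice the left-tail value .0432 computed above.

```python
# Sketch of the Theorem 2 test of H0: rho = rho0 with a 1 - alpha CI for rho.
import math
from scipy import stats

def fisher_corr_test(r, n, rho0=0.0, alpha=0.05):
    r_prime = math.atanh(r)        # FISHER(r)
    rho_prime = math.atanh(rho0)   # FISHER(rho0)
    se = 1 / math.sqrt(n - 3)      # standard error of r'
    z = (r_prime - rho_prime) / se
    p = 2 * stats.norm.sf(abs(z))  # two-tail p-value
    zc = stats.norm.ppf(1 - alpha / 2)
    # transform the CI for rho' back to a CI for rho via FISHERINV (= tanh)
    lo, hi = math.tanh(r_prime - zc * se), math.tanh(r_prime + zc * se)
    return z, p, (lo, hi)

print(fisher_corr_test(0.6, 100, rho0=0.7))  # about (-1.72, .086, (.457, .712))
```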

Example 4: Repeat the analysis of Example 2 using Theorem 2, this time performing a two-tail test (H0: ρ = 0) using the standard normal test statistic z = (r′ – ρ′)/sr′.

r = CORREL(R1, R2) = .564

r′ = FISHER(r) = FISHER(.564) = .639

ρ′ = FISHER(ρ) = FISHER(0) = 0 (based on the null hypothesis)

sr′ = 1 / SQRT(n – 3) = .146

z = (r′ – ρ′) / sr′ = 4.38

Since z > 0, we perform the standard normal test on the right tail:

p-value = 1 – NORMSDIST(z) = 1 – NORMSDIST(4.38) = 5.9E-06 < 0.025 = α/2

zcrit = NORMSINV(1 – α/2) = NORMSINV(.975) = 1.96 < 4.38 = zobs

In either case we reject the null hypothesis (H0: ρ = 0) and conclude that there is some association between the variables.

We can also calculate the 95% confidence interval for ρ′ as follows:

r′ ± zcrit ∙ sr′ = .639 ± (1.96)(.146) = (.353, .925)

Using FISHERINV we transform this interval to a 95% confidence interval for ρ:

(FISHERINV(.353), FISHERINV(.925)) = (.339, .728)

Since ρ = 0 is outside this interval, once again we reject the null hypothesis.
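Cross-checking Example 4 in Python (a sketch; the one-tail p-value matches the 5.9E-06 above):

```python
import math
from scipy import stats

z = math.atanh(0.564) * math.sqrt(50 - 3)  # (r' - 0) / s_r' with s_r' = 1/sqrt(n - 3)
print(z, stats.norm.sf(z))                 # about 4.38 and 5.9E-06
```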

Real Statistics Functions: The following supplemental functions are provided in the Real Statistics Resource Pack.

CorrTest(exp, obs, size) = the p-value of the one sample two-tail test of the correlation coefficient using Theorem 2 where exp is the expected population correlation coefficient and obs is the observed correlation coefficient based on a sample of the stated size.

CorrLower(r, size, alpha) = the lower bound of the 1 – alpha confidence interval of the population correlation coefficient based on a sample correlation coefficient r coming from a sample of the stated size.

CorrUpper(r, size, alpha) = the upper bound of the 1 – alpha confidence interval of the population correlation coefficient based on a sample correlation coefficient r coming from a sample of the stated size.

CorrelTest(r, size, rho, alpha, lab): array function which outputs z, p-value, lower and upper (i.e. the lower and upper bounds of the 1 – alpha confidence interval), where rho, r and size are as described above. If lab = True then the output takes the form of a 2 × 4 range whose first row consists of labels, while if lab = False (default) the output takes the form of a 1 × 4 range without labels.

CorrelTest(R1, R2, rho, alpha, lab) = CorrelTest(r, size, rho, alpha, lab) where r = CORREL(R1, R2) and size = the common sample size, i.e. the number of pairs from R1 and R2 which both contain numeric data.

If alpha is omitted it defaults to .05.

Observation: For Example 3, CorrTest(.7, .6, 100) = .0432, CorrLower(.6, 100, .05) = .457 and CorrUpper(.6, 100, .05) = .712.

Example 5: Test whether the correlation coefficient for the data in the ranges K11:K17 and L11:L17 of the worksheet in Figure 6 is significantly different from .9.

Figure 6 – Hypothesis testing of the correlation coefficient

The correlation coefficient for the data is .975 (calculated by the formula =CORREL(K11:K17,L11:L17) in cell O11). The test is conducted in the range N12:O15 via the array formula =CorrelTest(K11:K17,L11:L17,0.9,0.05,TRUE). We see that we cannot reject the null hypothesis that the data is taken from a population with correlation .9.
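The same conclusion follows from a hand computation via Theorem 2 (a sketch in Python with r = .975, n = 7, ρ0 = .9):

```python
import math
from scipy import stats

z = (math.atanh(0.975) - math.atanh(0.9)) * math.sqrt(7 - 3)
p = 2 * stats.norm.sf(abs(z))
print(z, p)  # about 1.43 and .15, so we cannot reject H0: rho = .9
```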

Effect size and power

Until now, when we have discussed effect size, we have used some version of Cohen's d. The correlation coefficient r (as well as r2) provides another common measure of effect size. We now show how to calculate the power of a test of correlation using the approach from Power of a Sample.

Example 6: A market research team is conducting a study in which they believe the correlation between increases in product sales and marketing expenditures is 0.35. What is the power of the one-tail test if they use a sample of size 40 with α = .05? How big a sample do they need to carry out the study with α = .05 and power = .80?

The power of the test can be calculated as in Figure 7.

Figure 7 – Determining power of a correlation test

The sample size required to achieve an effect size of .35 with power .80 is shown in Figure 8.

Figure 8 – Determining sample size required for a correlation test
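For readers without the Resource Pack, the Fisher/normal approximation gives the same answers in a few lines (a sketch in Python; the worksheets above may use a slightly different method, so treat these numbers as approximations):

```python
# Power and sample size for a one-tail test of H0: rho = 0 via the Fisher
# transformation: under H1, r'*sqrt(n - 3) is approximately
# N(atanh(rho)*sqrt(n - 3), 1).
import math
from scipy import stats

def corr_power(rho, n, alpha=0.05):
    """Approximate power of the one-tail test of H0: rho = 0."""
    delta = math.atanh(rho) * math.sqrt(n - 3)  # mean of the z statistic under H1
    return stats.norm.sf(stats.norm.ppf(1 - alpha) - delta)

def corr_sample_size(rho, power=0.80, alpha=0.05):
    """Approximate n for the one-tail test to reach the given power."""
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return math.ceil(((z_a + z_b) / math.atanh(rho)) ** 2 + 3)

print(corr_power(0.35, 40))    # about 0.72
print(corr_sample_size(0.35))  # about 50
```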

3 Responses to One Sample Hypothesis Testing for Correlation

  1. Colin says:

    Sir
    In Example 1, why do you use "r = CORREL(R1, R2) = -.713" instead of "CORREL(R1, R2) = n * COVAR(R1, R2) / (STDEV(R1) * STDEV(R2) * (n – 1))"?

    • Charles says:

      Colin,
      The correlation coefficient comes out the same whether you compute it with the sample or the population formulas (the factors of n cancel), and in fact CORREL(R1, R2) = n * COVAR(R1, R2) / (STDEV(R1) * STDEV(R2) * (n – 1)), but it is easier to use the simple formula CORREL(R1, R2).
      Charles

      • Colin says:

        Sir
        Thank you, sir. I thought the sample correlation coefficient and the population correlation coefficient were different.
