**T Test**

As we did in Sampling Distributions, we can consider the distribution of *r* over repeated samples of *x* and *y*. The following theorem is analogous to the Central Limit Theorem, but for *r* instead of *x̄*. This time we require that *x* and *y* have a joint bivariate normal distribution or that the samples are sufficiently large. You can think of a bivariate normal distribution as the three-dimensional version of the normal distribution, in which any vertical slice through the surface that graphs the distribution results in an ordinary bell curve.

The sampling distribution of *r* is only symmetric when *ρ* = 0 (i.e. when *x* and *y* are independent). If *ρ* ≠ 0, then the sampling distribution is asymmetric, and so the following theorem does not apply and other methods of inference must be used.

**Theorem 1**: Suppose *ρ* = 0. If *x* and *y* have a bivariate normal distribution or if the sample size *n* is sufficiently large, then *r* has a normal distribution with mean 0, and *t* = *r/s _{r}* ~ *T*(*n* – 2) where

*s _{r}* = √[(1 – *r*²) / (*n* – 2)]

Here the numerator *r* of the random variable *t* is the estimate of *ρ* = 0 and *s _{r}* is the standard error of *r*.

**Observation**: If we solve the equation in Theorem 1 for *r*, we get

*r* = *t* / √(*t*² + *n* – 2)

**Observation**: The theorem can be used to test the hypothesis that the population random variables *x* and *y* are independent, i.e. that *ρ* = 0.
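To make the computation in Theorem 1 concrete, here is a minimal Python sketch (the helper name `corr_t_stat` is ours, not a Real Statistics or Excel function):

```python
import math

def corr_t_stat(r, n):
    """t statistic of Theorem 1 for testing H0: rho = 0.

    r is the sample correlation coefficient, n the sample size;
    the statistic has a t distribution with n - 2 degrees of freedom.
    """
    s_r = math.sqrt((1 - r**2) / (n - 2))  # standard error of r
    return r / s_r
```

For the data of Example 1 below, `corr_t_stat(-.713, 15)` gives approximately -3.67, matching the value computed there.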

**Example 1**: A study is designed to check the relationship between smoking and longevity. A sample of 15 men aged 50 and older was taken, and the average number of cigarettes smoked per day and the age at death were recorded, as summarized in the table in Figure 1. Can we conclude from the sample that longevity is independent of smoking?

**Figure 1 – Data for Example 1**

The scatter diagram for this data is as follows. We have also included the linear trend line that seems to best match the data. We will study this further in Linear Regression.

Next we calculate the correlation coefficient of the sample using the CORREL function:

*r* = CORREL(R1, R2) = -.713

From the scatter diagram and the correlation coefficient, it is clear that the population correlation is likely to be negative. The absolute value of the correlation coefficient looks high, but is it high enough? To determine this, we establish the following null hypothesis:

H_{0}: *ρ* = 0

Recall that *ρ* = 0 would mean that the two population variables are independent. We use *t* = *r/s _{r}* as the test statistic, where *s _{r}* is as in Theorem 1. Based on the null hypothesis, *ρ* = 0, we can apply Theorem 1, provided *x* and *y* have a bivariate normal distribution. It is difficult to check for bivariate normality, but we can at least check to make sure that each variable is approximately normal via QQ plots.

Both samples appear to be normal, and so by Theorem 1 we know that *t* has approximately a t distribution with *n* – 2 = 13 degrees of freedom. We now calculate

*t* = *r/s _{r}* = -.713/.1945 = -3.67

Finally, we perform either one of the following tests:

p-value = TDIST(ABS(-3.67), 13, 2) = .00282 < .05 = *α* (two-tail)

*t _{crit}* = TINV(.05, 13) = 2.16 < 3.67 = |*t _{obs}*|

And so we reject the null hypothesis and conclude there is a non-zero correlation between smoking and longevity. In fact, it appears from the data that increased levels of smoking reduce longevity.

**Example 2**: The US Census Bureau collects statistics comparing the 50 states. The following table shows the poverty rate (% of population below the poverty level) and the infant mortality rate (per 1,000 live births) by state. Based on this data, can we conclude that the poverty and infant mortality rates by state are correlated?

The scatter diagram for this data is as follows.

The correlation coefficient of the sample is given by

*r* = CORREL(R1, R2) = .564

where R1 is the range containing the poverty data and R2 is the range containing the infant mortality data. Since the population correlation was expected to be positive, the following one-tail null hypothesis was used:

H_{0}: *ρ* ≤ 0

Based on the null hypothesis we will assume that *ρ* = 0 (best case), and so as in Example 1

*t* = *r/s _{r}* = 4.737

Finally, we perform either one of the following tests:

p-value = TDIST(4.737, 48, 1) = 9.8E-08 < .05 = *α* (one-tail)

*t _{crit}* = TINV(.1, 48) = 1.677 < 4.737 = *t _{obs}*

And so we reject the null hypothesis and conclude there is a non-zero correlation between poverty and infant mortality.

Since we were confident that the correlation coefficient wasn’t negative, we chose to perform a one-tail test. It turns out that even if we had chosen a two-tailed test (i.e. H_{0}: *ρ* = 0), we would have still rejected the null hypothesis.

**Real Statistics Functions**: The following supplemental functions are provided in the Real Statistics Resource Pack.

**CorrTTest**(*r, size, tails*) = the p-value of the one sample test of the correlation coefficient using Theorem 1 where *r* is the observed correlation coefficient based on a sample of the stated *size*. If *tails* = 2 (default) a two-tailed test is employed, while if *tails* = 1 a one-tailed test is employed.

**CorrTLower**(*r, size, alpha*) = the lower bound of the 1 – *alpha* confidence interval of the population correlation coefficient based on a sample correlation coefficient *r* coming from a sample of the stated *size*.

**CorrTUpper**(*r, size, alph*a) = the upper bound of the 1 – *alpha* confidence interval of the population correlation coefficient based on a sample correlation coefficient *r* coming from a sample of the stated *size*.

**CorrelTTest**(*r, size, alpha, lab, tails*): array function which outputs t-stat, p-value, and the lower and upper bounds of the 1 – *alpha* confidence interval, where *r* and *size* are as described above. If *lab* = TRUE then output takes the form of a 4 × 2 range with the first column consisting of labels, while if *lab* = FALSE (default) then output takes the form of a 4 × 1 range without labels.

**CorrelTTest**(R1, R2, *alpha, lab, tails*) = CorrelTTest(*r, size, alpha, lab, tails*) where *r* = CORREL(R1, R2) and *size* = the common sample size, i.e. the number of pairs from R1 and R2 which both contain numeric data.

If *alpha* is omitted it defaults to .05. If *tails* = 2 (default) a two-tailed test is employed, while if *tails* = 1 a one-tailed test is employed.

**Observation**: For Example 1, CorrTTest(-.713, 15) = .00282, CorrTLower(-.713, 15, .05) = -1.13 and CorrTUpper(-.713, 15, .05) = -.294.

Also =CorrelTTest(A4:A18,B4:B18,E11,TRUE) produces the following output:

**Observation**: As observed earlier, solving the equation in Theorem 1 for *r* gives

*r* = *t* / √(*t*² + *n* – 2)

We can use this fact to create the critical values for the t test described above, namely

*r _{crit}* = *t _{crit}* / √(*t _{crit}*² + *n* – 2)
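As a sketch, the critical value of *r* can be computed directly from the critical t value (the helper name `r_crit` is ours; it is not the Real Statistics PCRIT function, although it performs the same calculation):

```python
import math

def r_crit(t_crit, n):
    """Critical value of r obtained by solving t = r/s_r for r:
    r = t / sqrt(t^2 + n - 2)."""
    return t_crit / math.sqrt(t_crit**2 + n - 2)
```

For Example 1 (n = 15, two-tail *t _{crit}* = 2.16) this gives about .514, so any sample correlation with |r| > .514 is significant at α = .05.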

**Real Statistics Function**: The following function is provided in the Real Statistics Resource Pack.

**PCRIT**(*n, α, tails*) = the critical value of the t test for Pearson’s correlation for samples of size *n*, for the given value of alpha (default .05), and *tails* = 1 (one tail) or 2 (two tails), the default.

A table of such critical values can be found in Pearson’s Correlation Table.

**Fisher Transformation**

For samples of any given size *n* it turns out that *r* is not normally distributed when *ρ* ≠ 0 (even when the population has a normal distribution), and so we can’t use Theorem 1.

There is a simple transformation of *r*, however, that gets around this problem and allows us to test whether *ρ* = *ρ _{0}* for some value of *ρ _{0}* ≠ 0.

**Definition 1**: For any *r*, define the **Fisher transformation** of *r* as follows:

*r′* = ½ ln[(1 + *r*) / (1 – *r*)]

**Theorem 2**: If *x* and *y* have a joint bivariate normal distribution or *n* is sufficiently large, then the Fisher transformation *r′* of the correlation coefficient *r* for samples of size *n* has distribution *N*(*ρ′*, *s _{r′}*) where *ρ′* is the Fisher transformation of *ρ* and

*s _{r′}* = 1 / √(*n* – 3)

**Corollary 1**: Suppose *r _{1}* and *r _{2}* are as in the theorem, where *r _{1}* and *r _{2}* are based on independent samples, and further suppose that *ρ _{1}* = *ρ _{2}*. If *z* is defined as follows, then *z* ~ *N*(0, 1):

*z* = (*r′ _{1}* – *r′ _{2}*) / *s* where *s* = √[1/(*n _{1}* – 3) + 1/(*n _{2}* – 3)]

Proof: From the theorem, *r′ _{i}* ~ *N*(*ρ′ _{i}*, 1/√(*n _{i}* – 3)) for *i* = 1, 2. By Properties 1 and 2 of Basic Characteristics of the Normal Distribution,

*r′ _{1}* – *r′ _{2}* ~ *N*(*ρ′ _{1}* – *ρ′ _{2}*, *s*)

where *s* is as defined above. Since *ρ _{1}* = *ρ _{2}*, it follows that *ρ′ _{1}* = *ρ′ _{2}*, and so *r′ _{1}* – *r′ _{2}* ~ *N*(0, *s*), from which it follows that *z* ~ *N*(0, 1).

**Excel Functions**: Excel provides the following functions that calculate the Fisher transformation and its inverse.

**FISHER**(*r*) = .5 * LN((1 + *r*) / (1 – *r*))

**FISHERINV**(*z*) = (EXP(2 * *z*) – 1) / (EXP(2 * *z*) + 1)
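These two Excel functions are just the inverse hyperbolic tangent and the hyperbolic tangent, so they are easy to reproduce elsewhere; here is a Python sketch (the names `fisher` and `fisher_inv` are ours):

```python
import math

def fisher(r):
    """Fisher transformation, as in Excel's FISHER; equals math.atanh(r)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fisher_inv(z):
    """Inverse Fisher transformation, as in Excel's FISHERINV; equals math.tanh(z)."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)
```

For instance, `fisher(.7)` returns about 0.867, and `fisher_inv` maps it back to .7.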

**Observation**: We can use Theorem 2 to test the null hypothesis H_{0}: *ρ = ρ _{0}*. This test is very sensitive to outliers. If outliers are present it may be better to use the Spearman rank correlation test or Kendall’s tau test.

The corollary can be used to test whether two samples are drawn from populations with equal correlations.

**Example 3**: Suppose we calculate *r* = .7 for a sample of size *n* = 100. Test the following null hypothesis and find the 95% confidence interval.

H_{0}: *ρ* = .6

Observe that

*r′* = FISHER(*r*) = FISHER(.7) = 0.867

*ρ′* = FISHER(*ρ*) = FISHER(.6) = 0.693

*s _{r′}* = 1 / SQRT(*n* – 3) = 1 / SQRT(100 – 3) = 0.102

Since *r′* > *ρ′*, we are looking at the right tail of a two-tail test:

p-value = 2*(1 – NORMDIST(*r′, ρ′, s _{r′},* TRUE)) = 2*(1 – NORMDIST(.867, .693, .102, TRUE)) = .0863 > 0.05 = *α*

*r′ _{crit}* = NORMINV(1 – *α*/2, *ρ′*, *s _{r′}*) = NORMINV(.975, .693, .102) = .892 > .867 = *r′*

In either case, we cannot reject the null hypothesis.

The 95% confidence interval for *ρ′* is

*r′ ± z _{crit} ∙ s_{r′} *= 0.867 ± 1.96 ∙ 0.102 = (0.668, 1.066)

Since *z _{crit}* = ABS(NORMSINV(.025)) = 1.96, the 95% confidence interval for *ρ* is (FISHERINV(0.668), FISHERINV(1.066)) = (.584, .788). Note that .6 lies in this interval, confirming our conclusion not to reject the null hypothesis.
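The steps of Example 3 can be collected into one short Python sketch using only the standard library (`corr_rho_test` is a hypothetical name for illustration):

```python
import math
from statistics import NormalDist

def corr_rho_test(r, rho0, n):
    """Two-tailed test of H0: rho = rho0 via the Fisher transformation (Theorem 2).

    Returns the z statistic and its two-tailed p-value.
    """
    r_prime = math.atanh(r)        # Fisher transformation of r
    rho_prime = math.atanh(rho0)   # Fisher transformation of rho0
    s = 1 / math.sqrt(n - 3)       # standard error
    z = (r_prime - rho_prime) / s
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p
```

`corr_rho_test(.7, .6, 100)` yields a p-value of about .086, matching the example, so the null hypothesis is not rejected.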

**Example 4**: Repeat the analysis of Example 2 using Theorem 2, this time performing a two-tail test (H_{0}: *ρ* = 0) using the standard normal test statistic *z* = (*r′* – *ρ′*) / *s _{r′}*.

*r* = CORREL(R1, R2) = .564

*r′* = FISHER(r) = FISHER(.564) = .639

*ρ′* = FISHER(*ρ*) = FISHER(0) = 0 (based on the null hypothesis)

*s _{r′}* = 1 / SQRT(*n* – 3) = .146

*z = (r′ – ρ′) / s _{r′} *= 4.38

Since *z* > 0, we perform the standard normal test on the right tail:

p-value = 1 – NORMSDIST(*z*) = 1 – NORMSDIST(4.38) = 5.9E-06 < 0.025 = *α*/2

*z _{crit}* = NORMSINV(1 – *α*/2) = NORMSINV(.975) = 1.96 < 4.38 = *z _{obs}*

In either case we reject the null hypothesis (H_{0}: *ρ* = 0) and conclude that there is some association between the variables.

We can also calculate the 95% confidence interval for *ρ′ *as follows:

*r′ ± z _{crit} ∙ s_{r′} *= .639 ± (1.96)(.146) = (.353, .925)

Using FISHERINV we transform this interval to a 95% confidence interval for *ρ*:

(FISHERINV(.353), FISHERINV(.925)) = (.339, .728)

Since *ρ* = 0 is outside this interval, once again we reject the null hypothesis.
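The confidence interval construction used in Examples 3 and 4 (build the interval on the transformed scale, then map back with FISHERINV) can be sketched as follows; `corr_conf_int` is our own name, not a Real Statistics function:

```python
import math
from statistics import NormalDist

def corr_conf_int(r, n, alpha=0.05):
    """1 - alpha confidence interval for rho via the Fisher transformation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    r_prime = math.atanh(r)        # transform to the (approximately) normal scale
    s = 1 / math.sqrt(n - 3)
    lo = r_prime - z_crit * s
    hi = r_prime + z_crit * s
    return math.tanh(lo), math.tanh(hi)  # transform back to the rho scale
```

For Example 4, `corr_conf_int(.564, 50)` gives approximately (.339, .728), agreeing with the FISHERINV calculation above.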

**Real Statistics Functions**: The following functions are provided in the Real Statistics Resource Pack.

**CorrTest**(*exp, obs, size, tails*) = the p-value of the one sample test of the correlation coefficient using Theorem 2 where *exp* is the expected population correlation coefficient and *obs* is the observed correlation coefficient based on a sample of the stated *size*. If *tails* = 2 (default) a two-tailed test is employed, while if *tails* = 1 a one-tailed test is employed.

**CorrLower**(*r, size, alpha*) = the lower bound of the 1 – *alpha* confidence interval of the population correlation coefficient based on a sample correlation coefficient *r* for a sample of the stated *size*.

**CorrUpper**(*r, size, alpha*) = the upper bound of the 1 – *alpha* confidence interval of the population correlation coefficient based on a sample correlation coefficient *r* for a sample of the stated *size*.

**CorrelTest**(*r, size, rho, alpha, lab, tails*): array function which outputs *z*, p-value, and the lower and upper bounds of the 1 – *alpha* confidence interval, where *rho*, *r* and *size* are as described above. If *lab* = TRUE then output takes the form of a 4 × 2 range with the first column consisting of labels, while if *lab* = FALSE (default) then output takes the form of a 4 × 1 range without labels.

**CorrelTest**(R1, R2, *rho, alpha, lab, tails*) = CorrelTest(*r, size, rho, alpha, lab, tails*) where *r* = CORREL(R1, R2) and *size* = the common sample size, i.e. the number of pairs from R1 and R2 which both contain numeric data.

If *alpha* is omitted it defaults to .05. If *tails* = 2 (default) a two-tailed test is employed, while if *tails* = 1 a one-tailed test is employed.

**Observation**: For Example 3, CorrTest(.6, .7, 100) = .0864, CorrLower(.7, 100, .05) = .584 and CorrUpper(.7, 100, .05) = .788. Also =CorrelTest(.7, 100, .6, .05, TRUE) generates the following output:

**Example 5**: Test whether the correlation coefficient for the data in the ranges K12:K18 and L12:L18 of the worksheet in Figure 6 is significantly different from .9.

We calculate that the correlation coefficient for the two samples is .975 (cell O12) using the formula =CORREL(K12:K18,L12:L18). The two-tailed test is conducted in the range N14:O17 via the array formula =CorrelTest(K12:K18,L12:L18,0.9,0.05,TRUE). Since p-value = .15 > .05 = *α*, we cannot reject the null hypothesis that the data is taken from a population with correlation .9.

**Effect size and power**

Until now, when we have discussed effect size we have used some version of Cohen’s *d*. The correlation coefficient *r* (as well as *r ^{2}*) provides another common measure of effect size. We now show how to calculate the power of a one-sample correlation test using the approach from Power of a Sample.

**Example 6**: A market research team is conducting a study in which they believe the correlation between increases in product sales and marketing expenditures is 0.35. What is the power of the one-tail test if they use a sample of size 40 with *α* = .05? How big does their sample need to be to carry out the study with *α* = .05 and power = .80?

The power of the test can be calculated as in Figure 6.

The sample size required to achieve power of 80% and an effect size of .35 is shown in Figure 7.

**Real Statistics Functions**: The Real Statistics Resource Pack supplies the following functions:

**CORREL1_POWER**(*r*0, *r*1, *n*, *tails*, *α*) = the power of a one sample correlation test using the Fisher transformation when *r*0 = the population correlation (based on the null-hypothesis), *r*1 = the effect size (observed correlation), *n *= the sample size, *tails* = # of tails: 1 or 2 (default) and *α* = alpha (default .05).

**CORREL1_SIZE**(*r*0, *r*1, 1−*β*, *tails*, *α*) = the sample size required to detect an effect of size *r*1 (observed correlation) with power 1−*β* (default .80) when the population correlation (based on the null hypothesis) is *r*0, *tails* = # of tails: 1 or 2 (default) and *α* = alpha (default .05).

**Observation**: Using these Real Statistics functions, we can calculate the results of Example 6 for both the one-tail and two-tail tests as follows:

CORREL1_POWER(0, .35, 40, 1) = 71.8%

CORREL1_SIZE(0, .35, .80, 1) = 49.3

CORREL1_POWER(0, .35, 40, 2) = 60.4%

CORREL1_SIZE(0, .35, .80, 2) = 61.8
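Assuming the standard Fisher-transformation approximation described above, these power and sample-size values can be reproduced with a short Python sketch (the function names echo, but are not, the Real Statistics functions):

```python
import math
from statistics import NormalDist

def correl1_power(r0, r1, n, tails=2, alpha=0.05):
    """Approximate power of a one-sample correlation test (Fisher transformation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / tails)
    # Noncentrality: difference of transformed correlations, scaled by sqrt(n - 3)
    delta = abs(math.atanh(r1) - math.atanh(r0)) * math.sqrt(n - 3)
    return 1 - nd.cdf(z_crit - delta)

def correl1_size(r0, r1, power=0.80, tails=2, alpha=0.05):
    """Approximate sample size needed to detect correlation r1 when H0 states r0."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / tails)
    z_b = nd.inv_cdf(power)
    return ((z_a + z_b) / abs(math.atanh(r1) - math.atanh(r0))) ** 2 + 3
```

`correl1_power(0, .35, 40, tails=1)` gives about .718 and `correl1_size(0, .35, tails=1)` about 49.3, in line with the values above.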

Hello, I'm having a problem calculating sample size, and I don't know which formula to use. I'm doing a cross-sectional study design. My hypothesis is that there is an association between knowledge and mammography screening. My question is, based on my hypothesis, what formula should I use to calculate the sample size needed? TQ

Example 6 of the referenced webpage shows how to calculate the sample size. The specific calculations required are shown in Figure 7. You can also use the CORREL1_SIZE function or the Statistical Power and Sample Size data analysis tool.

Charles

Hi Charles

I am trying to develop a model that calculates what the GPower statistical software calls exact power for correlation tests.

However, it occurred to me that we could use the relation b = r*sigmay/sigmax to transform the correlation values in the test. If we think in terms of standardized variables, we even have b = r.

I have made several comparisons between using the t test for the slope and the Fisher approximation, and I have doubts about whether the differences found are due to the fact that the Fisher transformation provides only an approximation.

Is there any theoretical flaw in this line of thought?

António,

It is not surprising that using the t test to test the hypothesis that the correlation coefficient is zero is related to testing that the slope of the regression line is zero using the t test. I am not sure how Fisher’s approximation enters the picture, though, since this is useful when testing that the correlation coefficient is equal to some specific value, which is usually not zero. Perhaps I missed something.

Charles

Hello

In the text presenting Example 3, is r = .6 used when it should be r = .7?

Regards

António Teixeira

António,

Yes, you are correct. I have changed the webpage to correct this typo. Thanks very much for catching this error.

Charles

Hi Charles,

I am interested in why, for a bivariate normal distribution, the sample correlation coefficient r has a standard error of [(1-r^2)/(n-2)]^(1/2). Is there any derivation, or intuitive explanation, for that?

Thanks,

Yan

Yan,

I believe the proof is not easy.

Charles

Hi Charles

On another statistical website I’ve seen that in addition to the requirements on the data (binormal distribution), to make inferences there’s also a requirement for the residuals to be normally distributed. Do you agree and, time permitting, could you elaborate on the subject?

Thanks

MAtteo

Matteo,

There is a similar requirement in linear regression. If the data is binormally distributed it turns out that the residuals will be normally distributed.

Charles

Thank you for taking the time of answering all my questions

Charles,

A different question but still related to this post.

Suppose someone published an informal paper or a summary showing some results from a study similar to your Example 1. However they do not include the actual data as you did with the table in Figure 1. All they show is the scatterplot, and they provide the correlation coefficient and the number of points used.

If I could still assume the data was normally distributed (say I take their word for it), but did not have a way to calculate the descriptive statistics, can you still make inferences to probe their results, say estimate the confidence interval for their correlation coefficient at the 95% confidence level?

Matteo,

Based on the assumptions that you have made (assuming the data is binormally distributed), yes, you could make the same inferences.

Charles

Great!

Thank you

Hi Charles,

Excellent blog, very useful!

Did you write (or are you planning to write) about deriving confidence interval for correlation coefficient in the case of multiple correlation?

I’ll give you some two real examples from my discipline (geosciences) to explain why I am interested in it.

Very often the porosity of a rock can be related to its acoustic impedance (the product of rock velocity and rock density measured in wells) and a correlation coefficient can be calculated. This is an example of linear correlation and the calculation of the confidence interval for the 95% confidence level is fairly straightforward (and now with your resource pack even easier).

The second example is that of quality of crude oil, which can depend on age, depth, temperature at which it was formed (and possibly other variables). If a multi-linear correlation coefficient is calculated, how can the confidence interval for the 95% confidence level be estimated?

Thank you,

Matteo

Matteo,

Good point. The approach for creating a confidence interval for multiple correlation is the same as that used on the referenced page. In any case, I will add an example on the Multiple Correlation webpage.

Charles

Thank you Charles

I thought so (intuitively) but could not quite explain why, so an example will help.

Matteo

Sir, is Pearson’s correlation suited to measuring the impact of an employee incentive scheme on job satisfaction? How do I do it with the standard deviation and mean values that are output by the SPSS tool? Please explain immediately, sir. Enoka

Enoka,

I don’t have access to SPSS. If you want to perform the calculation in Excel then please look at the webpage http://www.real-statistics.com/. In any case, having the values for the means and standard deviations of the two samples is not sufficient to calculate Pearson’s correlation. You also need the sum of the pairwise products of the data elements in the two samples.

Charles

We need to test the null hypothesis that there is no correlation (H0: rho = 0) between two variables x and y. In our case, however, neither of these variables is normally distributed; their distributions are more like 1/exp(x). We have two x-y data samples: one in which x and y appear to be linearly correlated according to the function y = x (a straight line with a 45-degree slope; the calculated linear correlation is 0.99), and another where the relationship appears, very approximately, to be more like y = sqrt(x). In both cases we would like to test the null hypothesis of no correlation at all, i.e. derive the p-value for the hypothesis that there is no correlation between x and y. Can you please refer us to a computer code that would do this for the case when the variables are not even approximately normally distributed?

Tord,

Here is a link to a website which addresses this issue:

http://bisharaa.people.cofc.edu/preprints/BisharaHittner2012.pdf

Charles

Sir

In Example 1, why do you use “r = CORREL(R1, R2) = -.713” instead of “CORREL(R1, R2) = n * COVAR(R1, R2) / (STDEV(R1) * STDEV(R2) * (n – 1))”?

Colin,

The sample correlation coefficient and the population correlation coefficient formulas yield the same value, and in fact CORREL(R1, R2) = n * COVAR(R1, R2) / (STDEV(R1) * STDEV(R2) * (n – 1)), but it is easier to use the simple formula CORREL(R1, R2).

Charles

Sir

Thank you, sir. I thought the sample correlation coefficient and the population correlation coefficient were different.