# Comparing correlation coefficients of overlapping samples

We now consider the case where the two sample pairs are not drawn independently because the two correlations have one variable in common.

Example 1: IQ tests are given to 20 couples. The oldest son of each couple is also given the IQ test, with the scores displayed in Figure 1. We would like to know whether the correlation between son and mother is significantly different from the correlation between son and father.

Figure 1 – Data for Example 1

We will use the following test statistic (Williams' t)

t = (r12 - r13) * sqrt( (n-1)(1+r23) / ( 2|S|(n-1)/(n-3) + (r12+r13)^2 (1-r23)^3 / 4 ) ) ~ T(n-3)

where S is the 3 × 3 sample correlation matrix and |S| is its determinant, namely

|S| = 1 - r12^2 - r13^2 - r23^2 + 2 r12 r13 r23
For this problem the results are displayed in Figure 2, where the upper part of the figure contains the correlation matrix (e.g. the correlation between Mother and Son is calculated by the function =CORREL(B4:B23,C4:C23) and is shown in cells H5 and G6).

The 95% confidence interval is calculated in the usual way, namely (r12 - r13) ± tcrit · s.e., using the fact that the standard error s.e. is the reciprocal of the square root factor in the definition of t.
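The calculation above can be sketched in Python. This is a minimal illustration, not the Resource Pack's implementation: `williams_t` is a hypothetical helper name, and scipy is used for the t distribution.

```python
from math import sqrt
from scipy import stats

def williams_t(r12, r13, r23, n, alpha=0.05):
    """Williams' t test for two overlapping correlations sharing one variable.

    r12, r13: correlations of the common variable with each of the other two
    r23: correlation between the two non-shared variables
    n: common sample size
    Returns (diff, t, two-tailed p, CI lower, CI upper).
    """
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23   # |S|
    denom = 2 * det * (n - 1) / (n - 3) + (r12 + r13) ** 2 / 4 * (1 - r23) ** 3
    se = sqrt(denom / ((n - 1) * (1 + r23)))   # reciprocal of the sqrt factor
    diff = r12 - r13
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df=n - 3)       # two-tailed p-value
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 3)
    return diff, t, p, diff - t_crit * se, diff + t_crit * se
```

For Example 1 the three correlations would come from the sample data (e.g. via `numpy.corrcoef`); the function mirrors the five outputs of Correl2OverlapTTest described below.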

Figure 2 – Analysis for Example 1

Since p-value = .042 < .05 = α (or |t| > t-crit) we reject the null hypothesis, and conclude that the correlation between mother and son is significantly different from the correlation between father and son.

Real Statistics Functions: The following array functions are provided in the Real Statistics Resource Pack.

Correl2OverlapTTest(r12, r13, r23, n, alpha, lab): array function which outputs the difference between the correlation coefficients r12 and r13, the t statistic, the p-value (two-tailed) and the lower and upper bounds of the 1 – alpha confidence interval, where r12 is the correlation coefficient between the first and second samples, r13 is the correlation coefficient between the first and third samples, r23 is the correlation coefficient between the second and third samples and n is the size of each of the three samples. If lab = TRUE then the output takes the form of a 5 × 2 range with the first column consisting of labels, while if lab = FALSE (default) then the output takes the form of a 5 × 1 range without labels; if alpha is omitted it defaults to .05.

Corr2OverlapTTest(R1, R2, R3, alpha, lab) performs the two sample correlation test for samples R1, R2 and R3, where R1 is the overlapping sample. This array function returns the array from =Correl2OverlapTTest(r12, r13, r23, n, alpha, lab) where r12 = CORREL(R1, R2), r13 = CORREL(R1, R3), r23 = CORREL(R2, R3) and n = the common sample size for R1, R2 and R3.

For Example 1, the output from =Correl2OverlapTTest(F6,G6,G4,F8,,TRUE) is shown in Figure 3.

Figure 3 – Test using Real Statistics function

The same output is produced by the function

=Corr2OverlapTTest(C4:C23,A4:A23,B4:B23,,TRUE)

Observation: We can perform another version of this two sample correlation test using the Fisher transformation, as shown in Figure 4.

Figure 4 – Fisher analysis for Example 1

As you can see from range F15:F16, the 95% confidence interval calculated this time (taking absolute values) is (.02202, .87714), which is not so different from the interval calculated in Figure 2, namely (.016408, .834219).
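The exact Fisher-based calculation behind Figure 4 is not reproduced here; a closely related and widely used Fisher-transform test for overlapping correlations is the Meng, Rosenthal and Rubin (1992) z test, sketched below for illustration. `meng_z` is a hypothetical name, and this variant may differ in detail from the one implemented in the Resource Pack, so its numbers need not match Figure 4 exactly.

```python
from math import sqrt, atanh
from scipy import stats

def meng_z(r12, r13, r23, n, alpha=0.05):
    """Meng-Rosenthal-Rubin z test comparing overlapping correlations r12 vs r13.

    Both correlations share variable 1; r23 is the correlation between the
    two non-shared variables. Returns (z statistic, two-tailed p, CI lower,
    CI upper), with the confidence interval on the Fisher z scale.
    """
    z12, z13 = atanh(r12), atanh(r13)          # Fisher transformation
    rbar2 = (r12**2 + r13**2) / 2
    f = min((1 - r23) / (2 * (1 - rbar2)), 1.0)  # f is capped at 1
    h = (1 - f * rbar2) / (1 - rbar2)
    se = sqrt(2 * (1 - r23) * h / (n - 3))
    z = (z12 - z13) / se
    p = 2 * stats.norm.sf(abs(z))              # two-tailed p-value
    z_crit = stats.norm.ppf(1 - alpha / 2)
    diff = z12 - z13
    return z, p, diff - z_crit * se, diff + z_crit * se
```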

Real Statistics Functions: The following array functions are provided in the Real Statistics Resource Pack to implement the Fisher test described above.

Correl2OverlapTest(r12, r13, r23, n, alpha, lab)

Corr2OverlapTest(R1, R2, R3, alpha, lab)

These functions are identical to Correl2OverlapTTest(r12, r13, r23, n, alpha, lab) and Corr2OverlapTTest(R1, R2, R3, alpha, lab), except that the Fisher transformation is used as described above and the output only has three elements: the difference between the correlations and the end points of the confidence interval.

For Example 1, the output from =Correl2OverlapTest(F6,G6,G4,F8,,TRUE) is as shown in range F14:F16 of Figure 3. The same output is produced by the array function

=Corr2OverlapTest(C4:C23,A4:A23,B4:B23,,TRUE)

### 17 Responses to Comparing correlation coefficients of overlapping samples

1. Baogui zhang says:

Hi Charles,
Thank you very much for providing the well-illustrated example. I noticed one typo in the formula for t: (r12-r13)^2/4*(1-r23)^3 should be (r12+r13)^2/4*(1-r23)^3

• Charles says:

Hi Baogui,
Good catch. This is indeed a typo. I have now corrected the formula. Thanks for your help.
Charles

• Baogui zhang says:

Great. Do you have any formula or procedure to test whether there is a significant difference between more than two independent samples? I found one in the notes of Alan Pickering http://homepages.gold.ac.uk/aphome/correlnotes.doc, but I am not sure it is correct.

2. Yoyo Gong says:

Dear Professor Zaiontz,
I am a medical student pursuing a master's degree in China. I am really grateful for your sharing this wonderful formula with us on the web, as I want to use this statistic in an article I am writing. But I wonder where this formula comes from. Would you please tell me the references? Thanks a lot.
My best wishes.
Yoyo Gong

• Charles says:

The reference is Howell, D. C. (2010). Statistical methods for psychology (7th ed.). Wadsworth, Cengage Learning.
Charles

• Yoyo Gong says:

Thanks a lot. This is quite useful for me.

3. Gustaf says:

If I understand it correctly, this is versus the alternative that their IQs are different.
Now if I want to make a one-sided test of this, should I go about it the same way as in previous examples?

• Charles says:

Gustaf,
This is a one sided test.
Charles

4. Gesang says:

Good Night Mr. Charles, I want to ask you about the symbol ~T(n-3). What does it mean? Especially the T symbol. Thank you.
from Indonesia 🙂

• Charles says:

x ~ T(n-3) means that the random variable x has a t distribution with n-3 degrees of freedom. It can also mean that x has approximately a t distribution with n-3 degrees of freedom.
Charles

5. Richard Tetteh says:

Can I get the proof for this formula? Thanks a lot.

• Charles says:

Richard,
I don’t have the proof of either approach. I discovered the first approach (presumably the one you are referencing) in David Howell’s textbook. He is referencing a paper by Williams, E.J. (1959). See Bibliography for details.
Charles

6. C. Martin R. says:

Good Morning Prof. Zaiontz,
first of all, thank you for this detailed page!
I’ve got a problem that I’ve been spinning my head around, and therefore a tricky question for you:

I have two datasets (patients and age/sex matched controls, but different N) with n repeated measures (basically it’s some derived MRI scores in ascending coordinates). Since some assumptions for an rmANOVA are not met and transformation doesn’t help, I’m searching for other ways to examine the interaction.

I assume a priori a linear dependency of my DV on the the n measurements, so I got the idea of just comparing the correlations. But now I’m a bit stuck in the decision on how to do so exactly.

My first impulse was to average the DV per n in each cohort, then compute the correlations etc. as suggested above (I suppose it’s Williams’ t). Here, I get a significant difference for r(data1, n) and r(data2, n).
But supposing that the groups are independent, I’d rather go with the method suggested in ‘Two Sample Hypothesis Testing for Correlation’ (which is Fisher’s z). Here, I computed the correlations r(DV, n) per subject, transformed them into z values and averaged them, and eventually inverted them back to rs. Thus, I input two mean correlations mr(data1, n), mr(data2, n) with the corresponding Ns in the comparison. This outputs no significant difference.

So, which one’s the correct method to apply?! I’m at a point where my thoughts drive in a roundabout, so I hope you can bring some light in this mess and maybe give me a hint, I would really appreciate it. 🙂

Kind regards from Germany
Martin

• Charles says:

Martin,
Let me make sure I understand the basics of the situation. From what I see, you have the following situation.
Data set 1 (treatment): N1 elements and n repeated measures (not replication!). E.g. 20 patients with a blood pressure reading at 9am, 12pm, 3pm, 6pm. Here N1 = 20 and n = 4
Data set 2 (control): N2 elements and n repeated measures, where N1 and N2 may be different
Please let me know whether this is correct. Also what hypothesis are you trying to test? I would prefer an answer in experimental terms and not statistical terms.
Charles

• C. Martin R. says:

Charles,