The **Mann-Whitney U test** is an alternative form of the Wilcoxon Rank-Sum test for independent samples; the two tests are completely equivalent.

Define the following test statistics for samples 1 and 2, where *n*_{1} is the size of sample 1, *n*_{2} is the size of sample 2, *R*_{1} is the adjusted rank sum for sample 1 and *R*_{2} is the adjusted rank sum for sample 2. It doesn't matter which sample is bigger.

*U*_{1} = *n*_{1}*n*_{2} + *n*_{1}(*n*_{1}+1)/2 – *R*_{1}

*U*_{2} = *n*_{1}*n*_{2} + *n*_{2}(*n*_{2}+1)/2 – *R*_{2}

*U* = min(*U*_{1}, *U*_{2})
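The calculation can be sketched in Python (an illustrative sketch, not Real Statistics code; it assumes the definitions *U _{i}* = *n*_{1}*n*_{2} + *n _{i}*(*n _{i}*+1)/2 – *R _{i}* with *U* = min(*U*_{1}, *U*_{2}), uses average ranks for ties, and the function names are my own):

```python
def average_ranks(values):
    """Rank all values, assigning tied values the average of their ranks."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        j = i
        # extend j to cover the whole run of tied values
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[indexed[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(sample1, sample2):
    """Return (U, U1, U2) for two independent samples."""
    n1, n2 = len(sample1), len(sample2)
    ranks = average_ranks(list(sample1) + list(sample2))
    r1 = sum(ranks[:n1])  # rank sum of sample 1
    r2 = sum(ranks[n1:])  # rank sum of sample 2
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 + n2 * (n2 + 1) / 2 - r2
    return min(u1, u2), u1, u2
```

Note that *U*_{1} + *U*_{2} = *n*_{1}*n*_{2} always holds, which is a handy check on hand calculations.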

As for the Wilcoxon version of the test, if the observed value of *U* is < *U*_{crit} then the test is significant (at the *α* level), i.e. we reject the null hypothesis. The values of *U*_{crit} for *α* = .05 (two-tailed) are given in the Mann-Whitney Tables.

**Example 1**: Repeat Example 1 of the Wilcoxon Rank Sum Test using the Mann-Whitney U test.

**Figure 1 – Mann-Whitney U Test**

Since *R*_{1} = 117.5 and *R*_{2} = 158.5, we can calculate *U*_{1} and *U*_{2} to get *U* = 39.5. Next we look up the Mann-Whitney Tables for *n*_{1} = 12 and *n*_{2} = 11 to get *U*_{crit} = 33. Since 33 < 39.5, we cannot reject the null hypothesis at the *α* = .05 level of significance.

**Property 2**: For *n*_{1} and *n*_{2} large enough, the *U* statistic is approximately normal *N*(*μ, σ*) where

*μ* = *n*_{1}*n*_{2}/2

*σ*² = *n*_{1}*n*_{2}(*n*_{1} + *n*_{2} + 1)/12
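A minimal sketch of this normal approximation in Python, assuming the standard moments *μ* = *n*_{1}*n*_{2}/2 and *σ*² = *n*_{1}*n*_{2}(*n*_{1} + *n*_{2} + 1)/12 (these are also stated in a reply further down this page); `NormalDist` is in Python's standard library:

```python
from statistics import NormalDist

def mann_whitney_normal_approx(u, n1, n2):
    """One-tailed p-value for U via the normal approximation
    (no ties or continuity correction)."""
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p_one_tail = NormalDist().cdf(z)  # small U gives a small p-value
    return z, p_one_tail
```

For Example 1's *U* = 39.5 with *n*_{1} = 12 and *n*_{2} = 11 this gives z ≈ -1.63 and a one-tailed p-value just above .05, consistent with the table-based test not reaching significance.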

**Observation**: Click here for proofs of Properties 1 and 2.

**Property 3**: When there are a number of ties, the following revised version of the variance gives better results:

*σ*² = *n*_{1}*n*_{2}(*n*³ – *n* – Σ(*f _{t}*³ – *f _{t}*))/(12*n*(*n* – 1))

where *n* = *n*_{1} + *n*_{2}, *t* varies over the set of tied ranks and *f _{t}* is the number of times (i.e. frequency) the rank *t* appears. An equivalent formula is

*σ*² = (*n*_{1}*n*_{2}/12)((*n* + 1) – Σ(*f _{t}*³ – *f _{t}*)/(*n*(*n* – 1)))

**Observation**: A further complication is that it is often desirable to account for the fact that we are approximating a discrete distribution via a continuous one by applying a **continuity correction**. This is done by using a z-score of

*z* = (|*U* – *μ*| – .5)/*σ*

(with the sign of *U* – *μ*) instead of the same formula without the .5 continuity correction factor.
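A common form of this correction moves *U* half a unit toward the mean before computing the z-score. A sketch under that assumption (the figure on the original page may use a slightly different sign convention):

```python
from statistics import NormalDist

def z_continuity_corrected(u, n1, n2):
    """z-score with the .5 continuity correction:
    |U - mu| is reduced by .5 while the sign of U - mu is kept."""
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (abs(u - mu) - 0.5) / sigma
    return -z if u < mu else z
```

A one-tailed p-value then follows from `NormalDist().cdf(z)` when *U* is below the mean.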

**Example 2**: Repeat Example 2 of the Wilcoxon Rank Sum Test using the Mann-Whitney U test.

The results of the one-tailed test (without a ties correction) are shown in Figure 2. Column W displays the formulas used in column T.

**Figure 2 – Mann-Whitney U test using normal approximation**

As can be seen in cell T19, the p-value for the one-tail test is the same as that found in Wilcoxon Example 2 using the Wilcoxon rank-sum test. Once again we reject the null hypothesis and conclude that non-smokers live longer.

**Real Statistics Excel Functions**: The following functions are provided in the Real Statistics Pack:

**MANN**(R1, R2) = *U* for the samples contained in ranges R1 and R2

**MANN**(R1, *n*) = *U* for the sample contained in the first *n* columns of range R1 and the sample consisting of the remaining columns of range R1. If the second argument is omitted it defaults to 1.

**MTEST**(R1, R2, *tails*) = p-value of the Mann-Whitney U test for the samples contained in ranges R1 and R2. *tails* = # of tails: 1 (default) or 2.

**MTEST**(R1, *n, tails*) = p-value of the Mann-Whitney U test for the sample contained in the first *n* columns of range R1 and the sample consisting of the remaining columns of range R1. If the second argument is omitted it defaults to 1. *tails* = # of tails: 1 (default) or 2.

**MCRIT**(*n*_{1}, *n*_{2}, *α, tails, h*) = critical value of the Mann-Whitney *U* test for samples of size *n*_{1} and *n*_{2}, for the given value of alpha and *tails* = 1 (one tail) or 2 (two tails), based on the Mann-Whitney Table. If *h* = TRUE (default) harmonic interpolation is used; otherwise linear interpolation is used.

**MPROB**(*x, n*_{1}, *n*_{2}, *tails, iter*) = an approximate p-value for the Mann-Whitney test for the U value equal to *x* for samples of size *n*_{1} and *n*_{2} and *tails* = 1 (one tail) or 2 (two tails, default), based on an interpolation of the values in the Mann-Whitney Table, using *iter* number of iterations (default = 40) to calculate the approximation.

Note that the values for *α* in the Mann-Whitney Table range from .01 to .1 for tails = 2 and from .005 to .05 for tails = 1. If the p-value produced by MPROB is less than .01 (tails = 2) or .005 (tails = 1) then MPROB returns 0, and if the p-value is greater than .1 (tails = 2) or .05 (tails = 1) then MPROB returns 1.

Any empty or non-numeric cells in R1 or R2 are ignored.

**Observation**: In Example 1, we can use Real Statistics functions to arrive at the same value for *U*, namely MANN(A6:B17) = 39.5. Also MCRIT(H5,I5,H9,H10) = MCRIT(12, 11, .05, 2) = 33 (the value in cell H12 of Figure 1). Finally note that the p-value = MPROB(39.5, 12, 11, 2) = 1 (meaning that the p-value > .1), and so once again we can't reject the null hypothesis.

If *U* had been 32, then the p-value = MPROB(32, 12, 11, 2) = 0.044 < .05 = *α*, and so we would reject the null hypothesis. This is consistent with the fact that *U* = 32 < 33 = *U*_{crit}.

Similarly in Example 2, we can use Real Statistics functions to arrive at the same value for *U*, namely MANN(A6:H15,4) = MANN(A6:D15,E6:H15) = 486, as well as the same p-value (assuming a normal approximation described above), namely MTEST(A6:H15,4) = MTEST(A6:D15,E6:H15) = 0.003081.

Also note that the supplemental functions RANK_COMBINED and RANK_SUM, as defined in Wilcoxon Rank-Sum Test, can be used in conjunction with the Mann-Whitney test.

**Observation**: The effect size for the data using the Mann-Whitney test can be calculated in the same manner as for the Wilcoxon test, and the result will be the same.

The effect size of .31 for the data in Example 2 is calculated as in Figure 2. Namely, the z-score (cell T17) is calculated using the formula =(T13-T14)/T16 and the effect size (cell T20) is calculated by the formula =ABS(T17)/SQRT(T6+U6).

Also note that the z-score and the effect size *r* can be calculated using the supplemental function MTEST as follows:

z-score = NORMSINV(MTEST(R1, R2))

*r* = NORMSINV(MTEST(R1, R2))/SQRT(COUNT(R1)+COUNT(R2))
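The pair of formulas above can be mirrored in code (an illustrative sketch; note that NORMSINV of a one-tailed p-value below .5 yields a negative z, so *r* comes out negative and is usually reported as |*r*|):

```python
from statistics import NormalDist

def z_and_r_from_p(p_one_tail, n_total):
    """Recover z from a one-tailed p-value and compute the
    effect size r = z / sqrt(N), where N = n1 + n2."""
    z = NormalDist().inv_cdf(p_one_tail)  # analogue of Excel's NORMSINV
    r = z / n_total ** 0.5
    return z, r
```

For instance, a one-tailed p-value of .025 with N = 100 gives z ≈ -1.96 and r ≈ -0.196, i.e. |r| ≈ .2, a small-to-medium effect.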

**Observation**: The results of analysis for Example 2 can be summarized as follows: The life expectancy of non-smokers (*Mdn* = 76.5) was significantly higher than that of smokers (*Mdn* = 70.5), *U* = 486, *z* = -2.74, *p* = .0038 < .05, *r* = .31.

**Real Statistics Function**: The Real Statistics Pack also provides the following array function for the samples in ranges R1 and R2 where alpha is the *α* value (default .05) and *tails* is the number of tails (1 or 2 = default).

**MANN_TEST**(R1, R2, *lab, tails, alpha, ties, cont*): returns the following values in a 7 × 1 column range: *U*, alpha, tails, z, r, *U*-crit, p-value. If *ties* = TRUE (default) the ties correction factor of Property 3 is applied. If *cont* = TRUE (default) then the continuity correction is applied. If *lab* = TRUE then an extra column with labels is included.

If the size of the two samples is 26 or less, i.e. COUNT(R1) + COUNT(R2) ≤ 26, then an exact test will be performed. In this case, the output is a 9 × 1 column range (or a 9 × 2 range if *lab* = TRUE), including *U*-crit (exact) and p-value (exact).

For Example 2, the array formula =MANN_TEST(B4:B33,C4:C33,TRUE,1,.05,FALSE) returns the following array for the one-tailed test with continuity correction but no correction for ties:

**Figure 3 – Output from MANN_TEST**

**Real Statistics Data Analysis Tool**: The Real Statistics Resource Pack also provides a data analysis tool which performs the Mann-Whitney test for independent samples, automatically calculating the medians, rank sums, U test statistic, z-score, p-value and effect size *r*.

For example, to perform the analysis in Example 1, enter **Ctrl-m** and choose the **T Test and Non-parametric Equivalents** option. The dialog box shown in Figure 4 appears.

**Figure 4 – Dialog box for Real Statistics Mann-Whitney Test**

Enter A5:B17 as **Input Range 1** (alternatively we could insert A5:A17 in **Input Range 1** and B5:B17 in **Input Range 2**), click on **Column headings included with data**, and choose the **Two independent samples** and **Non-parametric** options. Keep the default of 0 for **Hypothetical Mean/Median** (this value is not used anyway) and .05 for **Alpha**. For this version of the test, check **Use continuity correction**, **Include exact test** and **Include table lookup**, but leave the **Use ties correction** option unchecked. Finally, click on the **OK** button.

The output is shown in Figure 5.

**Figure 5 – Mann-Whitney test data analysis tool output**

Both the one-tail and two-tail tests are shown. Also, three versions of the test are shown: the test using the normal approximation (range O18:P20), the test using the critical values (range O22:P23) from the Mann-Whitney Table and the exact test (range O25:P26) as described later on this webpage.

If we check **Use ties correction** in Figure 4, we obtain the output shown in Figure 6.

**Figure 6 – Mann-Whitney test data analysis tool with ties correction**

In this case the ties correction of Property 3 is applied to the normal approximation (range U18:V20). As you can see, there is very little difference between the outputs shown in Figures 5 and 6.

Note too that the ties correction (as well as the continuity correction) only applies to the normal approximation. The table and exact versions of the test do not apply the ties or continuity correction.

**Real Statistics Function**: The Real Statistics Pack also provides the following function to calculate the ties correction used in the data analysis tool.

**TiesCorrection**(R1, R2, *type*) = ties correction value for the data in range R1 and optionally range R2, where *type* = 0: one sample, *type* = 1: paired samples, *type* = 2: independent samples

For the Mann-Whitney test *type* = 2. The ties correction is used in the calculation of the standard deviation (cell U15 of Figure 6) as follows

=SQRT(U14*((U6+V6)^3-(U6+V6)-TiesCorrection(A6:A17,B6:B17,2))/(6*((U6+V6)^2-(U6+V6))))
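The TiesCorrection term in this formula is Σ(*f _{t}*³ – *f _{t}*) over the groups of tied values. Assuming cell U14 holds *n*_{1}*n*_{2}/2 (the mean), the spreadsheet formula reduces to the ties-corrected variance of Property 3; a sketch under that assumption (function names are illustrative, not Real Statistics internals):

```python
from collections import Counter

def ties_term(combined):
    """Sum of f^3 - f over groups of tied values (the TiesCorrection term)."""
    return sum(f ** 3 - f for f in Counter(combined).values() if f > 1)

def stdev_ties_corrected(n1, n2, combined):
    """Mirrors the spreadsheet formula:
    sqrt(n1*n2/2 * (n^3 - n - T) / (6 * (n^2 - n))), with n = n1 + n2."""
    n = n1 + n2
    t = ties_term(combined)
    return (n1 * n2 / 2 * (n ** 3 - n - t) / (6 * (n ** 2 - n))) ** 0.5
```

When there are no ties, T = 0 and this reduces to sqrt(*n*_{1}*n*_{2}(*n* + 1)/12), the standard deviation of Property 2, which is a useful sanity check.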

**Exact Test**

Click here for a description of the exact version of the Mann-Whitney Test using the permutation function.

**Confidence Interval of the Median**

Click here for a description of how to calculate confidence interval of the median based on the Mann-Whitney Test.

Dear Charles,

I performed the Mann-Whitney test on SPSS version 20.0 and got a U value in the thousands (e.g. U = 1453). My sample size was n = 1280. I am confused by this large value of U. Can the U value really be in the thousands?

Rais,

Yes, U can be this large.

Charles

Dear Charles,

I have two samples which have the sample size n1=102 and n2=110. I want to compare these samples using the U-test.

Could you, please, tell me how can I calculate the critical value for the Mann-Whitney U-test with given confidence level (0.05 or 0.01)?

Thanks a lot

Sergiy

Sergiy,

This is explained on the referenced webpage, namely with such a large sample you should use the normal approximation and therefore use the critical value for the appropriate normal distribution.

Charles

Hi

I performed a Wilcoxon rank-sum test with two samples x and y with sample size n_x=55 and n_y=20.

I have used matlab for this.

[p,h,stats] = ranksum(x,y)

Results are

p = 0.2678;

h = 0;

stats =

zval: 1.1082

ranksum: 853

What is the interpretation of this result? what does p value signify here?

Thanking you in advance.

Hello Abhijit,

The p value indicates that you can’t reject the null hypothesis. See the following webpage for how to interpret the p-value

Null and Alternative Hypothesis.

Charles

Regarding the Effect Size calculation for Mann-Whitney U = ( Z score / SQRT(N)), I am struggling to find any supporting reference to cite it in my thesis.

Any help please ? Thanks

Sergio,

See http://comp.uark.edu/~whlevine/psyc5133/fritz.morris.richler.2012.xge.pdf

Charles

Thank you so much, Charles

Hi,

First of all congratulation on your website.

I hope you can help me better understand the statistics behind the Mann Whitney test.

I have two very different sample sizes (n1 : 57, n2 : 4). Since n1+n2>20 U statistic should be considered normal with with U= n1*n2/2.

What I don’t get is how the p value which is then compared to 0.05 is obtained from the U statistic. What is the relation between the U statistic, p -value and the type 1 error alpha ?

Thank you in advance

Rhonda,

The U value is not n1*n2/2. The mean of the normal approximation is n1*n2/2 and the variance is n1*n2*(n1+n2+1)/12. You still need to calculate U as described on the referenced webpage and then p-value = 1-NORM.DIST(U, mean, stdev, TRUE).

Charles


Hello Charles,

Firstly, thank you for this clear and useful website !

Concerning Mann-Whitney U (M-W U) test application, a distinction is generally made between distributions of “same shape” or “different shape”. In the first case, M-W U compares the medians while in the second case, it compares mean ranks.

Does Real statistics enable to test for distribution shape ? Is there a link to the 3 different versions (norm/table/exact) displayed in the data analysis tool output ?

Also, I’m not sure I get how to choose between those 3 versions, with or without ties/continuity corrections … Which procedure do you follow generally to choose the most appropriated version (in order to state only one p-value at the end) ?

thank you,

Franck

Franck,

Your point about the shape issue is correct, but currently the Real Statistics software does not provide any test to determine whether the shape is significantly different. You can compare charts or histograms of both samples to see if they have the same shape.

The three versions of the test (norm, table, exact) are described on the website. Here is my advice as to which one to use.

For large samples, the normal approximation gives good results; if there are lots of ties then it is better to use the ties correction. There is not universal agreement as to whether or not to use the continuity correction; usually it won’t make a big difference either way. I tend to always use the ties correction (since if there are no ties the results are the same). I usually don’t use the continuity correction.

For small samples you should use the exact test or table of critical values. You shouldn’t use the normal approximation.

If there are no or few ties and the samples are small enough so that you can use the exact test, then I would favor that over the normal approximation, although the results should be very similar. The limitation of the exact test is that it is computationally intensive and doesn’t take ties into account.

Charles

Thanks for your answer !

If distributions have different shape, are the M-W U results obtained with Real statistics still correct ? I read that SPSS used two different test procedures (depending on same vs different shape) ?

Franck,

Yes. The Real Statistics software does not take shape into account when reporting the results.

I don’t know whether SPSS does or not.

Charles

Hello Charles,

thank you very much for this excellent explanation of the U-test. It really helped me a lot to understand the entire concept and also to pull it off in Excel.

Calculating my results however, I end up with an z-value of 14,41. (I transfered the result to a z-value, since I have n= 5314).

I am quite sure I did the calulations right. Is there a chance that you have a look at the Excel and help me out?

Thank you very much

Best regards,

Ben

Ben,

Yes. Please send an Excel file with your calculations to my email address as listed on Contact Us.

Charles

Hey Charles,

I sent you a mail. I hope it reached you.

Thanks a lot.

Ben

Ben,

I have received your email.

Charles

Hi Charles,

This website and tool are excellent!

I have a naive question regarding the test corrections.

I have a very small sample size: 2 groups with 3 data points each.

The two groups are clearly different between each other, but the 3 values from each group are very similar. What correction (ties, continuity) should be the most appropriate, if any?

Thank you very much.

Ruth

Ruth,

Glad you like the website and tools.

With such a small sample, it probably doesn’t matter much since any result will be somewhat suspect. I would use a continuity correction and probably a ties correction as well.

Charles

Hi,

congratulations for the website! It is really interesting.

I would ask you what is the good interpretation of the results of Wilcoxon-Mann-Whitney’s test and the null hypothesis.

I have two independent samples of 15 and 10 observations respectively that describe two different types of banks. I would like to use this test to verified if belong to one of the two groups is different from belong to the other group. Do you think that this test is useful to reach this result?

The Wilcoxon-Mann-Whitney test can be used for this purpose. Generally this test is used when the assumptions for the two sample t test are not met.

Charles

Great site,

Can I use this method to compare two different tests on the same individuals (compare the results from the entire populations of both test to see if there is a significant difference). I am currently using the receiver operating characteristics (ROC) analysis for each method to figure out which testing method can provide higher area under the curve but trying to link this to WMW test. Please help

Ali,

The Mann-Whitney test assumes independent samples, which by definition excludes the situation where both variables apply to the same individuals. You should consider using the paired t test or the nonparametric Wilcoxon signed ranks test.

Charles

Great resource…thanks a bunch. When I’m ready to do Mann- WHitney pairwise comparisons because the Kurskal-Wallis (non parametric ANOVA equivilent) model came back significant, must I adjust the alpha level by dividing by the number of pairwise comparisons I will be doing (as in a bonferroni adjustment) or just accept the default 0.05 alpha level?

Also, should I be reporting the Mann-Whitney significance or that of the exact test (as I have had at least one data set come back contradictory)?

Thanks for your consideration!

Herman,

Thanks for your kind words about the Real Statistics resources.

If you are going to perform multiple post-hoc tests, you should correct for familywise error in some way (e.g. Bonferroni). In the next release of the Real Statistics Resource Pack I will add the Nemenyi post-hoc test which is like Tukey’s HSD test but for Kruskal-Wallis. This test will also correct for familywise error.

If the sample size is very small (under 10) then you shouldn’t use the normal approximation to the Mann-Whitney test, and so you should only report the exact version of the test. If you have lots of ties then unless your sample is very small, you shouldn’t use the exact test. In all other cases, if you are getting contradictory results from the Mann-Whitney test (normal approximation vs. exact test), you should report both results. If the results are very different, then there is probably an error. Otherwise, you need to show that the test is at the borderline between significant and not significant.

Charles

Hi Charles,

I’m using Nemenyi test, and in some cases I get a negative p-value back. Is there a problem with the test in your package?

Thanks a lot for this wonderful resource.

Amjad

Amjad,

That is strange. Can you send me an Excel file with your data and analysis? You can find my email address at Contact us.

Charles

I am trying to understand how the critical values for U are calculated. When I look up Ucrit for alpha(2)=0.05, n1=06, n2=12 in Zar 4th edition, I get 58, but your chart gives 14, and when I run the Mann Whitney U test for independent samples in Real Statistics it reports 24.072. What am I not understanding?

Thanks, Car

Car,

I believe that the values in Zar’s book are equal to n1*n2-crit where crit are the values shown in the table on the Real Statistics website.

I don’t know where you obtained the value 24.072. It looks like some value based on the normal approximation, but it doesn’t look like it comes from some table on my site. The value of the Real Statistics formula MCRIT(6,12,.05,2) = 14.

Charles

Hi Charles

When I am using the tool, the rank sum which it is calculating is significantly bigger than when I ask Excel to calculate the sum of the same values in the column. Any idea why that might be occurring?

Thanks

Fiona,

If you send me an Excel file with your data, I will try to figure out what is going wrong.

See Contact Us for my email address.

Charles

Hi,

Thank you for this useful tutorial. I have problem with using MANN_TEST. The problem is that the results of using MANN_TEST do not match the results of MANN(R1, R2)!! Even the sample counts are not correct when I use MANN_TEST…What is the problem?

Thank you again,

Ferra

I found the reason! I forgot to remove the check for “column headings included with data”.

Thank you

Firstly thank you for this wonderful website. Things are so well explained!

Secondly, I have a question for which the answer may be so obvious that I am ashamed to ask! But in figure 2, calculate the variance you use the formula

VARIANCE=(N1*N2/2)*((N1+N2+1)/6)

I just wonder where the ‘6’ at the end comes from?

Apologies in the advance if this is a daft question, or if it answered elsewhere (I have looked but couldn’t find anything. I do tend to get lost quite easily when dealing with formulas and numbers though!

Mark,

The answer is not so obvious. If you look at the proof of Property 2 on the webpage Mann-Whitney Test – Advanced you’ll see that the 6 is the result of some mathematical calculations.

Charles

Dear Charles,

First let me thank you for this website! It has been very useful – although I am just using the method/approach, not the computing programme.

Secondly, however, I am finding it difficult to understand my results/if i’ve calculated everything, or need more…

I have n1=37 and n2=37, with R1=1195.5 and R2= 1579.5…this gives me U1=876.5 and U2=492.5.

Now I know my ‘expected U value’ is 684.5, and I have a Stdiv of 92.5 and a Z value of -2.08

What do I take from this?

I know I can’t use a critical value table as the n>20 means its normalised… but what does this mean? Is there a critical value I can work out/get a reference-able source for?

Is there a difference between my data (what shows me this?)? What actually is Z (does it matter?)…do I need to calculate P, how would I do this?

Sorry for so many questions. I know I have glaring gaps in my understanding but I just haven’t been able to find anything on the web that explains things clearly (laymans English not this stats talk) for n>20 situations… Hope you can help!

Kind regards

Anna

Anna,

For samples sufficiently large you simply use one hypothesis testing using the normal distribution as described in Example 2, with additional details shown on the webpage:

http://www.real-statistics.com/sampling-distributions/single-sample-hypothesis-testing/

Charles

Dear Charles,

Thank you for the excellent resources pack. I am running the MANN_TEST function. I have no problem if I turn off correction but I have a lot of tied data so need it on. However, when on it returns “#VALUE!” for std dev so rest cannot be calculated. I cannot see why the correction for ties will not work. Please help.

I’m using the Excel 2007 version of the software.

Thank you,

Phil

(P.S. – I have a question regarding the Fisher Test function – is it accurate for larger then 2X2 tables – it returns a value for my 2×8 table but I’m not sure if its accurate. )

Philip,

If you send me an Excel file with your data I can try to figure out why it is returning #VALUE! for standard deviation when the Ties correction option is used.

The Fisher Exact Test as implemented only supports 2 x 2 tables.

Charles

Dear Charles,

I am trying to do Mann-Whitney test using data from Your example 1, but I get an error “A runtime error has occured. The analysis tool will be aborted. Type mismatch.”

The results looks as follow:

count 12 11

median 14,5 28

rank sum 117,5 158,5

U 92,5 39,5

and rest are blank

one tail two tail

alpha

I have excel 2013. Could You provide any suggestion? Thank You.

Andrey,

The usual problem is the setting of the value for alpha on the dialog box. This value defaults to .05, but for some languages you will need to re-enter the value as .05 or ,05.

Charles

Charles,

Many thanks, that was the solution. Thank You for quick response and the Real-Statistics pakage.

I have the big R1+R2 result can I get my r1 and r2 from that?

Nick,

I don’t know what r1 and r2 are, but note that R1 + R2 = n(n+1)/2 where n = n1 + n2.

Charles

Awesome website Charles. I’m taking reservoir characterization and your tutorials have really helped. I was trying to follow this tutorial and apply it to my data. Here’s the question: Using the Mann-Whitney test, does the fracture height documented by the initial analysis of scan lines A & B represent the same population of fractures, or different populations? The scan line measurements are at different intervals for A and B. For instance scan line A has distance from origin measurements of 0, 0.6, 1.8, and 4.4. With corresponding fracture heights of 0.4, 5, 12.6, and 9.6. Scan line B has distance from origin of 0.8, 3, 4.6, and 6.4 with corresponding fracture heights of 9.6, 4.8, 11.4 and 5.5. Is their a way to use MANN_TEST for these values. I realize that I must rank my fracture heights in relation to the distance from origin. So for the values I gave it would be: 0.4, 5, 9.6, 12.6, 4.8, 9.6, 11.4, 5.5. Scan line A no has a rank sum of 13 and B has a sum of 23. What would be the best way to perform a Mann-Whitney test?

Brandon,

If I understand your question properly, you should be able to use Mann-Whitney for this analysis using the approach described on the referenced webpage, including the MANN_TEST formula or the T Test and Non-parametric Equivalents data analysis tool.

Charles

Never mind, it just occurred to me the p-value for a 2-tailed test is (probably) twice the p-value of the one-tailed test 🙂

Hi Charles,

my compliments on the blog!

I was wondering if you could perhaps explain how the formulas in Example 2 change if you would like to calculate the values for a 2-tailed test.

Best

Nina

Hi,

first of all, congratulation on the website. It’s really well built, as least for me, a null at stats. And it’s surely an enormous amount of work that you’ve put at our disposition.

Thank you!

Unfortunately, i’m running an older version of excel, who does not support your package.

So, trying do go trough this test with samples n1=31 and n2=15, i’m having difficulty finding the critical value. furthermore, my results do not comply with the U1+U2 = n1n2 property… actually, U1 = U2 = n1n2…

i’ve gone through every formula and i did not find any mistake…

Could you help me?

Thanks in advance. Cheers,

Simao

Simao,

1. It is very strange that your results don’t comply with U1+U2 = n1*n2, since mathematically this should always hold. If you send me a spreadsheet with your calculations, I will try to figure out where the problem is.

2. The critical value for alpha = .05, two-tailed, n1 = 31 and n2 = 15 is 148.

3. The Real Statistics Resource Pack works with all Windows versions of Excel back to Excel 2002. Do you have an older version than this or are you using a version for the Mac prior to Excel 2011?

Charles

sir, what stat tool i’m going to use if i have two groups with unequal number of respondents? I want to determine if there is a significant difference in their performance in terms of the knowing, applying and reasoning skills of the students between the control and experimental groups?

with less than 40 respondents each group. thanks

Mann-Whitney U-test?

Provided the assumptions (normality, etc.) hold then you can use the t test for independent samples. If these assumptions are violated then you can use the Mann-Whitney U test.

Hi and thank you for all your work! Your website is an amazing resource for me.

As Fig. 4 shows there are two different output values for significance, rows 20 and 23. Where’s the difference? I got different results for some of my analyses and don’t know how to deal with it…

Felix

Hi Felix,

Row 20 is based on the normal approximation (when the sample size is large), while row 23 is based on the exact value using the table of critical values. If the sample is large (sample size > 20) then no row 23 is generated. If row 23 is generated then you should use the results from row 23; otherwise you should use the results from row 20 (the only choice).

Charles

Hi Charles

I noticed that calculating p-value using

MTEST(R1, R2, t) or

Ctrl-m and choose the T Test and Non-parametric Equivalents

gives different results (~ 10% different)!

Any insight pls?

Thanks

Saad,

I have never seen this before. It sounds like an error. Can you send me an example where this is the case?

Charles

Hi Saad,

In al the examples that I have seen, the function and data analysis tool give the same results. Can you send me the example where the two results are different?

Charles

Any chance you can get the mean and standard deviation for two tailed Mann-Whitney U Test? I assume when it says Wilcoxon Signed-Rank Test for Paired Samples after I do the test it is actually the Mann-Whitney U Test, correct?

And what if I’m dealing with time? Do I still leave the Mean/Median at 0? Just want to make sure it doesn’t mess up my results.

And if this works, you are a LIFESAVER!

Amber,

The mean and standard deviation provided work for both the one-tail and two-tail tests. I just didn’t write the information twice (e.g. in Figure 4) since it is the same.

The Wilcoxon Signed-Rank Test for Paired Samples is not the same as the Mann-Whitney U Test, although they have many characteristics in common. If you have paired samples you should use the test described on the webpage http://www.real-statistics.com/non-parametric-tests/wilcoxon-signed-ranks-test/.

The Hypothetical Mean/Median field is not used with the current implementation of the Mann-Whitney Test or Wilcoxon Signed-Rank Test for Paired Samples, and so you may assume that the value is 0.

Charles

Thank you.

Charles:

Can you please tell me:

1)How I can use the “rank” function in Excel to rank continuous variables, that is numbers with decimal points, e.g., 1.38, 3.6. 40.9 etc. When I’m trying to rank them, the number like 1.38 that is enlisted in the array twice is not being properly ranked although I have used the following formula to correct for repeat numbers:

=Rank(number, range, order)+(count(range)+1-rank(number, range, 1)-rank(number, range, 0))/2. However, the number like 70.4 which is also listed twice is being correctly ranked.

My second question is if the sample size of the two groups are more than 20, e.g., 30, 40 etc., I cannot use the Mann Whitney table. I have to calculate the z score. But which table do I then use to look up if the computed U value is above or below the critical value?

The third question is do I have to pay to download the “Real Statistics tool pack”?

Will look forward to your response.

With sincerest thanks for your comments,

JB

Hi JB,

Q1: I am not sure why you are using such a complicated formula. I use =RANK(x,R1) + (COUNTIF(R1,x)-1)/2 and it works fine. I tried it with repeated values of 1.38, 3.6 and 70.4 and it works as you would expect. You can also use RANK.AVG in Excel 2010/2013. With earlier versions of Excel you can use the function RANK_AVG found in the Real Statistics tool pack.

Q2: For larger sample sizes, you don’t need a table. The idea is that the z value is normally distributed and so you can use the NORMSDIST function. This is easier. Easier still is to use the MTEST function or T-Test and Non-parametric Equivalents data analysis function found in the Real Statistics tool pack.

Q3: You can download the Real Statistics tool pack for free.

Charles

Does MTEST always report a p-value for a 2-tailed test? Can you do a 1-tailed test with the function?

Aaron,

The MTEST function (as well as the T Tests and Non-parametric Equivalents data analysis tool) reports the p-value of the one-tail test. To get the two-tail test you simply double the answer.

I had intended to report the two-tail test, in which case you would have had to halve the p-value to get the one-tail test. I will fix this in the next release, but for now MTEST reports the one-tail test.

Charles

Update: In the latest release (R2.1) the T Tests and Non-parametric Equivalents data analysis tool reports both the one-tail and two-tail tests.

I downloaded and installed the Resource Pack as per your website instructions and it shows as an addin under Excel options (RealStats). But after I check it and click OK, no additional tools show up under Data Analysis. I have Excel 2007. Any ideas why the tools are not available?

Robert,

Once you install the Real Statistics Resource Pack, the additional tools are available by simply pressing Ctrl-m. This will bring up a menu with all the Real Statistics data analysis tools. I thought that this would be the easiest approach since this can be done no matter which ribbon is active.

The other recommended approach is to add the Real Statistics tools to the Quick Access Toolbar (QAT), especially since the QAT is also available no matter which ribbon is active. The instructions for doing this are included in webpage http://www.real-statistics.com/excel-capabilities/supplemental-data-analysis-tools/accessing-supplemental-data-analysis-tools/, although not everyone has been successful at getting this to work.

Excel doesn’t let you customize the ribbon by adding an addin to an existing group (such as Data>Data Analysis). Instead you can add the addin as a custom group on any of the ribbons (e.g. right next to Data Analysis on the Data ribbon). Instructions for doing this are now available on the same web page as the one referenced above.

Charles