Statistical Tests for Normality and Symmetry

In this section we briefly describe several tests that can be used to determine whether data are normally distributed: the chi-square, Kolmogorov-Smirnov, Lilliefors, Shapiro-Wilk, Jarque-Bera, and D'Agostino-Pearson tests.

Chi-square Test

In Goodness of Fit we show that the chi-square goodness of fit test can be used to determine whether data adequately fit a particular distribution. In particular, in Example 4 of Goodness of Fit we show how to test whether data fit a Poisson distribution. In a similar fashion, we can test whether data fit a normal distribution.
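
To make the procedure concrete outside of Excel, here is a minimal Python sketch (using SciPy, not the Real Statistics Resource Pack) of a chi-square goodness-of-fit test for normality. The simulated sample, the choice of eight quantile-based bins, and the parameter estimates are all illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=50, scale=10, size=200)        # simulated sample (illustrative)

mu, sigma = x.mean(), x.std(ddof=1)               # parameters estimated from the data

# Quantile-based bin edges, widened to cover the whole real line
edges = np.quantile(x, np.linspace(0, 1, 9))      # 8 bins
edges[0], edges[-1] = -np.inf, np.inf

# Observed counts in each bin
observed = np.array([np.sum((x >= lo) & (x < hi))
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Expected counts under the fitted normal distribution
expected = len(x) * np.diff(stats.norm.cdf(edges, mu, sigma))

# ddof=2 because two parameters (mean and standard deviation) were estimated
chi2_stat, p_value = stats.chisquare(observed, expected, ddof=2)
print(chi2_stat, p_value)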

For additional information and some examples click here.

Kolmogorov-Smirnov (KS) Test

The KS test is a general test that can be used to determine whether sample data are consistent with any fully specified distribution. In particular, it can be used to check for normality, although it tends to be less powerful than tests designed specifically for that purpose.

It has an advantage over the chi-square test in that it can be used with small samples and does not require that the expected frequency in each bin be at least 5.
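
As an illustration, the following Python/SciPy sketch runs the one-sample KS test against a normal distribution; the simulated sample and the fully specified parameters (mean 0, standard deviation 1) are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0, scale=1, size=30)           # small simulated sample (illustrative)

# Classic KS test: the hypothesized normal distribution, including its mean
# and standard deviation, must be specified in advance (here 0 and 1).
# If the parameters were estimated from x, the Lilliefors test below is
# more appropriate.
stat, p_value = stats.kstest(x, 'norm', args=(0, 1))
print(stat, p_value)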

For additional information and some examples click here.

Lilliefors Test

The Lilliefors test is a modification of the Kolmogorov-Smirnov test for the case in which the mean and variance of the hypothesized normal distribution are estimated from the sample; it relies on its own table of critical values.
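
For illustration, the statsmodels library provides a Lilliefors implementation; the simulated sample below is an illustrative assumption.

import numpy as np
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng(2)
x = rng.normal(loc=10, scale=2, size=40)          # simulated sample (illustrative)

# The mean and variance are estimated from the data; the test uses its own
# table of critical values rather than the standard KS table.
stat, p_value = lilliefors(x, dist='norm')
print(stat, p_value)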

For additional information and some examples click here.

Shapiro-Wilk (SW) Test

The SW test is specifically designed to test the null hypothesis that data are sampled from a normal distribution. The test has the following characteristics:

  • The SW test is designed to check for departures from normality and is generally more powerful than the KS test.
  • The mean and variance do not need to be specified in advance.
  • In essence, the SW test computes the correlation between the raw data and the values that would be expected if the observations followed a normal distribution. The SW statistic tests whether this correlation differs from 1 (see Basic Concepts of Correlation).
  • The SW test is a relatively powerful test of non-normality and can detect even small departures from normality with modest sample sizes. This may make it more sensitive than we need (i.e. data that fail the SW test may still be adequate for the analysis under consideration).

We provide two approaches: the original algorithm of Shapiro-Wilk (limited to samples of size 3 to 50) and an expanded algorithm due to J.P. Royston which supports samples of size 12 to 5,000. Both approaches are supported by the Real Statistics Resource Pack.
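
As an illustration outside of Excel, the SciPy function stats.shapiro (based on an extension of the original algorithm) can be used as sketched below; the simulated sample is an illustrative assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(loc=0, scale=1, size=25)           # simulated sample (illustrative)

# The W statistic is close to 1 for normal data; a small p-value signals a
# departure from normality.
stat, p_value = stats.shapiro(x)
print(stat, p_value)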

For additional information and some examples of the original approach click here.

For additional information and some examples of the expanded approach click here.

Jarque-Bera Test

A data set that is normally distributed has skewness of zero and excess kurtosis of zero. This fact is the basis of a simple test of normality called the Jarque-Bera test.
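
For illustration, SciPy also implements the Jarque-Bera statistic; the simulated sample below is an illustrative assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=500)                          # simulated sample (illustrative)

# The statistic combines sample skewness and excess kurtosis, both of which
# should be near zero for normally distributed data.
stat, p_value = stats.jarque_bera(x)
print(stat, p_value)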

For additional information and some examples click here.

D’Agostino-Pearson Test

The D’Agostino-Pearson test also uses the fact that a normally distributed data set has zero skewness and zero excess kurtosis. This test is generally more accurate than the Jarque-Bera test described above.
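
For illustration, the D’Agostino-Pearson K² test is available in SciPy as stats.normaltest; the simulated sample below is an illustrative assumption.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(size=100)                          # simulated sample (illustrative)

# Transformed skewness and kurtosis statistics are combined into a single
# statistic that is approximately chi-square with 2 degrees of freedom.
stat, p_value = stats.normaltest(x)
print(stat, p_value)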

For additional information and some examples click here.
