Lilliefors Test for Normality

When the population mean and standard deviation are known, we can use the one-sample Kolmogorov-Smirnov test to test for normality, as described in Kolmogorov-Smirnov Test for Normality.

However, when the population mean and standard deviation are not known, but instead are estimated from the sample data, then the usual Kolmogorov-Smirnov test based on the critical values in the Kolmogorov-Smirnov Table yields results that are too conservative. Lilliefors created a related test that gives more accurate results in this case (see Lilliefors Test Table).

The Lilliefors Test uses the same calculations as the Kolmogorov-Smirnov Test, but the table of critical values in the Lilliefors Test Table is used instead of the Kolmogorov-Smirnov Table. Since the critical values in this table are smaller, the Lilliefors Test is less likely to show that data is normally distributed.
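To make the calculation concrete, the following is a minimal Python sketch of the Dn statistic used by both tests: the data are standardized using the sample mean and sample standard deviation (which is exactly the situation the Lilliefors table is designed for), and the largest gap between the empirical distribution function and the standard normal CDF is taken. This is an illustrative implementation, not the Real Statistics code.

```python
import numpy as np
from scipy.stats import norm

def lilliefors_statistic(x):
    """Compute the KS/Lilliefors statistic Dn, standardizing the data with the
    sample mean and (unbiased) sample standard deviation."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # Standardize with estimates from the sample; using estimated parameters is
    # what calls for the Lilliefors table rather than the ordinary KS table.
    z = (x - x.mean()) / x.std(ddof=1)
    cdf = norm.cdf(z)                   # fitted normal CDF at each data point
    ecdf_hi = np.arange(1, n + 1) / n   # empirical CDF just after each point
    ecdf_lo = np.arange(0, n) / n       # empirical CDF just before each point
    d_plus = np.max(ecdf_hi - cdf)
    d_minus = np.max(cdf - ecdf_lo)
    return max(d_plus, d_minus)
```

The resulting Dn is then compared with the critical value from the Lilliefors Test Table rather than the Kolmogorov-Smirnov Table.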

Example 1: Repeat Examples 1 and 2 of the Kolmogorov-Smirnov Test for Normality using the Lilliefors Test.

For Example 1 of Kolmogorov-Smirnov Test for Normality, using the Lilliefors Test Table, we have

[Figure: Lilliefors test calculations for the data in Example 1]

Since Dn = 0.0117 < 0.0283 = Dn,α, once again we conclude that the data is a good fit with the normal distribution. (Note that the critical value of .0283 is smaller than the critical value of .043 from the KS Test.)

For Example 2 of Kolmogorov-Smirnov Test for Normality, using the Lilliefors Test Table with n = 15 and α = .05, we find that Dn = 0.184 < 0.2196 = Dn,α, which confirms that the data is normally distributed.

Real Statistics Functions: The following functions are provided in the Real Statistics Resource Pack to automate the table lookup:

LCRIT(n, α, tails, h) = the critical value of the Lilliefors test for a sample of size n, for the given value of alpha (default .05) and tails = 1 (one tail) or 2 (two tails, default) based on the Lilliefors Test Table. If h = TRUE (default) harmonic interpolation is used; otherwise linear interpolation is used.

LPROB(x, n, tails, iter, h) = an approximate p-value for the Lilliefors test for the Dn value equal to x for a sample of size n and tails = 1 (one tail) or 2 (two tails, default) based on a linear interpolation (if h = FALSE) or harmonic interpolation (if h = TRUE, default) of the critical values in the Lilliefors Test Table, using iter number of iterations (default = 40).

Note that the values for α in the Lilliefors Test Table range from .01 to .2 (for tails = 2) and from .005 to .1 (for tails = 1). If the p-value is less than .01 (tails = 2) or .005 (tails = 1), then the p-value is given as 0, and if the p-value is greater than .2 (tails = 2) or .1 (tails = 1), then the p-value is given as 1.
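For sample sizes that fall between rows of the table, the sketch below shows one way such an interpolated lookup could work. It assumes that "harmonic interpolation" means linear interpolation in 1/n (my reading of the term, not confirmed against the Real Statistics implementation), and the table rows used in the example are hypothetical values, not entries from the actual Lilliefors Test Table.

```python
def interp_crit(n, n_lo, crit_lo, n_hi, crit_hi, harmonic=True):
    """Interpolate a critical value for sample size n between two table rows
    (n_lo, crit_lo) and (n_hi, crit_hi).  Harmonic interpolation works in 1/n,
    which tracks how KS-type critical values shrink as n grows."""
    if harmonic:
        t = (1 / n - 1 / n_lo) / (1 / n_hi - 1 / n_lo)
    else:
        t = (n - n_lo) / (n_hi - n_lo)
    return crit_lo + t * (crit_hi - crit_lo)

# Illustrative only: interpolate between two hypothetical table rows
print(interp_crit(22, 20, 0.192, 25, 0.173, harmonic=True))
```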

For Example 2 of Kolmogorov-Smirnov Test for Normality, Dn,α = LCRIT(15, .05, 2) = .2196 > .184 = Dn and p-value = LPROB(0.184, 15) = .182858 > .05 = α, and so once again we can’t reject the null hypothesis that the data is normally distributed.
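Outside of Excel, a similar check can be run with the lilliefors function in the Python statsmodels package, which returns the Dn statistic and a table-based p-value. The data below are randomly generated for illustration only; they are not the Example 2 data.

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

# Illustrative data only (not the Example 2 data): 15 draws from a normal distribution
rng = np.random.default_rng(1)
x = rng.normal(loc=50, scale=10, size=15)

# Lilliefors test for normality: returns the Dn statistic and a p-value
d_n, p_value = lilliefors(x, dist="norm", pvalmethod="table")
print(f"Dn = {d_n:.4f}, p-value = {p_value:.4f}")
# If the p-value exceeds .05, we cannot reject the null hypothesis of normality.
```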

Lilliefors Distribution

Especially for values of α not found in the Lilliefors Test Table, we can use an approximation to the Lilliefors distribution. Click here for more information about this distribution, including some useful functions provided by the Real Statistics Resource Pack.

2 Responses to Lilliefors Test for Normality

  1. Keith Wild says:

    Of the many test regimes there are for testing normality, is there a list illustrating the order of preference for the test method according to the type of data you have?
    I mean, which test should I use for what type of data? It seems so easy to fudge a result as necessary according to the test method.

    • Charles says:

      Keith,
      In general, I believe that the Shapiro-Wilk test is the best one to use. If you have a number of ties, then d’Agostino-Pearson is probably better.
      Charles
