**Basic Concepts**

**Observation**: Suppose the random variable *x* has binomial distribution *B*(*n, p*) and define *z* as

*z* = (*x* – *np*) / √(*np*(1 – *p*))

By Corollary 1 of Relationship between Binomial and Normal Distributions, provided *n* is large enough, generally if *np* ≥ 5 and *n*(1–*p*) ≥ 5, then *z* is approximately normally distributed with mean 0 and standard deviation 1.

Thus by Corollary 1 of Chi-square Distribution, *z*^{2} ~ χ^{2}(1), where

*z*^{2} = (*x* – *np*)^{2} / (*np*(1 – *p*))
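As a quick sanity check, the sketch below (Python, with illustrative values of *n* and *p* chosen here rather than taken from the examples) simulates binomial draws and confirms that the average of *z*^{2} lands near 1, the mean of χ^{2}(1):

```python
import math
import random

random.seed(0)
n, p, reps = 100, 0.3, 10_000   # illustrative values; np and n(1-p) are both >= 5

z2_vals = []
for _ in range(reps):
    x = sum(random.random() < p for _ in range(n))    # one binomial draw
    z = (x - n * p) / math.sqrt(n * p * (1 - p))      # standardized count
    z2_vals.append(z * z)

# chi-square(1) has mean 1, so the average of z^2 should land near 1
print(round(sum(z2_vals) / reps, 2))
```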

**Example 1**: Suppose we flip a coin 10 times and obtain 9 heads and 1 tail. Is the coin fair?

We have already studied problems like this in Binomial Distribution and Hypothesis Testing using the Binomial Distribution. We can even use the normal approximation of the binomial distribution to solve such problems. This time we will use the chi-square distribution.

Let *x* be the random variable that counts the number of heads in *n* trials and define the null hypothesis as follows:

H_{0}: the coin is fair, i.e. *p* = .5 (two-tailed test)

Since *np* = 10 ∙ .5 = 5 ≥ 5 and *n*(1–*p*) = 10 ∙ .5 = 5 ≥ 5, we can apply the chi-square distribution as described above. With *x* = 9 observed heads, *z*^{2} = (9 – 5)^{2}/(10 ∙ .5 ∙ .5) = 16/2.5 = 6.4.

Now p-value = CHIDIST(6.4, 1) = 0.011412 < .05 = *α*, and so we reject the null hypothesis and conclude with 95% confidence that the coin is not fair.
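The Example 1 numbers can be reproduced outside Excel. The sketch below (Python, stdlib only) uses the identity that for one degree of freedom CHIDIST(*x*, 1) = erfc(√(*x*/2)):

```python
import math

# Example 1: x = 9 heads in n = 10 flips, H0: p = 0.5
n, x, p = 10, 9, 0.5
z = (x - n * p) / math.sqrt(n * p * (1 - p))   # normal approximation to the binomial
z2 = z ** 2                                    # approximately chi-square(1) under H0

# for df = 1, the right-tail probability CHIDIST(z2, 1) equals erfc(sqrt(z2/2))
p_value = math.erfc(math.sqrt(z2 / 2))
print(z2, round(p_value, 6))   # approximately 6.4 and 0.011412
```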

**Observation**: For each trial, let *E*_{1} = success and *E*_{2} = failure, and let *obs*_{1} = number of observed successes and *obs*_{2} = number of observed failures in *n* trials. Furthermore, let *exp*_{1} = number of expected successes and *exp*_{2} = number of expected failures in *n* trials. Using the terminology of Example 1, this means that *obs*_{1} = *x*, *obs*_{2} = *n* – *x*, *exp*_{1} = *np* and *exp*_{2} = *n*(1 – *p*).

By algebra, *z*^{2} can be expressed as

*z*^{2} = (*obs*_{1} – *exp*_{1})^{2}/*exp*_{1} + (*obs*_{2} – *exp*_{2})^{2}/*exp*_{2}

As we have observed above, *z*^{2} ~ χ^{2}(1). This observation amounts to what is known as the chi-square goodness of fit test for two mutually exclusive outcomes. We now look at the situation for more than two outcomes.
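With the Example 1 numbers, this algebraic identity is easy to verify numerically (a quick Python sketch):

```python
# Example 1 again: 9 heads and 1 tail in n = 10 trials, H0: p = 0.5
n, x, p = 10, 9, 0.5
obs = [x, n - x]                    # obs_1 = successes, obs_2 = failures
exp = [n * p, n * (1 - p)]          # exp_1 = np, exp_2 = n(1 - p)

# Pearson form: sum over the two cells of (obs - exp)^2 / exp
chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))

# direct z^2 form: (x - np)^2 / (np(1 - p))
z2 = (x - n * p) ** 2 / (n * p * (1 - p))
print(chi2, z2)   # both equal 6.4
```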

**Observation**: Suppose we have an experiment consisting of *n* independent trials, each with *k* mutually exclusive outcomes *E*_{i}, such that for each trial the probability of outcome *E*_{i} is *p*_{i}. Suppose further that for each *i* the observed number of occurrences of outcome *E*_{i} is *n*_{i}. These are the conditions for the multinomial distribution.

Our objective is to determine whether the observed frequencies *n*_{1}, …, *n*_{k} are consistent with the expected outcomes based on the multinomial distribution, namely *np*_{1}, …, *np*_{k}. This problem is equivalent to determining whether to accept the following null hypothesis:

H_{0}: *p*_{i} = *n*_{i}/*n* for all *i* = 1, …, *k*

For small samples the problem can be solved using Fisher’s Exact Test, but for larger samples this becomes computationally difficult. Instead we use the maximum likelihood ratio λ, as described in Maximum Likelihood Function. The idea behind this approach is to create a model for which the probability of obtaining the observed data is maximized, and then compare this model with the probability of obtaining the observed data under the null hypothesis.

We now use a theorem from advanced theoretical statistics that under certain conditions the random variable -2 ln λ, where λ is as defined in Definition 2 of Maximum Likelihood Function, has a distribution which approaches χ^{2 }(*k*) as *n* → ∞ where *n* = sample size and *k* = the number of parameters determined by the null hypothesis.

For a multinomial distribution the random variable –2 ln λ can be expressed as

*y* = –2 ln λ = 2 ∑_{i=1}^{k} *n*_{i} ln(*n*_{i}/(*np*_{i}))

Here the *n*_{i} are the observed values and the *np*_{i} are the expected values (based on the multinomial distribution). Since the *p*_{i} must sum to 1, the null hypothesis H_{0} references only *k* – 1 free parameters, and so by this theorem from advanced theoretical statistics, *y* ~ χ^{2}(*k*–1) for sufficiently large *n*.

We summarize the above observations in Theorem 1 using the following alternative way of expressing the random variable y.

**Definition 1**: The **maximum likelihood statistic** can be expressed as:

*y* = 2 ∑_{i=1}^{k} *obs*_{i} ln(*obs*_{i}/*exp*_{i})

**Theorem 1**: For sufficiently large values of *n*, the maximum likelihood test statistic has an approximately chi-square distribution with *k* – 1 degrees of freedom, i.e. χ^{2 }(*k*–1).

**Definition 2**: **Pearson’s chi-square test statistic** is defined as:

χ^{2} = ∑_{i=1}^{k} (*obs*_{i} – *exp*_{i})^{2}/*exp*_{i}

**Observation**: In general, the maximum likelihood test statistic is not used directly. Instead a further approximation, the Pearson’s chi-square test statistic, is commonly used. For large samples the results are similar, but for small samples the maximum likelihood statistic yields better results. In the case where *k* = 2, the Pearson’s chi-square test statistic is the *z*^{2} statistic we looked at earlier in this section.
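The two statistics can be compared side by side. The sketch below uses small made-up counts (four equally likely outcomes; not data from the examples) to show that the maximum likelihood statistic and Pearson’s statistic are close but not identical:

```python
import math

# hypothetical counts for k = 4 equally likely outcomes over 100 trials
obs = [30, 22, 18, 30]
exp = [25.0, 25.0, 25.0, 25.0]      # expected under H0: p_i = 1/4

# Pearson's chi-square statistic: sum of (obs - exp)^2 / exp
pearson = sum((o - e) ** 2 / e for o, e in zip(obs, exp))

# maximum likelihood statistic: 2 * sum of obs * ln(obs / exp)
g_stat = 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp))

print(round(pearson, 3), round(g_stat, 3))   # close but not identical
```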

**Theorem 2**: For sufficiently large values of *n*, the Pearson’s chi-square test statistic has approximately a chi-square distribution with *k* – 1 degrees of freedom, i.e. χ^{2 }(*k*–1)

**Observation**: Theorem 2 is used to perform what is called **goodness of fit** testing, where we check to see whether the observed data correspond sufficiently well to the expected values. In order to apply such tests, the following assumptions must be met (otherwise the chi-square approximation in Theorem 2 may not be accurate), namely:

- Random sample: Data must come from a random sampling of a population.
- Independence: The observations must be independent of each other. This means chi-square cannot be used to test correlated data (e.g. matched pairs).
- Cell size: *k* ≥ 5 and the expected frequencies *exp*_{i} ≥ 5. When *k* < 5 it is better to have even larger values for the *exp*_{i}. These assumptions are similar to those for the normal approximation to the binomial distribution.

Since the data is usually organized in the form of a table, the last assumption means that there must be at least 5 cells in the table and the expected frequency for each cell should be at least 5. For large values of *k*, a small percentage of cells with expected frequency of less than 5 can be acceptable. Even for smaller values of *k* this may not cause big problems, but it is probably a better choice to use the Fisher Exact Test in this case. In any event, you should avoid using the chi-square test where there is an expected frequency of less than 1 in any cell.

If the expected frequency for one or more cells is less than 5, it may be beneficial to combine one or more cells so that this condition can be met (although this must be done in such a way as to not bias the results).

**Example 2**: We have a die which we suspect is loaded to favor one or more numbers over the others. To test this we throw the die 60 times and get the following count for each of the 6 possible throws (as shown in the upper part of the worksheet in Figure 1):

**Figure 1 – Data for Example 2**

Essentially we are testing the following hypothesis about the multinomial distribution:

H_{0}: the probability of throwing any of the six faces on the die is 1/6.

We calculate the chi-square test statistic to be 12.4 (using the formula =SUM(B7:G7) in cell H7 of Figure 1). Here cell B7 contains the formula =(B4-B5)^2/B5 (and similarly for the other cells in range B7:G7). We now apply the chi-square test with* k* = 6 (and so *df* = 5) as follows:

p-value = CHIDIST(χ^{2}, *df*) = CHIDIST(12.4,5) = .0297 < .05 = *α*

Since p-value < *α*, we reject the null hypothesis, and conclude (with 95% confidence) that the die is loaded. We can reach the same conclusion by looking at the critical value of the test statistic:

χ^{2}-crit = CHIINV(*α, df*) = CHIINV(.05,5) = 11.07 < 12.4 = χ^{2}-obs
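Outside Excel, CHIDIST can be reproduced from the regularized incomplete gamma function. The sketch below implements the right-tail probability via a standard power series (a stdlib-only implementation of the same quantity, not part of the original worksheet) and applies it to the Example 2 statistic:

```python
import math

def chi2_sf(x, k):
    """Right-tail probability of chi-square with k df, like Excel's CHIDIST(x, k).
    Uses the standard power series for the regularized lower incomplete gamma."""
    a, t = k / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    for i in range(1, 500):
        term *= t / (a + i)
        total += term
        if term < 1e-16 * total:
            break
    p_lower = total * math.exp(a * math.log(t) - t - math.lgamma(a))
    return 1.0 - p_lower

# Example 2: chi-square statistic 12.4 with df = 6 - 1 = 5
print(round(chi2_sf(12.4, 5), 4))    # approximately 0.0297, matching CHIDIST(12.4, 5)
```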

**Excel Function**: Excel provides the following function which automates the above calculations:

**CHITEST**(R1, R2) = CHIDIST(χ^{2}, *df*) where R1 = the array of observed data, R2 = the array of expected values, χ^{2} is calculated from R1 and R2 as in Definition 2 and *df* = the number of elements in R1 (or R2) minus 1.

The ranges R1 and R2 must both have either one row or one column, they must contain the same number of elements and all the cells in R1 and R2 must contain only numeric values.

For Example 2 we can calculate p-value = CHITEST(B4:G4,B5:G5) = .0297.

**Example 3**: A safari park in Africa is divided into 8 zones, each containing a known population of elephants. A sample is taken of the number of elephants found in each zone to determine whether the distribution of elephants is significantly different from what would be expected based on the known population in each zone. The table on the left of Figure 2 (columns A-C) summarizes the data:

**Figure 2 – Data for Example 3**

The sample consists of the 55 elephants actually recorded (*obs*_{i}) by zone. If we scale the known population of 205 elephants down to 55 (by multiplying the population in each zone by 55/205) we arrive at the expected number of elephants (*exp*_{i}) in each zone (column E). For the analysis we use the following null hypothesis:

H_{0}: there is no significant difference between the distribution of the sample and the population distribution

We now calculate the p-value = CHITEST(B4:B11, E4:E11) = 0.82

Since p-value = .82 > .05 = *α*, we don’t reject the null hypothesis, and conclude there is no significant difference between the distribution of elephants in the sample among the zones compared with the known population.

**Fitting data to a distribution**

**Observation**: The chi-square goodness of fit test (as well as the maximum likelihood test) can also be applied to determine whether observed data fit a certain distribution (or curve). For this purpose a modified version of Theorem 1 or 2 can be employed as follows.

**Theorem 3**: Where there are *m* unknown parameters in the distribution or curve being fitted, the test statistic in Theorem 2 has approximately the chi-square distribution χ^{2 }(*k–m*–1).

Thus when fitting data to a Poisson distribution *m* = 1 (the mean parameter), while if fitting data to a normal distribution *m* = 2 (the mean and standard deviation parameters).

**Example 4**: A substance is bombarded with radioactive particles for 200 minutes. It is observed that between 0 and 7 hits are made in any one minute interval, as summarized in columns A and B of the worksheet in Figure 3. Thus for 8 of the one minute intervals there were no hits, for 33 one minute intervals there was 1 hit, etc.

**Figure 3 – Data for Example 4 plus calculation of chi-square**

We hypothesize that the data follows a Poisson distribution whose mean is the weighted average of the observed number of hits per minute, which we calculate to be 612/200 = 3.06 (cell B14).

H_{0}: the observed data follows a Poisson distribution

For each row in the table we next calculate the probability of *x* (where *x* = hits per minute) for *x* = 0 through 7 using the Poisson pdf, i.e. *f*(*x*) = POISSON(*x*, 3.06, FALSE), and then multiply this probability by 200 to get the expected number of hits per interval assuming the null hypothesis is true (column D).

We would like to proceed as in Example 3, except that this time we can’t use CHITEST since *df* ≠ the number of cells minus 1. In fact, *df = k – m* – 1 = 8 – 1 – 1 = 6 since there are 8 intervals (*k*) and the Poisson distribution has 1 unknown parameter (*m*), namely the mean. We therefore proceed as in Example 2 and explicitly calculate the chi-square test statistic (in column F) to be 3.085. We next calculate the following:

p-value = CHIDIST(χ^{2}, *df*) = CHIDIST(3.085,6) = .798 > .05 = *α*

χ^{2}-crit = CHIINV(*α, df*) = CHIINV(.05,6) = 12.59 > 3.09 = χ^{2}-obs

Based on either of the above inequalities, we retain the null hypothesis, and so with 95% confidence conclude that the observed data follow a Poisson distribution.
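The Example 4 procedure (estimate the mean, form Poisson expected frequencies, then use *df* = *k* – *m* – 1) can be sketched end to end. The counts below are made up for illustration; they are not the Figure 3 data:

```python
import math

def poisson_pmf(i, lam):
    # probability of exactly i events when the mean is lam
    return math.exp(-lam) * lam ** i / math.factorial(i)

# hypothetical data: number of one-minute intervals with 0..4 hits
obs = [12, 27, 28, 20, 13]                 # 100 intervals in total
n_int = sum(obs)

# estimate the Poisson mean as the weighted average of hits per interval
lam = sum(i * c for i, c in enumerate(obs)) / n_int

# expected frequency for each cell under the fitted Poisson distribution
exp = [n_int * poisson_pmf(i, lam) for i in range(len(obs))]

chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
df = len(obs) - 1 - 1                      # k - m - 1 with m = 1 estimated parameter
print(round(lam, 2), round(chi2, 3), df)
```

For brevity this sketch ignores the tail beyond 4 hits; in practice the last cell would be defined as “4 or more hits” so that the expected frequencies sum to the number of intervals.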

**Real Statistics Function**: The Real Statistics Resource Pack provides the following function to handle analyses such as that used for Example 4:

**FIT_TEST**(R1, R2, *par*) = CHISQ.DIST.RT(χ^{2}, *df*) where R1 = the array of observed data, R2 = the array of expected values, *par* = the number of unknown parameters as in Theorem 3 (default = 0) and χ^{2} is calculated from R1 and R2 as in Definition 2 with *df* = the number of elements in R1 (or R2) – *par* – 1.

For Example 2, FIT_TEST(B4:G4,B5:G5) = CHITEST(B4:G4,B5:G5) = .0297 and for Example 4, FIT_TEST(B4:B11,D4:D11,1) = .798.

**Testing using the index of dispersion**

As we saw above, Theorem 3 can be used to determine whether data follows a Poisson distribution. Note that if we want to test whether data follows a Poisson distribution with a predefined mean then we can use Theorem 2 instead, and so don’t need to reduce the degrees of freedom of the chi-square test by one.

The index of dispersion can also be used to test whether a data set follows a Poisson distribution. This test is especially useful with small data sets where the approach based on Theorem 2 or 3 is impractical.

**Definition 3**: The **Poisson index of dispersion** is defined as

*D* = ∑_{i=1}^{n} (*x*_{i} – *x̄*)^{2}/*x̄*

Since the index of dispersion is the variance divided by the mean, the Poisson index of dispersion is simply the index of dispersion multiplied by *n* – 1. The Poisson index of dispersion for the data in R1 can be calculated by the Excel formula =DEVSQ(R1)/AVERAGE(R1).

**Property 1**: For sample size *n* sufficiently large and mean ≥ 4, the Poisson index of dispersion follows a chi-square distribution with *n* – 1 degrees of freedom.

**Observation**: The estimate is pretty good when the mean ≥ 4 even for values of *n* as low as 5. Thus the property is especially useful with small samples, where we don’t have sufficient data to use the goodness-of-fit test described previously.
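A minimal sketch of the dispersion calculation (Python, with a small made-up sample rather than the Figure 5 data):

```python
# Poisson index of dispersion: DEVSQ(R1) / AVERAGE(R1) in Excel terms
data = [4, 6, 3, 5, 7, 5, 4, 6]        # hypothetical small sample of counts
n = len(data)
mean = sum(data) / n
devsq = sum((x - mean) ** 2 for x in data)   # sum of squared deviations
d = devsq / mean                              # Poisson index of dispersion

# under H0 (Poisson data), d is approximately chi-square with n - 1 df
print(d, n - 1)   # 2.4 with 7 degrees of freedom
```

Since 2.4 is well below the upper critical value CHIINV(.05, 7) = 14.07, such a sample would be consistent with a Poisson distribution.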

**Example 5**: Use Property 1 to determine whether the data in range A3:B8 of Figure 5 follows a Poisson distribution.

As we can see from the analysis in Figure 5, we don’t have sufficient reason to reject the null hypothesis that the data follows a Poisson distribution.

**Figure 5 – Testing using Poisson Index of Dispersion**

Is this a valid test for determining whether a histogram of samples is from a normal distribution:

H0: Histogram is NOT from a normally distributed population

where the test_stat = the Pearson’s chi-square test statistic as before, but now the critical value is a maximum threshold, with any values lower than that threshold indicating that the null hypothesis should be rejected. In that way, one could assert at a specific level of significance, when the test_stat is very small, that a histogram is from a normal distribution.

This is sort of the opposite approach. Can you explain why this would potentially be invalid?

Brian,

Sorry but I don’t understand what you are describing.

Charles

If I write the data in time series (demands series data)

Y_t Y_t-1 Y_t-2 … Y_t-23
SKU1 2 0 27 1
SKU2 56 73 24 13
SKU3 0 0 1 0
SKU4 2 1 1 0
…
SKU3000 15 10 3 7

I want to check the dispersion and will use the definition 4.

How to use the formula?

Sir, I want to ask my problem.

I have a data series. The data is from automotive industry consists of demand histories of 3000 SKUs over a period of 2 years (24 months).

I want to test the dispersion of the Poisson distribution in order to remove the data series of highly variable demands.

So I have a data table of 3000×24.

So X_1 has 3000 data, X_2 has 3000 data, …, X_24 the same.

Am I right??

based on the definition 4, how can I calculate and check the dispersion??

Hopefully, I can get your reply soon.

regards, Kinzie

Kinzie,

Assuming that you have placed the 3000×24 data table in the range A1:X3000, you can place the dispersion for each column in the range AA1:AX1. This can be done by placing the formula =VAR.S(A1:A3000)/AVERAGE(A1:A3000) in cell AA1, then highlighting the range AA1:AX1 and pressing Ctrl-R.

The closer these values are to 1 the more likely each of the columns contains data from a Poisson distribution.
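In a language like Python, the same per-column calculation looks like this (the 4×4 slice below reuses the SKU rows shown in the question; the remaining rows and months are omitted):

```python
# index of dispersion (sample variance / mean) for each column (month),
# mirroring the Excel recipe =VAR.S(column)/AVERAGE(column) filled across with Ctrl-R
table = [
    [2, 0, 27, 1],      # SKU1
    [56, 73, 24, 13],   # SKU2
    [0, 0, 1, 0],       # SKU3
    [2, 1, 1, 0],       # SKU4
]

def dispersion(col):
    n = len(col)
    mean = sum(col) / n
    var_s = sum((x - mean) ** 2 for x in col) / (n - 1)   # sample variance (VAR.S)
    return var_s / mean

by_month = [dispersion(col) for col in zip(*table)]
print([round(d, 2) for d in by_month])
```

Values close to 1 suggest Poisson-like columns; note the function assumes each column has a nonzero mean.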

I have not provided a statistical test for this since instead I have explained how to use the Chi-square Goodness of Fit approach for determining whether data fits a Poisson distribution.

Charles

Wonderful explanation of the theory.

Now I understand why some authors use expected values in the denominator and some the error size.

Keep on writing!

i have a set of two gravity values obtained by two different methods. now i want to assess whether the two methods give reliable results. how do i do this?

It depends on what you mean by “reliable results”. If you mean that each set of gravity values fits values predicted by a theory of gravity, then you can perform the chi-square goodness of fit test twice, once for each set of values vs. the theoretical values.

Charles

Sir, what if I have certain set of data ex: 25, 35, 40, 50, 60, 35, 40, 70…

How can I know if this has a poisson distribution or not?

Thanks!

Jon,

Just follow the approach in Example 4 of the referenced webpage.

Charles

I think you are my hero. I have frequency data as follows:

n f
0 1215
10 0
20 0
30 3004
40 0
50 1833
60 496
70 135
80 86
90 191
100 18
110 11
120 0
130 4

Sum(f) = 6994, 7 card poker hands – Texas Hold-em

Mode = 30 points assigned to the completed hand

The point generating algorithm uses 9 probabilistic factors that mirror the odds of getting each type of hand (straight, 3 of a kind, etc.)

I want to tune the model using solver to optimize the goodness of fit. Subtle changes in the factors will begin to fill in the missing pieces of the distribution providing more of a continuous evaluative function.

1) What would be your worksheet suggestion on the metric or the solver target variable that would necessarily improve the quality and predictability of the distribution?

2) Would we be better off, with regard to item 1, using the Pearson value from the regression between the pure theoretical odds and the 9 probabilistic factors employed? (Except, I don’t know how to get this non-linear R^2 into a worksheet formula.)

Your knowledge is impressive and supported with wonderful examples. Any help you could lend me would be blessing.

staryman@att.net

I am trying to understand your data. What does n represent (10 = no pair, 20 = 1 pair, 30 = 2 pairs, etc.)? Which point generating algorithm are you referring to? (a trendline? a trendline on a pixel background? etc.). Is the goal to create a straight line which best approximates the raw data (in which case linear regression seems to be the right choice)? I need to understand the problem better.

Charles

Charles,

Thanks for the help.

I can see it by typing it into the cell. I had been looking for it by clicking on the Add-Ins macro button (“Real Statistics”). If one does this, you can’t see the Fit_Test. Anyway, I am now able to use it, so thanks again.

It is interesting though: as soon as a single interval is skipped and subsequently there is a “hit” for the next or one of the subsequent intervals, the Poisson distribution does not fit. Is this what you would expect? So for example, in Example 4 above, if you have hits for intervals 0, 1, 2, 3 and 4, none for 5, but again have hits for 6 and 7, and you do the fit_test, I find that it does not fit Poisson.

Glad to see that you can now use the FIT_TEST function. It would be surprising to find that there were no intervals with 5 hits in a sample of 200 which follows the Poisson distribution.

Charles

Charles,

Excellent website, examples and free tools!

Thank you so much!

Similar to ndm above I am unable to find the FIT_TEST.

I downloaded the latest version on the download page, but still was not able to find it.

I am working on Excel 2010.

I can see all other functions including the CHI-Square Test.

Any help would be appreciated.

Regards

ST

ST,

The FIT_TEST function was introduced in Release 2.17 on 1 September 2014, and so should work on any version of the software that has a higher version number, which can be determined by inserting the formula =VER() in any cell.

Not seeing FIT_TEST is quite strange. I am using Release 3.2.1, working in Excel 2010, and the FIT_TEST function works fine.

What do you see when you try to enter the function in a cell? Do you see #NAME? What do you see when you enter the CHI_TEST function?

Caution: If you try to locate any of the Real Statistics functions using the formula bar, you will find them all under the User Defined category. In any case, you should be able to enter =FIT_TEST(… in a cell just like any Excel formula.

Charles

Zaiontz: Sir, you are a gentlemanly wizard! thanks so much for your work and your website. i found you weeks ago looking for Fisher’s Exact Test help, but you have really provided a superlative service here with all of your offerings–both theory and tools! (i’ve bookmarked your site, of course.)

Say, i was testing an observed range against an expected range and used

CHISQ.TEST, but i wanted to use your handy FIT_TEST(R1, R2, par) instead, but it didn’t run. do i need to re-install your Real Statistics Resource Pack?

This function was only included in the latest version of the software, and so you would need to re-install the software if you don’t have the latest release. All you need to do is download the software and replace the existing version with the new version. Provided you don’t change the location or name of the file you don’t need to do anything else.

I plan to issue yet a new release in a few days.

Charles

thanks, Charles–that worked! (and i’ll keep an eye out for the new release.)

Sir

The “Goodness of Fit” is great. But the “maximum likelihood” is a little difficult to me. I don’t understand Definition 1. Maybe I should review the theory of maximum likelihood, I almost forget them. LOL

Sir

There is a typo in Example 1: “Now p-value = CHIDIST(.05, 1) = 0.011412 < .05 = α" . ".05" in the CHIDIST() should be 6.4