The F-distribution is primarily used to compare the variances of two populations, as described in Hypothesis Testing to Compare Variances. This is particularly relevant in the analysis of variance testing (ANOVA) and in regression analysis.

**Definition 1**: The F-distribution with *n*_{1}, *n*_{2} degrees of freedom is defined as the distribution of

$$F = \frac{\chi^2(n_1)/n_1}{\chi^2(n_2)/n_2}$$

where *χ*^{2}(*n*_{1}) and *χ*^{2}(*n*_{2}) are independent chi-square random variables with *n*_{1} and *n*_{2} degrees of freedom, respectively.
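To illustrate Definition 1, the following sketch (assuming Python with numpy and scipy, which are not part of the original article) simulates the ratio of two independent chi-square variables, each divided by its degrees of freedom, and compares the result with scipy's built-in F distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n1, n2 = 4, 10

# Simulate F = (chi2(n1)/n1) / (chi2(n2)/n2) from independent chi-squares
chi1 = rng.chisquare(n1, size=100_000)
chi2 = rng.chisquare(n2, size=100_000)
f_sim = (chi1 / n1) / (chi2 / n2)

# The empirical median should be close to the F(n1, n2) median
print(np.median(f_sim))
print(stats.f.median(n1, n2))
```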

**Theorem 1**: If we draw two independent samples of size *n*_{1} and *n*_{2} respectively from two normal populations with the same variance, then

$$\frac{s_1^2}{s_2^2} \sim F(n_1 - 1,\, n_2 - 1)$$

Proof: By Theorem 2 of Chi-square Distribution, if *x* is drawn from a normally distributed population *N*(*μ*, *σ*), then for samples of size *n*:

$$\frac{(n-1)s^2}{\sigma^2} \sim \chi^2(n-1)$$

Thus if we draw two independent samples from two normal populations with the same variance *σ*^{2}, then by Definition 1,

$$\frac{s_1^2}{s_2^2} = \frac{s_1^2/\sigma^2}{s_2^2/\sigma^2} = \frac{\chi^2(n_1-1)/(n_1-1)}{\chi^2(n_2-1)/(n_2-1)} \sim F(n_1-1,\, n_2-1)$$
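Theorem 1 can be checked by simulation. The sketch below (again assuming Python with numpy/scipy, not part of the original article) repeatedly draws two normal samples with a common variance and compares the empirical distribution of the variance ratio *s*_{1}^{2}/*s*_{2}^{2} with *F*(*n*_{1} – 1, *n*_{2} – 1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n1, n2, sigma = 8, 12, 2.0
reps = 50_000

# Two independent normal samples per replication, same variance sigma^2
x = rng.normal(0, sigma, size=(reps, n1))
y = rng.normal(0, sigma, size=(reps, n2))
ratio = x.var(axis=1, ddof=1) / y.var(axis=1, ddof=1)

# Theorem 1: the ratio follows F(n1-1, n2-1); compare 95th percentiles
emp = np.quantile(ratio, 0.95)
theo = stats.f.ppf(0.95, n1 - 1, n2 - 1)
print(emp, theo)
```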

**Property 1**: A random variable *t* has distribution *T*(*k*) if and only if *t*^{2} has distribution *F*(1, *k*).
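Property 1 can be verified numerically. The sketch below (assuming Python with scipy, an addition to the original article) uses the fact that *P*(*t*^{2} ≤ *x*) = *P*(–√*x* ≤ *t* ≤ √*x*) and compares this with the *F*(1, *k*) cumulative distribution:

```python
from scipy import stats

k, t = 10, 1.7

# P(T^2 <= t^2) = P(-t <= T <= t) for T ~ t(k); by Property 1 this
# equals the F(1, k) cdf evaluated at t^2
lhs = stats.f.cdf(t**2, 1, k)
rhs = 2 * stats.t.cdf(t, k) - 1
print(lhs, rhs)
```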

**Excel Functions**: The following Excel functions are defined for the distribution:

**FDIST**(*x, df*_{1}*, df*_{2}) = the probability that the F-distribution with *df*_{1} and *df*_{2} degrees of freedom is ≥ *x*; i.e. 1 – *F*(*x*) where *F* is the cumulative F-distribution function.

**FINV**(*α, df*_{1}*, df*_{2}) = the value *x* such that FDIST(*x, df*_{1}*, df*_{2}) = *α*; i.e. the value *x* at which the right tail of the F-distribution has area *α*. This means that *F*(*x*) = 1 – *α*, where *F* is the cumulative F-distribution function.

With Excel 2010/2013/2016 there are a number of new functions (**F.DIST, F.INV, F.DIST.RT **and **F.INV.RT**) that provide equivalent functionality to FDIST and FINV, but whose syntax is more consistent with other distribution functions. These functions are described in Built-in Statistical Functions.

**Observation**: Excel only calculates the above functions for positive integer values of *df*1 and *df*2. Non-integer values are rounded down to the nearest integer. Thus, F.DIST(3,1.6,5,TRUE) = F.DIST(3,1,5,TRUE). In particular, all of the above Excel functions yield an error value when *df*1 < 1 or *df*2 < 1.

If you need a more accurate value of any of the F distribution functions when either or both of the degrees of freedom are not integers, and in particular when either of them is less than one, then you can use Real Statistics’ noncentral F distribution functions (with noncentrality value of zero), as described in Noncentral F Distribution. For example, the formula F.DIST(3,1,5,TRUE) = .8562, but F.DIST(3,0.99,5,TRUE) = #NUM!, whereas NF_DIST(3,0.99,5,0,TRUE) = .8606.
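If Excel is not a constraint, scipy's F distribution (an assumption here, not part of Real Statistics) accepts non-integer degrees of freedom directly, which gives a quick cross-check of the values quoted above:

```python
from scipy import stats

# Unlike Excel's F.DIST, scipy's F cdf accepts non-integer degrees of freedom
p_int = stats.f.cdf(3, 1, 5)     # comparable to F.DIST(3,1,5,TRUE) = .8562
p_frac = stats.f.cdf(3, 0.99, 5) # comparable to NF_DIST(3,0.99,5,0,TRUE)
print(p_int, p_frac)
```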

Alternatively, you can use the following Real Statistics functions.

**Real Statistics Functions**: The Real Statistics Resource Pack provides the following functions:

**F_DIST**(*x*, *df*1, *df*2, *cum*) = BETA.DIST(*x* * *df*1 / (*x* * *df*1 + *df*2), *df*1 / 2, *df*2 / 2, *cum*)

**F_INV**(*p*, *df*1, *df*2) = *x* * *df*2 / (*df*1 * (1 – *x*)) where *x* = BETA.INV(*p*, *df*1/2, *df*2/2)

Here F_DIST is a substitute for F.DIST and F_INV is a substitute for F.INV. Not only do these functions provide better estimates of the F distribution when the degrees of freedom are not integers, but F_DIST is also useful in providing an estimate of the pdf for versions of Excel prior to Excel 2010, where F.DIST(*x*, *df*1, *df*2, FALSE) is not available.
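The beta-distribution identity behind F_DIST and F_INV can be checked numerically for the cumulative case. The sketch below (assuming Python with scipy, an addition to the original article) verifies both directions:

```python
from scipy import stats

x, df1, df2 = 2.5, 3, 7

# F_DIST identity (cumulative case):
#   F(x; df1, df2) = BetaCDF(df1*x / (df1*x + df2); df1/2, df2/2)
via_beta = stats.beta.cdf(df1 * x / (df1 * x + df2), df1 / 2, df2 / 2)
direct = stats.f.cdf(x, df1, df2)
print(via_beta, direct)

# F_INV identity: x = b*df2 / (df1*(1 - b)) where b = BetaINV(p)
p = 0.9
b = stats.beta.ppf(p, df1 / 2, df2 / 2)
x_inv = b * df2 / (df1 * (1 - b))
print(x_inv, stats.f.ppf(p, df1, df2))
```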

Thank you so much for this valuable site! I have one suggestion: would it be possible to post a graph of the F distribution? Thank you!

Michael,

Ok, I will add this once I have issued the next software release.

Charles

I don’t understand the attributes of F distribution

I can see that the info I have provided on page http://www.real-statistics.com/chi-square-and-f-distributions/f-distribution/ is a bit sketchy. I will try to add a little more explanation, but it would be helpful if you can tell me what sort of info would be helpful to you.

In any case, the F distribution is used to test whether the variances of two populations are significantly different. It is commonly used in ANOVA testing. You can get more information about this on the page http://www.real-statistics.com/chi-square-and-f-distributions/two-sample-hypothesis-testing-comparing-variances/.

Charles