# Multivariate Normal Distribution Basic Concepts

### Univariate case

A random variable x has a normal distribution if its probability density function (pdf) can be expressed as

$$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Here e is the constant 2.7183…, and π is the constant 3.1415…

The normal distribution is completely determined by the parameters μ (mean) and σ (standard deviation). We use the abbreviation N(μ, σ) to refer to a normal distribution with mean μ and standard deviation σ, although for comparison with the multivariate case it would actually be better to use the abbreviation N(μ, σ²) where σ² is the variance.
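As a quick numerical illustration of the pdf above, here is a minimal Python sketch (the function name `normal_pdf` is just illustrative, not part of any library):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Univariate normal pdf with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The standard normal N(0, 1) density at x = 0 is 1/sqrt(2*pi)
print(normal_pdf(0.0))
```

Evaluating at the mean always gives the maximum of the curve, $1/(\sigma\sqrt{2\pi})$.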

### Multivariate case

Definition 1: A random vector X has a multivariate normal distribution with mean vector μ and covariance matrix Σ, written X ~ N(μ, Σ), if X has the following joint probability density function:

$$f(X) = \frac{1}{\sqrt{(2\pi)^k |\Sigma|}}\, e^{-\frac{1}{2}(X-\mu)^T \Sigma^{-1} (X-\mu)}$$

Here |Σ| is the determinant of the population covariance matrix Σ. The exponent of e consists of the product of the transpose of X − μ, the inverse of Σ, and X − μ, which has dimension (1 × k) × (k × k) × (k × 1) = 1 × 1, i.e. a scalar. Thus f(X) yields a single value. The coefficient $(2\pi)^k |\Sigma|$ under the square root can also be expressed as $|2\pi\Sigma|$.
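Definition 1 can be transcribed directly into NumPy; the sketch below is illustrative (the function name `mvn_pdf` is my own, not a library call):

```python
import numpy as np

def mvn_pdf(x, mu, sigma):
    """Joint pdf of N(mu, Sigma) evaluated at the point x (Definition 1)."""
    x = np.asarray(x, dtype=float)
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    k = len(mu)
    diff = x - mu
    # Exponent: -1/2 (x - mu)^T Sigma^{-1} (x - mu), a scalar
    exponent = -0.5 * diff @ np.linalg.solve(sigma, diff)
    # Coefficient: 1 / sqrt((2 pi)^k |Sigma|)
    coeff = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(sigma))
    return coeff * np.exp(exponent)

mu = np.array([1.0, 2.0])
sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
print(mvn_pdf([1.5, 1.5], mu, sigma))
```

Note that `np.linalg.solve` is used instead of explicitly inverting Σ, which is numerically preferable.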

Definition 2: The expression

$$(X-\mu)^T \Sigma^{-1} (X-\mu)$$

which appears in the exponent of e, is called the squared Mahalanobis distance between X and μ. We can also define the squared Mahalanobis distance for a sample to be

$$(X-\bar{X})^T S^{-1} (X-\bar{X})$$

where S is the sample covariance matrix and $\bar{X}$ is the sample mean vector. In MANOVA we give an example of how to calculate this value and also introduce the Real Statistics function MDistSq, which calculates this value automatically.
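A small sketch of the sample version of the squared Mahalanobis distance (the helper `mdist_sq` and the data values are my own illustration, not the Real Statistics implementation of MDistSq):

```python
import numpy as np

def mdist_sq(x, center, cov):
    """Squared Mahalanobis distance (x - center)^T cov^{-1} (x - center)."""
    diff = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return float(diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff))

# Sample version: use the sample mean vector and sample covariance matrix S
data = np.array([[2.0, 4.0],
                 [3.0, 7.0],
                 [5.0, 6.0],
                 [4.0, 9.0]])
xbar = data.mean(axis=0)
S = np.cov(data, rowvar=False)   # unbiased sample covariance matrix
print(mdist_sq(data[0], xbar, S))
```

With the identity matrix as covariance, this reduces to the ordinary squared Euclidean distance.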

Observation: If k = 1 then the above definition is equivalent to the univariate normal distribution. If k = 2 the result is a three-dimensional bell-shaped surface (as shown in Figure 1).

Property 1: If X ~ N(μ, Σ) where all the xj in X are independent, then the population covariance matrix is a diagonal matrix [aij] with $a_{jj} = \sigma_j^2$ for all j and $a_{ij} = 0$ for all i ≠ j, and so the joint probability density function simplifies to

$$f(X) = \prod_{j=1}^{k} f_{\mu_j,\sigma_j}(x_j)$$

where each ${f}_{\mu_j ,\sigma_j}(x_j)$ is the univariate normal pdf of xj with mean μj and standard deviation σj.
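Property 1 is easy to verify numerically: with a diagonal Σ, the joint pdf of Definition 1 equals the product of the univariate pdfs. The values below are arbitrary illustrative choices:

```python
import numpy as np
from statistics import NormalDist

# Independent components: diagonal covariance matrix
mus = [1.0, -1.0, 0.5]
sigmas = [0.5, 2.0, 1.0]
x = [1.2, 0.3, -0.4]

# Joint pdf from Definition 1 with the diagonal Sigma
Sigma = np.diag(np.array(sigmas) ** 2)
d = np.array(x) - np.array(mus)
joint = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / np.sqrt(
    (2 * np.pi) ** 3 * np.linalg.det(Sigma))

# Product of the univariate pdfs (Property 1)
product = 1.0
for xi, mi, si in zip(x, mus, sigmas):
    product *= NormalDist(mi, si).pdf(xi)

print(joint, product)   # the two values agree
```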

Property 2: If y = $\sum_{j=1}^{k} c_j x_j$ = CTX, where C is the k × 1 vector [cj], and X ~ N(μ, Σ), then y has a normal distribution with mean $\sum_{j=1}^{k} c_j \mu_j$ = CTμ and variance $\sum_{i=1}^{k} \sum_{j=1}^{k} c_i c_j \sigma_{ij}$ = CTΣC; i.e. y ~ N(CTμ, CTΣC).
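Property 2 can be checked by Monte Carlo simulation: draw many samples from N(μ, Σ), form y = CᵀX, and compare the empirical mean and variance with Cᵀμ and CᵀΣC (the particular μ, Σ, and C below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([[ 2.0,  0.3, -0.5],
                  [ 0.3,  1.0,  0.2],
                  [-0.5,  0.2,  1.5]])
c = np.array([2.0, -1.0, 3.0])

# Property 2: y = C^T X is normal with mean C^T mu and variance C^T Sigma C
mean_y = c @ mu
var_y = c @ sigma @ c

# Monte Carlo check of the moments
x = rng.multivariate_normal(mu, sigma, size=200_000)
y = x @ c
print(mean_y, y.mean())   # should be close
print(var_y, y.var())     # should be close
```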

Observation: The unbiased estimates for the population mean and variance of y are given by the sample mean $\sum_{j=1}^{k} c_j \bar{x}_j$ = $C^T\bar{X}$ and sample variance $\sum_{i=1}^{k} \sum_{j=1}^{k} c_i c_j s_{ij}$ = CTSC, where sij = cov(xi, xj).

Observation: When k = 2, the joint pdf of X depends on the parameters μ1, μ2, σ1, σ2, and ρ. A plot of the distribution for different values of the correlation coefficient ρ is displayed in Figure 1.

Figure 1 – Bivariate normal density function

Observation: Suppose X has a multivariate normal distribution. For any constant c, the set of points X whose Mahalanobis distance from μ equals c sketches out a k-dimensional ellipse. The value of the probability density function at all these points is the constant

$$\frac{1}{\sqrt{(2\pi)^k |\Sigma|}}\, e^{-c^2/2}$$

Let’s take a look at the situation where k = 2. In this case, we have

$$\Sigma = \begin{bmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{bmatrix}$$

Thus

$$|\Sigma| = \sigma_1^2\sigma_2^2 - \rho^2\sigma_1^2\sigma_2^2 = \sigma_1^2\sigma_2^2(1-\rho^2)$$

and so

$$\Sigma^{-1} = \frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)} \begin{bmatrix} \sigma_2^2 & -\rho\sigma_1\sigma_2 \\ -\rho\sigma_1\sigma_2 & \sigma_1^2 \end{bmatrix}$$

Hence

$$(X-\mu)^T \Sigma^{-1} (X-\mu) = \frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{1-\rho^2}$$

where

$$z_1 = \frac{x_1-\mu_1}{\sigma_1} \qquad z_2 = \frac{x_2-\mu_2}{\sigma_2}$$

Finally, note that the equation

$$z_1^2 - 2\rho z_1 z_2 + z_2^2 = c^2(1-\rho^2)$$

is an ellipse centered at μ = (μ1, μ2).

Observation: Note that when ρ = 0, indicating that x1 and x2 are uncorrelated, the ellipse takes the form $z_1^2 + z_2^2 = c^2$, which is a circle in the standardized coordinates z1, z2. When ρ = ±1, indicating that x1 and x2 are completely correlated, the ellipse degenerates into a straight line.

Observation: Using the above calculations, when k = 2 the multivariate normal pdf is

$$f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\, e^{-\frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{2(1-\rho^2)}}$$

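The simplified k = 2 formula can be cross-checked numerically against the general matrix form of Definition 1; below is a small NumPy sketch (the function name and parameter values are my own illustration):

```python
import numpy as np

def bivariate_normal_pdf(x1, x2, mu1, mu2, s1, s2, rho):
    """Bivariate normal pdf using the simplified k = 2 formula."""
    z1 = (x1 - mu1) / s1
    z2 = (x2 - mu2) / s2
    q = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2)
    return np.exp(-q / 2) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))

# Cross-check against the general matrix form of Definition 1
mu = np.array([1.0, 2.0])
sigma = np.array([[1.5**2,          0.4 * 1.5 * 0.8],
                  [0.4 * 1.5 * 0.8, 0.8**2]])   # sigma1=1.5, sigma2=0.8, rho=0.4
d = np.array([0.7, 2.5]) - mu
general = np.exp(-0.5 * d @ np.linalg.solve(sigma, d)) / np.sqrt(
    (2 * np.pi) ** 2 * np.linalg.det(sigma))
print(bivariate_normal_pdf(0.7, 2.5, 1.0, 2.0, 1.5, 0.8, 0.4), general)
```

The two printed values match, since the simplified formula is just Definition 1 with Σ written out explicitly.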
Property 3: If X ~ N(μ, Σ), then the squared Mahalanobis distance between X and μ has a chi-square distribution with k degrees of freedom.

Observation: This property is an extension of Corollary 1 of Chi-square Distribution. We can interpret the property as follows. Let c² = the critical value of the chi-square distribution with k degrees of freedom for α = .05. Then the probability that X will fall within the ellipse defined by c, i.e. the region where $(X-\mu)^T \Sigma^{-1} (X-\mu) \le c^2$, is 1 – α = .95.
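Property 3 and this coverage interpretation can be checked by simulation for k = 2 (where the χ²(2) critical value for α = .05 has the closed form −2 ln α ≈ 5.991); the μ and Σ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([0.0, 0.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# For k = 2, the chi-square(2) critical value for alpha = .05
# has the closed form -2*ln(alpha) ≈ 5.991
crit = -2 * np.log(0.05)

x = rng.multivariate_normal(mu, sigma, size=100_000)
d = x - mu
# Squared Mahalanobis distance of each sampled point from mu
d2 = np.einsum('ij,ij->i', d @ np.linalg.inv(sigma), d)
coverage = np.mean(d2 <= crit)
print(coverage)   # should be close to 0.95
```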

### 7 Responses to Multivariate Normal Distribution Basic Concepts

1. Gerardo says:

Dr Zaionts, good morning. Excuse me, what do pdf and cdf mean? Could it be that pdf is the same as probability density function? But cdf?
Thank you

• Charles says:

Gerardo,
Yes, PDF = probability density function. CDF = cumulative distribution function.
Charles

2. Reza says:

Is there any way we can plot the PDF of a bivariate normal distribution in Excel?

This YouTube video may help: https://www.youtube.com/watch?v=4BrMSVV7mBM
Thanks,

• Charles says:

Reza,
This would have to be a three dimensional graph. In any case, you can use the Real Statistics function BNORMDIST to calculate the pdf values.
Charles

3. Alex says:

Hi,

Are you sure about the BNORMDIST function?
Using the covariance matrix or the simplified equation (btw, thanks) I get the same result, which I believe to be correct. When using BNORMDIST I got a value 72.56433684 higher (the value remains constant) than with the simplified equation…
I think there is a glitch somewhere (it could be me), I just want to understand.
BR