**Theorem 1** (**Singular Value Decomposition**): For any *m × n* matrix *A* there exist an *m × m* orthogonal matrix *U*, an *n × n* orthogonal matrix *V*, and a diagonal *m × n* matrix *D* with non-negative values on the diagonal such that *A = UDV*^{T}.

In fact, such matrices can be constructed where the columns of *U* are the eigenvectors of *AA*^{T}, the columns of *V* are the eigenvectors of *A*^{T}*A*, and the main diagonal of *D* contains the square roots of the eigenvalues of *AA*^{T} (or *A*^{T}*A*) in descending order.
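As a quick numerical check of the theorem, here is a sketch using NumPy (not part of the Real Statistics tools; the matrix is a hypothetical example):

```python
import numpy as np

# A hypothetical 3 x 2 matrix to illustrate Theorem 1.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

m, n = A.shape
U, s, Vt = np.linalg.svd(A)          # U is m x m, Vt is n x n, s holds the singular values

# Assemble the m x n diagonal matrix D from the singular values.
D = np.zeros((m, n))
D[:n, :n] = np.diag(s)

# A = U D V^T up to floating-point error.
print(np.allclose(A, U @ D @ Vt))    # True

# The singular values are the square roots of the eigenvalues of A^T A,
# in descending order (eigvalsh returns them ascending).
print(np.allclose(s**2, np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]))  # True
```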

Proof: By Property 2 of Positive Definite Matrices, *A*^{T}*A* is a positive semidefinite *n × n* matrix, and so by Property 1 of Positive Definite Matrices, it is symmetric. By Theorem 1 of Spectral Decomposition, it has a spectral decomposition *A*^{T}*A* = *VEV*^{T} where *V* is an orthogonal *n × n* matrix whose columns are unit eigenvectors of *A*^{T}*A* and *E* is an *n × n* diagonal matrix whose main diagonal consists of the eigenvalues *λ*_{1}, …, *λ*_{n} of *A*^{T}*A* in descending order. Since *A*^{T}*A* is positive semidefinite, these eigenvalues are non-negative. Thus there is an *r* such that *λ*_{1} ≥ … ≥ *λ*_{r} > 0 and *λ*_{r+1} = ⋯ = *λ*_{n} = 0.

Since *V*_{j} is a unit vector, ‖*AV*_{j}‖^{2} = (*AV*_{j})^{T}(*AV*_{j}) = *V*_{j}^{T}*A*^{T}*AV*_{j} = *λ*_{j}*V*_{j}^{T}*V*_{j} = *λ*_{j}, and so *AV*_{j} = 0 when *j > r*. We now construct an *m × m* matrix *U* as follows. First define the first *r* columns of *U* by *U*_{j} = *AV*_{j}/√*λ*_{j}. Since the *V*_{j} are orthogonal, so are the *U*_{j}: for *i ≠ j*, *U*_{i}^{T}*U*_{j} = *V*_{i}^{T}*A*^{T}*AV*_{j}/√(*λ*_{i}*λ*_{j}) = *λ*_{j}*V*_{i}^{T}*V*_{j}/√(*λ*_{i}*λ*_{j}) = 0. Since ‖*U*_{j}‖^{2} = ‖*AV*_{j}‖^{2}/*λ*_{j} = 1, *U*_{j} is a unit vector. If *r < m*, then we can expand *U*_{1}, …, *U*_{r} to an orthonormal basis *U*_{1}, …, *U*_{m} for the set of *m* × 1 column vectors. In either case let *U* be the *m × m* matrix whose columns are the *U*_{j}. Based on the construction described above, *U* is an orthogonal matrix.

Now let *B* = *U*^{T}*AV* = [*b*_{ij}]. Then for *j ≤ r*, *b*_{ij} = *U*_{i}^{T}*AV*_{j} = √*λ*_{j}*U*_{i}^{T}*U*_{j}, which is √*λ*_{j} when *i = j* and 0 otherwise, while for *j > r*, *b*_{ij} = *U*_{i}^{T}*AV*_{j} = 0 since *AV*_{j} = 0. Let *D* = the *m × n* diagonal matrix whose main diagonal consists of √*λ*_{1}, …, √*λ*_{r} followed by zeros (if needed). We have just shown that *U*^{T}*AV* = *D*, and so *A = UDV*^{T}.
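The construction in the proof can be carried out directly. A NumPy sketch with a hypothetical full-rank matrix (so that *r = n* and no basis extension is needed):

```python
import numpy as np

# Build the SVD by following the proof: take V and the eigenvalues from the
# spectral decomposition of A^T A, then set U_j = A V_j / sqrt(lambda_j).
A = np.array([[2.0, 0.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A.T @ A)      # eigenvalues in ascending order
order = np.argsort(lam)[::-1]         # reorder to descending, as in the proof
lam, V = lam[order], V[:, order]

D = np.diag(np.sqrt(lam))             # diagonal of square roots of eigenvalues
U = (A @ V) / np.sqrt(lam)            # divides column j by sqrt(lambda_j)

print(np.allclose(A, U @ D @ V.T))      # True: A = U D V^T
print(np.allclose(U.T @ U, np.eye(2)))  # True: U is orthogonal
```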

**Observation**: From the proof of the theorem, it follows that *AV = UD*, i.e. *AV*_{j} = √*λ*_{j}*U*_{j} for each *j*.

**Observation**: Note that *AA*^{T} = (*A*^{T})^{T}(*A*^{T}) is a positive semidefinite *m × m* matrix. In fact, we could have used *AA*^{T} instead of *A*^{T}*A* in the proof of Theorem 1. Also note that *A*^{T}*A* = *VD*^{T}*DV*^{T} and *AA*^{T} = *UDD*^{T}*U*^{T}. These are simply spectral decompositions of *A*^{T}*A* and *AA*^{T}. Note too that the diagonal matrix *D*^{T}*D* for *A*^{T}*A* is *n × n*, while the diagonal matrix *DD*^{T} for *AA*^{T} is *m × m*, but both have the same non-zero values on their main diagonals.
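A small NumPy illustration of this observation, using a hypothetical 2 × 3 matrix:

```python
import numpy as np

# For a non-square A, A A^T (m x m) and A^T A (n x n) have different sizes
# but share the same non-zero eigenvalues.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])        # m = 2, n = 3

eig_small = np.linalg.eigvalsh(A @ A.T)   # 2 eigenvalues
eig_big = np.linalg.eigvalsh(A.T @ A)     # 3 eigenvalues, one of them zero

print(np.round(np.sort(eig_small)[::-1], 6))   # [6. 1.]
print(np.round(np.sort(eig_big)[::-1], 6))     # [6. 1. 0.]
```

The two matrices agree on the non-zero eigenvalues 6 and 1; the extra eigenvalue of *A*^{T}*A* is (numerically) zero.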

If *A* is a symmetric *n × n* matrix, then *A*^{T}*A* = *A*^{2} = *AA*^{T} and the two spectral decompositions can be considered equal with *U = V*. In fact, if *A* is also positive semidefinite, the singular value decomposition of *A* is then *A = UDU*^{T}, which is the same as its spectral decomposition.

**Observation**: The columns of *U* corresponding to the non-zero diagonal elements form an orthonormal basis for the range of *A*, and so the rank of *A* = the number of non-zero diagonal elements. Thus a square matrix is invertible if and only if all the elements in *D* are positive. If *A* is invertible then *A*^{-1} = (*UDV*^{T})^{-1} = *VD*^{-1}*U*^{T}.
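The identity *A*^{-1} = *VD*^{-1}*U*^{T} can be checked numerically (a NumPy sketch with a hypothetical invertible matrix):

```python
import numpy as np

# For an invertible square matrix, A^{-1} = V D^{-1} U^T.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T   # V D^{-1} U^T

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))      # True
```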

The solutions to the equation *AX = C* can be found as follows:

*C = AX = UDV ^{T}X*

and so

*X = VD ^{*}U^{T}C*

where *D*^{*} is the diagonal matrix whose main diagonal consists of the reciprocals of the positive elements in *D*, followed by zeros. We can view *VD*^{*}*U*^{T} as representing a sort of inverse (the pseudo-inverse) for *A* even when *A* is not a square matrix.
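A NumPy sketch of this pseudo-inverse computation; the overdetermined system below is hypothetical and chosen so that *C* lies in the range of *A*:

```python
import numpy as np

# Solve AX = C via X = V D* U^T C, where D* holds the reciprocals of the
# positive singular values and leaves the zeros in place.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
C = np.array([[6.0], [9.0], [12.0]])

U, s, Vt = np.linalg.svd(A)
m, n = A.shape
D_star = np.zeros((n, m))               # note: D* is n x m, the transpose shape of D
for j, sv in enumerate(s):
    if sv > 1e-12:                      # reciprocal of each positive singular value
        D_star[j, j] = 1.0 / sv

X = Vt.T @ D_star @ U.T @ C
print(np.round(X.ravel(), 6))           # [3. 3.]
print(np.allclose(A @ X, C))            # True: this C lies in the range of A
```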

**Observation**: The columns of *V* corresponding to the zero diagonal elements form an orthonormal basis for the null space of *A*, and so the dimension of the null space of *A* = the number of columns in *A* minus the rank of *A*, i.e. *n* – *r* in the proof of Theorem 1. Thus any linear combination of these columns of *V* is a solution to the homogeneous equation *AX* = 0.

Note that *AX* = 0 if and only if *AX = UDV*^{T}*X* = 0 if and only if *DV*^{T}*X* = 0, since *U* is orthogonal and hence invertible. Thus *X* is a solution of *AX* = 0 if and only if *X*′ is a solution of *DX*′ = 0 where *X*′ = *V*^{T}*X*. This means that √*λ*_{j} *x*′_{j} = 0 for all *j*, where *x*′_{j} is the *j*th entry of *X*′. But since *λ*_{j} > 0 for *j* = 1, …, *r*, it follows that *x*′_{j} = 0 for such *j*, and so *X = VX*′ = *VV*^{T}*X* is a linear combination of the final *n* – *r* columns of *V*. Thus if *AX* = 0 then *X* is a linear combination of the final *n* – *r* columns in *V*.

Since the *λ*_{j} = 0 for *j* = *r*+1, …, *n*, any linear combination of the final *n* – *r* columns in *V* is a solution to *AX* = 0. Since the columns of *V* are orthogonal and therefore independent, it follows that the final *n* – *r* columns of *V* form a basis for the null space, and so the dimension of the null space is *n* – *r*.
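This description of the null space can be illustrated in NumPy (a sketch with a hypothetical rank-1 matrix, so *n* – *r* = 2):

```python
import numpy as np

# A rank-1 matrix with n = 3: the final n - r = 2 columns of V span the
# null space of A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # second row = 2 x first row

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))             # numerical rank
null_basis = Vt[r:].T                  # final n - r columns of V

print(r)                               # 1
print(null_basis.shape)                # (3, 2)
print(np.allclose(A @ null_basis, 0))  # True: each column is in the null space
```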

**Real Statistics Functions**: The Real Statistics Resource Pack provides the following functions:

**SVD_U**(R1, *iter*) = *U* matrix of the singular value decomposition (SVD) for the matrix *A* corresponding to range R1; thus *A = UDV*^{T} where *U* and *V* are orthogonal matrices and *D* is a diagonal matrix.

**SVD_D**(R1, *iter*) = *D* matrix of the SVD for the matrix *A* corresponding to range R1

**SVD_V**(R1, *iter*) = *V* matrix of the SVD for the matrix *A* corresponding to range R1

Here *iter* is the number of iterations in the algorithm used to compute the SVD (default 100). The current implementation of these functions only supports square matrices.
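For readers who want to cross-check these worksheet functions outside Excel, here is a rough NumPy counterpart (a sketch only, not the Real Statistics implementation, which uses its own iterative algorithm):

```python
import numpy as np

def svd_udv(A):
    """Return (U, D, V) with A = U @ D @ V.T, singular values descending,
    mirroring the roles of SVD_U, SVD_D, and SVD_V for a square matrix."""
    U, s, Vt = np.linalg.svd(A)
    return U, np.diag(s), Vt.T

# Hypothetical square example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
U, D, V = svd_udv(A)
print(np.allclose(A, U @ D @ V.T))    # True
print(np.round(np.diag(D), 6))        # [3. 1.]
```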

**Real Statistics Data Analysis Tool**: The **SVD Factorization** option of the Real Statistics **Matrix Operations** data analysis tool also provides the means to output the singular value decomposition of a square matrix.
