Singular Value Decomposition

Theorem 1 (Singular Value Decomposition): For any m × n matrix A there exist an m × m orthogonal matrix U, an n × n orthogonal matrix V and an m × n diagonal matrix D with non-negative values on the diagonal such that A = UDV^T.

In fact, such matrices can be constructed where the columns of U are eigenvectors of AA^T, the columns of V are eigenvectors of A^TA and the main diagonal of D contains the square roots of the eigenvalues of A^TA (or AA^T) in descending order.

Proof: By Property 2 of Positive Definite Matrices, A^TA is a positive semidefinite n × n matrix, and so by Property 1 of Positive Definite Matrices, it is symmetric. By Theorem 1 of Spectral Decomposition, it has a spectral decomposition A^TA = VEV^T where V is an orthogonal n × n matrix whose columns are unit eigenvectors of A^TA and E is an n × n diagonal matrix whose main diagonal consists of the eigenvalues λ_1, …, λ_n of A^TA in descending order. Since A^TA is positive semidefinite, these eigenvalues are non-negative. Thus there is an r such that λ_1 ≥ … ≥ λ_r > 0 and λ_{r+1} = ⋯ = λ_n = 0.

Since V_j is a unit vector,

‖AV_j‖² = (AV_j)^T(AV_j) = V_j^T(A^TA)V_j = λ_j V_j^TV_j = λ_j
and so AV_j = 0 when j > r (since λ_j = 0 for such j). We now construct an m × m matrix U as follows. First define the first r columns of U by U_j = (1/√λ_j)AV_j. Since the V_j are orthogonal, so are the U_j. Since

U_j^TU_j = (1/λ_j)(AV_j)^T(AV_j) = (1/λ_j)V_j^T(A^TA)V_j = λ_j/λ_j = 1

each U_j is a unit vector. If r < m, then we can extend U_1, …, U_r to an orthonormal basis U_1, …, U_m for the set of m × 1 column vectors. In either case let U be the m × m matrix whose columns are the U_j. Based on the construction described above, U is an orthogonal matrix.

Now let B = U^TAV = [b_ij]. Then

b_ij = U_i^TAV_j = √λ_j U_i^TU_j

for j ≤ r (since AV_j = √λ_j U_j), which equals √λ_j when i = j and 0 otherwise, and

b_ij = U_i^TAV_j = 0

for j > r (since AV_j = 0). Let D be the m × n diagonal matrix whose main diagonal consists of √λ_1, …, √λ_r followed by zeros (if needed). We have just shown that U^TAV = D, and so A = UDV^T.
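The conclusions of Theorem 1 are easy to check numerically. Below is a minimal sketch using Python's numpy (an illustration, not part of the original article): it computes the SVD with `np.linalg.svd`, then verifies that A = UDV^T, that U and V are orthogonal, and that the singular values are the square roots of the eigenvalues of A^TA. The matrix A is an arbitrary example.

```python
import numpy as np

# An arbitrary 2 x 3 example matrix (m = 2, n = 3).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

# numpy returns U (m x m), the singular values in descending order, and V^T (n x n).
U, s, Vt = np.linalg.svd(A)
D = np.zeros(A.shape)
np.fill_diagonal(D, s)

# A = U D V^T
reconstruction_ok = np.allclose(A, U @ D @ Vt)

# The singular values are the square roots of the eigenvalues of A^T A
# (eigvalsh returns eigenvalues in ascending order, so reverse for descending).
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]
sqrt_ok = np.allclose(s, np.sqrt(np.clip(eigvals[:len(s)], 0, None)))

# U and V are orthogonal.
orth_ok = np.allclose(U @ U.T, np.eye(2)) and np.allclose(Vt @ Vt.T, np.eye(3))
print(reconstruction_ok, sqrt_ok, orth_ok)
```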

Observation: From the proof of the theorem, it follows that

AV_j = √λ_j U_j for j = 1, …, r and AV_j = 0 for j = r+1, …, n

or equivalently A = √λ_1 U_1V_1^T + ⋯ + √λ_r U_rV_r^T.
Observation: Note that AA^T = (A^T)^T(A^T) is a positive semidefinite m × m matrix. In fact, we could have used AA^T instead of A^TA in the proof of Theorem 1. Also note that

A^TA = (UDV^T)^T(UDV^T) = VD^TU^TUDV^T = V(D^TD)V^T
AA^T = (UDV^T)(UDV^T)^T = UDV^TVD^TU^T = U(DD^T)U^T

These are simply spectral decompositions of A^TA and AA^T. Note too that the diagonal matrix D^TD for A^TA is n × n, while the diagonal matrix DD^T for AA^T is m × m, but both have the same non-zero values on their main diagonals.

If A is a symmetric n × n matrix, then A^TA = A² = AA^T and the two spectral decompositions can be considered equal with U = V. If in addition A is positive semidefinite, the singular value decomposition of A is A = UDU^T, which is the same as its spectral decomposition. (If A has a negative eigenvalue, the corresponding singular value is its absolute value, and the corresponding columns of U and V differ in sign.)
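The symmetric case can also be checked numerically; here is a small numpy sketch (the matrix B is an arbitrary example, chosen so that A = B^TB is symmetric positive semidefinite with distinct eigenvalues):

```python
import numpy as np

# A is symmetric positive semidefinite by construction: B^T B always has
# non-negative eigenvalues (B itself is an arbitrary example).
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])
A = B.T @ B  # [[1, 2], [2, 5]], eigenvalues 3 +/- 2*sqrt(2), both positive

U, s, Vt = np.linalg.svd(A)

# For a positive semidefinite matrix the SVD coincides with the spectral
# decomposition: U = V and the singular values are the eigenvalues of A.
same_uv = np.allclose(U, Vt.T)
eigs_match = np.allclose(s, np.linalg.eigvalsh(A)[::-1])  # eigvalsh is ascending
print(same_uv, eigs_match)
```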

Observation: The columns of U corresponding to the non-zero diagonal elements of D form an orthonormal basis for the range of A, and so the rank of A = the number of non-zero diagonal elements. Thus a square matrix is invertible if and only if all the diagonal elements of D are positive. If A is invertible then A^{-1} = (UDV^T)^{-1} = VD^{-1}U^T.
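The rank and inverse observations can be illustrated with numpy (an illustrative sketch, not the article's code; the matrix is an arbitrary invertible example):

```python
import numpy as np

# An arbitrary invertible 2 x 2 example.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

U, s, Vt = np.linalg.svd(A)

# rank(A) = number of non-zero diagonal elements of D (the singular values).
rank = int(np.sum(s > 1e-12))

# All singular values are positive, so A is invertible and A^{-1} = V D^{-1} U^T.
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T
inverse_ok = np.allclose(A_inv, np.linalg.inv(A))
print(rank, inverse_ok)
```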

The solutions to the equation AX = C can be found as follows: substituting A = UDV^T gives

UDV^TX = C

and so

X = VD*U^TC

where D* is the n × m diagonal matrix whose main diagonal consists of the reciprocals of the positive elements in D followed by zeros. We can view VD*U^T as representing a sort of inverse for A (the Moore–Penrose pseudoinverse) even when A is not a square matrix.
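The construction of D* and the product VD*U^T can be sketched in numpy (the overdetermined system below is an arbitrary example; the result is compared against numpy's built-in `pinv`):

```python
import numpy as np

# An overdetermined system AX = C (A is 4 x 2; the numbers are arbitrary).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
C = np.array([6.0, 5.0, 7.0, 10.0])

U, s, Vt = np.linalg.svd(A)

# D*: the n x m diagonal matrix holding the reciprocals of the positive
# elements of D, with zeros elsewhere.
D_star = np.zeros((A.shape[1], A.shape[0]))
for j, sigma in enumerate(s):
    if sigma > 1e-12:
        D_star[j, j] = 1.0 / sigma

X = Vt.T @ D_star @ U.T @ C  # X = V D* U^T C

# V D* U^T is the Moore-Penrose pseudoinverse, so this agrees with np.linalg.pinv.
pinv_ok = np.allclose(X, np.linalg.pinv(A) @ C)
print(pinv_ok)
```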

Observation: The columns of V corresponding to the zero diagonal elements form an orthonormal basis for the null space of A, and so the dimension of the null space of A = the number of columns in A minus the rank of A, i.e. n – r in the proof of Theorem 1. Thus any linear combination of these columns of V is a solution to the homogeneous equation AX = 0.

Note that AX = 0 if and only if UDV^TX = 0, and since U is invertible, if and only if

DV^TX = 0

Thus X is a solution of AX = 0 if and only if X′ is a solution of DX′ = 0 where X′ = V^TX. This means that √λ_j X′_j = 0 for all j. Since λ_j > 0 for j = 1, …, r, it follows that X′_j = 0 for such j, and so X = VV^TX = VX′ is a linear combination of the final n – r columns of V.

Conversely, since λ_j = 0 for j = r+1, …, n, we have AV_j = 0 for such j, and so any linear combination of the final n – r columns of V is a solution to AX = 0. Since the columns of V are orthogonal and therefore independent, it follows that the final n – r columns of V form a basis for the null space, and so the dimension of the null space is n – r.
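Extracting a null-space basis from the final n – r columns of V can be sketched in numpy (the rank-deficient matrix below is an arbitrary example):

```python
import numpy as np

# A rank-deficient example: the third column is the sum of the first two,
# so the rank is 2 and the null space is one-dimensional.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))  # rank of A

# The final n - r columns of V (i.e. the final rows of V^T) span the null space.
null_basis = Vt[r:].T
null_ok = np.allclose(A @ null_basis, 0)
dim_null = A.shape[1] - r
print(r, dim_null, null_ok)
```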

Real Statistics Functions: The Real Statistics Resource Pack provides the following functions:

SVD_U(R1, iter) = U matrix of the singular value decomposition (SVD) for the matrix A corresponding to range R1; thus A = UDV^T where U and V are orthogonal matrices and D is a diagonal matrix.

SVD_D(R1, iter) = D matrix of the SVD for the matrix A corresponding to range R1

SVD_V(R1, iter) = V matrix of the SVD for the matrix A corresponding to range R1

Here iter is the number of iterations in the algorithm used to compute the SVD (default 100).
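For readers without Excel, here is a rough numpy sketch of what these three functions return (the Python names `svd_u`, `svd_d` and `svd_v` are hypothetical analogues, not part of the Real Statistics add-in, and numpy uses a direct algorithm rather than an iteration count):

```python
import numpy as np

# Hypothetical numpy analogues of the Real Statistics SVD_U, SVD_D and SVD_V
# worksheet functions (illustration only, not the add-in's implementation).
def svd_u(a):
    return np.linalg.svd(a)[0]

def svd_d(a):
    u, s, vt = np.linalg.svd(a)
    d = np.zeros(a.shape)
    np.fill_diagonal(d, s)
    return d

def svd_v(a):
    return np.linalg.svd(a)[2].T

A = np.array([[4.0, 0.0],
              [3.0, -5.0]])
ok = np.allclose(A, svd_u(A) @ svd_d(A) @ svd_v(A).T)  # A = U D V^T
print(ok)
```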

Real Statistics Data Analysis Tool: The SVD Factorization option of the Real Statistics Matrix Operations data analysis tool also provides the means to output the singular value decomposition of a square matrix.

11 Responses to Singular Value Decomposition

  1. Jun Zhao says:

    Hi Charles,
    Do you have a SVD function implemented?

  2. Pierluigi Mozzo says:

    I’m not able to find SVD functions in RealStatistic function list installed on my MAC.

    • Charles says:

      Unfortunately, this function is not yet supported on the Mac version of the software, only the Windows version.

  3. Monte says:

    SVD_U sometimes gives #VALUE! error. At first, I thought this only happened when U contained near zero values. Today, however, I found this same error on this simple matrix:
    1 16.85 1.46
    1 24.81 -4.61
    1 18.85 -0.21
    1 12.63 4.93
    1 21.38 -1.36
    1 18.78 -0.08
    1 15.58 2.98
    1 16.3 1.73
    This matrix is from this article on SVD and regression analysis:
    The comparable SVDU function from the old Digilander site’s matrix.xla returns values without any problem (but I don’t like using it in today’s Excel – it was written for an older version and seems to cause some problems in current Excel). The Digilander site is no longer online, but it appears you can download the old matrix.xla here:
    I just mention matrix.xla in case it may be helpful in solving the problems with SVD_U.

    • Charles says:

      The SVD_U function seems to work fine on this matrix and doesn’t return error values. I just tested it on my computer and will send you the results by email.

  4. Monte says:

    Thank you VERY MUCH for your e-mail reply with the attachment showing that SVD_U does indeed work properly on the example matrix. I went back to your website and downloaded the add-in anew to make sure I have the most current version. Voila, indeed, the SVD_U function works 🙂

  5. Rami says:

    SVD_U is sometimes giving a #VALUE! error. It has happened with many matrices; I will give an example below. Any help is greatly appreciated.

    -0.088721917 -0.611548267 -0.962796502
    -5.647089783 -0.502167919 -0.79059257
    -3.790783085 -0.337095695 -0.530709632
    -2.34760153 -0.208760657 -0.328664214
    -1.431077773 -0.127258708 -0.200350888
    -0.907758067 -0.08072246 -0.127086129

    • Charles says:

      Perhaps you are using an old version of the Real Statistics software. In one of the last few versions I corrected an error in the SVD_U function. The answer I get now for U is
      -0.037381909 -0.999301052 -9.10094E-09 2.98519E-10 -7.01112E-11 4.20293E-10
      -0.763403531 0.028557441 -0.018845734 -0.082496343 0.091553822 -0.071487468
      -0.512458152 0.019170063 0.018095374 -0.401453297 -0.322848265 -0.214276705
      -0.317361219 0.011871866 -0.013228788 0.874574071 -0.250001454 0.15765527
      -0.193460679 0.007236988 0.040372035 0.104632223 0.908208565 -0.018597225
      -0.122715547 0.00459055 0.012237016 -0.237069389 -0.006585681 0.961131622
