
7.3 Singular value decomposition

We now introduce another factorization that is as fundamental as the EVD.

7.3.1 Connections to the EVD

Except for some unimportant technicalities, the eigenvectors of $\mathbf{A}^*\mathbf{A}$, when appropriately ordered and normalized, are right singular vectors of $\mathbf{A}$. The left singular vectors could then be deduced from the identity $\mathbf{A}\mathbf{V} = \mathbf{U}\mathbf{S}$.
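As a concrete check, the sketch below (assuming NumPy; the small random matrix is chosen only for illustration) compares the eigen-decomposition of $\mathbf{A}^*\mathbf{A}$ with the factors returned by a library SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))       # illustrative tall matrix
m, n = A.shape

# Eigen-decomposition of A^T A (real symmetric, so eigh applies).
# eigh returns eigenvalues in ascending order; reverse to get descending.
lam, W = np.linalg.eigh(A.T @ A)
lam, W = lam[::-1], W[:, ::-1]

# Compare with the SVD: singular values are the square roots of those eigenvalues,
# and the eigenvectors match the right singular vectors up to sign.
U, S, Vh = np.linalg.svd(A)
print(np.allclose(np.sqrt(lam), S))           # True
print(np.allclose(np.abs(Vh), np.abs(W.T)))   # True (vectors agree up to sign)

# The left singular vectors then follow from A V = U S.
print(np.allclose(A @ Vh.T, U[:, :n] * S))    # True
```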

Another close connection between EVD and SVD comes via the $(m+n)\times (m+n)$ matrix

$$\mathbf{C} = \begin{bmatrix} 0 & \mathbf{A}^* \\ \mathbf{A} & 0 \end{bmatrix}.$$

If $\sigma$ is a singular value of $\mathbf{A}$, then $\sigma$ and $-\sigma$ are eigenvalues of $\mathbf{C}$, and the associated eigenvector immediately reveals a left and a right singular vector (see Exercise 7.3.11). This connection is implicitly exploited by software to compute the SVD.
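A quick numerical illustration of this connection, again as a sketch with NumPy and an arbitrary test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
m, n = A.shape

# The symmetric block matrix C = [[0, A*], [A, 0]] of size (m+n) x (m+n).
C = np.block([[np.zeros((n, n)), A.T], [A, np.zeros((m, m))]])

eigvals = np.linalg.eigvalsh(C)
sigma = np.linalg.svd(A, compute_uv=False)

# The positive eigenvalues of C are exactly the singular values of A;
# the negative ones mirror them, and any remaining |m - n| eigenvalues are zero.
pos = np.sort(eigvals[eigvals > 1e-10])[::-1]
print(np.allclose(pos, sigma))   # True
```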

7.3.2 Interpreting the SVD

Another way to write $\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*$ is

$$\mathbf{A}\mathbf{V}=\mathbf{U}\mathbf{S}.$$

Taken columnwise, this equation means

$$\mathbf{A} \mathbf{v}_{k} = \sigma_k \mathbf{u}_{k}, \qquad k=1,\ldots,r=\min\{m,n\}.$$

In words, each right singular vector is mapped by $\mathbf{A}$ to a scaled version of its corresponding left singular vector; the magnitude of scaling is its singular value.
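The following sketch (NumPy; the matrix is illustrative) verifies the columnwise identity directly:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

U, S, Vh = np.linalg.svd(A)
for k in range(min(A.shape)):
    v_k = Vh[k, :]     # k-th right singular vector (a row of V*)
    u_k = U[:, k]      # k-th left singular vector
    print(np.allclose(A @ v_k, S[k] * u_k))   # True for each k
```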

Both the SVD and the EVD describe a matrix in terms of some special vectors and a few scalars. Table 7.3.1 summarizes the key differences. The SVD sacrifices having the same basis in both source and image spaces—after all, they may not even have the same dimension—but as a result gains orthogonality in both spaces.

Table 7.3.1: Comparison of the EVD and SVD

| EVD | SVD |
| --- | --- |
| exists for most square matrices | exists for all rectangular and square matrices |
| $\mathbf{A}\mathbf{x}_k = \lambda_k \mathbf{x}_k$ | $\mathbf{A} \mathbf{v}_k = \sigma_k \mathbf{u}_k$ |
| same basis for domain and range of $\mathbf{A}$ | two orthonormal bases |
| may have poor conditioning | perfectly conditioned |

7.3.3 Thin form

In The QR factorization we saw that a matrix has both full and thin forms of the QR factorization. A similar situation holds with the SVD.

Suppose $\mathbf{A}$ is $m\times n$ with $m > n$, and let $\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^*$ be an SVD. The last $m-n$ rows of $\mathbf{S}$ are all zero because $\mathbf{S}$ is diagonal. Hence

$$
\begin{aligned}
\mathbf{U} \mathbf{S} & = \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_n & \mathbf{u}_{n+1} & \cdots & \mathbf{u}_m \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ & & \\ & \boldsymbol{0} & \\ & & \end{bmatrix} \\
&= \begin{bmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_n \end{bmatrix} \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix} = \hat{\mathbf{U}} \hat{\mathbf{S}},
\end{aligned}
$$

in which $\hat{\mathbf{U}}$ is $m\times n$ and $\hat{\mathbf{S}}$ is $n\times n$. This allows us to define the thin SVD

$$\mathbf{A}=\hat{\mathbf{U}}\hat{\mathbf{S}}\mathbf{V}^*,$$

in which $\hat{\mathbf{S}}$ is square and diagonal and $\hat{\mathbf{U}}$ is ONC but not square.

So, in sketch form, a full SVD of a matrix that is taller than it is wide looks like

$$\rule{1cm}{2.4cm} \; \raisebox{11mm}{=} \; \rule{2.4cm}{2.4cm} \; \raisebox{11mm}{$\centerdot$} \; \rule{1cm}{2.4cm}\; \raisebox{11mm}{$\centerdot$} \; \rule[6mm]{1cm}{1cm}$$

while a thin SVD looks like

$$\rule{1cm}{2.4cm} \; \raisebox{11mm}{=} \; \rule{1cm}{2.4cm} \; \raisebox{11mm}{$\centerdot$} \; \rule[6mm]{1cm}{1cm} \; \raisebox{11mm}{$\centerdot$} \; \rule[6mm]{1cm}{1cm}$$

The thin form retains all the information about $\mathbf{A}$ from the SVD; the factorization is still an equality, not an approximation. It is computationally preferable when $m \gg n$, since it requires far less storage than a full SVD. For a matrix with more columns than rows, one can derive a thin form by taking the adjoint of the thin SVD of $\mathbf{A}^*$.
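In NumPy, for example, the `full_matrices` keyword of `numpy.linalg.svd` switches between the two forms; the sketch below, with an illustrative tall matrix, compares their shapes and confirms that both reproduce $\mathbf{A}$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((1000, 20))                        # tall: m >> n

U, S, Vh = np.linalg.svd(A)                                # full form: U is 1000 x 1000
Uhat, Shat, Vh2 = np.linalg.svd(A, full_matrices=False)    # thin form: Uhat is 1000 x 20

print(U.shape, Uhat.shape)                                 # (1000, 1000) (1000, 20)

# Both forms reconstruct A exactly (up to rounding); the thin one stores far less.
print(np.allclose(A, Uhat @ np.diag(Shat) @ Vh2))          # True
```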

7.3.4 SVD and the 2-norm

The SVD is intimately connected to the 2-norm, as the following theorem describes.

The conclusion (7.3.14) can be proved by vector calculus. In the square case $m=n$, $\mathbf{A}$ having full rank is equivalent to being invertible. The SVD is the usual means for computing the 2-norm and condition number of a matrix.
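As a small illustration (NumPy; the matrix is arbitrary), the 2-norm and the 2-norm condition number can be read directly off the singular values:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))

sigma = np.linalg.svd(A, compute_uv=False)                        # singular values, descending

print(np.isclose(np.linalg.norm(A, 2), sigma[0]))                 # ||A||_2 equals sigma_1
print(np.isclose(np.linalg.cond(A, 2), sigma[0] / sigma[-1]))     # kappa_2(A) = sigma_1 / sigma_r
```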

7.3.5 Exercises