
Sparsity and structure

Very large matrices cannot be stored all within primary memory of a computer unless they are sparse. A sparse matrix has structural zeros, meaning entries that are known to be exactly zero. For instance, the adjacency matrix of a graph has zeros where there are no links in the graph. To store and operate with a sparse matrix efficiently, it is not represented as an array of all of its values. There is a variety of sparse formats available; for the most part, you can imagine that the matrix is stored as triples $(i, j, A_{ij})$ for all the nonzero $(i, j)$ locations.
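As a concrete illustration (not part of the text), SciPy's coordinate, or COO, format stores a sparse matrix as exactly such triples of row index, column index, and value:

```python
import numpy as np
import scipy.sparse as sp

# A small matrix whose entries are mostly structural zeros.
A_dense = np.array([
    [2.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 3.0, 0.0],
    [0.0, 4.0, 0.0, 0.0],
    [5.0, 0.0, 0.0, 6.0],
])

# COO format keeps only the (i, j, A_ij) triples for nonzero entries.
A = sp.coo_matrix(A_dense)
triples = list(zip(A.row, A.col, A.data))
print(triples)   # 6 triples, one per nonzero entry
print(A.nnz)     # 6
```

Only the six nonzero entries are stored, rather than all sixteen; for a large matrix with a few nonzeros per row, the savings are dramatic.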

8.1.1 Computing with sparse matrices

Most graphs with real applications have many fewer edges than the maximum possible $n^2$ for $n$ nodes. Accordingly, their adjacency matrices have mostly zero elements and should be represented sparsely.

Arithmetic operations such as +, -, *, and ^ respect and exploit sparsity if the matrix operands are sparse. However, matrix operations may substantially decrease the amount of sparsity, a phenomenon known as fill-in.
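Fill-in can be seen even in a simple matrix product. In this sketch (using SciPy, not from the text), squaring a tridiagonal matrix produces a pentadiagonal result with noticeably more nonzeros:

```python
import scipy.sparse as sp

n = 100
# Tridiagonal matrix: nonzeros on 3 diagonals only.
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")

A2 = A @ A   # sparse-sparse product stays sparse, but fills in
print(A.nnz)   # 298  (3 diagonals)
print(A2.nnz)  # 494  (5 diagonals: fill-in)
```

Each product spreads nonzeros outward by one diagonal on each side; repeated products, or a poorly ordered factorization, can eventually destroy sparsity altogether.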

8.1.2 Banded matrices

A particularly important type of sparse matrix is a banded matrix. Recall from Exploiting matrix structure that $\mathbf{A}$ has upper bandwidth $p$ if $j - i > p$ implies $A_{ij} = 0$, and lower bandwidth $q$ if $i - j > q$ implies $A_{ij} = 0$. We say the total bandwidth is $p + q + 1$. Banded matrices appear naturally in many applications where each element interacts directly with only a few neighbors.
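The bandwidths can be read off directly from the nonzero pattern. This sketch (an illustration, not from the text) builds a matrix from its $p + q + 1$ diagonals and recovers $p$ and $q$:

```python
import scipy.sparse as sp

# 6x6 matrix with upper bandwidth p = 2 and lower bandwidth q = 1,
# built from its p + q + 1 = 4 nonzero diagonals.
n = 6
A = sp.diags([-1.0, 4.0, 1.0, 0.5], offsets=[-1, 0, 1, 2], shape=(n, n))

# Compute the bandwidths from the (i, j) locations of the nonzeros.
C = A.tocoo()
p = int((C.col - C.row).max())   # upper bandwidth
q = int((C.row - C.col).max())   # lower bandwidth
print(p, q)   # 2 1
```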

Without pivoting, an LU factorization preserves bandwidth, but pivoting can change or destroy bandedness.
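This preservation can be checked numerically. The sketch below (my illustration, using SuperLU options exposed by SciPy) factors a diagonally dominant tridiagonal matrix with the natural ordering and the pivoting threshold set to zero, so that no row exchanges occur; both factors then keep bandwidth 1:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
# Diagonally dominant tridiagonal matrix: p = q = 1.
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Natural ordering and zero pivot threshold suppress row pivoting
# (safe here only because A is diagonally dominant).
lu = spla.splu(A, permc_spec="NATURAL", diag_pivot_thresh=0.0)

def bandwidth(M):
    C = M.tocoo()
    return int(abs(C.row - C.col).max())

print(bandwidth(lu.L), bandwidth(lu.U))   # 1 1
```

With partial pivoting enabled, row swaps can move nonzeros outside the band, typically widening one of the factors.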

8.1.3 Systems and eigenvalues

If given a sparse matrix, the backslash operator will automatically try a form of sparse-aware Cholesky or pivoted LU factorization. Depending on the sparsity pattern of the matrix, the time taken to solve the linear system may be well below the $O(n^3)$ needed in the general case.
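In SciPy, the analogue of backslash for sparse systems is `spsolve`, which dispatches to a sparse direct factorization. A minimal sketch (not from the text):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Sparse tridiagonal system; a dense O(n^3) solve would waste nearly
# all of its effort on structural zeros.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x = spla.spsolve(A, b)             # sparse-aware direct solve
print(np.linalg.norm(A @ x - b))   # residual is tiny
```

For a banded matrix like this one, the factorization and solve run in time proportional to $n$ rather than $n^3$.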

For a very large matrix, it's unlikely that you will want to find all of its eigenvalues and eigenvectors. In Krylov subspaces we describe some of the math behind an algorithm that can find a selected number of eigenvalues of largest magnitude, lying to the extreme left or right, or nearest a given complex number.
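SciPy exposes such an algorithm (implicitly restarted Arnoldi, via ARPACK) as `eigs`. This sketch, an illustration not taken from the text, requests only the five eigenvalues of largest magnitude of a sparse matrix:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
# Symmetric tridiagonal matrix; its eigenvalues lie in (0, 4).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# k = 5 eigenvalues of largest magnitude ("LM"); the matrix is only
# ever touched through matrix-vector products.
vals = spla.eigs(A, k=5, which="LM", return_eigenvectors=False)
print(sorted(vals.real, reverse=True))
```

The `which` argument selects which part of the spectrum to target, e.g. `"SM"` for smallest magnitude or `"LR"` for largest real part, matching the choices described above.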

8.1.4 Exercises