
Chapter 3

Functions

Solution of least squares by the normal equations
lsnormal.jl
"""
    lsnormal(A, b)

Solve a linear least-squares problem by the normal equations.
Returns the minimizer of ||b-Ax||.
"""
function lsnormal(A, b)
    N = A' * A
    z = A' * b
    R = cholesky(N).U
    w = forwardsub(R', z)                   # solve R'w = z
    x = backsub(R, w)                       # solve Rx = w
    return x
end
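A minimal usage sketch, not part of the original listing, assuming that forwardsub and backsub from Chapter 2 are in scope along with LinearAlgebra; on a well-conditioned problem the result should agree closely with the backslash solution.

using LinearAlgebra
A = rand(20, 4);  b = rand(20)       # hypothetical small test problem
x = lsnormal(A, b)
@show norm(x - A \ b);               # expected to be very small here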
Solution of least squares by QR factorization
lsqrfact.jl
"""
    lsqrfact(A, b)

Solve a linear least-squares problem by QR factorization. Returns
the minimizer of ||b-Ax||.
"""
function lsqrfact(A, b)
    Q, R = qr(A)
    c = Q' * b
    x = backsub(R, c)
    return x
end
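A similar hedged sketch for lsqrfact, again assuming backsub from Chapter 2 and LinearAlgebra are available:

A = rand(20, 4);  b = rand(20)       # hypothetical small test problem
@show norm(lsqrfact(A, b) - A \ b);  # expected to be tiny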
QR factorization by Householder reflections
qrfact.jl
"""
    qrfact(A)

QR factorization by Householder reflections. Returns Q and R.
"""
function qrfact(A)
    m, n = size(A)
    Qt = diagm(ones(m))
    R = float(copy(A))
    for k in 1:n
        z = R[k:m, k]
        w = [-sign(z[1]) * norm(z) - z[1]; -z[2:end]]    # sign choice avoids cancellation
        nrmw = norm(w)
        if nrmw < eps()
            continue    # already in place; skip this iteration
        end
        v = w / nrmw
        # Apply the reflection to each relevant column of R and Q
        for j in k:n
            R[k:m, j] -= v * (2 * (v' * R[k:m, j]))
        end
        for j in 1:m
            Qt[k:m, j] -= v * (2 * (v' * Qt[k:m, j]))
        end
    end
    return Qt', triu(R)
end
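As a quick hedged check of the listing (not part of the original), Q should be orthogonal and the product QR should reproduce A for a small random matrix:

using LinearAlgebra
A = rand(float(1:9), 6, 4)
Q, R = qrfact(A)
@show opnorm(Q' * Q - I);    # orthogonality of Q
@show opnorm(Q * R - A);     # reconstruction of A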

Examples

3.1 Fitting functions to data

Example 3.1.1

Here are 5-year averages of the worldwide temperature anomaly as compared to the 1951–1980 average (source: NASA).

year = 1955:5:2000
temp = [ -0.0480, -0.0180, -0.0360, -0.0120, -0.0040,
       0.1180, 0.2100, 0.3320, 0.3340, 0.4560 ]
    
scatter(year, temp, label="data",
    xlabel="year", ylabel="anomaly (degrees C)", 
    legend=:bottomright)

A polynomial interpolant can be used to fit the data. Here we build one using a Vandermonde matrix. First, though, we express time as decades since 1950, as it improves the condition number of the matrix.

t = @. (year - 1950) / 10
n = length(t)
V = [ t[i]^j for i in 1:n, j in 0:n-1 ]
c = V \ temp
10-element Vector{Float64}:
 -14.114000001832462
  76.36173810552113
 -165.45597224550528
  191.96056669514388
 -133.27347224319684
  58.015577787494486
 -15.962888891734785
   2.6948063497166928
  -0.2546666667177082
   0.010311111113288083
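To see why rescaling the years matters, here is a quick check that is not in the original demo (V_raw is a name introduced only for this illustration, and cond requires LinearAlgebra): the Vandermonde matrix built from the raw years is far worse conditioned than the one built from decades since 1950.

V_raw = [ Float64(year[i])^j for i in 1:n, j in 0:n-1 ]
@show cond(V_raw);    # enormously large
@show cond(V);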

The coefficients in vector c are used to create a polynomial. Then we create a function that evaluates the polynomial after changing the time variable as we did for the Vandermonde matrix.

using Polynomials, Plots
p = Polynomial(c)
f = yr -> p((yr - 1950) / 10)
plot!(f, 1955, 2000, label="interpolant")

As you can see, the interpolant does represent the data, in a sense. However, it’s a crazy-looking curve for the application. Trying too hard to reproduce all the data exactly is known as overfitting.

Example 3.1.2

Here are the 5-year temperature averages again.

year = 1955:5:2000
t = @. (year - 1950) / 10
temp = [ -0.0480, -0.0180, -0.0360, -0.0120, -0.0040,
          0.1180, 0.2100, 0.3320, 0.3340, 0.4560 ]
10-element Vector{Float64}:
 -0.048
 -0.018
 -0.036
 -0.012
 -0.004
  0.118
  0.21
  0.332
  0.334
  0.456

The standard best-fit line results from using a linear polynomial that meets the least-squares criterion.

V = [ t.^0 t ]    # Vandermonde-ish matrix
@show size(V)
c = V \ temp
p = Polynomial(c)
f = yr -> p((yr - 1950) / 10)
scatter(year, temp, label="data",
    xlabel="year", ylabel="anomaly (degrees C)", leg=:bottomright)
plot!(f, 1955, 2000, label="linear fit")

If we use a global cubic polynomial, the points are fit more closely.

V = [ t[i]^j for i in 1:length(t), j in 0:3 ]   
@show size(V);
size(V) = (10, 4)

Now we solve the new least-squares problem to redefine the fitting polynomial.

p = Polynomial( V \ temp )
plot!(f, 1955, 2000, label="cubic fit")   # f refers to the global p, so it now evaluates the cubic

If we were to continue increasing the degree of the polynomial, the residual at the data points would get smaller, but overfitting would increase.
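A quick numerical check of this claim, not in the original demo (assuming LinearAlgebra is loaded for norm): the residual norm of the least-squares fit shrinks as the degree grows, reaching roughly machine precision at degree 9, where the fit becomes interpolation.

for deg in (1, 3, 5, 7, 9)
    V = [ t[i]^j for i in 1:length(t), j in 0:deg ]
    resid = norm(V * (V \ temp) - temp)   # least-squares residual norm
    println("degree $deg: residual norm = $resid")
end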

Example 3.1.3
The sequence $p_k = \sqrt{6\sum_{i=1}^k i^{-2}}$ converges to $\pi$, because $\sum_{k=1}^{\infty} k^{-2} = \pi^2/6$. We compute its first 100 terms below.

a = [1/k^2 for k=1:100]
s = cumsum(a)        # cumulative summation
p = @. sqrt(6*s)

scatter(1:100, p;
    title="Sequence convergence",
    xlabel=L"k",  ylabel=L"p_k")

This graph suggests that maybe $p_k \to \pi$, but it’s far from clear how close the sequence gets. It’s more informative to plot the sequence of errors, $\epsilon_k = |\pi - p_k|$. By plotting the error sequence on a log-log scale, we can see a nearly linear relationship.

ϵ = @. abs(π - p)    # error sequence
scatter(1:100, ϵ;
    title="Convergence of errors",
    xaxis=(:log10,L"k"),  yaxis=(:log10,"error"))

The straight line on the log-log scale suggests a power-law relationship where $\epsilon_k \approx a k^b$, or $\log \epsilon_k \approx b (\log k) + \log a$.

k = 1:100
V = [ k.^0 log.(k) ]     # fitting matrix
c = V \ log.(ϵ)          # coefficients of linear fit
2-element Vector{Float64}:
 -0.18237524972829994
 -0.9674103233127929

In terms of the parameters $a$ and $b$ used above, we have

a, b = exp(c[1]), c[2];
@show b;
b = -0.9674103233127929

It’s tempting to conjecture that the slope $b \to -1$ asymptotically. Here is how the numerical fit compares to the original convergence curve.

plot!(k, a * k.^b, l=:dash, label="power-law fit")
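To probe the conjecture a little further, here is a hedged sketch that is not in the original demo (ktail, Vtail, ctail are names introduced only for this illustration): refitting over the tail of the sequence, where the asymptotic behavior dominates, should give a slope even closer to -1.

ktail = 50:100
Vtail = [ ktail.^0 log.(ktail) ]    # same fitting matrix, tail only
ctail = Vtail \ log.(ϵ[ktail])
@show ctail[2];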

3.2 The normal equations

Example 3.2.1

Because the functions $\sin^2(t)$, $\cos^2(t)$, and 1 are linearly dependent (recall the identity $\sin^2(t) + \cos^2(t) = 1$), we should find that the following matrix is somewhat ill-conditioned.

t = range(0, 3, 400)
f = [ x -> sin(x)^2, x -> cos((1 + 1e-7) * x)^2, x -> 1. ]
A = [ f(t) for t in t, f in f ]
@show κ = cond(A);
κ = cond(A) = 1.8253225426741675e7

Now we set up an artificial linear least-squares problem with a known exact solution that actually makes the residual zero.

x = [1., 2, 1]
b = A * x;

Using backslash to find the least-squares solution, we get a relative error that is well below κ times machine epsilon.

x_BS = A \ b
@show observed_error = norm(x_BS - x) / norm(x);
@show error_bound = κ * eps();
observed_error = norm(x_BS - x) / norm(x) = 1.0163949045357309e-10

error_bound = κ * eps() = 4.053030228488391e-9

If we formulate and solve via the normal equations, we get a much larger relative error. With $\kappa^2 \approx 10^{14}$, we may not be left with more than about 2 accurate digits.

N = A' * A
x_NE = N \ (A'*b)
@show observed_err = norm(x_NE - x) / norm(x);
@show digits = -log10(observed_err);
observed_err = norm(x_NE - x) / norm(x) = 0.021745909192780664
digits = -(log10(observed_err)) = 1.6626224298403076
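The claim about 2 digits comes from the heuristic error bound of $\kappa^2$ times machine epsilon for the normal equations; a quick arithmetic check, not in the original demo:

@show κ^2 * eps();    # roughly 7e-2, i.e., only 1–2 accurate digits expected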

3.3 The QR factorization

Example 3.3.1

Julia provides access to both the thin and full forms of the QR factorization.

A = rand(1.:9., 6, 4)
@show m,n = size(A);
(m, n) = size(A) = (6, 4)

Here is a standard call:

Q,R = qr(A);
Q
6×6 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}
R
4×4 Matrix{Float64}:
 -11.3578  -11.8861   -12.2383   -14.6155
   0.0      -6.61218   -4.92045   -3.06693
   0.0       0.0        5.10039   -0.384062
   0.0       0.0        0.0        2.6142

If you look carefully, you see that we seemingly got a full $\mathbf{Q}$ but a thin $\mathbf{R}$. However, the $\mathbf{Q}$ above is not a standard matrix type. If you convert it to a true matrix, then it reverts to the thin form.

Q̂ = Matrix(Q)
6×4 Matrix{Float64}:
 -0.35218    0.179373   0.70044     0.257
 -0.17609   -0.43964   -0.0623996   0.0206751
 -0.704361   0.20751   -0.509591   -0.326627
 -0.440225  -0.267301   0.450389   -0.413481
 -0.35218    0.330609  -0.133977    0.694365
 -0.17609   -0.742112  -0.158137    0.416808

We can test that $\mathbf{Q}$ is an orthogonal matrix:

opnorm(Q' * Q - I)
5.649139754096605e-16

The thin $\hat{\mathbf{Q}}$ cannot be an orthogonal matrix, because it is not square, but it is still ONC:

Q̂' * Q̂ - I
4×4 Matrix{Float64}:
 -2.22045e-16   2.24598e-17  -8.16179e-17   2.73695e-17
  2.24598e-17   4.44089e-16   3.7505e-17   -1.96721e-17
 -8.16179e-17   3.7505e-17    4.44089e-16   1.04171e-16
  2.73695e-17  -1.96721e-17   1.04171e-16  -2.22045e-16
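As a hedged follow-up not in the original demo, either form reproduces the original matrix: the thin factorization gives $\hat{\mathbf{Q}}\mathbf{R} = \mathbf{A}$, while the full form pads $\mathbf{R}$ with rows of zeros.

@show opnorm(Q̂ * R - A);
@show opnorm(Q * [R; zeros(m - n, n)] - A);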
Example 3.3.2

We’ll repeat the experiment of Example 3.2.1, which exposed instability in the normal equations.

t = range(0, 3, 400)
f = [ x -> sin(x)^2, x -> cos((1 + 1e-7) * x)^2, x -> 1. ]
A = [ f(t) for t in t, f in f ]
x = [1., 2, 1]
b = A * x;

The error in the solution by Function 3.3.2 is similar to the bound predicted by the condition number.

observed_error = norm(FNC.lsqrfact(A, b) - x) / norm(x);
@show observed_error;
@show error_bound = cond(A) * eps();
observed_error = 4.665273501889628e-9

error_bound = cond(A) * eps() = 4.053030228488391e-9
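For contrast, a hedged addition not in the original demo: solving the same problem by the normal equations with FNC.lsnormal (the normal-equations function listed above) shows the much larger error predicted by $\kappa^2$.

@show norm(FNC.lsnormal(A, b) - x) / norm(x);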

3.4 Computing QR factorizations

Example 3.4.1

We will use Householder reflections to produce a QR factorization of a random matrix.

A = rand(float(1:9), 6, 4)
m,n = size(A)
(6, 4)

Our first step is to introduce zeros below the diagonal in column 1 by using (3.4.4) and (3.4.1).

z = A[:, 1];
v = normalize(z - norm(z) * [1; zeros(m-1)])
P₁ = I - 2v * v'   # reflector
6×6 Matrix{Float64}:
 0.555136    0.475831    0.0793052   0.0793052   0.475831    0.475831
 0.475831    0.491046   -0.0848256  -0.0848256  -0.508954   -0.508954
 0.0793052  -0.0848256   0.985862   -0.0141376  -0.0848256  -0.0848256
 0.0793052  -0.0848256  -0.0141376   0.985862   -0.0848256  -0.0848256
 0.475831   -0.508954   -0.0848256  -0.0848256   0.491046   -0.508954
 0.475831   -0.508954   -0.0848256  -0.0848256  -0.508954    0.491046

We check that this reflector introduces zeros as it should:

P₁ * z
6-element Vector{Float64}:
 12.60952021291849
  1.5543122344752192e-15
  3.885780586188048e-16
  3.885780586188048e-16
  1.1102230246251565e-15
  1.1102230246251565e-15

Now we replace $\mathbf{A}$ by $\mathbf{P}_1\mathbf{A}$.

A = P₁ * A
6×4 Matrix{Float64}:
 12.6095       15.2266    6.1858   12.8474
  1.55431e-15  -2.72963  -1.54679  -4.25448
  3.88578e-16   5.71173   1.07554   4.95759
  3.88578e-16   7.71173   8.07554   3.95759
  1.11022e-15   0.270365 -2.54679   0.745523
  1.11022e-15  -0.729635 -2.54679   1.74552

We are set to put zeros into column 2. We must not use row 1 in any way, lest it destroy the zeros we just introduced. So we leave it out of the next reflector.

z = A[2:m, 2]
v = normalize(z - norm(z) * [1; zeros(m-2)])
P₂ = I - 2v * v'
5×5 Matrix{Float64}:
 -0.272758    0.570742    0.770591    0.0270161  -0.0729085
  0.570742    0.744062   -0.345556   -0.0121148   0.0326943
  0.770591   -0.345556    0.533445   -0.0163569   0.0441425
  0.0270161  -0.0121148  -0.0163569   0.999427    0.00154759
 -0.0729085   0.0326943   0.0441425   0.00154759  0.995824

We now apply this reflector to rows 2 and below only.

A[2:m, :] = P₂ * A[2:m, :]
A
6×4 Matrix{Float64}:
 12.6095       15.2266        6.1858    12.8474
  4.63114e-17  10.0075        7.37557    6.93251
  1.06481e-15   1.65057e-16  -2.92551   -0.0589864
  1.3016e-15    3.06531e-16   2.67349   -2.81557
  1.14223e-15   2.79111e-18  -2.73618    0.508063
  1.02384e-15  -7.37155e-17  -2.03568    2.38636

We need to iterate the process for the last two columns.

for j in 3:n
    z = A[j:m, j]
    v = normalize(z - norm(z) * [1; zeros(m-j)])
    P = I - 2v * v'
    A[j:m, :] = P * A[j:m, :]
end

We have now reduced the original to an upper triangular matrix using four orthogonal Householder reflections:

R = triu(A)
6×4 Matrix{Float64}:
 12.6095  15.2266  6.1858   12.8474
  0.0     10.0075  7.37557   6.93251
  0.0      0.0     5.22847  -2.60169
  0.0      0.0     0.0       2.66739
  0.0      0.0     0.0       0.0
  0.0      0.0     0.0       0.0