Functions¶
Solution of least squares by the normal equations
""" lsnormal(A, b) Solve a linear least-squares problem by the normal equations. Returns the minimizer of ||b-Ax||. """ function lsnormal(A, b) N = A' * A z = A' * b R = cholesky(N).U w = forwardsub(R', z) # solve R'z=c x = backsub(R, w) # solve Rx=z return x end
About the code
The syntax cholesky(N).U is a field reference to extract the matrix we want from the structure returned by cholesky.
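For illustration, here is a small, hedged example (not part of the book's function library) of the object returned by cholesky and the .U field used above; the matrix N below is an arbitrary symmetric positive definite example.
using LinearAlgebra
N = [4.0 2.0; 2.0 3.0]    # small symmetric positive definite example
F = cholesky(N)           # factorization object with fields L and U
R = F.U                   # upper triangular factor, so that R' * R ≈ N
@show norm(R' * R - N);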
Solution of least squares by QR factorization
""" lsqrfact(A, b) Solve a linear least-squares problem by QR factorization. Returns the minimizer of ||b-Ax||. """ function lsqrfact(A, b) Q, R = qr(A) c = Q' * b x = backsub(R, c) return x end
QR factorization by Householder reflections
""" qrfact(A) QR factorization by Householder reflections. Returns Q and R. """ function qrfact(A) m, n = size(A) Qt = diagm(ones(m)) R = float(copy(A)) for k in 1:n z = R[k:m, k] w = [-sign(z[1]) * norm(z) - z[1]; -z[2:end]] nrmw = norm(w) if nrmw < eps() continue # already in place; skip this iteration end v = w / nrmw # Apply the reflection to each relevant column of R and Q for j in k:n R[k:m, j] -= v * (2 * (v' * R[k:m, j])) end for j in 1:m Qt[k:m, j] -= v * (2 * (v' * Qt[k:m, j])) end end return Qt', triu(R) end
Examples¶
3.1 Fitting functions to data¶
Example 3.1.1
Here are 5-year averages of the worldwide temperature anomaly as compared to the 1951–1980 average (source: NASA).
year = 1955:5:2000
temp = [ -0.0480, -0.0180, -0.0360, -0.0120, -0.0040,
0.1180, 0.2100, 0.3320, 0.3340, 0.4560 ]
scatter(year, temp, label="data",
xlabel="year", ylabel="anomaly (degrees C)",
legend=:bottomright)
A polynomial interpolant can be used to fit the data. Here we build one using a Vandermonde matrix. First, though, we express time as decades since 1950, as it improves the condition number of the matrix.
t = @. (year - 1950) / 10
n = length(t)
V = [ t[i]^j for i in 1:n, j in 0:n-1 ]
c = V \ temp
10-element Vector{Float64}:
-14.114000001832462
76.36173810552113
-165.45597224550528
191.96056669514388
-133.27347224319684
58.015577787494486
-15.962888891734785
2.6948063497166928
-0.2546666667177082
0.010311111113288083
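As an aside, here is a hedged check (not in the original example) of the claim that rescaling time improves the conditioning; the raw years are converted to floating point to avoid integer overflow in the high powers, and V_raw is a name introduced here for illustration.
using LinearAlgebra
V_raw = [ float(year[i])^j for i in 1:n, j in 0:n-1 ]    # Vandermonde matrix in raw years
@show cond(V_raw);
@show cond(V);    # the rescaled matrix is far better conditioned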
The coefficients in vector c are used to create a polynomial. Then we create a function that evaluates the polynomial after changing the time variable as we did for the Vandermonde matrix.
Tip
If you plot a function, then the points are chosen automatically to make a smooth curve.
using Polynomials, Plots
p = Polynomial(c)
f = yr -> p((yr - 1950) / 10)
plot!(f, 1955, 2000, label="interpolant")
As you can see, the interpolant does represent the data, in a sense. However, it's a crazy-looking curve for the application. Trying too hard to reproduce all the data exactly is known as overfitting.
Example 3.1.2
Here are the 5-year temperature averages again.
year = 1955:5:2000
t = @. (year - 1950) / 10
temp = [ -0.0480, -0.0180, -0.0360, -0.0120, -0.0040,
0.1180, 0.2100, 0.3320, 0.3340, 0.4560 ]
10-element Vector{Float64}:
-0.048
-0.018
-0.036
-0.012
-0.004
0.118
0.21
0.332
0.334
0.456
The standard best-fit line results from using a linear polynomial that meets the least-squares criterion.
Tip
Backslash solves overdetermined linear systems in a least-squares sense.
V = [ t.^0 t ] # Vandermonde-ish matrix
@show size(V)
c = V \ temp
p = Polynomial(c)
f = yr -> p((yr - 1950) / 10)
scatter(year, temp, label="data",
xlabel="year", ylabel="anomaly (degrees C)", leg=:bottomright)
plot!(f, 1955, 2000, label="linear fit")
If we use a global cubic polynomial, the points are fit more closely.
V = [ t[i]^j for i in 1:length(t), j in 0:3 ]
@show size(V);
size(V) = (10, 4)
Now we solve the new least-squares problem to redefine the fitting polynomial.
Tip
The definition of f above is in terms of p. When p is changed, then f calls the new version.
p = Polynomial( V \ temp )
plot!(f, 1955, 2000, label="cubic fit")
If we were to continue increasing the degree of the polynomial, the residual at the data points would get smaller, but overfitting would increase.
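To illustrate that trade-off concretely, here is a hedged sketch (not in the original example) that fits polynomials of increasing degree and reports the residual norm at the data points; the names Vd and cd are introduced here for illustration.
using LinearAlgebra
for d in (1, 3, 7)
    Vd = [ t[i]^j for i in 1:length(t), j in 0:d ]    # fitting matrix of degree d
    cd = Vd \ temp                                    # least-squares coefficients
    @show (d, norm(Vd * cd - temp));                  # residual shrinks as d grows
end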
Example 3.1.3
a = [1/k^2 for k=1:100]
s = cumsum(a) # cumulative summation
p = @. sqrt(6*s)
using LaTeXStrings
scatter(1:100, p;
title="Sequence convergence",
xlabel=L"k", ylabel=L"p_k")
This graph suggests that maybe $p_k \to \pi$, but it’s far from clear how close the sequence gets. It’s more informative to plot the sequence of errors, $\epsilon_k = |\pi - p_k|$. By plotting the error sequence on a log-log scale, we can see a nearly linear relationship.
ϵ = @. abs(π - p) # error sequence
scatter(1:100, ϵ;
title="Convergence of errors",
xaxis=(:log10,L"k"), yaxis=(:log10,"error"))
The straight line on the log-log scale suggests a power-law relationship where $\epsilon_k \approx a k^b$, or $\log \epsilon_k \approx b (\log k) + \log a$.
k = 1:100
V = [ k.^0 log.(k) ] # fitting matrix
c = V \ log.(ϵ) # coefficients of linear fit
2-element Vector{Float64}:
-0.18237524972829994
-0.9674103233127929
In terms of the parameters $a$ and $b$ used above, we have
a, b = exp(c[1]), c[2];
@show b;
b = -0.9674103233127929
It’s tempting to conjecture that the slope approaches $b = -1$ asymptotically. Here is how the numerical fit compares to the original convergence curve.
plot!(k, a * k.^b, l=:dash, label="power-law fit")
3.2 The normal equations¶
Example 3.2.1
Because the functions $\sin^2(t)$, $\cos^2(t)$, and $1$ are linearly dependent, we should find that the following matrix is somewhat ill-conditioned.
Tip
The local variable scoping rule for loops applies to comprehensions as well.
t = range(0, 3, 400)
f = [ x -> sin(x)^2, x -> cos((1 + 1e-7) * x)^2, x -> 1. ]
A = [ f(t) for t in t, f in f ]
@show κ = cond(A);
κ = cond(A) = 1.8253225426741675e7
Now we set up an artificial linear least-squares problem with a known exact solution that actually makes the residual zero.
x = [1., 2, 1]
b = A * x;
Using backslash to find the least-squares solution, we get a relative error that is well below κ times machine epsilon.
x_BS = A \ b
@show observed_error = norm(x_BS - x) / norm(x);
@show error_bound = κ * eps();
observed_error = norm(x_BS - x) / norm(x) = 1.0163949045357309e-10
error_bound = κ * eps() = 4.053030228488391e-9
If we formulate and solve via the normal equations, we get a much larger relative error. With $\kappa^2 \approx 10^{14}$, we may not be left with more than about 2 accurate digits.
N = A' * A
x_NE = N \ (A'*b)
@show observed_err = norm(x_NE - x) / norm(x);
@show digits = -log10(observed_err);
observed_err = norm(x_NE - x) / norm(x) = 0.021745909192780664
digits = -(log10(observed_err)) = 1.6626224298403076
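As a rough, hedged check of that heuristic (not part of the original example), the bound based on the squared condition number is consistent with only a couple of accurate digits:
@show error_bound_NE = κ^2 * eps();    # error_bound_NE is a name introduced here
@show -log10(error_bound_NE);          # roughly the number of accurate digits to expect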
3.3 The QR factorization¶
Example 3.3.1
Julia provides access to both the thin and full forms of the QR factorization.
A = rand(1.:9., 6, 4)
@show m,n = size(A);
(m, n) = size(A) = (6, 4)
Here is a standard call:
Q,R = qr(A);
Q
6×6 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}
R
4×4 Matrix{Float64}:
-9.32738 -11.3644 -14.3663 -7.39758
0.0 -8.35767 -4.27579 -2.74371
0.0 0.0 -4.28098 0.936095
0.0 0.0 0.0 -4.56855
If you look carefully, you see that we seemingly got a full $\mathbf{Q}$ but a thin $\mathbf{R}$. However, the $\mathbf{Q}$ above is not a standard matrix type. If you convert it to a true matrix, then it reverts to the thin form.
Tip
To enter the accented character Q̂, type Q\hat followed by Tab.
Q̂ = Matrix(Q)
6×4 Matrix{Float64}:
-0.536056 -0.108648 0.50589 0.599135
-0.321634 -0.160909 0.539293 -0.585387
-0.214423 -0.545992 -0.136649 -0.00955763
-0.321634 -0.28056 -0.275566 -0.461606
-0.643268 0.635386 -0.34464 -0.067267
-0.214423 -0.426341 -0.489746 0.284011
We can test that $\mathbf{Q}$ is an orthogonal matrix:
opnorm(Q' * Q - I)
3.8063921299074416e-16
The thin $\hat{\mathbf{Q}}$ cannot be an orthogonal matrix, because it is not square, but it is still ONC:
Q̂' * Q̂ - I
4×4 Matrix{Float64}:
-1.11022e-16 1.46031e-17 2.20411e-16 -4.14866e-17
1.46031e-17 -1.11022e-16 1.73433e-16 -6.49759e-17
2.20411e-16 1.73433e-16 -1.11022e-16 8.18697e-17
-4.14866e-17 -6.49759e-17 8.18697e-17 2.22045e-16
Example 3.3.2
We’ll repeat the experiment of Demo 3.2.1, which exposed instability in the normal equations.
t = range(0, 3, 400)
f = [ x -> sin(x)^2, x -> cos((1 + 1e-7) * x)^2, x -> 1. ]
A = [ f(t) for t in t, f in f ]
x = [1., 2, 1]
b = A * x;
The error in the solution by Function 3.3.2 is similar to the bound predicted by the condition number.
observed_error = norm(FNC.lsqrfact(A, b) - x) / norm(x);
@show observed_error;
@show error_bound = cond(A) * eps();
observed_error = 4.665273501889628e-9
error_bound = cond(A) * eps() = 4.053030228488391e-9
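For contrast (a hedged aside, not in the original example), applying the lsnormal function from the top of this page to the same problem shows the much larger error expected from the normal equations:
@show norm(FNC.lsnormal(A, b) - x) / norm(x);    # roughly on the order of cond(A)^2 * eps()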
3.4 Computing QR factorizations¶
Example 3.4.1
We will use Householder reflections to produce a QR factorization of a random matrix.
Tip
The rand function can select randomly from within the interval $[0,1)$, or from a vector or range that you specify.
A = rand(float(1:9), 6, 4)
m,n = size(A)
(6, 4)
Our first step is to introduce zeros below the diagonal in column 1 by using (3.4.4) and (3.4.1).
Tip
I can stand for an identity matrix of any size, inferred from the context when needed.
z = A[:, 1];
v = normalize(z - norm(z) * [1; zeros(m-1)])
P₁ = I - 2v * v' # reflector
6×6 Matrix{Float64}:
0.2566 0.44905 0.5132 0.5132 0.06415 0.44905
0.44905 0.728752 -0.309998 -0.309998 -0.0387498 -0.271248
0.5132 -0.309998 0.645716 -0.354284 -0.0442855 -0.309998
0.5132 -0.309998 -0.354284 0.645716 -0.0442855 -0.309998
0.06415 -0.0387498 -0.0442855 -0.0442855 0.994464 -0.0387498
0.44905 -0.271248 -0.309998 -0.309998 -0.0387498 0.728752
We check that this reflector introduces zeros as it should:
P₁ * z
6-element Vector{Float64}:
15.5884572681199
-1.5543122344752192e-15
-2.55351295663786e-15
-2.55351295663786e-15
-2.0816681711721685e-16
-1.1102230246251565e-15
Now we replace $\mathbf{A}$ by $\mathbf{P}_1\mathbf{A}$.
A = P₁ * A
6×4 Matrix{Float64}:
15.5885 5.3886 10.7772 12.2527
-1.55431e-15 -0.650932 -3.69782 -1.17286
-2.55351e-15 -1.02964 -2.36893 2.37387
-2.55351e-15 -1.02964 1.63107 3.37387
-2.08167e-16 5.6213 5.32888 2.54673
-1.11022e-15 1.34907 4.30218 2.82714
We are set to put zeros into column 2. We must not use row 1 in any way, lest it destroy the zeros we just introduced. So we leave it out of the next reflector.
z = A[2:m, 2]
v = normalize(z - norm(z) * [1; zeros(m-2)])
P₂ = I - 2v * v'
5×5 Matrix{Float64}:
-0.108545 -0.171695 -0.171695 0.937365 0.22496
-0.171695 0.973407 -0.0265925 0.145182 0.0348425
-0.171695 -0.0265925 0.973407 0.145182 0.0348425
0.937365 0.145182 0.145182 0.207382 -0.190222
0.22496 0.0348425 0.0348425 -0.190222 0.954348
We now apply this reflector to rows 2 and below only.
A[2:m, :] = P₂ * A[2:m, :]
A
6×4 Matrix{Float64}:
15.5885 5.3886 10.7772 12.2527
6.00676e-16 5.99691 6.49099 2.16366
-2.21974e-15 -7.34063e-17 -0.79086 2.89064
-2.21974e-15 -7.34063e-17 3.20914 3.89064
-2.03039e-15 -5.54008e-17 -3.28659 -0.274573
-1.54754e-15 1.30103e-16 2.23454 2.15004
We need to iterate the process for the last two columns.
for j in 3:n
z = A[j:m, j]
v = normalize(z - norm(z) * [1; zeros(m-j)])
P = I - 2v * v'
A[j:m, :] = P * A[j:m, :]
end
We have now reduced the original $\mathbf{A}$ to an upper triangular matrix using four orthogonal Householder reflections:
R = triu(A)
6×4 Matrix{Float64}:
15.5885 5.3886 10.7772 12.2527
0.0 5.99691 6.49099 2.16366
0.0 0.0 5.16903 3.07723
0.0 0.0 0.0 4.32685
0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0
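As a final hedged check (not part of the original example), the same loop can be repeated on a fresh matrix B while accumulating the reflections, confirming that the product of the reflectors together with the triangular result reproduces the original matrix:
B = rand(float(1:9), 6, 4)
m, n = size(B)
R = copy(B)
Qt = Matrix{Float64}(I, m, m)
for j in 1:n
    z = R[j:m, j]
    v = normalize(z - norm(z) * [1; zeros(m-j)])
    P = I - 2v * v'
    R[j:m, :] = P * R[j:m, :]
    Qt[j:m, :] = P * Qt[j:m, :]    # accumulate the product of reflectors
end
@show opnorm(Qt' * triu(R) - B);   # should be near machine epsilon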