Power iteration finds only the dominant eigenvalue. We next show that it can be adapted to find any eigenvalue, provided you start with a reasonably good estimate of it. Some simple linear algebra is all that is needed.
Let $\mathbf{A}$ be an $n\times n$ matrix with eigenvalues $\lambda_1,\ldots,\lambda_n$ (possibly with repeats), and let $s$ be a complex scalar. Then:
1. The eigenvalues of the matrix $\mathbf{A}-s\mathbf{I}$ are $\lambda_1-s,\ldots,\lambda_n-s$.
2. If $s$ is not an eigenvalue of $\mathbf{A}$, the eigenvalues of the matrix $(\mathbf{A}-s\mathbf{I})^{-1}$ are $(\lambda_1-s)^{-1},\ldots,(\lambda_n-s)^{-1}$.
3. The eigenvectors associated with the eigenvalues in the first two parts are the same as those of $\mathbf{A}$.

The equation $\mathbf{A}\mathbf{v}=\lambda \mathbf{v}$ implies that $(\mathbf{A}-s\mathbf{I})\mathbf{v} = \mathbf{A}\mathbf{v} - s\mathbf{I}\mathbf{v} = \lambda\mathbf{v} - s\mathbf{v} = (\lambda-s)\mathbf{v}$. That proves the first part of the theorem. For the second part, we note that by assumption, $\mathbf{A}-s\mathbf{I}$ is nonsingular, so $(\mathbf{A}-s\mathbf{I})\mathbf{v} = (\lambda-s)\mathbf{v}$ implies that $\mathbf{v} = (\lambda-s)(\mathbf{A}-s\mathbf{I})^{-1}\mathbf{v}$, or $(\lambda-s)^{-1}\mathbf{v} = (\mathbf{A}-s\mathbf{I})^{-1}\mathbf{v}$. The discussion above also proves the third part of the theorem.
Consider first part 2 of the theorem with $s=0$, and suppose that $\mathbf{A}$ has a smallest eigenvalue,

$$|\lambda_n| \ge |\lambda_{n-1}| \ge \cdots > |\lambda_1|.$$

Then clearly

$$|\lambda_1^{-1}| > |\lambda_{2}^{-1}| \ge \cdots \ge |\lambda_n^{-1}|,$$

and $\mathbf{A}^{-1}$ has a dominant eigenvalue. Hence power iteration on $\mathbf{A}^{-1}$ can be used to find the eigenvalue of $\mathbf{A}$ closest to zero. For a nonzero value of $s$, we suppose there is an ordering

$$|\lambda_n-s| \ge \cdots \ge |\lambda_2-s| > |\lambda_1-s|.$$

Then it follows that

$$|\lambda_1-s|^{-1} > |\lambda_{2}-s|^{-1} \ge \cdots \ge |\lambda_n-s|^{-1},$$

and power iteration on the matrix $(\mathbf{A}-s\mathbf{I})^{-1}$ converges to $(\lambda_1-s)^{-1}$, which is easily solved for $\lambda_1$ itself.
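As a quick numerical check of this reasoning (a minimal sketch, reusing the triangular matrix that appears in Example 8.3.1 below; forming the inverse explicitly is acceptable only in a tiny demonstration like this):

using LinearAlgebra
λ = [1.0, -0.75, 0.6, -0.4, 0.0]
A = triu(ones(5, 5), 1) + diagm(λ)     # triangular, so these are exactly the eigenvalues
s = 0.7
μ = eigvals(inv(A - s * I))            # eigenvalues of the shifted-and-inverted matrix
@show 1 ./ (λ .- s)                    # the theorem predicts these values for μ
@show maximum(abs, μ)                  # ≈ 1/|0.6 - 0.7| = 10, so λ = 0.6 becomes dominant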
8.3.1 Algorithm

A literal application of Algorithm 8.2.1 would include the step

$$\mathbf{y}_k = (\mathbf{A}-s\mathbf{I})^{-1} \mathbf{x}_k.$$

As always, we do not want to explicitly find the inverse of a matrix. Instead, we should write this step as the solution of a linear system.
Algorithm 8.3.1 (Inverse iteration)

Given matrix $\mathbf{A}$ and shift $s$:

Choose $\mathbf{x}_1$.
For $k=1,2,\ldots$,
  a. Solve for $\mathbf{y}_k$ in $(\mathbf{A}-s\mathbf{I})\,\mathbf{y}_k = \mathbf{x}_k$.
  b. Find $m$ such that $|y_{k,m}| = \|\mathbf{y}_k\|_\infty$.
  c. Set $\alpha_k = \dfrac{1}{y_{k,m}}$ and $\beta_k = s + \dfrac{x_{k,m}}{y_{k,m}}$.
  d. Set $\mathbf{x}_{k+1} = \alpha_k \mathbf{y}_k$.
Return $\beta_1,\beta_2,\ldots$ as eigenvalue estimates, and $\mathbf{x}_1,\mathbf{x}_2,\ldots$ as associated eigenvector estimates.
In Algorithm 8.2.1, we used $y_{k,m}/x_{k,m}$ as an estimate of the dominant eigenvalue of $\mathbf{A}$. Here, that ratio is an estimate of $(\lambda_1-s)^{-1}$, and solving for $\lambda_1$ gives the $\beta_k$ in Algorithm 8.3.1.
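Spelling that step out (this display is just an expansion of the sentence above, not a numbered equation in the text):

$$\frac{y_{k,m}}{x_{k,m}} \approx \frac{1}{\lambda_1 - s}
\quad\Longrightarrow\quad
\lambda_1 \approx s + \frac{x_{k,m}}{y_{k,m}} = \beta_k.$$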
Each pass of inverse iteration requires the solution of a linear system of equations with the matrix $\mathbf{B}=\mathbf{A}-s\mathbf{I}$. This solution might use methods we consider later in this chapter. Here, we use (sparse) PLU factorization and hope for the best. Since the matrix $\mathbf{B}$ is constant, the factorization needs to be done only once for all iterations. The details are in Function 8.3.2.
Function 8.3.2 (Inverse iteration): Julia
"""
inviter(A, s, numiter)
Perform `numiter` inverse iterations with the matrix `A` and shift
`s`, starting from a random vector. Returns a vector of
eigenvalue estimates and the final eigenvector approximation.
"""
function inviter(A, s, numiter)
n = size(A, 1)
x = normalize(randn(n), Inf)
β = zeros(numiter)
fact = lu(A - s * I)
for k in 1:numiter
y = fact \ x
normy, m = findmax(abs.(y))
β[k] = x[m] / y[m] + s
x = y / y[m]
end
return β, x
end
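A minimal usage sketch (assuming the function above is in scope; the matrix is the one used in Example 8.3.1 below):

using LinearAlgebra
λ = [1, -0.75, 0.6, -0.4, 0]
A = triu(ones(5, 5), 1) + diagm(λ)
β, x = inviter(A, 0.7, 20)    # shift 0.7 steers the iteration toward the eigenvalue 0.6
β[end]                        # expected to be very close to 0.6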
Function 8.3.2 (Inverse iteration): MATLAB
function [beta, x] = inviter(A, s, numiter)
% INVITER Shifted inverse iteration for the closest eigenvalue.
% Input:
%   A        square matrix
%   s        value close to targeted eigenvalue (complex scalar)
%   numiter  number of iterations
% Output:
%   beta     sequence of eigenvalue approximations (vector)
%   x        final eigenvector approximation
    n = length(A);
    x = randn(n, 1);
    x = x / norm(x, inf);          % random start vector, unit inf-norm
    B = A - s*eye(n);
    [L, U] = lu(B);                % factor the shifted matrix once
    for k = 1:numiter
        y = U \ (L \ x);           % solve B*y = x using the stored factors
        [normy, m] = max(abs(y));  % largest-magnitude component
        beta(k) = (x(m) / y(m)) + s;
        x = y / y(m);
    end
end
Function 8.3.2 (Inverse iteration): Python
import numpy as np
from scipy.linalg import lu

def inviter(A, s, numiter):
    """
    inviter(A, s, numiter)

    Perform numiter inverse iterations with the matrix A and shift s, starting
    from a random vector, and return a vector of eigenvalue estimates and the final
    eigenvector approximation.
    """
    n = A.shape[0]
    x = np.random.randn(n)
    x = x / np.linalg.norm(x, np.inf)                 # random start vector, unit inf-norm
    gamma = np.zeros(numiter)
    PL, U = lu(A - s * np.eye(n), permute_l=True)     # factor the shifted matrix once
    for k in range(numiter):
        y = np.linalg.solve(U, np.linalg.solve(PL, x))  # solve (A - sI) y = x
        m = np.argmax(abs(y))                         # index of largest-magnitude component
        gamma[k] = x[m] / y[m] + s                    # eigenvalue estimate
        x = y / y[m]
    return gamma, x
8.3.2 Convergence

The convergence is linear, at a rate found by reinterpreting (8.2.10) with $(\mathbf{A}-s\mathbf{I})^{-1}$ in place of $\mathbf{A}$:

$$\frac{\beta_{k+1} - \lambda_1}{\beta_{k} - \lambda_1} \rightarrow
\frac{\lambda_1 - s}{\lambda_2 - s} \quad \text{as } k\rightarrow \infty,$$

with the eigenvalues ordered as in (8.3.3). Thus, the convergence is best when the shift $s$ is close to the target eigenvalue $\lambda_1$, specifically when it is much closer to that eigenvalue than to any other.
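To spell out the reinterpretation: power iteration on a matrix with dominant eigenvalue $\mu_1$ and next-largest eigenvalue $\mu_2$ converges linearly at rate $\mu_2/\mu_1$. Applied to $(\mathbf{A}-s\mathbf{I})^{-1}$, this gives

$$\mu_1 = (\lambda_1-s)^{-1}, \quad \mu_2 = (\lambda_2-s)^{-1}
\quad\Longrightarrow\quad
\frac{\mu_2}{\mu_1} = \frac{(\lambda_2-s)^{-1}}{(\lambda_1-s)^{-1}} = \frac{\lambda_1-s}{\lambda_2-s}.$$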
Example 8.3.1 (Convergence of inverse iteration)
Example 8.3.1 (Julia). We set up a $5\times 5$ triangular matrix with prescribed eigenvalues on its diagonal.
λ = [1, -0.75, 0.6, -0.4, 0]
# Make a triangular matrix with eigenvalues on the diagonal.
A = triu(ones(5, 5), 1) + diagm(λ)
5×5 Matrix{Float64}:
1.0 1.0 1.0 1.0 1.0
0.0 -0.75 1.0 1.0 1.0
0.0 0.0 0.6 1.0 1.0
0.0 0.0 0.0 -0.4 1.0
0.0 0.0 0.0 0.0 0.0
We run inverse iteration with the shift $s=0.7$ and take the final estimate as our “exact” answer to observe the convergence.
s = 0.7
β, x = FNC.inviter(A, s, 30)
eigval = β[end]
As expected, the eigenvalue that was found is the one closest to 0.7. The convergence is again linear.
using Plots
err = @. abs(eigval - β)
plot(0:28, err[1:end-1];
m=:o, xlabel=L"k",
yaxis=(L"|\lambda_3-\beta_k|", :log10, [1e-16, 1]),
title="Convergence of inverse iteration")
The observed linear convergence rate is found from the data.
@show observed_rate = err[22] / err[21];
observed_rate = err[22] / err[21] = 0.3332653552331581
We reorder the eigenvalues to enforce (8.3.3).

Tip: The sortperm function returns the index permutation needed to sort the given vector, rather than the sorted vector itself.
λ = λ[sortperm(abs.(λ .- s))]
5-element Vector{Float64}:
0.6
1.0
0.0
-0.4
-0.75
Hence the theoretical convergence rate is
@show theoretical_rate = (λ[1] - s) / (λ[2] - s);
theoretical_rate = (λ[1] - s) / (λ[2] - s) = -0.3333333333333332
Example 8.3.1 (MATLAB). We set up a $5\times 5$ triangular matrix with prescribed eigenvalues on its diagonal.
ev = [1, -0.75, 0.6, -0.4, 0];
A = triu(ones(5, 5), 1) + diag(ev);
We run inverse iteration with the shift $s=0.7$. The result should converge to the eigenvalue closest to 0.7, which we know to be 0.6 here.
s = 0.7;
[beta, x] = inviter(A, s, 30);
format short
beta(1:10)
The convergence is again linear.
err = abs(0.6 - beta);
semilogy(abs(err),'.-')
title('Convergence of inverse iteration')
xlabel('k'), ylabel('|\lambda_j - \beta_k|');
Let’s reorder the eigenvalues to enforce (8.3.3).

Tip: The second output of sort returns the index permutation needed to sort the given vector.
[~, idx] = sort(abs(ev - s));
ev = ev(idx)
Now it is easy to compare the theoretical and observed linear convergence rates.
theoretical_rate = (ev(1) - s) / (ev(2) - s)
observed_rate = err(26) / err(25)
Example 8.3.1 (Python). We set up a $5\times 5$ triangular matrix with prescribed eigenvalues on its diagonal.
ev = array([1, -0.75, 0.6, -0.4, 0])
A = triu(ones([5, 5]), 1) + diag(ev) # triangular matrix, eigs on diagonal
We run inverse iteration with the shift $s=0.7$. Convergence should be to the eigenvalue closest to the shift, which we know to be 0.6 here.
beta, x = FNC.inviter(A, 0.7, 30)
print(beta)
[0.64812339 0.54460204 0.61083493 0.59524782 0.60141657 0.59950405
0.60016195 0.59994554 0.60001809 0.59999396 0.60000201 0.59999933
0.60000022 0.59999993 0.60000002 0.59999999 0.6 0.6
0.6 0.6 0.6 0.6 0.6 0.6
0.6 0.6 0.6 0.6 0.6 0.6 ]
As expected, the eigenvalue that was found is the one closest to 0.7. The convergence is again linear.
err = beta[-1] - beta # last estimate is our best
semilogy(arange(30), abs(err), "-o")
ylim(1e-16, 1)
xlabel("$k$"), ylabel("$|\\lambda_3 - \\beta_k|$")
title("Convergence of inverse iteration");
Let’s reorder the eigenvalues to enforce (8.3.3).

Tip: The argsort function returns the index permutation needed to sort the given vector, rather than the sorted vector itself.
ev = ev[argsort(abs(ev - 0.7))]
print(ev)
Now it is easy to compare the theoretical and observed linear convergence rates.
print(f"theory: {(ev[0] - 0.7) / (ev[1] - 0.7):.5f}")
print(f"observed: {err[21] / err[20]:.5f}")
theory: -0.33333
observed: -0.33327
8.3.3 Dynamic shifting

There is a clear opportunity for positive feedback in Algorithm 8.3.1. The convergence rate of inverse iteration improves as the shift gets closer to the true eigenvalue, and the algorithm itself computes improving eigenvalue estimates! If we update the shift to $s=\beta_k$ after each iteration, the convergence accelerates. You are asked to implement this algorithm in Exercise 6.
Let’s analyze the resulting convergence. If the eigenvalues are ordered by distance to $s$, then the convergence is linear with rate $|\lambda_1-s|/|\lambda_2-s|$. As $s\to\lambda_1$, the change in the denominator is negligible. So if the error $\lambda_1-s$ is $\epsilon$, then the error in the next estimate is reduced by a factor $O(\epsilon)$. That is, $\epsilon$ becomes $O(\epsilon^2)$, which is quadratic convergence.
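Written out with $\epsilon_k = |\lambda_1 - \beta_k|$ denoting the error at step $k$ (a brief restatement of the argument above), taking $s=\beta_k$ gives

$$\epsilon_{k+1} \approx \frac{|\lambda_1 - \beta_k|}{|\lambda_2 - \beta_k|}\,\epsilon_k
= \frac{\epsilon_k}{|\lambda_2 - \beta_k|}\,\epsilon_k
\approx \frac{\epsilon_k^2}{|\lambda_2 - \lambda_1|},$$

since $|\lambda_2-\beta_k| \to |\lambda_2-\lambda_1|$ as the estimates converge.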
Example 8.3.2 (Dynamic shift strategy)
Example 8.3.2 (Julia).

λ = [1, -0.75, 0.6, -0.4, 0]
# Make a triangular matrix with eigenvalues on the diagonal.
A = triu(ones(5, 5), 1) + diagm(λ)
5×5 Matrix{Float64}:
1.0 1.0 1.0 1.0 1.0
0.0 -0.75 1.0 1.0 1.0
0.0 0.0 0.6 1.0 1.0
0.0 0.0 0.0 -0.4 1.0
0.0 0.0 0.0 0.0 0.0
We begin with a shift $s=0.7$, which is closest to the eigenvalue 0.6.
s = 0.7
x = ones(5)
y = (A - s * I) \ x
β = x[1] / y[1] + s
Note that the result is not yet any closer to the targeted 0.6. But we proceed (without being too picky about normalization here).
s = β
x = y / y[1]
y = (A - s * I) \ x
β = x[1] / y[1] + s
Still not much apparent progress. However, in just a few more iterations the results are dramatically better.
for k in 1:4
s = β
x = y / y[1]
y = (A - s * I) \ x
@show β = x[1] / y[1] + s
end
β = x[1] / y[1] + s = 0.5964312884753865
β = x[1] / y[1] + s = 0.5999717091820104
β = x[1] / y[1] + s = 0.5999999978556353
β = x[1] / y[1] + s = 0.6
Example 8.3.2 (MATLAB).

ev = [1, -0.75, 0.6, -0.4, 0];
A = triu(ones(5, 5), 1) + diag(ev);
We begin with a shift $s=0.7$, which is closest to the eigenvalue 0.6.
s = 0.7;
x = ones(5, 1);
y = (A - s * eye(5)) \ x;
beta = x(1) / y(1) + s
Note that the result is not yet any closer to the targeted 0.6. But we proceed (without being too picky about normalization here).
s = beta;
x = y / y(1);
y = (A - s * eye(5)) \ x;
beta = x(1) / y(1) + s
Still not much apparent progress. However, in just a few more iterations the results are dramatically better.
format long
for k = 1:4
s = beta;
x = y / y(1);
y = (A - s * eye(5)) \ x;
beta = x(1) / y(1) + s
end
Example 8.3.2 (Python).

ev = array([1, -0.75, 0.6, -0.4, 0])
A = triu(ones([5, 5]), 1) + diag(ev) # triangular matrix, eigs on diagonal
We begin with a shift $s=0.7$, which is closest to the eigenvalue 0.6.
from numpy.linalg import solve
s = 0.7
x = ones(5)
y = solve(A - s * eye(5), x)
beta = x[0] / y[0] + s
print(f"latest estimate: {beta:.8f}")
latest estimate: 0.70348139
Note that the result is not yet any closer to the targeted 0.6. But we proceed (without being too picky about normalization here).
s = beta
x = y / y[0]
y = solve(A - s * eye(5), x)
beta = x[0] / y[0] + s
print(f"latest estimate: {beta:.8f}")
latest estimate: 0.56127614
Still not much apparent progress. However, in just a few more iterations the results are dramatically better.
for k in range(4):
s = beta
x = y / y[0]
y = solve(A - s * eye(5), x)
beta = x[0] / y[0] + s
print(f"latest estimate: {beta:.12f}")
latest estimate: 0.596431288475
latest estimate: 0.599971709182
latest estimate: 0.599999997856
latest estimate: 0.600000000000
There is a price to pay for this improvement. The matrix of the linear system to be solved, $\mathbf{A}-s\mathbf{I}$, now changes with each iteration. That means that we can no longer do just one LU factorization for the entire iteration. The speedup in convergence usually makes this tradeoff worthwhile, however.
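A minimal sketch of how the loop changes (this is essentially what the dynamic-shifting exercise below asks you to write; the name inviter_dynamic is ours, not part of Function 8.3.2):

using LinearAlgebra

function inviter_dynamic(A, s, numiter)
    n = size(A, 1)
    x = normalize(randn(n), Inf)
    β = zeros(numiter)
    for k in 1:numiter
        y = (A - s * I) \ x       # shifted matrix changes, so it is refactored every pass
        _, m = findmax(abs.(y))
        β[k] = x[m] / y[m] + s
        x = y / y[m]
        s = β[k]                  # update the shift to the newest eigenvalue estimate
    end
    return β, x
end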
In practice, power and inverse iteration are not as effective as the algorithms used by eigs, which are based on the mathematics described in the rest of this chapter. However, inverse iteration can be useful for turning an eigenvalue estimate into an eigenvector estimate.
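For instance, if a good eigenvalue estimate is already in hand, a single shifted solve usually produces an excellent eigenvector (a minimal sketch; the tiny 1e-8 offset in the shift is an arbitrary illustrative choice to keep the matrix nonsingular):

using LinearAlgebra
λ = [1, -0.75, 0.6, -0.4, 0]
A = triu(ones(5, 5), 1) + diagm(λ)
est = 0.6 + 1e-8                   # pretend this estimate came from another method
v = (A - est * I) \ randn(5)       # one inverse-iteration step with the estimate as shift
v = normalize(v, Inf)
@show norm(A * v - 0.6 * v)        # residual should be tiny: v is nearly an eigenvector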
8.3.4 Exercises

⌨ Use Function 8.3.2 to perform 10 iterations for the given matrix and shift. Compare the results quantitatively to the convergence given by (8.3.7).
(a) $\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \quad s = 1$

(b) $\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \quad s = 2$

(c) $\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}, \quad s = 1.6$

(d) $\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}, \quad s = -0.33$

(e) $\mathbf{A} = \begin{bmatrix} 6 & 5 & 4 \\ 5 & 4 & 3 \\ 4 & 3 & 2 \end{bmatrix}, \quad s = 0.1$
✍ Let $\mathbf{A} = \begin{bmatrix} 1.1 & 1 \\ 0 & 2.1 \end{bmatrix}.$ Given the starting vector $\mathbf{x}_1=[1,1]$, find the vector $\mathbf{x}_2$ for the following shifts.

(a) $s=1$  (b) $s=2$  (c) $s=1.6$
✍ When the shift $s$ is very close to an eigenvalue of $\mathbf{A}$, the matrix $\mathbf{A}-s\mathbf{I}$ is close to a singular matrix. But then (8.3.6) is a linear system with a badly conditioned matrix, which should create a lot of error in the numerical solution for $\mathbf{y}_k$. However, it happens that the error is mostly in the direction of the eigenvector we are looking for, as the following toy example illustrates.

Prove that $\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$ has an eigenvalue at zero with associated eigenvector $\mathbf{v}=[-1,1]^T$. Suppose this matrix is perturbed slightly to $\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & \epsilon \end{bmatrix}$, and that $\mathbf{x}_k=[1,1]$ in (8.3.6). Show that once $\mathbf{y}_k$ is normalized by its infinity norm, the result is within $\epsilon$ of a multiple of $\mathbf{v}$.
⌨ (Continuation of Exercise 8.2.3.) This exercise concerns the $n^2\times n^2$ sparse matrix defined by FNC.poisson(n) for integer $n$. It represents a lumped model of a vibrating square membrane held fixed around the edges.

(a) The eigenvalues of $\mathbf{A}$ closest to zero are approximately squares of the frequencies of vibration for the membrane. Using eigs, find the eigenvalue $\lambda_m$ closest to zero for $n=10,15,20,25$.

(b) For each $n$ in part (a), apply 50 steps of Function 8.3.2 with zero shift. On one graph, plot the four convergence curves $|\beta_k-\lambda_m|$ using a semi-log scale.

(c) Let v be the eigenvector (second output) found by Function 8.3.2 for $n=25$. Make a surface plot of the vibration mode by reshaping v into an $n\times n$ matrix.
⌨ This problem explores the use of dynamic shifting to accelerate the inverse iteration.
(a) Modify Function 8.3.2 to change the value of the shift $s$ to be the most recently computed value in the vector β. Note that the matrix B must also change with each iteration, and the LU factorization cannot be done just once.

(b) Define a $100\times 100$ matrix with values $k^2$ for $k=1,\ldots,100$ on the main diagonal and random values uniformly distributed between 0 and 1 on the first superdiagonal. (Since this matrix is triangular, the diagonal values are its eigenvalues.) Using an initial shift of $s=920$, apply the dynamic inverse iteration. Determine which eigenvalue was found and make a table of the log10 of the errors in the iteration as a function of iteration number. (These should approximately double, until machine precision is reached, due to quadratic convergence.)
(c) Repeat part (b) using a different initial shift of your choice.