10.4 Collocation for linear problems
Tobin A. Driscoll and Richard J. Braun
Let us now devise a numerical method based on finite differences for the linear TPBVP

$$u'' + p(x)u' + q(x)u = r(x), \quad u(a)=\alpha, \; u(b)=\beta. \tag{10.4.1}$$

The first step is to select nodes $x_0 = a < x_1 < \cdots < x_n = b$. For finite differences these will most likely be equally spaced, while for spectral differentiation, they will be Chebyshev points.
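For concreteness, here is a small sketch (not from the text) of both node choices on an interval $[a,b]$, using Chebyshev points of the second kind mapped from $[-1,1]$:

```python
import numpy as np

a, b, n = 0.0, 2.0, 4

# Equispaced nodes, the usual choice for finite differences:
x_fd = np.linspace(a, b, n + 1)

# Chebyshev points of the second kind mapped to [a, b],
# the usual choice for spectral differentiation:
k = np.arange(n + 1)
x_cheb = (a + b) / 2 - (b - a) / 2 * np.cos(np.pi * k / n)
```

Note that both node sets include the endpoints $a$ and $b$, which is essential for imposing the boundary conditions later.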
Rather than solving for a function, we will solve for a vector of its approximate values at the nodes:

$$\mathbf{u} = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \\ u_n \end{bmatrix} \approx \begin{bmatrix} \hat{u}(x_0) \\ \hat{u}(x_1) \\ \vdots \\ \hat{u}(x_{n-1}) \\ \hat{u}(x_n) \end{bmatrix},$$

where $\hat{u}$ is the exact solution of (10.4.1). If we so desire, we can use interpolation to convert the values $(x_i, u_i)$ into a function after the discrete solution is found.
10.4.1 Collocation

Having defined values at the nodes as our unknowns, we impose approximations to the ODE at the same nodes. This approach is known as collocation. Derivatives of the solution are found (approximately) using differentiation matrices. For example,

$$\begin{bmatrix} \hat{u}'(x_0) \\ \hat{u}'(x_1) \\ \vdots \\ \hat{u}'(x_n) \end{bmatrix} \approx \mathbf{u}' = \mathbf{D}_x \mathbf{u},$$

with an appropriately chosen differentiation matrix $\mathbf{D}_x$. Similarly, we define

$$\begin{bmatrix} \hat{u}''(x_0) \\ \hat{u}''(x_1) \\ \vdots \\ \hat{u}''(x_n) \end{bmatrix} \approx \mathbf{u}'' = \mathbf{D}_{xx} \mathbf{u},$$

with $\mathbf{D}_{xx}$ chosen in accordance with the node set.
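As a concrete check, we can construct small versions of $\mathbf{D}_x$ and $\mathbf{D}_{xx}$ from the second-order stencils of (10.3.6) and (10.3.10) and apply them to $u = x^2$, for which these formulas happen to be exact. This is an illustrative sketch, not the book's `diffmat2`:

```python
import numpy as np

n, a, b = 6, 0.0, 2.0
h = (b - a) / n
x = a + h * np.arange(n + 1)

# First derivative: centered in the interior, one-sided at the ends.
Dx = (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / (2 * h)
Dx[0, :3] = np.array([-3.0, 4.0, -1.0]) / (2 * h)
Dx[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2 * h)

# Second derivative: centered in the interior, one-sided at the ends.
Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1)
       + np.diag(np.ones(n), -1)) / h**2
Dxx[0, :4] = np.array([2.0, -5.0, 4.0, -1.0]) / h**2
Dxx[-1, -4:] = np.array([-1.0, 4.0, -5.0, 2.0]) / h**2

# All of these stencils are exact for quadratics, so u = x^2 is a clean test:
u = x**2
```

For this $u$, the products $\mathbf{D}_x\mathbf{u}$ and $\mathbf{D}_{xx}\mathbf{u}$ reproduce $2x$ and the constant 2 exactly, up to roundoff.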
The discrete form of (10.4.1) at the $n+1$ chosen nodes is

$$\mathbf{u}'' + \mathbf{P}\mathbf{u}' + \mathbf{Q}\mathbf{u} = \mathbf{r},$$

where

$$\mathbf{P} = \begin{bmatrix} p(x_0) & & \\ & \ddots & \\ & & p(x_n) \end{bmatrix}, \quad \mathbf{Q} = \begin{bmatrix} q(x_0) & & \\ & \ddots & \\ & & q(x_n) \end{bmatrix}, \quad \mathbf{r} = \begin{bmatrix} r(x_0) \\ \vdots \\ r(x_n) \end{bmatrix}.$$

If we apply the definitions of $\mathbf{u}'$ and $\mathbf{u}''$ and rearrange, we obtain

$$\mathbf{L}\mathbf{u} = \mathbf{r}, \qquad \mathbf{L} = \mathbf{D}_{xx} + \mathbf{P}\mathbf{D}_x + \mathbf{Q}, \tag{10.4.7}$$

which is a linear system of $n+1$ equations in $n+1$ unknowns.
We have not yet incorporated the boundary conditions. Those take the form of the additional linear conditions $u_0 = \alpha$ and $u_n = \beta$. We might regard this situation as an overdetermined system, suitable for linear least-squares. However, it's usually preferred to impose the boundary conditions and collocation conditions exactly, so we need to discard two of the collocation equations to keep the system square. The obvious candidates for deletion are the collocation conditions at the two endpoints. We may express these deletions by means of a matrix that is an $(n+1)\times(n+1)$ identity with the first and last rows deleted:
$$\mathbf{E} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1^T \\ \vdots \\ \mathbf{e}_{n-1}^T \end{bmatrix}, \tag{10.4.8}$$

where, as always, $\mathbf{e}_k$ is the $k$th column (here starting from $k=0$) of an identity matrix. The product $\mathbf{E}\mathbf{A}$ deletes the first and last rows of $\mathbf{A}$, leaving a matrix that is $(n-1)\times(n+1)$. Similarly, $\mathbf{E}\mathbf{r}$ deletes the first and last rows of $\mathbf{r}$.
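A quick way to see $\mathbf{E}$ in action is on a small hypothetical matrix (the $5\times 5$ case $n=4$, chosen only for illustration):

```python
import numpy as np

n = 4
# E is the (n+1) x (n+1) identity with its first and last rows deleted:
E = np.eye(n + 1)[1:-1, :]          # shape (n-1, n+1)

# A toy matrix to act on:
A = np.arange(float((n + 1) ** 2)).reshape(n + 1, n + 1)

# Left-multiplying by E deletes the first and last rows of A:
EA = E @ A
```

In code there is rarely any reason to form $\mathbf{E}$ explicitly, since `A[1:-1, :]` produces the same result by indexing.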
Finally, we note that $\hat{u}(a) = \mathbf{e}_0^T\mathbf{u}$ and $\hat{u}(b) = \mathbf{e}_n^T\mathbf{u}$, so the linear system including both the ODE and the boundary condition collocations is

$$\underbrace{\begin{bmatrix} \mathbf{e}_0^T \\ \mathbf{E}\mathbf{L} \\ \mathbf{e}_n^T \end{bmatrix}}_{\mathbf{A}} \mathbf{u} = \underbrace{\begin{bmatrix} \alpha \\ \mathbf{E}\mathbf{r} \\ \beta \end{bmatrix}}_{\mathbf{b}}. \tag{10.4.9}$$

We have arrived at an $(n+1)\times(n+1)$ linear system $\mathbf{A}\mathbf{u}=\mathbf{b}$, which we can solve for $\mathbf{u}$ using any of the methods we have investigated.
Example 10.4.1 (Collocation for a linear BVP)

Let's look at the discretization of the linear BVP

$$u'' = u - x u', \quad u(0)=-2, \; u(2)=3,$$

on an equispaced grid with $n=4$, i.e., five nodes. This gives $h = (2-0)/4 = 1/2$.
Writing the ODE as $u'' + x u' - u = 0$, we identify $p(x)=x$, $q(x)=-1$, and $r(x)=0$. Hence,

$$\mathbf{P} = \begin{bmatrix} 0 & & & & \\ & 1/2 & & & \\ & & 1 & & \\ & & & 3/2 & \\ & & & & 2 \end{bmatrix}, \quad \mathbf{Q} = -\mathbf{I}_5, \quad \mathbf{r} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

Using (10.3.6) and (10.3.10) as the differentiation matrices, we find
$$\begin{align*}
\mathbf{L} &= \frac{1}{1/4}\begin{bmatrix}
2 & -5 & 4 & -1 & \\
1 & -2 & 1 & & \\
 & 1 & -2 & 1 & \\
 & & 1 & -2 & 1 \\
 & -1 & 4 & -5 & 2
\end{bmatrix}
+ \mathbf{P} \cdot \frac{1}{1/2}\begin{bmatrix}
-3/2 & 2 & -1/2 & & \\
-1/2 & 0 & 1/2 & & \\
 & -1/2 & 0 & 1/2 & \\
 & & -1/2 & 0 & 1/2 \\
 & & 1/2 & -2 & 3/2
\end{bmatrix}
+ \mathbf{Q} \\
&= \begin{bmatrix}
7 & -20 & 16 & -4 & 0 \\
7/2 & -9 & 9/2 & 0 & 0 \\
0 & 3 & -9 & 5 & 0 \\
0 & 0 & 5/2 & -9 & 11/2 \\
0 & -4 & 18 & -28 & 13
\end{bmatrix}.
\end{align*}$$

The system $\mathbf{L}\mathbf{u} = \mathbf{r}$ represents the discretized ODE. When we modify the system to include the boundary conditions, we arrive at
$$\mathbf{A} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
7/2 & -9 & 9/2 & 0 & 0 \\
0 & 3 & -9 & 5 & 0 \\
0 & 0 & 5/2 & -9 & 11/2 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}, \quad
\mathbf{b} = \begin{bmatrix} -2 \\ 0 \\ 0 \\ 0 \\ 3 \end{bmatrix}.$$

The system $\mathbf{A}\mathbf{u} = \mathbf{b}$ is the collocated BVP.
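The example can be reproduced numerically. The following sketch writes out the differentiation matrices directly, matching the stencils of (10.3.6) and (10.3.10) rather than calling the book's `diffmat2`, then builds $\mathbf{A}$ and $\mathbf{b}$ and solves the system:

```python
import numpy as np

# Example 10.4.1: u'' + x u' - u = 0, u(0) = -2, u(2) = 3, on n = 4 subintervals.
n, a, b = 4, 0.0, 2.0
h = (b - a) / n
x = a + h * np.arange(n + 1)

# Second-order differentiation matrices (centered interior, one-sided ends):
Dx = (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / (2 * h)
Dx[0, :3] = np.array([-3.0, 4.0, -1.0]) / (2 * h)
Dx[-1, -3:] = np.array([1.0, -4.0, 3.0]) / (2 * h)
Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1)
       + np.diag(np.ones(n), -1)) / h**2
Dxx[0, :4] = np.array([2.0, -5.0, 4.0, -1.0]) / h**2
Dxx[-1, -4:] = np.array([-1.0, 4.0, -5.0, 2.0]) / h**2

P = np.diag(x)              # p(x) = x
Q = -np.eye(n + 1)          # q(x) = -1
L = Dxx + P @ Dx + Q

# Replace first and last rows with the boundary conditions.
A = L.copy()
A[0, :] = np.eye(n + 1)[0]
A[-1, :] = np.eye(n + 1)[-1]
bvec = np.zeros(n + 1)      # r(x) = 0 at the interior nodes
bvec[0], bvec[-1] = -2.0, 3.0

u = np.linalg.solve(A, bvec)
```

The interior rows of `A` agree with the fractions displayed above, and the computed `u` satisfies both boundary conditions exactly, since those rows of the system are exact.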
10.4.2 Implementation

Our implementation of linear collocation is Function 10.4.1. It uses second-order finite differences but makes no attempt to exploit the sparsity of the matrices. It would be trivial to change the function to use spectral differentiation.
Solution of a linear boundary-value problem (Julia)

```julia
"""
    bvplin(p, q, r, xspan, lval, rval, n)

Use finite differences to solve a linear boundary-value problem.
The ODE is u'' + `p`(x)u' + `q`(x)u = `r`(x) on the interval `xspan`,
with endpoint function values given as `lval` and `rval`. There will
be `n`+1 equally spaced nodes, including the endpoints.

Returns vectors of the nodes and the solution values.
"""
function bvplin(p, q, r, xspan, lval, rval, n)
    x, Dₓ, Dₓₓ = diffmat2(n, xspan)
    P = diagm(p.(x))
    Q = diagm(q.(x))
    L = Dₓₓ + P * Dₓ + Q    # ODE expressed at the nodes

    # Replace first and last rows using boundary conditions.
    z = zeros(1, n)
    A = [[1 z]; L[2:n, :]; [z 1]]
    b = [lval; r.(x[2:n]); rval]

    # Solve the system.
    u = A \ b
    return x, u
end
```
About the code: Note that there is no need to explicitly form the row-deletion matrix $\mathbf{E}$ from (10.4.8). Since it only appears as left-multiplying $\mathbf{L}$ or $\mathbf{r}$, we simply perform the row deletions as needed using indexing.
Solution of a linear boundary-value problem (MATLAB)

```matlab
function [x, u] = bvplin(p, q, r, a, b, ua, ub, n)
% BVPLIN  Solve a linear boundary-value problem.
% Input:
%   p, q, r   define u'' + pu' + qu = r (functions)
%   a, b      endpoints of the domain (scalars)
%   ua        value of u(a)
%   ub        value of u(b)
%   n         number of subintervals (integer)
% Output:
%   x         collocation nodes (vector, length n+1)
%   u         solution at nodes (vector, length n+1)

[x, Dx, Dxx] = diffmat2(n, [a, b]);
P = diag(p(x));
Q = diag(q(x));
L = Dxx + P * Dx + Q;    % ODE expressed at the nodes
r = r(x);

% Replace first and last rows using boundary conditions.
I = speye(n+1);
A = [ I(:, 1)'; L(2:n, :); I(:, n+1)' ];
f = [ ua; r(2:n); ub ];

% Solve the system.
u = A \ f;
end
```
Solution of a linear boundary-value problem (Python)

```python
import numpy as np

def bvplin(p, q, r, xspan, lval, rval, n):
    """
    bvplin(p, q, r, xspan, lval, rval, n)

    Use finite differences to solve a linear boundary-value problem. The ODE is
    u'' + p(x)u' + q(x)u = r(x) on the interval xspan, with endpoint function
    values given as lval and rval. There will be n+1 equally spaced nodes,
    including the endpoints.

    Returns vectors of the nodes and the solution values.
    """
    x, Dx, Dxx = diffmat2(n, xspan)
    P = np.diag(p(x))
    Q = np.diag(q(x))
    L = Dxx + P @ Dx + Q    # ODE expressed at the nodes

    # Replace first and last rows using boundary conditions.
    I = np.eye(n + 1)
    A = np.vstack([I[0], L[1:-1], I[-1]])
    b = np.hstack([lval, r(x[1:-1]), rval])

    # Solve the system.
    u = np.linalg.solve(A, b)
    return x, u
```
10.4.3 Accuracy and stability

We revisit Example 10.2.3, which exposed instability in the shooting method, in order to verify second-order convergence.
If we write the solution $\mathbf{u}$ of Equation (10.4.9) as the exact solution minus an error vector $\mathbf{e}$, i.e., $\mathbf{u} = \hat{\mathbf{u}} - \mathbf{e}$, we obtain

$$\begin{gather*}
\mathbf{A}\hat{\mathbf{u}} - \mathbf{A}\mathbf{e} = \mathbf{b}, \\
\mathbf{e} = \mathbf{A}^{-1}\left[\mathbf{A}\hat{\mathbf{u}} - \mathbf{b}\right] = \mathbf{A}^{-1}\boldsymbol{\tau}(n),
\end{gather*}$$

where $\boldsymbol{\tau}$ is the truncation error of the derivative discretizations (except at the boundary rows, where it is zero). It follows that $\|\mathbf{e}\|$ vanishes at the same rate as the truncation error if $\|\mathbf{A}^{-1}\|$ is bounded above as $n\to\infty$. In the present context, this property is known as stability. Proving stability is too technical to walk through here, but stability is guaranteed under some reasonable conditions on the BVP.
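To see stability and second-order convergence together, the following sketch uses a test problem of our own choosing, $u'' + u = 0$ on $[0,3]$ with $u(0)=0$, $u(3)=\sin 3$ and exact solution $u(x)=\sin x$, and checks that the infinity-norm error drops by roughly a factor of 4 each time $n$ doubles:

```python
import numpy as np

def solve_bvp_fd(n):
    # Collocate u'' + u = 0 on [0, 3] with u(0) = 0, u(3) = sin(3).
    # Centered second differences at the interior nodes; the first and
    # last rows are replaced by the boundary conditions, so the
    # one-sided boundary stencils are never needed here.
    a, b = 0.0, 3.0
    h = (b - a) / n
    x = a + h * np.arange(n + 1)
    Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1)
           + np.diag(np.ones(n), -1)) / h**2
    A = Dxx + np.eye(n + 1)          # L = Dxx + Q with q(x) = 1
    A[0, :] = np.eye(n + 1)[0]
    A[-1, :] = np.eye(n + 1)[-1]
    rhs = np.zeros(n + 1)
    rhs[0], rhs[-1] = 0.0, np.sin(3.0)
    u = np.linalg.solve(A, rhs)
    return np.max(np.abs(u - np.sin(x)))   # infinity-norm error

errs = [solve_bvp_fd(n) for n in (40, 80, 160)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
# Second-order convergence: each ratio should be close to 4.
```

Observing ratios near 4 is exactly the statement that $\|\mathbf{e}\| = O(n^{-2})$, which requires both second-order truncation error and a bounded $\|\mathbf{A}^{-1}\|$.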
10.4.4 Exercises

✍ For each boundary-value problem, verify that the given solution is correct. Using $n=3$ and the differentiation matrices in (10.3.6) and (10.3.10), write out $\mathbf{L}$ and $\mathbf{r}$ from (10.4.7) and $\mathbf{A}$ and $\mathbf{b}$ from (10.4.9).
(a) $u'' + u = 0, \quad u(0) = 0, \; u(3) = \sin 3$

Solution: $u(x) = \sin x$

(b) $u'' - \frac{3}{x}u' + \frac{4}{x^2}u = 0, \quad u(1) = 0, \; u(4) = 32\log 2$

Solution: $u(x) = x^2 \log x$
(c) $u'' - \left(x+\frac{1}{2}\right)^{-1} u' + 2\left(x+\frac{1}{2}\right)^{-2} u = 10\left(x+\frac{1}{2}\right)^{-4}, \quad u\left(\tfrac{1}{2}\right) = 1, \; u\left(\tfrac{5}{2}\right) = \frac{1}{9}$

Solution: $u(x) = \left(x+\frac{1}{2}\right)^{-2}$
⌨ For each of the cases in the previous exercise, use Function 10.4.1 to solve the problem with $n=60$ and make a plot of its error as a function of $x$. Then, for each $n=10,20,40,\ldots,640$, find the infinity norm of the error. Make a log-log plot of error versus $n$ and include a graphical comparison to second-order convergence.
⌨ Modify Function 10.4.1 to use spectral differentiation rather than second-order finite differences. For each of the cases in Exercise 10.4.1, solve the problem with $n=5,10,15,\ldots,40$, finding the infinity norm of the error in each case. Make a log-linear plot of error versus $n$.
⌨ The Airy equation is $u'' = xu$. Its solution is exponential for $x>0$ and oscillatory for $x<0$. The exact solution is given by $u = c_1\operatorname{Ai}(x) + c_2\operatorname{Bi}(x)$, where Ai and Bi are Airy functions. In Julia they are computed by `airyai` and `airybi`, respectively.
(a) Suppose that $u(-10) = -1$, $u(2) = 1$. By setting up and solving a $2\times 2$ linear system, find numerical values for $c_1$ and $c_2$. Plot the resulting exact solution.

(b) Use Function 10.4.1 with $n=120$ to find the solution with the boundary conditions in part (a). In a 2-by-1 subplot array, plot the finite-difference solution and its error. (The solution is not very accurate.)

(c) Repeat part (b) with $n=800$.
Consider the boundary-value problem $\epsilon u'' + (1+\epsilon)u' + u = 0$ over $x\in(0,1)$, with $u(0)=0$, $u(1)=1$. As the parameter $\epsilon$ is decreased, the solution gets a thin region of high activity near $x=0$ called a boundary layer.
(a) ✍ Verify that the exact solution to the problem is

$$u(x) = \frac{e^{-x} - e^{-x/\epsilon}}{e^{-1} - e^{-1/\epsilon}}.$$

On one graph, plot $u(x)$ for $\epsilon = \frac{1}{4}, \frac{1}{16}, \frac{1}{64}$.
(b) ⌨ Define $N(\epsilon)$ as the smallest integer value of $n$ needed to make the max-norm error of the result of Function 10.4.1 less than $10^{-4}$. For each of the values $\epsilon = \frac{1}{2},\frac{1}{4},\frac{1}{8},\ldots,\frac{1}{64}$, estimate $N(\epsilon)$ by starting with $n=50$ and incrementing by 25 until the measured error is sufficiently small.
(c) ⌨ Plot the error as a function of $x$ for $\epsilon=\frac{1}{64}$ and $n=N(\epsilon)$. Compare the peak of the error to the graph from part (a).

(d) ⌨ Develop a hypothesis for the leading-order behavior of $N(\epsilon)$. Plot the observed $N(\epsilon)$ and your hypothesis together on a log-log plot.

(e) ✍ Finite-difference errors depend on the solution as well as on $n$. Given that this error decreases as $O(n^{-2})$, what does your hypothesis for $N(\epsilon)$ suggest about the behavior of the error for fixed $n$ as $\epsilon\to 0$?