Finite differences

Tobin A. Driscoll and Richard J. Braun
Now we turn to one of the most common and important applications of interpolants: finding derivatives of functions. Because differentiation is a linear operation, we will constrain ourselves to formulas that are linear in the nodal values.
Definition 5.4.1 (Finite-difference formula)
A finite-difference formula is a list of values $a_{-p},\ldots,a_q$, called *weights*, such that for all $f$ in some class of functions,

$$f'(x) \approx \frac{1}{h} \sum_{k=-p}^{q} a_k \, f(x+kh). \qquad (5.4.1)$$
The weights are independent of $f$ and $h$. The formula is said to be convergent if the approximation becomes equality in the limit $h\to 0$ for a suitable class of functions.
Note that while (5.4.1) is about finding the derivative at a single point $x$, the same formula can be applied for different $x$. The usual situation is a regularly spaced grid of nodes, $a, a+h, a+2h, \ldots, b$, and then the value of $f$ at each node takes part in multiple applications of the formula. This will be demonstrated in Example 5.4.1 below.
5.4.1 Common examples

Three special cases of (5.4.1) deserve particular attention.
The simplest example of a forward difference formula is inspired by the familiar limit definition of a derivative:
$$f'(x) \approx \frac{f(x+h)-f(x)}{h}, \qquad (5.4.2)$$

which is (5.4.1) with $p=0$, $q=1$, $a_0=-1$, and $a_1=1$. Analogously, we have the backward difference
$$f'(x) \approx \frac{f(x)-f(x-h)}{h}, \qquad (5.4.3)$$

in which $p=1$, $q=0$.
Example 5.4.1

Suppose $f(x)=x^2$, and we take $h=\frac{1}{4}$ over the interval $[0,1]$. This results in the nodes $0, \frac{1}{4}, \frac{1}{2}, \frac{3}{4}, 1$. We evaluate $f$ at the nodes to get
$$f(0) = 0, \quad f\left(\tfrac{1}{4}\right) = \tfrac{1}{16}, \quad f\left(\tfrac{1}{2}\right) = \tfrac{1}{4}, \quad f\left(\tfrac{3}{4}\right) = \tfrac{9}{16}, \quad f(1) = 1.$$

This gives four forward difference estimates,
$$\begin{aligned}
f'(0) & \approx 4\left(\tfrac{1}{16}-0\right), &\quad
f'\left(\tfrac{1}{4}\right) & \approx 4\left(\tfrac{1}{4}-\tfrac{1}{16}\right), \\
f'\left(\tfrac{1}{2}\right) & \approx 4\left(\tfrac{9}{16}-\tfrac{1}{4}\right), &\quad
f'\left(\tfrac{3}{4}\right) & \approx 4\left(1-\tfrac{9}{16}\right).
\end{aligned}$$

We also get four backward difference estimates,
$$\begin{aligned}
f'\left(\tfrac{1}{4}\right) & \approx 4\left(\tfrac{1}{16}-0\right), &\quad
f'\left(\tfrac{1}{2}\right) & \approx 4\left(\tfrac{1}{4}-\tfrac{1}{16}\right), \\
f'\left(\tfrac{3}{4}\right) & \approx 4\left(\tfrac{9}{16}-\tfrac{1}{4}\right), &\quad
f'\left(1\right) & \approx 4\left(1-\tfrac{9}{16}\right).
\end{aligned}$$

Notice that it's the same four differences each time, but we're interpreting them as derivative estimates at different nodes.
As pointed out in Example 5.4.1, the only real distinction between (5.4.2) and (5.4.3) is whether we think of $f'$ as being evaluated at the left node or the right one. Symmetry suggests evaluating it halfway between. That is the motivation behind centered difference formulas.
Let's derive the shortest centered formula using $p=q=1$. For simplicity, we will set $x=0$ without affecting the result. This means that $f(-h)$, $f(0)$, and $f(h)$ are all available in (5.4.1).
Note that (5.4.2) is simply the slope of the line through the points $\bigl(0,f(0)\bigr)$ and $\bigl(h,f(h)\bigr)$. One route to using all three function values is to differentiate the quadratic polynomial that interpolates $\bigl(-h,f(-h)\bigr)$ as well (see Exercise 5.4.1):
$$Q(x) = \frac{x(x-h)}{2h^2}\, f(-h) - \frac{x^2-h^2}{h^2}\, f(0) + \frac{x(x+h)}{2h^2}\, f(h). \qquad (5.4.4)$$

This leads to
$$f'(0) \approx Q'(0) = \frac{f(h)-f(-h)}{2h}. \qquad (5.4.5)$$

This result is equivalent to (5.4.1) with $p=q=1$ and weights $a_{-1}=-\frac{1}{2}$, $a_0=0$, and $a_1=\frac{1}{2}$. Observe that while the value of $f(0)$ was available during the derivation, its weight ends up being zero.
Besides the aesthetic appeal of symmetry, (5.4.5) has another important advantage over the one-sided formulas, as we will see in Convergence of finite differences.
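Because (5.4.5) comes from differentiating an interpolating quadratic exactly, it commits no error at all when $f$ itself is a polynomial of degree two or less. Here is a quick Julia check of our own:

```julia
# Sanity check (not from the text): the centered difference (5.4.5)
# recovers the derivative of a quadratic exactly, for any step size h.
f = x -> 3x^2 - 2x + 1
h = 0.1
@show (f(h) - f(-h)) / 2h;    # exactly f'(0) = -2
```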
We can in principle derive any finite-difference formula from the same process: interpolate the given function values, then differentiate the interpolant exactly. Some results of the process are given in Table 5.4.1 for centered differences, and in Table 5.4.2 for forward differences. Both show the weights for estimating the derivative at $x=0$. To get backward differences, you change the signs and reverse the order of the coefficients in any row of Table 5.4.2; see Exercise 5.4.2.
Table 5.4.1: Weights for centered finite-difference formulas.
| order | $-4h$ | $-3h$ | $-2h$ | $-h$ | $0$ | $h$ | $2h$ | $3h$ | $4h$ |
|---|---|---|---|---|---|---|---|---|---|
| 2 | | | | $-\frac{1}{2}$ | $0$ | $\frac{1}{2}$ | | | |
| 4 | | | $\frac{1}{12}$ | $-\frac{2}{3}$ | $0$ | $\frac{2}{3}$ | $-\frac{1}{12}$ | | |
| 6 | | $-\frac{1}{60}$ | $\frac{3}{20}$ | $-\frac{3}{4}$ | $0$ | $\frac{3}{4}$ | $-\frac{3}{20}$ | $\frac{1}{60}$ | |
| 8 | $\frac{1}{280}$ | $-\frac{4}{105}$ | $\frac{1}{5}$ | $-\frac{4}{5}$ | $0$ | $\frac{4}{5}$ | $-\frac{1}{5}$ | $\frac{4}{105}$ | $-\frac{1}{280}$ |
Table 5.4.2: Weights for forward finite-difference formulas. To get backward differences, change the signs and reverse the order of the coefficients.
| order | $0$ | $h$ | $2h$ | $3h$ | $4h$ |
|---|---|---|---|---|---|
| 1 | $-1$ | $1$ | | | |
| 2 | $-\frac{3}{2}$ | $2$ | $-\frac{1}{2}$ | | |
| 3 | $-\frac{11}{6}$ | $3$ | $-\frac{3}{2}$ | $\frac{1}{3}$ | |
| 4 | $-\frac{25}{12}$ | $4$ | $-3$ | $\frac{4}{3}$ | $-\frac{1}{4}$ |
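For instance, the order-2 row of Table 5.4.2 is shorthand for the formula

$$f'(0) \approx \frac{1}{h}\left[-\tfrac{3}{2}f(0) + 2f(h) - \tfrac{1}{2}f(2h)\right],$$

which reappears as `FD2` in Example 5.4.3 below.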
The main motivation for using more function values in a formula is to improve the accuracy. This is measured by order of accuracy, which is shown in the tables and explored in Section 5.5.
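To see the interpolate/differentiate recipe in action, here is a minimal Julia sketch of our own (assuming the Polynomials.jl package) that recovers the order-2 centered row of Table 5.4.1 by differentiating each cardinal interpolating polynomial at zero:

```julia
using Polynomials

t = [-1.0, 0.0, 1.0]            # nodes in units of h, i.e., -h, 0, h with h = 1
w = zeros(3)
for k in 1:3
    y = zeros(3)
    y[k] = 1.0                  # cardinal data: 1 at node k, 0 at the others
    p = fit(Polynomial, t, y)   # interpolating polynomial through the data
    w[k] = derivative(p)(0.0)   # the weight is the cardinal derivative at 0
end
@show w;                        # expect [-0.5, 0.0, 0.5]
```

By linearity, differentiating the full interpolant at zero gives the same weighted combination of the $f$ values, which is exactly (5.4.1) with $h=1$.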
Example 5.4.3 (Finite differences)
If $f(x)=e^{\sin(x)}$, then $f'(0)=1$.
Here are the first two centered differences from Table 5.4.1:

```julia
f = x -> exp(sin(x))
h = 0.05
CD2 = (-f(-h) + f(h)) / 2h
CD4 = (f(-2h) - 8f(-h) + 8f(h) - f(2h)) / 12h
@show (CD2, CD4);
```

```
(CD2, CD4) = (0.9999995835069508, 1.0000016631938748)
```
Here are the first two forward differences from Table 5.4.2:

```julia
FD1 = (-f(0) + f(h)) / h
FD2 = (-3f(0) + 4f(h) - f(2h)) / 2h
@show (FD1, FD2);
```

```
(FD1, FD2) = (1.024983957209069, 1.0000996111012461)
```
Finally, here are the backward differences that come from reverse-negating the forward differences:

```julia
BD1 = (-f(-h) + f(0)) / h
BD2 = (f(-2h) - 4f(-h) + 3f(0)) / 2h
@show (BD1, BD2);
```

```
(BD1, BD2) = (0.9750152098048326, 0.9999120340342049)
```
The same computations in MATLAB:

```matlab
f = @(x) exp(sin(x));
h = 0.05;
format long
CD2 = (-f(-h) + f(h)) / (2*h)
CD4 = (f(-2*h) - 8*f(-h) + 8*f(h) - f(2*h)) / (12*h)
FD1 = (-f(0) + f(h)) / h
FD2 = (-3*f(0) + 4*f(h) - f(2*h)) / (2*h)
BD1 = (-f(-h) + f(0)) / h
BD2 = (f(-2*h) - 4*f(-h) + 3*f(0)) / (2*h)
```
And in Python:

```python
from numpy import exp, sin   # assuming NumPy supplies exp and sin

f = lambda x: exp(sin(x))
h = 0.05
CD2 = (-f(-h) + f(h)) / (2*h)
CD4 = (f(-2*h) - 8*f(-h) + 8*f(h) - f(2*h)) / (12*h)
print(f"CD2 is {CD2:.9f} and CD4 is {CD4:.9f}")
FD1 = (-f(0) + f(h)) / h
FD2 = (-3*f(0) + 4*f(h) - f(2*h)) / (2*h)
print(f"FD1 is {FD1:.9f} and FD2 is {FD2:.9f}")
BD1 = (-f(-h) + f(0)) / h
BD2 = (f(-2*h) - 4*f(-h) + 3*f(0)) / (2*h)
print(f"BD1 is {BD1:.9f} and BD2 is {BD2:.9f}")
```

```
CD2 is 0.999999584 and CD4 is 1.000001663
FD1 is 1.024983957 and FD2 is 1.000099611
BD1 is 0.975015210 and BD2 is 0.999912034
```
5.4.2 Higher derivatives

Many applications require the second derivative of a function. It's tempting to use the finite difference of a finite difference. For example, applying (5.4.5) to $f'$ gives
$$f''(0) \approx \frac{f'(h) - f'(-h)}{2h}.$$

Then applying (5.4.5) to approximate the appearances of $f'$ leads to
$$f''(0) \approx \frac{f(-2h) - 2f(0) + f(2h)}{4h^2}.$$

This is a valid formula, but it uses values at $\pm 2h$ rather than the closer values at $\pm h$. A better and more generalizable tactic is to return to the quadratic $Q(x)$ in (5.4.4) and use $Q''(0)$ to approximate $f''(0)$. Doing so yields
$$f''(0) \approx \frac{f(-h) - 2f(0) + f(h)}{h^2}, \qquad (5.4.9)$$

which is the simplest centered second-difference formula. As with the first derivative, we can choose larger values of $p$ and $q$ in (5.4.1) to get new formulas, such as
$$f''(0) \approx \frac{f(0) - 2f(h) + f(2h)}{h^2}, \qquad (5.4.10)$$

and
$$f''(0) \approx \frac{2f(0) - 5f(h) + 4f(2h) - f(3h)}{h^2}. \qquad (5.4.11)$$

For the second derivative, converting a forward difference to a backward difference requires reversing the order of the weights, while not changing their signs.
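Like the first-derivative version, the centered second difference (5.4.9) comes from differentiating an interpolating quadratic, so it is exact whenever $f$ is quadratic. A quick Julia check of our own:

```julia
# Sanity check (not from the text): (5.4.9) is exact when f is quadratic.
f = x -> 5x^2 + x - 3
h = 0.2
@show (f(-h) - 2f(0) + f(h)) / h^2;    # exactly f''(0) = 10
```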
Example 5.4.4 (Finite differences for $f''$)

If $f(x)=e^{\sin(x)}$, then $f''(0)=1$.
Here is a centered estimate given by (5.4.9):

```julia
f = x -> exp(sin(x))
h = 0.05
CD2 = (f(-h) - 2f(0) + f(h)) / h^2
@show CD2;
```
For the same $h$, here are forward estimates given by (5.4.10) and (5.4.11):

```julia
FD1 = (f(0) - 2f(h) + f(2h)) / h^2
FD2 = (2f(0) - 5f(h) + 4f(2h) - f(3h)) / h^2
@show (FD1, FD2);
```

```
(FD1, FD2) = (0.9953738443129188, 1.0078811479598213)
```
Finally, here are the backward estimates that come from reversing (5.4.10) and (5.4.11):

```julia
BD1 = (f(-2h) - 2f(-h) + f(0)) / h^2
BD2 = (-f(-3h) + 4f(-2h) - 5f(-h) + 2f(0)) / h^2
@show (BD1, BD2);
```

```
(BD1, BD2) = (0.9958729691748489, 1.0058928192789194)
```
The same computations in MATLAB:

```matlab
f = @(x) exp(sin(x));
h = 0.05;
format long
CD2 = (f(-h) - 2*f(0) + f(h)) / h^2
FD1 = (f(0) - 2*f(h) + f(2*h)) / h^2
FD2 = (2*f(0) - 5*f(h) + 4*f(2*h) - f(3*h)) / h^2
BD1 = (f(-2*h) - 2*f(-h) + f(0)) / h^2
BD2 = (-f(-3*h) + 4*f(-2*h) - 5*f(-h) + 2*f(0)) / h^2
```
And in Python:

```python
from numpy import exp, sin   # assuming NumPy supplies exp and sin

f = lambda x: exp(sin(x))
h = 0.05
CD2 = (f(-h) - 2*f(0) + f(h)) / h**2
print(f"CD2 is {CD2:.9f}")
FD1 = (f(0) - 2*f(h) + f(2*h)) / h**2
FD2 = (2*f(0) - 5*f(h) + 4*f(2*h) - f(3*h)) / h**2
print(f"FD1 is {FD1:.9f} and FD2 is {FD2:.9f}")
BD1 = (f(-2*h) - 2*f(-h) + f(0)) / h**2
BD2 = (-f(-3*h) + 4*f(-2*h) - 5*f(-h) + 2*f(0)) / h**2
print(f"BD1 is {BD1:.9f} and BD2 is {BD2:.9f}")
```

```
FD1 is 0.995373844 and FD2 is 1.007881148
BD1 is 0.995872969 and BD2 is 1.005892819
```
5.4.3 Arbitrary nodes

Although function values at equally spaced nodes are a common and convenient situation, the node locations may be arbitrary. The general form of a finite-difference formula is

$$f^{(m)}(0) \approx \sum_{k=0}^{r} c_k\, f(t_k).$$
We no longer assume equally spaced nodes, so there is no "$h$" to be used in the formula. As before, the weights may be applied after any translation of the independent variable. The weights again follow from the interpolate/differentiate recipe, but the algebra becomes complicated. Fortunately, there is an elegant recursion known as Fornberg's algorithm that can calculate these weights for any desired formula. We present it without derivation as Function 5.4.1.
Function 5.4.1: Fornberg's algorithm for finite-difference weights. First, in Julia:
"""
fdweights(t, m)
Compute weights for the `m`th derivative of a function at zero using
values at the nodes in vector `t`.
"""
function fdweights(t, m)
# This is a compact implementation, not an efficient one.
# Recursion for one weight.
function weight(t, m, r, k)
# Inputs
# t: vector of nodes
# m: order of derivative sought
# r: number of nodes to use from t
# k: index of node whose weight is found
if (m < 0) || (m > r) # undefined coeffs must be zero
c = 0
elseif (m == 0) && (r == 0) # base case of one-point interpolation
c = 1
else # generic recursion
if k < r
c =
(t[r+1] * weight(t, m, r-1, k) - m * weight(t, m-1, r-1, k)) /
(t[r+1] - t[k+1])
else
numer = r > 1 ? prod(t[r] - x for x in t[1:r-1]) : 1
denom = r > 0 ? prod(t[r+1] - x for x in t[1:r]) : 1
β = numer / denom
c =
β *
(m * weight(t, m - 1, r-1, r-1) - t[r] * weight(t, m, r-1, r-1))
end
end
return c
end
r = length(t) - 1
w = zeros(size(t))
return [weight(t, m, r, k) for k in 0:r]
end
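As a quick sanity check of our own, the weights for nodes $-1, 0, 1$ and $m=1$ should reproduce the order-2 centered row of Table 5.4.1:

```julia
fdweights([-1.0, 0.0, 1.0], 1)    # expect [-0.5, 0.0, 0.5]
```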
The same algorithm in MATLAB:
```matlab
function w = fdweights(t, m)
    %FDWEIGHTS Fornberg's algorithm for finite difference weights.
    % Input:
    %   t   nodes (vector, length r+1)
    %   m   order of derivative sought at x=0 (integer scalar)
    % Output:
    %   w   weights for the approximation to the mth derivative (vector)
    % This is a compact implementation, not an efficient one.
    r = length(t) - 1;
    w = zeros(size(t));
    for k = 0:r
        w(k+1) = weight(t, m, r, k);
    end
end

function c = weight(t, m, r, k)
    % Implement a recursion for the weights.
    % Input:
    %   t   nodes (vector)
    %   m   order of derivative sought
    %   r   number of nodes to use from t (<= length(t))
    %   k   index of node whose weight is found
    % Output:
    %   c   finite difference weight
    if (m < 0) || (m > r)    % undefined coeffs must be zero
        c = 0;
    elseif (m == 0) && (r == 0)    % base case of one-point interpolation
        c = 1;
    else    % generic recursion
        if k < r
            c = t(r+1) * weight(t, m, r-1, k) - m * weight(t, m-1, r-1, k);
            c = c / (t(r+1) - t(k+1));
        else
            beta = prod(t(r) - t(1:r-1)) / prod(t(r+1) - t(1:r));
            c = beta * (m * weight(t, m-1, r-1, r-1) - t(r) * weight(t, m, r-1, r-1));
        end
    end
end
```
And in Python:
```python
import numpy as np

def fdweights(t, m):
    """
    fdweights(t, m)

    Return weights for the mth derivative of a function at zero using values
    at the nodes in array t.
    """
    # This is a compact implementation, not an efficient one.
    def weight(t, m, r, k):
        # Recursion for one weight:
        #   t: vector of nodes
        #   m: order of derivative sought
        #   r: number of nodes to use from t (<= len(t))
        #   k: index of node whose weight is found
        if (m < 0) or (m > r):    # undefined coeffs must be zero
            c = 0
        elif (m == 0) and (r == 0):    # base case of one-point interpolation
            c = 1
        else:    # generic recursion
            if k < r:
                c = (t[r] * weight(t, m, r-1, k) - m * weight(t, m-1, r-1, k)) / (t[r] - t[k])
            else:
                beta = np.prod(t[r-1] - t[:r-1]) / np.prod(t[r] - t[:r])
                c = beta * (m * weight(t, m-1, r-1, r-1) - t[r-1] * weight(t, m, r-1, r-1))
        return c

    r = len(t) - 1
    return np.array([weight(t, m, r, k) for k in range(r+1)])
```
Example 5.4.5 (Finite differences at arbitrary nodes)
We will estimate the derivative of $\cos(x^2)$ at $x=0.5$ using five nodes.
```julia
t = [0.35, 0.5, 0.57, 0.6, 0.75]    # nodes
f = x -> cos(x^2)
df_dx = x -> -2 * x * sin(x^2)
exact_value = df_dx(0.5)
```
We have to shift the nodes so that the point of estimation for the derivative is at $x=0$. (To subtract a scalar from a vector in Julia, we must use the `.-` operator.)
```julia
w = FNC.fdweights(t .- 0.5, 1)
```

```
5-element Vector{Float64}:
  -0.5303030303030298
 -21.61904761904763
  45.09379509379508
 -23.333333333333307
   0.38888888888888845
```
The finite-difference formula is a dot product (i.e., inner product) between the vector of weights and the vector of function values at the nodes.
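A minimal sketch of that inner product in Julia, assuming `w`, `t`, and `f` from above (`dot` is provided by the standard library LinearAlgebra):

```julia
using LinearAlgebra
fd_value = dot(w, f.(t))    # estimate of f'(0.5); compare with exact_value
```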
We can reproduce the weights in the finite-difference tables by using equally spaced nodes with $h=1$. For example, here is a one-sided formula at four nodes:

```julia
FNC.fdweights(0:3, 1)
```

```
4-element Vector{Float64}:
 -1.8333333333333333
  3.0
 -1.5
  0.3333333333333333
```
By giving nodes of type `Rational`, we can get exact values instead:

```julia
FNC.fdweights(Rational.(0:3), 1)
```

```
4-element Vector{Rational{Int64}}:
 -11//6
  3
 -3//2
  1//3
```
The same computations in MATLAB, where a scalar can be subtracted from a vector directly:

```matlab
t = [0.35, 0.5, 0.57, 0.6, 0.75];    % nodes
f = @(x) cos(x.^2);
dfdx = @(x) -2 * x * sin(x^2);
exact_value = dfdx(0.5)
format short
w = fdweights(t - 0.5, 1)
```
And in Python:

```python
from numpy import array, cos, sin, linspace

t = array([0.35, 0.5, 0.57, 0.6, 0.75])    # nodes
f = lambda x: cos(x**2)
dfdx = lambda x: -2 * x * sin(x**2)
exact_value = dfdx(0.5)

# Shift the nodes so that the estimation point is at zero, then get weights.
w = FNC.fdweights(t - 0.5, 1)

# Reproduce the order-3 forward-difference row of Table 5.4.2 (h = 1).
print(FNC.fdweights(linspace(0, 3, 4), 1))
```

```
[-1.83333333  3.         -1.5         0.33333333]
```
5.4.4 Exercises

1. ✍ This problem refers to $Q(x)$ defined by (5.4.4).

   (a) Show that $Q(x)$ interpolates the three values of $f$ at $x=-h$, $x=0$, and $x=h$.

   (b) Show that $Q'(0)$ gives the finite-difference formula defined by (5.4.5).

2. (a) ✍ Table 5.4.2 lists forward difference formulas in which $p=0$ in (5.4.1). Show that the change of variable $g(x) = f(-x)$ transforms these formulas into backward difference formulas with $q=0$, and write out the table analogous to Table 5.4.2 for backward differences.

   (b) ⌨ Suppose you are given the nodes $t_0=0.9$, $t_1=1$, and $t_2=1.1$, and $f(x) = \sin(2x)$. Using formulas from Table 5.4.1 and Table 5.4.2, compute second-order accurate approximations to $f'$ at each of the three nodes.

3. ⌨ Let $f(x)=e^{-x}$, $x=0.5$, and $h=0.2$. Using Function 5.4.1 to get the necessary weights on five nodes centered at $x$, find finite-difference approximations to the first, second, third, and fourth derivatives of $f$. Make a table showing the derivative values and the errors in each case.

4. ⌨ In the manner of Example 5.4.5, use Function 5.4.1 on centered node vectors of length 3, 5, 7, and 9 to produce a table analogous to Table 5.4.1 for the second derivative $f''(0)$. (You do not need to show the orders of accuracy, just the weights.)

5. For this problem, let $f(x)=\tan(2x)$.

   (a) ⌨ Apply Function 5.4.1 to find a finite-difference approximation to $f''(0.3)$ using the five nodes $t_j=0.3+jh$ for $j=-2,\ldots,2$ and $h=0.05$. Compare to the exact value of $f''(0.3)$.

   (b) ⌨ Repeat part (a) for $f''(0.75)$ on the nodes $t_j=0.75+jh$. Why is the finite-difference result so inaccurate? (Hint: A plot of $f$ might be informative.)

6. (a) ✍ Show using L'Hôpital's Rule that the centered formula approximation (5.4.5) converges to an equality as $h\to 0$.

   (b) ✍ Derive two conditions on the finite-difference weights in (5.4.1) that arise from requiring convergence as $h\to 0$. (Hint: Consider what is required in order to apply L'Hôpital's Rule, as well as the result of applying it.)