Stability of polynomial interpolation
Tobin A. Driscoll and Richard J. Braun
With barycentric interpolation available in the form of Function 9.2.1, we can explore polynomial interpolation using a numerically stable algorithm. Any remaining sensitivity to error is due to the conditioning of the interpolation process itself.
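The examples below call a routine named `polyinterp` (the text's Function 9.2.1), which is not reproduced in this excerpt. For readers following along outside the book's codebase, here is a minimal stand-in sketch in Python; the name and interface are assumptions based on how the routine is called below, and this sketch omits the scaling safeguards a production implementation would include.

```python
import numpy as np

def polyinterp(t, y):
    """Sketch of barycentric interpolation (stand-in for Function 9.2.1).

    Returns a callable evaluating the polynomial interpolant of (t_k, y_k) via
        p(x) = sum_k[w_k*y_k/(x - t_k)] / sum_k[w_k/(x - t_k)],
    with weights w_k = 1 / prod_{j != k}(t_k - t_j)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(t)
    w = np.array([1.0 / np.prod(t[k] - np.delete(t, k)) for k in range(n)])

    def p(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.empty_like(x)
        for i, xi in enumerate(x):
            d = xi - t
            if np.any(d == 0):            # exactly on a node: return the data value
                out[i] = y[np.argmin(np.abs(d))]
            else:
                c = w / d
                out[i] = np.dot(c, y) / np.sum(c)
        return out

    return p
```

Because the weights appear in both numerator and denominator, any common scaling of them cancels, which is the source of the formula's numerical robustness.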
Example 9.3.1 (Ill-conditioning in polynomial interpolation)
Example 9.3.1 We choose the function $f(x) = \sin(e^{2x})$ over the interval $[0,1]$. Here is a graph of $f$ and its polynomial interpolant using seven equally spaced nodes.
using Plots
f(x) = sin(exp(2x))
plot(f, 0, 1, label="function", legend=:bottomleft)
t = range(0, 1, 7) # 7 equally spaced nodes
y = f.(t)
scatter!(t, y, label="nodes")
p = FNC.polyinterp(t, y)
plot!(p, 0, 1, label="interpolant", title="Equispaced interpolant, n=6")
This looks pretty good. We want to track the behavior of the error as $n$ increases. We will estimate the error in the continuous interpolant by sampling it at a large number of points and taking the max-norm.
n = 5:5:60
err = zeros(size(n))
x = range(0, 1, 2001) # for measuring error
for (k, n) in enumerate(n)
t = range(0, 1, n+1) # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t, y)
err[k] = norm((@. f(x) - p(x)), Inf)
end
plot(n, err, m=:o,
xaxis=(L"n"), yaxis=(:log10, "max error"),
title="Interpolation error for equispaced nodes")
The error initially decreases as one would expect but then begins to grow. Both phases occur at rates that are exponential in $n$, i.e., $O(K^n)$ for a constant $K$, appearing linear on a semi-log plot.
Example 9.3.1 We choose a function over the interval $[0,1]$. Using 7 equally spaced nodes, the interpolation looks fine.
f = @(x) sin(exp(2*x));
clf, fplot(f, [0, 1], displayname="function")
t = linspace(0, 1, 7);
y = f(t);
hold on, scatter(t, y, displayname="nodes")
p = polyinterp(t, y);
fplot(p, [0, 1], displayname="interpolant")
xlabel('x'), ylabel('f(x)')
title('Equispaced interpolant, n=6')
We want to track the behavior of the error as $n$ increases. We will estimate the error in the continuous interpolant by sampling it at a large number of points and taking the max-norm.
n = (5:5:60)';
err = zeros(size(n));
x = linspace(0, 1, 1001)'; % for measuring error
for k = 1:length(n)
t = linspace(0, 1, n(k) + 1)'; % equally spaced nodes
y = f(t); % interpolation data
p = polyinterp(t, y);
err(k) = norm(f(x) - p(x), Inf);
end
clf, semilogy(n, err, 'o-')
xlabel('n'), ylabel('max error')
title('Equispaced polynomial interpolation error')
The error initially decreases as one would expect but then begins to grow. Both phases occur at rates that are exponential in $n$, i.e., $O(K^n)$ for a constant $K$, appearing linear on a semi-log plot.
Example 9.3.1 We choose a function over the interval $[0,1]$.
f = lambda x: sin(exp(2 * x))
Here is a graph of $f$ and its polynomial interpolant using seven equally spaced nodes.
x = linspace(0, 1, 500)
plot(x, f(x), label="function")
t = linspace(0, 1, 7)
y = f(t)
p = FNC.polyinterp(t, y)
plot(x, p(x), label="interpolant")
plot(t, y, 'ko', label="nodes")
legend(), title("Equispaced interpolant, n=6");
This looks pretty good. We want to track the behavior of the error as $n$ increases. We will estimate the error in the continuous interpolant by sampling it at a large number of points and taking the max-norm.
N = arange(5, 65, 5)
err = zeros(N.size)
x = linspace(0, 1, 1001) # for measuring error
for k, n in enumerate(N):
t = linspace(0, 1, n + 1) # equally spaced nodes
y = f(t) # interpolation data
p = FNC.polyinterp(t, y)
err[k] = max(abs(f(x) - p(x)))
semilogy(N, err, "-o")
xlabel("$n$"), ylabel("max error")
title("Polynomial interpolation error");
The error initially decreases as one would expect but then begins to grow. Both phases occur at rates that are exponential in $n$, i.e., $O(K^n)$ for a constant $K$, appearing linear on a semi-log plot.
9.3.1 Runge phenomenon
The disappointing loss of convergence in Example 9.3.1 is a sign of ill conditioning due to the use of equally spaced nodes. We will examine this effect using the error formula (9.1.6) as a guide:
$$f(x) - p(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \Phi(x), \qquad \Phi(x) = \prod_{i=0}^n (x-t_i).$$

While the dependence on $f$ is messy here, the error indicator $\Phi(x)$ can be studied as a function of the nodes only.
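One quick way to see the formula in action (a sketch of mine, not from the text): if $f(x) = x^{n+1}$, then $f^{(n+1)}$ is the constant $(n+1)!$, so the formula predicts $f(x) - p(x) = \Phi(x)$ exactly, independent of $\xi$. A short Python check:

```python
import numpy as np

# For f(x) = x^(n+1), the (n+1)st derivative is the constant (n+1)!,
# so the error formula predicts f(x) - p(x) = Phi(x) exactly.
n = 5
t = np.linspace(0, 1, n + 1)
f = lambda x: x ** (n + 1)

p = np.poly1d(np.polyfit(t, f(t), n))   # the degree-n interpolant
Phi = lambda x: np.prod(x - t)          # error indicator

for xi in (0.05, 0.37, 0.61, 0.93):
    print(f(xi) - p(xi), Phi(xi))       # the two columns agree to rounding
```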
Example 9.3.2 (Error indicator function for equispaced nodes)
Example 9.3.2 We plot $|\Phi(x)|$ over the interval $[-1,1]$ with equispaced nodes for different values of $n$.
plot(xaxis=(L"x"), yaxis=(:log10, L"|\Phi(x)|", [1e-25, 1]), legend=:bottomleft)
x = range(-1, 1, 2001)
for n in 10:10:50
t = range(-1, 1, n+1)
Φ(x) = prod(x - t for t in t)
scatter!(x, abs.(Φ.(x)), m=(1, stroke(0)), label="n=$n")
end
title!("Error indicator for equispaced nodes")
Each time Φ passes through zero at an interpolation node, the value on the log scale should go to $-\infty$, which explains the numerous cusps on the curves.
Example 9.3.2 We plot $|\Phi(x)|$ over the interval $[-1,1]$ with equispaced nodes for different values of $n$.
clf
x = linspace(-1, 1, 1601)';
Phi = zeros(size(x));
for n = 10:10:50
t = linspace(-1, 1, n+1)';
for k = 1:length(x)
Phi(k) = prod(x(k) - t);
end
semilogy(x, abs(Phi)), hold on
end
title('Error indicator on equispaced nodes')
xlabel('x'), ylabel('|\Phi(x)|')
Each time Φ passes through zero at an interpolation node, the value on the log scale should go to $-\infty$, which explains the numerous cusps on the curves.
Example 9.3.2 We plot $|\Phi(x)|$ over the interval $[-1,1]$ with equispaced nodes for different values of $n$.
x = linspace(-1, 1, 1601)
for n in range(10, 60, 10):
t = linspace(-1, 1, n + 1)
Phi = array([prod(xk - t) for xk in x])
semilogy(x, abs(Phi), ".", markersize=2)
xlabel("$x$")
ylabel(r"$|\Phi(x)|$")
ylim([1e-25, 1])
title("Effect of equispaced nodes");
Each time Φ passes through zero at an interpolation node, the value on the log scale should go to $-\infty$, which explains the numerous cusps on the curves.
Two observations from the result of Example 9.3.2 are important. First, $|\Phi|$ decreases exponentially at each fixed location in the interval (note that the spacing between curves is constant for constant increments of $n$). Second, $|\Phi|$ is larger at the ends of the interval than in the middle, by an exponentially growing factor. This gap is what can ruin the convergence of polynomial interpolation.
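To make the second observation concrete, here is a small Python check (a sketch of mine, not from the text) comparing $|\Phi|$ at the midpoint of the outermost subinterval against a point near the center, for equispaced nodes on $[-1,1]$; the end-to-middle ratio grows rapidly with $n$:

```python
import numpy as np

def end_to_mid_ratio(n):
    """Ratio of |Phi| near the interval's end to |Phi| near its center,
    for n+1 equispaced nodes on [-1, 1] (n assumed even here)."""
    t = np.linspace(-1, 1, n + 1)
    h = 2 / n
    Phi = lambda x: np.prod(x - t)
    x_end = 1 - h / 2        # midpoint of the outermost subinterval
    x_mid = h / 2            # midpoint of a central subinterval
    return abs(Phi(x_end)) / abs(Phi(x_mid))

for n in (10, 20, 40):
    print(n, end_to_mid_ratio(n))   # the ratio grows quickly with n
```

This growing gap is exactly what pushes the error curves upward at the ends of the interval in the examples that follow.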
Example 9.3.3 (Runge phenomenon)
Example 9.3.3 This function has infinitely many continuous derivatives on the entire real line and looks easy to approximate over $[-1,1]$.
f(x) = 1 / (x^2 + 16)
plot(f, -1, 1, title="Test function", legend=:none)
We start by doing equispaced polynomial interpolation for some small values of $n$.
plot(xaxis=(L"x"), yaxis=(:log10, L"|f(x)-p(x)|", [1e-20, 1]))
x = range(-1, 1, 2501)
n = 4:4:12
for (k, n) in enumerate(n)
t = range(-1, 1, n+1) # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t, y)
err = @. abs(f(x) - p(x))
plot!(x, err, m=(1, :o, stroke(0)), label="degree $n")
end
title!("Error for low degrees")
The convergence so far appears rather good, though not uniformly so. However, notice what happens as we continue to increase the degree.
n = @. 12 + 15 * (1:3)
plot(xaxis=(L"x"), yaxis=(:log10, L"|f(x)-p(x)|", [1e-20, 1]))
for (k, n) in enumerate(n)
t = range(-1, 1, n+1) # equally spaced nodes
y = f.(t) # interpolation data
p = FNC.polyinterp(t, y)
err = @. abs(f(x) - p(x))
plot!(x, err, m=(1, :o, stroke(0)), label="degree $n")
end
title!("Error for higher degrees")
The convergence in the middle can’t get any better than machine precision relative to the function values. So maintaining the growing gap between the center and the ends pushes the error curves upward exponentially fast at the ends, wrecking the convergence.
Example 9.3.3 This function has infinitely many continuous derivatives on the entire real line and looks easy to approximate over $[-1,1]$.
f = @(x) 1 ./ (x.^2 + 16);
clf, fplot(f, [-1, 1])
xlabel('x'), ylabel('f(x)')
title('Test function')
We start by doing equispaced polynomial interpolation for some small values of $n$.
x = linspace(-1, 1, 1601)';
n = (4:4:12)';
for k = 1:length(n)
t = linspace(-1, 1, n(k) + 1)'; % equally spaced nodes
p = polyinterp(t, f(t));
semilogy(x, abs(f(x) - p(x))); hold on
end
title('Error for degrees 4, 8, 12')
xlabel('x'), ylabel('|f(x) - p(x)|')
The convergence so far appears rather good, though not uniformly so. However, notice what happens as we continue to increase the degree.
n = 12 + 15 * (1:3);
clf
for k = 1:length(n)
t = linspace(-1, 1, n(k) + 1)'; % equally spaced nodes
p = polyinterp(t, f(t));
semilogy(x, abs(f(x) - p(x))); hold on
end
title('Error for degrees 27, 42, 57')
xlabel('x'), ylabel('|f(x) - p(x)|')
The convergence in the middle can’t get any better than machine precision relative to the function values. So maintaining the growing gap between the center and the ends pushes the error curves upward exponentially fast at the ends, wrecking the convergence.
Example 9.3.3 This function has infinitely many continuous derivatives on the entire real line and looks easy to approximate over $[-1,1]$.
f = lambda x: 1 / (x**2 + 16)
x = linspace(-1, 1, 1601)
plot(x, f(x))
title("Test function");
We start by doing equispaced polynomial interpolation for some small values of $n$.
N = arange(4, 16, 4)
label = []
for k, n in enumerate(N):
t = linspace(-1, 1, n + 1) # equally spaced nodes
y = f(t) # interpolation data
p = FNC.polyinterp(t, y)
err = abs(f(x) - p(x))
semilogy(x, err, ".", markersize=2)
label.append(f"degree {n}")
xlabel("$x$"), ylabel("$|f(x)-p(x)|$")
ylim([1e-20, 1])
legend(label), title("Error for low degrees");
The convergence so far appears rather good, though not uniformly so. However, notice what happens as we continue to increase the degree.
N = 12 + 15 * arange(1, 4)
labels = []
for k, n in enumerate(N):
t = linspace(-1, 1, n + 1) # equally spaced nodes
y = f(t) # interpolation data
p = FNC.polyinterp(t, y)
err = abs(f(x) - p(x))
semilogy(x, err, ".", markersize=2)
labels.append(f"degree {n}")
xlabel("$x$"), ylabel("$|f(x)-p(x)|$"), ylim([1e-20, 1])
legend(labels), title("Error for higher degrees");
The convergence in the middle can’t get any better than machine precision relative to the function values. So maintaining the growing gap between the center and the ends pushes the error curves upward exponentially fast at the ends, wrecking the convergence.
The instability observed in Example 9.3.3 is known as the Runge phenomenon, manifested when the nodes of the interpolant are equally spaced and the degree of the polynomial increases. We reiterate that the phenomenon is rooted in the interpolation convergence theory, not in the algorithm chosen to implement polynomial interpolation.
Significantly, the convergence observed in Example 9.3.3 is stable within a middle portion of the interval. By redistributing the interpolation nodes, we will next sacrifice a little of the convergence in the middle portion in order to improve it near the ends and rescue the process globally.
9.3.2 Chebyshev nodes
The observations above hint that we might find success by having more nodes near the ends of the interval than in the middle. Though we will not give the details, it turns out that there is a precise asymptotic sense in which this must be done to make polynomial interpolation work over the entire interval. One especially important node family that gives stable convergence for polynomial interpolation is the Chebyshev points of the second kind (or Chebyshev extreme points), defined by
$$t_k = -\cos\left(\frac{k \pi}{n}\right), \qquad k=0,\ldots,n.$$

These are the projections onto the $x$-axis of $n+1$ points spaced equally in angle along the upper half of a unit circle. They are densely clustered near the ends of $[-1,1]$, and this feature turns out to overcome the Runge phenomenon.
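A quick check of the definition (a sketch of mine, not part of the text): the nodes fill $[-1,1]$ with spacing of roughly $\pi^2/(2n^2)$ next to the endpoints but roughly $\pi/n$ in the middle, which quantifies the clustering.

```python
import numpy as np

def cheb_nodes(n):
    """Chebyshev points of the second kind: t_k = -cos(k*pi/n), k = 0..n."""
    k = np.arange(n + 1)
    return -np.cos(k * np.pi / n)

n = 40
t = cheb_nodes(n)
gaps = np.diff(t)
print(gaps[0])        # ~ pi^2 / (2*n^2): tight spacing at the ends
print(gaps[n // 2])   # ~ pi / n: coarser spacing in the middle
```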
Example 9.3.4 (Error indicator function for Chebyshev nodes)
Example 9.3.4 Now we look at the error indicator function Φ for Chebyshev node sets.
plot(xaxis=(L"x"), yaxis=(:log10, L"|\Phi(x)|", [1e-18, 1e-2]))
x = range(-1, 1, 2001)
for n in 10:10:50
t = [-cospi(k / n) for k in 0:n]
Φ(x) = prod(x - t for t in t)
plot!(x, abs.(Φ.(x)), m=(1, :o, stroke(0)), label="n=$n")
end
title!("Error indicator for Chebyshev nodes")
In contrast to the equispaced case, $|\Phi|$ decreases exponentially with $n$ almost uniformly across the interval.
Example 9.3.4 Now we look at the error indicator function Φ for Chebyshev node sets.
clf
x = linspace(-1, 1, 1601)';
Phi = zeros(size(x));
for n = 10:10:50
theta = linspace(0, pi, n+1)';
t = -cos(theta);
for k = 1:length(x)
Phi(k) = prod(x(k) - t);
end
semilogy(x, abs(Phi)); hold on
end
axis tight, title('Effect of Chebyshev nodes')
xlabel('x'), ylabel('|\Phi(x)|')
ylim([1e-18, 1e-2])
In contrast to the equispaced case, $|\Phi|$ decreases exponentially with $n$ almost uniformly across the interval.
Example 9.3.4 Now we look at the error indicator function Φ for Chebyshev node sets.
x = linspace(-1, 1, 1601)
labels = []
for n in range(10, 60, 10):
theta = pi * arange(n + 1) / n
t = -cos(theta)
Phi = array([prod(xk - t) for xk in x])
semilogy(x, abs(Phi), ".")
labels.append(f"degree {n}")
xlabel("$x$"), ylabel("$|\\Phi(x)|$"), ylim([1e-18, 1e-2])
legend(labels), title("Error indicator for Chebyshev nodes");
In contrast to the equispaced case, $|\Phi|$ decreases exponentially with $n$ almost uniformly across the interval.
Example 9.3.5 (Stability of interpolation with Chebyshev nodes)
Example 9.3.5 Here again is the function from Example 9.3.3 that provoked the Runge phenomenon when using equispaced nodes.
plot(label="", xaxis=(L"x"), yaxis=(:log10, L"|f(x)-p(x)|", [1e-20, 1]))
x = range(-1, 1, 2001)
for (k, n) in enumerate([4, 10, 16, 40])
t = [-cospi(k / n) for k in 0:n]
y = f.(t) # interpolation data
p = FNC.polyinterp(t, y)
err = @. abs(f(x) - p(x))
plot!(x, err, m=(1, :o, stroke(0)), label="degree $n")
end
title!("Error for Chebyshev interpolants")
By degree 16 the error is uniformly within machine epsilon, and, importantly, it stays there as $n$ increases. Note that as predicted by the error indicator function, the error is uniform over the interval at each value of $n$.
Example 9.3.5 Here again is the function from Example 9.3.3 that provoked the Runge phenomenon when using equispaced nodes.
f = @(x) 1 ./ (x.^2 + 16);
clf
x = linspace(-1, 1, 1601)';
n = [4, 10, 16, 40];
for k = 1:length(n)
theta = linspace(0, pi, n(k) + 1)';
t = -cos(theta);
p = polyinterp(t, f(t));
semilogy( x, abs(f(x) - p(x)) ); hold on
end
title('Error for degrees 4, 10, 16, 40')
xlabel('x'), ylabel('|f(x)-p(x)|')
By degree 16 the error is uniformly within machine epsilon, and, importantly, it stays there as $n$ increases. Note that as predicted by the error indicator function, the error is uniform over the interval at each value of $n$.
Example 9.3.5 Here again is the function from Example 9.3.3 that provoked the Runge phenomenon when using equispaced nodes.
f = lambda x: 1 / (x**2 + 16)
x = linspace(-1, 1, 1601)
labels = []
for k, n in enumerate([4, 10, 16, 40]):
t = -cos(pi * arange(n + 1) / n) # Chebyshev nodes
y = f(t) # interpolation data
p = FNC.polyinterp(t, y)
err = abs(f(x) - p(x))
semilogy(x, err, ".", markersize=2)
labels.append(f"degree {n}")
xlabel("$x$"), ylabel("$|f(x)-p(x)|$"), ylim([1e-20, 1])
legend(labels), title("Error for Chebyshev interpolants");
By degree 16 the error is uniformly within machine epsilon, and, importantly, it stays there as $n$ increases. Note that as predicted by the error indicator function, the error is uniform over the interval at each value of $n$.
As a bonus, for Chebyshev nodes the barycentric weights are simple:
$$w_k = (-1)^k d_k, \qquad d_k = \begin{cases} 1/2 & \text{if } k=0 \text{ or } k=n,\\ 1 & \text{otherwise}. \end{cases}$$

9.3.3 Spectral convergence
If we take $n\rightarrow \infty$ and use polynomial interpolation on Chebyshev nodes, the convergence rate is exponential in $n$. The following is typical of the results that can be proved.
Theorem 9.3.1 (Spectral convergence)
Suppose $f(x)$ is analytic in an open real interval containing $[-1,1]$. Then there exist constants $C>0$ and $K>1$ such that

$$\max_{x\in[-1,1]} | f(x) - p(x) | \le C K^{-n},$$

where $p$ is the unique polynomial of degree $n$ or less defined by interpolation on $n+1$ Chebyshev second-kind points.
The condition "$f$ is analytic" means that the Taylor series of $f$ converges to $f(x)$ in an open interval containing $[-1,1]$.[1] A necessary condition of analyticity is that $f$ is infinitely differentiable.
In other contexts we refer to (9.3.4) as linear convergence, but here it is usual to say that the rate is exponential, or that one has spectral convergence: constant increments of $n$ reduce the error by constant factors. By contrast, algebraic convergence in the form $O(n^{-p})$ for some $p>0$ requires multiplying $n$ by a constant factor in order to reduce the error by a constant factor. Graphically, spectral convergence is a straight line on a log-linear scale, while algebraic convergence is a straight line on a log-log scale.
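One practical consequence of the log-linear picture: since $\log(\text{err}) \approx \log C - n\log K$, the slope of a line fitted to $\log(\text{err})$ versus $n$ recovers $\log K$, which is how the rate constant in (9.3.4) can be estimated from a convergence experiment. Here is a Python sketch (function names are mine, not the text's) that builds a Chebyshev interpolant directly from the closed-form weights in (9.3.3) and estimates $K$ this way:

```python
import numpy as np

def cheb_bary_interp(f, n):
    """Interpolant of f on n+1 Chebyshev 2nd-kind points, using the
    closed-form barycentric weights w_k = (-1)^k d_k from (9.3.3)."""
    k = np.arange(n + 1)
    t = -np.cos(k * np.pi / n)
    y = f(t)
    w = (-1.0) ** k
    w[[0, -1]] *= 0.5        # d_k = 1/2 at the endpoints

    def p(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.empty_like(x)
        for i, xi in enumerate(x):
            d = xi - t
            if np.any(d == 0):                 # exactly at a node
                out[i] = y[np.argmin(np.abs(d))]
            else:
                c = w / d
                out[i] = np.dot(c, y) / np.sum(c)
        return out

    return p

# Estimate K for f(x) = 1/(x^2 + 16): fit log(err) = log(C) - n*log(K)
# over degrees where the error is still decreasing toward machine epsilon.
f = lambda x: 1 / (x**2 + 16)
x = np.linspace(-1, 1, 1000)
degrees = np.arange(2, 13, 2)
errs = [np.max(np.abs(f(x) - cheb_bary_interp(f, n)(x))) for n in degrees]
slope, _ = np.polyfit(degrees, np.log(errs), 1)
print(np.exp(-slope))   # estimated K; greater than 1 indicates spectral decay
```

Once the error reaches the machine-epsilon floor, the fit must exclude those degrees, or the estimated slope (and hence $K$) is biased toward zero.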
Example 9.3.6 (Spectral convergence)
On the left, we use a log-log scale, which makes fourth-order algebraic convergence $O(n^{-4})$ a straight line. On the right, we use a log-linear scale, which makes spectral convergence $O(K^{-n})$ a straight line.
Example 9.3.6
n = 20:20:400
algebraic = @. 100 / n^4
spectral = @. 10 * 0.85^n
plot(n, [algebraic spectral], layout=(1, 2), subplot=1,
xaxis=(L"n", :log10), yaxis=(:log10, (1e-15, 1)),
label=["algebraic" "spectral"], title="Log-log")
plot!(n, [algebraic spectral], subplot=2,
xaxis=L"n", yaxis=(:log10, (1e-15, 1)),
label=["algebraic" "spectral"], title="log-linear")
Example 9.3.6
n = (20:20:400)';
algebraic = 100 ./ n.^4;
spectral = 10 * 0.85.^n;
clf, subplot(2, 1, 1)
loglog(n, algebraic, 'o-', displayname="algebraic")
hold on; loglog(n, spectral, 'o-', displayname="spectral")
xlabel('n'), ylabel('error')
title('log–log')
axis tight, ylim([1e-16, 1]); legend(location="southwest")
subplot(2, 1, 2)
semilogy(n, algebraic, 'o-', displayname="algebraic")
hold on; semilogy(n, spectral, 'o-', displayname="spectral")
xlabel('n'), ylabel('error'), ylim([1e-16, 1])
title('log–linear')
axis tight, ylim([1e-16, 1]); legend(location="southwest")
Example 9.3.6
n = arange(20, 420, 20)
algebraic = 100 / n**4
spectral = 10 * 0.85**n
subplot(2, 1, 1)
loglog(n, algebraic, 'o-', label="algebraic")
loglog(n, spectral, 'o-', label="spectral")
xlabel('n'), ylabel('error')
title('log–log'), ylim([1e-16, 1]);
legend()
subplot(2, 1, 2)
semilogy(n, algebraic, 'o-', label="algebraic")
semilogy(n, spectral, 'o-', label="spectral")
xlabel('n'), ylabel('error')
title('log–linear') , ylim([1e-16, 1]);
legend();
9.3.4 Exercises
⌨ For each case, compute the polynomial interpolant using $n$ second-kind Chebyshev nodes in $[-1,1]$ for $n=4,8,12,\ldots,60$. At each value of $n$, compute the infinity-norm error (that is, $\max |p(x)-f(x)|$ evaluated for at least 4000 values of $x$). Using a log-linear scale, plot the error as a function of $n$, then determine a good approximation to the constant $K$ in (9.3.4).
(a) $f(x) = 1/(25x^2+1)$
(b) $f(x) = \tanh(5x+2)$
(c) $f(x) = \cosh(\sin x)$
(d) $f(x) = \sin(\cosh x)$
⌨ Write a function chebinterp(f,n) that returns a function representing the polynomial interpolant of the input function f using $n+1$ Chebyshev second-kind nodes over $[-1,1]$. You should use (9.3.3) to compute the barycentric weights directly, rather than using the method in Function 9.2.1. Test your function by revisiting Example 9.3.3 to use Chebyshev rather than equally spaced nodes.
Theorem 9.3.1 assumes that the function being approximated has infinitely many derivatives over $[-1,1]$. But now consider the family of functions $f_m(x)=|x|^m$.
(a) ✍ How many continuous derivatives over $[-1,1]$ does $f_m$ possess?
(b) ⌨ Compute the polynomial interpolant using $n$ second-kind Chebyshev nodes in $[-1,1]$ for $n=10,20,30,\ldots,100$. At each value of $n$, compute the max-norm error (that is, $\max |p(x)-f_m(x)|$ evaluated for at least 5000 values of $x$). Using a single log-log graph, plot the error as a function of $n$ for the values $m=1,3,5,7,9$. Each convergence curve should be nearly a straight line.
(c) ✍ Based on the results of parts (a) and (b), form a hypothesis about the asymptotic behavior of the error for fixed $m$ as $n\rightarrow \infty$. (Hint: Find the slopes of the lines in part (b).)
The Chebyshev points can be used when the interval of interpolation is $[a,b]$ rather than $[-1,1]$ by means of the change of variable

$$z = \psi(x) = a + (b-a)\frac{x+1}{2}.$$

(a) ✍ Show that $\psi(-1) = a$, $\psi(1) = b$, and ψ is strictly increasing on $[-1,1]$.
(b) ✍ Invert the relation (9.3.5) to solve for $x$ in terms of $\psi^{-1}(z)$.
(c) ✍ Let $t_0,\ldots,t_n$ be standard second-kind Chebyshev points. Then a polynomial in $x$ can be used to interpolate the function values $f\bigl(\psi(t_i)\bigr)$. This in turn implies an interpolating function $\tilde{P}(z) = P\bigl(\psi^{-1}(z)\bigr)$. Show that $\tilde{P}$ is a polynomial in $z$.
(d) ⌨ Implement the idea of part (c) to plot a polynomial interpolant of $f(z) = \cosh(\sin z)$ over $[0,2\pi]$ using $n+1$ Chebyshev nodes with $n=40$.
The Chebyshev points can be used for interpolation of functions defined on the entire real line by using the change of variable

$$z = \phi(x) = \frac{2x}{1-x^2},$$

which maps the interval $(-1,1)$ in one-to-one fashion to the entire real line.
(a) ✍ Find $\displaystyle \lim_{x\to 1^-} \phi(x)$ and $\displaystyle \lim_{x\to -1^+} \phi(x)$.
(b) ✍ Invert (9.3.6) to express $x=\phi^{-1}(z)$. (Be sure to enforce $-1\le x \le 1$.)
(c) ⌨ Let $t_0,\ldots,t_n$ be standard second-kind Chebyshev points. These map to the $z$ variable as $\zeta_i=\phi(t_i)$ for all $i$. Suppose that $f(z)$ is a given function whose domain is the entire real line. Then the function values $y_i=f(\zeta_i)$ can be associated with the Chebyshev nodes $t_i$, leading to a polynomial interpolant $p(x)$. This in turn implies an interpolating function on the real line, defined as

$$q(z) = p\bigl(\phi^{-1}(z)\bigr) = p(x).$$

Implement this idea to plot an interpolant of $f(z)=(z^2-2z+2)^{-1}$ using $n=30$. Your plot should show $q(z)$ evaluated at 1000 evenly spaced points in $[-6,6]$, with markers at the nodal values (those lying within the $[-6,6]$ window).
[1] Alternatively, analyticity means that the function is extensible to one that is differentiable in the complex plane.