
11.2 The method of lines

Our strategy for the Black–Scholes equation was to discretize both the time and space derivatives using finite differences, then rearrange so that we could march the solution forward through time. It was partially effective, but as Demo 11.1.3 shows, not a sure thing, for reasons we look into starting in the next section.

First, though, we want to look at a broader version of the discretization approach. To introduce ideas, let’s use the simpler heat equation, $u_t = u_{xx}$, as a model. Because boundaries always complicate things, we will start by doing the next best thing to having no boundaries at all: periodic end conditions. Specifically, we will solve the PDE over $0\le x < 1$ and require at all times that

$$u(x+1,t)=u(x,t) \quad \text{for all } x.$$

This is a little different from simply $u(1,t)=u(0,t)$, as Figure 11.2.1 illustrates.

Figure 11.2.1: Left: A function whose values are the same at the endpoints of an interval does not necessarily extend to a smooth periodic function. Right: For a truly periodic function, the function values and all derivatives match at the endpoints of one period.

11.2.1 Semidiscretization

As always, we use $\hat{u}$ when we specifically refer to the exact solution of the PDE. In order to avoid carrying along redundant information about the function, we use $x_i = ih$ only for $i=0,\ldots,m-1$, where $h=1/m$, and it’s understood that a reference to $x_m$ is silently translated to one at $x_0$. More generally, we have the identity

$$\hat{u}(x_i,t) = \hat{u}\bigl(x_{(i \bmod m)},t\bigr)$$

for the exact solution $\hat{u}$ at any value of $i$.

Next we define a vector $\mathbf{u}$ by

$$\mathbf{u}(t) = \begin{bmatrix} u_0(t) \\ u_1(t) \\ \vdots \\ u_{m-1}(t) \end{bmatrix}.$$

This step is called semidiscretization, since space is discretized but time is not. As in Chapter 10, we will replace $u_{xx}$ with multiplication of $\mathbf{u}$ by a differentiation matrix $\mathbf{D}_{xx}$. The canonical choice is the three-point finite-difference formula (5.4.12), which in light of the periodicity (11.2.2) leads to

$$\mathbf{D}_{xx} = \frac{1}{h^2} \begin{bmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{bmatrix}.$$

Note well how the first and last rows have elements that “wrap around” from one end of the domain to the other by periodicity. Because we will be using this matrix quite a lot, we create Function 11.2.1 to compute it, as well as the corresponding second-order first-derivative matrix $\mathbf{D}_x$ for periodic end conditions.
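Function 11.2.1 itself is written in Julia and is not reproduced here. As an illustrative stand-in, the following NumPy sketch builds both periodic matrices the same way (the name `diffper` and its return convention are our assumptions, not the text’s):

```python
import numpy as np

def diffper(m):
    """Periodic differentiation matrices on [0,1) with m equispaced nodes.

    Returns the nodes x_i = i*h (h = 1/m) together with the second-order
    first- and second-derivative matrices Dx, Dxx, whose stencils wrap
    around the ends of the interval by periodicity.
    """
    h = 1.0 / m
    x = h * np.arange(m)
    Dx = np.zeros((m, m))
    Dxx = np.zeros((m, m))
    # Centered 3-point formulas; indices are taken mod m for wraparound.
    for i in range(m):
        Dx[i, (i - 1) % m] = -0.5 / h
        Dx[i, (i + 1) % m] = 0.5 / h
        Dxx[i, (i - 1) % m] = 1.0 / h**2
        Dxx[i, i] = -2.0 / h**2
        Dxx[i, (i + 1) % m] = 1.0 / h**2
    return x, Dx, Dxx

# Sanity check on a smooth periodic function: u = sin(2*pi*x),
# for which u'' = -4*pi^2*sin(2*pi*x). The error should be O(h^2).
x, Dx, Dxx = diffper(100)
u = np.sin(2 * np.pi * x)
err = np.max(np.abs(Dxx @ u + 4 * np.pi**2 * u))
```

Note that each row of $\mathbf{D}_{xx}$ sums to zero, as it must for any differentiation formula that annihilates constant functions.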

The PDE $u_t=u_{xx}$ is now approximated by the semidiscrete problem

$$\frac{d \mathbf{u}(t)}{d t} = \mathbf{D}_{xx} \mathbf{u}(t),$$

which is simply a linear, constant-coefficient system of ordinary differential equations. Given the initial values $\mathbf{u}(0)$ obtained from $u(x_i,0)$, we have an initial-value problem that we already know how to solve!
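To make this concrete, here is a small NumPy/SciPy sketch (not one of the text’s Julia demos) that semidiscretizes the heat equation and hands the resulting ODE system to a generic IVP solver, then compares against the exact solution $\hat{u}(x,t)=e^{-4\pi^2 t}\sin(2\pi x)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Semidiscretize u_t = u_xx with periodic conditions on [0,1).
m = 64
h = 1.0 / m
x = h * np.arange(m)

# Periodic 3-point second-difference matrix (entries wrap mod m).
Dxx = np.zeros((m, m))
for i in range(m):
    Dxx[i, (i - 1) % m] = 1 / h**2
    Dxx[i, i] = -2 / h**2
    Dxx[i, (i + 1) % m] = 1 / h**2

u0 = np.sin(2 * np.pi * x)      # initial condition u(x, 0)
T = 0.05

# The semidiscrete problem du/dt = Dxx*u is just an ODE IVP.
sol = solve_ivp(lambda t, u: Dxx @ u, (0, T), u0,
                method="BDF", rtol=1e-8, atol=1e-10)

# Exact PDE solution for this initial condition decays like exp(-4*pi^2*t).
u_exact = np.exp(-4 * np.pi**2 * T) * u0
err = np.max(np.abs(sol.y[:, -1] - u_exact))
```

The remaining error is dominated by the $O(h^2)$ spatial discretization, not by the time integration.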

Semidiscretization is often called the method of lines. Despite the name, it is not exactly a single method because both space and time discretizations have to be specified in order to get a concrete algorithm. The key concept is the separation of those two discretizations, and in that way, it’s related to separation of variables in analytic methods for the heat equation.

The method in Example 11.2.1 and Demo 11.2.2 is essentially the same one we used for the Black–Scholes equation. By changing the time integrator, we can get much better results.

Demo 11.2.4 suggests that implicit time-stepping methods have an important role in diffusion problems. We will analyze the reason in the next few sections.

11.2.2 Black-box IVP solvers

Instead of coding one of the Runge–Kutta or multistep formulas directly for a method of lines solution, we could use any of the IVP solvers from Chapter 6, or a solver from the DifferentialEquations package, to solve the ODE initial-value problem (11.2.5).

The adaptive time integrators can all produce solutions. But, as seen in Demo 11.2.5, they are not equivalent in every important sense. Whether we choose to implement a method directly with a fixed step size, or automatically with adaptation, there is something crucial to understand about the semidiscrete problem (11.2.5) that will occupy our attention in the next two sections.
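As a rough stand-in for Demo 11.2.5 (which uses Julia’s DifferentialEquations solvers), this SciPy sketch compares the work done by an explicit and an implicit adaptive integrator on the same semidiscrete system; the grid size and initial condition are our choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic semidiscretization of u_t = u_xx on [0,1).
m = 100
h = 1.0 / m
x = h * np.arange(m)
Dxx = np.zeros((m, m))
for i in range(m):
    Dxx[i, (i - 1) % m] = 1 / h**2
    Dxx[i, i] = -2 / h**2
    Dxx[i, (i + 1) % m] = 1 / h**2

u0 = np.exp(-100 * (x - 0.5) ** 2)
f = lambda t, u: Dxx @ u

# Same problem, two adaptive solvers.
explicit = solve_ivp(f, (0, 0.05), u0, method="RK45")  # explicit Runge-Kutta
implicit = solve_ivp(f, (0, 0.05), u0, method="BDF")   # implicit multistep

# The explicit solver is forced into tiny steps by stability, not accuracy;
# the implicit one covers the interval in far fewer steps.
ratio = explicit.t.size / implicit.t.size
```

Both solvers succeed, but the explicit one takes many times more steps, a first hint of the stiffness phenomenon examined in the coming sections.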

11.2.3 Exercises

  1. ⌨ Revisit Demo 11.2.2. For each $m=20,30,\dots,120$ points in space, let $n=20,30,40,\dots$ in turn until you reach the smallest $n$ such that the numerical solution remains bounded above by 2 for all time; call this value $N(m)$. Make a log-log plot of $N(m)$ as a function of $m$. If you suppose that $N=O(m^p)$ for a simple rational number $p$, what is a reasonable hypothesis for $p$?

  2. In Demo 11.2.5, as $t\to \infty$ the solution $u(x,t)$ approaches a value that is constant in both space and time.

    (a) ⌨ Set $m=400$ and use Rodas4P, as shown in Demo 11.2.5, to find this constant value to at least eight digits of accuracy.

    (b) ✍ Prove that $\int_0^1 u(x,t) \,dx$ is constant in time.

    (c) ⌨ Use Function 5.6.1 on the initial condition function, and compare to the result of part (a).

  3. ✍ Apply the trapezoid IVP formula (AM2) to the semidiscretization (11.2.5) and derive what is known as the Crank–Nicolson method:

    $$(\mathbf{I} - \tfrac{1}{2}\tau \mathbf{D}_{xx})\, \mathbf{u}_{j+1} = (\mathbf{I} + \tfrac{1}{2}\tau \mathbf{D}_{xx})\, \mathbf{u}_{j}.$$

    Note that each side of the method is evaluated at a different time level.

  4. ⌨ Repeat Demo 11.2.4 using the Crank–Nicolson method (11.2.8). Then try for $n=240$ as well, which uses a time step ten times larger than before. Does the solution remain stable?

  5. The PDE $u_t = 2u + u_{xx}$ combines growth with diffusion.

    (a) ✍ Derive an equation analogous to (11.2.7) that combines second-order semidiscretization in space with the backward Euler solver in time.

    (b) ⌨ Apply your formula from part (a) to solve this PDE with periodic boundary conditions for the same initial condition as in Demo 11.2.4. Use $m=200$ points in space and $n=1000$ time levels. Plot the solution on one graph at times $t=0,0.04,0.08,\ldots,0.2$, or animate the solution over $0\le t\le 0.2$.

  6. ✍ In this problem, you will analyze the convergence of the explicit method given by (11.2.6). Recall that the discrete approximation $u_{i,j}$ approximates the solution at $x_i$ and $t_j$.

    (a) Write the method in scalar form as

    $$u_{i,j+1} = (1-2\lambda) u_{i,j} + \lambda u_{i+1,j} + \lambda u_{i-1,j},$$

    where $\lambda = \tau/h^2>0$.

    (b) Taylor series of the exact solution $\hat{u}$ imply that

    $$\begin{align*} \hat{u}_{i,j+1} &= \hat{u}_{i,j} + \frac{\partial \hat{u}}{\partial t} (x_i,t_j)\, \tau + O(\tau^2),\\ \hat{u}_{i\pm1,j} &= \hat{u}_{i,j} \pm \frac{\partial \hat{u}}{\partial x} (x_i,t_j)\, h + \frac{\partial^2 \hat{u}}{\partial x^2} (x_i,t_j)\, \frac{h^2}{2} \pm \frac{\partial^3 \hat{u}}{\partial x^3} (x_i,t_j)\, \frac{h^3}{6} + O(h^4). \end{align*}$$

    Use these to show that

    $$\begin{align*} \hat{u}_{i,j+1} & = \left[ (1-2\lambda) \hat{u}_{i,j} + \lambda \hat{u}_{i+1,j} + \lambda \hat{u}_{i-1,j}\right] + O\bigl(\tau^2+h^2\bigr)\\ &= F\left( \lambda,\hat{u}_{i,j}, \hat{u}_{i+1,j}, \hat{u}_{i-1,j}\right) + O\bigl(\tau^2+h^2\bigr). \end{align*}$$

    (The last line should be considered a definition of the function $F$.)

    (c) The numerical solution satisfies

    $$u_{i,j+1}=F\bigl( \lambda, u_{i,j}, u_{i+1,j}, u_{i-1,j}\bigr)$$

    exactly. Using this fact, subtract $u_{i,j+1}$ from both sides of the last line in part (b) to show that

    $$e_{i,j+1} = F\left( \lambda, e_{i,j}, e_{i+1,j}, e_{i-1,j}\right) + O\bigl(\tau^2+h^2\bigr),$$

    where $e_{i,j}=\hat{u}_{i,j}-u_{i,j}$ is the error in the numerical solution for all $i$ and $j$.

    (d) Define $E_j$ as the maximum of $|e_{i,j}|$ over all values of $i$, and use the result of part (c) to show that if $\lambda<1/2$ is kept fixed as $h$ and $\tau$ approach zero, then for sufficiently small $\tau$ and $h$,

    $$E_{j+1} = E_{j} + O\bigl(\tau^2+h^2\bigr) \le E_{j} + K_j\bigl(\tau^2+h^2\bigr)$$

    for a positive $K_j$ independent of $\tau$ and $h$.

    (e) If the initial conditions are exact, then $E_0=0$. Use this to show finally that if the $K_j$ are bounded above and $\lambda<1/2$ is kept fixed, then $E_n = O(\tau)$ as $\tau\to 0$.
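For readers who want to experiment with Exercise 3 numerically, here is a NumPy sketch of the Crank–Nicolson iteration with periodic conditions; the grid sizes and the Gaussian initial condition are our choices, not the text’s:

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx with periodic conditions:
# (I - tau/2 * Dxx) u_{j+1} = (I + tau/2 * Dxx) u_j.
m, n = 100, 50
h, tau = 1.0 / m, 0.2 / 50
x = h * np.arange(m)

# Periodic 3-point second-difference matrix.
Dxx = np.zeros((m, m))
for i in range(m):
    Dxx[i, (i - 1) % m] = 1 / h**2
    Dxx[i, i] = -2 / h**2
    Dxx[i, (i + 1) % m] = 1 / h**2

I = np.eye(m)
A = I - 0.5 * tau * Dxx     # implicit (left-hand) operator
B = I + 0.5 * tau * Dxx     # explicit (right-hand) operator

u0 = np.exp(-60 * (x - 0.5) ** 2)   # smooth periodic initial condition
u = u0.copy()
for _ in range(n):
    u = np.linalg.solve(A, B @ u)   # one Crank-Nicolson time step

# Here lambda = tau/h^2 = 40, far beyond the explicit limit of 1/2,
# yet the solution stays bounded: Crank-Nicolson is unconditionally stable.
lam = tau / h**2
```

Because the row (and, by symmetry, column) sums of $\mathbf{D}_{xx}$ vanish, this iteration also conserves the discrete analog of $\int_0^1 u\,dx$, consistent with Exercise 2(b).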