
Collocation for linear problems

Let us now devise a numerical method based on finite differences for the linear TPBVP

$$
u'' + p(x)u' + q(x)u = r(x), \quad u(a)=\alpha, \; u(b)=\beta.
$$

The first step is to select nodes $x_0=a < x_1 < \cdots < x_n=b$. For finite differences these will most likely be equally spaced, while for spectral differentiation, they will be Chebyshev points.
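Both node choices are easy to generate. The following is a minimal NumPy sketch, using the second-kind Chebyshev points $-\cos(i\pi/n)$ mapped from $[-1,1]$ to $[a,b]$:

```python
import numpy as np

# Two common node choices on [a, b]: equally spaced nodes for finite
# differences, and Chebyshev points for spectral differentiation.
a, b, n = 0.0, 1.0, 6
x_fd = np.linspace(a, b, n + 1)               # x_0 = a, ..., x_n = b
t = -np.cos(np.pi * np.arange(n + 1) / n)     # Chebyshev points on [-1, 1]
x_cheb = a + (b - a) * (t + 1) / 2            # mapped to [a, b]
```

Note that both node sets include the endpoints $a$ and $b$, which is essential for imposing the boundary conditions later.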

Rather than solving for a function, we will solve for a vector of its approximate values at the nodes:

$$
\mathbf{u} = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{n-1} \\ u_n \end{bmatrix} \approx \begin{bmatrix} \hat{u}(x_0) \\ \hat{u}(x_1) \\ \vdots \\ \hat{u}(x_{n-1}) \\ \hat{u}(x_n) \end{bmatrix},
$$

where $\hat{u}$ is the exact solution of (10.4.1). If we so desire, we can use interpolation to convert the values $(x_i,u_i)$ into a function after the discrete solution is found.

10.4.1 Collocation

Having defined values at the nodes as our unknowns, we impose approximations to the ODE at the same nodes. This approach is known as collocation. Derivatives of the solution are found (approximately) using differentiation matrices. For example,

$$
\begin{bmatrix} \hat{u}'(x_0) \\ \hat{u}'(x_1) \\ \vdots \\ \hat{u}'(x_n) \end{bmatrix} \approx \mathbf{u}' = \mathbf{D}_x \mathbf{u},
$$

with an appropriately chosen differentiation matrix $\mathbf{D}_x$. Similarly, we define

$$
\begin{bmatrix} \hat{u}''(x_0) \\ \hat{u}''(x_1) \\ \vdots \\ \hat{u}''(x_n) \end{bmatrix} \approx \mathbf{u}'' = \mathbf{D}_{xx} \mathbf{u},
$$

with $\mathbf{D}_{xx}$ chosen in accordance with the node set.
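For equally spaced nodes, second-order differentiation matrices can be built directly from the centered and one-sided difference formulas. Below is a minimal NumPy sketch; the helper name `diffmats` is our own, not the book's:

```python
import numpy as np

def diffmats(a, b, n):
    """Second-order differentiation matrices Dx, Dxx on n+1 equispaced nodes.

    Interior rows use centered differences; the boundary rows use
    one-sided second-order formulas.
    """
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)

    Dx = np.zeros((n + 1, n + 1))
    Dxx = np.zeros((n + 1, n + 1))
    for i in range(1, n):
        Dx[i, i - 1:i + 2] = [-0.5, 0.0, 0.5]     # (u[i+1] - u[i-1]) / 2h
        Dxx[i, i - 1:i + 2] = [1.0, -2.0, 1.0]    # (u[i-1] - 2u[i] + u[i+1]) / h^2
    # one-sided second-order formulas at the two endpoints
    Dx[0, :3] = [-1.5, 2.0, -0.5]
    Dx[n, n - 2:] = [0.5, -2.0, 1.5]
    Dxx[0, :4] = [2.0, -5.0, 4.0, -1.0]
    Dxx[n, n - 3:] = [-1.0, 4.0, -5.0, 2.0]
    return x, Dx / h, Dxx / h**2
```

A quick sanity check: applying `Dx` to the vector of node values of $u(x)=x$ should return all ones, and applying `Dxx` to the values of $u(x)=x^2$ should return all twos.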

The discrete form of (10.4.1) at the $n+1$ chosen nodes is

$$
\mathbf{u}'' + \mathbf{P}\mathbf{u}' + \mathbf{Q}\mathbf{u} = \mathbf{r},
$$

where

$$
\mathbf{P} = \begin{bmatrix} p(x_0) & & \\ & \ddots & \\ & & p(x_n) \end{bmatrix}, \quad \mathbf{Q} = \begin{bmatrix} q(x_0) & & \\ & \ddots & \\ & & q(x_n) \end{bmatrix}, \quad \mathbf{r} = \begin{bmatrix} r(x_0) \\ \vdots \\ r(x_n) \end{bmatrix}.
$$

If we apply the definitions of $\mathbf{u}'$ and $\mathbf{u}''$ and rearrange, we obtain

$$
\mathbf{L}\mathbf{u} = \mathbf{r}, \qquad \mathbf{L} = \mathbf{D}_{xx} + \mathbf{P}\mathbf{D}_x + \mathbf{Q},
$$

which is a linear system of $n+1$ equations in $n+1$ unknowns.
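In code, assembling $\mathbf{L}$ is a direct transcription of this formula. A minimal NumPy sketch follows, with arbitrary example coefficients $p$ and $q$ of our own choosing; the first and last rows of the difference matrices here are inaccurate, but that is harmless because those rows are discarded when the boundary conditions are imposed:

```python
import numpy as np

# Assemble L = Dxx + P*Dx + Q on n+1 equispaced nodes, using centered
# differences in the interior rows only.
n = 4
a, b = 0.0, 1.0
h = (b - a) / n
x = np.linspace(a, b, n + 1)

Dx = (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1)) / (2 * h)
Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1) + np.diag(np.ones(n), -1)) / h**2

p = lambda t: np.sin(t)   # example coefficient functions (arbitrary choices)
q = lambda t: np.cos(t)

P = np.diag(p(x))
Q = np.diag(q(x))
L = Dxx + P @ Dx + Q
```

As a check, applying any interior row of $\mathbf{L}$ to a constant vector leaves only the $\mathbf{Q}$ term, since each interior row of $\mathbf{D}_x$ and $\mathbf{D}_{xx}$ sums to zero.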

We have not yet incorporated the boundary conditions. Those take the form of the additional linear conditions $u_0=\alpha$ and $u_n=\beta$. We might regard this situation as an overdetermined system, suitable for linear least-squares. However, it's usually preferable to impose the boundary conditions and collocation conditions exactly, so we discard two of the collocation equations to keep the system square. The obvious candidates for deletion are the collocation conditions at the two endpoints. We may express these deletions by means of a matrix that is an $(n+1)\times(n+1)$ identity with the first and last rows deleted:

$$
\mathbf{E} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1^T \\ \vdots \\ \mathbf{e}_{n-1}^T \end{bmatrix},
$$

where, as always, $\mathbf{e}_k$ is the $k$th column (here starting from $k=0$) of an identity matrix. The product $\mathbf{E}\mathbf{A}$ deletes the first and last rows of $\mathbf{A}$, leaving a matrix that is $(n-1)\times(n+1)$. Similarly, $\mathbf{E}\mathbf{r}$ deletes the first and last rows of $\mathbf{r}$.
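In NumPy terms, $\mathbf{E}$ is just the identity with its first and last rows sliced away, as this small sketch shows:

```python
import numpy as np

# E is the (n+1)x(n+1) identity with its first and last rows deleted,
# so E @ M keeps rows 1 through n-1 of M.
n = 5
E = np.eye(n + 1)[1:n, :]                              # shape (n-1, n+1)
M = np.arange((n + 1) ** 2, dtype=float).reshape(n + 1, n + 1)
EM = E @ M                                             # same as M[1:n, :]
```

In practice one rarely forms $\mathbf{E}$ explicitly; slicing the rows of $\mathbf{L}$ and $\mathbf{r}$ directly achieves the same effect without the extra matrix multiply.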

Finally, we note that $\hat{u}(a) = \mathbf{e}_0^T\mathbf{u}$ and $\hat{u}(b) = \mathbf{e}_n^T\mathbf{u}$, so the linear system including both the ODE and the boundary condition collocations is

$$
\underbrace{\begin{bmatrix} \mathbf{e}_0^T \\ \mathbf{E}\mathbf{L} \\ \mathbf{e}_n^T \end{bmatrix}}_{\mathbf{A}} \mathbf{u} = \underbrace{\begin{bmatrix} \alpha \\ \mathbf{E}\mathbf{r} \\ \beta \end{bmatrix}}_{\mathbf{b}}.
$$

We have arrived at an $(n+1)\times(n+1)$ linear system $\mathbf{A}\mathbf{u}=\mathbf{b}$, which we can solve for $\mathbf{u}$ using any of the methods we have investigated.
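Putting all the pieces together, the entire method can be sketched in a few lines of NumPy. The test problem $u'' + u = 0$, $u(0)=0$, $u(1)=\sin 1$, whose exact solution is $u(x)=\sin x$, is our own choice for illustration:

```python
import numpy as np

# Full linear collocation solve for u'' + u = 0, u(0)=0, u(1)=sin(1),
# which has exact solution u(x) = sin(x).
n = 40
a, b, alpha, beta = 0.0, 1.0, 0.0, np.sin(1.0)
h = (b - a) / n
x = np.linspace(a, b, n + 1)

# Here p = 0 and q = 1, so L = Dxx + I.
Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1) + np.diag(np.ones(n), -1)) / h**2
L = Dxx + np.eye(n + 1)
r = np.zeros(n + 1)

# Replace the first and last collocation rows with the boundary conditions.
A = L.copy()
A[0, :] = 0.0; A[0, 0] = 1.0      # e_0^T row enforces u_0 = alpha
A[n, :] = 0.0; A[n, n] = 1.0      # e_n^T row enforces u_n = beta
bvec = r.copy()
bvec[0], bvec[n] = alpha, beta

u = np.linalg.solve(A, bvec)
err = np.max(np.abs(u - np.sin(x)))
```

Overwriting the endpoint rows in place is equivalent to the stacking with $\mathbf{E}$ above; it keeps the system square without building $\mathbf{E}$ explicitly.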

10.4.2 Implementation

Our implementation of linear collocation is Function 10.4.1. It uses second-order finite differences but makes no attempt to exploit the sparsity of the matrices. It would be trivial to change the function to use spectral differentiation.

10.4.3 Accuracy and stability

We revisit Example 10.2.3, which exposed instability in the shooting method, in order to verify second-order convergence.

If we write the solution $\mathbf{u}$ of Equation (10.4.9) as the exact solution minus an error vector $\mathbf{e}$, i.e., $\mathbf{u} = \hat{\mathbf{u}} - \mathbf{e}$, we obtain

$$
\begin{gather*}
\mathbf{A}\hat{\mathbf{u}} - \mathbf{A}\mathbf{e} = \mathbf{b}, \\
\mathbf{e} = \mathbf{A}^{-1}\left[\mathbf{A}\hat{\mathbf{u}} - \mathbf{b}\right] = \mathbf{A}^{-1}\boldsymbol{\tau}(n),
\end{gather*}
$$

where $\boldsymbol{\tau}$ is the truncation error of the derivative discretizations (except at the boundary rows, where it is zero). It follows that $\|\mathbf{e}\|$ vanishes at the same rate as the truncation error if $\|\mathbf{A}^{-1}\|$ is bounded above as $n\to\infty$. In the present context, this property is known as stability. Proving stability is too technical to walk through here, but stability is guaranteed under some reasonable conditions on the BVP.
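A convergence study on a model problem makes the second-order behavior visible: doubling $n$ halves $h$ and should cut the error by roughly a factor of 4. This sketch reuses the test problem $u''+u=0$ with exact solution $\sin x$ (our own choice, not the example from the text):

```python
import numpy as np

def bvp_error(n):
    """Max-norm error of second-order collocation for u'' + u = 0 on [0, 1]."""
    a, b = 0.0, 1.0
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    Dxx = (np.diag(np.ones(n), 1) - 2 * np.eye(n + 1) + np.diag(np.ones(n), -1)) / h**2
    A = Dxx + np.eye(n + 1)
    A[0, :] = 0.0; A[0, 0] = 1.0       # u(0) = 0
    A[n, :] = 0.0; A[n, n] = 1.0       # u(1) = sin(1)
    bvec = np.zeros(n + 1)
    bvec[n] = np.sin(1.0)
    u = np.linalg.solve(A, bvec)
    return np.max(np.abs(u - np.sin(x)))

errs = [bvp_error(n) for n in (20, 40, 80, 160)]
rates = [np.log2(e1 / e2) for e1, e2 in zip(errs, errs[1:])]
```

The observed rates should cluster near 2, consistent with $\|\mathbf{e}\| = O(h^2)$ when $\|\mathbf{A}^{-1}\|$ stays bounded.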

10.4.4 Exercises