advection equation
Archetypical PDE of hyperbolic type, representing transport phenomena. (Definition 12.1.1)
algebraic convergence
Sequence in which the difference between sequence value and limit asymptotically decreases as an integer power of a step size or a number of nodes, appearing as a straight line on a log-log convergence plot. (Definition 5.2.2)
algorithm
List of instructions for transforming data into a result. (Algorithms)
Arnoldi iteration
Stable algorithm for finding orthonormal bases of nested Krylov subspaces. (Definition 8.4.2)
asymptotic
Relationship indicating that two functions have the same leading behavior in some limit. (Definition 2.5.1)
backward error
Change to the input of a problem required to produce the result found by an inexact algorithm. (Definition 1.4.1)
backward substitution
Systematic method for solving a linear system with an upper triangular matrix. (2.3.9)
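A minimal sketch of backward substitution in NumPy (the helper name `backsub` is illustrative, not from the text): the unknowns are found from the last row upward, each one using the values already computed.

```python
import numpy as np

def backsub(U, b):
    """Solve Ux = b for upper triangular U by backward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the contributions of already-known unknowns, then divide
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
b = np.array([4.0, 7.0, 8.0])
x = backsub(U, b)
```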
bandwidth
The number of diagonals around the main diagonal that have nonzero elements. (Definition 2.9.2)
barycentric interpolation formula
Computationally useful expression for the interpolating polynomial as a ratio of rational terms. (9.2.3)
big-O
Relationship indicating that one function is bounded above by a multiple of another in some limit. (Definition 2.5.1)
boundary-value problem
A differential equation for which partial information about the solution is given at multiple points on the boundary of the domain. (Definition 10.1.1)
cardinal function
Interpolating function that is 1 at one node and 0 at all the others. (Definition 5.1.2)
Cholesky factorization
Symmetrized version of LU factorization for SPD matrices. (Theorem 2.9.3)
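As a sketch, NumPy's `cholesky` returns a lower triangular factor of an SPD matrix; multiplying the factor by its transpose recovers the original matrix.

```python
import numpy as np

# A small SPD matrix; np.linalg.cholesky returns lower-triangular L with A = L L^T
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)
```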
collocation
Solution of a differential equation by imposing it approximately at a set of nodes. (10.4.7)
condition number
Ratio of the size of change in the output of a function to the size of change in the input that produced it. (Definition 1.2.1)
cubic spline
Piecewise cubic function with two globally continuous derivatives, most often used for interpolation or approximation. (Definition 5.3.1)
diagonalizable matrix
Matrix that admits an eigenvalue decomposition. Also known as nondefective. (Definition 7.2.3)
differentiation matrix
Matrix mapping a vector of function values to a vector of approximate derivative values. (Definition 10.3.1)
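A minimal sketch of a first-derivative differentiation matrix on equally spaced nodes, using centered differences in the interior and first-order one-sided differences at the ends (the construction here is one common choice, not the only one):

```python
import numpy as np

n, h = 5, 0.25
x = np.arange(n) * h                  # equally spaced nodes on [0, 1]
D = np.zeros((n, n))
# centered differences in the interior rows
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -1 / (2 * h), 1 / (2 * h)
# one-sided differences in the first and last rows
D[0, 0], D[0, 1] = -1 / h, 1 / h
D[-1, -2], D[-1, -1] = -1 / h, 1 / h

# applying D to samples of f(x) = x^2 approximates f'(x) = 2x
fp = D @ x**2
```

For this quadratic the centered differences in the interior are exact up to rounding; the one-sided end formulas carry O(h) error.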
Dirichlet condition
Boundary condition specifying the value of the solution. (Definition 10.1.2)
inverse iteration
Subtraction of a shift followed by matrix inversion, used in power iteration to transform the eigenvalue closest to a target value into a dominant one. (Definition 8.3.1)
Jacobian matrix
Matrix of first partial derivatives that defines the linearization of a vector-valued function. (Definition 4.5.2)
Kronecker product
Alternative type of matrix multiplication useful for problems on a tensor-product domain. (Definition 13.3.2)
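As a sketch in NumPy: `np.kron` replaces each entry of the first matrix by that entry times the entire second matrix, producing a block-structured result.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)
K = np.kron(A, B)   # block (i, j) of K is a_ij * B
```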
Krylov subspace
Vector space generated by powers of a square matrix that is often useful for reducing the dimension of large problems. (Definition 8.4.1)
Lagrange formula
Theoretically useful expression for an interpolating polynomial. (Theorem 9.1.2)
Lanczos iteration
Specialization of the Arnoldi iteration to the case of a Hermitian (or real symmetric) matrix. (8.6.2)
Laplace equation
Archetypical elliptic PDE describing a steady state. (Definition 13.3.1)
linear convergence
Sequence in which the difference between sequence value and limit asymptotically decreases by a constant factor at each term, making a straight line on a log-linear graph. (Definition 4.2.3)
linear least-squares problem
Minimization of the 2-norm of the residual for an overdetermined linear system. (Definition 3.1.1)
LU factorization
Factorization of a square matrix into the product of a unit lower triangular matrix and an upper triangular matrix. (Definition 2.4.2)
machine epsilon
Distance from 1 to the next-largest floating-point number. Also called unit roundoff or machine precision, though the usages are not consistent across different references. (Definition 1.1.2)
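For IEEE double precision this value is 2^-52; a minimal sketch that both queries it and finds it by repeated halving:

```python
import numpy as np

# machine epsilon for double precision: distance from 1.0 to the next float
eps = np.finfo(float).eps

# find it directly: halve a probe until adding half of it no longer changes 1.0
probe = 1.0
while 1.0 + probe / 2 > 1.0:
    probe /= 2
```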
matrix condition number
Norm of the matrix times the norm of its inverse, equivalent to the condition number for solving a linear system. (Definition 2.8.1)
method of lines
Solution technique for evolutionary partial differential equations in which each independent variable is discretized separately. Also called semidiscretization. (Section 11.2.1)
multistep
Formula using information over more than a single time step to advance the solution. (Definition 6.6.1)
Neumann condition
Boundary condition specifying the derivative of the solution. (Definition 10.1.2)
Newton’s method
Rootfinding iteration that uses the linearization of the given function in order to define the next root approximation. (Definition 4.3.1)
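A minimal sketch of the scalar iteration (the helper name `newton` and its stopping criterion are illustrative): each step solves the linearization f(x) + f'(x)s = 0 for the correction s.

```python
def newton(f, dfdx, x, maxiter=20, tol=1e-12):
    """Newton iteration for a scalar root of f, given its derivative dfdx."""
    for _ in range(maxiter):
        s = -f(x) / dfdx(x)   # solve the linear model for the Newton step
        x += s
        if abs(s) < tol:      # stop when the step is negligibly small
            break
    return x

# find sqrt(2) as the positive root of x^2 - 2
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x=1.0)
```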
nodes
Values of the independent variable where an interpolant’s values are prescribed. (Definition 5.1.1)
nonlinear least-squares problem
Minimization of the 2-norm of the residual of a function that depends nonlinearly on a vector. (Definition 4.7.1)
orthogonal vectors
Nonzero vectors that have an inner product of zero. (Definition 3.3.1)
orthogonal matrix
Square ONC matrix, i.e., matrix whose transpose is its inverse. (Definition 3.3.3)
orthogonal polynomials
Family of polynomials whose distinct members have an integral inner product equal to zero, as with Legendre and Chebyshev polynomials. (Definition 9.4.4, Definition 9.4.5)
orthonormal vectors
Vectors that are both mutually orthogonal and all of unit 2-norm. (Definition 3.3.1)
outer product
Multiplication of two vectors resulting in a rank-1 matrix. (A.5)
overdetermined
Characterized by having more constraints than available degrees of freedom. (Definition 3.1.1)
piecewise linear
Function that is linear between each consecutive pair of nodes, but whose slope may jump at the nodes. (Definition 5.2.1)
power iteration
Repeated application of a matrix to a vector, followed by normalization, resulting in convergence to an eigenvector for the dominant eigenvalue. (Definition 8.2.2)
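A minimal sketch on a small symmetric matrix whose eigenvalues are 3 and 1; after repeated multiply-and-normalize steps, a Rayleigh quotient recovers the dominant eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # eigenvalues 3 and 1
x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x
    x /= np.linalg.norm(x)           # normalize to prevent overflow/underflow
lam = x @ A @ x                      # Rayleigh quotient of the unit vector x
```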
preconditioner
An approximate matrix or its inverse, used to improve the convergence rate of Krylov iterations for a linear system. (Definition 8.8.1)
pseudoinverse
Rectangular matrix that maps data to solution in the linear least-squares problem, generalizing the matrix inverse. (Definition 3.2.2)
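As a sketch for an overdetermined full-rank system, applying `np.linalg.pinv` to the data gives the least-squares solution; in practice `np.linalg.lstsq` computes the same answer without forming the pseudoinverse explicitly.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                        # 3 equations, 2 unknowns
b = np.array([1.0, 1.0, 0.0])
x = np.linalg.pinv(A) @ b                         # least-squares solution
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]    # same result, preferred route
```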
QR factorization
Representation of a matrix as the product of an orthogonal and an upper triangular matrix. (Theorem 3.3.3, Definition 3.3.4)
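As a sketch, NumPy's `qr` returns the thin form by default: for a 5x3 input, `Q` is 5x3 with orthonormal columns and `R` is 3x3 upper triangular.

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(5, 3))
Q, R = np.linalg.qr(A)    # default mode is the reduced (thin) factorization
```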
quadratic convergence
Sequence in which the difference between sequence value and limit asymptotically decreases by a constant times the square of the preceding difference. (Definition 4.3.2)
quasi-Newton methods
Rootfinding methods that overcome the issues of Jacobian computation and lack of global convergence in Newton’s method. (Quasi-Newton methods)
quasimatrix
Collection of functions (such as orthogonal polynomials) that have algebraic parallels to columns of a matrix. (Definition 9.4.2)
Rayleigh quotient
Function of vectors that equals an eigenvalue when given its eigenvector as input. (Definition 7.4.1)
residual
For a linear system, the difference between b and Ax~ for a computed solution approximation x~. More generally, the actual value of a quantity that is made zero by an exact solution. (Definition 2.8.2, Definition 4.1.2)
restarting
Technique used in GMRES to prevent the work per iteration and overall storage from growing uncontrollably. (Section 8.5.3)
SPD matrix
Matrix that is symmetric and positive definite, thereby permitting a Cholesky factorization. Correspondingly called HPD in the complex case. (Definition 2.9.3)
spectral convergence
Exponentially rapid decrease in error as the number of interpolation nodes increases, e.g., as observed in Chebyshev polynomial and trigonometric interpolation. (Theorem 9.3.1)
step size
Increment in time between successive solution values in a numerical IVP solver. (6.2.2)
stiff differential equation
Describes an IVP in which stability is a greater restriction than accuracy for many solution methods, usually favoring the use of an implicit time stepping method. (Example 6.7.2, Stiffness)
subtractive cancellation
Growth in relative error that occurs when two numbers are added or subtracted to produce a result much smaller in magnitude than the operands; also called loss of significance or cancellation error. (1.2.7)
superlinear convergence
Sequence for which the convergence is asymptotically faster than any linear rate. (Definition 4.4.2)
symmetric matrix
Square matrix that is equal to its transpose. (Definition 2.2.2)
tensor-product domain
A domain that can be parameterized using variables that lie in a logical rectangle or cuboid; i.e., each variable independently varies in an interval. (Definition 13.1.1)
thin QR
Variant of the QR factorization that discards information not needed to fully represent the original matrix. (Definition 3.3.4)
thin SVD
Variant of the singular value decomposition that discards information not needed to fully represent the original matrix. (7.3.10)
trapezoid formula
Numerical integration method resulting from integration of a piecewise linear interpolant. (Definition 5.6.2, Table 6.6.1)
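A minimal sketch on equally spaced nodes: summing the function values with the endpoints halved, then scaling by the node spacing, integrates the piecewise linear interpolant.

```python
import numpy as np

# integrate f(x) = x^2 over [0, 1] with the trapezoid formula
n = 100
x = np.linspace(0, 1, n + 1)
y = x**2
h = 1 / n
T = h * (np.sum(y) - (y[0] + y[-1]) / 2)   # endpoints weighted by h/2
```

The exact integral is 1/3; the error is O(h^2), consistent with algebraic convergence of order 2.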
triangular matrix
Matrix that is all zero either above (for lower triangular) or below (for upper triangular) the main diagonal. (Section 2.3.2, Definition A.1)
tridiagonal matrix
Matrix with nonzeros only on the main diagonal and the adjacent two diagonals. (Definition 2.9.2)
trigonometric polynomial
Linear combination of real or complex trigonometric functions at multiples of a base frequency. (Definition 9.5.1)
unitary matrix
Square matrix with complex-valued entries whose columns are orthonormal. (Definition 7.2.2)
unstable
Allowing perturbations of the data to have much larger effects on the results than can be explained by the problem’s condition number. (Stability)
upper Hessenberg matrix
Matrix that has nonzeros only in the upper triangle and first subdiagonal. (Definition 8.4.3)
upwind
Direction opposite to the flow of information in an evolutionary PDE. (Definition 12.2.3)
Vandermonde matrix
Matrix whose columns are formed from elementwise powers of a given vector, important for polynomial interpolation and approximation of data. (Definition 2.1.2)
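As a sketch, NumPy's `vander` builds this matrix from a vector of nodes; solving the resulting linear system yields the coefficients of the interpolating polynomial.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0])
V = np.vander(t, increasing=True)   # columns are 1, t, t^2
y = np.array([1.0, 3.0, 7.0])       # samples of 1 + t + t^2 at the nodes
c = np.linalg.solve(V, y)           # interpolating polynomial coefficients
```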
wave equation
Evolutionary PDE that allows propagation in all directions equally. (12.4.1)