In numerical methods, it is important to know how reliable the numerical results produced by an algorithm are, and how sensitive the algorithm is to the inevitable noise in the input variables and system parameters, due possibly to observational or truncation errors. We need to know how sensitive the output of a given method is to the various perturbations that may occur in the computational process. In general, if a small change in the input may cause a large difference in the output, then the result is not reliable. How sensitive a problem is, and whether its numerical solution is reliable, depends on whether the problem is well conditioned or ill conditioned: a problem is well conditioned if small perturbations of the input cause only correspondingly small changes in the output, and ill conditioned if small input perturbations can cause large output changes.
How well or ill conditioned a system is can be measured quantitatively by the absolute or relative condition numbers:

$$\kappa_{abs}=\sup_{\delta x}\frac{\|\delta f\|}{\|\delta x\|},\qquad
\kappa=\sup_{\delta x}\frac{\|\delta f\|/\|f(x)\|}{\|\delta x\|/\|x\|}$$
Here $\|\cdot\|$ denotes the norm (representing ``size'') of a variable (scalar, vector, matrix, function, etc.). In the following, we will mostly use the 2-norm (Euclidean norm) for vectors and the corresponding spectral norm for matrices.
As the relative condition number compares the normalized changes in both the input and output, its value is invariant to the specific units used to measure them. Also, the change in the input can be equivalently normalized by the perturbed input $\|x+\delta x\|$ instead of $\|x\|$, as the two differ only by a higher-order term. The same is true for the change in the output.
Obviously, the smaller the condition number, the better conditioned the problem is and the less sensitive its solution is to perturbations of the data (error, noise, approximation, etc.); the larger the condition number, the more ill conditioned the problem is. In the following, we specifically consider the condition numbers defined for some different systems.
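The definitions above can be probed numerically: sample many small random perturbations $\delta x$ and take the worst-case ratios. This is only a sketch; the function $f(x)=x^2$ and the point $x=3$ are my own illustrative choices, not from the text.

```python
import numpy as np

def estimate_condition(f, x, delta=1e-6, trials=1000, seed=0):
    """Empirically estimate the absolute and relative condition numbers
    of f at x as the worst-case ratios over random perturbations."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    fx = np.atleast_1d(f(x))
    kappa_abs = kappa_rel = 0.0
    for _ in range(trials):
        dx = rng.standard_normal(x.shape)
        dx *= delta / np.linalg.norm(dx)          # scale so ||dx|| = delta
        df = np.atleast_1d(f(x + dx)) - fx
        kappa_abs = max(kappa_abs, np.linalg.norm(df) / np.linalg.norm(dx))
        kappa_rel = max(kappa_rel, (np.linalg.norm(df) / np.linalg.norm(fx))
                        / (np.linalg.norm(dx) / np.linalg.norm(x)))
    return kappa_abs, kappa_rel

# For f(x) = x^2 at x = 3: kappa_abs ~ |f'(3)| = 6, kappa_rel ~ |x f'/f| = 2
ka, kr = estimate_condition(lambda x: x**2, 3.0)
```

The sampled estimates approach the analytic values $|f'(x)|$ and $|x f'(x)/f(x)|$ as the perturbation size shrinks.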
System with a single input and output variable
Consider a system with a single input $x$ and a single output $y$, which can be described by a function $y=f(x)$ to be evaluated at a given input $x$. The condition number of the function $f$ at point $x$ is defined as the greatest (worst-case) ratio between the relative changes of the output and the input:

$$\kappa=\sup_{\delta x}\frac{|\delta f|/|f(x)|}{|\delta x|/|x|}$$

When the perturbation $\delta x$ is small, the function can be approximated by its Taylor expansion $f(x+\delta x)\approx f(x)+f'(x)\,\delta x$, so that $\delta f\approx f'(x)\,\delta x$ and the condition number becomes

$$\kappa=\frac{|f'(x)|\,|x|}{|f(x)|}=\left|\frac{x\,f'(x)}{f(x)}\right|$$
The discussion above is for the task of evaluating the output $y=f(x)$ of a system as a function of the input $x$. However, if the task is to solve a given equation $f(x)=y$, for the purpose of finding $x$ treated as the output that satisfies the equation with $y$ treated as the input, the problem needs to be converted into the form of evaluating the inverse function $x=g(y)=f^{-1}(y)$ at $y$. The absolute condition number of $g$ is $|g'(y)|=1/|f'(x)|$, the reciprocal of that of $f$, and the same reciprocal relation holds for the relative condition number:

$$\kappa_g=\left|\frac{y\,g'(y)}{g(y)}\right|=\left|\frac{f(x)}{x\,f'(x)}\right|=\frac{1}{\kappa_f}$$
In the figure below, the steeper function has the greater $|f'(x)|$; therefore, for the problem of solving $f(x)=y$, it is better conditioned than the flatter one, but for the problem of evaluating $y=f(x)$, it is more ill conditioned.
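The reciprocal relationship between evaluating $f$ and solving $f(x)=y$ can be checked directly. In this sketch the pair $f=\exp$, $g=\log$ is my own choice of example: for $f=\exp$, $\kappa_f=|x\,e^x/e^x|=|x|$, while for the inverse, $\kappa_g=|y\,(1/y)/\ln y|=1/|x|$.

```python
import math

def kappa(f, fprime, x):
    """Relative condition number |x f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

x = 2.5
y = math.exp(x)
k_eval = kappa(math.exp, math.exp, x)            # evaluating exp: kappa = |x|
k_solve = kappa(math.log, lambda t: 1.0 / t, y)  # solving exp(x)=y: kappa = 1/|x|
# k_eval * k_solve == 1 (up to rounding)
```

The product of the two condition numbers is 1, so a point where evaluation is ill conditioned is exactly where equation solving is well conditioned, and vice versa.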
Example: Consider the function
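As a concrete single-variable illustration (the choice $f(x)=\tan x$ here is my own assumption, not necessarily the text's example): $\kappa=|x\,f'(x)/f(x)|=|x(1+\tan^2 x)/\tan x|$, which blows up near $x=\pi/2$, where evaluation is ill conditioned.

```python
import math

# Condition number of f(x) = tan(x): kappa = |x (1 + tan^2 x) / tan x|
# (f(x) = tan x is an illustrative assumption, not the text's own example)
def kappa_tan(x):
    t = math.tan(x)
    return abs(x * (1.0 + t * t) / t)

k_mild = kappa_tan(0.5)                  # moderate: well conditioned
k_bad = kappa_tan(math.pi / 2 - 1e-6)    # huge: tiny input change, large output change
```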
Systems with multiple input and output variables
The results above can be generalized to a system of $n$ inputs and $m$ outputs represented by a vector function $\mathbf{y}=\mathbf{f}(\mathbf{x})$, where $\mathbf{f}:\mathbb{R}^n\rightarrow\mathbb{R}^m$. A small change $\delta\mathbf{x}$ in the input will cause a certain change in the output:

$$\delta\mathbf{f}=\mathbf{f}(\mathbf{x}+\delta\mathbf{x})-\mathbf{f}(\mathbf{x})\approx\mathbf{J}\,\delta\mathbf{x}$$

where $\mathbf{J}$ is the $m\times n$ Jacobian matrix of $\mathbf{f}$ with elements $J_{ij}=\partial f_i/\partial x_j$. The relative condition number is therefore

$$\kappa=\frac{\|\mathbf{J}\|\,\|\mathbf{x}\|}{\|\mathbf{f}(\mathbf{x})\|}$$
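The Jacobian-based formula $\kappa=\|\mathbf{J}\|\,\|\mathbf{x}\|/\|\mathbf{f}(\mathbf{x})\|$ can be computed directly; the map $f(x_1,x_2)=(x_1+x_2,\;x_1 x_2)$ and the point below are my own illustrative assumptions.

```python
import numpy as np

def f(x):
    """Example map R^2 -> R^2 (an assumed illustration)."""
    return np.array([x[0] + x[1], x[0] * x[1]])

def jacobian(x):
    """Analytic Jacobian of f at x."""
    return np.array([[1.0, 1.0],
                     [x[1], x[0]]])

x = np.array([2.0, 1.0])
J = jacobian(x)
# Relative condition number: ||J||_2 * ||x|| / ||f(x)||
kappa = np.linalg.norm(J, 2) * np.linalg.norm(x) / np.linalg.norm(f(x))
```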
In particular, consider a linear system $\mathbf{b}=\mathbf{A}\mathbf{x}$ with an invertible $n\times n$ coefficient matrix $\mathbf{A}$. Its Jacobian is simply $\mathbf{J}=\mathbf{A}$, so the condition number for evaluating $\mathbf{b}=\mathbf{A}\mathbf{x}$ is

$$\kappa=\frac{\|\mathbf{A}\|\,\|\mathbf{x}\|}{\|\mathbf{A}\mathbf{x}\|}\le\|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|$$
To find the relative condition number of the equation-solving problem, we note that some change $\delta\mathbf{x}$ in the output may be caused by a perturbation $\delta\mathbf{A}$ in the coefficient matrix as well as by $\delta\mathbf{b}$ in the input $\mathbf{b}$:

$$(\mathbf{A}+\delta\mathbf{A})(\mathbf{x}+\delta\mathbf{x})=\mathbf{b}+\delta\mathbf{b}$$

Subtracting $\mathbf{A}\mathbf{x}=\mathbf{b}$ from both sides and dropping the second-order term $\delta\mathbf{A}\,\delta\mathbf{x}$, we get

$$\mathbf{A}\,\delta\mathbf{x}\approx\delta\mathbf{b}-\delta\mathbf{A}\,\mathbf{x},
\qquad\mbox{i.e.,}\qquad
\delta\mathbf{x}\approx\mathbf{A}^{-1}(\delta\mathbf{b}-\delta\mathbf{A}\,\mathbf{x})$$

Taking norms and dividing by $\|\mathbf{x}\|$ (using $\|\mathbf{x}\|\ge\|\mathbf{b}\|/\|\mathbf{A}\|$), we obtain

$$\frac{\|\delta\mathbf{x}\|}{\|\mathbf{x}\|}
\le\|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|
\left(\frac{\|\delta\mathbf{A}\|}{\|\mathbf{A}\|}+\frac{\|\delta\mathbf{b}\|}{\|\mathbf{b}\|}\right)$$
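This perturbation bound can be verified numerically. A sketch, where the matrix, right-hand side, and the size of the random perturbations are all my own illustrative choices:

```python
import numpy as np

# Check that ||dx||/||x|| <= cond(A) * (||dA||/||A|| + ||db||/||b||)
# (to first order) when solving A x = b under perturbations of A and b.
rng = np.random.default_rng(1)
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed example matrix
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

dA = 1e-8 * rng.standard_normal((2, 2))  # tiny perturbation of the matrix
db = 1e-8 * rng.standard_normal(2)       # tiny perturbation of the input
dx = np.linalg.solve(A + dA, b + db) - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * (np.linalg.norm(dA, 2) / np.linalg.norm(A, 2)
                           + np.linalg.norm(db) / np.linalg.norm(b))
# lhs <= rhs, as the bound predicts
```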
We see that for both the evaluation and the equation-solving problems, we have $\kappa\le\|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|$, which is defined as the condition number of the matrix $\mathbf{A}$:

$$\kappa(\mathbf{A})=\|\mathbf{A}\|\,\|\mathbf{A}^{-1}\|$$

If the spectral (2-) norm is used, the singular values of $\mathbf{A}$ are $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_n$, those of $\mathbf{A}^{-1}$ are $1/\sigma_n\ge\cdots\ge1/\sigma_1$, and their norms can be written in terms of the greatest and smallest eigenvalues of $\mathbf{A}^T\mathbf{A}$:

$$\|\mathbf{A}\|_2=\sqrt{\lambda_{max}(\mathbf{A}^T\mathbf{A})}=\sigma_{max},
\qquad
\|\mathbf{A}^{-1}\|_2=\frac{1}{\sigma_{min}}$$

Now we have

$$\kappa(\mathbf{A})=\frac{\sigma_{max}}{\sigma_{min}}$$
In fact, $\kappa(\mathbf{A})$ is a measurement of how close $\mathbf{A}$ is to singularity. When $\mathbf{A}$ is singular, one or more of its singular values are zero, i.e., $\sigma_{min}=0$, and then $\kappa(\mathbf{A})=\infty$.
In Matlab, the function cond(A,p) generates the condition number of matrix A based on its p-norm. When p=2 (the default), this is the ratio between the greatest and smallest singular values. The following will give the same result:

    max(svd(A))/min(svd(A))
    norm(A,2)*norm(inv(A),2)
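A NumPy analogue of the Matlab computation, as a sketch (the matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary example matrix
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending order
k_svd = s[0] / s[-1]                     # sigma_max / sigma_min
k_cond = np.linalg.cond(A, 2)            # same value, like Matlab's cond(A,2)
k_norm = (np.linalg.norm(A, 2)
          * np.linalg.norm(np.linalg.inv(A), 2))  # ||A|| * ||A^-1||, same value
```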
Let us now consider the worst-case scenarios for both evaluating the matrix-vector product $\mathbf{b}=\mathbf{A}\mathbf{x}$ and solving the linear equation system $\mathbf{A}\mathbf{x}=\mathbf{b}$. (In the first case, given $\mathbf{x}$ as the input, we find $\mathbf{b}$ as the output; in the second, given $\mathbf{b}$ as the input, we find $\mathbf{x}$ as the output.) To do so, we first carry out the SVD of $\mathbf{A}$ to get $\mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$, with the singular values in $\boldsymbol{\Sigma}$ sorted in descending order: $\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_n>0$. The inverse of $\mathbf{A}$ can also be SVD-decomposed: $\mathbf{A}^{-1}=\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}^T$.
For the evaluation problem, the worst case occurs when the input is $\mathbf{x}=\mathbf{v}_n$, the minimum right singular vector corresponding to the smallest singular value $\sigma_n$. The equation becomes

$$\mathbf{A}\mathbf{v}_n=\sigma_n\mathbf{u}_n$$

i.e., the output is maximally shrunk, and $\kappa=\|\mathbf{A}\|\,\|\mathbf{v}_n\|/\|\mathbf{A}\mathbf{v}_n\|=\sigma_1/\sigma_n$. For the equation-solving problem, the worst case occurs when the input is $\mathbf{b}=\mathbf{u}_1$, the maximum left singular vector corresponding to the largest singular value $\sigma_1$. The equation becomes:

$$\mathbf{x}=\mathbf{A}^{-1}\mathbf{u}_1=\frac{\mathbf{v}_1}{\sigma_1}$$

i.e., the solution is maximally shrunk, and again $\kappa=\|\mathbf{A}^{-1}\|\,\|\mathbf{u}_1\|/\|\mathbf{A}^{-1}\mathbf{u}_1\|=\sigma_1/\sigma_n$.
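These worst-case directions can be read off the SVD directly; a sketch with an assumed example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 0.5]])   # assumed example matrix
U, s, Vt = np.linalg.svd(A)              # singular values s in descending order
v_n = Vt[-1]                             # minimum right singular vector v_n
u_1 = U[:, 0]                            # maximum left singular vector u_1

# Evaluation worst case: A v_n = sigma_n u_n, so ||A v_n|| = sigma_n
out_eval = np.linalg.norm(A @ v_n)
# Solving worst case: A^-1 u_1 = v_1 / sigma_1, so ||A^-1 u_1|| = 1/sigma_1
out_solve = np.linalg.norm(np.linalg.solve(A, u_1))
```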
Example 1: Consider solving a linear system $\mathbf{A}\mathbf{x}=\mathbf{b}$.
Example 2: Consider the worst case of the evaluation of $\mathbf{b}=\mathbf{A}\mathbf{x}$.
Example 3: Consider the worst case while solving the equation $\mathbf{A}\mathbf{x}=\mathbf{b}$, where $\mathbf{A}$ is the same matrix used in the previous example. If $\mathbf{b}=\mathbf{u}_1$, the maximum left singular vector of $\mathbf{A}$, we get $\mathbf{x}=\mathbf{A}^{-1}\mathbf{u}_1=\mathbf{v}_1/\sigma_1$.
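The full worst case for equation solving combines both effects: take $\mathbf{b}$ along $\mathbf{u}_1$ (so the solution is as small as possible) and perturb it along $\mathbf{u}_n$ (so the change in the solution is as large as possible); the relative error is then amplified by exactly $\kappa(\mathbf{A})=\sigma_1/\sigma_n$. A sketch with an assumed matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # assumed example matrix
U, s, Vt = np.linalg.svd(A)
b = U[:, 0]                              # worst-case right-hand side: u_1
db = 1e-6 * U[:, -1]                     # worst-case perturbation: along u_n
x = np.linalg.solve(A, b)                # x = v_1 / sigma_1
dx = np.linalg.solve(A, b + db) - x      # dx = 1e-6 * v_n / sigma_n

# Relative error in x divided by relative error in b equals cond(A)
amplification = (np.linalg.norm(dx) / np.linalg.norm(x)) / (
    np.linalg.norm(db) / np.linalg.norm(b))
```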