
Order and rate of convergence

Iteration is a common approach used widely in numerical methods. The hope is that an iteration of the general form $x_{n+1}=g(x_n)$ will eventually converge to the true solution $x^*$ of the problem in the limit $n\rightarrow\infty$. The concerns are whether the iteration converges and, if so, how quickly. Specifically, we use the following limit to characterize how quickly the error $e_n=x_n-x^*$ converges to zero:

\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert^p}
=\lim\limits_{n\rightarrow\infty}\frac{\vert x_{n+1}-x^*\vert}{\vert x_n-x^*\vert^p}=\mu
\end{displaymath}

Here $p\ge 1$ is called the order of convergence, and the constant $\mu$ is the rate of convergence or asymptotic error constant.

This expression may be better understood when it is interpreted as $\vert e_{n+1}\vert\approx\mu \vert e_n\vert^p$ for large $n$. Obviously, the larger $p$ and the smaller $\mu$, the more quickly the sequence converges. Specifically, we consider the following cases: if $p=1$ and $0<\mu<1$, the convergence is linear; if $p=1$ and $\mu=0$, or $p>1$, it is superlinear; if $p=2$, it is quadratic.
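These definitions can be checked numerically. As an illustrative sketch (the iteration $x_{n+1}=(x_n+2/x_n)/2$, the target $\sqrt{2}$, and the cutoff below are chosen here, not taken from the text), Newton's iteration for $\sqrt{2}$ converges quadratically with $\mu=1/(2\sqrt{2})\approx 0.354$, and the ratio $\vert e_{n+1}\vert/\vert e_n\vert^2$ approaches this value:

```python
import math

# Fixed-point form of Newton's method for f(x) = x^2 - 2,
# whose fixed point is the true solution x* = sqrt(2).
def g(x):
    return 0.5 * (x + 2.0 / x)

x_star = math.sqrt(2.0)

x = 1.5                      # initial guess
errors = [abs(x - x_star)]
for _ in range(4):           # a few steps reach machine precision
    x = g(x)
    errors.append(abs(x - x_star))

# Ratios |e_{n+1}| / |e_n|^2 should approach mu = 1/(2*sqrt(2)) ~ 0.354,
# confirming order p = 2; tiny denominators are skipped to avoid
# dividing by rounding noise at machine precision.
ratios = [errors[n + 1] / errors[n] ** 2
          for n in range(len(errors) - 1) if errors[n] > 1e-10]
print(ratios)
```

With $p=2$, each step roughly doubles the number of correct digits, which is why only a handful of iterations are needed.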

The iteration $x_{n+1}=g(x_n)$ can be written in terms of the errors $e_{n+1}$ and $e_n$. Here $x$ denotes the fixed point $x=x^*$ satisfying $g(x)=x$. Consider the Taylor expansion:

\begin{displaymath}
x_{n+1}=x+e_{n+1}=g(x_n)=g(x+e_n)=g(x)+g'(x)e_n+\frac{1}{2}g''(x)e_n^2+
\cdots
\end{displaymath}

Subtracting $g(x)=x$ from both sides, we get

\begin{displaymath}
e_{n+1}=g'(x)e_n+\frac{1}{2}g''(x)e_n^2+\cdots
\end{displaymath}

In the limit $n\rightarrow\infty$, the error $e_n$ approaches zero and all higher-order error terms vanish, so we have

\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert}=\vert g'(x)\vert
\end{displaymath}

If $0<\vert g'(x)\vert<1$, then the convergence is linear. However, if $g'(x)=0$, then

\begin{displaymath}
e_{n+1}=\frac{1}{2}g''(x)e_n^2+\cdots
\end{displaymath}

and we have

\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert^2}=\frac{1}{2}\vert g''(x)\vert
\end{displaymath}

the convergence is quadratic. We see that if the iteration function has zero derivative at the fixed point, the iteration may be accelerated.
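This acceleration can be seen by comparing two iteration functions with the same fixed point. As an illustrative sketch (the equation $x=\cos x$ and the tolerance are chosen here, not taken from the text): the plain iteration $x_{n+1}=\cos x_n$ has $\vert g'(x^*)\vert=\vert\sin x^*\vert\approx 0.67\ne 0$ and converges linearly, while the Newton form $g(x)=x-(x-\cos x)/(1+\sin x)$ has $g'(x^*)=0$ and converges quadratically:

```python
import math

def iterate(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Plain fixed-point iteration: g'(x*) = -sin(x*) != 0 -> linear convergence.
g_linear = math.cos

# Newton form for f(x) = x - cos(x): g'(x*) = 0 -> quadratic convergence.
def g_newton(x):
    return x - (x - math.cos(x)) / (1.0 + math.sin(x))

root1, n1 = iterate(g_linear, 1.0)
root2, n2 = iterate(g_newton, 1.0)
print(f"linear:    {n1} iterations, x = {root1:.12f}")
print(f"quadratic: {n2} iterations, x = {root2:.12f}")
```

Both reach the same fixed point $x^*\approx 0.739085$, but the zero-derivative form needs far fewer iterations.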

Examples:
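For instance (an illustrative example supplied here), consider the linearly convergent sequence $e_n=\mu^n$ with $0<\mu<1$. Dividing by $\vert e_n\vert^q$ gives

\begin{displaymath}
\frac{\vert e_{n+1}\vert}{\vert e_n\vert^q}=\frac{\mu^{n+1}}{\mu^{nq}}
=\mu\,\mu^{n(1-q)}\rightarrow
\left\{\begin{array}{ll}0 & q<1\\ \mu & q=1\\ \infty & q>1\end{array}\right.
\end{displaymath}

so only the exponent $q=p=1$ yields a finite nonzero limit.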

From these examples we see that there is a unique exponent $p\ge 1$, the order of convergence, such that

\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert^q}
=\left\{\begin{array}{ll}0 & q<p\\ \mu & q=p\\ \infty & q>p\end{array}\right.
\end{displaymath}

In practice, the true solution $x^*$ is unknown and so is the error $e_n=x_n-x^*$. However, it can be estimated if the convergence is superlinear, satisfying

\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert x_{n+1}-x^*\vert}{\vert x_n-x^*\vert}=0
\end{displaymath}

Consider:
\begin{displaymath}
\lim\limits_{n\rightarrow\infty}\frac{\vert x_{n+1}-x_n\vert}{\vert x_n-x^*\vert}
=\lim\limits_{n\rightarrow\infty}\frac{\vert x_{n+1}-x^*+x^*-x_n\vert}{\vert x_n-x^*\vert}
=\lim\limits_{n\rightarrow\infty}\frac{\vert x_{n+1}-x^*\vert}{\vert x_n-x^*\vert}
+\lim\limits_{n\rightarrow\infty}\frac{\vert x^*-x_n\vert}{\vert x_n-x^*\vert}
=0+1=1
\end{displaymath}

i.e., when $n\rightarrow\infty$,

\begin{displaymath}
\vert e_n\vert=\vert x_n-x^*\vert \approx \vert x_{n+1}-x_n\vert
\end{displaymath}

The order and rate of convergence can then be estimated from

\begin{displaymath}
\vert e_{n+1}\vert\approx \mu \;\vert e_n\vert^p
\end{displaymath}

with each $\vert e_n\vert$ replaced by the computable difference $\vert x_{n+1}-x_n\vert$.
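A sketch of this estimation (assuming, for illustration, the Newton iteration $x_{n+1}=(x_n+2/x_n)/2$ for $\sqrt{2}$; only the computable differences $\vert x_{n+1}-x_n\vert$ are used, never $x^*$ itself):

```python
import math

def g(x):
    # Newton iteration for f(x) = x^2 - 2 (illustrative choice)
    return 0.5 * (x + 2.0 / x)

# Generate iterates and the computable differences d_n = |x_{n+1} - x_n|,
# which approximate |e_n| when the convergence is superlinear.
x = 1.5
xs = [x]
for _ in range(4):
    x = g(x)
    xs.append(x)
d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]

# From |e_{n+1}| ~ mu |e_n|^p, taking logs over two successive steps:
#   p  ~ log(d_{n+1}/d_n) / log(d_n/d_{n-1})
#   mu ~ d_{n+1} / d_n^p
n = 1
p_est = math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
mu_est = d[n + 1] / d[n] ** p_est
print(f"estimated order p = {p_est:.3f}")   # expect ~2 (quadratic)
print(f"estimated rate mu = {mu_est:.3f}")  # expect ~1/(2*sqrt(2)) ~ 0.354
```

For this iteration the estimates recover $p\approx 2$ and $\mu\approx 1/(2\sqrt{2})$, matching the known quadratic convergence of Newton's method.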


Ruye Wang 2015-02-12