
Newton-Raphson method (univariate)

To solve the equation $f(x)=0$, we first consider the Taylor series expansion of $f(x)$ about a point $x_0$:

\begin{displaymath}
f(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{f''(x_0)}{2}(x-x_0)^2+\cdots
\end{displaymath} (55)

If $f(x)$ is linear, i.e., its slope $f'(x)$ is a constant for any $x$, then the second and higher order terms are all zero, and we have
\begin{displaymath}
f(x)=f(x_0)+f'(x_0)(x-x_0)
\end{displaymath} (56)

To find the root $x^*$ at which $f(x^*)=0$, we set the above to zero and solve the resulting equation for $x$ to get
\begin{displaymath}
x=x_0-\frac{f(x_0)}{f'(x_0)}=x_0+d_0
\end{displaymath} (57)

where $d_0=-f(x_0)/f'(x_0)$ is the step we need to take.

If $f(x)$ is not linear, the sum of the first two terms of the Taylor expansion is only an approximation of $f(x)$, and the resulting $x$ we found above is only an estimate of the root. However, it can still be used as a step of an iterative process by which the estimate is continuously improved and the solution can be eventually reached:

\begin{displaymath}
x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}=g(x_n),\;\;\;\;n=0,1,2,\cdots
\end{displaymath} (58)

where $g(x)=x-f(x)/f'(x)$, and $x_{n+1}$ is an improved estimate of the root based on the previous approximation $x_n$, i.e., the root will be approached by this iteration, as illustrated in the figure.
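The iteration above can be sketched in code; a minimal version (the function name, tolerance, and iteration cap are illustrative choices, not from the original text):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)   # one Newton-Raphson step
    return x

# Example: solve f(x) = x^2 - 2 = 0 (root sqrt(2)) starting from x0 = 1.5
root = newton_raphson(lambda x: x*x - 2, lambda x: 2*x, 1.5)
```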


The Newton-Raphson method can be considered as the fixed point iteration $x_{n+1}=g(x_n)$ based on $g(x)=x-f(x)/f'(x)$. The root $x^*$ at which $f(x^*)=0$ is also the fixed point of $g(x)$, i.e., $g(x^*)=x^*$. For the iteration to converge, $g(x)$ needs to be a contraction with $\vert g'(x)\vert<1$. Consider

\begin{displaymath}
g'(x)=1-\frac{(f'(x))^2-f(x)f''(x)}{(f'(x))^2}=\frac{f(x)f''(x)}{(f'(x))^2}
\end{displaymath} (59)

Note that at a simple root $x^*$, where $f(x^*)=0$ and $f'(x^*)\ne 0$, we have $g'(x^*)=0$, so $\vert g'(x)\vert<1$ in some neighborhood of $x^*$ and the iteration converges if started sufficiently close to the root.

The order of convergence of the Newton-Raphson iteration can be found from the Taylor expansion of $f(x)$ in the neighborhood of the root $x^*=x_n+e_n$:

\begin{displaymath}
0=f(x^*)=f(x_n+e_n)=f(x_n)+f'(x_n)e_n+\frac{f''(x_n)}{2}e_n^2+O(e_n^3)
\end{displaymath} (61)

where $e_n=x^*-x_n$ is the error at the $n$th step. Substituting the Newton-Raphson iteration, rewritten as
\begin{displaymath}
f(x_n)=f'(x_n)(x_n-x_{n+1})
\end{displaymath} (62)

into the equation above, we get
\begin{eqnarray*}
0 &=& f'(x_n)(x_n-x_{n+1})+f'(x_n)(x^*-x_n)+\frac{f''(x_n)}{2}e_n^2+O(e_n^3) \\
  &=& f'(x_n)(x^*-x_{n+1})+\frac{f''(x_n)}{2}e_n^2+O(e_n^3)
   =  f'(x_n)e_{n+1}+\frac{f''(x_n)}{2}e_n^2+O(e_n^3)
\end{eqnarray*} (63)

Solving this for $e_{n+1}$, we get
\begin{displaymath}
e_{n+1}=-\frac{f''(x_n)}{2f'(x_n)}e_n^2+O(e_n^3)
\end{displaymath} (64)

When $n\rightarrow\infty$, all the higher order terms disappear, and the above can be written as
\begin{displaymath}
\lim_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert^2}
\le\frac{\vert f''(x^*)\vert}{2\vert f'(x^*)\vert}=\mu
\end{displaymath} (65)
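This limit can be checked numerically; a minimal sketch, assuming the test function $f(x)=x^2-2$ (not from the original text), for which $\mu=\vert f''\vert/(2\vert f'\vert)=1/(2\sqrt{2})\approx 0.3536$ at $x^*=\sqrt{2}$:

```python
import math

f  = lambda x: x * x - 2
df = lambda x: 2 * x

xstar = math.sqrt(2)        # the root x*
mu = 2 / (2 * 2 * xstar)    # |f''(x*)| / (2 |f'(x*)|) = 1/(2 sqrt(2))

x = 1.5
for _ in range(3):
    e_n = xstar - x                   # error before the step
    x = x - f(x) / df(x)              # Newton-Raphson step
    ratio = abs(xstar - x) / e_n**2   # |e_{n+1}| / |e_n|^2, approaches mu
```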

Alternatively, we can get the Taylor expansion in terms of $g(x)$:
\begin{displaymath}
x_{n+1}=g(x_n)=g(x^*-e_n)=g(x^*)-g'(x^*)e_n+\frac{g''(x^*)}{2}e_n^2+O(e_n^3)
\end{displaymath} (66)

Subtracting $g(x^*)=x^*$ from both sides and noting $e_{n+1}=x^*-x_{n+1}$, we get:
\begin{displaymath}
e_{n+1}=g'(x^*)e_n-\frac{g''(x^*)}{2}e_n^2+O(e_n^3)
\end{displaymath} (67)

Now we find $g'(x)$ and $g''(x)$:
\begin{displaymath}
g'(x)=1-\frac{(f'(x))^2-f(x)f''(x)}{(f'(x))^2}=\frac{f(x)f''(x)}{(f'(x))^2}
\end{displaymath} (68)

\begin{displaymath}
g''(x)=\left(\frac{f(x)f''(x)}{(f'(x))^2}\right)'
=\frac{[f'(x)f''(x)+f(x)f'''(x)](f'(x))^2-2f(x)f'(x)(f''(x))^2}{(f'(x))^4}
\end{displaymath} (69)

which at any point where $f(x)=0$ reduces to $\frac{(f'(x))^3f''(x)}{(f'(x))^4}=\frac{f''(x)}{f'(x)}$.

Evaluating these at $x=x^*$, where $f(x^*)=0$, and substituting them into the expression for $e_{n+1}$ above, the linear term vanishes since $g'(x^*)=0$, i.e., the convergence is quadratic, and we get the same result:
\begin{displaymath}
\lim_{n\rightarrow\infty}\frac{\vert e_{n+1}\vert}{\vert e_n\vert^2}
=\frac{\vert f''(x^*)\vert}{2\vert f'(x^*)\vert}=\frac{\vert g''(x^*)\vert}{2}
\end{displaymath} (70)

We see that, if $f'(x^*)\ne 0$, then the order of convergence of the Newton-Raphson method is $p=2$, and the rate of convergence is $\mu=\vert f''(x^*)\vert/2\vert f'(x^*)\vert$. However, if $f'(x^*)=0$, the convergence is linear rather than quadratic, as shown in the example below.

Example: Consider solving the equation
\begin{displaymath}
f(x)=x^3-4x^2+5x-2=(x-1)^2(x-2)=0
\end{displaymath}
which has a repeated root $x=1$ as well as a single root $x=2$. We have
\begin{displaymath}
f'(x)=3x^2-8x+5=(3x-5)(x-1),\;\;\;\;f''(x)=6x-8
\end{displaymath}
Note that $f'(x^*)=f'(1)=0$ at the root $x^*=1$. We further find:


\begin{displaymath}
g'(x)=\frac{f(x)f''(x)}{(f'(x))^2}=\frac{(x-2)(6x-8)}{(3x-5)^2}
=\left\{\begin{array}{ll}1/2 & x=1\\ 0 & x=2\end{array}\right.
\end{displaymath}

We therefore have
\begin{displaymath}
e_{n+1}\approx\left\{\begin{array}{ll}e_n/2 & x^*=1\\ -g''(2)e_n^2/2 & x^*=2\end{array}\right.
\end{displaymath}

We see the iteration converges quadratically to the single root $x=2$, but only linearly to the repeated root $x=1$.
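The two convergence behaviors can be verified numerically; a minimal sketch (the starting points $0.5$ and $2.5$ and the step counts are illustrative choices, not from the original text):

```python
f  = lambda x: x**3 - 4*x**2 + 5*x - 2   # (x-1)^2 (x-2)
df = lambda x: 3*x**2 - 8*x + 5          # (3x-5)(x-1)

def newton_errors(x0, root, steps):
    """Run Newton's iteration and record the error |root - x_n| at each step."""
    x, errs = x0, []
    for _ in range(steps):
        x = x - f(x) / df(x)
        errs.append(abs(root - x))
    return errs

e_double = newton_errors(0.5, 1.0, 8)  # toward repeated root: |e| roughly halves
e_single = newton_errors(2.5, 2.0, 5)  # toward single root: |e| roughly squares
```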

We consider in general a function with a repeated root at $x=a$ of multiplicity $k$:

\begin{displaymath}
f(x)=(x-a)^kh(x),\;\;\;\;h(a)\ne 0
\end{displaymath} (71)

its derivative is
\begin{displaymath}
f'(x)=k(x-a)^{k-1}h(x)+(x-a)^kh'(x)=(x-a)^{k-1}[kh(x)+(x-a)h'(x)]
\end{displaymath} (72)

As $f'(a)=0$, the convergence of the Newton-Raphson method to $x=a$ is linear, rather than quadratic.

In such a case, we can accelerate the iteration by using a step size $\delta=k>1$:

\begin{displaymath}
x_{n+1}=g(x_n)=x_n-k\frac{f(x_n)}{f'(x_n)}
\end{displaymath} (73)

The derivative of this $g(x)=x-kf(x)/f'(x)$ is
\begin{displaymath}
g'(x)=1-k\frac{(f'(x))^2-f(x)f''(x)}{(f'(x))^2}=1-k+k\frac{f(x)f''(x)}{(f'(x))^2}
\end{displaymath} (74)

Now we show that this $g'(x)$ is zero at the repeated root $x=a$, so that the convergence to this root becomes quadratic.

We substitute $f(x)=(x-a)^kh(x)$, $f'(x)=(x-a)^{k-1}[kh+(x-a)h']$, and
\begin{eqnarray*}
f''(x) &=& \left[(x-a)^{k-1}[kh+(x-a)h']\right]' \\
&=& (k-1)(x-a)^{k-2}[kh+(x-a)h']+(x-a)^{k-1}[(k+1)h'+(x-a)h'']
\end{eqnarray*} (75)

into the expression for $g'(x)$ above to get (after some algebra):
\begin{eqnarray*}
g'(x) &=& 1-k+k\frac{f(x)f''(x)}{(f'(x))^2} \\
&=& 1-k+k\,\frac{h(k-1)[kh+(x-a)h']+h(x-a)[(k+1)h'+(x-a)h'']}{[kh+(x-a)h']^2}
\end{eqnarray*} (76)

At $x=a$, we get
\begin{displaymath}
g'(x)\bigg\vert_{x=a}=1-k+k\frac{(k-1)kh^2}{(kh)^2}=1-k+(k-1)=0
\end{displaymath} (77)

i.e., the convergence to the repeated root at $x=a$ is no longer linear but quadratic.

The difficulty, however, is that the multiplicity $k$ of a root is unknown ahead of time. If $\delta>1$ is used blindly, some root may be skipped, or the iteration may oscillate around the true root.

Example: Consider solving $f(x)=x^3-4x^2+5x-2=(x-1)^2(x-2)=0$, with a double root $x=1$ and a single root $x=2$. In the following, we compare the performance of both $g_1(x)=x-f(x)/f'(x)$ and $g_2(x)=x-2f(x)/f'(x)$.
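A sketch of this comparison (the starting point $x_0=0.5$, the number of steps, and the guard against landing exactly on a root are illustrative choices, not from the original text):

```python
f  = lambda x: x**3 - 4*x**2 + 5*x - 2   # (x-1)^2 (x-2)
df = lambda x: 3*x**2 - 8*x + 5          # (3x-5)(x-1)

g1 = lambda x: x - f(x) / df(x)          # plain Newton step
g2 = lambda x: x - 2 * f(x) / df(x)      # modified step with delta = k = 2

def iterate(g, x0, n):
    """Apply g n times, stopping early if a root is hit exactly (avoids 0/0)."""
    x = x0
    for _ in range(n):
        if f(x) == 0.0:
            break
        x = g(x)
    return x

# Both iterations start near the double root x = 1
err1 = abs(1 - iterate(g1, 0.5, 6))   # linear convergence: error halves per step
err2 = abs(1 - iterate(g2, 0.5, 6))   # quadratic: far more accurate in 6 steps
```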

Ruye Wang 2018-09-19