Next: Newton-Raphson method (multivariate) Up: Solving Non-Linear Equations Previous: Fixed point iteration

## Newton-Raphson method (univariate)

To solve the equation $f(x) = 0$, we first consider the Taylor series expansion of $f(x)$ about any point $x_n$:

$$f(x) = f(x_n) + f'(x_n)(x - x_n) + \frac{f''(x_n)}{2!}(x - x_n)^2 + \cdots \qquad (55)$$

If $f(x)$ is linear, i.e., its slope is a constant for any $x$, then the second and higher order terms are all zero, and we have

$$f(x) = f(x_n) + f'(x_n)(x - x_n) \qquad (56)$$

To find the root $x^*$ at which $f(x^*) = 0$, we set the above to zero and solve the resulting equation for $x$ to get

$$x = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \Delta x \qquad (57)$$

where $\Delta x = -f(x_n)/f'(x_n)$ is the step we need to take.

If $f(x)$ is not linear, the sum of the first two terms of the Taylor expansion is only an approximation of $f(x)$, and the resulting $x$ found above is only an estimate of the root. However, it can still be used as a step of an iterative process by which the estimate is continuously improved and the solution can eventually be reached:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \Delta x_n, \qquad n = 0, 1, 2, \dots \qquad (58)$$

where $\Delta x_n = -f(x_n)/f'(x_n)$, and $x_{n+1}$ is an improved estimate of the root based on the previous approximation $x_n$, i.e., the root $x^*$ will be approached by this iteration, as illustrated in the figure.
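As a concrete illustration (not part of the original text), the iteration in Eq. (58) can be sketched in a few lines of Python; the example function $x^2 - 2$ and the starting point are arbitrary choices:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / df(x)      # the step Delta x_n = -f(x_n)/f'(x_n)
        x += dx
        if abs(dx) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0, whose positive root is sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)
```

Starting from $x_0 = 1$, the estimate settles on $\sqrt{2}$ after only a handful of steps, reflecting the quadratic convergence discussed below.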

The Newton-Raphson method can be considered as a fixed point iteration $x_{n+1} = g(x_n)$ based on

$$g(x) = x - \frac{f(x)}{f'(x)}$$

The root $x^*$ at which $f(x^*) = 0$ is also the fixed point of $g(x)$, i.e., $g(x^*) = x^*$. For the iteration to converge, $g(x)$ needs to be a contraction with $|g'(x)| < 1$. Consider

$$g'(x) = 1 - \frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = \frac{f(x)f''(x)}{f'(x)^2} \qquad (59)$$

We make the following notes:

• At the root $x^*$ where $f(x^*) = 0$, if $f'(x^*) \ne 0$, then $g'(x^*) = 0$; $g(x)$ is a contraction and the iteration converges quadratically when $x_n$ is close to $x^*$.
• The iteration cannot proceed if $f'(x_n) = 0$ (the tangent is horizontal). In this case we can modify the iteration by adding a small value $\epsilon$ to $f'(x_n)$ so that the denominator $f'(x_n) + \epsilon \ne 0$.
• It is difficult to know the number of roots of a nonlinear equation (unlike the case of a linear equation), which can vary from zero (e.g., $e^x = 0$) to infinity (e.g., $\sin x = 0$). One can try different initial guesses in the range of interest to see if different roots can be found.
• Sometimes a parameter $\delta > 0$ can be used to control the step size of the iteration:

$$x_{n+1} = x_n - \delta\,\frac{f(x_n)}{f'(x_n)} \qquad (60)$$

• If $\delta < 1$, the iteration is decelerated. Although the convergence becomes slower, this may be desirable if the function is not smooth, with many local variations.

• If $\delta > 1$, the iteration takes greater steps, but the convergence may or may not be accelerated. Due to the greater step size, the root may be skipped and missed. Sometimes the convergence may become significantly slower, or the iteration may even oscillate around the true root, such as the example shown in the figure below.
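The effect of the step-size parameter in Eq. (60) can be seen in a small experiment; the function $x^2 - 2$, the tolerance, and the starting point below are arbitrary choices for illustration:

```python
def newton_steps(f, df, x0, delta, tol=1e-10, max_iter=1000):
    """Count iterations of x <- x - delta * f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x -= delta * f(x) / df(x)
        if abs(f(x)) < tol:
            return n, x
    return max_iter, x

f, df = lambda x: x * x - 2, lambda x: 2 * x
for delta in (0.5, 1.0):
    n, x = newton_steps(f, df, x0=2.0, delta=delta)
    print(delta, n, x)   # delta < 1 needs many more iterations than delta = 1
```

With $\delta = 1$ the iteration reaches the tolerance in a few steps (quadratic convergence), while $\delta = 0.5$ takes dozens of steps, since the damped iteration only converges linearly.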

The order of convergence of the Newton-Raphson iteration can be found based on the Taylor expansion of $f(x)$ in the neighborhood of the root $x^*$:

$$0 = f(x^*) = f(x_n) + f'(x_n)(x^* - x_n) + \frac{f''(x_n)}{2}(x^* - x_n)^2 + \cdots \qquad (61)$$

where $e_n = x_n - x^*$ is the error at the $n$th step. Dividing by $f'(x_n)$ and substituting the Newton-Raphson iteration

$$\frac{f(x_n)}{f'(x_n)} = x_n - x_{n+1} \qquad (62)$$

into the equation above, we get

$$0 = (x_n - x_{n+1}) + (x^* - x_n) + \frac{f''(x_n)}{2f'(x_n)}(x^* - x_n)^2 + \cdots \qquad (63)$$

i.e.

$$e_{n+1} = x_{n+1} - x^* = \frac{f''(x_n)}{2f'(x_n)}(x_n - x^*)^2 + \cdots = \frac{f''(x_n)}{2f'(x_n)}\,e_n^2 + \cdots \qquad (64)$$

When $x_n \rightarrow x^*$, all the higher order terms disappear, and the above can be written as

$$e_{n+1} \approx \frac{f''(x^*)}{2f'(x^*)}\,e_n^2 \qquad (65)$$

Alternatively, we can get the Taylor expansion of $x_{n+1} = g(x_n)$ in terms of $g(x)$:

$$x_{n+1} = g(x_n) = g(x^*) + g'(x^*)(x_n - x^*) + \frac{g''(x^*)}{2}(x_n - x^*)^2 + \cdots \qquad (66)$$

Subtracting $x^* = g(x^*)$ from both sides we get:

$$e_{n+1} = x_{n+1} - x^* = g'(x^*)\,e_n + \frac{g''(x^*)}{2}\,e_n^2 + \cdots \qquad (67)$$

Now we find $g'(x)$ and $g''(x)$:

$$g'(x) = \frac{f(x)f''(x)}{f'(x)^2} \qquad (68)$$

and

$$g''(x) = \frac{f'(x)^2 f''(x) + f(x)f'(x)f'''(x) - 2f(x)f''(x)^2}{f'(x)^3} \qquad (69)$$

Evaluating these at the root $x^*$ at which $f(x^*) = 0$, and substituting them back into the expression for $e_{n+1}$ above, the linear term is zero as $g'(x^*) = 0$, i.e., the convergence is quadratic, and we get the same result:

$$e_{n+1} = \frac{g''(x^*)}{2}\,e_n^2 = \frac{f''(x^*)}{2f'(x^*)}\,e_n^2 \qquad (70)$$

We see that, if $f'(x^*) \ne 0$, then the order of convergence of the Newton-Raphson method is $p = 2$, and the rate of convergence is $\mu = \left| f''(x^*)/2f'(x^*) \right|$. However, if $f'(x^*) = 0$, the convergence is linear rather than quadratic, as shown in the example below.
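This rate can be checked numerically. Using $f(x) = x^2 - 2$ (an arbitrary example with a simple root $x^* = \sqrt{2}$), the ratio $|e_{n+1}|/|e_n|^2$ should approach $|f''(x^*)/2f'(x^*)| = 1/(2\sqrt{2}) \approx 0.3536$:

```python
import math

f = lambda x: x * x - 2
df = lambda x: 2 * x
root = math.sqrt(2)

x = 1.5
for n in range(3):
    e = abs(x - root)
    x = x - f(x) / df(x)
    ratio = abs(x - root) / e**2    # tends to |f''(x*)/(2 f'(x*))| = 1/(2*sqrt(2))
    print(n, x, ratio)
```

The printed ratios settle on about 0.3536 within a couple of steps; the loop is kept short because the error soon shrinks below floating-point resolution.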

Example: Consider solving the equation

$$f(x) = (x - 1)^2(x - 2) = 0$$

which has a repeated root $x^* = 1$ as well as a single root $x^* = 2$. We have

$$f'(x) = 2(x - 1)(x - 2) + (x - 1)^2 = (x - 1)(3x - 5), \qquad f''(x) = 6x - 8$$

Note that $f'(x) = 0$ at the repeated root $x^* = 1$. We further find:

$$g(x) = x - \frac{f(x)}{f'(x)} = x - \frac{(x - 1)(x - 2)}{3x - 5}$$

and

$$g'(x) = \frac{f(x)f''(x)}{f'(x)^2} = \frac{(x - 2)(6x - 8)}{(3x - 5)^2}$$

We therefore have

$$g'(2) = 0, \qquad g'(1) = \frac{(1 - 2)(6 - 8)}{(3 - 5)^2} = \frac{1}{2}$$

We see the iteration converges quadratically to the single root $x^* = 2$, but only linearly (with rate $g'(1) = 1/2$) to the repeated root $x^* = 1$.
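The linear rate at the repeated root can be observed numerically. The sketch below (assuming the example function $f(x) = (x-1)^2(x-2)$, consistent with the iteration tables later in this section) tracks the error ratio $|e_{n+1}|/|e_n|$, which should approach $g'(1) = 1/2$:

```python
f = lambda x: (x - 1) ** 2 * (x - 2)
df = lambda x: (x - 1) * (3 * x - 5)    # product-rule derivative of f

x = 0.5                                 # start near the double root x* = 1
for n in range(20):
    e = abs(x - 1)
    x = x - f(x) / df(x)
    ratio = abs(x - 1) / e              # tends to g'(1) = 1/2 (linear convergence)
print(ratio)
```

After 20 iterations the error has only shrunk by roughly $2^{20}$, and each successive error is about half the previous one, confirming the linear rate $1/2$.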

We consider in general a function with a repeated root at $x^*$ of multiplicity $m > 1$:

$$f(x) = (x - x^*)^m h(x), \qquad h(x^*) \ne 0 \qquad (71)$$

its derivative is

$$f'(x) = m(x - x^*)^{m-1}h(x) + (x - x^*)^m h'(x) \qquad (72)$$

As

$$g'(x^*) = \lim_{x \to x^*} \frac{f(x)f''(x)}{f'(x)^2} = \frac{m(m - 1)}{m^2} = 1 - \frac{1}{m} \ne 0$$

the convergence of the Newton-Raphson method to $x^*$ is linear, rather than quadratic.

In such a case, we can accelerate the iteration by using a step size $\delta = m$:

$$x_{n+1} = g(x_n) = x_n - m\,\frac{f(x_n)}{f'(x_n)} \qquad (73)$$

and

$$g'(x) = 1 - m\,\frac{f'(x)^2 - f(x)f''(x)}{f'(x)^2} = (1 - m) + m\,\frac{f(x)f''(x)}{f'(x)^2} \qquad (74)$$

Now we show that this $g'(x)$ is zero at the repeated root $x^*$, therefore the convergence to this root is still quadratic.

We substitute $f(x) = (x - x^*)^m h(x)$, $f'(x) = m(x - x^*)^{m-1}h(x) + (x - x^*)^m h'(x)$, and

$$f''(x) = m(m - 1)(x - x^*)^{m-2}h(x) + 2m(x - x^*)^{m-1}h'(x) + (x - x^*)^m h''(x) \qquad (75)$$

into the expression for $g'(x)$ above to get (after some algebra):

$$g'(x) = (1 - m) + m\,\frac{h(x)\left[m(m - 1)h(x) + 2m(x - x^*)h'(x) + (x - x^*)^2 h''(x)\right]}{\left[m\,h(x) + (x - x^*)h'(x)\right]^2} \qquad (76)$$

At $x = x^*$, we get

$$g'(x^*) = (1 - m) + m\,\frac{m(m - 1)h(x^*)^2}{m^2 h(x^*)^2} = (1 - m) + (m - 1) = 0 \qquad (77)$$

i.e., the convergence to the repeated root at $x^*$ is no longer linear but quadratic.
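As a numerical check (again assuming the example function $f(x) = (x-1)^2(x-2)$ with a double root at $x^* = 1$, so $m = 2$), the modified iteration with $\delta = m$ reaches the repeated root in just a few steps:

```python
f = lambda x: (x - 1) ** 2 * (x - 2)
df = lambda x: (x - 1) * (3 * x - 5)
m = 2                                   # multiplicity of the double root x* = 1

x = 0.5
for n in range(5):
    x = x - m * f(x) / df(x)            # x_{n+1} = x_n - m f(x_n)/f'(x_n)
print(x)                                # converges quadratically to the double root 1
```

Compare this with the plain iteration ($\delta = 1$) from the same starting point, which needs many more steps because its error only halves per iteration near $x^* = 1$.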

The difficulty, however, is that the multiplicity $m$ of a root is unknown ahead of time. If a step size $\delta = m > 1$ is used blindly, some root may be skipped, and the iteration may oscillate around the real root.

Example: Consider solving $f(x) = (x - 1)^2(x - 2) = 0$, with a double root $x^* = 1$ and a single root $x^* = 2$. In the following, we compare the performance of the iteration with $\delta = 1$ and with $\delta = 2$.

• First use an initial guess from which the iteration converges to the double root $x^* = 1$.
• When $\delta = 1$ is used, the convergence is linear around the double root $x^* = 1$; it takes 16 iterations to converge.

• When $\delta = 2$ is used, the convergence is quadratic around the double root $x^* = 1$; it takes only 3 iterations to converge.

• Next use a different initial guess $x = 3$.
• If $\delta = 1$ is used, it takes 7 iterations to get $x = 2.00000$ with $f(x) = 0.00000$; the convergence to the single root $x^* = 2$ is quadratic:

| $n$ | $x_n$   | $f(x_n)$ | $f'(x_n)$ |
|-----|---------|----------|-----------|
| 1   | 3.00000 | 4.00000  | 8.00000   |
| 2   | 2.50000 | 1.12500  | 3.75000   |
| 3   | 2.20000 | 0.28800  | 1.92000   |
| 4   | 2.05000 | 0.05513  | 1.20750   |
| 5   | 2.00435 | 0.00439  | 1.01745   |
| 6   | 2.00004 | 0.00004  | 1.00015   |
| 7   | 2.00000 | 0.00000  | 1.00000   |
| 8   | 2.00000 | 0.00000  |           |

• If $\delta = 2$ is used, oscillation happens, as shown in the figure below. However, if a better initial guess is used, it takes only one step to get $x = 2.00000$ with $f(x) = 0.00000$; the convergence is significantly accelerated:

| $n$ | $x_n$   | $f(x_n)$ | $f'(x_n)$ |
|-----|---------|----------|-----------|
| 1   | 3.00000 | 4.00000  | 8.00000   |
| 2   | 2.00000 | 0.00000  |           |

Ruye Wang 2018-09-19