
Boris Gourévitch
The world of Pi - V2.57
last modified 13/04/2013



Isaac Newton

Newton’s method

1 The roots of the problem

Let's imagine that you're asked for the first digits of √2. Silence for a few seconds... mmmh... well, it's above 1, and less than 2 because 2^2 = 4... It's less than 1.5 since 1.5^2 = 2.25, etc... Nothing very simple... So how could the mathematicians of the 17th century numerically compute the roots of a 4th-degree polynomial?

Always prolific, Newton proposed around 1669 (!) an iterative algorithm that converges quadratically towards roots. Quadratically means that the number of correct digits of the numerical estimate doubles at each step. Thenceforth, the algorithmic complexity of division and root extraction becomes equal to that of multiplication. A godsend for modern computations, where heavy root extractions appear in the algorithms of Salamin or those of the Borwein brothers. Newton's algorithm has several qualities, which are presented in the following paragraphs.

2 Wording of Newton’s method

Originally, i.e. more than three centuries ago, Newton's method was used to numerically estimate a root a of the equation f(x) = 0. In fact, this algorithm makes it possible to show the complexity equivalence of multiplication, division and root extraction.

In 1669, Newton proposed an iterative algorithm to find the roots of polynomials (Newton's iteration). Raphson then reworded it in a more modern way in 1690. Simpson, Mourraille, Cauchy and Kantorovich completed it and made it more general, which finally gives the following wording:

Theorem 2.1 Let f be analytic in a complex neighborhood of z. Suppose that f(z) = 0 and f'(z) ≠ 0. Then the iteration

x_{k+1} = x_k - f(x_k)/f'(x_k)

converges uniformly and quadratically towards z when the initial value x_0 is chosen in a neighborhood of z.
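As a small illustration (not part of the original text), here is a minimal Python sketch of the iteration x_{k+1} = x_k - f(x_k)/f'(x_k), applied to f(x) = x^2 - 2, whose positive root is √2:

```python
# Newton's iteration x_{k+1} = x_k - f(x_k)/f'(x_k) applied to
# f(x) = x^2 - 2, whose positive root is sqrt(2).
def newton(f, fprime, x0, steps):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5, steps=5)
print(root)  # 1.4142135623730951
```

Printing the successive iterates makes the quadratic convergence visible: the number of correct digits roughly doubles at each step.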

This algorithm has an amazing property of error self-compensation: if x_k is modified so that only M correct digits are left compared to z, then x_{k+1} will nevertheless have 2M correct digits, provided that enough digits are used at this step. In other words, the quadratic convergence is preserved.

One straightforward consequence is that computing z to n digits only requires initializing the algorithm with a couple of correct digits, then doubling the number of digits taken into account at each step until at least n digits are reached. This dramatically reduces the complexity of the computation and makes it possible to show that multiplication, division and root extraction are equivalent.
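To make this digit-doubling schedule concrete, here is a sketch (in Python, with function names of my own choosing) that computes √2 to n digits while doubling the working precision at each step, so that early iterations stay cheap:

```python
from decimal import Decimal, getcontext

def sqrt2(n):
    """sqrt(2) to n significant digits, doubling the working precision
    at each Newton step so that early iterations stay cheap."""
    target = n + 5                        # a few guard digits
    prec, x = 4, Decimal("1.5")           # a crude 1-digit estimate suffices
    while prec < target:
        prec = min(2 * prec, target)
        getcontext().prec = prec
        x = (x + Decimal(2) / x) / 2      # Newton step for f(x) = x^2 - 2
    for _ in range(2):                    # two cleanup steps at full precision
        x = (x + Decimal(2) / x) / 2
    getcontext().prec = n
    return +x                             # round to n significant digits

print(sqrt2(50))
```

The total work is dominated by the last few steps at full precision, which is exactly the argument used below to bound the overall complexity.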

3 Applications

3.1 Multiplicative inverse of a number

Let's choose a number |y| < 1. We want to compute its multiplicative inverse with n = 2^d correct digits. Using the function f(x) = 1/x - y, Newton's algorithm becomes

x_{k+1} = 2x_k - x_k^2 y     (2)

which uses only multiplications and additions. The equation

x_{k+1} - 1/y = -y (x_k - 1/y)^2

illustrates the quadratic convergence of the algorithm. Suppose now that we initialize the algorithm with one correct digit. We then have

|x_0 - 1/y| < 1/10.

From the error self-compensation property of Newton's method, we may use only 2^k digits at the k-th step. And from (2), we only need 2 multiplications (x_{k-1} · x_{k-1} and x_{k-1}^2 · y) and 2 additions (x_{k-1} + x_{k-1} and 2x_{k-1} - x_{k-1}^2 y) to obtain x_k. Thanks to the quadratic convergence, only log_2(n) = d iterations of (2) are needed to obtain n correct digits of 1/y. If we define M(n) as the complexity of a multiplication method applied to n digits, and given that the complexity of an addition is n, we then obtain a final complexity of

sum_{k=1}^{log_2(n)} (2M(2^k) + 2·2^k) < 4M(n) + 4n < 8M(n)     (5)

for the computation of the multiplicative inverse. Indeed, we can reasonably suppose that 2M(p) < M(2p), i.e. that 2 multiplications of numbers with p digits are slightly less complex than the multiplication of two numbers with 2p digits. It follows that 2M(2^k) < M(2^{k+1}) < ... < 2^{-(d-k-1)} M(n) in (5). Thus, the multiplicative inverse is not much more complex than multiplication (it is an O(M(n))). Moreover, since a/b = a · (1/b), division is also an O(M(n)).
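As a sketch of iteration (2) in floating point (the function name is my own), note how the loop body contains no division at all:

```python
# Division-free Newton iteration for 1/y: x_{k+1} = 2*x_k - x_k^2 * y.
# The loop body uses only multiplications and additions/subtractions.
def reciprocal(y, x0=1.0, steps=6):
    x = x0
    for _ in range(steps):
        x = 2 * x - x * x * y   # error is squared (times -y) at each step
    return x

r = reciprocal(0.7)             # converges to 1/0.7 = 1.428571...
```

With a reasonable starting value, six steps are ample to reach full double precision.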

3.2 Square root extraction

Things also go well here. We can use the function f(x) = 1/x^2 - y, which leads to an estimate of 1/√y; we then use the fact that √y = y · (1/√y). The associated iteration is

x_{k+1} = x_k + x_k (1 - y x_k^2)/2

which in practice leads to a final complexity of around 4M(n).
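A floating-point sketch of this scheme (names are my own): iterate towards 1/√y without any division, then multiply by y:

```python
# Division-free square root: x_k -> 1/sqrt(y) via
#   x_{k+1} = x_k + x_k * (1 - y * x_k^2) / 2,
# then sqrt(y) = y * (1/sqrt(y)). The division by 2 is only a digit shift.
def sqrt_via_rsqrt(y, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x + x * (1 - y * x * x) / 2
    return y * x

s = sqrt_via_rsqrt(2.0, x0=0.7)   # 0.7 is a rough estimate of 1/sqrt(2)
```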

A natural alternative directly uses f(x) = y - x^2, but this leads to an iteration involving a division:

x_{k+1} = x_k + (y - x_k^2)/(2x_k).

However, the prolific Schönhage, full of ideas (see also FFT), proposed in 1971 to replace 1/(2x_k) with another iteration. He then obtained the following coupled iteration

x_{k+1} = x_k + v_k (y - x_k^2)
v_{k+1} = v_k + v_k (1 - 2 x_{k+1} v_k)

where v_{k+1} estimates 1/(2√y). This algorithm is called “Newton's coupled iteration” ([5], p257) and in practice gives a complexity of around 3M(n).
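The coupled scheme can be sketched as follows (a floating-point toy version; the standard coupled form is assumed, with x_k tending to √y and the auxiliary v_k to 1/(2√y)):

```python
# Coupled Newton iteration for sqrt(y): the auxiliary v_k replaces the
# division by 2*x_k in the basic iteration for f(x) = y - x^2.
def coupled_sqrt(y, x0, v0, steps=6):
    x, v = x0, v0
    for _ in range(steps):
        x = x + v * (y - x * x)        # Newton step towards sqrt(y)
        v = v + v * (1 - 2 * x * v)    # Newton step towards 1/(2*x)
    return x

s2 = coupled_sqrt(2.0, x0=1.5, v0=0.35)  # 0.35 ~ 1/(2*sqrt(2))
```

The update for v is itself a division-free Newton step (for f(v) = 1/v - 2x), which is why the whole computation avoids full-precision divisions.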

You can find several refinements of Newton's method in [1, 4]. The complexity equivalence of multiplication, division and root extraction is presented in [2, 3].


[1]   J. Arndt, C. Haenel, “π Unleashed”, Springer-Verlag, Berlin Heidelberg New York, 2001.

[2]   R. Brent, “Fast multiple-precision evaluation of elementary functions”, Journal of the Association for Computing Machinery, 1976, pp. 242-251.

[3]   S. A. Cook, S. O. Aanderaa, “On the minimum computation time of functions”, Trans. AMS, 1969, vol. 142, pp. 291-314.

[4]   A. S. Householder, “The Numerical Treatment of a Single Nonlinear Equation”, McGraw-Hill, New York, 1970.

[5]   A. Schönhage, V. Strassen, “Schnelle Multiplikation großer Zahlen”, Computing, 1971, vol. 7, pp. 281-292.
