Ever prolific, Newton proposed around 1669 an iterative algorithm that converges quadratically towards the roots of a number. "Quadratically" means that the number of correct digits of the numerical estimate doubles at each step. As a consequence, the complexity of division and root extraction becomes equal to that of multiplication. This is a godsend for modern computations, where heavy root extractions appear in the algorithms of Salamin and of the Borwein brothers. Newton's algorithm has several qualities, which are presented in the following paragraphs.
Originally, i.e. more than three centuries ago, Newton's method was used to numerically estimate a root of the equation $x^3 - 2x - 5 = 0$. Today, this algorithm makes it possible to show the complexity equivalence of multiplication, division and root extraction.
In 1669, Newton proposed an iterative algorithm to find the roots of polynomials (Newton's iteration). Raphson then restated it in a more modern form in 1690. Simpson, Mourraille, Cauchy and Kantorovich completed and generalized it, which finally gives the following statement:
Theorem 2.1 Let $f$ be analytic in a complex neighborhood of $\alpha$. Suppose that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. Then the following algorithm
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \qquad (1)$$
converges uniformly and quadratically towards $\alpha$ when the initial value $x_0$ is chosen in a sufficiently small neighborhood of $\alpha$.
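As an illustration, the iteration of the theorem can be sketched in a few lines of Python (the sample function $f(x) = x^3 - 2x - 5$ and the starting point are illustrative choices, not part of the statement):

```python
# Newton's iteration x_{n+1} = x_n - f(x_n)/f'(x_n) on a sample function.
def newton(f, fprime, x0, steps=8):
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Illustrative choice: f(x) = x^3 - 2x - 5, started near its real root.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
print(root)  # close to 2.0945514815423265
```

With a 64-bit float the iteration reaches machine precision in a handful of steps, after which the estimate no longer changes.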
This algorithm has a remarkable property of error self-compensation: if $x_n$ is perturbed (for example truncated) so that it has only $p$ correct digits compared to $\alpha$, then $x_{n+1}$ will nevertheless have about $2p$ correct digits, provided that enough digits are used at this step. In other words, the quadratic convergence is preserved.
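This self-compensation is easy to observe numerically. The sketch below (an illustrative experiment, using the division-free reciprocal iteration $x(2 - Ax)$ discussed further down) truncates an estimate of $1/3$ to 5 digits and checks that one Newton step still roughly doubles the number of correct digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
A = Decimal(3)

exact = Decimal(1) / A                 # ~40 correct digits of 1/3
x_trunc = Decimal("0.33333")           # truncated to 5 correct digits
x_next = x_trunc * (2 - A * x_trunc)   # one Newton step for f(x) = 1/x - A

print(abs(exact - x_next))  # ~3e-11: about 10 correct digits again
```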
One straightforward consequence is that computing $\alpha$ to $n$ digits only requires initializing the algorithm with a couple of correct digits, then doubling the number of digits taken into account at each step until at least $n$ digits are reached. This dramatically reduces the complexity of the computation and makes it possible to show that multiplication, division and root extraction are equivalent.
Let's choose a number $A > 0$. We want to compute its multiplicative inverse $1/A$ with $n$ correct digits. Using the function $f(x) = 1/x - A$, Newton's algorithm becomes
$$x_{k+1} = x_k\,(2 - A x_k) \qquad (2)$$
which uses only multiplications and additions. The equation
$$\frac{1}{A} - x_{k+1} = A\left(\frac{1}{A} - x_k\right)^2 \qquad (3)$$
illustrates the quadratic convergence of the algorithm. Suppose now that we initialize the algorithm with one correct digit. We then have
$$\left|\frac{1}{A} - x_k\right| \leq \frac{1}{A}\,10^{-2^k}. \qquad (4)$$
Thanks to the error self-compensation property of Newton's method, we are allowed to use only $O(2^k)$ digits at the $k$-th step. And from (2), each step requires only two multiplications ($A \times x_k$ and $x_k \times (2 - A x_k)$) and one subtraction ($2 - A x_k$) to obtain $x_{k+1}$. Thanks to the quadratic convergence, only about $\log_2 n$ iterations of (2) are needed to obtain $n$ correct digits of $1/A$. If we define $M(d)$ as the complexity of a multiplication method applied to $d$ digits, and note that the complexity of an addition is only $O(d)$, we then obtain a final complexity of
$$\sum_{k=0}^{\log_2 n}\left(2M(2^k) + O(2^k)\right) \qquad (5)$$
for the computation of the multiplicative inverse. Indeed, we can reasonably suppose that $2M(d) \leq M(2d)$, which means that two multiplications of numbers with $d$ digits are slightly cheaper than one multiplication of two numbers with $2d$ digits. It follows that the sum in (5) is $O(M(n))$. Thus, the multiplicative inverse is not much more complex than multiplication (it is an $O(M(n))$). Moreover, since $a/b = a \times (1/b)$, division is also an $O(M(n))$.
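The whole scheme can be sketched with Python's decimal module (a sketch under stated assumptions: the seed comes from a hardware float, and the working precision is doubled at each step, capped at the target):

```python
from decimal import Decimal, getcontext

def reciprocal(a, digits):
    """Sketch: 1/a to `digits` digits via x_{k+1} = x_k (2 - a x_k),
    using only multiplications and subtractions, with the working
    precision doubled at each step (mutates the global decimal context)."""
    a = Decimal(str(a))
    x = Decimal(repr(1.0 / float(a)))   # hardware-float seed: ~16 digits
    prec = 4
    while prec < digits + 2:
        prec = min(2 * prec, digits + 2)
        getcontext().prec = prec        # error self-compensation at work
        x = x * (2 - a * x)             # 2 multiplications, 1 subtraction
    return +x                           # round to the final precision

print(reciprocal(3, 50))  # 0.3333... to about 50 digits
```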
Square roots also behave well. We can use the function $f(x) = 1/x^2 - A$, which leads to an estimate of $1/\sqrt{A}$, and then use the fact that $\sqrt{A} = A \times (1/\sqrt{A})$. The associated iteration is
$$x_{k+1} = \frac{x_k\,(3 - A x_k^2)}{2}$$
which again uses only multiplications, additions and divisions by 2, and in practice leads to a final complexity of a small constant multiple of $M(n)$.
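A sketch of this route to the square root (hardware-float seed for $1/\sqrt{A}$, and a fixed working precision for simplicity rather than the doubling schedule):

```python
from decimal import Decimal, getcontext

def sqrt_newton(a, digits):
    """Sketch: sqrt(a) via the division-free iteration
    x_{k+1} = x_k (3 - a x_k^2) / 2, which converges to 1/sqrt(a),
    followed by sqrt(a) = a * x.  Only divisions by 2 occur."""
    getcontext().prec = digits + 5
    a = Decimal(str(a))
    x = Decimal(repr(1.0 / float(a) ** 0.5))  # seed for 1/sqrt(a)
    for _ in range(digits.bit_length()):      # ~log2(digits) steps suffice
        x = x * (3 - a * x * x) / 2
    return +(a * x)

print(sqrt_newton(2, 50))  # 1.4142135623... to about 50 digits
```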
A natural alternative uses $f(x) = x^2 - A$ directly, but this leads to an iteration containing a division:
$$x_{k+1} = \frac{1}{2}\left(x_k + \frac{A}{x_k}\right)$$
However, the prolific Schönhage, full of ideas (see also FFT), proposed in 1971 to replace the division with another Newton iteration. He thus obtained the following coupled iteration
$$x_{k+1} = x_k + v_k\,(A - x_k^2), \qquad v_{k+1} = v_k + v_k\,(1 - 2\,x_{k+1} v_k),$$
in which $x_k$ estimates $\sqrt{A}$ and the auxiliary sequence $v_k$ estimates $1/(2\sqrt{A})$. This algorithm is called the "coupled Newton iteration" (p. 257) and in practice gives a complexity of a small constant multiple of $M(n)$.
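Under the same assumptions (hardware-float seeds, fixed working precision), the coupled iteration can be sketched as follows; the names `x` and `v` for the two sequences are mine:

```python
from decimal import Decimal, getcontext

def coupled_sqrt(a, digits):
    """Sketch of the division-free coupled Newton iteration:
       x_{k+1} = x_k + v_k (a - x_k^2)          -> sqrt(a)
       v_{k+1} = v_k + v_k (1 - 2 x_{k+1} v_k)  -> 1/(2 sqrt(a))"""
    getcontext().prec = digits + 5
    a = Decimal(str(a))
    x = Decimal(repr(float(a) ** 0.5))        # seed for sqrt(a)
    v = Decimal(repr(0.5 / float(a) ** 0.5))  # seed for 1/(2 sqrt(a))
    for _ in range(digits.bit_length()):      # ~log2(digits) steps suffice
        x = x + v * (a - x * x)
        v = v + v * (1 - 2 * x * v)
    return +x

print(coupled_sqrt(2, 50))  # 1.4142135623... to about 50 digits
```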