This is another post inspired by the New Turing Omnibus, which is a great book, even if its author believes in some very stupid things.

The Newton-Raphson method is a way to find the roots of an equation (here, a polynomial). It does not always work, but it is a powerful tool when it does.

E.g., let us say we had f(x) = 3x^4 + 2x^2 - x - 15; the root of this is the value of x that causes the equation to evaluate to zero.

To do this we first estimate a root value and evaluate the function there; if the result is not zero, we take as the next guess the point where the tangent to the function at the first guess evaluates to zero. We can compute the tangent from the derivative.

E.g., in the case here f(x) = 3x^4 + 2x^2 - x - 15, so f'(x) = 12x^3 + 4x - 1.

How to find the point where the tangent evaluates to zero? Well, the formula for a tangent (or any straight line) is of the form y = mx + c and in this case, with our guess x0, that means m = f'(x0) and, at the guessed point, y = f(x0).

Clearly the tangent line is zero when x = -c/m and we know that c = f(x0) - mx0 = f(x0) - f'(x0)x0.

So x = -c/m = -(f(x0) - f'(x0)x0)/f'(x0) = x0 - f(x0)/f'(x0)

Therefore our new guess, x1, should be x1 = x0 - f(x0)/f'(x0)
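The whole procedure is just this update rule applied repeatedly. A minimal sketch in Python (the function and its derivative are passed in as callables; the tolerance and iteration cap here are arbitrary choices of mine, not from the post):

```python
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Repeat x1 = x0 - f(x0)/f'(x0) until f(x) is within tol of zero."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / f_prime(x)
    raise RuntimeError("did not converge")
```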

If we try the example here, we can start with a guess of 2 – the function evaluates to 39 and its derivative to 103, giving a new guess of 2 - 39/103 ≈ 1.62 (to two decimal places).

In turn this evaluates to 9.29 and so on…
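The iterations can be sketched in a few lines of Python. The polynomial below is my reconstruction of the example, chosen to match the numbers quoted in the post (f(2) = 39, f'(2) = 103, f(1.62) ≈ 9.29, root ≈ 1.4248334823):

```python
# Reconstructed example polynomial: f(2) = 39, f'(2) = 103.
def f(x):
    return 3 * x**4 + 2 * x**2 - x - 15

def df(x):
    return 12 * x**3 + 4 * x - 1

x = 2.0
for i in range(7):  # seven iterations, as in the spreadsheet experiment
    x = x - f(x) / df(x)
    print(f"iteration {i + 1}: x = {x:.10f}")
```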

**Update:** I tested this out on a spreadsheet and it took just seven iterations to get a root that evaluated to zero at 9 significant places:

1.4248334823

## 3 responses to “Newton-Raphson method”


Under relatively mild assumptions about the smoothness of the function (for one thing, it obviously needs to be at least once differentiable), N-R is locally quadratically convergent. That means that once you get near enough to a root, the error in each solution is bounded by a constant multiple of the square of the previous error. Roughly speaking, once you’re close, you double the number of correct decimal places with each iteration (until either you stop or run out of numerical precision).

Yes, you can see that very clearly on a spreadsheet or a logarithmic plot.
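The digit-doubling is also easy to check in a few lines of code; a quick sketch using f(x) = x^2 - 2, whose root is √2 (my example, not one from the post):

```python
import math

# Track the error at each Newton step: with quadratic convergence, each
# error is roughly a constant multiple of the square of the previous one.
f = lambda x: x * x - 2
df = lambda x: 2 * x

x, errors = 1.5, []
for _ in range(5):
    x = x - f(x) / df(x)
    errors.append(abs(x - math.sqrt(2)))
print(errors)
```

The printed errors shrink from roughly 1e-3 to the limits of double precision within a handful of steps.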