On 06/10/2013 05:51, Alexey A. Illarionov wrote:
> Hi,
> 
> Just curious, how is this form preferable to the other?

Consider finding the root of an equation whose root is at, say, x = 1,
when during the iteration we end up very close to the root with, for
example,

  (f / df) = 1e-20

In the old form:

  x_new = x - (f / df)
  f_new = F(x_new)
  df_new = (f_new - f) / (x_new - x)

since x is close to 1, subtracting 1e-20 from it in double precision
doesn't change its value, so we end up with x_new == x.

Hence, when we come to calculate df_new, the denominator is zero and
we end up with an infinite value of df_new (or a divide-by-zero
situation).
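
To make that concrete, here is a minimal C sketch (the function F and
the numbers are mine, chosen only to reproduce the state described
above, not taken from the actual code):

  #include <stdio.h>

  /* Toy function with F(1.0) == 1e-20 and slope 1, so evaluating it
     at x = 1 gives exactly the state described above.  */
  static double
  F (double x)
  {
    return (x - 1.0) + 1e-20;
  }

  int
  main (void)
  {
    double x  = 1.0;
    double f  = F (x);               /* 1e-20 */
    double df = 1.0;                 /* current slope estimate */

    double x_new  = x - (f / df);    /* 1 - 1e-20 rounds back to 1.0 */
    double f_new  = F (x_new);       /* hence equal to f */
    double df_new = (f_new - f) / (x_new - x);   /* 0.0 / 0.0 */

    printf ("x_new == x : %d\n", x_new == x);    /* prints 1 */
    printf ("df_new     : %g\n", df_new);        /* nan */
    return 0;
  }

Since x_new == x here, f_new equals f as well, so the division is
0.0 / 0.0 and df_new comes out as nan rather than inf, but either way
the slope estimate is destroyed.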

But we can see from the first line that x_new - x = -(f / df) so we
can rewrite df_new as:

  df_new = (f_new - f) / -(f / df) = df * (f - f_new) / f

Now, provided f is not zero (which we check), we will obtain a
finite value of df_new.
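
With the same illustrative state as above, the rewritten update stays
well defined (again just a sketch, with the same made-up F):

  #include <stdio.h>

  static double
  F (double x)
  {
    return (x - 1.0) + 1e-20;   /* same toy function as before */
  }

  int
  main (void)
  {
    double x  = 1.0;
    double f  = F (x);              /* 1e-20, non-zero */
    double df = 1.0;

    double x_new = x - (f / df);    /* still rounds back to 1.0 */
    double f_new = F (x_new);

    /* New form: divide by f, never by the rounded-away step.  */
    double df_new = df * (f - f_new) / f;

    printf ("df_new = %g\n", df_new);   /* prints 0: finite, no nan */
    return 0;
  }

In this degenerate case f_new happens to equal f exactly, so df_new
comes out as zero, but the point is that no division by zero occurs
and the result stays finite.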

> In my opinion, none of the forms avoids the loss of precision when
> f and f_new are close.

The problem is that when x_new and x are very close together, the
computed x - x_new is not equal to (f / df), because of the severe
loss of precision caused by the subtraction used to calculate x_new.

The new form avoids this by using (f / df) instead.
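
The loss can be seen directly with a step just above the spacing of
doubles below 1.0 (the value 1.5e-16 is mine, picked only to show the
effect):

  #include <stdio.h>

  int
  main (void)
  {
    double x = 1.0;
    double step = 1.5e-16;      /* plays the role of (f / df) */
    double x_new = x - step;    /* rounds to the double just below 1.0 */

    /* The computed difference is the spacing of doubles below 1.0,
       about 1.11e-16, not the 1.5e-16 actually subtracted.  */
    printf ("f / df    = %.17g\n", step);
    printf ("x - x_new = %.17g\n", x - x_new);
    return 0;
  }

Here the computed x - x_new is off by roughly 26%, and for a step of
1e-20 it collapses to zero as in the example above.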

   Brian
