On Mon, Jan 19, 2009 at 11:26 PM, Robert Kern <robert.k...@gmail.com> wrote:

> On Tue, Jan 20, 2009 at 00:21, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> >
> > On Mon, Jan 19, 2009 at 10:48 PM, Robert Kern <robert.k...@gmail.com>
> wrote:
> >>
> >> On Mon, Jan 19, 2009 at 23:36, Charles R Harris
> >> <charlesr.har...@gmail.com> wrote:
> >> >
> >> > On Mon, Jan 19, 2009 at 9:17 PM, Robert Kern <robert.k...@gmail.com>
> >> > wrote:
> >> >>
> >> >> On Mon, Jan 19, 2009 at 22:09, Charles R Harris
> >> >> <charlesr.har...@gmail.com> wrote:
> >> >> >
> >> >> >
> >> >> > On Mon, Jan 19, 2009 at 7:23 PM, Jonathan Taylor
> >> >> > <jonathan.tay...@utoronto.ca> wrote:
> >> >> >>
> >> >> >> Interesting.  That makes sense and I suppose that also explains
> why
> >> >> >> there is no function to do this sort of thing for you.
> >> >> >
> >> >> > A combination of relative and absolute errors is another common
> >> >> > solution,
> >> >> > i.e., test against relerr*max(abs(array_of_inputs)) + abserr. In
> >> >> > cases
> >> >> > like
> >> >> > this relerr is typically eps and abserr tends to be something like
> >> >> > 1e-12,
> >> >> > which keeps you from descending towards zero any further than you
> >> >> > need
> >> >> > to.
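For concreteness, the mixed test described above might look roughly like the
following sketch in NumPy; the function name and default values are only
illustrative, not an existing API:

    import numpy as np

    def negligible(value, array_of_inputs,
                   relerr=np.finfo(float).eps, abserr=1e-12):
        # Tolerance scales with the inputs, with a small absolute floor so
        # the test does not chase zero any further than it needs to.
        tol = relerr * np.max(np.abs(array_of_inputs)) + abserr
        return np.abs(value) <= tol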
> >> >>
> >> >> I don't think the absolute error term is appropriate in this case. If
> >> >> all of my inputs are of the size 1e-12, I would expect a result of
> >> >> 1e-14 to be significantly far from 0.
> >> >
> >> > Sure, that's why you *chose* constants appropriate to the problem.
> >>
> >> But that's what eps*max(abs(array_of_inputs)) is supposed to do.
> >>
> >> In the formulation that you are using (e.g. that of
> >> assert_array_almost_equal()), the absolute error comes into play when
> >> you are comparing two numbers in ignorance of the processes that
> >> created them. The relative error in that formula is being adjusted by
> >> the size of the two numbers (*not* the inputs to the algorithm). The
> >> two numbers may be close to 0, but the relevant inputs to the
> >> algorithm may be ~1, let's say. In that case, you need the absolute
> >> error term to provide the scale information that is otherwise not
> >> present in the comparison.
> >>
> >> But if you know what the inputs to the calculation were, you can
> >> estimate the scale factor for the relative tolerance directly
> >> (rigorously, if you've done the numerical analysis) and the absolute
> >> tolerance is supernumerary.
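A small, made-up illustration of that point: the two compared numbers sit
near zero while the inputs that produced them are of order one, so a purely
relative test fails and the scale has to come from somewhere else.

    import numpy as np

    # Mathematically zero, but rounding leaves roughly 5.6e-17 behind.
    result = 0.1 + 0.2 - 0.3
    expected = 0.0

    relerr = np.finfo(float).eps

    # Purely relative: the tolerance is scaled by the tiny numbers being
    # compared, so the test cannot pass.
    print(abs(result - expected) <= relerr * max(abs(result), abs(expected)))  # False

    # Scale supplied by an absolute term, or by the inputs (~0.3) themselves.
    print(abs(result - expected)
          <= relerr * max(abs(result), abs(expected)) + 1e-12)                 # True
    print(abs(result - expected) <= relerr * 0.3)                              # True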
> >
> >
> > So you do bisection on an oddball curve,  512 iterations later you hit
> > zero... Or you do numeric integration where there is lots of
> cancellation.
> > These problems aren't new and the mixed method for tolerance is quite
> > standard and has been for many years. I don't see why you want to argue
> > about it, if you don't like the combined method, set the absolute error
> to
> > zero, problem solved.
>
> I think we're talking about different things. I'm talking about the
> way to estimate a good value for the absolute error. My
> array_of_inputs was not the values that you are comparing to zero, but
> the inputs to the algorithm that created the value you are comparing
> to zero.
>

Ah. But that won't generally work for polynomials; their zeros are too
ill-conditioned with respect to the coefficients. Even quadratics solved using
the standard formula with the +/- can be ill-conditioned. And that's not to
mention that the zeros are scale invariant, i.e., you can multiply the whole
equation by some ginormous number and the zeros will remain the same. It's
fun for a rainy day to check the scale invariance of the zero estimates of
various solution algorithms.
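
A rough sketch of that rainy-day check, using NumPy's np.roots and an
arbitrary scale factor:

    import numpy as np

    # x^2 - 3x + 2 has exact roots 1 and 2.
    coeffs = np.array([1.0, -3.0, 2.0])
    scale = 1e150                        # some ginormous number

    roots_plain = np.roots(coeffs)
    roots_scaled = np.roots(scale * coeffs)

    # The roots are mathematically identical; any difference reflects the
    # solver, not the polynomial.
    print(np.sort(roots_plain))
    print(np.sort(roots_scaled))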

On the other hand, that method of estimating the error might work for
integrals if the result scales with the input parameters.

Chuck