On Sun, 2010-09-19 at 08:12 -0700, rjf wrote:
> Any program that computes a quadrature by evaluating the integral at
> selected points cannot provide a rigorous error bound. I don't know
> what a rigorous "error estimate" is, unless it is an estimate of what
> might be a rigorous bound. And thus not rigorous at all, since the
> estimate might be wrong.
This is, of course, true, provided that nothing is known about the function except its values on a set of measure zero. However, it is certainly possible for a numeric integration routine to provide rigorous error bounds when some additional constraints are satisfied. For example, many functions have bounded variation, have a bounded derivative, or are strictly increasing. I have found situations where I would like to supply that information to an integration routine and have it use that information to give me a rigorous error bound. And it probably isn't unreasonable for an integration routine to guess at that information, and to provide error bounds that are rigorous under a set of not-unreasonable hypotheses about the function it is integrating.

> A nasty function: f(x) := if member(x, evaluation_point_set) then 0 else 1.
> integrate(f,a,b) should be b-a. The numerical integration program
> returns 0.

I am aware that this type of problem can actually occur in real-world examples, but I suspect that in the specific case of numeric integration it can be somewhat alleviated by using random sampling to choose the initial evaluation_point_set. Regardless, the existence of unsolvable problems should not stop us from solving the solvable ones.

> There are SO many programs to do numerical quadrature, and some of
> them are quite good and free. Some are not free (e.g. NIntegrate in
> Mathematica). The real issue is not writing the program, but figuring
> out which program to use and how to set up the parameters to make it
> work best. I doubt that providing arbitrary-precision floats as a
> foundation will help solve all the issues, though a version of
> Gaussian quadrature or Clenshaw-Curtis using bigfloats would not be
> hard to do. See http://www.cs.berkeley.edu/~fateman/papers/quad.pdf
> for program listings of a few quadrature routines using maxima and
> bigfloats.
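To make the bounded-derivative case concrete, here is a minimal sketch in plain Python (the function name and interface are mine, not an existing routine): if the caller can certify |f'(x)| <= M on [a, b], the composite midpoint rule carries a provable truncation-error bound of M*(b-a)^2/(4n).

```python
# Sketch only: composite midpoint rule that returns a rigorous
# truncation-error bound, ASSUMING the caller supplies a valid bound
# deriv_bound with |f'(x)| <= deriv_bound on [a, b]. Floating-point
# rounding error is ignored here; a fully rigorous routine would have
# to bound that too (e.g. via interval arithmetic).
import math

def midpoint_with_bound(f, a, b, deriv_bound, n=1000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        total += f(a + (i + 0.5) * h)
    total *= h
    # On each subinterval, |f(x) - f(midpoint)| <= deriv_bound * |x - m|,
    # so the error there is at most deriv_bound * h**2 / 4; summing over
    # all n subintervals gives deriv_bound * (b - a)**2 / (4 * n).
    error_bound = deriv_bound * (b - a) ** 2 / (4 * n)
    return total, error_bound

# Example: integrate sin on [0, pi]; |cos(x)| <= 1, and the exact value is 2.
value, bound = midpoint_with_bound(math.sin, 0.0, math.pi, 1.0, n=2000)
```

The bound is crude, O(1/n) where the midpoint rule actually converges like O(1/n^2) for smooth integrands, but unlike a heuristic error estimate it is rigorous under the stated hypothesis, no matter which points the adversary's function misbehaves between.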
> See what NIntegrate does, and think about whether you want to do that.
>
> RJF
>
> On Sep 17, 1:45 pm, Fredrik Johansson <[email protected]> wrote:
> > On Fri, Sep 17, 2010 at 12:48 AM, maldun <[email protected]> wrote:
> > > Do you see the problems?! These are caused by the high oscillation,
> > > but we get no warning. If you use scipy you would get the following:
> >
> > It is possible to get an error estimate back from mpmath, as follows:
> >
> > sage: mpmath.quad(lambda a: mpmath.exp(a)*mpmath.sin(1000*a),
> > [0,mpmath.pi], error=True)
> > (mpf('-0.71920642950258207'), mpf('1.0'))
> > sage: mpmath.quad(lambda a: mpmath.exp(a)*mpmath.sin(1000*a),
> > [0,mpmath.pi], maxdegree=15, error=True)
> > (mpf('-0.022140670492108779'), mpf('1.0e-37'))
> >
> > Currently it doesn't seem to work with mpmath.call (because
> > mpmath_to_sage doesn't know what to do with a tuple).
> >
> > The error estimates are also somewhat bogus (in both examples above --
> > the first value has been capped to 1.0, if I remember correctly what
> > the code does, and the second is an extrapolate of the quadrature
> > error alone and clearly doesn't account for the arithmetic error). But
> > it should be sufficient to correctly detect a problem and signal a
> > warning in the Sage wrapper code in typical cases.
> >
> > I'm currently working on rewriting the integration code in mpmath to
> > handle error estimates more rigorously and flexibly. This will work
> > much better in a future version of mpmath.
> >
> > Fredrik

--
To post to this group, send an email to [email protected]
To unsubscribe from this group, send an email to [email protected]
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
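P.S. On Fredrik's point about signaling a warning in the Sage wrapper code: a minimal sketch of what such a wrapper could look like (the name, signature, and threshold are hypothetical, not existing Sage code; it only assumes a quadrature callable that returns a (value, error_estimate) pair when called with error=True, as mpmath.quad does):

```python
# Sketch only: wrap an mpmath-style quadrature routine and emit a
# warning when the returned error estimate is large relative to the
# result. The wrapper name and rel_tol default are hypothetical.
import warnings

def quad_with_warning(quad_func, *args, rel_tol=1e-10, **kwargs):
    # quad_func is expected to accept error=True and return
    # (value, error_estimate), like mpmath.quad.
    value, err = quad_func(*args, error=True, **kwargs)
    if abs(err) > rel_tol * max(abs(value), 1):
        warnings.warn(
            "integration error estimate %s is large; "
            "the result may be unreliable" % err
        )
    return value
```

This would turn the silent bogus answer in the oscillatory example above into a visible warning, while leaving well-converged integrals untouched.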
