#8321: numerical integration with arbitrary precision
-------------------------+--------------------------------------------------
   Reporter:  burcin     |          Owner:                      
       Type:  defect     |         Status:  needs_work          
   Priority:  major      |      Milestone:  sage-4.7.2          
  Component:  symbolics  |       Keywords:  numerics,integration
Work_issues:             |       Upstream:  N/A                 
   Reviewer:             |         Author:  Stefan Reiterer     
     Merged:             |   Dependencies:                      
-------------------------+--------------------------------------------------
Changes (by fredrik.johansson):

  * owner:  burcin =>


Comment:

 Obviously mpmath should support a relative tolerance out of the box, but
 there's a not-so-hackish workaround that Sage could use in the meantime.
 The quadrature rules are implemented as classes and can be subclassed:

 {{{
 from mpmath import *
 from mpmath.calculus.quadrature import TanhSinh, GaussLegendre

 class RelativeErrorMixin(object):
     # Override estimate_error so convergence is judged relative to the
     # magnitude of the current result instead of as an absolute error.
     def estimate_error(self, results, prec, epsilon):
         mag = abs(results[-1])
         if len(results) == 2:
             return abs(results[0]-results[1]) / mag
         try:
             if results[-1] == results[-2] == results[-3]:
                 return self.ctx.zero
             # log10 of the relative change over the last one and two
             # refinement steps
             D1 = self.ctx.log(abs(results[-1]-results[-2])/mag, 10)
             D2 = self.ctx.log(abs(results[-1]-results[-3])/mag, 10)
             # invert D2 up front so the D1**2 * D2 term below cannot
             # divide by zero
             if D2 == 0:
                 D2 = self.ctx.inf
             else:
                 D2 = self.ctx.one / D2
         except ValueError:
             return epsilon
         D3 = -prec
         # same extrapolation heuristic as the default estimate_error,
         # applied to the relative differences
         D4 = min(0, max(D1**2 * D2, 2*D1, D3))
         return self.ctx.mpf(10) ** int(D4)

 class TanhSinhRel(RelativeErrorMixin, TanhSinh):
     pass

 class GaussLegendreRel(RelativeErrorMixin, GaussLegendre):
     pass
 }}}
 With this change:

 {{{
 >>> quad(lambda x: exp(-x**2)*log(x), [17,42], method=TanhSinhRel)
 mpf('2.5657285005610513e-127')
 >>>
 >>> quad(lambda x: exp(-x**2)*log(x), [17,42], method=GaussLegendreRel)
 mpf('2.5657285005610513e-127')
 }}}
 BTW, I think the class will be instantiated on every call this way, so the
 __init__ method should also be overridden to create a singleton.
 Alternatively, Sage could call the summation() method directly, which would
 remove some overhead; see the sketch below.
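
 Roughly, a sketch of the second option (untested; it assumes summation()
 takes (f, points, prec, epsilon, max_degree) and returns a (value, error)
 pair, which should be double-checked against mpmath's quadrature.py, and
 the helper names here are just illustrative):

 {{{
 from mpmath import mp, mpf, exp, log

 # Keep one rule instance around so its cached nodes are reused between
 # calls, and drive it through summation() directly instead of quad().
 _ts_rel = TanhSinhRel(mp)

 def quad_rel(f, a, b, max_degree=6):
     # max_degree is an illustrative cap on the quadrature degree
     prec = mp.prec
     eps = mpf(2)**(-prec)   # target (relative) tolerance
     # summation() is assumed to return (value, error_estimate)
     val, err = _ts_rel.summation(f, [mpf(a), mpf(b)], prec, eps, max_degree)
     return val

 # e.g. quad_rel(lambda x: exp(-x**2)*log(x), 17, 42)
 }}}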

 The estimate_error method should be replaced with something better and
 should be tested on many examples. The input "results" is a list of results
 generated at successive levels of refinement. The code above attempts to
 extrapolate the error estimate (in a rather convoluted way that should be
 fixed in mpmath :). One could be more conservative (and slower) by just
 comparing the last two results, as sketched below.
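
 A minimal sketch of that conservative variant (untested; the class name is
 illustrative, and it would be combined with TanhSinh/GaussLegendre exactly
 as above):

 {{{
 class ConservativeRelativeErrorMixin(object):
     # Crude but safe: the error estimate is simply the relative difference
     # between the last two refinement levels, so the rule keeps refining
     # until two successive results agree to the requested tolerance.
     def estimate_error(self, results, prec, epsilon):
         if len(results) < 2:
             return self.ctx.one
         mag = abs(results[-1])
         if not mag:
             # no sensible relative error for a zero result; give up early
             return epsilon
         return abs(results[-1] - results[-2]) / mag
 }}}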

-- 
Ticket URL: <http://trac.sagemath.org/sage_trac/ticket/8321#comment:36>