#15921: work around Maxima fpprintprec bug and other ARM-specific problems
-------------------------------------+-------------------------------------
       Reporter:  dimpase            |        Owner:
           Type:  defect             |       Status:  needs_review
       Priority:  major              |    Milestone:  sage-6.3
      Component:  calculus           |   Resolution:
       Keywords:  Maxima,            |    Merged in:
  fpprintprec, ARM                   |    Reviewers:
        Authors:                     |  Work issues:
Report Upstream:  Reported           |       Commit:
  upstream. Developers acknowledge   |  079bb9af4f12892268a19f0d218ac96bd72466f4
  bug.                               |     Stopgaps:
         Branch:                     |
  u/dimpase/arm_fixes_etc            |
   Dependencies:                     |
-------------------------------------+-------------------------------------

Comment (by pbruin):

 I'm starting to think that we shouldn't try to limit `fpprintprec` after
 all, because this will only introduce unnecessary rounding errors.  A more
 robust solution would be to leave it as 0 and to set `maxfpprintprec`
 (which is 16 by default) to 20 or some other sufficiently high value, so
 that the output precision is only controlled by the Lisp implementation
 (and the platform).  At least on x86_64 with ECL, both the length and the
 least significant digits are unpredictable: dividing some powers of 10 by
 3 gives results like the following:
 {{{
 3.333333333333333493e-5
 3.333333333333333222e-4
 0.003333333333333333
 0.03333333333333333
 0.3333333333333333
 3.3333333333333335
 33.333333333333336
 333.3333333333333
 3333.3333333333335
 33333.333333333336
 333333.3333333333
 3333333.3333333335
 3.3333333333333332093e+7
 3.3333333333333331347e+8
 }}}
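 For reference, here is a rough sketch of how the experiment above could be
 reproduced from a Sage session through the Maxima interface (the first two
 lines are the settings proposed above; I am assuming that `maxfpprintprec`
 can simply be assigned at the Maxima prompt, and presumably the same
 commands work through the library interface `maxima_calculus` used by the
 calculus code):
 {{{
 sage: maxima.eval('fpprintprec: 0;')      # leave fpprintprec at its default 0
 sage: maxima.eval('maxfpprintprec: 20;')  # raise the cap from 16 to 20
 sage: for k in range(-4, 10):             # the powers of 10 shown above
 ....:     print(maxima.eval('float(10^(%s)/3);' % k))
 }}}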
 The `...335` at the end of values like `3.3333333333333335` is clearly
 wrong, but limiting the precision would turn that into `...34`, which is
 even worse.  I have the feeling that we should just live with the above
 values and insert `# abs tol` and `# rel tol` markers in the doctests where
 appropriate.
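 Concretely, a doctest with such a marker would look something like this (the
 expected value shown is only illustrative; the point is that the comparison
 becomes numerical rather than textual):
 {{{
 sage: maxima('float(10^7/3)')  # rel tol 1e-14
 3333333.333333333
 }}}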

 After doing the above computation with three Lisp variants (ECL, GCL and
 SBCL), I have the impression that GCL has the best floating-point accuracy,
 ECL the worst, and SBCL is in between.  I don't know whether this is caused
 by differences in the floating-point internals or in the Lisp `format`
 function.
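
 To make that comparison more than an impression, one could measure the
 relative error of each printed value against the exact fraction; a rough
 sketch, with a few of the ECL values above pasted in by hand:
 {{{
 sage: ecl_output = {-4: '3.333333333333333493e-5',
 ....:                1: '3.3333333333333335',
 ....:                9: '3.3333333333333331347e+8'}
 sage: for k, s in sorted(ecl_output.items()):
 ....:     exact = 10^k / 3                       # exact rational value
 ....:     print(k, abs(RDF(s) - exact) / exact)  # relative error of the printed value
 }}}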

--
Ticket URL: <http://trac.sagemath.org/ticket/15921#comment:17>