Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Tue, 7 Feb 2012 16:13:10 -0800, Jonathan Bober jwbo...@gmail.com wrote: See http://trac.sagemath.org/sage_trac/ticket/12449 I made a patch to change the way that sage evaluates symbolic functions for basic python types, and at the same time changed RDF to just use math.gamma() instead of gsl's gamma function. (Note: math.gamma() should be available in sage-5.0 (python 2.7.2), but I don't think it is in 4.8.) I haven't looked at the patch yet, but the idea looks good and sane, which is a nice first step. Thank you! Snark on #sagemath -- To post to this group, send an email to sage-devel@googlegroups.com To unsubscribe from this group, send an email to sage-devel+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/sage-devel URL: http://www.sagemath.org
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
See http://trac.sagemath.org/sage_trac/ticket/12449 I made a patch to change the way that sage evaluates symbolic functions for basic python types, and at the same time changed RDF to just use math.gamma() instead of gsl's gamma function. (Note: math.gamma() should be available in sage-5.0 (python 2.7.2), but I don't think it is in 4.8.)
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Sun, 5 Feb 2012 20:03:48 -0800, Jonathan Bober jwbo...@gmail.com wrote: The source code does say: In extensive but non-exhaustive random tests, this function proved accurate to within <= 10 ulps across the entire float domain. Note that accuracy may depend on the quality of the system math functions, the pow function in particular. So if the accuracy of pow() in eglibc relies on long doubles, then there may be a problem, but maybe it will work well there. Within 10 ulp? Look at the comment at the start of [1], where they explain how they compute things, then explain the accuracy: * Accuracy: Gamma(x) is accurate to within * x > 0: error provably < 0.9 ulp. * Maximum observed in 1,000,000 trials was .87 ulp. * x < 0: * Maximum observed error < 4 ulp in 1,000,000 trials. This looks pretty good! Snark on #sagemath [1] http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/lib/libm/noieee_src/n_gamma.c
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On 02/ 5/12 10:16 PM, Jonathan Bober wrote: Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Old (libc): If that's the case, why not report the fact to the appropriate mailing list - bug-gsl at gnu.org? dave
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Mon, Feb 6, 2012 at 3:05 PM, Dr. David Kirkby david.kir...@onetel.net wrote: On 02/ 5/12 10:16 PM, Jonathan Bober wrote: Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Old (libc): If that's the case, why not report the fact to the appropriate mailing list - bug-gsl at gnu.org? dave Well, I just sort of assume that the gsl developers have some idea how accurate their gamma function is, and perhaps they consider their implementation just fine. It might not be a bug --- it might just be a design decision. Instead of "not very good" I should have said "not as accurate as eglibc".
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Tuesday, February 7, 2012 7:57:11 AM UTC+8, Jonathan Bober wrote: On Mon, Feb 6, 2012 at 3:05 PM, Dr. David Kirkby wrote: On 02/ 5/12 10:16 PM, Jonathan Bober wrote: Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Old (libc): If that's the case, why not report the fact to the appropriate mailing list - bug-gsl at gnu.org? dave Well, I just sort of assume that the gsl developers have some idea how accurate their gamma function is, and perhaps they consider their implementation just fine. It might not be a bug --- it might just be a design decision. Instead of "not very good" I should have said "not as accurate as eglibc". still, might be worth trying (unlike with (e)glibc) Dima
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Mon, 6 Feb 2012 17:59:57 -0800 (PST), Dima Pasechnik dimp...@gmail.com wrote: still, might be worth trying (unlike with (e)glibc) Well, after some poking around, it seems it's a mistake to confuse eglibc and glibc, especially when considering upstream friendliness to suggestions/bug reports/patches; here is a blog post by the Debian developer who uploaded eglibc into Debian: http://blog.aurel32.net/47 (now eglibc is the default libc in Debian, and hence in Ubuntu). So perhaps it isn't such a bad idea to try to fix the issue at its root: in eglibc! Snark
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Hi, this thread is getting... long. Let me try to refocus it on more constructive grounds. Let's list the problematic expressions: 1. gamma(float(6)) 2. SR(10.0r).gamma() 3. float(maxima(1.7e17)) 4. binomial(0.5r,5) The following is known: (a) 1, 3 and 4 give test-passing results with #tol rel, so even if wrong, they're not dead-wrong; (b) 1, 2 and 4 are all the same bug in various disguises: the libc lgammal function isn't good enough; this is explicit in 1 and 2, and implicit in 4, where for floats the computation is by a quotient of gamma calls [aside: that is probably a wrong way to do things]; (c) 3 is a different problem, and although #tol rel makes it pass the tests, the fact that it needs it to pass is bad. So my proposition is the following: (1) I'll try to fix the implementation of lgammal upstream; (2) I'll try to investigate the maxima string-to-float conversion; (3) the sage developers should still decide and make public some statement as to what they consider a correct numerical approximation, because this thread only made obvious that there's a need for a reference document. Does that look like a good plan? Snark
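[Editorial illustration of point (b): computing binomial(0.5, 5) as a quotient of gamma calls, as described above, versus the direct falling-factorial product. This is a hedged sketch with hypothetical helper names, not Sage's actual code path; it only shows why the quotient form inherits the gamma implementation's error while the product form happens to be exact for this input.]

```python
import math

def binomial_via_gamma(x, k):
    # The quotient-of-gammas form: every gamma call contributes its
    # own rounding error, which the division can compound.
    return math.gamma(x + 1) / (math.gamma(k + 1) * math.gamma(x - k + 1))

def binomial_via_product(x, k):
    # Falling-factorial form x*(x-1)*...*(x-k+1)/k!: only k
    # multiplications, and for x = 0.5, k = 5 every intermediate is an
    # exact dyadic rational, so the result is exact.
    num = 1.0
    for i in range(k):
        num *= x - i
    return num / math.factorial(k)

print(binomial_via_product(0.5, 5))  # 0.02734375 exactly
print(binomial_via_gamma(0.5, 5))    # the same value up to a few ulps
```

Here binomial(0.5, 5) = 7/256 = 0.02734375 has an exact double representation, which is why the doctest expected it exactly.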
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Sunday, 5 February 2012 21:03:46 UTC+8, Snark wrote: Hi, this thread is getting... long. Let me try to refocus it on more constructive grounds. Let's list the problematic expressions: 1. gamma(float(6)) 2. SR(10.0r).gamma() 3. float(maxima(1.7e17)) 4. binomial(0.5r,5) The following is known: (a) 1, 3 and 4 give test-passing results with #tol rel, so even if wrong, they're not dead-wrong; (b) 1, 2 and 4 are all the same bug in various disguises: the libc lgammal function isn't good enough; more precisely, the (e)glibc gamma function is not good. (lgammal isn't too good either, but it's a different story.) Let me post your link to a good implementation of gamma(), in netbsd's libc: http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/lib/libm/noieee_src/n_gamma.c?rev=1.7&content-type=text/plain&only_with_tag=MAIN (and it states the error bounds, by the way!) this is explicit in 1 and 2, and implicit in 4, where for floats the computation is by a quotient of gamma calls [aside: that is probably a wrong way to do things]. I'm sure there is research done on how to do this numerically in a proper way... (c) 3 is a different problem, and although #tol rel makes it pass the tests, the fact that it needs it to pass is bad. So my proposition is the following: (1) I'll try to fix the implementation of lgammal upstream; gammal, and gamma, the first thing, perhaps. Not sure if lgammal can be fixed easily, actually. (2) I'll try to investigate the maxima string-to-float conversion; (3) the sage developers should still decide and make public some statement as to what they consider a correct numerical approximation, because this thread only made obvious that there's a need for a reference document. This is a very good idea. The link to netbsd's implementation shows that it's doable... Does that look like a good plan?
Snark Dima
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On 05/02/2012 14:43, Dima Pasechnik wrote: On Sunday, 5 February 2012 21:03:46 UTC+8, Snark wrote: (b) 1, 2 and 4 are all the same bug in various disguises: the libc lgammal function isn't good enough; more precisely, the (e)glibc gamma function is not good. (lgammal isn't too good either, but it's a different story.) Let me post your link to a good implementation of gamma(), in netbsd's libc: http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/lib/libm/noieee_src/n_gamma.c?rev=1.7&content-type=text/plain&only_with_tag=MAIN (and it states the error bounds, by the way!) (I meant gammal and wrote lgammal indeed.) Yes, the code is wonderful, but it isn't for double-precision, I think. And there's more; since I was looking for a good and portable version, I turned to NetBSD, and since I didn't find exactly what I wanted, I joined #netbsd to discuss, and I learned: - 4.2BSD introduced the wrongly named gamma function as being the logarithm of the gamma function; - in 4.3BSD, it got renamed to lgamma, but gamma was still an alias to it; - it's only in 4.4BSD that things were corrected, but NetBSD still uses the 4.3BSD convention, even though they're supposed to be 4.4BSD, see [1] for example (I'm leaving a trail of bug reports everywhere I set foot these days...)! - even for the GNU libc, things are muddy, as you can see in [2], where again they mention that gamma is an lgamma! - in C99, the gamma function is supposed to be named tgamma, if I understand well, perhaps precisely because of the above points; the situation is... interesting! SO... it seems trying to use gamma from a system libc/libm is just asking for pain. Dima proposed on irc to just use gsl, and I would agree: (1) it's already there and (2) it will shield sage nicely from what looks like a historical conundrum. this is explicit in 1 and 2, and implicit in 4, where for floats the computation is by a quotient of gamma calls [aside: that is probably a wrong way to do things].
I'm sure there is research done on how to do this numerically in a proper way... Well, even without going to the research level, using gamma(x+1) = x*gamma(x) makes it easy to do computations for gamma(a)/gamma(b) with numbers big like |a-b| instead of like max{a,b}. But that's unrelated to this thread, and I opened a ticket about it. (c) 3 is a different problem, and although #tol rel makes it pass the tests, the fact that it needs it to pass is bad. So my proposition is the following: (1) I'll try to fix the implementation of lgammal upstream; gammal, and gamma, the first thing, perhaps. Not sure if lgammal can be fixed easily, actually. See above: there's courage and there's recklessness. Let's flee! (2) I'll try to investigate the maxima string-to-float conversion; (3) the sage developers should still decide and make public some statement as to what they consider a correct numerical approximation, because this thread only made obvious that there's a need for a reference document. This is a very good idea. The link to netbsd's implementation shows that it's doable... Yes, that code is beautiful, its documentation is beautiful. Snark PS: [1] http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/lib/libm/src/w_gamma.c?rev=1.11&content-type=text/plain&only_with_tag=MAIN [2] http://www.gnu.org/software/libc/manual/html_node/Special-Functions.html#Special-Functions
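[Editorial sketch of the gamma(x+1) = x*gamma(x) trick mentioned above. The helper name is hypothetical and the sketch only handles an integer offset a-b; Sage's eventual fix may look quite different. The point is that the intermediate product only involves |a-b| factors of moderate size, rather than the potentially huge values gamma(a) and gamma(b) themselves.]

```python
import math

def gamma_ratio(a, b):
    # Hypothetical helper (not Sage's actual code): compute
    # gamma(a)/gamma(b) for a = b + k with integer k >= 0, using the
    # recurrence gamma(x+1) = x*gamma(x), so only the product
    # b*(b+1)*...*(a-1) is ever formed.
    k = a - b
    assert k == int(k) and k >= 0, "sketch only handles integer offsets"
    prod = 1.0
    x = b
    for _ in range(int(k)):
        prod *= x
        x += 1.0
    return prod

# gamma(7.5)/gamma(2.5) = 2.5 * 3.5 * 4.5 * 5.5 * 6.5
print(gamma_ratio(7.5, 2.5))  # 1407.65625
```

For this input every intermediate product is an exact dyadic rational, so the ratio comes out exactly; the direct quotient math.gamma(7.5)/math.gamma(2.5) carries the rounding error of both gamma calls.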
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
See http://trac.sagemath.org/sage_trac/ticket/12449 I'm in the middle of [hopefully] fixing this by calling the gsl gamma function. While I'm at it, I'll also make the evaluation on basic types much faster, as it shouldn't go through Ginac. (Actually, I've already mostly written a fix. I opened the ticket and wrote this email while running 'make ptestlong'.) Have you actually checked the gsl implementation on ARM? For me it at least satisfies sage: [ZZ(sage.gsl.special_functions.gamma(n)) == n.gamma() for n in srange(1, 23)] == [True] * 22 True (Unfortunately, I just wrote that sage.gsl.special_functions.gamma(). There is no high level interface for you to immediately check if it gives the right answer.) - Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Old (libc): sage: gamma(1.23).str(base=2) '0.111010010010011100111010001011001010110010111' sage: RR(gamma(1.23r)).str(base=2) '0.111010010010011100111010001011001010110010111' gsl: sage: RR(gamma(1.23r)).str(base=2) '0.11101001001001110011101011100010110111001' look at all the wrong digits! In my testing, gsl_gamma() has a typical error of around 1e-8 on random input between 1 and 2, while tgammal() rounded to double precision has a typical error of 0 (compared to the correctly rounded value from mpfr). What do you get on ARM if you do something like sage: for n in range(100): ....: x = RR.random_element(1, 2) ....: print abs(RR(gamma(float(x))) - x.gamma()) ....: ?
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
I think we may be overlooking a very reasonable option. Python already has a gamma function in the math module! It is a separate implementation that does not depend on libc, and it gives reasonable results (though perhaps not as good as eglibc tgammal() on x86): sage: max(( abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp() for x in (RR.random_element(-171, 171) for _ in xrange(10)) )) 6.00 sage: mean([ abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp() for x in (RR.random_element(-171, 171) for _ in xrange(10)) ]) 0.8679600 sage: median([ abs( RR(math.gamma(float(x))) - x.gamma())/x.gamma().ulp() for x in (RR.random_element(-171, 171) for _ in xrange(10)) ]) 1.00 sage: [ZZ(math.gamma(float(n))) == n.gamma() for n in srange(1, 23)] == [True] * 22 True The source code does say: In extensive but non-exhaustive random tests, this function proved accurate to within <= 10 ulps across the entire float domain. Note that accuracy may depend on the quality of the system math functions, the pow function in particular. So if the accuracy of pow() in eglibc relies on long doubles, then there may be a problem, but maybe it will work well there. On Sun, Feb 5, 2012 at 2:16 PM, Jonathan Bober jwbo...@gmail.com wrote: See http://trac.sagemath.org/sage_trac/ticket/12449 I'm in the middle of [hopefully] fixing this by calling the gsl gamma function. While I'm at it, I'll also make the evaluation on basic types much faster, as it shouldn't go through Ginac. (Actually, I've already mostly written a fix. I opened the ticket and wrote this email while running 'make ptestlong'.) Have you actually checked the gsl implementation on ARM? For me it at least satisfies sage: [ZZ(sage.gsl.special_functions.gamma(n)) == n.gamma() for n in srange(1, 23)] == [True] * 22 True (Unfortunately, I just wrote that sage.gsl.special_functions.gamma(). There is no high level interface for you to immediately check if it gives the right answer.)
- Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Old (libc): sage: gamma(1.23).str(base=2) '0.111010010010011100111010001011001010110010111' sage: RR(gamma(1.23r)).str(base=2) '0.111010010010011100111010001011001010110010111' gsl: sage: RR(gamma(1.23r)).str(base=2) '0.11101001001001110011101011100010110111001' look at all the wrong digits! In my testing, gsl_gamma() has a typical error of around 1e-8 on random input between 1 and 2, while tgammal() rounded to double precision has a typical error of 0 (compared to the correctly rounded value from mpfr). What do you get on ARM if you do something like sage: for n in range(100): ....: x = RR.random_element(1, 2) ....: print abs(RR(gamma(float(x))) - x.gamma()) ....: ?
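[Editorial note: the math.gamma experiments above can be reproduced in plain Python, outside Sage, by comparing against exact factorials. A rough sketch, assuming nothing beyond the standard library: for n up to 18, (n-1)! is below 2^53 and therefore exactly representable as a double, so any deviation measured here is pure gamma() error.]

```python
import math

# Compare math.gamma at integer points with exact factorials.
# math.gamma is CPython's own implementation, independent of libm,
# which is exactly why the thread considers it as a replacement.
worst = 0.0
for n in range(1, 19):
    exact = float(math.factorial(n - 1))
    rel_err = abs(math.gamma(float(n)) - exact) / exact
    worst = max(worst, rel_err)
print(worst)
```

With the documented accuracy of <= 10 ulps, the worst relative error observed should be on the order of 1e-15 or better.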
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Sun, 5 Feb 2012 14:16:39 -0800, Jonathan Bober jwbo...@gmail.com wrote: Never mind all that: the gsl implementation is not very good at all, whereas the libc implementation on my machine seems quite good. Bad :-/ Old (libc): sage: gamma(1.23).str(base=2) '0.111010010010011100111010001011001010110010111' sage: RR(gamma(1.23r)).str(base=2) '0.111010010010011100111010001011001010110010111' gsl: sage: RR(gamma(1.23r)).str(base=2) '0.11101001001001110011101011100010110111001' look at all the wrong digits! In my testing, gsl_gamma() has a typical error of around 1e-8 on random input between 1 and 2, while tgammal() rounded to double precision has a typical error of 0 (compared to the correctly rounded value from mpfr). What do you get on ARM if you do something like sage: for n in range(100): ....: x = RR.random_element(1, 2) ....: print abs(RR(gamma(float(x))) - x.gamma()) ....: ? On newton (x86_64): jpuydt@newton:~$ cat /tmp/test.sage from sage.all import * l = [] for n in range(100): x = RR.random_element(1, 2) l.append(abs(RR(gamma(float(x))) - x.gamma())) print(max(l)) jpuydt@newton:~$ ~/sage-4.8/sage /tmp/test.sage 0.000 and the same test.sage on hecke (ARM): jpuydt@hecke:~$ ~/sage-4.8/sage test.sage 1.11022302462516e-16 Sigh. Snark on #sagemath
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Thu, Feb 2, 2012 at 1:16 PM, Julien Puydt julien.pu...@laposte.net wrote: Well, if I don't err, $10^{17}$ has 18 decimal digits, which is more than the 15.95... that fit in 53 binary digits. It is not that simple. 15.95 digits is more like a guideline. At issue is whether the number in question can be exactly represented in double precision, and in this case it can be. You can check this (ignoring possible issues with the size of the exponent, which don't occur here) with: sage: ZZ(RealField(53)(1.7e17)) == 17*10^16 True In fact, 42 bits suffice, but not 41: sage: ZZ(RealField(42)(1.7e17)) == 17*10^16 True sage: ZZ(RealField(41)(1.7e17)) == 17*10^16 False Instead, you could just look at it in base two: sage: (17 * 10^16).str(base=2) '100101101101111001011010010001' and count the length of the string not including the trailing zeros: sage: len((17 * 10^16).str(base=2).strip('0')) 42 So, in this case, I would say that there is either a bug in maxima or a bug in the sage code that converts from maxima. I've been trying to guess what that bug would be, but I don't have a guess right now. -- To post to this group, send an email to sage-devel@googlegroups.com
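[Editorial note: the same representability check works in plain Python with the fractions module. The arithmetic behind it: 10^16 = 2^16 * 5^16, so 17 * 10^16 has sixteen trailing zero bits, and only bit_length - 16 = 42 significant bits are needed, well under the 53 a double provides.]

```python
from fractions import Fraction

# Fraction(float) is exact, so this tests whether the parsed double
# literal 1.7e17 is exactly the integer 17 * 10^16.
n = 17 * 10**16
assert Fraction(1.7e17) == n

trailing = (n & -n).bit_length() - 1   # number of trailing zero bits
significant = n.bit_length() - trailing
print(n.bit_length(), trailing, significant)  # 58 16 42
```

This matches the Sage computation above: 42 significant bits suffice, so the maxima round-trip has no excuse to change the value.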
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On 03/02/2012 10:56, Jonathan Bober wrote: On Thu, Feb 2, 2012 at 1:16 PM, Julien Puydt julien.pu...@laposte.net wrote: Well, if I don't err, $10^{17}$ has 18 decimal digits, which is more than the 15.95... that fit in 53 binary digits. It is not that simple. 15.95 digits is more like a guideline. At issue is whether the number in question can be exactly represented in double precision, and in this case it can be. You can check this (ignoring possible issues with the size of the exponent, which don't occur here) with: Ah, yes; I forgot powers of 10 contain powers of two and hence have a few zero digits at the end! Snark on #sagemath
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Thursday, 2 February 2012 12:52:53 UTC+8, Jonathan Bober wrote: I've just been looking at this trying to figure out what was going on and I was just going to say exactly the same thing. I don't really know anything about the whole glibc vs eglibc thing, but I bet the implementation is the same as glibc-2.14.1/sysdeps/ieee754/dbl-64/e_gamma_r.c: double __ieee754_gamma_r (double x, int *signgamp) { /* We don't have a real gamma implementation now. We'll use lgamma and the exp function. But due to the required boundary conditions we must check some values separately. */ [...] yes, that's correct, I think (at least I checked this to be the case on ARM). And sure enough: sage: exp(RR(log(120))).str(truncate=False) '119.97' I'm not completely convinced that the results are wrong. They are certainly less precise than the correct answers, and in this case the correct answers can be represented exactly in double precision, but I would not be surprised if on x86_64 there are still cases where the returned answer is not as exact as possible. And as far as I can tell, the underlying tgamma() works as advertised, in the sense that I cannot find any description at all of how accurate its answers are supposed to be, so within 2 ulp seems possibly reasonable to me. here, for x=6, the relative error is 10^(-15). for x=150, it gets bigger, 10^(-13). (The maxima conversion is a different issue. There the result _is_ wrong. 1.7e17 is exact in double precision, and the conversion should not change that.) On Wed, Feb 1, 2012 at 8:04 PM, Dima Pasechnik wrote: On Thursday, 2 February 2012 06:24:18 UTC+8, Robert Bradshaw wrote: On Wed, Feb 1, 2012 at 4:19 AM, Julien Puydt wrote: = Forewords = I investigated the numerical issues on my ARM build, and after much poking around and searching, I found that I was chasing the dahu: the tests were wrong, and the results were good. No, the tests are fine, the results are clearly wrong.
Let's consider the numerical failures one by one: The following comments apply to all 4 tests. = 1/4 = File /home/jpuydt/sage-4.8/devel/sage-main/sage/functions/other.py, line 511: sage: gamma1(float(6)) Expected: 120.0 Got: 119.97 Let's see how bad this is: sage: res=gamma(float(6)) sage: res 119.97 sage: n(res,prec=57) 120.0 sage: n(res,prec=58) 119.97 sage: a = float(120) - 2^-46; a, a == 120 (119.99, False) sage: n(a, prec=57) 120.0 sage: n(a, prec=57) == 120 False sage: n(a, prec=57).str(truncate=False) '119.9858' The string representation of elements of RR truncates the last couple of digits to avoid confusion, but floats do not do the same. Changing tests like this to have n(..., prec=...) and relying on string formatting only confuses the matter (as well as making things ugly). It looks like ARM's libc returns incorrect (2 ulp off) values for gamma(n) for small n (at least). This should be fixed, not hidden, or at least marked as a known bug on ARM (only). IMHO this error is a much bigger issue than the noise due to a numerical integration arising from double-rounding in moving floats in and out of 80-bit registers and other low-level details. That's what the tolerance markers should be used for. = CONCLUSION = A double precision floating point number is supposed to have 53 binary digits, according to the norm (http://en.wikipedia.org/wiki/IEEE_754-2008), and the results are correct from that point of view. No, their last (two) binary digits are wrong. If the test was sage: gamma(float(6)) == 120 True it would fail just as well. [Dima:] further digging shows that it's the implementation of gammal() in the platform's libc (eglibc is used) that is to blame; they do expl(lgammal()), leading to loss of precision, as the platform's long double is only 8 bytes, and it's simply not possible to stuff enough precision into log(gamma) if you only have 8 bytes! Dima
So the tests should be modified not to depend on the specific implementation: they're currently testing equality of floats! I would provide a patch for the tests so they use n(..., prec=53), but I'm hitting a problem in one of the cases; see the mail I sent yesterday for more about that. See above. - Robert
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On 02/02/2012 05:52, Jonathan Bober wrote: I've just been looking at this trying to figure out what was going on and I was just going to say exactly the same thing. I don't really know anything about the whole glibc vs eglibc thing, but I bet the implementation is the same as glibc-2.14.1/sysdeps/ieee754/dbl-64/e_gamma_r.c: double __ieee754_gamma_r (double x, int *signgamp) { /* We don't have a real gamma implementation now. We'll use lgamma and the exp function. But due to the required boundary conditions we must check some values separately. */ [...] And sure enough: sage: exp(RR(log(120))).str(truncate=False) '119.97' I'm not completely convinced that the results are wrong. They are certainly less precise than the correct answers, and in this case the correct answers can be represented exactly in double precision, but I would not be surprised if on x86_64 there are still cases where the returned answer is not as exact as possible. And as far as I can tell, the underlying tgamma() works as advertised, in the sense that I cannot find any description at all of how accurate its answers are supposed to be, so within 2 ulp seems possibly reasonable to me. Let us consider: #include <math.h> #include <stdio.h> int main () { long double x; x = 6.0; printf("%.20Lf\n", tgammal(x)); x = 10.0; printf("%.20Lf\n", tgammal(x)); return 0; } On an x86_64 box, I get: $ ./test 119.8612 362880.22737368 and on the ARM box, I get: $ ./test 119.97157829 362880.046566128731 Notice that even for the computation of tgammal(10), as I mentioned yesterday, the relative error is 1.2832e-15, so... (The maxima conversion is a different issue. There the result _is_ wrong. 1.7e17 is exact in double precision, and the conversion should not change that.) Well, that problem goes away with # tol rel, hence the situation isn't that clear-cut.
Snark on #sagemath
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Thu, Feb 2, 2012 at 11:23 AM, Julien Puydt julien.pu...@laposte.net wrote: Let us consider: #include <math.h> #include <stdio.h> int main () { long double x; x = 6.0; printf("%.20Lf\n", tgammal(x)); x = 10.0; printf("%.20Lf\n", tgammal(x)); return 0; } On an x86_64 box, I get: $ ./test 119.8612 362880.22737368 and on the ARM box, I get: $ ./test 119.97157829 362880.046566128731 Interesting. You can also replicate these results (double and long double) using sage: sage: exp(RealField(64)(log(120))).str(truncate=False) '119.86' sage: exp(RealField(53)(log(120))).str(truncate=False) '119.97' Anyway, the first value doesn't fit in a double, so when it gets converted to a double it has to be rounded. It turns out that 120.0 is the closest double to 119.86, so that is why sage ends up getting the answer exactly correct: sage: RR(exp(RealField(64)(log(120)))).str(truncate=False) '120.00' (The maxima conversion is a different issue. There the result _is_ wrong. 1.7e17 is exact in double precision, and the conversion should not change that.) Well, that problem goes away with # tol rel, hence the situation isn't that clear-cut. Can you think of a reason that the answer should change? Does maxima use less than 53 bits of precision ever?
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On 02/ 2/12 04:45 AM, Dima Pasechnik wrote: Here is an illustration of the same phenomenon on x86_64. There, of course, 8-byte floats are double, so the code to demonstrate the problem is as follows: #include <stdio.h> #include <math.h> int main () { double x = 6.0; printf("sizof(double)=%d\n", (int) sizeof(double)); printf("lgamma (%.20f)=%.20f\n", x, lgamma(x)); printf("tgamma (%.20f)=%.20f\n", x, tgamma(x)); printf("exp(lgamma) (%.20f)=%.20f\n", x, exp(lgamma(x))); return 0; } On x86_64 Linux, it outputs sizof(double)=8 lgamma (6.)=4.78749174278204581157 tgamma (6.)=119.97157829 exp(lgamma) (6.)=119.97157829 On MacOSX, libc has an honest gamma(), which does the job properly: sizof(double)=8 lgamma (6.)=4.78749174278204492339 tgamma (6.)=120. exp(lgamma) (6.)=119.87210231 So eglibc does a bad job computing gamma() on double (i.e. on 8-byte floats)... Report upstream? Dima Here's two from OpenSolaris on x86 - one with gcc, the other with the Sun compiler 'cc': drkirkby@hawk:~$ gcc test.c -lm drkirkby@hawk:~$ ./a.out sizof(double)=8 lgamma (6.)=4.78749174278204581157 tgamma (6.)=120. exp(lgamma) (6.)=119.97157829 drkirkby@hawk:~$ cc test.c -lm drkirkby@hawk:~$ ./a.out sizof(double)=8 lgamma (6.)=4.78749174278204581157 tgamma (6.)=120. exp(lgamma) (6.)=119.97157829 Same result on each I think (someone can check, as I really must rush and can't check every digit). -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing in e-mail?
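[Editorial note: the exp(lgamma(x)) problem demonstrated in C above is easy to reproduce in plain Python, where math.gamma and math.lgamma are both available and are CPython's own implementations, independent of the platform libm. A minimal sketch:]

```python
import math

x = 6.0
direct = math.gamma(x)                 # a dedicated gamma implementation
via_lgamma = math.exp(math.lgamma(x))  # the exp(lgamma(x)) shortcut

# lgamma(6.0) is log(120) rounded to 53 bits, i.e. it carries an
# absolute error of order 1e-16; exp() then magnifies that absolute
# error by a factor of exp(lgamma(x)) ~ 120, so the shortcut typically
# lands a few ulps away from the true value 120.
print(direct, via_lgamma, abs(via_lgamma - 120.0))
```

This is exactly the mechanism behind the "2 ulp off" results on ARM: the error is not in exp or lgamma individually, but in the composition.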
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Le 03/02/2012 07:31, Dr. David Kirkby wrote:

Here are two runs from OpenSolaris on x86 -- one with gcc, the other with the Sun compiler 'cc':

drkirkby@hawk:~$ gcc test.c -lm
drkirkby@hawk:~$ ./a.out
sizeof(double)=8
lgamma (6.)=4.78749174278204581157
tgamma (6.)=120.
exp(lgamma) (6.)=119.97157829
drkirkby@hawk:~$ cc test.c -lm
drkirkby@hawk:~$ ./a.out
sizeof(double)=8
lgamma (6.)=4.78749174278204581157
tgamma (6.)=120.
exp(lgamma) (6.)=119.97157829

Same result on each, I think (someone can check, as I really must rush and can't check every digit).

It should be the same result, since it's the same -lm.

Snark on #sagemath
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Le 02/02/2012 23:22, Jonathan Bober wrote:

Can you think of a reason that the answer should change? Does maxima ever use less than 53 bits of precision?

Well, if I don't err, $10^{17}$ has 18 decimal digits, which is more than the 15.95... that fit in 53 binary digits.

In any case, let me repeat: three of the four failing numerical tests pass if I add a relative tolerance of 1e-15... The only remaining one is the computation of $\Gamma(10)$, where the relative error is 1.2832...e-15; I don't know whether that's acceptable or not, but it doesn't look that crazy.

Snark on #sagemath
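Julien's digit count can be made concrete in plain Python. The sketch below (our own example, not from the thread) shows the roughly 15.95 decimal digits carried by a 53-bit mantissa, and an 18-digit integer that does not survive conversion to double:

```python
import math

# A 53-bit mantissa carries about 53*log10(2) ~ 15.95 decimal digits.
decimal_digits = 53 * math.log10(2)

# Around 1e17, consecutive doubles are 16 apart (ulp = 2**(56-52)),
# so the 18-digit integer 10**17 + 1 collapses onto its neighbour.
exact = float(10**17) == 10**17          # 10**17 itself is a multiple of 16
collapsed = float(10**17 + 1) == float(10**17)
```

So an 18-significant-digit decimal simply cannot be stored exactly in general; only specific values (like 1.7e17, whose mantissa happens to fit) survive.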
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Wed, Feb 1, 2012 at 4:19 AM, Julien Puydt julien.pu...@laposte.net wrote:

= Forewords =

I investigated the numerical issues on my ARM build, and after much poking around and searching, I found that I was chasing the dahu: the tests were wrong, and the results were good. Let's consider the numerical failures one by one:

= 1/4 =

File /home/jpuydt/sage-4.8/devel/sage-main/sage/functions/other.py, line 511:
sage: gamma1(float(6))
Expected:
120.0
Got:
119.97

Let's see how bad this is:

sage: res=gamma(float(6))
sage: res
119.97
sage: n(res,prec=57)
120.0
sage: n(res,prec=58)
119.97

= 2/4 =

File /home/jpuydt/sage-4.8/devel/sage-main/sage/symbolic/expression.pyx, line 6256:
sage: SR(10.0r).gamma()
Expected:
362880.0
Got:
362880.047

Let's see how bad that is:

sage: res=SR(10.0r).gamma()
sage: res
362880.047
sage: n(res,prec=54)
362880.0
sage: n(res,prec=55)
362880.05

= 3/4 =

File /home/jpuydt/sage-4.8/devel/sage-main/sage/interfaces/maxima_abstract.py, line 1595:
sage: float(maxima(1.7e+17))
Expected:
1.7e+17
Got:
1.6997e+17

Let's see how bad that is:

sage: res=float(maxima(1.7e17))
sage: res
1.6997e+17
sage: n(res,prec=57)
1.700e17
sage: n(res,prec=58)
1.6997e17

= 4/4 =

File /home/jpuydt/sage-4.8/devel/sage-main/sage/rings/arith.py, line 3061:
sage: binomial(0.5r, 5)
Expected:
0.02734375
Got:
0.027343751

Let's see how bad that is:

sage: res=binomial(0.5r,5)
sage: res
0.027343751
sage: n(res,prec=57)
0.027343751
sage: n(res,prec=54)
0.02734375

= CONCLUSION =

A double precision floating point number is supposed to have 53 binary digits, according to the standard (http://en.wikipedia.org/wiki/IEEE_754-2008), and the results are correct from that point of view. So the tests should be modified not to depend on the specific implementation: they're currently testing equality of floats!

See http://trac.sagemath.org/sage_trac/ticket/10952

I would provide a patch for the tests so they use n(..., prec=53), but I'm hitting a problem in one of the cases; see the mail I sent yesterday for more about that.

Snark on #sagemath

-- William Stein Professor of Mathematics University of Washington http://wstein.org
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Le 01/02/2012 18:26, William Stein wrote:

On Wed, Feb 1, 2012 at 4:19 AM, Julien Puydt julien.pu...@laposte.net wrote:

So the tests should be modified not to depend on the specific implementation: they're currently testing equality of floats!

See http://trac.sagemath.org/sage_trac/ticket/10952

Oh, dear! That is almost wonderful!

This is wonderful because it looks so much like what is needed: the attached patch doesn't break the tests on x86_64, and it fixes three of the four tests on ARM. The fourth is the reason for the *almost*:

sage -t --long devel/sage/sage/symbolic/expression.pyx
**********************************************************************
File "/home/jpuydt/sage-4.8/devel/sage/sage/symbolic/expression.pyx", line 6256:
    sage: SR(10.0r).gamma() # rel tol 1e-15
Out of tolerance 362880.0 vs 362880.0
  #0: simplify_sum(expr='sum(q^k,k,0,inf))
  #1: simplify_sum(expr=a*'sum(q^k,k,0,inf))
**********************************************************************
1 items had failures:
   1 of 18 in __main__.example_142
***Test Failed*** 1 failures.
For whitespace errors, see the file /home/jpuydt/.sage//tmp/expression_10389.py
[167.8 s]

Would the calculation of the relative error involve a division by zero when the two quantities are really equal?

Snark on #sagemath

PS: I chose rel tol 1e-15 because a double precision of 53 binary digits corresponds to about 15.95 decimal digits.

diff --git a/sage/functions/other.py b/sage/functions/other.py
--- a/sage/functions/other.py
+++ b/sage/functions/other.py
@@ -508,7 +508,7 @@
     sage: gamma1(x/2)(x=5)
     3/4*sqrt(pi)
 
-    sage: gamma1(float(6))
+    sage: gamma1(float(6)) # rel tol 1e-15
     120.0
     sage: gamma1(x)
     gamma(x)
diff --git a/sage/interfaces/maxima_abstract.py b/sage/interfaces/maxima_abstract.py
--- a/sage/interfaces/maxima_abstract.py
+++ b/sage/interfaces/maxima_abstract.py
@@ -1592,7 +1592,7 @@
     sage: float(maxima(3.14))
     3.1401
 
-    sage: float(maxima(1.7e+17))
+    sage: float(maxima(1.7e+17)) # rel tol 1e-15
     1.7e+17
     sage: float(maxima(1.7e-17))
     1.6999e-17
diff --git a/sage/rings/arith.py b/sage/rings/arith.py
--- a/sage/rings/arith.py
+++ b/sage/rings/arith.py
@@ -3058,7 +3058,7 @@
     0.000
     sage: binomial(-2.,3)
     -4.00
-    sage: binomial(0.5r, 5)
+    sage: binomial(0.5r, 5) # rel tol 1e-15
     0.02734375
 
     Test symbolic and uni/multivariate polynomials::
diff --git a/sage/symbolic/expression.pyx b/sage/symbolic/expression.pyx
--- a/sage/symbolic/expression.pyx
+++ b/sage/symbolic/expression.pyx
@@ -6253,7 +6253,7 @@
     1
     sage: SR(10).gamma()
     362880
-    sage: SR(10.0r).gamma()
+    sage: SR(10.0r).gamma() # rel tol 1e-15
     362880.0
     sage: SR(CDF(1,1)).gamma()
     0.498015668118 - 0.154949828302*I
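The "# rel tol" markers added in the patch above boil down to a relative-error comparison. A minimal sketch of such a check (a hypothetical helper of ours, not the actual Sage doctest framework code) that also shows why equal values need not trigger a division by zero:

```python
def within_rel_tol(expected, got, rel_tol=1e-15):
    # Scale the tolerance by |expected| instead of dividing by it, so
    # expected == got == 0.0 passes without any ZeroDivisionError.
    return abs(got - expected) <= rel_tol * abs(expected)

ok_equal = within_rel_tol(0.0, 0.0)                # equal values pass
ok_close = within_rel_tol(120.0, 120.0 - 2**-46)   # one ulp below 120, ~1.2e-16
too_far = not within_rel_tol(362880.0, 362880.0 + 2**-31)  # ~1.28e-15 > 1e-15
```

Note that with these numbers the fourth doctest's error (about 1.28e-15 relative) really is just outside a 1e-15 tolerance, matching the failure reported above.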
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Le 01/02/2012 20:43, Julien Puydt wrote:

sage: SR(10.0r).gamma() # rel tol 1e-15
Out of tolerance 362880.0 vs 362880.0

In fact, the printed result is misleading: the floats *are* different. The difference is 4.6566...e-10, so the relative error is 1.2832...e-15, which is indeed out of tolerance!

Would the calculation of the relative error involve a division by zero when the two quantities are really equal?

Reading the code, it doesn't!

Snark on #sagemath

PS: I chose rel tol 1e-15 because a double precision of 53 binary digits corresponds to about 15.95 decimal digits.

Reading the code, 1e-15 is the default, so I shouldn't have added that.

Sigh, so there's still one last thing to care for before the ARM port is complete...

Snark on #sagemath
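Julien's numbers can be restated in ulps using math.ulp (Python 3.9+); the sketch below uses 2**-31 for the observed difference, which matches the 4.6566...e-10 he reports:

```python
import math

expected = 362880.0            # Gamma(10) = 9!, exactly representable
got = expected + 2.0**-31      # the ARM value: 4.6566...e-10 too big

rel_err = (got - expected) / expected              # ~1.2832e-15
ulps_off = (got - expected) / math.ulp(expected)   # the ulp here is 2**-34
```

So the ARM result is 8 ulps away from the exact answer, well beyond the 2-ulp slack seen in the other tests.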
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Wed, Feb 1, 2012 at 4:19 AM, Julien Puydt julien.pu...@laposte.net wrote:

= Forewords =

I investigated the numerical issues on my ARM build, and after much poking around and searching, I found that I was chasing the dahu: the tests were wrong, and the results were good.

No, the tests are fine; the results are clearly wrong.

Let's consider the numerical failures one by one:

The following comments apply to all 4 tests.

= 1/4 =

File /home/jpuydt/sage-4.8/devel/sage-main/sage/functions/other.py, line 511:
sage: gamma1(float(6))
Expected:
120.0
Got:
119.97

Let's see how bad this is:

sage: res=gamma(float(6))
sage: res
119.97
sage: n(res,prec=57)
120.0
sage: n(res,prec=58)
119.97

sage: a = float(120) - 2^-46; a, a == 120
(119.99, False)
sage: n(a, prec=57)
120.0
sage: n(a, prec=57) == 120
False
sage: n(a, prec=57).str(truncate=False)
'119.9858'

The string representation of elements of RR truncates the last couple of digits to avoid confusion, but floats do not do the same. Changing tests like this to use n(..., prec=...) and relying on string formatting only confuses the matter (as well as making things ugly).

It looks like ARM's libc returns incorrect (2 ulp off) values for gamma(n) for small n (at least). This should be fixed, not hidden, or at least marked as a known bug on ARM (only). IMHO this error is a much bigger issue than the noise of a numerical integration arising from double-rounding when moving floats in and out of 80-bit registers and other low-level details. That's what the tolerance markers should be used for.

= CONCLUSION =

A double precision floating point number is supposed to have 53 binary digits, according to the standard (http://en.wikipedia.org/wiki/IEEE_754-2008), and the results are correct from that point of view.

No, their last (two) binary digits are wrong. If the test was

sage: gamma(float(6)) == 120
True

it would fail just as well.

So the tests should be modified not to depend on the specific implementation: they're currently testing equality of floats! I would provide a patch for the tests so they use n(..., prec=53), but I'm hitting a problem in one of the cases; see the mail I sent yesterday for more about that.

See above.

- Robert
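Robert's point about string truncation is easy to reproduce in plain Python, where repr() has printed the shortest round-tripping decimal since Python 3.1, while a fixed 13-significant-digit format hides the defect much as RR's truncated default printing does. A sketch:

```python
# The largest double strictly below 120 differs from it by one ulp, 2**-46.
a = 120.0 - 2**-46

distinct = a != 120.0               # genuinely a different float
round_trips = float(repr(a)) == a   # shortest repr preserves the value
truncated = '%.13g' % a             # 13 significant digits round up to 120
```

So comparing truncated string forms can declare two different floats "equal", which is exactly the confusion Robert warns about.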
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
On Thursday, 2 February 2012 06:24:18 UTC+8, Robert Bradshaw wrote:

[...]

= CONCLUSION =

A double precision floating point number is supposed to have 53 binary digits, according to the standard (http://en.wikipedia.org/wiki/IEEE_754-2008), and the results are correct from that point of view.

No, their last (two) binary digits are wrong.
Further digging shows that it's the implementation of gammal() in the platform's libc (eglibc is used) that is to blame: they do expl(lgammal()), leading to a loss of precision, as the platform's long double is only 8 bytes, and it's simply not possible to stuff enough precision into log(gamma) if you only have 8 bytes!

Dima
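Dima's explanation can be quantified: exp() turns an absolute error in its argument into a comparable relative error in its result, so even a one-ulp slip in lgamma(10) already costs about 1.8e-15 of relative error in Gamma(10), the order of magnitude observed on ARM. A sketch (math.ulp needs Python 3.9+):

```python
import math

L = math.lgamma(10.0)      # log(Gamma(10)) = log(362880) ~ 12.8018
one_ulp = math.ulp(L)      # spacing of doubles near 12.8: 2**-49 ~ 1.78e-15

# Since exp(L + d) ~ exp(L) * (1 + d) for tiny d, an absolute error d
# in the logarithm becomes a relative error of about d in the result.
rel_err_per_ulp = one_ulp
```

With only 8-byte long doubles there is no headroom to compute the logarithm to extra precision, so the exp(lgamma()) route cannot do better.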
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
Here is an illustration of the same phenomenon on x86_64. There, of course, 8-byte floats are double, so the code to demonstrate the problem is as follows:

#include <stdio.h>
#include <math.h>

int main ()
{
  double x = 6.0;
  printf("sizeof(double)=%d\n", (int) sizeof(double));
  printf("lgamma (%.20f)=%.20f\n", x, lgamma(x));
  printf("tgamma (%.20f)=%.20f\n", x, tgamma(x));
  printf("exp(lgamma) (%.20f)=%.20f\n", x, exp(lgamma(x)));
  return 0;
}

On x86_64 Linux, it outputs

sizeof(double)=8
lgamma (6.)=4.78749174278204581157
tgamma (6.)=119.97157829
exp(lgamma) (6.)=119.97157829

On MacOSX, libc has an honest gamma(), which does the job properly:

sizeof(double)=8
lgamma (6.)=4.78749174278204492339
tgamma (6.)=120.
exp(lgamma) (6.)=119.87210231

So eglibc does a bad job computing gamma() on double (i.e. on 8-byte floats)... Report upstream?

Dima
Re: [sage-devel] [ARM] The failed numerical tests only show the tests are bad!
I've just been looking at this, trying to figure out what was going on, and I was just going to say exactly the same thing. I don't really know anything about the whole glibc vs eglibc thing, but I bet the implementation is the same as glibc-2.14.1/sysdeps/ieee754/dbl-64/e_gamma_r.c:

double
__ieee754_gamma_r (double x, int *signgamp)
{
  /* We don't have a real gamma implementation now.  We'll use lgamma
     and the exp function.  But due to the required boundary
     conditions we must check some values separately.  */
[...]

And sure enough:

sage: exp(RR(log(120))).str(truncate=False)
'119.97'

I'm not completely convinced that the results are wrong. They are certainly less precise than the correct answers, and in this case the correct answers can be represented exactly in double precision, but I would not be surprised if on x86_64 there are still cases where the returned answer is not as exact as possible. And as far as I can tell, the underlying tgamma() works as advertised, in the sense that I cannot find any description at all of how accurate its answers are supposed to be, so within 2 ulp seems possibly reasonable to me.

(The maxima conversion is a different issue. There the result _is_ wrong. 1.7e17 is exact in double precision, and the conversion should not change that.)

On Wed, Feb 1, 2012 at 8:04 PM, Dima Pasechnik dimp...@gmail.com wrote:

[...]
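Jonathan's guardedness about documented accuracy can be probed with a quick spot check of Python's math.gamma() against exact factorials at small integer arguments, where Gamma(n) = (n-1)! is exactly representable in a double. A sketch (our own check, not from the thread):

```python
import math

# Gamma(n) = (n-1)!; for n <= 10 the factorial fits exactly in a double,
# so any deviation is pure implementation error.
worst_rel_err = max(
    abs(math.gamma(float(n)) - math.factorial(n - 1)) / math.factorial(n - 1)
    for n in range(2, 11)
)
```

On a conforming build the worst relative error here stays within a few ulps, consistent with the "within 10 ulps" claim quoted from the CPython source elsewhere in this thread.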