On 12/03/2011 12:11, Aurelien Jarno wrote:
On Sat, Mar 12, 2011 at 09:48:53AM +0100, Julien PUYDT wrote:
Package: libc6
Version: 2.11.2-13

The following piece of code:

/* link with -lm */
#include <stdio.h>
#include <math.h>

int
main (int argc, char *argv[])
{
  long double x = 6.0;
  printf ("tgammal (%20Lf)=%20Lf\n", x, tgammal (x));
  return 0;
}

prints, on an x86 Debian unstable system (eglibc 2.11.2-11):
tgammal (            6.000000)=          120.000000
and on an armel Debian unstable system (eglibc 2.11.2-13):
tgammal (6.00000000000000000000)=119.99999999999997157829

On armel, long double is the same type as double, and thus tgamma()
and tgammal() are the same function. On x86, long double and double are
different types, and thus tgamma() and tgammal() are different
functions.
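
A quick way to see the difference is to compare the two types' parameters
from <float.h>; the following is only a rough sketch of such a check:

#include <stdio.h>
#include <float.h>

int
main (void)
{
  /* On armel both types are IEEE binary64 (53-bit mantissa); on x86
     long double is the x87 80-bit extended format (64-bit mantissa). */
  printf ("DBL_MANT_DIG  = %d, sizeof (double)      = %zu\n",
          DBL_MANT_DIG, sizeof (double));
  printf ("LDBL_MANT_DIG = %d, sizeof (long double) = %zu\n",
          LDBL_MANT_DIG, sizeof (long double));
  return 0;
}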

Your test code with long double and tgammal() on armel gives, as expected,
exactly the same result as double and tgamma() on x86. I don't see any
problem here; this function works as expected.
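
Printing the two functions side by side makes this concrete: on armel both
lines show the same value, while on x86 tgammal() is computed in the 80-bit
extended format. A rough sketch:

/* link with -lm */
#include <stdio.h>
#include <math.h>

int
main (void)
{
  double d = 6.0;
  long double ld = 6.0L;
  printf ("tgamma  (double)      = %.20f\n", tgamma (d));
  printf ("tgammal (long double) = %.20Lf\n", tgammal (ld));
  return 0;
}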

Let's see:
119.99999999999997157829
    0123456789ABC           <- digit counting
That makes 13 good fractional digits, and 119 needs 7 more bits (it lies
between 2**6 and 2**7), so naively that makes about 20 good "digits" in total.

Isn't that a little short? If I read
http://en.wikipedia.org/wiki/IEEE_754-2008#Basic_formats correctly, even
single precision has a 23-bit stored significand.

The above computation may be a little naive, since it mixes decimal digits
with binary bits. Can you point me to the precise place where the accuracy
of these special functions is standardized?
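
A less hand-wavy way to count would be in ulps of the double format rather
than in decimal digits. Something like the sketch below (only an
illustration, using standard C99 math functions) prints the error in ulps
and in correct bits; plugging in the armel value quoted above gives about
2 ulps, i.e. close to 52 correct bits out of 53.

/* link with -lm */
#include <stdio.h>
#include <float.h>
#include <math.h>

int
main (void)
{
  double exact = 120.0;          /* Gamma(6) = 5! = 120, exactly representable */
  double got   = tgamma (6.0);
  double err   = fabs (got - exact);
  /* One ulp of a double near 120 is 2^(6-52) = 2^-46, since 120 lies in [2^6, 2^7). */
  double ulp   = ldexp (1.0, 6 - 52);
  printf ("absolute error: %g (%.1f ulps)\n", err, err / ulp);
  if (err > 0)
    printf ("correct bits  : about %.1f of %d\n", -log2 (err / exact), DBL_MANT_DIG);
  else
    printf ("correct bits  : all %d\n", DBL_MANT_DIG);
  return 0;
}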

Snark on #sage-devel


