On Tue, Dec 9, 2008 at 21:01, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
>
> On Tue, Dec 9, 2008 at 1:40 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
>>
>> On Tue, Dec 9, 2008 at 09:51, Nadav Horesh <[EMAIL PROTECTED]> wrote:
>> > As far as I know, float128 is in fact 80 bits (64-bit mantissa + 15-bit
>> > exponent + sign bit), so the precision is 18-19 digits (not 34).
>>
>> float128 should be 128 bits wide. If it's not on your platform, please
>> let us know as that is a bug in your build.
>
> I think he means the actual precision is IEEE 80-bit extended precision; the
> numbers just happen to be stored in larger chunks of memory for alignment
> purposes.

Ah, that's good to know. Yes, float128 on my Intel Mac behaves this way.

In [12]: f = finfo(float128)

In [13]: f.nmant
Out[13]: 63

In [14]: f.nexp
Out[14]: 15
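
Spelled out a bit more (just a sketch, assuming a build like the above where
float128 is the padded 80-bit extended type; the exact values are
platform-dependent):

import numpy as np

# On such a build, float128 is 80-bit extended precision padded to 16 bytes
# for alignment, so storage width and effective precision differ.
f = np.finfo(np.float128)

print(np.dtype(np.float128).itemsize)  # 16 bytes of storage per element
print(f.nmant, f.nexp)                 # 63 mantissa bits, 15 exponent bits
print(f.precision)                     # ~18 significant decimal digits
print(f.eps)                           # 2**-63, not the 2**-112 of binary128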

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
