On Fri, Dec 11, 2015 at 11:22 AM, Anne Archibald <archib...@astron.nl> wrote:
> Actually, GCC implements 128-bit floats in software and provides them as
> __float128; there are also quad-precision versions of the usual functions.
> The Intel compiler provides this as well, I think, but I don't think
> Microsoft compilers do. A portable quad-precision library might be less
> painful.
>
> The cleanest way to add extended precision to numpy is by adding a
> C-implemented dtype. This can be done in an extension module; see the
> quaternion and half-precision modules online.
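>
> For instance (a sketch, assuming the third-party numpy-quaternion
> package is installed; it registers np.quaternion as a new dtype from
> a C extension module):
>
> import numpy as np
> import quaternion  # extension module: registers the np.quaternion dtype
>
> a = np.array([np.quaternion(1, 0, 0, 0), np.quaternion(0, 1, 0, 0)])
> print(a * a)  # element-wise quaternion products, computed in C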
>
> Anne
>
>
> On Fri, Dec 11, 2015, 16:46 Charles R Harris <charlesr.har...@gmail.com>
> wrote:
>>
>> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel <baruc...@gmx.com> wrote:
>>>
>>> From time to time the question of how to extend the precision of
>>> computations on NumPy arrays comes up on forums. The most common answer
>>> is: use dtype=object with an arbitrary-precision module like mpmath or
>>> gmpy. See
>>> http://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-algebra
>>> or http://stackoverflow.com/questions/21165745/precision-loss-numpy-mpmath
>>> or
>>> http://stackoverflow.com/questions/15307589/numpy-array-with-mpz-mpfr-values
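>>>
>>> For instance (a minimal sketch, assuming mpmath is installed):
>>>
>>> import numpy as np
>>> from mpmath import mp, mpf
>>>
>>> mp.dps = 50  # work with 50 significant decimal digits
>>> a = np.array([mpf(1) / 3, mpf(2) / 3], dtype=object)
>>> print(a + a)  # element-wise arithmetic, exact to 50 digits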
>>>
>>> While this is the most practical answer for many users, since it lets
>>> them use NumPy arrays exactly as they would with native types, the
>>> drawback is that "true" vectorization is lost: every element-wise
>>> operation goes through a Python-level call on the boxed objects.
>>>
>>> Over the years I have become very familiar with the double-double type,
>>> which (on the usual architectures) gives about 32 accurate digits with
>>> much faster arithmetic than arbitrary-precision types. I have even used
>>> it for research in number theory, and I am convinced it is a wonderful
>>> type whenever that precision is sufficient.
>>>
>>> I have often implemented it partially on top of NumPy, most of the time
>>> by trying to vectorize the libqd library at a low level.
>>>
>>> But I recently realized that a very nice and portable way of implementing
>>> it in NumPy would be to reuse the existing vectorized float operations,
>>> computing the arithmetic "by columns" (one array holding the high halves
>>> of the numbers, another the low halves) rather than by "full numbers".
>>> As a proof of concept I wrote the following file:
>>> https://gist.github.com/baruchel/c86ed748939534d8910d
>>>
>>> I converted and vectorized the Algol 60 code from
>>> http://szmoore.net/ipdf/documents/references/dekker1971afloating.pdf
>>> (Dekker, 1971).
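>>>
>>> As a minimal illustration of the column-wise idea (only addition shown;
>>> two_sum and dd_add are names I use here, not necessarily the gist's),
>>> Knuth's error-free two_sum vectorizes directly over the high/low arrays:
>>>
>>> import numpy as np
>>>
>>> def two_sum(a, b):
>>>     # error-free transformation (Knuth): a + b == s + e exactly
>>>     s = a + b
>>>     v = s - a
>>>     e = (a - (s - v)) + (b - v)
>>>     return s, e
>>>
>>> def dd_add(xh, xl, yh, yl):
>>>     # add double-double numbers stored as two float64 columns (high, low)
>>>     sh, se = two_sum(xh, yh)
>>>     se = se + (xl + yl)
>>>     zh = sh + se            # fast renormalization (valid: |sh| >= |se|)
>>>     zl = se - (zh - sh)
>>>     return zh, zl
>>>
>>> xh, xl = np.full(3, 1.0), np.zeros(3)
>>> yh, yl = np.full(3, 1e-30), np.zeros(3)
>>> zh, zl = dd_add(xh, xl, yh, yl)  # zl keeps the 1e-30 that float64 drops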
>>>
>>> A test is provided at the end: for inverting 100,000 numbers, my type is
>>> about 3 to 4 times faster than gmpy and almost 50 times faster than
>>> mpmath. It should be even faster for some other operations, since
>>> inversion is not implemented here (though it could easily be added) and
>>> I had to create an extra np.ones array and divide instead. You can run
>>> the file yourself (you may have to comment out mpmath or gmpy if you
>>> don't have them installed).
>>>
>>> I would like to discuss how something along these lines could be made
>>> available.
>>>
>>> a) Would it be relevant to include this in NumPy? (I am thinking of some
>>> "contribution" tool rather than the NumPy core, because it would be
>>> painful to code all the ufuncs; on the other hand, I am pretty sure many
>>> users would be happy to perform basic arithmetic knowing that
>>> cos/sin/etc. are not available for this type. In other words, I am not
>>> sure it would be a good idea to ship it as an everyday type, but it
>>> would be nice to have it quickly available in some form.) If you agree,
>>> how should I code it? (The current link is only a proof of concept; I
>>> would be very happy to rewrite it more cleanly.)
>>>
>>> b) Or do you think such an attempt should remain external to NumPy and
>>> be released on my GitHub account without being integrated into NumPy?
>>
>>
>> I think astropy does something similar for times and dates. There has also
>> been some talk of adding a user type for IEEE 128-bit floats. I once
>> looked for relevant code for the latter and, IIRC, the available packages
>> were GPL :(.

This might be the same as, or similar to, a recent announcement for Julia:

https://groups.google.com/d/msg/julia-users/iHTaxRVj1yM/M-WtZCedCQAJ


It would be useful to get this in a consistent way across platforms and
compilers. I can think of several applications in statistics where
higher-precision reduce operations would be useful.
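
A rough sketch of the kind of reduce I mean (compensated summation, in
plain Python over a NumPy array; kahan_sum is just an illustrative name):

import numpy as np

def kahan_sum(a):
    # compensated (Kahan) summation: c carries the low-order bits that a
    # naive float64 running sum would throw away
    s, c = 0.0, 0.0
    for x in a:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

x = np.full(10**6, 0.1)
print(kahan_sum(x))  # much closer to the exact sum than a naive loop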
As a Windows user, I have never even seen a higher-precision float.
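
Checking what a given build actually provides is easy enough:

import numpy as np

info = np.finfo(np.longdouble)
print(info.precision, info.eps)  # 18 digits with 80-bit x87 long double,
                                 # but only 15 (plain float64) on MSVC builds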

Josef


>>
>> Chuck