On Tue, Mar 10, 2026 at 1:28 PM Sebastian Berg
<[email protected]> wrote:
>
> Hi all,
>
> In the NumPy 2.4 cycle, there were some native float16 implementations
> merged with rather low precision leading to the following issue:
> https://github.com/numpy/numpy/issues/30821
>
> That is, previously it used float32 loops, so ~0.5 ULP error; now it is
> 2+ ULP for many algorithms, on _some_ hardware:
> https://github.com/numpy/numpy/pull/23351
>
> There is always an argument around that users of float16 probably don't
> care about many ULP, but I guess they also have very few bits of
> precision to begin with?
> I don't have a huge opinion on it, but we are more and more in the
> position where it is unclear if sacrificing a bit of precision is the
> right thing or not...
>
> Similar questions actually arise for float32 math: is it OK to trade
> off precision for performance (or to what degree, since everything
> trades a bit)?
> We have had discussions around this before but it is still a difficult
> trade-off to make and there is no choice that makes everyone happy. [1]
>
> - Sebastian
>
> [1] We can work towards something like `np.opts(precision="low")` or
> so, but that doesn't change the question of defaults much...

I do like the idea of having a precise/fast toggle. Until we can
develop one, I think we should prefer precise. So we should revert and
document somewhere that float16 (and the soon-to-be-incoming bfloat16)
are, in NumPy, container types, and that all the math for them is done
in float32.
Matti
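
For concreteness, the ULP error being discussed can be measured by comparing a
half-precision result against a higher-precision reference, rounded back to
float16 once. A minimal sketch (np.exp is chosen arbitrarily as the example
routine; the variable names are ours):

```python
import numpy as np

x = np.linspace(0.1, 4.0, 1000, dtype=np.float16)

# "Precise" path: upcast, compute in float64, round back to float16 once.
ref = np.exp(x.astype(np.float64)).astype(np.float16)

# Whatever loop NumPy actually dispatches to for float16 on this machine.
native = np.exp(x)

# ULP distance: absolute difference divided by the float16 spacing at the
# reference value (all arithmetic in float64 to avoid further rounding).
ulp_err = (np.abs(native.astype(np.float64) - ref.astype(np.float64))
           / np.spacing(ref).astype(np.float64))
```

On hardware where float16 math is routed through float32 loops, ulp_err
should stay at or below ~0.5; the issue above reports 2+ ULP from the
native implementations on some platforms.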
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/