Hi all,

In the NumPy 2.4 cycle, some native float16 implementations were
merged with rather low precision, leading to the following issue:
https://github.com/numpy/numpy/issues/30821

That is, these functions previously used float32 loops, giving ~0.5
ULP error; now the error is 2+ ULP for many functions, on _some_
hardware:
https://github.com/numpy/numpy/pull/23351
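
To make the numbers concrete, here is a rough sketch of how one might
measure the ULP error of a float16 ufunc against a float64 reference
(this is an illustration, not NumPy's actual test harness; the helper
name `ulp_error_f16` is made up):

```python
import numpy as np

def ulp_error_f16(func, x16):
    # Result computed via the float16 loop.
    got = func(x16)
    # Reference: compute in float64, then round once to float16.
    ref = func(x16.astype(np.float64)).astype(np.float16)
    # For same-sign finite values, the difference of the raw bit
    # patterns counts the representable float16 values between the
    # two results, i.e. the error in ULP.
    a = got.view(np.uint16).astype(np.int32)
    b = ref.view(np.uint16).astype(np.int32)
    return np.abs(a - b)

x = np.linspace(0.1, 4.0, 1000, dtype=np.float16)
err = ulp_error_f16(np.exp, x)
print(err.max())
```

On a float32-based loop the maximum here should stay around 1 ULP;
the issue above is about implementations where it climbs noticeably
higher.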

There is always the argument that users of float16 probably don't
care about a few ULP, but I guess they also have very few bits of
precision to begin with?
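
For context, float16 really does have very little precision to spare,
as `np.finfo` shows:

```python
import numpy as np

info = np.finfo(np.float16)
print(info.nmant)  # 10 explicit mantissa bits
print(info.eps)    # 2**-10, i.e. about 0.000977
```

So a 2 ULP error already eats into a ~10-bit precision budget, which
is part of why the trade-off is less clear-cut than for float64.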
I don't have a huge opinion on it, but we are more and more in the
position where it is unclear if sacrificing a bit of precision is the
right thing or not...

Similar questions actually arise for float32 math: is it OK to trade
off precision for performance (and to what degree? everything trades
a bit)?
We have had discussions around this before but it is still a difficult
trade-off to make and there is no choice that makes everyone happy. [1]

- Sebastian

[1] We can work towards something like `np.opts(precision="low")` or
so, but that doesn't change the question of defaults much...
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]