On Wed, 2026-03-11 at 11:59 +0100, Ralf Gommers via NumPy-Discussion
wrote:
> On Wed, Mar 11, 2026 at 10:58 AM matti picus via NumPy-Discussion <
> [email protected]> wrote:
> 
> > On Tue, Mar 10, 2026 at 1:28 PM Sebastian Berg
> > <[email protected]> wrote:
> > > 
> > > Hi all,
> > > 
> > > In the NumPy 2.4 cycle, there were some native float16
> > > implementations
> > > merged with rather low precision leading to the following issue:
> > > https://github.com/numpy/numpy/issues/30821
> > > 
> > > That is, previously it used float32 loops, so ~0.5 ULP error; now it
> > > is 2+ ULP for many algorithms, on _some_ hardware:
> > > https://github.com/numpy/numpy/pull/23351
> > > 
> > > There is always the argument that users of float16 probably don't
> > > care about a few ULP of error, but then they also have very few bits
> > > of precision to begin with?
> > > I don't have a strong opinion on it, but we are more and more in the
> > > position where it is unclear whether sacrificing a bit of precision
> > > is the right thing or not...
> > > 
> > > Similar questions actually arise for float32 math: is it OK to trade
> > > off precision for performance (and if so, to what degree, since
> > > everything trades a bit)?
> > > We have had discussions around this before but it is still a
> > > difficult
> > > trade-off to make and there is no choice that makes everyone
> > > happy. [1]
> > > 
> > > - Sebastian
> > > 
> > > [1] We can work towards something like `np.opts(precision="low")`
> > > or
> > > so, but that doesn't change the question of defaults much...
> > 
> > I do like the idea of having a precise/fast toggle. Until we can
> > develop one, I think we should prefer precise. So we should revert
> > and
> > document somewhere that float16 (and the soon-to-be-incoming
> > bfloat16)
> > are, in NumPy, container types, and that all the math for them is
> > done
> > as float16.
> > 
> 
> You meant `float32` here. And yes, I agree. Having a few code paths
> use


No, I meant float16. I don't think we have bad variability for float32
right now, and while there is a different discussion to be had about
float32, I think those paths would at least be consistent across
architectures (as those would be custom implementations).

But it sounds like you agree with "revert" here, which is my tendency
as well, even if I don't have a clear picture of where to draw the
line, since hardware/platform differences always exist to some degree.
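For reference, the kind of ULP gap being discussed can be measured
directly: compute a reference in a wider type, round once back to
float16, and compare raw bit patterns. A minimal sketch (not from the
issue/PR above; the `ulp_distance` helper is ad hoc and assumes
same-sign finite values):

```python
import numpy as np

# Inputs where sin() is strictly positive, so a plain bit-pattern
# comparison is a valid ULP distance.
x = np.linspace(0.1, 3.0, 1000, dtype=np.float16)

# "Container type" model: upcast, compute in float32, round back once.
via_f32 = np.sin(x.astype(np.float32)).astype(np.float16)

# Reference: compute in float64 and round back once.
ref = np.sin(x.astype(np.float64)).astype(np.float16)

# NumPy's actual float16 loop on this machine (hardware-dependent).
native = np.sin(x)

def ulp_distance(a, b):
    # Ad-hoc helper: for same-sign finite floats, the difference of
    # the raw bit patterns equals the distance in ULPs.
    return np.abs(a.view(np.int16).astype(np.int32)
                  - b.view(np.int16).astype(np.int32))

print("max ULP error (via float32):", ulp_distance(via_f32, ref).max())
print("max ULP error (native loop):", ulp_distance(native, ref).max())
```

On platforms that dispatch to a native float16 loop, the second number
is where the 2+ ULP shows up; the upcast-to-float32 path should stay
within about 1 ULP of the float64 reference.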

- Sebastian


> platform/CPU-dependent instructions like AVX512-xxx ones, and as a
> result having a small subset of the NumPy API with different
> accuracy/speed trade-offs, seems not all that useful to most users.
> And it makes it harder to build up a mental model of what NumPy is
> actually doing.
> 
> Cheers,
> Ralf
> _______________________________________________
> NumPy-Discussion mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
> https://mail.python.org/mailman3//lists/numpy-discussion.python.org
> Member address: [email protected]