Hi NumPy Team!

From Ralf:
> Now that the header-only version in
> https://github.com/numpy/numpy/pull/30380 is close to merge-ready, I'd
> like to circle back to this point - what will we expose as the new public
> API?

and from Sebastian:

> FWIW, I think the conversion functions are the truly important ones,
> after that I find it a bit hard to make a cut. Especially if we include
> comparisons, utilities like `isnan`/`isfinite` and maybe `nextafter`
> seem reasonable (and I suspect very simple code wise, some of these
> might even be used by the comparisons/conversions).
> At that point that is a large chunk of what is there...
>
> If we just look to thin out a bit: `spacing` isn't C99 and I agree that
> at least the `_nonan` versions seem fine to drop.
> The bit-level conversions seem ok to drop, although I wouldn't be
> surprised if they are used more than the normal conversions.

I agree that the conversion functions
<https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L60-L63>
are the minimum we need, but I would argue that a slightly richer API, one
that avoids the overhead of converting back and forth just to perform basic
operations, is preferable.

From my (admittedly cursory) review, the user journey I'm concerned about
is custom logic around arrays that are otherwise handed off to accelerators
or libraries.
This seems to be particularly true in machine learning inference
applications that are memory- or bandwidth-sensitive (see: Nvidia - Train
With Mixed Precision
<https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html>,
Making Deep Learning Go Brrrr From First Principles
<https://arc.net/l/quote/xdofricm>, and this oddly specific example from a
company named Zilliz
<https://zilliz.com/ai-faq/how-can-you-reduce-the-memory-footprint-of-sentence-transformer-models-during-inference-or-when-handling-large-numbers-of-embeddings>).

Those users seem to care more about memory footprint and data movement. For
them, I can imagine wanting to stay in npy_float16 for:

- comparisons
  <https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L65-L74>
  (equal, not equal, less than, greater than, *or equal to, *nonan)
- iszero, isnan, isinf, isfinite, signbit
  <https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L76-L80>
- nextafter
  <https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L83>
  and copysign
  <https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L81>,
  which seem useful even if the underlying implementations are simple

Also, since float16.h is header-only, I'm assuming the maintenance cost of
exposing these additional utilities is relatively small compared to
exposing only the conversion functions.
So my preference would be to keep the logic currently present, only
removing npy_float16_spacing
<https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L82>
and maybe divmod
<https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L84>.
I could also see an argument for making the bit-level conversions
<https://github.com/numpy/numpy/blob/df285a9294f91d1bc3c24e465fb4f1df0091fa7d/numpy/_core/include/numpy/float16.h#L86-L89>
private, but I don't see the harm in leaving them exposed.

~Amelia

On Wed, Dec 10, 2025 at 2:12 AM Sebastian Berg <[email protected]> wrote:

> On Wed, 2025-12-10 at 09:26 +0100, Ralf Gommers via NumPy-Discussion
> wrote:
> > On Sun, Nov 23, 2025 at 11:51 AM Ralf Gommers
> > <[email protected]> wrote:
> > > On Sun, Nov 23, 2025 at 8:47 AM Matti Picus via NumPy-Discussion
> > > <[email protected]> wrote:
> > >
> > > The question I have is whether we should expose all the
> > > functionality that's currently exposed in `halffloat.h` or leave
> > > out some of the odd ones. The conversion and comparison functions
> > > seem most useful, but for example the `*_nonan` variants look
> > > weird (we don't normally expose "skip-nan" flavors just for some
> > > performance), as does `_iszero` (unlike `_isnan` et al. it doesn't
> > > have a C99 equivalent).
> >
> > Now that the header-only version in
> > https://github.com/numpy/numpy/pull/30380 is close to merge-ready,
> > I'd like to circle back to this point - what will we expose as the
> > new public API? A 1:1 replacement for everything is ready, but if we
> > expose it all under the new `npy_float16_*` names then we'll be
> > stuck with it in the future.
> > We can also decide to just expose the conversion and comparison
> > routines, plus the macros, but leave out the `*_nonan` routines,
> > `_iszero`, and possibly also the low-level bit conversion routines
> > like `npy_floatbits_to_float16bits`. They could just be ifdef'd out
> > and in case someone needs them, it'll be quite easy to expose them
> > in a next release.
> >
> > Thoughts?
>
> Ah, I half expected more math functions (that just cast to float), but
> it seems that is actually only `divmod` (which I would say we can
> safely drop).
>
> FWIW, I think the conversion functions are the truly important ones,
> after that I find it a bit hard to make a cut. Especially if we include
> comparisons, utilities like `isnan`/`isfinite` and maybe `nextafter`
> seem reasonable (and I suspect very simple code wise, some of these
> might even be used by the comparisons/conversions).
> At that point that is a large chunk of what is there...
>
> If we just look to thin out a bit: `spacing` isn't C99 and I agree that
> at least the `_nonan` versions seem fine to drop.
> The bit-level conversions seem ok to drop, although I wouldn't be
> surprised if they are used more than the normal conversions.
>
> - Sebastian
>
> > Cheers,
> > Ralf

-- 
Amelia Thurdekoos
Staff Software Engineer, Quansight
Quansight | Your Data Experts
w: www.quansight.org
e: [email protected]
_______________________________________________
NumPy-Discussion mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3//lists/numpy-discussion.python.org
Member address: [email protected]
