Hi Pierre,
As far as I understand, bfloat16 is specialized for machine learning
(and other applications that need little precision but must handle
inputs of widely varying magnitude).
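For illustration, here is a minimal Python sketch of the truncating
float32 -> bfloat16 conversion (the function names are mine, not from
any library). It shows the trade-off: bfloat16 keeps float32's full
8-bit exponent, so the dynamic range survives, but only 7 of the 23
mantissa bits remain, so fine precision is lost:

    import struct

    def float32_to_bfloat16_bits(x: float) -> int:
        """Truncate a float32 to its 16 high bits (sign, 8 exponent, 7 mantissa bits)."""
        (bits32,) = struct.unpack("<I", struct.pack("<f", x))
        return bits32 >> 16

    def bfloat16_bits_to_float32(bits16: int) -> float:
        """Widen a bfloat16 bit pattern back to float32 (exact, no rounding needed)."""
        (x,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
        return x

    # Huge magnitudes survive the round trip; fine precision does not:
    print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.0e38)))  # ~2.99e38
    print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.001)))   # 1.0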
The format spec does not state it explicitly, but Arrow expects
floating-point data to be represented in IEEE format.
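Until there is first-class support, one workaround (my own sketch, not
something the spec defines) would be to carry the raw bfloat16 bit
patterns in a uint16 array and record the intended interpretation in
the field metadata, so a cooperating consumer can reinterpret them:

    import pyarrow as pa

    # Raw bfloat16 bit patterns for 1.0, 2.0 and the max finite value (~3.39e38).
    raw_bits = pa.array([0x3F80, 0x4000, 0x7F7F], type=pa.uint16())
    field = pa.field("values", pa.uint16(), metadata={"encoding": "bfloat16"})
    table = pa.table({"values": raw_bits}, schema=pa.schema([field]))
    print(table.schema)

Of course, this only round-trips between producers and consumers that
agree on the metadata convention; any other reader just sees integers.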
Hi,
There seem to be two competing standards for floats with 16 bits (a
quick comparison sketch follows the links):
- https://en.wikipedia.org/wiki/Bfloat16_floating-point_format
- IEEE: https://en.wikipedia.org/wiki/IEEE_754-2008_revision
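For concreteness, here is how the two layouts trade bits against each
other (assuming NumPy, whose float16 is the IEEE binary16 type; NumPy
has no built-in bfloat16):

    # IEEE binary16: 1 sign | 5 exponent | 10 mantissa -> max finite ~6.55e4
    # bfloat16:      1 sign | 8 exponent |  7 mantissa -> max finite ~3.39e38
    import numpy as np

    print(np.float16(np.float32(65504.0)))  # max finite binary16, still finite
    print(np.float16(np.float32(1.0e5)))    # inf: overflows binary16
    # bfloat16 would keep 1.0e5 finite (at ~3 significant decimal digits),
    # because it reuses float32's 8-bit exponent.

In short, binary16 spends its bits on precision, bfloat16 on range.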
Has there been any thought on how this could be handled? Would it make
sense to add some kind of DataType for bfloat16?