Hello!

Is there interest in standardising a float16 logical type in the Parquet
spec? I am proposing a PR for Arrow that writes float16 to Parquet as
FixedSizeBinary: https://issues.apache.org/jira/browse/ARROW-17464
However, for the sake of portability between data analysis tools, it would
of course be much better to have this type standardised in the format itself.

Previous requests for this have been made here:
https://issues.apache.org/jira/browse/PARQUET-1647 and here:
https://issues.apache.org/jira/browse/PARQUET-758

With the rise of neural networks, half-precision floating-point numbers are
becoming more popular
(https://en.wikipedia.org/wiki/Half-precision_floating-point_format), so I
believe there is real demand for supporting this type. I am new to the
project, but I am happy to contribute development time if there is support
for this feature, given some guidance.

Warm regards,

Anja
