Hi again,

my next step with half precision and HDF5 is to make it possible to convert
data to and from half precision while also keeping a scale factor. Data that
can be represented in half precision can be at most 65504 (the maximum
finite value). If the original data is larger than that, it would make sense
to keep a scale factor, or at least the order of magnitude used before
scaling. Any ideas on a good way to do that with HDF5? Should I just extract
a reasonable order of magnitude and keep it as an attribute? I looked a bit
at the scale-offset filter, but it seems to suffer from exactly the same
problem!

thanks!

-- dimitris

2010/1/11 Dimitris Servis <[email protected]>

> Hi all,
>
> just wanted to report that the half precision (16bit) floating point works
> great with HDF5. I implemented some in-place array conversion functions (for
> a good implementation with ample room for improvement see
>
> http://www.mathworks.com/matlabcentral/fileexchange/23173-ieee-754r-half-precision-floating-point-converter).
> The in-place replacement is straightforward to implement. I also
> implemented a half float type to use with C++ and standard containers. On
> the HDF5 side, I used the in-place array converters to register the new type
> using code like this:
>
> hid_t halftype = H5Tcopy(H5T_NATIVE_FLOAT);
> H5Tset_fields(halftype, 15, 10, 5, 0, 10);
> H5Tset_ebias(halftype, 15);   /* binary16 exponent bias, not float's 127 */
> H5Tset_size(halftype, 2);
> H5Tregister(H5T_PERS_HARD, "half-to-float", halftype, H5T_NATIVE_FLOAT,
> half_conversion_function);
> (and the same for all other conversion directions)
>
> So far it works great. I can convert from any array to half float and back.
> Disk usage is obviously halved, and so is memory usage, especially if you're on
> the C++ side where it is easier to work with the custom half type.
>
> HTH
>
> -- dimitris
>
>
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
