Hello,

I'm a newcomer to the HDF5 world. I have compared the performance of our
existing binary file I/O against HDF5, and I'm seeing modest speed
improvements with HDF5.

The next step for me is to experiment with advanced HDF5 topics like
chunking and compression. Based on what I read in the HDF5 documentation,
chunking comes in handy when one knows the access patterns of a dataset
ahead of time. In my case, the dataset is composed entirely of
one-dimensional, double-precision float arrays. Most of these arrays are
the same size, but some of them will be considerably smaller than the
rest. For any given read, I need to read a single 1D array in its
entirety. Given this scenario, I feel I wouldn't gain any performance
improvement from chunking.
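To illustrate my reasoning (a toy sketch in plain Python, not HDF5 API code; the function name and chunk size are made up for the example): a read that spans a byte/element range touches every chunk that range overlaps, so a whole-array read always touches every chunk of that array, and chunking cannot reduce the I/O volume for full reads. It would only matter for partial reads, or as an enabler for compression.

```python
def chunks_touched(start, count, chunk_size):
    """Number of chunks a read of `count` elements starting at
    `start` overlaps, for a 1D dataset with the given chunk size."""
    first = start // chunk_size                  # index of first chunk hit
    last = (start + count - 1) // chunk_size     # index of last chunk hit
    return last - first + 1

# Whole-array read of 1000 doubles, chunk size 256:
# touches ceil(1000/256) = 4 chunks, i.e. all of them.
print(chunks_touched(0, 1000, 256))   # -> 4

# A small partial read would touch only 1 chunk -- but I never do those.
print(chunks_touched(100, 50, 256))   # -> 1
```

So for my full-array access pattern, chunk layout seems irrelevant to read performance, which is why I doubt chunking helps me.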

Is my analysis correct? If not, please help me understand how chunking will
help my cause.

Appreciate your help,
MDH.
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5