On Wednesday 03 March 2010 13:18:02, Thorben Kröger wrote:
> On a related note, I've just found this piece of information which might
> accelerate our program as well:
> 
> ---
> 
> http://www.hdfgroup.org/HDF5/doc/ADGuide/CompatFormat180.html
> 
> 
> Compact-or-indexed groups enable much-compressed link storage for groups
>  with very few members and improved efficiency and performance for groups
>  with very large numbers of members. The efficiency and performance impacts
>  are most noticeable at the extremes: all unnecessary overhead is
>  eliminated for groups with zero members; groups with tens of thousands of
>  members may see as much as a 100-fold performance gain.
> 
> H5Pset_libver_bounds( hid_t fapl_id, H5F_libver_t low, H5F_libver_t high )
> H5Pget_libver_bounds( hid_t fapl_id, H5F_libver_t* low, H5F_libver_t* high
>  )
> 
> Default behavior: If H5Pset_libver_bounds is not called with low equal to
> H5F_LIBVER_LATEST, then the HDF5 Library provides the greatest possible
> format compatibility. It does this by creating objects with the earliest
> possible format that will handle the data being stored and accommodate
> the action being taken.
> 
> ---
> 
> Though the 30 GB file I was talking about was written with HDF5 1.8.4, if I
> understand correctly it will not make use of these new features, because the
> library maintains backward compatibility with 1.6 by default. Correct?

Yes.  You should enable the new link storage by calling H5Pset_libver_bounds() 
explicitly.  IMO, this would be the best path to improve performance in your 
scenario (unless you prefer the safer path of creating large datasets and 
references to parts of them, as Werner suggested).

-- 
Francesc Alted

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
