Hello,

I am currently writing collectively to an HDF5 file in parallel using
chunks, where each processor writes its subdomain as one chunk of a full
dataset. I have this working correctly using hyperslabs; however, the file
size is very large (about 18x larger than if the file were created with
sequential HDF5 and H5Pset_deflate(plist_id, 6)). If I try to apply that
routine to the property list while performing parallel I/O, HDF5 reports
that the feature is not yet supported (I am using v1.8.10). Is there any way
to compress the file during a parallel write?
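Roughly, the setup looks like the sketch below (the dataset name, dimensions, and the one-chunk-per-rank decomposition are simplified placeholders, not my actual code); the commented-out H5Pset_deflate call is the one that fails under parallel I/O:

#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* File access property list: MPI-IO driver for parallel access */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Full dataset NX x NY, decomposed along the first dimension so that
     * each rank owns one chunk (assumes NX is divisible by nprocs). */
    const hsize_t NX = 1024, NY = 1024;
    hsize_t dims[2]  = {NX, NY};
    hsize_t chunk[2] = {NX / (hsize_t)nprocs, NY};
    hid_t filespace = H5Screate_simple(2, dims, NULL);

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    /* This is the call 1.8.x rejects when the file is opened for
     * parallel access (filters not supported for parallel writes):
     * H5Pset_deflate(dcpl, 6); */

    hid_t dset = H5Dcreate(file, "field", H5T_NATIVE_DOUBLE, filespace,
                           H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Each rank selects its own subdomain (hyperslab) in the file space */
    hsize_t offset[2] = {(hsize_t)rank * chunk[0], 0};
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, chunk, NULL);
    hid_t memspace = H5Screate_simple(2, chunk, NULL);

    /* Collective data transfer */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    double *data = malloc(chunk[0] * chunk[1] * sizeof(double));
    for (hsize_t i = 0; i < chunk[0] * chunk[1]; i++)
        data[i] = (double)rank;

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, data);

    free(data);
    H5Pclose(dxpl); H5Sclose(memspace); H5Dclose(dset);
    H5Pclose(dcpl); H5Sclose(filespace); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}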

Thank you,
Rob