On 13/01/13 16:38, Robert Seigel wrote:
Hello, I am currently writing collectively to an HDF5 file in parallel using chunks, where each processor writes its subdomain as one chunk of a full dataset. I have this working correctly using hyperslabs; however, the file size is very large [about 18x larger than if the file were created with sequential HDF5 and H5Pset_deflate(plist_id, 6)]. If I try to apply deflate to the property list while performing parallel I/O, HDF5 reports that this feature is not yet supported (I am using v1.8.10). Is there any way to compress the file during a parallel write?
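
For reference, a minimal sketch of the setup described above: a chunked dataset written collectively, one chunk-aligned hyperslab per rank, with the H5Pset_deflate call that sequential HDF5 accepts but that parallel HDF5 1.8.x rejects. The file name, dataset name, and dimensions are illustrative only.

    #include <hdf5.h>
    #include <mpi.h>

    #define NX_LOCAL 100   /* illustrative subdomain size */
    #define NY_LOCAL 100

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Open the file for parallel access through the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Full dataset spans all subdomains along the first dimension;
           chunk size equals one rank's subdomain. */
        hsize_t dims[2]  = { (hsize_t)nprocs * NX_LOCAL, NY_LOCAL };
        hsize_t chunk[2] = { NX_LOCAL, NY_LOCAL };
        hid_t filespace = H5Screate_simple(2, dims, NULL);

        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        /* H5Pset_deflate(dcpl, 6);  works with sequential HDF5, but with an
           MPI-IO file in 1.8.x the write fails: filters are not supported
           for parallel I/O. */

        hid_t dset = H5Dcreate2(file, "field", H5T_NATIVE_DOUBLE, filespace,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* Each rank selects its own chunk-aligned hyperslab. */
        hsize_t start[2] = { (hsize_t)rank * NX_LOCAL, 0 };
        hsize_t count[2] = { NX_LOCAL, NY_LOCAL };
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t memspace = H5Screate_simple(2, count, NULL);

        double data[NX_LOCAL][NY_LOCAL];
        /* ... fill data with the local subdomain ... */

        /* Collective transfer property list for the write. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, data);

        H5Pclose(dxpl); H5Sclose(memspace); H5Dclose(dset);
        H5Pclose(dcpl); H5Sclose(filespace); H5Fclose(file); H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }
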
This is a compression issue rather than an HDF5 one: you could look at parallel versions of existing compressors (pigz, pbzip2, ...) and compress the file after it is written. hth, Jerome
Thank you, Rob
