Thank you for the response, Jerome. Is this not an HDF5 issue because compressing the data during a parallel write is simply not possible with HDF5? I would rather not have to compress the .h5 file after it has been created.
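
For reference, here is a stripped-down sketch of the write path I described. The file and dataset names, the dimensions, and the one-chunk-per-rank layout are placeholders rather than my actual code:

#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file with the MPI-IO driver for parallel access. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Global 2-D dataset split row-wise; one chunk per rank's subdomain. */
    const hsize_t NX = 64, NY = 64;                /* placeholder sizes */
    hsize_t dims[2]  = { (hsize_t)nprocs * NX, NY };
    hsize_t chunk[2] = { NX, NY };

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    /* H5Pset_deflate(dcpl, 6); */   /* adding this is what 1.8.10 refuses
                                        for a file opened for parallel I/O */

    hid_t filespace = H5Screate_simple(2, dims, NULL);
    hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_DOUBLE, filespace,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Each rank selects its own subdomain as a hyperslab in the file. */
    hsize_t start[2] = { (hsize_t)rank * NX, 0 };
    hsize_t count[2] = { NX, NY };
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t memspace = H5Screate_simple(2, count, NULL);

    double *buf = malloc(NX * NY * sizeof *buf);   /* real data goes here */
    for (hsize_t i = 0; i < NX * NY; i++) buf[i] = (double)rank;

    /* Collective write of each rank's chunk. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    free(buf);
    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Pclose(dcpl); H5Fclose(file); H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}

Uncommenting the H5Pset_deflate line is what produces the "feature is not yet supported" error under 1.8.10, so the file comes out uncompressed and much larger than the sequential, deflated version.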

Rob


On Sun, Jan 13, 2013 at 11:10 AM, Jerome BENOIT <[email protected]> wrote:

>
>
> On 13/01/13 16:38, Robert Seigel wrote:
>
>> Hello,
>>
>> I am currently writing collectively to an HDF5 file in parallel using
>> chunks, where each processor writes its subdomain as a chunk of a full
>> dataset. I have this working correctly using hyperslabs, however the file
>> size is very large [about 18x larger than if it were created using
>> sequential HDF5 with an H5Pset_deflate(plist_id, 6) filter]. If I try to apply this
>> routine to the property list while performing parallel I/O, HDF5 says that
>> this feature is not yet supported (I am using v1.8.10). Is there any way to
>> compress the file during parallel write?
>>
>>
> This is a compression issue rather than an HDF5 one:
> you may look for parallel versions of current compressors (pigz, pbzip2,
> ...).
>
> hth,
> Jerome
>
>
>
>  Thank you,
>> Rob
>>
>>
>