Thanks, Mohamad, for the info. I will definitely try out the upcoming
beta version.

Suman


On Wed, Dec 12, 2012 at 9:59 AM, Mohamad Chaarawi <[email protected]> wrote:

>
>
> On Dec 11, 2012, at 10:26 PM, Mohamad Chaarawi <[email protected]>
> wrote:
>
> Of course, a much simpler way to do this is to create the file using
> MPI_COMM_SELF, create the datasets independently as you did (since
> collective on MPI_COMM_SELF is, well, independent), and close the file.
>
>
> Ah, wait, sorry, I was not thinking right, since it will be the same file.
> This will not work, as bad things will happen with this access pattern, so
> disregard this.
>
> Mohamad
>
>
>
> Then you can actually open the file again using comm_world, open the
> dataset for each process (H5Dopen is not collective), then access the
> dataset for each process independently as you were doing.
>
> Mohamad
>
>
> On Dec 11, 2012, at 10:19 PM, Mohamad Chaarawi <[email protected]>
> wrote:
>
>
>
> On Dec 11, 2012, at 9:33 PM, Suman Vajjala <[email protected]> wrote:
>
> Hi,
>
>  Thank you for the replies. The problem is that, to create a dataset
> using H5Dcreate, every process has to know that dataset's size. Is there a
> way of doing it without every process knowing the data size?
>
>
> If the dataset size is based on the process rank, then you can just
> calculate that at each process and call H5Dcreate collectively on all
> processes for each dataset. If you can't calculate that size at each
> process, you can use MPI_Allgather to distribute each process's size to
> all other processes; then you would be able to call H5Dcreate n times.
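
A minimal sketch of the MPI_Allgather approach described above (assuming HDF5 1.8+ built with parallel support; the file name, dataset names, and the per-rank size calculation are made up for illustration):

```c
/* Hypothetical sketch (untested): each rank computes only its own dataset
 * size, MPI_Allgather shares all sizes, and then every rank takes part in
 * all n collective H5Dcreate calls. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank knows only its own size (illustrative formula). */
    hsize_t my_size = (hsize_t)(100 * (rank + 1));

    /* Distribute every rank's size to all ranks. */
    hsize_t *sizes = malloc(nprocs * sizeof(hsize_t));
    MPI_Allgather(&my_size, sizeof(hsize_t), MPI_BYTE,
                  sizes, sizeof(hsize_t), MPI_BYTE, MPI_COMM_WORLD);

    /* Open the file collectively on MPI_COMM_WORLD. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* n collective H5Dcreate calls: all ranks create all n datasets. */
    for (int i = 0; i < nprocs; i++) {
        char name[32];
        snprintf(name, sizeof(name), "dset_%d", i);
        hid_t space = H5Screate_simple(1, &sizes[i], NULL);
        hid_t dset = H5Dcreate(file, name, H5T_NATIVE_INT, space,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dclose(dset);
        H5Sclose(space);
    }

    H5Pclose(fapl);
    H5Fclose(file);
    free(sizes);
    MPI_Finalize();
    return 0;
}
```

Compiled with h5pcc and run under mpiexec, every rank participates in all nprocs H5Dcreate calls, which satisfies the collective-call requirement even though each rank only computed its own size.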
>
> This is a limitation in the current HDF5 standard that we plan to relax
> in the future, but this work is still in the prototyping stage. Ping me in
> early January, and we might have a beta version that you could use.
>
> Mohamad
>
>
> Regards
> Suman
>
> On Wed, Dec 12, 2012 at 8:58 AM, Elena Pourmal <[email protected]> wrote:
> _______________________________________________
> Hdf-forum is for HDF software users discussion.
> [email protected]
> http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org