On Wed, Dec 08, 2010 at 03:38:42PM -0700, Leigh Orf wrote:
> I am playing with different I/O strategies for massively parallel multicore
> systems. Let's say I have an MPI communicator which represents a subset of
> cores (such as the cores on an SMP chip) and I want the root rank on that
> communicator to create the file, but I want all of the other cores to write
> to that file in a sequential (round-robin) manner.

It sounds to me like you may be over-thinking this problem.  You've
got an MPI program, so you've got an MPI library, and that MPI library
contains an MPI-IO implementation.  That MPI-IO implementation will
likely take care of these problems for you.

> ...

> My question is: can I do the round-robin write part without having to open
> and close (h5fopen_f / h5fclose_f) the file for each core? 

No, not at the HDF5 level.  But if the underlying file system makes
that kind of optimization possible, the MPI-IO library will do it for you.

> It seems that file_id (and the other IDs like dset_id, dspace_id,
> etc.) carries with it something tangible that links that ID to the
> file itself. I don't think I can get away with doing MPI_BCAST of
> the file_id (and the other ID variables) to the other cores and
> using those handles to magically access the file created on the
> root core... right? I am trying to avoid the overhead of opening,
> seeking, and closing for each core.

Here's what I'd suggest: structure your code for parallel HDF5 with
MPI-IO support and collective I/O enabled.  Then the library,
whenever possible, will do the sorts of optimizations you're
thinking about.  You do have to enable MPI-IO support and collective
I/O explicitly, via property lists.
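
To make that concrete, here is a rough (untested) sketch using the
Fortran API, since you mentioned h5fopen_f.  The file name "out.h5",
the dataset name "data", and the one-row-of-ten-doubles-per-rank
layout are placeholders for illustration, not anything your code has
to match:

program phdf5_sketch
  use mpi
  use hdf5
  implicit none

  integer :: mpierr, hdferr, nprocs, myrank
  integer(hid_t) :: fapl_id, dxpl_id, file_id, filespace, memspace, dset_id
  integer(hsize_t) :: dims(2), count(2), bufdims(1)
  integer(hssize_t) :: offset(2)
  double precision :: buf(10)

  call MPI_Init(mpierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, mpierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, mpierr)
  call h5open_f(hdferr)

  ! File access property list: select the MPI-IO driver.
  call h5pcreate_f(H5P_FILE_ACCESS_F, fapl_id, hdferr)
  call h5pset_fapl_mpio_f(fapl_id, MPI_COMM_WORLD, MPI_INFO_NULL, hdferr)

  ! All ranks call the create; the library coordinates it.
  call h5fcreate_f("out.h5", H5F_ACC_TRUNC_F, file_id, hdferr, &
                   access_prp=fapl_id)

  ! Placeholder layout: one row of 10 doubles per rank.
  dims = (/ 10_hsize_t, int(nprocs, hsize_t) /)
  call h5screate_simple_f(2, dims, filespace, hdferr)
  call h5dcreate_f(file_id, "data", H5T_NATIVE_DOUBLE, filespace, &
                   dset_id, hdferr)

  ! Each rank selects its own row of the file dataspace.
  offset = (/ 0_hssize_t, int(myrank, hssize_t) /)
  count  = (/ 10_hsize_t, 1_hsize_t /)
  call h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, &
                             count, hdferr)
  bufdims = (/ 10_hsize_t /)
  call h5screate_simple_f(1, bufdims, memspace, hdferr)

  ! Transfer property list: request collective I/O so the MPI-IO
  ! layer can aggregate the per-rank writes.
  call h5pcreate_f(H5P_DATASET_XFER_F, dxpl_id, hdferr)
  call h5pset_dxpl_mpio_f(dxpl_id, H5FD_MPIO_COLLECTIVE_F, hdferr)

  buf = dble(myrank)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, buf, bufdims, hdferr, &
                  mem_space_id=memspace, file_space_id=filespace, &
                  xfer_prp=dxpl_id)

  call h5pclose_f(dxpl_id, hdferr)
  call h5pclose_f(fapl_id, hdferr)
  call h5sclose_f(memspace, hdferr)
  call h5sclose_f(filespace, hdferr)
  call h5dclose_f(dset_id, hdferr)
  call h5fclose_f(file_id, hdferr)
  call h5close_f(hdferr)
  call MPI_Finalize(mpierr)
end program phdf5_sketch

Note that every rank participates in the create and the write, so
there is no per-core open/seek/close loop at all; with the collective
transfer property list in place, the MPI-IO layer is free to
aggregate those per-rank writes however it sees fit.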

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
