I am playing with different I/O strategies for massively parallel multicore
systems. Say I have an MPI communicator which represents a subset of
cores (such as the cores on an SMP chip), and I want the root rank on that
communicator to create the file, but I want all of the other cores to write
to that file sequentially, in round-robin fashion.

Something like this (Fortran pseudocode):

if (myrank .eq. 0) then
   call h5open_f(error)
   call h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error)
   ! (write some metadata and 1D arrays)
endif

do irank = 0, num_cores_per_chip-1
   if (myrank .eq. irank) then
      call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, my_3d_array, dims, error)
   endif
   call MPI_Barrier(comm, ierr)   ! serialize: rank irank finishes before irank+1 starts
enddo

if (myrank .eq. 0) call h5close_f(error)


My question is: can I do the round-robin write part without having to open
and close the file (h5fopen_f / h5fclose_f) on each core? It seems that
file_id (and the other IDs like dset_id, dspace_id, etc.) carries
something tangible that links the ID to the file itself. I don't think I
can get away with an MPI_BCAST of file_id (and the other ID
variables) to the other cores and then use those handles to magically access
the file created on the root core... right? I am trying to avoid the overhead
of opening, seeking, and closing on each core.
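To make the cost I'm worried about concrete, this is the per-rank reopen
pattern I'd like to avoid (untested sketch; filename, dsetname, comm, and
the H5T_NATIVE_DOUBLE datatype are stand-ins for whatever the real code
uses):

```fortran
! Untested sketch: every rank reopens the file, writes, and closes it
! again, with a barrier enforcing the round-robin order.
do irank = 0, num_cores_per_chip-1
   if (myrank .eq. irank) then
      call h5fopen_f(filename, H5F_ACC_RDWR_F, file_id, error)
      call h5dopen_f(file_id, dsetname, dset_id, error)
      call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, my_3d_array, dims, error)
      call h5dclose_f(dset_id, error)
      call h5fclose_f(file_id, error)     ! the open/close overhead in question
   endif
   call MPI_Barrier(comm, ierr)
enddo
```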

Leigh

-- 
Leigh Orf
Associate Professor of Atmospheric Science
Department of Geology and Meteorology
Central Michigan University
Currently on sabbatical at the National Center for Atmospheric Research
in Boulder, CO
NCAR office phone: (303) 497-8200
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org