Hi all,

On Nov 10, 2010, at 11:04 AM, Mark Howison wrote:

> On Wed, Nov 10, 2010 at 11:47 AM, Rob Latham <[email protected]> wrote:
>> I see, thanks to jumpshot, that you now have many processors doing
>> metadata updates before closing the file.  For this small test, 22 out
>> of 256 processors (ranks 0-21) do an MPI_FILE_WRITE_AT.  These used to
>> all come from rank 0. Neat!
>> Could that final metadata update be done collectively?  I think you've
>> explained to me why it could before, but I'm drawing a blank.
> 
> Hi Rob,
> 
> You are probably seeing the "round-robin" metadata writing
> optimization that came out of our work tuning HDF5 for Lustre that we
> presented at IASDS2010. I'm guessing that you are seeing only 22
> writes because that is how many individual pieces of metadata are in
> your file. I'm surprised that you are seeing independent MPI writes,
> because I thought it did generate collective calls when we were
> testing it.
> 
> Eventually, the metadata writes will be consolidated into larger
> contiguous pieces once a newer "pagefile" mechanism for metadata is
> developed. The round-robin approach is more of a stop-gap until the
> pagefile mechanism is available (at which point the metadata writes
> should be large enough to interact much better with parallel file
> systems).
> 
> (Quincey should correct me on anything I've gotten wrong here.)

        Nope, you are correct on all counts, except that we are still doing 
independent I/O calls.  I think the next phase of work is to make the metadata 
I/O collective.  I'm working on the page cache design right now and should 
have something reasonable in about a month or so.

        Quincey
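
For anyone who wants a concrete picture of the pattern being discussed, here
is a minimal MPI-only sketch.  It is illustrative only, not the HDF5 library
code: the round-robin mapping (piece i goes to rank i % nprocs), the file
name, the piece count, and the piece size are all invented for the example.

/*
 * Illustrative sketch only -- not the HDF5 internals.  A handful of small
 * "metadata pieces" are dealt out to ranks round-robin, and each owning
 * rank writes its pieces with an independent MPI_File_write_at, which is
 * what the current code does.  Swapping in MPI_File_write_at_all (called
 * by every rank, with a zero count on ranks that own nothing) is what
 * "making the metadata I/O collective" would look like at the MPI level.
 */
#include <mpi.h>
#include <string.h>

#define NUM_PIECES 22      /* e.g. 22 metadata pieces, as in the trace above */
#define PIECE_SIZE 512     /* hypothetical size of each piece, in bytes */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_File fh;
    char buf[PIECE_SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    memset(buf, rank, sizeof(buf));   /* dummy contents */

    MPI_File_open(MPI_COMM_WORLD, "metadata_demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Piece i belongs to rank i % nprocs.  With 256 ranks and 22 pieces,
     * only ranks 0-21 ever issue a write, matching the jumpshot trace. */
    for (int i = 0; i < NUM_PIECES; i++) {
        if (i % nprocs == rank) {
            MPI_Offset off = (MPI_Offset)i * PIECE_SIZE;
            /* Independent write; the collective variant would be
             * MPI_File_write_at_all, called on every rank. */
            MPI_File_write_at(fh, off, buf, PIECE_SIZE, MPI_BYTE,
                              MPI_STATUS_IGNORE);
        }
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Build with mpicc and run with, e.g., mpirun -np 256; the point of the
collective variant is that the MPI-IO layer can then aggregate the many small
metadata writes instead of issuing them one at a time.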

