On Fri, Sep 27, 2013 at 07:21:58PM +0000, Biddiscombe, John A. wrote:
> I am also a little puzzled by the shape of the graphs later on, as
> independent drops and collective overtakes it. I presume the effect of
> latency on many writes to IONs is causing performance to drop, whilst the
> collective mode avoids some of this.
> This page of graphs is one of many for different file system configs and
> they all show the same pattern to a greater or lesser degree. I shall do
> more tests, I expect, until I am happy that I fully understand what's
> going on.

Collective I/O will do a few things that can help:

- aggregate I/O accesses down to a smaller number of "I/O
  aggregators".  This happens at the MPI-IO layer before the I/O nodes
  are involved, and IBM has optimized this process for Blue Gene such
  that the aggregators are elected based on their relationship to the
  I/O forwarding nodes.  These aggregators then make fewer I/O
  requests, and typically larger ones at that (see the first sketch
  after this list).

- align accesses.  Let's say each process does 1000000-byte writes,
  but your GPFS file system has a 4 MiB block size (i.e. not 4000000
  but rather 4194304 bytes).  GPFS does really well when accesses are
  aligned to a multiple of the block size.  The MPI-IO layer will
  shuffle accesses around a bit, with the end result that the
  aggregators will for the most part do their I/O to one or more
  non-shared GPFS file system blocks (see the second sketch after this
  list).
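
Roughly, at the MPI-IO level the difference between the two paths looks
like the sketch below.  This is just an illustration, not any particular
benchmark: the file name and the 1,000,000-bytes-per-rank transfer size
are made up.  The only change between the independent and collective
paths is which call you use; the collective MPI_File_write_at_all is
what lets the library funnel the data through the aggregators before
anything reaches the I/O nodes.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1000000;               /* 1,000,000 bytes per rank */
    char *buf = calloc(count, 1);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Offset offset = (MPI_Offset)rank * count;

    /* Independent: each rank issues its own request to the file system. */
    /* MPI_File_write_at(fh, offset, buf, count, MPI_BYTE,
                         MPI_STATUS_IGNORE); */

    /* Collective: every rank participates, so MPI-IO can aggregate and
     * align the accesses before anything hits GPFS. */
    MPI_File_write_at_all(fh, offset, buf, count, MPI_BYTE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}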
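
To put numbers on the alignment point: with 1,000,000-byte writes laid
out back to back, roughly four or five of those writes land in every
4,194,304-byte block, so when they go out independently nearly every
GPFS block is shared between writers.  From an HDF5 program, one place
to pass the block size down is the file-access property list, as in the
second sketch below.  The ROMIO hint names shown (cb_buffer_size,
romio_cb_write) are standard ROMIO hints, but whether they are the
right knobs, and the right values, for your particular Blue Gene/GPFS
installation is something to confirm locally.

#include <hdf5.h>
#include <mpi.h>

/* Build a file-access property list that hands the GPFS block size to
 * the MPI-IO layer and to HDF5's own allocator.  4194304 is the block
 * size from the example above. */
static hid_t make_aligned_fapl(void)
{
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_buffer_size", "4194304"); /* buffer = 1 block */
    MPI_Info_set(info, "romio_cb_write", "enable");  /* two-phase writes */

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

    /* Ask HDF5 to place anything larger than 1 MiB on a block boundary. */
    H5Pset_alignment(fapl, 1048576, 4194304);

    MPI_Info_free(&info);
    return fapl;
}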

Independent accesses just go straight to the file system as each
process issues them; there is nothing the MPI-IO layer can do to
optimize them.  (The last sketch below shows how to ask HDF5 for the
collective path instead.)
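
Assuming a parallel HDF5 build, the switch between the two modes on the
HDF5 side is the dataset-transfer property list; the library default is
independent.  The dataset and dataspace handles in the usage comment
are placeholders.

#include <hdf5.h>

/* Returns a transfer property list that asks for collective MPI-IO.
 * The library default is H5FD_MPIO_INDEPENDENT, i.e. the unoptimized
 * path described above. */
static hid_t collective_dxpl(void)
{
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    return dxpl;
}

/* usage:
 *   H5Dwrite(dset, memtype, memspace, filespace, collective_dxpl(), buf);
 */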

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
