On Tue, Feb 22, 2011 at 5:49 PM, Mark Howison <[email protected]> wrote:
> Hi Leigh,
>
> It is true that you need to align writes to Lustre stripe boundaries
> to get reasonable performance when writing to a single shared file. If
> you use collective I/O, as Rob and Quincey have suggested, it will handle
> this automatically (since mpt/3.2) by aggregating your data on a
> subset of "writer" MPI tasks, then packaging the data into
> stripe-sized writes. It will also try to set the number of writers to
> the number of stripes.
>

OK, you answered one of the questions I just posted in another email.
There is aggregation going on with collective I/O.
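
In case it's useful to anyone else on the list, here is a minimal
sketch of the collective-write setup as I understand it, using the
standard parallel HDF5 calls (the file name is just a placeholder, and
error checking is omitted):

    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Open one shared file through the MPI-IO driver. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, fapl);

        /* Request collective transfers so the MPI-IO layer can
         * aggregate the data on a subset of writer tasks and issue
         * stripe-sized writes, as Mark describes above. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* ... create dataspaces/dataset, then
         *     H5Dwrite(dset, type, memspace, filespace, dxpl, buf) ... */

        H5Pclose(dxpl);
        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }

The H5FD_MPIO_COLLECTIVE transfer property is what allows the MPI-IO
layer to do the aggregation rather than every task writing
independently.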

My awful performance (60 GB in 11 minutes, roughly 90 MB/s aggregate)
occurred with 160 OSTs and 128 MB stripes. I chose 128 MB thinking
'bigger is better', since the resultant file will be on the order of
500 GB. Maybe I went too large...
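
If it helps, here's one way to set those striping values
programmatically via the standard MPI-IO hints (a sketch, not
necessarily how my actual code does it; the same thing could be done
with lfs setstripe on the output directory). As far as I know the
hints only take effect when the file is first created:

    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Lustre striping hints passed through MPI-IO.
         * striping_factor is the number of OSTs, striping_unit the
         * stripe size in bytes. These are the values from my run;
         * 128 MB may well be too big. */
        MPI_Info info;
        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "160");
        MPI_Info_set(info, "striping_unit", "134217728");  /* 128 MB */

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC,
                               H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }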

Leigh

-- 
Leigh Orf
Associate Professor of Atmospheric Science
Department of Geology and Meteorology
Central Michigan University
Currently on sabbatical at the National Center for Atmospheric
Research in Boulder, CO
NCAR office phone: (303) 497-8200
