Phil,

We've been testing Lustre 1.6.0.1 with MPI-IO (using the mpi-io-test
benchmark that ships with PVFS2, and NCAR's POP-IO test) on our BlueGene
and have seen very poor performance: ~10 MB/s.  IOR shows similar
results when all ranks write to the same file.  Telling IOR to write to
separate files shows great scalability, but our MPI-IO apps don't work
that way.  We've also seen similar MPI-IO performance at Livermore
running Lustre 1.4 on a regular Linux cluster (10-20 MB/s).  We don't
see this with the same tests on our GPFS or PVFS2 file systems.
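For reference, the shared-file vs. separate-files comparison was along
these lines (rank count, sizes, and paths here are illustrative, not the
exact runs):

```shell
# Shared-file mode: all ranks write to one file -- the slow case (~10 MB/s):
mpirun -np 64 ./IOR -a MPIIO -b 64m -t 1m -o /mnt/lustre/ior.out

# File-per-process mode (-F): each rank gets its own file -- scales well:
mpirun -np 64 ./IOR -a MPIIO -b 64m -t 1m -F -o /mnt/lustre/ior.out
```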

I haven't tracked this down very far yet, so if anyone has suggestions
of things to check, or similar/different experiences, I'd love to hear
about them.  The I/O sizes seem reasonably large (~1 MB) and there don't
appear to be any client evictions.  I did try the -o localflock mount
option patch for 1.6, since I know MPI-IO flocks byte ranges, but I
haven't had a chance to benchmark it fully yet.
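For anyone wanting to reproduce that test, the remount looked roughly
like this (filesystem name and mount point are placeholders):

```shell
# localflock makes flock() calls succeed with node-local semantics only,
# instead of blocking on cluster-wide lock traffic:
mount -t lustre -o localflock mds1@tcp0:/testfs /mnt/lustre
```

Note that localflock is only coherent within a single client node; the
plain flock option is the cluster-wide-coherent (and more expensive)
variant, which is part of what I want to measure.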

-Adam

Phil Dickens <[EMAIL PROTECTED]> wrote:
> 
> Hello,
> 
>   I am trying to find information about the performance
> of ROMIO (the MPI-IO implementation developed at Argonne
> National Laboratory) on the Lustre file system. Is
> ROMIO widely used on Lustre, or are there proprietary
> implementations of MPI-IO that are used? Does anyone have
> information on the performance of ROMIO on Lustre?
> 
> Many thanks!
> 
> Phil Dickens
> 
> _______________________________________________
> Lustre-discuss mailing list
> [email protected]
> https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
> 
