Mark Dixon wrote:
> I'm trying to get the MPI-IO/ROMIO shipped with OpenMPI and MVAPICH2
> working with our Lustre 1.8 filesystem. Looking back at the list archives,
> 3 different solutions have been offered:
>
> 1) Disable "data sieving" (change default library behaviour)
> 2) Mount Lustre with "localflock" (flock consistent only within a node)
> 3) Mount Lustre with "flock" (flock consistent across cluster)
>
> However, it is not entirely clear which of these was considered the
> "best". Could anyone who is using MPI-IO on Lustre comment which they
> picked, please?
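For reference, the three options can be sketched roughly as below. This is only an illustration, not a recipe from this thread: the hints-file path, MGS node name, filesystem name, and mount point are placeholders, and ROMIO's data-sieving hints (romio_ds_read / romio_ds_write, read from the file named by the ROMIO_HINTS environment variable) can also be set per-file through an MPI_Info object instead.

```shell
# Option 1: disable ROMIO's data sieving via a hints file.
# ROMIO reads "key value" pairs from the file named by ROMIO_HINTS;
# valid values for these keys are enable/disable/automatic.
cat > /tmp/romio_hints <<'EOF'
romio_ds_read disable
romio_ds_write disable
EOF
export ROMIO_HINTS=/tmp/romio_hints

# Option 2: "localflock" -- flock/fcntl locks are coherent only
# within one node (cheap, but not safe for cross-node locking).
# (mgs@tcp0:/lustre and /mnt/lustre are hypothetical names.)
mount -t lustre mgs@tcp0:/lustre /mnt/lustre -o localflock

# Option 3: "flock" -- cluster-coherent lock semantics, at some
# extra cost in lock traffic.
mount -t lustre mgs@tcp0:/lustre /mnt/lustre -o flock
```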
FWIW, we've been using MPICH2's MPI-IO/ROMIO/ADIO with Lustre (v 1.8) for
several months now, and it's been working reliably. We do mount the Lustre
filesystem with "flock"; at one time I thought it necessary, but I don't
recall whether I verified that after the initial problems with MPI-IO were
resolved.

Only a recent MPICH2 will have a working MPI-IO/ROMIO/ADIO for Lustre;
perhaps the code would work with OpenMPI and MVAPICH2 as well.

-- 
Martin

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
