Hi,

I'm trying to get the MPI-IO/ROMIO shipped with OpenMPI and MVAPICH2 working with our Lustre 1.8 filesystem. Looking back at the list archives, I see that three different solutions have been offered:
1) Disable "data sieving" (change the default library behaviour)
2) Mount Lustre with "localflock" (flock consistent only within a node)
3) Mount Lustre with "flock" (flock consistent across the cluster)

However, it is not entirely clear which of these was considered the "best". Could anyone who is using MPI-IO on Lustre comment on which they picked, please? I *think* the May 2008 list archive indicates I should be using (3), but I'd feel a whole lot better about it if I knew I wasn't alone :)

Cheers,

Mark

--
-----------------------------------------------------------------
Mark Dixon                        Email : [email protected]
HPC/Grid Systems Support          Tel (int): 35429
Information Systems Services      Tel (ext): +44(0)113 343 5429
University of Leeds, LS2 9JT, UK
-----------------------------------------------------------------
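
P.S. In case it helps to make (1) concrete: as I understand it, data sieving can be switched off per file through ROMIO's MPI_Info hints rather than by rebuilding the library. A minimal sketch of that, assuming the usual ROMIO hint names romio_ds_read/romio_ds_write (I haven't verified that every OpenMPI/MVAPICH2 build honours them) and a made-up file path, would be something like:

  /* Sketch: open a file on Lustre with ROMIO data sieving disabled
   * via per-file MPI_Info hints. The path below is a placeholder. */
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_File fh;
      MPI_Info info;
      int rank;
      char buf[16] = "hello";

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Info_create(&info);
      /* Ask ROMIO not to use data sieving for independent reads/writes. */
      MPI_Info_set(info, "romio_ds_read",  "disable");
      MPI_Info_set(info, "romio_ds_write", "disable");

      MPI_File_open(MPI_COMM_WORLD, "/lustre/scratch/testfile",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

      /* Each rank writes its own chunk at a disjoint offset. */
      MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(buf),
                        buf, sizeof(buf), MPI_CHAR, MPI_STATUS_IGNORE);

      MPI_File_close(&fh);
      MPI_Info_free(&info);
      MPI_Finalize();
      return 0;
  }

Whereas (2) and (3), as I understand them, are purely client-side mount options ("-o localflock" / "-o flock") and need no application changes, which is part of why I'm unsure where the trade-off lies.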
