Patrick,

In recent Open MPI releases, the default MPI-IO component is ompio (no longer romio), unless the file is on a Lustre filesystem.


You can force romio with

mpirun --mca io ^ompio ...
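
If you want to double-check which MPI-IO components your build provides, or select ROMIO by name instead of excluding ompio, something like the lines below should work. Note the exact ROMIO component name depends on the release (for example romio314 in 3.1.x and romio321 in 4.0.x), so check ompi_info first and treat the name below as an example:

# list the MPI-IO components available in this Open MPI build
ompi_info | grep "MCA io"
# select the ROMIO component explicitly (name is version dependent, e.g. romio321 on 4.0.x)
mpirun --mca io romio321 ...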


Cheers,


Gilles

On 12/3/2020 4:20 PM, Patrick Bégou via users wrote:
Hi,

I'm using an old (but required by the codes) version of HDF5 (1.8.12) in
parallel mode in two Fortran applications. It relies on MPI-IO. The
storage is NFS-mounted on the nodes of a small cluster.

With Open MPI 1.7 it runs fine, but with the more recent Open MPI 3.1 or 4.0.5
the I/O is 10x to 100x slower. Are there fundamental changes in MPI-IO
in these new releases of Open MPI, and is there a way to get back the I/O
performance we had with this parallel HDF5 release?

Thanks for your advice

Patrick
