Patrick,

Glad to hear you will upgrade Open MPI thanks to this workaround!

ompio has known performance issues on Lustre (which is why ROMIO is
still the default on that filesystem),
but I do not recall such performance issues being reported on an NFS
filesystem.

Sharing a reproducer would be very much appreciated, so we can improve ompio.
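
In case it helps you put one together: below is a rough, untested sketch
(file name and block size are made up, adjust to match your real runs) of
the kind of plain MPI-IO collective write test that could serve as a
reproducer, since parallel HDF5 1.8 issues its I/O through MPI-IO underneath.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* 4 MiB per rank -- an arbitrary size, not taken from your setup */
    const int block = 4 * 1024 * 1024;
    char *buf = malloc(block);
    for (int i = 0; i < block; i++)
        buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "mpiio_nfs_test.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    double t0 = MPI_Wtime();
    /* one collective write per rank at its own offset, roughly what
       collective HDF5 writes boil down to */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * block, buf, block,
                          MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("%d ranks, %d bytes each: %.3f s\n", nprocs, block, t1 - t0);

    free(buf);
    MPI_Finalize();
    return 0;
}

Running such a test on the NFS mount with and without
mpirun --mca io ^ompio
should tell whether plain MPI-IO already shows the slowdown, or whether
HDF5 itself is needed to reproduce it.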

Cheers,

Gilles

On Thu, Dec 3, 2020 at 6:05 PM Patrick Bégou via users
<users@lists.open-mpi.org> wrote:
>
> Thanks Gilles,
>
> this is the solution.
> I will set OMPI_MCA_io=^ompio automatically when loading the parallel
> hdf5 module on the cluster.
>
> I had been tracking this problem for several weeks but was not looking in
> the right direction (testing NFS server I/O, network bandwidth, ...).
>
> I think we will now move to modern Open MPI releases for good.
>
> Patrick
>
> > On 03/12/2020 at 09:06, Gilles Gouaillardet via users wrote:
> > Patrick,
> >
> >
> > In recent Open MPI releases, the default component for MPI-IO is ompio
> > (and no longer romio)
> >
> > unless the file is on a Lustre filesystem.
> >
> >
> > You can force romio with
> >
> > mpirun --mca io ^ompio ...
> >
> >
> > Cheers,
> >
> >
> > Gilles
> >
> > On 12/3/2020 4:20 PM, Patrick Bégou via users wrote:
> >> Hi,
> >>
> >> I'm using an old (but required by the codes) version of HDF5 (1.8.12) in
> >> parallel mode in 2 Fortran applications. It relies on MPI-IO. The
> >> storage is NFS-mounted on the nodes of a small cluster.
> >>
> >> With Open MPI 1.7 it runs fine, but with modern Open MPI 3.1 or 4.0.5 the
> >> I/O is 10x to 100x slower. Are there fundamental changes in MPI-IO in
> >> these new releases of Open MPI, and is there a way to get back the I/O
> >> performance with this parallel HDF5 release?
> >>
> >> Thanks for your advice
> >>
> >> Patrick
> >>
>
