Hi, I'm using an old (but required by our codes) version of HDF5 (1.8.12) in parallel mode in two Fortran applications. It relies on MPI-IO. The storage is NFS-mounted on the nodes of a small cluster.
With OpenMPI 1.7 it runs fine, but with modern OpenMPI (3.1 or 4.0.5) the I/O is 10x to 100x slower. Were there fundamental changes to MPI-IO in these newer OpenMPI releases, and is there a way to get back the I/O performance with this parallel HDF5 release? Thanks for your advice.

Patrick
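P.S. In case it helps narrow things down: would pinning the MPI-IO layer to a specific component be a reasonable test? A sketch of what I have in mind is below (the ROMIO component name is an assumption and varies by OpenMPI release, e.g. `romio314` in 3.1.x and `romio321` in 4.0.x; `./my_app` is a placeholder for one of our applications):

```shell
# List the MPI-IO components this OpenMPI build provides
ompi_info | grep "MCA io"

# Force a particular io component at launch time instead of the default
# (component name is version-dependent; adjust to what ompi_info reports)
mpirun --mca io romio321 -np 8 ./my_app
```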