I have downloaded the nightly snapshot tarball of October 10th, 2018 for
the 3.1 version, and it solves the memory problem.
I ran my test case on 1, 2, 4, 10, 16, 20, 32, 40, and 64 cores.
This version also lets me compile my prerequisite libraries, so we
can use it out of the box.
MPI_LB, MPI_UB, and MPI_Type_struct have been deprecated since MPI-2 and
were removed in MPI-3 (
https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node34.htm). It is
trivial to replace MPI_Type_struct with MPI_Type_create_struct. Replacing
MPI_UB and MPI_LB with MPI_Type_create_resized takes a bit more work, but it
is still mechanical.
You might want to update your HDF5 code to not use MPI_LB and MPI_UB -- these
constants were deprecated in MPI-2.1 in 2008 (an equivalent function,
MPI_TYPE_CREATE_RESIZED, was added in MPI-2.0 in 1997), and were removed from
the MPI-3.0 standard in 2012.
Meaning: the death of these constants was telegraphed many years in advance.
Those features (MPI_LB/MPI_UB/MPI_Type_struct) were removed in MPI-3.0. It is
fairly straightforward to update the code to be MPI-3.0 compliant.
MPI_Type_struct -> MPI_Type_create_struct
MPI_LB/MPI_UB -> MPI_Type_create_resized
/* excerpt of the old MPI-1 style setup later passed to MPI_Type_struct */
types = MPI_LB;
disp = my_lb;
lens = 1;
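For concreteness, here is a minimal sketch of that replacement in C, assuming
the struct only needs to carry one payload type plus an explicit lower bound
and extent; the names build_resized_type, payload, my_lb, and my_extent are
placeholders for illustration, not the actual HDF5 variables:

    #include <mpi.h>

    /* Build a datatype with an explicit lower bound and extent, the
     * MPI-3 way. Replaces the removed MPI_LB/MPI_UB/MPI_Type_struct
     * pattern shown above. */
    static void build_resized_type(MPI_Aint my_lb, MPI_Aint my_extent,
                                   MPI_Datatype payload,
                                   MPI_Datatype *newtype)
    {
        int          lens[1]  = { 1 };
        MPI_Aint     disp[1]  = { 0 };
        MPI_Datatype types[1] = { payload };
        MPI_Datatype tmp;

        /* MPI_Type_struct -> MPI_Type_create_struct: same arguments,
         * but no MPI_LB/MPI_UB pseudo-types in the arrays. */
        MPI_Type_create_struct(1, lens, disp, types, &tmp);

        /* The MPI_LB/MPI_UB entries collapse into one call that sets
         * the lower bound and extent explicitly. */
        MPI_Type_create_resized(tmp, my_lb, my_extent, newtype);
        MPI_Type_commit(newtype);

        /* Freeing the intermediate type is safe; the committed type
         * keeps its own reference. */
        MPI_Type_free(&tmp);
    }

The committed type behaves the same as the old struct-with-markers version;
only the way the bounds are expressed changes.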
Hi Jeff and George,
thanks for your answer. I found some time to work again on this problem and I
have downloaded OpenMPI 4.0.0rc4. It compiles without any problem, but building
the first dependency of my code (hdf5 1.8.12) with this version 4 fails:
../../src/H5Smpio.c:355:28: error: 'MPI_LB'
Yeah, it's a bit terrible, but we didn't reliably reproduce this problem for
many months, either. :-\
As George noted, it's been ported to all the release branches but is not yet in
an official release. Until an official release (4.0.0 just had an rc; it will
be released soon, and 3.0.3 will follow), the nightly snapshots are the way to
get the fix.
I can't speculate on why you did not notice the memory issue before, simply
because for months we (the developers) didn't notice it and our testing
infrastructure didn't catch this bug despite running millions of tests. The
root cause of the bug was a memory ordering issue, and these are really hard
to reproduce consistently.
thanks for your answer. I was previously using OpenMPI 3.1.2 and had this
problem as well. However, using --enable-debug --enable-mem-debug at configure
time, I was unable to reproduce the failure, and it was quite difficult for me
to trace the problem. Maybe I have not run enough tests.
A few days ago we pushed a fix to master for a strikingly similar issue.
The patch will eventually make it into the 4.0 and 3.1 series, but not into
the 2.x series. The best path forward will be to migrate to a more recent
OMPI version.
On Tue, Sep 18, 2018 at 3:50 AM Patrick Begou wrote:
I'm moving a large CFD code from GCC 4.8.5/OpenMPI 1.7.3 to GCC 7.3.0/OpenMPI
2.1.5, and with this latest config I have random segfaults.
Same binary, same server, same number of processes (16), same parameters for
the run. Sometimes it runs until the end, sometimes I get 'invalid memory
reference' errors.