Hi Passant, list
This is an old problem with PGI.
There are many threads in the OpenMPI mailing list archives about this,
with workarounds.
The simplest is to use FC="pgf90 -noswitcherror".
Here are two out of many threads ... well, not pthreads! :)
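A minimal sketch of that workaround (the prefix path is a placeholder, not from the original report):

```shell
# Pass -noswitcherror inside FC so pgf90 ignores the GCC-style flags
# that libtool forwards to the Fortran compiler during the build.
./configure CC=pgcc CXX=pgc++ FC="pgf90 -noswitcherror" \
    --prefix=$HOME/openmpi-install
```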
Hello,
I'm having an error when trying to build OMPI 4.0.3 (also tried 4.1) with PGI
20.1
./configure CPP=cpp CC=pgcc CXX=pgc++ F77=pgf77 FC=pgf90 --prefix=$PREFIX \
    --with-ucx=$UCX_HOME --with-slurm --with-pmi=/opt/slurm/cluster/ibex/install \
    --with-cuda=$CUDATOOLKIT_HOME
The error occurs during the make step.
"Jeff Squyres (jsquyres)" writes:
> Good question. I've filed
> https://github.com/open-mpi/ompi/issues/8379 so that we can track
> this.
For the benefit of the list: I mis-remembered that osc=ucx was general
advice. The UCX docs just say you need to avoid the uct btl, which can
cause memory corruption.
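As a sketch, the components discussed here can be selected at run time with MCA parameters (the application name is a placeholder):

```shell
# Force the UCX one-sided (osc) and point-to-point (pml) components,
# and exclude the uct btl that the UCX docs warn against.
mpirun --mca osc ucx --mca pml ucx --mca btl ^uct ./my_rma_app
```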
Good question. I've filed https://github.com/open-mpi/ompi/issues/8379 so that
we can track this.
> On Jan 14, 2021, at 7:53 AM, Dave Love via users
> wrote:
>
> Why does 4.1 still not use the right defaults with UCX?
>
> Without specifying osc=ucx, IMB-RMA crashes like 4.0.5. I haven't
>
I will have a look at those tests. The recent fixes were performance fixes,
not correctness fixes.
Nevertheless, we used to pass the mpich tests, though I admit it is not a
testsuite that we run regularly; I will have a look at them. The atomicity
tests are expected to fail, since this is the one
Why does 4.1 still not use the right defaults with UCX?
Without specifying osc=ucx, IMB-RMA crashes like 4.0.5. I haven't
checked what else UCX says you must set for openmpi to avoid
memory corruption, at least, but I guess that won't be right either.
Users surely shouldn't have to explore
I tried the mpi-io tests from mpich 4.3 with openmpi 4.1 on the AC922 system
that I understand was used to fix ompio problems on Lustre. I'm puzzled
that I still see failures.
I don't know why there are disjoint sets in mpich's test/mpi/io and
src/mpi/romio/test, but I ran all the non-Fortran ones
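For context, a minimal MPI-IO round trip of the kind those suites exercise might look like this (illustrative sketch only; the file name is a placeholder, and it needs mpicc to compile and mpirun to launch):

```c
#include <assert.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "iotest.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* Each rank writes its rank number at its own offset... */
    int val = rank;
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int), &val, 1, MPI_INT,
                      MPI_STATUS_IGNORE);
    MPI_Barrier(MPI_COMM_WORLD);

    /* ...then reads it back and checks that it survived the round trip. */
    int check = -1;
    MPI_File_read_at(fh, (MPI_Offset)rank * sizeof(int), &check, 1, MPI_INT,
                     MPI_STATUS_IGNORE);
    assert(check == rank);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```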