On Mon, 3 Dec 2018 19:41:25 +
"Hammond, Simon David via users" wrote:
> Hi Open MPI Users,
>
> Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0
> when using the Intel 2019 Update 1 compilers on our
> Skylake/OmniPath-1 cluster. The bug occurs when running the Github
> ma
I'm trying to replicate using the same compiler (icc 2019) on my OSX over
TCP and shared memory with no luck so far. So either the segfault is
something specific to OmniPath or to the memcpy implementation used on
Skylake. I tried to use the trace you sent, more specifically the
opal_datatype_cop
On Tue, 4 Dec 2018 09:15:13 -0500
George Bosilca wrote:
> I'm trying to replicate using the same compiler (icc 2019) on my OSX
> over TCP and shared memory with no luck so far. So either the
> segfault is something specific to OmniPath or to the memcpy
> implementation used on Skylake.
Note th
If you try to build somewhere out of tree, not in a subdirectory of the
source, the Fortran build is likely to fail because mpi-ext-module.F90
contains
include
'/openmpi-4.0.0/ompi/mpiext/pcollreq/mpif-h/mpiext_pcollreq_mpifh.h'
and that line can exceed the fixed-form line-length limit. It either needs to add (the
com
Hi Dave; thanks for reporting.
Yes, we've fixed this -- it should be included in 4.0.1.
https://github.com/open-mpi/ompi/pull/6121
If you care, you can try the nightly 4.0.x snapshot tarball -- it should
include this fix:
https://www.open-mpi.org/nightly/v4.0.x/
> On Dec 4, 2018, at
Thanks for the report.
As far as I am concerned, this is a bug in the IMB benchmark, and I
issued a PR to fix that
https://github.com/intel/mpi-benchmarks/pull/11
Meanwhile, you can manually download and apply the patch at
https://github.com/intel/mpi-benchmarks/pull/11.patch
Cheers,
Hi,
The memory manager of IMB (IMB_mem_manager.c) does not support the
MPI_Reduce_scatter operation. It allocates a send buffer that is too
small: sizeof(msg) bytes, whereas the operation requires
commsize * sizeof(msg) bytes.
There are two possible solutions:
1) Fix computations of recvcounts (as proposed by Gilles)
2)
Thanks Mikhail,
You have a good point.
With the current semantics used in the IMB benchmark, this cannot be
equivalent to an
MPI_Reduce() of N bytes followed by an MPI_Scatterv() of N bytes.
So this is indeed a semantic question:
what should an MPI_Reduce_scatter() of N bytes be equivalent to?