In the case of UCX, you can
mpirun --mca pml_base_verbose 10 ...
If the pml/ucx component is used, then your app will run over UCX.
If the pml/ob1 component is used, then you can
mpirun --mca btl_base_verbose 10 ...
btl/self should be used for a process's communications with itself.
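For example (a sketch; the process count and application name are placeholders), you can force the UCX PML and confirm it in the verbose output:

    mpirun -np 2 --mca pml ucx --mca pml_base_verbose 10 ./your_app

If pml/ucx cannot be selected, the job aborts instead of silently falling back to pml/ob1, which makes this a quick way to confirm your fabric is usable via UCX.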
> size = this%size_dim(this%gi)*this%size_dim(this%gj)*cs3
> if(this%is_exchange_off) then
>    this%bf(:,:,1:cs3) = cmplx(0.,0.)
> On Mon, Aug 19, 2019 at 3:25 PM Gilles Gouaillardet via users wrote:
>> One more thing ...
>> Your initial message mentioned a failure with gcc 8.2.0, but your
>> follow-up message mentions the LLVM compiler.
>> So which compiler did you use to build Open MPI that fails to build your test?
One more thing ...
Your initial message mentioned a failure with gcc 8.2.0, but your
follow-up message mentions the LLVM compiler.
So which compiler did you use to build Open MPI that fails to build your test?
On Mon, Aug 19, 2019 at 6:49 PM Gilles Gouaillardet wrote:
and your reproducer is?
On Mon, Aug 19, 2019 at 6:42 PM Sangam B via users wrote:
> OpenMPI is configured as follows:
> export CC=`which clang`
> export CXX=`which clang++`
> export FC=`which flang`
> export F90=`which flang`
Can you please post a full but minimal example that evidences the issue?
Also please post your Open MPI configure command line.
> On Aug 19, 2019, at 18:13, Sangam B via users wrote:
> I get the following error if the application is compiled
You need to
configure --with-pmi ...
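For example (a sketch; the PMI location is an assumption, point --with-pmi at wherever Slurm installed pmi.h and libpmi on your system):

    ./configure --with-pmi=/usr ...
    make -j 8 install

after which direct launches such as srun --mpi=pmi2 -n 4 ./your_app should work again.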
On August 8, 2019, at 11:28 PM, Jing Gong via users wrote:
Recently our Slurm system has been upgraded to 19.0.5. I tried to recompile
Open MPI v3.0 due to the bug reported in
Is the issue related to https://github.com/open-mpi/ompi/pull/4501 ?
you might have to configure with --enable-heterogeneous to evidence the issue.
On 8/2/2019 4:06 AM, Jeff Squyres (jsquyres) via users wrote:
I am able to replicate the issue on a
MPI_Isend() does not automatically free the buffer after it sends the message.
(it simply cannot do it since the buffer might be pointing to a global
variable or to the stack).
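To illustrate the correct pattern (a minimal C sketch, not from the original thread; run with at least 2 ranks), the send buffer must stay valid until the request completes, so you wait on the request before reusing or freeing it:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int *buf = malloc(4 * sizeof(int));
        if (rank == 0) {
            for (int i = 0; i < 4; i++) buf[i] = i;
            MPI_Request req;
            MPI_Isend(buf, 4, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* do not free or overwrite buf here: the send may still be in flight */
            MPI_Wait(&req, MPI_STATUS_IGNORE);
            free(buf); /* safe only after the request has completed */
        } else if (rank == 1) {
            MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            free(buf);
        }
        MPI_Finalize();
        return 0;
    }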
Can you please extract a reproducer from your program?
Out of curiosity, what if you insert a (useless)
that Podman is running rootless. I will continue to investigate, but now
I know where to look. Thanks!
On Fri, Jul 12, 2019 at 06:48:59PM +0900, Gilles Gouaillardet via users wrote:
Can you try
mpirun --mca btl_vader_copy_mechanism none ...
Please double check the MCA parameter name.
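For example (assuming an Open MPI version in which the shared-memory BTL is named vader), you can list the parameter names the btl/vader component actually accepts:

    ompi_info --param btl vader --level 9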
>> --> Process # 0 of 2 is alive. ->test1
>> --> Process # 1 of 2 is alive. ->test2
>> I need to tell Podman to mount /tmp from the host into the container, as
>> I am running rootless I also need to tell Podman to use the same user ID
the MPI application relies on some environment variables (they typically
start with OMPI_ and PMIX_).
The MPI application internally uses a PMIx client that must be able to
contact a PMIx server
(that is included in mpirun and the orted daemon(s) spawned on the remote node(s)).
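As an illustration of forwarding those variables into the container (a sketch, not from the original thread; the image name and application are placeholders, and podman's handling of "-e VAR" without a value should be verified for your version), one option is a per-rank wrapper script:

    #!/bin/sh
    # wrapper.sh: launched by mpirun once per rank, so the OMPI_* and PMIX_*
    # variables have already been set in its environment by mpirun/orted
    envargs=""
    for v in $(env | grep -E '^(OMPI_|PMIX_)' | cut -d= -f1); do
        envargs="$envargs -e $v"   # "-e VAR" with no value copies VAR from the host
    done
    exec podman run --rm $envargs my_image ./your_app

launched with mpirun -np 2 ./wrapper.sh, together with the /tmp mount discussed above so the containerized PMIx client can reach the server's session directory.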
the PSM2 shared memory segment name is set by the PSM2 library and
my understanding is that Open MPI has no control over it.
If you believe the root cause of the crash is related to a non-unique PSM2
memory segment name, I guess you should report this at
Thanks for the report,
this is indeed a bug I fixed at https://github.com/open-mpi/ompi/pull/6790
meanwhile, you can manually download and apply the patch at
On 7/3/2019 1:30 AM, Gyevi-Nagy László via users wrote:
I issued https://github.com/open-mpi/ompi/pull/6782 in order to fix this
(and the alltoallw variant as well)
Meanwhile, you can manually download and apply the patch at
On 6/28/2019 1:10 PM, Zhang,
It is mentioned in https://github.com/openucx/ucx/issues/3336 that UCX 1.6 might solve this
issue, so I tried the pre-release version to just check if it will.
All the best,
From: users on behalf of Gilles Gouaillardet via
Sent: Tuesday, June 25, 2019 11
UCX 1.6.0 is not yet officially released, and it seems Open MPI
(4.0.1) does not support it yet, and some porting is needed.
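For reference (a sketch; the install prefix is a placeholder), an external UCX is normally passed to configure as:

    ./configure --with-ucx=/path/to/ucx ...

but per the above, UCX 1.6.0 support may require porting work beyond that.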
On Tue, Jun 25, 2019 at 5:13 PM Passant A. Hafez via users wrote:
> I'm trying to build ompi 4.0.1 with external ucx 1.6.0 but
which version of Open MPI are you using? How many hosts are in your hostsfile?
The error message suggests this could be a bug within Open MPI, and a
potential workaround for you would be to try
mpirun -np 84 --hostfile hostsfile --mca routed direct ./openmpi_hello
You might also want to
what if you move some parameters to CPPFLAGS and CXXCPPFLAGS (see the
new configure command line below)
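Something along these lines (a sketch; the compilers and paths are placeholders):

    ./configure CC=clang CXX=clang++ FC=flang \
        CPPFLAGS="-I/path/to/dep/include" \
        CXXCPPFLAGS="-I/path/to/dep/include" \
        LDFLAGS="-L/path/to/dep/lib" ...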
The root cause is that configure cannot run a simple Fortran program
(see the relevant log below)
I suggest you
and then try again.
configure:44254: checking Fortran value of selected_int_kind(4)