Thank you very much for providing these useful suggestions! I may try
MVAPICH2 first. In my case, I transferred different data 2 times; each time
the size was 3.146 MB. I also tested problems of different sizes,
and none of them worked as expected.
Sorry for the slow reply; got caught up in SC'18 and the US Thanksgiving holiday.
Yes, you are exactly correct (I saw your GitHub issue/pull request about this
before I saw this email).
We will fix this in 4.0.1 in the very near future.
> On Nov 19, 2018, at 3:10 AM, Bert Wesarg wrote:
It does not appear to have any effect, at least not with 2.1.5.
On 11/26/18 9:17 PM, Nathan Hjelm via users wrote:
> Can you try configuring with --disable-builtin-atomics and see if that fixes
> the issue for you?
>> On Nov 26, 2018, at 9:11 PM, Orion Poplawski wrote:
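For anyone following the thread, a minimal sketch of how that flag would be passed to configure. The source directory and install prefix here are hypothetical; `--disable-builtin-atomics` is the flag Nathan suggests above:

```shell
# Rebuild Open MPI without the compiler's built-in atomics support.
# Paths are illustrative; adjust the prefix and source tree to your setup.
cd openmpi-3.1.3
./configure --prefix=$HOME/opt/openmpi-3.1.3 --disable-builtin-atomics
make -j8 && make install
```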
On Tue, 2018-11-13 at 21:57 -0600, gil...@rist.or.jp wrote:
> can you please compress and post your config.log ?
I didn't see the config.log in response to this. Maybe Ray and Gilles
took the discussion off list? As someone who might have introduced the
We have both psm and psm2 interfaces on our cluster. Since show_load_errors
has defaulted to true since the v2.x series, we have been setting
"mca_base_component_show_load_errors=0" in the openmpi-mca-params.conf file to
suppress the load errors. But in version v3.1.3, this only works for
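For reference, a minimal sketch of the parameter file described above (the file location varies by install; it is commonly found under the install prefix's etc/ directory or in a per-user MCA parameter file):

```conf
# openmpi-mca-params.conf
# Suppress component load errors, e.g. when only one of the
# psm/psm2 plugins can load on a given node.
mca_base_component_show_load_errors = 0
```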
Gilles submitted a patch for that, and I approved it a couple of days ago; I
*think* it has not been merged yet, however. This was a bug in the Open MPI
Lustre configure logic and should be fixed once that patch goes in.
Thanks a lot for providing such help to me! I may wait until the
technicians working on the university cluster finish their work, as they
agreed to help install OpenMPI 4.0.0 first (and maybe install MVAPICH2 with
the PGI compiler). Let me try these options first. I will let you know if
sorry for the late follow-up. The config.log was indeed sent offline.
Here is the relevant part :
configure:294375: checking for required lustre data structures
configure:294394: pgcc -O -DNDEBUG -Iyes/include -c conftest.c
PGC-S-0040-Illegal use of symbol, u_int64_t
I apologize. I did not realize that I did not reply to the list.
Going with the view that this is a PGI problem, I noticed they recently
released version 18.10. I had just installed 18.7 within the last couple
The problem is resolved in 18.10.
On 11/27/18 7:55 PM, Gilles wrote:
Sorry for the delay in replying; the SC'18 show and then the US Thanksgiving
holiday got in the way. More below.
> On Nov 16, 2018, at 10:50 PM, Weicheng Xue wrote:
> Hi Jeff,
> Thank you very much for your reply! I am now using a cluster at my
Hi all - I've been trying to debug a segfault in OpenMPI 3.1.2, and in the
process I noticed that 3.1.3 is out, so I thought I'd test it. However, with
3.1.3 the code (LAMMPS) hangs very early, in dealing with input. I'm running
16 tasks on a single 16 core node, with Infiniband (which it may