Bert --
Sorry for the slow reply; got caught up in SC'18 and the US Thanksgiving
holiday.
Yes, you are exactly correct (I saw your GitHub issue/pull request about this
before I saw this email).
We will fix this in 4.0.1 in the very near future.
> On Nov 19, 2018, at 3:10 AM, Bert Wesarg
It does not appear to have any effect, at least not with 2.1.5.
Thanks.
On 11/26/18 9:17 PM, Nathan Hjelm via users wrote:
> Can you try configuring with --disable-builtin-atomics and see if that fixes
> the issue for you?
>
> -Nathan
>
>> On Nov 26, 2018, at 9:11 PM, Orion Poplawski wrote:
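For reference, the flag Nathan suggests is passed to Open MPI's configure script before rebuilding. A minimal sketch (the prefix path and job count are placeholders, not from this thread):

```sh
# Rebuild Open MPI without the compiler's built-in atomics,
# falling back to Open MPI's own assembly atomics instead.
./configure --prefix=$HOME/opt/openmpi --disable-builtin-atomics
make -j 8 && make install
```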
On Tue, 2018-11-13 at 21:57 -0600, gil...@rist.or.jp wrote:
> Raymond,
>
> can you please compress and post your config.log ?
I didn't see the config.log in response to this. Maybe Ray and Gilles
took the discussion off list? As someone who might have introduced the
offending configure-time
Hi,
We have both psm and psm2 interfaces on our cluster. Since show_load_errors has
defaulted to true from the v2.x series onward, we have been setting
"mca_base_component_show_load_errors=0" in the openmpi-mca-params.conf file to
suppress the load errors. But in version v3.1.3, this only works for
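For context, the setting described above is a one-line entry in the MCA parameter file; a sketch, assuming the standard install layout where the file lives under $prefix/etc:

```
# $prefix/etc/openmpi-mca-params.conf
# 0 = suppress "component load error" warnings at startup
mca_base_component_show_load_errors = 0
```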
Gilles submitted a patch for that, and I approved it a couple of days ago;
I *think* it has not been merged yet, however. This was a bug in the Open MPI
Lustre configure logic and should be fixed once this one is merged:
https://github.com/open-mpi/ompi/pull/6080
Thanks
Edgar
Hi Xin,
Thanks a lot for providing such help to me! I may wait until the
technicians working for the university cluster finish their work, as they
agreed to help install OpenMPI 4.0.0 first (and maybe install Mvapich2 with
PGI compiler). Let me try these options first. I will let you know if
Folks,
Sorry for the late follow-up. The config.log was indeed sent off-list.
Here is the relevant part:
configure:294375: checking for required lustre data structures
configure:294394: pgcc -O -DNDEBUG -Iyes/include -c conftest.c
PGC-S-0040-Illegal use of symbol, u_int64_t
Sorry for the delay in replying; the SC'18 show and then the US Thanksgiving
holiday got in the way. More below.
> On Nov 16, 2018, at 10:50 PM, Weicheng Xue wrote:
>
> Hi Jeff,
>
> Thank you very much for your reply! I am now using a cluster at my
> university
Hi all - I've been trying to debug a segfault in OpenMPI 3.1.2, and in the
process I noticed that 3.1.3 is out, so I thought I'd test it. However, with
3.1.3 the code (LAMMPS) hangs very early, in dealing with input. I'm running
16 tasks on a single 16 core node, with Infiniband (which it may