[OMPI devel] [2.0.2rc3] build failure ppc64/-m32 and builtin-atomics

2017-01-05 Thread Paul Hargrove
I have a standard Linux/ppc64 system with gcc-4.8.3. I have configured the 2.0.2rc3 tarball with --prefix=... --enable-builtin-atomics \ CFLAGS=-m32 --with-wrapper-cflags=-m32 \ CXXFLAGS=-m32 --with-wrapper-cxxflags=-m32 \ FCFLAGS=-m32 --with-wrapper-fcflags=-m32 --disable-mpi-fortran (Yes, I
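For readability, here is the same configure invocation reassembled as a multi-line shell command. The flags are taken verbatim from the report above; $PREFIX merely stands in for the elided --prefix value.

    # Flags verbatim from Paul's report; $PREFIX stands in for the elided prefix.
    ./configure --prefix=$PREFIX --enable-builtin-atomics \
        CFLAGS=-m32   --with-wrapper-cflags=-m32 \
        CXXFLAGS=-m32 --with-wrapper-cxxflags=-m32 \
        FCFLAGS=-m32  --with-wrapper-fcflags=-m32 \
        --disable-mpi-fortran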

[OMPI devel] v2.0.2rc3 posted

2017-01-05 Thread Jeff Squyres (jsquyres)
In the usual place: https://www.open-mpi.org/software/ompi/v2.0/ The main driver for rc3 is that we think rc2 may have accidentally been made with older versions of the GNU Autotools, which may have led to https://github.com/open-mpi/ompi/issues/2665. -- Jeff Squyres jsquy...@cisco.com

Re: [OMPI devel] [2.0.2rc2] opal_fifo hang w/ --enable-osx-builtin-atomics

2017-01-05 Thread Howard Pritchard
Hi Paul, I opened issue 2666 to track this. Howard 2017-01-05 0:23 GMT-07:00 Paul Hargrove: > On Macs running Yosemite (OS X 10.10 w/ Xcode 7.1) and El Capitan (OS X > 10.11 w/ Xcode 8.1) I have configured with > CC=cc
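A minimal sketch of the kind of configure line being described, assuming only what the subject line and the truncated preview give us. The archived preview is cut off after "CC=cc"; nothing else about Paul's exact invocation is known from it.

    # Hedged reconstruction -- preview truncated after "CC=cc".
    # --enable-osx-builtin-atomics is taken from the subject line.
    ./configure --prefix=$PREFIX --enable-osx-builtin-atomics CC=cc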

Re: [OMPI devel] rdmacm and udcm for 2.0.1 and RoCE

2017-01-05 Thread Howard Pritchard
Hi Dave, Sorry for the delayed response. Anyway, you have to use rdmacm for connection management when using RoCE. However, with 2.0.1 and later, you have to specify per-peer QP info manually on the mpirun command line. Could you try rerunning with mpirun --mca btl_openib_receive_queues
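A hedged sketch of the kind of command line Howard is pointing at. btl_openib_cpc_include and btl_openib_receive_queues are real openib BTL MCA parameters, but the per-peer (P) queue values below are illustrative only, since the archived message is truncated; consult the Open MPI FAQ for values appropriate to your fabric.

    # Illustrative only: select rdmacm for connection management and
    # supply a per-peer (P) receive-queue spec manually, as RoCE requires.
    mpirun --mca btl_openib_cpc_include rdmacm \
           --mca btl_openib_receive_queues P,65536,256,192,128 \
           -np 2 ./my_app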

Re: [OMPI devel] [2.0.2rc2] FreeBSD-11 run failure

2017-01-05 Thread Howard Pritchard
Hi Paul, I opened https://github.com/open-mpi/ompi/issues/2665 to track this. Thanks for reporting this. Howard 2017-01-04 14:43 GMT-07:00 Paul Hargrove: > With the 2.0.2rc2 tarball on FreeBSD-11 (i386 or amd64) I am configuring > with: > --prefix=... CC=clang
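For context, a hedged reconstruction of the configure line; the archived preview is cut off after "CC=clang", so everything past that point is unknown.

    # Hedged reconstruction -- preview truncated after "CC=clang".
    ./configure --prefix=$PREFIX CC=clang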

Re: [OMPI devel] hwloc missing NUMANode object

2017-01-05 Thread r...@open-mpi.org
I can add a check to see if we have NUMA, and if not we can fall back to socket (if present) or just “none” > On Jan 5, 2017, at 1:39 AM, Gilles Gouaillardet wrote: > > Thanks Brice, > > Right now, the user-facing issue is that numa binding is requested, and
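One quick way to see what the proposed check would observe on a given node: list the hwloc topology and look for NUMANode objects. If none show up, NUMA binding cannot be honored and the fallback (socket, or no binding) would apply. This sketch uses hwloc's standard lstopo tool; the grep pattern is just an illustration.

    # If this prints the fallback message, hwloc sees no NUMANode objects
    # on this host -- exactly the situation that currently makes mpirun abort.
    lstopo --no-io | grep -i numanode || echo "no NUMANode objects detected"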

Re: [OMPI devel] hwloc missing NUMANode object

2017-01-05 Thread Gilles Gouaillardet
Thanks Brice, Right now, the user-facing issue is that numa binding is requested, and there is no numa, so mpirun aborts. But you have a good point: we could simply not bind at all in this case instead of aborting, since the numa node would have been the full machine, which would have been a

Re: [OMPI devel] hwloc missing NUMANode object

2017-01-05 Thread Brice Goglin
On 05/01/2017 07:07, Gilles Gouaillardet wrote: > Brice, > > things would be much easier if there were an HWLOC_OBJ_NODE object in > the topology. > > could you please consider backporting the relevant changes from master > into the v1.11 branch? > > Cheers, > > Gilles Hello. Unfortunately,