Re: [OMPI users] Problem with double shared library

2016-10-28 Thread Sean Ahern
Gilles, You described the problem exactly. I think we were able to nail down a solution to this one through judicious use of the -rpath $MPI_DIR/lib linker flag, allowing the runtime linker to properly find OpenMPI symbols at runtime. We're operational. Thanks for your help. -Sean -- Sean Ahern
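For readers hitting the same issue, a minimal sketch of such a link line, assuming GCC and an OpenMPI installation under $MPI_DIR (names and paths illustrative, not from the thread):

    # Embed the OpenMPI library directory into the executable's runtime
    # search path, so libmpi.so is found without LD_LIBRARY_PATH:
    MPI_DIR=/opt/openmpi
    gcc -o myapp myapp.o -L$MPI_DIR/lib -lmpi -Wl,-rpath,$MPI_DIR/lib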

Re: [OMPI users] MCA compilation later

2016-10-28 Thread r...@open-mpi.org
You don’t need any of the hardware - you just need the headers. Things like libfabric and libibverbs are all publicly available, and so you can build all that support even if you cannot run it on your machine. Once your customer installs the binary, the various plugins will check for their
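A hedged sketch of such a build, assuming the verbs and libfabric development headers/libraries are installed in standard locations (flags and paths illustrative):

    # Build verbs and libfabric support on a machine without the
    # hardware; only headers and libraries are needed at compile time.
    ./configure --prefix=/opt/openmpi --with-verbs --with-libfabric=/usr
    make -j8 && make install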

[OMPI users] MCA compilation later

2016-10-28 Thread Sean Ahern
There's been discussion on the OpenMPI list recently about static linking of OpenMPI with all of the desired MCAs in it. I've got the opposite question. I'd like to add MCAs later on to an already-compiled version of OpenMPI and am not quite sure how to do it. Let me summarize. We've got a
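Since OpenMPI discovers components by scanning its plugin directory at run time, one plausible approach is to build the missing component from a source tree of the same version and drop the resulting .so into the installed tree. A rough sketch, under those assumptions (paths and component chosen for illustration only):

    # Hypothetical example: adding the openib BTL to an existing
    # installation. Build from the *same* OpenMPI version as installed;
    # components are dlopen'ed from $prefix/lib/openmpi at run time.
    ./configure --prefix=/tmp/throwaway --with-verbs
    make    # builds mca_btl_openib.so among the other components
    cp ompi/mca/btl/openib/.libs/mca_btl_openib.so /opt/openmpi/lib/openmpi/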

Re: [OMPI users] Unable to compile OpenMPI 1.10.3 with CUDA

2016-10-28 Thread Sylvain Jeaugey
On 10/28/2016 10:33 AM, Craig tierney wrote:
> Sylvain,
>
> If I do not set --with-cuda, I get:
>
> configure:9964: result: no
> configure:10023: checking whether CU_POINTER_ATTRIBUTE_SYNC_MEMOPS is declared
> configure:10023: gcc -c -DNDEBUG conftest.c >&5
> conftest.c:83:19: fatal error: /cuda.h: No

[OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Mahesh Nanavalla
Hi all, I am using openmpi-1.10.3, cross-compiled for ARM (on x86_64, for OpenWrt Linux). The ARM build's libmpi.so.12.0.3 is 2.4 MB, but when I compile natively on x86_64 (Linux), libmpi.so.12.0.3 is only 990.2 KB. Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled for ARM?

[OMPI users] OpenMPI + InfiniBand

2016-10-28 Thread Sergei Hrushev
Hello, All! We have a problem with OpenMPI version 1.10.2 on a cluster with newly installed Mellanox InfiniBand adapters. OpenMPI was re-configured and re-compiled using: --with-verbs --with-verbs-libdir=/usr/lib. Our test MPI task returns proper results, but it seems OpenMPI continues to use
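One common way to verify which transport is actually selected (a generic diagnostic, not taken from the thread) is to force the openib BTL and turn up the selection verbosity; if openib cannot be used, this fails loudly instead of silently falling back to TCP:

    mpirun --mca btl openib,sm,self --mca btl_base_verbose 100 ./mpi_test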

Re: [OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Jeff Squyres (jsquyres)
On Oct 28, 2016, at 8:12 AM, Mahesh Nanavalla wrote:

> i have configured as below for arm
>
> ./configure --enable-orterun-prefix-by-default
> --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
> CC=arm-openwrt-linux-muslgnueabi-gcc

Re: [OMPI users] OpenMPI + InfiniBand

2016-10-28 Thread Sergei Hrushev
> Sergei, what does the command "ibv_devinfo" return please?
>
> I had a recent case like this, but on Qlogic hardware.
> Sorry if I am mixing things up.

The output of ibv_devinfo from the cluster's 1st node is:

$ ibv_devinfo -d mlx4_0
hca_id: mlx4_0
        transport:
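For reference, the fields usually checked in that listing are the port state and link layer; a generic example of what a healthy port looks like (not the actual output from this cluster):

    $ ibv_devinfo -d mlx4_0 | grep -E 'state|link_layer'
            state:              PORT_ACTIVE (4)
            link_layer:         InfiniBand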

Re: [OMPI users] OpenMPI + InfiniBand

2016-10-28 Thread John Hearns via users
Sorry - shoot down my idea. Over to someone else (me hides head in shame)

On 28 October 2016 at 11:28, Sergei Hrushev wrote:
> Sergei, what does the command "ibv_devinfo" return please?
>>
>> I had a recent case like this, but on Qlogic hardware.
>> Sorry if I am mixing

Re: [OMPI users] OpenMPI + InfiniBand

2016-10-28 Thread Gilles Gouaillardet
Sergei, is there any reason why you configure with --with-verbs-libdir=/usr/lib? As far as I understand, --with-verbs should be enough, and neither /usr/lib nor /usr/local/lib should ever be used on the configure command line (and by the way, are you running on a 32-bit system? should the 64-bit libs be in
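In other words, the suggestion amounts to something like this sketch, letting configure locate the verbs headers and libraries in their default (64-bit) locations:

    ./configure --with-verbs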

Re: [OMPI users] OpenMPI + InfiniBand

2016-10-28 Thread John Hearns via users
Sergei, what does the command "ibv_devinfo" return please?

I had a recent case like this, but on Qlogic hardware. Sorry if I am mixing things up.

On 28 October 2016 at 10:48, Sergei Hrushev wrote:
> Hello, All !
>
> We have a problem with OpenMPI version 1.10.2 on a

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-28 Thread Tom Rosmond
Gilles, Thanks! With my very rudimentary understanding of C pointers and C programming in general, I missed that translation subtlety. The revised program runs fine with a variety of optimizations and debug options on my test system. Tom R. On 10/27/2016 10:23 PM, Gilles Gouaillardet

Re: [OMPI users] Unable to compile OpenMPI 1.10.3 with CUDA

2016-10-28 Thread Craig tierney
Sylvain,

If I do not set --with-cuda, I get:

configure:9964: result: no
configure:10023: checking whether CU_POINTER_ATTRIBUTE_SYNC_MEMOPS is declared
configure:10023: gcc -c -DNDEBUG conftest.c >&5
conftest.c:83:19: fatal error: /cuda.h: No such file or directory
#include
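The bare "/cuda.h" in that error suggests the CUDA include prefix expanded to an empty string during the configure test. The usual fix is to pass the toolkit root explicitly; a sketch, assuming a standard installation path:

    # Point configure at the CUDA toolkit root so the test can find
    # $CUDA_HOME/include/cuda.h (path illustrative):
    ./configure --with-cuda=/usr/local/cuda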

Re: [OMPI users] Launching hybrid MPI/OpenMP jobs on a cluster: correct OpenMPI flags?

2016-10-28 Thread r...@open-mpi.org
FWIW: I’ll be presenting “Mapping, Ranking, and Binding - Oh My!” at the OMPI BoF meeting at SC’16, for those who can attend.

> On Oct 11, 2016, at 8:16 AM, Dave Love wrote:
>
> Wirawan Purwanto writes:
>
>> Instead of the scenario above, I was

Re: [OMPI users] what was the rationale behind rank mapping by socket?

2016-10-28 Thread r...@open-mpi.org
FWIW: I’ll be presenting “Mapping, Ranking, and Binding - Oh My!” at the OMPI BoF meeting at SC’16, for those who can attend. Will try to explain the rationale as well as the mechanics of the options.

> On Oct 11, 2016, at 8:09 AM, Dave Love wrote:
>
> Gilles

Re: [OMPI users] what was the rationale behind rank mapping by socket?

2016-10-28 Thread Bennet Fauber
Ralph, Alas, I will not be at SC16. I would like to hear and/or see what you present, so if it gets made available in an alternate format, I'd appreciate knowing where and how to get it. I am more and more coming to think that our cluster configuration is essentially designed to frustrate MPI

Re: [OMPI users] what was the rationale behind rank mapping by socket?

2016-10-28 Thread r...@open-mpi.org
Yes, I’ve been hearing a growing number of complaints about cgroups for that reason. Our mapping/ranking/binding options will work with the cgroup envelope, but it generally winds up with a result that isn’t what the user wanted or expected. We always post the OMPI BoF slides on our web site,
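For readers who cannot attend, the options being discussed are mpirun's mapping/ranking/binding flags; a small illustrative combination (values are arbitrary):

    # Map ranks round-robin across sockets, bind each rank to a core,
    # and print the resulting bindings so the layout can be verified:
    mpirun -np 8 --map-by socket --bind-to core --report-bindings ./a.out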

Re: [OMPI users] Reducing libmpi.so size....

2016-10-28 Thread Mahesh Nanavalla
Hi Gilles, Thanks for the reply. I have configured as below for ARM:

./configure --enable-orterun-prefix-by-default \
    --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" \
    CC=arm-openwrt-linux-muslgnueabi-gcc \
    CXX=arm-openwrt-linux-muslgnueabi-g++ \
    --host=arm-openwrt-linux-muslgnueabi
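A few size-reduction knobs commonly tried in this situation (untested suggestions, not from the thread): the cross build may simply retain debug/symbol information that the native build strips, and optimizing for size can help further:

    # Strip symbols with the cross toolchain's strip:
    arm-openwrt-linux-muslgnueabi-strip \
        /home/nmahesh/Workspace/ARM_MPI/openmpi/lib/libmpi.so.12.0.3
    # Or rebuild optimized for size (same configure flags as before):
    ./configure ... CFLAGS="-Os" CXXFLAGS="-Os"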