Re: [OMPI devel] Will info keys ever be fixed?

2018-09-11 Thread Ralph H Castain
On MacOS with gcc 7.3


> On Sep 11, 2018, at 3:02 PM, Jeff Squyres (jsquyres) via devel wrote:
> 
> Ralph --
> 
> What OS / compiler are you using?
> 
> I just compiled on MacOS (first time in a while) and filed a PR and a few 
> issues about the warnings I found, but I cannot replicate these warnings.  I 
> also built with gcc 7.3.0 on RHEL; couldn't replicate the warnings.
> 
> On MacOS, I'm using the default Xcode compilers:
> 
> $ gcc --version
> Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
> Apple LLVM version 9.1.0 (clang-902.0.39.2)
> Target: x86_64-apple-darwin17.7.0
> Thread model: posix
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
> 
> 
> 
> 
> 
>> On Sep 10, 2018, at 6:57 PM, Ralph H Castain  wrote:
>> 
>> Still seeing this in today’s head of master:
>> 
>> info_subscriber.c: In function 'opal_infosubscribe_change_info':
>> ../../opal/util/info.h:112:31: warning: '%s' directive output may be truncated writing up to 36 bytes into a region of size 27 [-Wformat-truncation=]
>> #define OPAL_INFO_SAVE_PREFIX "_OMPI_IN_"
>>   ^
>> info_subscriber.c:268:13: note: in expansion of macro 'OPAL_INFO_SAVE_PREFIX'
>> OPAL_INFO_SAVE_PREFIX "%s", key);
>> ^
>> info_subscriber.c:268:36: note: format string is defined here
>> OPAL_INFO_SAVE_PREFIX "%s", key);
>>^~
>> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>> from ../../opal/class/opal_list.h:71,
>> from ../../opal/util/info_subscriber.h:30,
>> from info_subscriber.c:45:
>> info_subscriber.c:267:9: note: '__builtin_snprintf' output between 10 and 46 bytes into a destination of size 36
>> snprintf(modkey, OPAL_MAX_INFO_KEY,
>> ^
>> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>> from ../../opal/class/opal_list.h:71,
>> from ../../opal/util/info.h:30,
>> from info.c:46:
>> info.c: In function 'opal_info_dup_mode.constprop':
>> ../../opal/util/info.h:112:31: warning: '%s' directive output may be truncated writing up to 36 bytes into a region of size 28 [-Wformat-truncation=]
>> #define OPAL_INFO_SAVE_PREFIX "_OMPI_IN_"
>>   ^
>> info.c:212:22: note: in expansion of macro 'OPAL_INFO_SAVE_PREFIX'
>>  OPAL_INFO_SAVE_PREFIX "%s", pkey);
>>  ^
>> info.c:212:45: note: format string is defined here
>>  OPAL_INFO_SAVE_PREFIX "%s", pkey);
>> ^~
>> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>> from ../../opal/class/opal_list.h:71,
>> from ../../opal/util/info.h:30,
>> from info.c:46:
>> info.c:211:18: note: '__builtin_snprintf' output between 10 and 46 bytes into a destination of size 37
>>  snprintf(savedkey, OPAL_MAX_INFO_KEY+1,
>>  ^
>> 
>> 
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 

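As an aside for anyone following along: the pattern gcc is complaining about is easy to
reproduce in isolation. The sketch below uses hypothetical stand-in macros (SAVE_PREFIX,
MAX_KEY_LEN) rather than the real OPAL_INFO_SAVE_PREFIX / OPAL_MAX_INFO_KEY definitions,
but the arithmetic lines up with the diagnostic above: a 9-character prefix plus a key of
up to 36 characters needs up to 46 bytes (including the NUL), while the destination only
holds 36. With gcc 7 and -Wall (which enables -Wformat-truncation=1), the first helper
should draw the same kind of warning; the second shows one way to let the compiler prove
that no truncation is possible.

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the real macros in opal/util/info.h. */
#define SAVE_PREFIX "_OMPI_IN_"   /* 9 characters, like OPAL_INFO_SAVE_PREFIX */
#define MAX_KEY_LEN 36            /* plays the role of OPAL_MAX_INFO_KEY */

struct entry {
    char key[MAX_KEY_LEN + 1];    /* bounded length lets gcc reason about "%s" */
};

/* Warns: the format can produce up to sizeof(SAVE_PREFIX) - 1 + MAX_KEY_LEN
 * = 45 characters plus the NUL (46 bytes), but the destination holds only 36. */
static void save_key_warns(const struct entry *e)
{
    char modkey[MAX_KEY_LEN];
    snprintf(modkey, sizeof(modkey), SAVE_PREFIX "%s", e->key);
    printf("%s\n", modkey);
}

/* Quiet: size the destination for the worst case (prefix + key + NUL) so the
 * compiler can see that truncation cannot happen. */
static void save_key_quiet(const struct entry *e)
{
    char modkey[sizeof(SAVE_PREFIX) + MAX_KEY_LEN];   /* sizeof includes the NUL */
    snprintf(modkey, sizeof(modkey), SAVE_PREFIX "%s", e->key);
    printf("%s\n", modkey);
}

int main(void)
{
    struct entry e;
    strncpy(e.key, "a_fairly_long_info_key_name", sizeof(e.key) - 1);
    e.key[sizeof(e.key) - 1] = '\0';
    save_key_warns(&e);
    save_key_quiet(&e);
    return 0;
}

Whether the real fix is to enlarge the destination, shorten the bound, or accept the
truncation is a separate question; the sketch only illustrates why gcc 7 believes
truncation is possible.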

Re: [OMPI devel] Will info keys ever be fixed?

2018-09-11 Thread Jeff Squyres (jsquyres) via devel
Ralph --

What OS / compiler are you using?

I just compiled on MacOS (first time in a while) and filed a PR and a few 
issues about the warnings I found, but I cannot replicate these warnings.  I 
also built with gcc 7.3.0 on RHEL; couldn't replicate the warnings.

On MacOS, I'm using the default Xcode compilers:

$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.1.0 (clang-902.0.39.2)
Target: x86_64-apple-darwin17.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin





> On Sep 10, 2018, at 6:57 PM, Ralph H Castain  wrote:
> 
> Still seeing this in today’s head of master:
> 
> info_subscriber.c: In function 'opal_infosubscribe_change_info':
> ../../opal/util/info.h:112:31: warning: '%s' directive output may be truncated writing up to 36 bytes into a region of size 27 [-Wformat-truncation=]
>  #define OPAL_INFO_SAVE_PREFIX "_OMPI_IN_"
>^
> info_subscriber.c:268:13: note: in expansion of macro 'OPAL_INFO_SAVE_PREFIX'
>  OPAL_INFO_SAVE_PREFIX "%s", key);
>  ^
> info_subscriber.c:268:36: note: format string is defined here
>  OPAL_INFO_SAVE_PREFIX "%s", key);
> ^~
> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>  from ../../opal/class/opal_list.h:71,
>  from ../../opal/util/info_subscriber.h:30,
>  from info_subscriber.c:45:
> info_subscriber.c:267:9: note: '__builtin_snprintf' output between 10 and 46 bytes into a destination of size 36
>  snprintf(modkey, OPAL_MAX_INFO_KEY,
>  ^
> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>  from ../../opal/class/opal_list.h:71,
>  from ../../opal/util/info.h:30,
>  from info.c:46:
> info.c: In function 'opal_info_dup_mode.constprop':
> ../../opal/util/info.h:112:31: warning: '%s' directive output may be truncated writing up to 36 bytes into a region of size 28 [-Wformat-truncation=]
>  #define OPAL_INFO_SAVE_PREFIX "_OMPI_IN_"
>^
> info.c:212:22: note: in expansion of macro 'OPAL_INFO_SAVE_PREFIX'
>   OPAL_INFO_SAVE_PREFIX "%s", pkey);
>   ^
> info.c:212:45: note: format string is defined here
>   OPAL_INFO_SAVE_PREFIX "%s", pkey);
>  ^~
> In file included from /opt/local/lib/gcc7/gcc/x86_64-apple-darwin17/7.3.0/include-fixed/stdio.h:425:0,
>  from ../../opal/class/opal_list.h:71,
>  from ../../opal/util/info.h:30,
>  from info.c:46:
> info.c:211:18: note: '__builtin_snprintf' output between 10 and 46 bytes into a destination of size 37
>   snprintf(savedkey, OPAL_MAX_INFO_KEY+1,
>   ^
> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com


Re: [OMPI devel] mpirun error when not using span

2018-09-11 Thread Ralph H Castain
I believe the problem is actually a little different from what you described. The 
issue occurs whenever the number of procs multiplied by PE exceeds the number of 
cores on a node. It is caused by the fact that we aren’t considering the PE value 
when mapping processes - we only appear to be looking at it when binding. I’ll 
try to poke at it a bit.
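To make the arithmetic behind that concrete (the 36 cores per node are inferred from
David's binding output quoted below; the command lines are his):

# 16 ranks x PE=4 = 64 cores, which is more than the 36 cores on one node.
# Without span, the mapper places ranks without accounting for PE, so all 16
# ranks land on the first node and the binder then needs 64 cores there:
mpirun -n 16 --map-by numa:PE=4 --bind-to core --report-bindings true

# With span, ranks are spread across the whole allocation before binding, so
# each of the two nodes gets 8 ranks x 4 cores = 32 cores and binding fits:
mpirun -n 16 --map-by numa:PE=4,span --bind-to core --report-bindings true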


> On Sep 11, 2018, at 9:17 AM, Shrader, David Lee  wrote:
> 
> Here's the xml output from lstopo. Thank you for taking a look!
> David
> 
> From: devel on behalf of Ralph H Castain
> Sent: Monday, September 10, 2018 5:12 PM
> To: OpenMPI Devel
> Subject: Re: [OMPI devel] mpirun error when not using span
>  
> Could you please send the output from “lstopo --of xml foo.xml” (the file 
> foo.xml) so I can try to replicate here?
> 
> 
>> On Sep 4, 2018, at 12:35 PM, Shrader, David Lee wrote:
>> 
>> Hello,
>> 
>> I have run this issue by Howard, and he asked me to forward it on to the 
>> Open MPI devel mailing list. I get an error when trying to use PE=n with 
>> '--map-by numa' and not using span when using more than one node:
>> 
>> [dshrader@ba001 openmpi-3.1.2]$ mpirun -n 16 --map-by numa:PE=4 --bind-to 
>> core --report-bindings true
>> --
>> A request was made to bind to that would result in binding more
>> processes than cpus on a resource:
>> 
>>Bind to: CORE
>>Node:ba001
>>#processes:  2
>>#cpus:   1
>> 
>> You can override this protection by adding the "overload-allowed"
>> option to your binding directive.
>> --
>> 
>> The absolute values of the numbers passed to -n and PE don't really matter; 
>> the error pops up as soon as those numbers are combined in such a way that 
>> an MPI rank ends up on the second node.
>> 
>> If I add the "span" parameter, everything works as expected:
>> 
>> [dshrader@ba001 openmpi-3.1.2]$ mpirun -n 16 --map-by numa:PE=4,span 
>> --bind-to core --report-bindings true
>> [ba002.localdomain:58502] MCW rank 8 bound to socket 0[core 0[hwt 0]], 
>> socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: 
>> [B/B/B/B/./././././././././././././.][./././././././././././././././././.]
>> [ba002.localdomain:58502] MCW rank 9 bound to socket 0[core 4[hwt 0]], 
>> socket 0[core 5[hwt 0]], socket 0[core 6[hwt 0]], socket 0[core 7[hwt 0]]: 
>> [././././B/B/B/B/./././././././././.][./././././././././././././././././.]
>> [ba002.localdomain:58502] MCW rank 10 bound to socket 0[core 8[hwt 0]], 
>> socket 0[core 9[hwt 0]], socket 0[core 10[hwt 0]], socket 0[core 11[hwt 0]]: 
>> [././././././././B/B/B/B/./././././.][./././././././././././././././././.]
>> [ba002.localdomain:58502] MCW rank 11 bound to socket 0[core 12[hwt 0]], 
>> socket 0[core 13[hwt 0]], socket 0[core 14[hwt 0]], socket 0[core 15[hwt 
>> 0]]: 
>> [././././././././././././B/B/B/B/./.][./././././././././././././././././.]
>> [ba002.localdomain:58502] MCW rank 12 bound to socket 1[core 18[hwt 0]], 
>> socket 1[core 19[hwt 0]], socket 1[core 20[hwt 0]], socket 1[core 21[hwt 
>> 0]]: 
>> [./././././././././././././././././.][B/B/B/B/./././././././././././././.]
>> [ba002.localdomain:58502] MCW rank 13 bound to socket 1[core 22[hwt 0]], 
>> socket 1[core 23[hwt 0]], socket 1[core 24[hwt 0]], socket 1[core 25[hwt 
>> 0]]: 
>> [./././././././././././././././././.][././././B/B/B/B/./././././././././.]
>> [ba002.localdomain:58502] MCW rank 14 bound to socket 1[core 26[hwt 0]], 
>> socket 1[core 27[hwt 0]], socket 1[core 28[hwt 0]], socket 1[core 29[hwt 
>> 0]]: 
>> [./././././././././././././././././.][././././././././B/B/B/B/./././././.]
>> [ba002.localdomain:58502] MCW rank 15 bound to socket 1[core 30[hwt 0]], 
>> socket 1[core 31[hwt 0]], socket 1[core 32[hwt 0]], socket 1[core 33[hwt 
>> 0]]: 
>> [./././././././././././././././././.][././././././././././././B/B/B/B/./.]
>> [ba001.localdomain:11700] MCW rank 0 bound to socket 0[core 0[hwt 0]], 
>> socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: 
>> [B/B/B/B/./././././././././././././.][./././././././././././././././././.]
>> [ba001.localdomain:11700] MCW rank 1 bound to socket 0[core 4[hwt 0]], 
>> socket 0[core 5[hwt 0]], socket 0[core 6[hwt 0]], socket 0[core 7[hwt 0]]: 
>> [././././B/B/B/B/./././././././././.][./././././././././././././././././.]
>> [ba001.localdomain:11700] MCW rank 2 bound to socket 0[core 8[hwt 0]], 
>> socket 0[core 9[hwt 0]], socket 0[core 10[hwt 0]], socket 0[core 11[hwt 0]]: 
>> [././././././././B/B/B/B/./././././.][./././././././././././././././././.]
>> [ba001.localdomain:11700] MCW rank 3 bound to socket 0[core 12[hwt 0]], 
>> socket 0[core 13[hwt 0]], socket 0[core 14[hwt 0]], socket 0[core 15[hwt 
>> 0]]: 
>> [././././././././././././B/B/B/B/./.][./././././././././././././././././.]
>> [ba001.localdomain:11700] MCW rank 4 bound to socket 1[core 18[hwt 0]], 

Re: [OMPI devel] MTT Perl client

2018-09-11 Thread Jeff Squyres (jsquyres) via devel
Works for me.

> On Sep 11, 2018, at 12:35 PM, Ralph H Castain  wrote:
> 
> Hi folks
> 
> Per today’s telecon, I have moved the Perl MTT client into its own 
> repository: https://github.com/open-mpi/mtt-legacy. All the Python client 
> code has been removed from that repo.
> 
> The original MTT repo remains at https://github.com/open-mpi/mtt. I have a PR 
> to remove all the Perl client code and associated libs/modules from that 
> repo. We won’t commit it until people have had a chance to switch to the 
> mtt-legacy repo and verify that things still work for them.
> 
> Please let us know if mtt-legacy is okay or has a problem.
> 
> Thanks
> Ralph
> 


-- 
Jeff Squyres
jsquy...@cisco.com


Re: [OMPI devel] [OMPI commits] Git: open-mpi/ompi branch v4.0.x updated. v1.10.7-1907-g71d3afd

2018-09-11 Thread Jeff Squyres (jsquyres) via devel
On Sep 11, 2018, at 2:17 PM, Jeff Squyres (jsquyres) via devel wrote:
> 
>> diff --git a/VERSION b/VERSION
>> index 6fadf03..a9706a3 100644
>> --- a/VERSION
>> +++ b/VERSION
> 
>> +libmpi_mpifh_so_version=61:0:21
> 
> Just curious: any reason this one is 60 and all the others are 61?

Er -- I said that backwards: any reason this one is 61 and all the rest are 60?
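
For anyone reading along who hasn't dug into the VERSION file: the *_so_version fields
are libtool current:revision:age triples, so the answer determines the installed
shared-library names. A worked reading of the one value quoted above (the mapping is
standard libtool behavior on Linux, nothing OMPI-specific):

# libmpi_mpifh_so_version=61:0:21  means current=61, revision=0, age=21
#   shared-library major version = current - age = 61 - 21 = 40
#   so on Linux libtool installs libmpi_mpifh.so.40.21.0 with SONAME libmpi_mpifh.so.40
# A library whose triple started at 60 with the same age would map to major 39 instead.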

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI devel] [OMPI commits] Git: open-mpi/ompi branch v4.0.x updated. v1.10.7-1907-g71d3afd

2018-09-11 Thread Jeff Squyres (jsquyres) via devel
On Sep 9, 2018, at 4:29 PM, Gitdub  wrote:
> 
> diff --git a/VERSION b/VERSION
> index 6fadf03..a9706a3 100644
> --- a/VERSION
> +++ b/VERSION


> +libmpi_mpifh_so_version=61:0:21

Geoff --

Just curious: any reason this one is 60 and all the others are 61?

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI devel] MTT Perl client

2018-09-11 Thread Ralph H Castain
Hi folks

Per today’s telecon, I have moved the Perl MTT client into its own repository: 
https://github.com/open-mpi/mtt-legacy. All the Python client code has been 
removed from that repo.

The original MTT repo remains at https://github.com/open-mpi/mtt. I have a PR 
to remove all the Perl client code and associated libs/modules from that repo. 
We won’t commit it until people have had a chance to switch to the mtt-legacy 
repo and verify that things still work for them.

Please let us know if mtt-legacy is okay or has a problem.

Thanks
Ralph
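
For anyone switching, the move should just amount to cloning the new repository and
running the Perl client from there (per the note above, the layout is unchanged apart
from the removed Python code):

git clone https://github.com/open-mpi/mtt-legacy.git
cd mtt-legacy
# ...then invoke the Perl client from this checkout exactly as you did from
# the old mtt checkout, and report back if anything is missing.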


Re: [OMPI devel] mpirun error when not using span

2018-09-11 Thread Shrader, David Lee
Here's the xml output from lstopo. Thank you for taking a look!

David


From: devel on behalf of Ralph H Castain
Sent: Monday, September 10, 2018 5:12 PM
To: OpenMPI Devel
Subject: Re: [OMPI devel] mpirun error when not using span

Could you please send the output from “lstopo --of xml foo.xml” (the file 
foo.xml) so I can try to replicate here?


On Sep 4, 2018, at 12:35 PM, Shrader, David Lee <dshra...@lanl.gov> wrote:

Hello,

I have run this issue by Howard, and he asked me to forward it on to the Open 
MPI devel mailing list. I get an error when trying to use PE=n with '--map-by 
numa' and not using span when using more than one node:


[dshrader@ba001 openmpi-3.1.2]$ mpirun -n 16 --map-by numa:PE=4 --bind-to core 
--report-bindings true
--
A request was made to bind to that would result in binding more
processes than cpus on a resource:

   Bind to: CORE
   Node:ba001
   #processes:  2
   #cpus:   1

You can override this protection by adding the "overload-allowed"
option to your binding directive.
--

The absolute values of the numbers passed to -n and PE don't really matter; the 
error pops up as soon as those numbers are combined in such a way that an MPI 
rank ends up on the second node.

If I add the "span" parameter, everything works as expected:


[dshrader@ba001 openmpi-3.1.2]$ mpirun -n 16 --map-by numa:PE=4,span --bind-to 
core --report-bindings true
[ba002.localdomain:58502] MCW rank 8 bound to socket 0[core 0[hwt 0]], socket 
0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: 
[B/B/B/B/./././././././././././././.][./././././././././././././././././.]
[ba002.localdomain:58502] MCW rank 9 bound to socket 0[core 4[hwt 0]], socket 
0[core 5[hwt 0]], socket 0[core 6[hwt 0]], socket 0[core 7[hwt 0]]: 
[././././B/B/B/B/./././././././././.][./././././././././././././././././.]
[ba002.localdomain:58502] MCW rank 10 bound to socket 0[core 8[hwt 0]], socket 
0[core 9[hwt 0]], socket 0[core 10[hwt 0]], socket 0[core 11[hwt 0]]: 
[././././././././B/B/B/B/./././././.][./././././././././././././././././.]
[ba002.localdomain:58502] MCW rank 11 bound to socket 0[core 12[hwt 0]], socket 
0[core 13[hwt 0]], socket 0[core 14[hwt 0]], socket 0[core 15[hwt 0]]: 
[././././././././././././B/B/B/B/./.][./././././././././././././././././.]
[ba002.localdomain:58502] MCW rank 12 bound to socket 1[core 18[hwt 0]], socket 
1[core 19[hwt 0]], socket 1[core 20[hwt 0]], socket 1[core 21[hwt 0]]: 
[./././././././././././././././././.][B/B/B/B/./././././././././././././.]
[ba002.localdomain:58502] MCW rank 13 bound to socket 1[core 22[hwt 0]], socket 
1[core 23[hwt 0]], socket 1[core 24[hwt 0]], socket 1[core 25[hwt 0]]: 
[./././././././././././././././././.][././././B/B/B/B/./././././././././.]
[ba002.localdomain:58502] MCW rank 14 bound to socket 1[core 26[hwt 0]], socket 
1[core 27[hwt 0]], socket 1[core 28[hwt 0]], socket 1[core 29[hwt 0]]: 
[./././././././././././././././././.][././././././././B/B/B/B/./././././.]
[ba002.localdomain:58502] MCW rank 15 bound to socket 1[core 30[hwt 0]], socket 
1[core 31[hwt 0]], socket 1[core 32[hwt 0]], socket 1[core 33[hwt 0]]: 
[./././././././././././././././././.][././././././././././././B/B/B/B/./.]
[ba001.localdomain:11700] MCW rank 0 bound to socket 0[core 0[hwt 0]], socket 
0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: 
[B/B/B/B/./././././././././././././.][./././././././././././././././././.]
[ba001.localdomain:11700] MCW rank 1 bound to socket 0[core 4[hwt 0]], socket 
0[core 5[hwt 0]], socket 0[core 6[hwt 0]], socket 0[core 7[hwt 0]]: 
[././././B/B/B/B/./././././././././.][./././././././././././././././././.]
[ba001.localdomain:11700] MCW rank 2 bound to socket 0[core 8[hwt 0]], socket 
0[core 9[hwt 0]], socket 0[core 10[hwt 0]], socket 0[core 11[hwt 0]]: 
[././././././././B/B/B/B/./././././.][./././././././././././././././././.]
[ba001.localdomain:11700] MCW rank 3 bound to socket 0[core 12[hwt 0]], socket 
0[core 13[hwt 0]], socket 0[core 14[hwt 0]], socket 0[core 15[hwt 0]]: 
[././././././././././././B/B/B/B/./.][./././././././././././././././././.]
[ba001.localdomain:11700] MCW rank 4 bound to socket 1[core 18[hwt 0]], socket 
1[core 19[hwt 0]], socket 1[core 20[hwt 0]], socket 1[core 21[hwt 0]]: 
[./././././././././././././././././.][B/B/B/B/./././././././././././././.]
[ba001.localdomain:11700] MCW rank 5 bound to socket 1[core 22[hwt 0]], socket 
1[core 23[hwt 0]], socket 1[core 24[hwt 0]], socket 1[core 25[hwt 0]]: 
[./././././././././././././././././.][././././B/B/B/B/./././././././././.]
[ba001.localdomain:11700] MCW rank 6 bound to socket 1[core 26[hwt 0]], socket 
1[core 27[hwt 0]], socket 1[core 28[hwt 0]], socket 1[core 29[hwt 0]]: 
[./././././././././././././././././.][././././././././B/B/B/B/./././././.]

Re: [OMPI devel] Cannot find libverbs when without-verbs is used

2018-09-11 Thread Jeff Squyres (jsquyres) via devel
I notice from your configure log that you're building Mellanox MXM support.

Does that pull in libibverbs as a dependent library?
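
One quick way to check (assuming the MXM install is the /opt/mellanox/mxm tree named in
the configure line below, and that its shared library is installed as lib/libmxm.so):

# If MXM itself depends on libibverbs, then anything linking against MXM also
# needs libibverbs available at link time, which would explain hitting -libverbs
# even though Open MPI was configured --without-verbs:
ldd /opt/mellanox/mxm/lib/libmxm.so | grep -i verbs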


> On Sep 11, 2018, at 7:23 AM, Mijakovic, Robert wrote:
> 
> Hi guys,
> 
> I have configured Open MPI with --without-verbs, but the build fails with an 
> error from ld saying it cannot find -libverbs (i.e., libibverbs).
> 
> Configure:
> ==> 
> '/home/hpc/pr28fa/di52sut/spack_lrz/spack/var/spack/stage/openmpi-3.1.2-jg4gwt4cjfgu66vyq5pox7yavfwzri3m/openmpi-3.1.2/configure'
>  
> '--prefix=/home/hpc/pr28fa/di52sut/spack_lrz/spack/opt/x86_avx2/linux-sles12-x86_64/gcc-7.3.0/openmpi-3.1.2-jg4gwt4cjfgu66vyq5pox7yavfwzri3m'
>  '--enable-shared' 
> '--with-wrapper-ldflags=-Wl,-rpath,/lrz/sys/compilers/gcc/7.3.0/lib64' 
> '--enable-static' '--without-pmi' '--enable-mpi-cxx' 
> '--with-zlib=/home/hpc/pr28fa/di52sut/spack_lrz/spack/opt/x86_avx2/linux-sles12-x86_64/gcc-7.3.0/zlib-1.2.11-ajxhsmrlv2kvicpk3gdckgrroxr45mdl'
>  '--without-psm' '--without-psm2' '--without-verbs' 
> '--with-mxm=/opt/mellanox/mxm' '--without-ucx' '--without-libfabric' 
> '--without-alps' '--without-lsf' '--without-tm' '--without-slurm' 
> '--without-sge' '--without-loadleveler' '--disable-memchecker' 
> '--with-hwloc=/home/hpc/pr28fa/di52sut/spack_lrz/spack/opt/x86_avx2/linux-sles12-x86_64/gcc-7.3.0/hwloc-1.11.9-c4ktzih4jwg673rwwzgy4zvofd75tgvo'
>  '--disable-java' '--disable-mpi-java' '--without-cuda' 
> '--enable-cxx-exceptions’
> 
> Build:
>   CCLD libmpi.la
> /home/hpc/pr28fa/di52sut/spack_lrz/spack/opt/x86_avx2/linux-sles12-x86_64/gcc-7.3.0/binutils-2.31.1-ntosmj7bfrraftmq4jbvwbu6xnt3kbrz/bin/ld: cannot find -libverbs
> 
> Attach please find the complete log.
> 
> 
> Thank you for your time.
> 
> Best regards,
> Robert
> --
> Dr. Robert Mijaković
> 
> Leibniz Supercomputing Centre
> HPC Systems and Services
> Boltzmannstr. 1
> D-85748 Garching
> Room I.2.034
> Phone:  +49 89 35831 8734
> Fax: +49 89 35831 9700
> Mobile:+49 (157) 786 605 00
> robert.mijako...@lrz.de
> 


-- 
Jeff Squyres
jsquy...@cisco.com
