> On 30.10.2015 at 21:45, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
> 
> Oh, that's an interesting idea: perhaps the "bind to numa" is failing -- but 
> perhaps "bind to socket" would work.
> 
> Can you try:
> 
> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to numa -n 4 hostname
> /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to socket -n 4 hostname
> 
Both report the same error. Interestingly, -bind-to-socket works, but it does not 
give me the performance I expect for the PETSc benchmark.
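
A quick sanity check would be to add --report-bindings (it should be available in 
1.10) to see where the processes actually end up:

/opt/openmpi-1.10.0-gcc/bin/mpiexec --report-bindings -bind-to-socket -n 4 hostname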

I have a second Ubuntu 14.04 system (two old quad-core Xeons) and will build Open 
MPI there. If it works there, I can try to move the binaries.
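
The plan there is roughly the following (the prefix just mirrors the path used here 
so the tree can be copied over; the exact configure options may differ):

./configure --prefix=/opt/openmpi-1.10.0-gcc CC=gcc CXX=g++
make -j4 && make install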

Secondly, I will try Intel MPI.

Thanks for your help and efforts!


/opt/petsc-3.6.2$ /opt/openmpi-1.10.0-gcc/bin/mpiexec -bind-to-socket -n 4 
hostname
--------------------------------------------------------------------------
The following command line option and corresponding MCA parameter have
been deprecated and replaced as follows:

  Command line option:
    Deprecated:  --bind-to-socket
    Replacement: --bind-to socket

  Equivalent MCA parameter:
    Deprecated:  hwloc_base_bind_to_socket
    Replacement: hwloc_base_binding_policy=socket

The deprecated forms *will* disappear in a future version of Open MPI.
Please update to the new syntax.
--------------------------------------------------------------------------
leo
leo
leo
leo
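
For reference, the equivalent invocations with the replacement syntax from the 
warning would be the following (the first is the same --bind-to socket form that 
failed above):

/opt/openmpi-1.10.0-gcc/bin/mpiexec --bind-to socket -n 4 hostname
/opt/openmpi-1.10.0-gcc/bin/mpiexec --mca hwloc_base_binding_policy socket -n 4 hostname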
> 
> 
> 
>> On Oct 30, 2015, at 12:02 PM, Fabian Wein <fabian.w...@fau.de> wrote:
>> 
>> On 10/30/2015 02:48 PM, Dave Love wrote:
>>> Fabian Wein <fabian.w...@fau.de> writes:
>>> 
>>>> Is this a valid test?
>>>> 
>>>> 
>>>> /opt/openmpi-1.10.0-gcc/bin/mpiexec -n 4 hostname
>>>> leo
>>>> leo
>>>> leo
>>>> leo
>>> 
>>> So, unless you turned off the default binding -- to socket? check the
>>> mpirun man page -- it worked, but the "numa" level failed.  I don't know
>>> if that level has to exist, and there have been bugs in that area
>>> before.  Running lstopo might be useful, and checking that you're
>>> picking up the right hwloc dynamic library.
>> 
>> Sorry, I don't understand. Where is hwloc dynamically linked? I have now made 
>> sure there is only one version of libhwloc.so and libnuma.so on the system 
>> (there were some older copies around). Is there a way to check whether the lib 
>> has the feature?
>> 
>> mpiexec only links libnuma, which was actually the old version and is now the 
>> one I built from the numactl source myself.
>> 
>> ldd /opt/openmpi-1.10.0-gcc/bin/mpiexec
>>      linux-vdso.so.1 =>  (0x00007ffffdbaa000)
>>      libopen-rte.so.12 => /opt/openmpi-1.10.0-gcc/lib/libopen-rte.so.12 
>> (0x00007fbfdae58000)
>>      libopen-pal.so.13 => /opt/openmpi-1.10.0-gcc/lib/libopen-pal.so.13 
>> (0x00007fbfdab78000)
>>      libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
>> (0x00007fbfda958000)
>>      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbfda590000)
>>      libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
>> (0x00007fbfda380000)
>>      libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fbfda178000)
>>      librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fbfd9f70000)
>>      libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007fbfd9d68000)
>>      /lib64/ld-linux-x86-64.so.2 (0x00007fbfdb0d8000)
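>> 
>> Something like this might show whether an external libhwloc is pulled in by the 
>> Open MPI libraries at all (with the internal hwloc that Open MPI bundles by 
>> default nothing would show up; the library names below are just the ones from 
>> the ldd output above):
>> 
>> ldd /opt/openmpi-1.10.0-gcc/lib/libopen-pal.so.13 | grep hwloc
>> ldd /opt/openmpi-1.10.0-gcc/lib/libopen-rte.so.12 | grep hwloc
>> lstopo --version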
>> 
>>> 
>>> What happens if you try to bind to sockets, assuming you don't want to
>>> bind to cores?  [I don't understand why the default isn't to cores when
>>> you have only one process per core.]
>> 
>> bind-to cpu and bind-to socket give the same error as bind-to numa.
>> 
>> 
>> 
> 
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
