OK, that's a very old kernel on a very old POWER processor. It's
expected that hwloc doesn't get much topology information there, and
hence that OpenMPI cannot apply most binding policies.
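
If you still want some binding, hwloc does see the cores on your
machine, so mapping/binding by core (or disabling binding) should
still work. An untested sketch:

,----
| # Bind each rank to a core instead of the missing "package" level:
| mpirun --map-by core --bind-to core -np 4 ./mancha3D
| # Or disable binding entirely:
| mpirun --bind-to none -np 4 ./mancha3D
`----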

Brice



On 09/03/2017 16:12, Angel de Vicente wrote:
> Can this help? If you think any other information could be relevant,
> let me know.
>
> Cheers,
> Ángel
>
> ,----
> | cat /proc/cpuinfo
> | processor       : 0
> | cpu             : PPC970MP, altivec supported
> | clock           : 2297.700000MHz
> | revision        : 1.1 (pvr 0044 0101)
> |
> | [4 processors]
> |
> | timebase        : 14318000
> | machine         : CHRP IBM,8844-Z0C
> |
> | uname -a
> | Linux login1 2.6.16.60-perfctr-0.42.4-ppc64 #1 SMP Fri Aug 21 15:25:15 CEST 2009 ppc64 ppc64 ppc64 GNU/Linux
> |
> | lsb_release -a
> | Distributor ID: SUSE LINUX
> | Description:    SUSE Linux Enterprise Server 10 (ppc)
> | Release:        10
> `----
>
>
> On 9 March 2017 at 15:04, Brice Goglin <brice.gog...@inria.fr> wrote:
>
>     What's this machine made of? (processor, etc.)
>     What kernel are you running?
>
>     Getting no "socket" or "package" at all is quite rare these days.
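>
>     For instance, the output of these would help (assuming the tools
>     are available on the node):
>
>     ,----
>     | cat /proc/cpuinfo
>     | uname -a
>     `----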
>
>     Brice
>
>
>
>
>     On 09/03/2017 15:28, Angel de Vicente wrote:
>     > Hi again,
>     >
>     > thanks for your help. I installed the latest OpenMPI (2.0.2).
>     >
>     > lstopo output:
>     >
>     > ,----
>     > | lstopo --version
>     > | lstopo 1.11.2
>     > |
>     > | lstopo
>     > | Machine (7861MB)
>     > |   L2 L#0 (1024KB) + L1d L#0 (32KB) + L1i L#0 (64KB) + Core L#0 + PU L#0 (P#0)
>     > |   L2 L#1 (1024KB) + L1d L#1 (32KB) + L1i L#1 (64KB) + Core L#1 + PU L#1 (P#1)
>     > |   L2 L#2 (1024KB) + L1d L#2 (32KB) + L1i L#2 (64KB) + Core L#2 + PU L#2 (P#2)
>     > |   L2 L#3 (1024KB) + L1d L#3 (32KB) + L1i L#3 (64KB) + Core L#3 + PU L#3 (P#3)
>     > |   HostBridge L#0
>     > |     PCIBridge
>     > |       PCI 1014:028c
>     > |         Block L#0 "sda"
>     > |       PCI 14c1:8043
>     > |         Net L#1 "myri0"
>     > |     PCIBridge
>     > |       PCI 14e4:166b
>     > |         Net L#2 "eth0"
>     > |       PCI 14e4:166b
>     > |         Net L#3 "eth1"
>     > |     PCIBridge
>     > |       PCI 1002:515e
>     > `----
>     >
>     > I started with GCC 6.3.0, compiled OpenMPI 2.0.2 with it, and then
>     > built HDF5 1.10.0-patch1 on top. Our code compiles OK with that
>     > stack, and it runs fine without "mpirun":
>     >
>     > ,----
>     > | ./mancha3D
>     > | [mancha3D ASCII-art banner]
>     > |
>     > |  ./mancha3D should be given the name of a control file as argument.
>     > `----
>     >
>     >
>     >
>     >
>     > But it complains as before when run with mpirun:
>     >
>     > ,----
>     > | mpirun --map-by socket --bind-to socket -np 1 ./mancha3D
>     > |
>     > | --------------------------------------------------------------------------
>     > | No objects of the specified type were found on at least one node:
>     > |
>     > |   Type: Package
>     > |   Node: login1
>     > |
>     > | The map cannot be done as specified.
>     > |
>     > | --------------------------------------------------------------------------
>     > `----
>     >
>     >
>     > If I submit it directly with srun, the code runs, but not in
>     > parallel: two independent copies of the code are started:
>     >
>     > ,----
>     > | srun -n 2 ./mancha3D
>     > | [mancha3D ASCII-art banner]
>     > |
>     > |  should be given the name of a control file as argument.
>     > | [mancha3D ASCII-art banner]
>     > |
>     > |  should be given the name of a control file as argument.
>     > `----
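>     >
>     > Maybe this OpenMPI was not built with Slurm/PMI support, so each
>     > srun task starts as an independent singleton? Just a guess; I
>     > suppose something like this would show it (untested):
>     >
>     > ,----
>     > | ompi_info | grep -i slurm
>     > | ompi_info | grep -i pmi
>     > `----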
>     >
>     >
>     >
>     > Any ideas are welcome. Many thanks,
>

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
