Thanks for the info. I was thinking it could be some wrong interpretation of the
per-CPU core count.
I will try the newer library.
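A quick sanity check after the upgrade, to confirm which Open MPI actually ends up on your PATH (not specific to this problem):

mpirun --version

It should report 1.5.2 or later before you retest the --cpus-per-proc run.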
______________________________________________________________
Od: "Brice Goglin"
Komu: Open MPI Users
Dátum: 13.09.2011 13:28
Predmet: Re: [OMPI users] #cpus/socket
On 13/09/2011 18:59, Peter Kjellström wrote:
On Tuesday, September 13, 2011 09:07:32 AM nn3003 wrote:
Hello!
I am running the WRF model on 4x AMD 6172, which is a 12-core CPU. I use Open MPI
1.4.3 and libgomp 4.3.4. I have binaries compiled for both shared memory and
distributed memory (OpenMP and Open MPI). I use the following command:
mpirun -np 4 --cpus-per-proc 6 --report-bindings --bysocket wrf.exe
It works OK, and in top I see 4 wrf.exe processes, each with 6 threads, on CPUs
0-5, 12-17, 24-29, and 36-41. However, if I ask for 8 or more CPUs per process, e.g.
mpirun -np 4 --cpus-per-proc 12 --report-bindings --bysocket wrf.exe
I get this error:
Your job has requested more cpus per process(rank) than there
are cpus in a socket:
Cpus/rank: 8
#cpus/socket: 6
Why is that? There are 12 cores per socket in the AMD 6172.
In reality, a 12-core Magny-Cours is two 6-core dies on one socket. I'm guessing
that the topology code sees your 4x 12-core system as 8x 6-core.
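One quick way to check what the kernel itself exposes, as a sketch using plain Linux tools (nothing Open MPI specific; on Magny-Cours each 6-core die is its own NUMA node, so a 4-socket box should report 8 nodes):

numactl --hardware | grep available    # expect something like: available: 8 nodes (0-7)
grep "physical id" /proc/cpuinfo | sort -u    # should list 4 distinct physical packages

If numactl shows 8 nodes while /proc/cpuinfo shows only 4 physical ids, that supports the 8x 6-core interpretation.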
plpa-info reports 4 sockets with 6 cores each:
Number of processor sockets: 4
Number of processors online: 48
Number of processors offline: 0 (no topology information available)
Socket 0 (ID 0): 6 cores (max core ID: 5)
Socket 1 (ID 1): 6 cores (max core ID: 5)
Socket 2 (ID 2): 6 cores (max core ID: 5)
Socket 3 (ID 3): 6 cores (max core ID: 5)
This should be fixed with Open MPI 1.5.2+, which uses hwloc.
Brice
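For reference, hwloc ships an lstopo utility that prints the topology it detects, which is a handy way to verify the picture after upgrading. The expected shape below is illustrative rather than captured from a real 6172 box:

lstopo
# roughly, on 4x Opteron 6172:
#   Socket (x4)
#     NUMANode (x2 per socket, one per die)
#       Core (x6 per NUMANode)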