Hi
We are using Open MPI version 1.2.8 (packaged with OFED-1.4). I am trying
to run HPL-2.0 (Linpack). We have two Intel quad-core CPUs in all our
servers (8 cores total), and all hosts in the hostfile have lines that
look like "10.100.0.227 slots=8max_slots=8".

Now when I use mpirun (even with --mca mpi_paffinity_alone 1), it does
not keep the affinity: the processes seem to gravitate towards the first
four cores (observed by running top and hitting "1"). I know I do have
the MCA paffinity component available.
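
For what it's worth, one other thing I can try on a compute node is to loop
over the xhpl PIDs and print each process's affinity list with taskset (just
a rough sketch; it assumes taskset from util-linux is installed on the nodes
and that the processes show up in pgrep as "xhpl"):

# for pid in $(pgrep xhpl); do taskset -cp $pid; done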

[root@devi DLR_WB_88]# ompi_info | grep paffinity
[devi.cisco.com:26178] mca: base: component_find: unable to open btl openib: file not found (ignored)
           MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.2.8)

The command line I am using is:

# mpirun -nolocal -np 896 -v  --mca mpi_paffinity_alone 1 -hostfile 
/mnt/apps/hosts/896_8slots /mnt/apps/bin/xhpl

Am I doing something wrong, and is there a way to confirm CPU affinity besides
hitting "1" in top?
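
(The only other check I can think of is reading the mask straight from /proc,
something like

# grep Cpus_allowed /proc/$(pgrep -o xhpl)/status

but I am not sure that is the recommended way to verify what Open MPI
actually did with the binding.)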


[root@devi DLR_WB_88]# mpirun -V
mpirun (Open MPI) 1.2.8

Report bugs to http://www.open-mpi.org/community/help/

-- 
Iftikhar Rathore
Technical Marketing Engineer
Server Access Virtualization BU.
Cisco Systems, Inc.

Phone:  +1 408 853 5322
Mobile: +1 636 675 2982

