Re: [OMPI users] Hybrid OpenMPI / OpenMP programming

2012-03-02 Thread Ralph Castain
On Mar 2, 2012, at 11:52 AM, Paul Kapinos wrote:
> Hello Ralph,
> I have some questions on placement and -cpus-per-rank.
>
>> First, use the --cpus-per-rank option to separate the ranks from each other.
>> In other words, instead of --bind-to-socket -bysocket, you do: -bind-to-core -cpus-per-rank N

Re: [OMPI users] Hybrid OpenMPI / OpenMP programming

2012-03-02 Thread Paul Kapinos
Hello Ralph, I have some questions on placement and -cpus-per-rank.
> First, use the --cpus-per-rank option to separate the ranks from each other. In other words, instead of --bind-to-socket -bysocket, you do: -bind-to-core -cpus-per-rank N. This will take each rank and bind it to a unique set of N cores.
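A minimal sketch of the two invocations being contrasted here, assuming 2 ranks per node with 4 threads each on the 2-socket, 4-core nodes described in the original post (the executable name ./hybrid_app is a placeholder; if I recall the 1.5 man page correctly, -cpus-per-rank is a synonym for -cpus-per-proc):

  # original placement: one rank per socket, bound to the whole socket
  mpirun -np 2 -bysocket -bind-to-socket ./hybrid_app

  # suggested placement: each rank bound to its own exclusive set of 4 cores
  mpirun -np 2 -bind-to-core -cpus-per-rank 4 ./hybrid_app

With the second form, each rank's OpenMP threads stay inside that rank's 4-core set, so the ranks no longer compete for the same cores.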

Re: [OMPI users] Hybrid OpenMPI / OpenMP programming

2012-02-29 Thread Ralph Castain
It sounds like you are running into an issue with the Linux scheduler. I have an item to add an API "bind-this-thread-to-", but that won't be available until sometime in the future. A couple of things you could try in the meantime. First, use the --cpus-per-rank option to separate the ranks from each other.

[OMPI users] Hybrid OpenMPI / OpenMP programming

2012-02-29 Thread Auclair Francis
Dear Open-MPI users, Our code currently runs Open-MPI (1.5.4) with SLURM on a NUMA machine (2 sockets per node and 4 cores per socket) with basically two levels of implementation for Open-MPI: - at the lower level, n "Master" MPI processes (one per socket) are run simultaneously by dividing
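A minimal self-contained sketch of such a hybrid setup, assuming C with MPI_THREAD_FUNNELED and 4 OpenMP threads per "Master" rank (the file name, build line, and the sched_getcpu() binding check are illustrative, not taken from the original code):

  /* hybrid_hello.c -- print where each rank's OpenMP threads land.
   * Build: mpicc -fopenmp hybrid_hello.c -o hybrid_hello
   * Run:   mpirun -np 2 -bind-to-core -cpus-per-rank 4 ./hybrid_hello
   */
  #define _GNU_SOURCE
  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>
  #include <sched.h>

  int main(int argc, char **argv)
  {
      int provided, rank;

      /* FUNNELED: only the master thread of each rank makes MPI calls. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Each thread reports the core it is running on, which makes the
       * effect of the mpirun binding options directly visible. */
      #pragma omp parallel
      {
          printf("rank %d, thread %d of %d, on core %d\n",
                 rank, omp_get_thread_num(), omp_get_num_threads(),
                 sched_getcpu());
      }

      MPI_Finalize();
      return 0;
  }

If the threads of one rank wander across both sockets in this output, the binding is not taking effect and the scheduler issue described above is in play.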