Can't you simply

export OMP_PROC_BIND=true


Assuming mpirun has the correct command line (e.g., it binds each task to x cores so that the x OpenMP threads can each be bound to their own core), every task is bound to a disjoint cpuset, so I would expect GOMP to bind the OpenMP threads within the given cpuset.

/* at least this is what the Intel runtime is doing */
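
For example, something along these lines (an illustrative command line for 4 threads per task; ./your_app is a placeholder, and the options are Open MPI 1.8-style):

mpirun --map-by socket:PE=4 --bind-to core -x OMP_NUM_THREADS=4 -x OMP_PROC_BIND=true ./your_app

Each task gets bound to 4 cores of its own socket, and the OpenMP runtime then pins one thread to each core within that cpuset.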


Cheers,


Gilles


On 6/29/2016 12:47 PM, Ralph Castain wrote:
Why don’t you have your application look at the OMPI_COMM_WORLD_LOCAL_RANK envar, and then use that to calculate the offset location for your threads (i.e., local rank 0 is on socket 0, local rank 1 is on socket 1, etc.)? You can then putenv the correct value of the GOMP envar.
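
A rough sketch of that approach (untested; the 4-cores-per-socket layout and the variable names are just for illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Open MPI exports this envar to every process it launches. */
    const char *lr = getenv("OMPI_COMM_WORLD_LOCAL_RANK");
    int local_rank = lr ? atoi(lr) : 0;

    /* Assume 4 cores per socket: local rank 0 -> cores 0-3,
       local rank 1 -> cores 4-7, and so on. */
    char cores[32];
    snprintf(cores, sizeof(cores), "%d-%d", 4 * local_rank, 4 * local_rank + 3);

    /* Must be set before the first OpenMP construct, because
       libgomp reads the envar when the runtime initializes. */
    setenv("GOMP_CPU_AFFINITY", cores, 1);

    /* ... MPI_Init() and the OpenMP parallel regions follow ... */
    return 0;
}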


On Jun 28, 2016, at 8:40 PM, Saliya Ekanayake <esal...@gmail.com> wrote:

Hi,

I am trying to do something like the attached diagram with Open MPI and OpenMP (threads).

[attached diagram: two MPI processes, P0 and P1, each running OpenMP threads (T0, T1, ...) that should be pinned to disjoint sets of cores]

I was trying to use the explicit thread affinity with GOMP_CPU_AFFINITY environment variable as described here (https://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fCPU_005fAFFINITY.html).

However, both processes P0 and P1 will read the same GOMP_CPU_AFFINITY and will place their threads on the same set of cores.

Is there a way to overcome this and pass a process-specific affinity scheme to OpenMP when running with Open MPI? For example, can I say T0 of P0 should be on Core 0, but T0 of P1 should be on Core 4?
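
In other words, the effect I want is as if each process saw its own value, e.g. (hypothetical values for two quad-core sockets):

P0: GOMP_CPU_AFFINITY="0 1 2 3"
P1: GOMP_CPU_AFFINITY="4 5 6 7"

but a single exported variable hands both processes the same list.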

P.S. I can manually achieve this within the program using *sched_setaffinity()* (something like the sketch below), but that's not portable.
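
Here is roughly what I mean (Linux-only, which is the problem; core_offset would be derived from the process rank):

#define _GNU_SOURCE
#include <sched.h>
#include <omp.h>

/* Pin each OpenMP thread of this process to its own core,
   starting at core_offset (e.g. 0 for P0, 4 for P1). */
void pin_threads(int core_offset)
{
    #pragma omp parallel
    {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(core_offset + omp_get_thread_num(), &mask);
        sched_setaffinity(0, sizeof(mask), &mask); /* 0 = calling thread */
    }
}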

Thank you,
Saliya

--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
