Jeff,

so I ran a couple of tests today and I cannot confirm your statement. I wrote a simple test code where a process first sets an affinity mask and then spawns a number of threads. The threads modify the affinity mask, and every thread (including the master thread) prints out its affinity mask at the end.

With sched_getaffinity() and sched_setaffinity() it was indeed the case that the master thread ended up with the same affinity mask as the threads that it spawned. This means that the modification of the affinity mask by a new thread did in fact affect the master thread.

Executing the same code sequence using the libnuma calls, however, the master thread was not affected by the new affinity mask of the children. So clearly, libnuma must be doing something differently.

The catch, however, is that while coding the example using libnuma, I realized that libnuma only lets you assign a thread to a socket, not to a CPU/core; i.e., you have no control over which CPU of the socket your threads are running on, only over which socket.

Thanks
Edgar

Jeff Squyres wrote:
On Nov 20, 2008, at 9:43 AM, Ralph Castain wrote:

Interesting - learn something new every day! :-)

Sorry; I was out for the holiday last week, but a clarification: libnuma's man page says that numa_run_on_node*() binds a "thread", but it really should say "process". I looked at the code, and they're simply implementing a wrapper around sched_setaffinity(), which is a per-process binding. Not a per-thread binding.

On Nov 20, 2008, at 7:34 AM, Edgar Gabriel wrote:

if you look at recent versions of libnuma, there are two functions called numa_run_on_node() and numa_run_on_node_mask(), which allow thread-based assignments to CPUs....


--
Edgar Gabriel
Assistant Professor
Parallel Software Technologies Lab      http://pstl.cs.uh.edu
Department of Computer Science          University of Houston
Philip G. Hoffman Hall, Room 524        Houston, TX-77204, USA
Tel: +1 (713) 743-3857                  Fax: +1 (713) 743-3335
