Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Jeff Squyres
On Feb 28, 2011, at 4:46 PM, David Singleton wrote: >> So: binding + pinning = binding (as long as you can ensure that the binding + pinning was atomic!). > Atomicity should not be a problem. Setting memory binding and pinning (e.g. mlock) are both actions on vma properties. They
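
For concreteness, a minimal sketch (not from the thread) of the binding + pinning combination David describes: mbind(2) sets a per-VMA policy, and mlock(2) then pins the same VMA's pages. Node 0 and the 1 MiB size are arbitrary assumptions; compile with -lnuma.

#define _GNU_SOURCE
#include <numaif.h>     /* mbind, MPOL_BIND -- libnuma wrapper, link with -lnuma */
#include <sys/mman.h>   /* mmap, mlock */
#include <stdio.h>

int main(void)
{
    size_t len = 1 << 20;   /* 1 MiB, arbitrary */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Bind the VMA to NUMA node 0; pages faulted in later follow this policy. */
    unsigned long nodemask = 1UL;   /* bit 0 => node 0 */
    if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0) != 0)
        perror("mbind");

    /* Pin the same VMA: the pages are faulted in (on node 0) and can no
     * longer be swapped out, so the binding cannot be lost to swap. */
    if (mlock(buf, len) != 0)
        perror("mlock");

    /* ... use buf ... */
    munlock(buf, len);
    munmap(buf, len);
    return 0;
}

Binding before locking matters here: mlock faults the pages in, and they land on the bound node only if the policy is already set on the VMA.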

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread David Singleton
On 03/01/2011 08:44 AM, Jeff Squyres wrote: > On Feb 28, 2011, at 4:39 PM, Brice Goglin wrote: >>> So: binding + pinning = binding (as long as you can ensure that the binding + pinning was atomic!). >> If the application swaps for real, do you really care about NUMA locality? It seems to me that

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread David Singleton
On 03/01/2011 08:30 AM, Jeff Squyres wrote: > So: binding + pinning = binding (as long as you can ensure that the binding + pinning was atomic!). Atomicity should not be a problem. Setting memory binding and pinning (e.g. mlock) are both actions on vma properties. They would normally happen

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Jeff Squyres
On Feb 28, 2011, at 4:39 PM, Brice Goglin wrote: >> So: binding + pinning = binding (as long as you can ensure that the binding + pinning was atomic!). > If the application swaps for real, do you really care about NUMA locality? It seems to me that the overhead of accessing distant NUMA

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Brice Goglin
On 28/02/2011 22:30, Jeff Squyres wrote: > This is really a pretty terrible statement we (the Linux community) are making: it's all about manycore these days, and a direct consequence of that is that it's all about NUMA. So you should bind your memory. > But that may not be enough.

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread David Singleton
On 03/01/2011 08:01 AM, Jeff Squyres wrote: > On Feb 28, 2011, at 3:47 PM, David Singleton wrote: >> I don't think you can avoid the problem. Unless it has changed very recently, Linux swapin_readahead is the main culprit in messing with NUMA locality on that platform. Faulting a single page
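
As a point of reference (an assumption about Linux tunables, not stated in the thread beyond the "8 or 16" figure): the swap-in readahead window is 2^vm.page-cluster pages, default 3, i.e. 8 pages. A trivial check:

#include <stdio.h>

int main(void)
{
    /* vm.page-cluster is the log2 of the number of pages that
     * swapin_readahead pulls in per fault (default 3 => 8 pages). */
    FILE *f = fopen("/proc/sys/vm/page-cluster", "r");
    int cluster;
    if (f && fscanf(f, "%d", &cluster) == 1)
        printf("swap readahead: %d pages per fault\n", 1 << cluster);
    if (f)
        fclose(f);
    return 0;
}

Lowering the value (as root) shrinks the readahead window, at the cost of slower swap-in.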

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Jeff Squyres
On Feb 28, 2011, at 4:18 PM, Brice Goglin wrote: > Ah good point! So Jeff has to hope that pages of different processes won't be highly mixed in the swap partition, good luck :) This is really a pretty terrible statement we (the Linux community) are making: it's all about manycore these days,

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Brice Goglin
On 28/02/2011 21:47, David Singleton wrote: > I don't think you can avoid the problem. Unless it has changed very recently, Linux swapin_readahead is the main culprit in messing with NUMA locality on that platform. Faulting a single page causes 8 or 16 or whatever contiguous pages to be

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Jeff Squyres
On Feb 28, 2011, at 3:47 PM, David Singleton wrote: > I don't think you can avoid the problem. Unless it has changed very recently, Linux swapin_readahead is the main culprit in messing with NUMA locality on that platform. Faulting a single page causes 8 or 16 or whatever contiguous

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread David Singleton
On 03/01/2011 05:51 AM, Jeff Squyres wrote: > On Feb 28, 2011, at 12:24 PM, Bernd Kallies wrote: >>> 1. I have no reason to doubt this person, but was wondering if someone could confirm this (for Linux). >> set_mempolicy(2) of recent 2.6 kernels says: Process policy is not remembered if the page is
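
A minimal sketch of the set_mempolicy(2) call in question (node 0 is an arbitrary assumption; link with -lnuma). The process policy applies at page-fault time, which is why, per the man page text Bernd quotes, it is not re-applied when a swapped-out page comes back in:

#include <numaif.h>   /* set_mempolicy, MPOL_BIND -- link with -lnuma */
#include <stdio.h>

int main(void)
{
    /* Process-wide policy: allocate all future pages on node 0. */
    unsigned long nodemask = 1UL;
    if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)) != 0)
        perror("set_mempolicy");

    /* Pages touched from here on land on node 0 -- but if one is swapped
     * out and later swapped back in, this process policy is "not
     * remembered", unlike a per-VMA policy set with mbind(2). */
    return 0;
}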

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Brice Goglin
On 28/02/2011 21:35, Jeff Squyres wrote: > On Feb 28, 2011, at 3:31 PM, Brice Goglin wrote: > That would seem to imply that I should always hwloc_set_area_membind() if I want it to persist beyond any potential future swapping. >> The kernel only looks at the
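
For reference, a sketch of the hwloc_set_area_membind() call Jeff mentions, written against today's hwloc 2.x API (the 2011-era 1.x signature differed: it took a cpuset, with a separate _nodeset variant). Node 0 and the buffer size are arbitrary assumptions; compile with -lhwloc:

#include <hwloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Bind the buffer's pages to NUMA node 0. This is a per-VMA policy,
     * which the kernel keeps in the vma rather than in the process. */
    hwloc_obj_t node = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NUMANODE, 0);
    if (node && hwloc_set_area_membind(topo, buf, len, node->nodeset,
                                       HWLOC_MEMBIND_BIND,
                                       HWLOC_MEMBIND_BYNODESET) != 0)
        perror("hwloc_set_area_membind");

    free(buf);
    hwloc_topology_destroy(topo);
    return 0;
}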

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Brice Goglin
On 28/02/2011 21:18, Jeff Squyres wrote: > On Feb 28, 2011, at 2:02 PM, Samuel Thibault wrote: >>> That would seem to imply that I should always hwloc_set_area_membind() if I want it to persist beyond any potential future swapping. >> I guess that could be right, but it

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Samuel Thibault
Jeff Squyres, on Mon 28 Feb 2011 21:18:52 +0100, wrote: > On Feb 28, 2011, at 2:02 PM, Samuel Thibault wrote: >> That would seem to imply that I should always hwloc_set_area_membind() if I want it to persist beyond any potential future swapping. > I guess that could be right,

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Samuel Thibault
Jeff Squyres, on Mon 28 Feb 2011 19:54:27 +0100, wrote: > On Feb 28, 2011, at 12:24 PM, Bernd Kallies wrote: >> 1. I have no reason to doubt this person, but was wondering if someone could confirm this (for Linux). > set_mempolicy(2) of recent 2.6 kernels says: Process

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Bernd Kallies
On Mon, 2011-02-28 at 11:51 -0500, Jeff Squyres wrote: > Someone just made a fairly disturbing statement to me in an Open MPI bug ticket: if you bind some memory to a particular NUMA node, and that memory later gets paged out, then it loses its memory binding information -- meaning that
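
One way to test the claim empirically (a sketch, not from the thread): move_pages(2) with a NULL target node list only queries, reporting the node each page currently sits on. Run it before and after forcing a swap cycle. Link with -lnuma:

#include <numaif.h>   /* move_pages -- link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    enum { NPAGES = 4 };
    char *buf = malloc(NPAGES * pagesz);
    void *pages[NPAGES];
    int status[NPAGES];

    for (int i = 0; i < NPAGES; i++) {
        buf[i * pagesz] = 1;          /* fault each page in */
        pages[i] = buf + i * pagesz;
    }

    /* nodes == NULL turns move_pages into a pure query: status[i] is the
     * NUMA node of pages[i], or a negative errno (e.g. -ENOENT if the
     * page is not present, i.e. currently swapped out). */
    if (move_pages(0, NPAGES, pages, NULL, status, 0) == 0)
        for (int i = 0; i < NPAGES; i++)
            printf("page %d on node %d\n", i, status[i]);
    else
        perror("move_pages");

    free(buf);
    return 0;
}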

Re: [hwloc-devel] Memory affinity

2011-02-28 Thread Brice Goglin
On 28/02/2011 17:51, Jeff Squyres wrote: > Someone just made a fairly disturbing statement to me in an Open MPI bug ticket: if you bind some memory to a particular NUMA node, and that memory later gets paged out, then it loses its memory binding information -- meaning that it can