Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-30 Thread Greg Kroah-Hartman
On Sat, Jul 28, 2012 at 11:26:09AM +0100, Mel Gorman wrote:
> On Sat, Jul 28, 2012 at 02:02:31AM -0300, Herton Ronaldo Krzesinski wrote:
> > > Thanks, I've merged this with the "original" in the tree, so all should
> > > be good now.
> > 
> > Thanks. I saw what seems another issue now on the patch too, sorry for
> > not noticing earlier: this backport is lacking the
> > write_seqcount_{begin,end} on set_mems_allowed for the case with
> > CONFIG_CPUSETS, like in the original patch:
> > 
> 
> Not my finest moment :(
> 
> Thanks
> 
> ---8<---
> cpuset: mm: reduce large amounts of memory barrier related damage v3 fix
> 
> Missing hunk from backport.
> 
> Reported-by: Herton Ronaldo Krzesinski 
> Signed-off-by: Mel Gorman 
> 
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 8f15695..7a7e5fd 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -113,7 +113,9 @@ static inline bool put_mems_allowed(unsigned int seq)
>  static inline void set_mems_allowed(nodemask_t nodemask)
>  {
>   task_lock(current);
> + write_seqcount_begin(&current->mems_allowed_seq);
>   current->mems_allowed = nodemask;
> + write_seqcount_end(&current->mems_allowed_seq);
>   task_unlock(current);
>  }

Added to the patch, thanks.

I think with this change, and the others requested, I'll do a -rc2 just
so that people can test it all again.

greg k-h
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-30 Thread Greg Kroah-Hartman
On Mon, Jul 30, 2012 at 08:37:31AM -0700, Greg Kroah-Hartman wrote:
> On Sat, Jul 28, 2012 at 02:02:31AM -0300, Herton Ronaldo Krzesinski wrote:
> > Thanks. I saw what seems another issue now on the patch too, sorry for
> > not noticing earlier: this backport is lacking the
> > write_seqcount_{begin,end} on set_mems_allowed for the case with
> > CONFIG_CPUSETS, like in the original patch:
> > 
> >  static inline void set_mems_allowed(nodemask_t nodemask)
> >  {
> > task_lock(current);
> > +   write_seqcount_begin(&current->mems_allowed_seq);
> > current->mems_allowed = nodemask;
> > +   write_seqcount_end(&current->mems_allowed_seq);
> > task_unlock(current);
> >  }
> > 
> 
> Ok, but that's not in a patch format that I can apply :(
> 
> Care to redo it so I can add it to the existing patch?

Oh nevermind, Mel already did it.

Time for more coffee...

greg k-h


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-30 Thread Greg Kroah-Hartman
On Sat, Jul 28, 2012 at 02:02:31AM -0300, Herton Ronaldo Krzesinski wrote:
> Thanks. I saw what seems another issue now on the patch too, sorry for
> not noticing earlier: this backport is lacking the
> write_seqcount_{begin,end} on set_mems_allowed for the case with
> CONFIG_CPUSETS, like in the original patch:
> 
>  static inline void set_mems_allowed(nodemask_t nodemask)
>  {
> task_lock(current);
> +   write_seqcount_begin(&current->mems_allowed_seq);
> current->mems_allowed = nodemask;
> +   write_seqcount_end(&current->mems_allowed_seq);
> task_unlock(current);
>  }
> 

Ok, but that's not in a patch format that I can apply :(

Care to redo it so I can add it to the existing patch?

thanks,

greg k-h


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-28 Thread Mel Gorman
On Sat, Jul 28, 2012 at 02:02:31AM -0300, Herton Ronaldo Krzesinski wrote:
> > Thanks, I've merged this with the "original" in the tree, so all should
> > be good now.
> 
> Thanks. I saw what seems another issue now on the patch too, sorry for
> not noticing earlier: this backport is lacking the
> write_seqcount_{begin,end} on set_mems_allowed for the case with
> CONFIG_CPUSETS, like in the original patch:
> 

Not my finest moment :(

Thanks

---8<---
cpuset: mm: reduce large amounts of memory barrier related damage v3 fix

Missing hunk from backport.

Reported-by: Herton Ronaldo Krzesinski 
Signed-off-by: Mel Gorman 

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 8f15695..7a7e5fd 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -113,7 +113,9 @@ static inline bool put_mems_allowed(unsigned int seq)
 static inline void set_mems_allowed(nodemask_t nodemask)
 {
task_lock(current);
> +   write_seqcount_begin(&current->mems_allowed_seq);
> current->mems_allowed = nodemask;
> +   write_seqcount_end(&current->mems_allowed_seq);
task_unlock(current);
 }
 


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Herton Ronaldo Krzesinski
On Fri, Jul 27, 2012 at 12:01:16PM -0700, Greg Kroah-Hartman wrote:
> On Fri, Jul 27, 2012 at 04:23:47PM +0100, Mel Gorman wrote:
> > > > --- a/mm/slub.c
> > > > +++ b/mm/slub.c
> > > > @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
> > > > struct zone *zone;
> > > > enum zone_type high_zoneidx = gfp_zone(flags);
> > > > struct page *page;
> > > > +   unsigned int cpuset_mems_cookie;
> > > >  
> > > > /*
> > > >  * The defrag ratio allows a configuration of the tradeoffs 
> > > > between
> > > > @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
> > > > get_cycles() % 1024 > 
> > > > s->remote_node_defrag_ratio)
> > > > return NULL;
> > > >  
> > > > -   get_mems_allowed();
> > > > -   zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > > > -   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > > > -   struct kmem_cache_node *n;
> > > > +   do {
> > > > +   cpuset_mems_cookie = get_mems_allowed();
> > > > +   zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > > > +   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > > > +   struct kmem_cache_node *n;
> > > >  
> > > > -   n = get_node(s, zone_to_nid(zone));
> > > > +   n = get_node(s, zone_to_nid(zone));
> > > >  
> > > > -   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > > > -   n->nr_partial > s->min_partial) {
> > > > -   page = get_partial_node(n);
> > > > -   if (page) {
> > > > -   put_mems_allowed();
> > > > -   return page;
> > > > +   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > > > +   n->nr_partial > s->min_partial) {
> > > > +   page = get_partial_node(n);
> > > > +   if (page) {
> > > > +   /*
> > > > +    * Return the object even if
> > > > +    * put_mems_allowed indicated that
> > > > +    * the cpuset mems_allowed was
> > > > +    * updated in parallel. It's a
> > > > +    * harmless race between the alloc
> > > > +    * and the cpuset update.
> > > > +    */
> > > > +   put_mems_allowed(cpuset_mems_cookie);
> > > > +   return page;
> > > > +   }
> > > > }
> > > > }
> > > > -   }
> > > > +   } while (!put_mems_allowed(cpuset_mems_cookie));
> > > > put_mems_allowed();
> > > 
> > > This doesn't build on 3.0, the backport left the stray put_mems_allowed
> > > above:
> > > 
> > > linux-stable/mm/slub.c: In function 'get_any_partial':
> > > linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> > > linux-stable/include/linux/cpuset.h:108:20: note: declared here
> > > 
> > 
> > That line should have been deleted and tests were based on slab. My
> > apologies.
> > 
> > ---8<---
> > cpuset: mm: Reduce large amounts of memory barrier related damage fix
> > 
> > linux-stable/mm/slub.c: In function 'get_any_partial':
> > linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> > linux-stable/include/linux/cpuset.h:108:20: note: declared here
> > 
> > Reported-by: Herton Ronaldo Krzesinski 
> > Signed-off-by: Mel Gorman 
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 00ccf2c..ae6e80e 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache *s, gfp_t flags)
> > }
> > }
> > } while (!put_mems_allowed(cpuset_mems_cookie));
> > -   put_mems_allowed();
> >  #endif
> > return NULL;
> >  }
> 
> Thanks, I've merged this with the "original" in the tree, so all should
> be good now.

Thanks. I saw what seems another issue now on the patch too, sorry for
not noticing earlier: this backport is lacking the
write_seqcount_{begin,end} on set_mems_allowed for the case with
CONFIG_CPUSETS, like in the original patch:

 static inline void set_mems_allowed(nodemask_t nodemask)
 {
task_lock(current);
+   write_seqcount_begin(&current->mems_allowed_seq);
current->mems_allowed = nodemask;
+   write_seqcount_end(&current->mems_allowed_seq);
task_unlock(current);
 }



> 
> greg k-h

Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Greg Kroah-Hartman
On Fri, Jul 27, 2012 at 04:23:47PM +0100, Mel Gorman wrote:
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
> > >   struct zone *zone;
> > >   enum zone_type high_zoneidx = gfp_zone(flags);
> > >   struct page *page;
> > > + unsigned int cpuset_mems_cookie;
> > >  
> > >   /*
> > >* The defrag ratio allows a configuration of the tradeoffs between
> > > @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
> > >   get_cycles() % 1024 > s->remote_node_defrag_ratio)
> > >   return NULL;
> > >  
> > > - get_mems_allowed();
> > > - zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > > - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > > - struct kmem_cache_node *n;
> > > + do {
> > > + cpuset_mems_cookie = get_mems_allowed();
> > > + zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > > + for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > > + struct kmem_cache_node *n;
> > >  
> > > - n = get_node(s, zone_to_nid(zone));
> > > + n = get_node(s, zone_to_nid(zone));
> > >  
> > > - if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > > - n->nr_partial > s->min_partial) {
> > > - page = get_partial_node(n);
> > > - if (page) {
> > > - put_mems_allowed();
> > > - return page;
> > > + if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > > + n->nr_partial > s->min_partial) {
> > > + page = get_partial_node(n);
> > > + if (page) {
> > > + /*
> > > +  * Return the object even if
> > > +  * put_mems_allowed indicated that
> > > +  * the cpuset mems_allowed was
> > > +  * updated in parallel. It's a
> > > +  * harmless race between the alloc
> > > +  * and the cpuset update.
> > > +  */
> > > + put_mems_allowed(cpuset_mems_cookie);
> > > + return page;
> > > + }
> > >   }
> > >   }
> > > - }
> > > + } while (!put_mems_allowed(cpuset_mems_cookie));
> > >   put_mems_allowed();
> > 
> > This doesn't build on 3.0, the backport left the stray put_mems_allowed
> > above:
> > 
> > linux-stable/mm/slub.c: In function 'get_any_partial':
> > linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> > linux-stable/include/linux/cpuset.h:108:20: note: declared here
> > 
> 
> That line should have been deleted and tests were based on slab. My
> apologies.
> 
> ---8<---
> cpuset: mm: Reduce large amounts of memory barrier related damage fix
> 
> linux-stable/mm/slub.c: In function 'get_any_partial':
> linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> linux-stable/include/linux/cpuset.h:108:20: note: declared here
> 
> Reported-by: Herton Ronaldo Krzesinski 
> Signed-off-by: Mel Gorman 
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 00ccf2c..ae6e80e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache *s, gfp_t flags)
>   }
>   }
>   } while (!put_mems_allowed(cpuset_mems_cookie));
> - put_mems_allowed();
>  #endif
>   return NULL;
>  }

Thanks, I've merged this with the "original" in the tree, so all should
be good now.

greg k-h


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Mel Gorman
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
> > struct zone *zone;
> > enum zone_type high_zoneidx = gfp_zone(flags);
> > struct page *page;
> > +   unsigned int cpuset_mems_cookie;
> >  
> > /*
> >  * The defrag ratio allows a configuration of the tradeoffs between
> > @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
> > get_cycles() % 1024 > s->remote_node_defrag_ratio)
> > return NULL;
> >  
> > -   get_mems_allowed();
> > -   zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > -   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > -   struct kmem_cache_node *n;
> > +   do {
> > +   cpuset_mems_cookie = get_mems_allowed();
> > +   zonelist = node_zonelist(slab_node(current->mempolicy), flags);
> > +   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> > +   struct kmem_cache_node *n;
> >  
> > -   n = get_node(s, zone_to_nid(zone));
> > +   n = get_node(s, zone_to_nid(zone));
> >  
> > -   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > -   n->nr_partial > s->min_partial) {
> > -   page = get_partial_node(n);
> > -   if (page) {
> > -   put_mems_allowed();
> > -   return page;
> > +   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
> > +   n->nr_partial > s->min_partial) {
> > +   page = get_partial_node(n);
> > +   if (page) {
> > +   /*
> > +* Return the object even if
> > +* put_mems_allowed indicated that
> > +* the cpuset mems_allowed was
> > +* updated in parallel. It's a
> > +* harmless race between the alloc
> > +* and the cpuset update.
> > +*/
> > +   put_mems_allowed(cpuset_mems_cookie);
> > +   return page;
> > +   }
> > }
> > }
> > -   }
> > +   } while (!put_mems_allowed(cpuset_mems_cookie));
> > put_mems_allowed();
> 
> This doesn't build on 3.0, the backport left the stray put_mems_allowed
> above:
> 
> linux-stable/mm/slub.c: In function 'get_any_partial':
> linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
> linux-stable/include/linux/cpuset.h:108:20: note: declared here
> 

That line should have been deleted and tests were based on slab. My
apologies.

---8<---
cpuset: mm: Reduce large amounts of memory barrier related damage fix

linux-stable/mm/slub.c: In function 'get_any_partial':
linux-stable/mm/slub.c:1510:2: error: too few arguments to function 'put_mems_allowed'
linux-stable/include/linux/cpuset.h:108:20: note: declared here

Reported-by: Herton Ronaldo Krzesinski 
Signed-off-by: Mel Gorman 

diff --git a/mm/slub.c b/mm/slub.c
index 00ccf2c..ae6e80e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache *s, gfp_t flags)
}
}
} while (!put_mems_allowed(cpuset_mems_cookie));
-   put_mems_allowed();
 #endif
return NULL;
 }


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Herton Ronaldo Krzesinski
On Thu, Jul 26, 2012 at 02:29:57PM -0700, Greg Kroah-Hartman wrote:
> From: Greg KH 
> 
> 3.0-stable review patch.  If anyone has any objections, please let me know.
> 
> --
> 
> From: Mel Gorman 
> 
> commit cc9a6c8776615f9c194ccf0b63a0aa5628235545 upstream.
> 
> Stable note:  Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
>   expensive and severely impacted page allocator performance. This
>   is part of a series of patches that reduce page allocator overhead.
> 
> Commit c0ff7453bb5c ("cpuset,mm: fix no node to alloc memory when
> changing cpuset's mems") wins a super prize for the largest number of
> memory barriers entered into fast paths for one commit.
> 
> [get|put]_mems_allowed is incredibly heavy with pairs of full memory
> barriers inserted into a number of hot paths.  This was detected while
> investigating at large page allocator slowdown introduced some time
> after 2.6.32.  The largest portion of this overhead was shown by
> oprofile to be at an mfence introduced by this commit into the page
> allocator hot path.
> 
> For extra style points, the commit introduced the use of yield() in an
> implementation of what looks like a spinning mutex.
> 
> This patch replaces the full memory barriers on both read and write
> sides with a sequence counter with just read barriers on the fast path
> side.  This is much cheaper on some architectures, including x86.  The
> main bulk of the patch is the retry logic if the nodemask changes in a
> manner that can cause a false failure.
> 
> While updating the nodemask, a check is made to see if a false failure
> is a risk.  If it is, the sequence number gets bumped and parallel
> allocators will briefly stall while the nodemask update takes place.
> 
> In a page fault test microbenchmark, oprofile samples from
> __alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
> actual results were
> 
>  3.3.0-rc3  3.3.0-rc3
>  rc3-vanilla   nobarrier-v2r1
> Clients   1 UserTime   0.07 (  0.00%)   0.08 (-14.19%)
> Clients   2 UserTime   0.07 (  0.00%)   0.07 (  2.72%)
> Clients   4 UserTime   0.08 (  0.00%)   0.07 (  3.29%)
> Clients   1 SysTime0.70 (  0.00%)   0.65 (  6.65%)
> Clients   2 SysTime0.85 (  0.00%)   0.82 (  3.65%)
> Clients   4 SysTime1.41 (  0.00%)   1.41 (  0.32%)
> Clients   1 WallTime   0.77 (  0.00%)   0.74 (  4.19%)
> Clients   2 WallTime   0.47 (  0.00%)   0.45 (  3.73%)
> Clients   4 WallTime   0.38 (  0.00%)   0.37 (  1.58%)
> Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
> Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
> Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
> Clients   1 Flt/sec  495161.39 (  0.00%) 517292.87 (  4.47%)
> Clients   2 Flt/sec  820325.95 (  0.00%) 850289.77 (  3.65%)
> Clients   4 Flt/sec  1020068.93 (  0.00%) 1022674.06 (  0.26%)
> MMTests Statistics: duration
> Sys Time Running Test (seconds)         135.68    132.17
> User+Sys Time Running Test (seconds)    164.2     160.13
> Total Elapsed Time (seconds)            123.46    120.87
> 
> The overall improvement is small but the System CPU time is much
> improved and roughly in correlation to what oprofile reported (these
> performance figures are without profiling so skew is expected).  The
> actual number of page faults is noticeably improved.
> 
> For benchmarks like kernel builds, the overall benefit is marginal but
> the system CPU time is slightly reduced.
> 
> To test the actual bug the commit fixed I opened two terminals.  The
> first ran within a cpuset and continually ran a small program that
> faulted 100M of anonymous data.  In a second window, the nodemask of the
> cpuset was continually randomised in a loop.
> 
> Without the commit, the program would fail every so often (usually
> within 10 seconds) and obviously with the commit everything worked fine.
> With this patch applied, it also worked fine so the fix should be
> functionally equivalent.
> 
> Signed-off-by: Mel Gorman 
> Cc: Miao Xie 
> Cc: David Rientjes 
> Cc: Peter Zijlstra 
> Cc: Christoph Lameter 
> Signed-off-by: Andrew Morton 
> Signed-off-by: Linus Torvalds 
> Signed-off-by: Mel Gorman 
> Signed-off-by: Greg Kroah-Hartman 
> 
> 
> ---
>  include/linux/cpuset.h|   49 ++
>  include/linux/init_task.h |8 +++
>  include/linux/sched.h |2 -
>  kernel/cpuset.c   |   43 +++-
>  kernel/fork.c |3 ++
>  mm/filemap.c  |   11 ++
>  mm/hugetlb.c  |   15 ++
>  mm/mempolicy.c|   28 +++---
>  mm/page_alloc.c   |   33 +-
>  mm/slab.c |   13 

Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Herton Ronaldo Krzesinski
On Thu, Jul 26, 2012 at 02:29:57PM -0700, Greg Kroah-Hartman wrote:
 From: Greg KH gre...@linuxfoundation.org
 
 3.0-stable review patch.  If anyone has any objections, please let me know.
 
 --
 
 From: Mel Gorman mgor...@suse.de
 
 commit cc9a6c8776615f9c194ccf0b63a0aa5628235545 upstream.
 
 Stable note:  Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
   expensive and severely impacted page allocator performance. This
   is part of a series of patches that reduce page allocator overhead.
 
 Commit c0ff7453bb5c (cpuset,mm: fix no node to alloc memory when
 changing cpuset's mems) wins a super prize for the largest number of
 memory barriers entered into fast paths for one commit.
 
 [get|put]_mems_allowed is incredibly heavy with pairs of full memory
 barriers inserted into a number of hot paths.  This was detected while
 investigating at large page allocator slowdown introduced some time
 after 2.6.32.  The largest portion of this overhead was shown by
 oprofile to be at an mfence introduced by this commit into the page
 allocator hot path.
 
 For extra style points, the commit introduced the use of yield() in an
 implementation of what looks like a spinning mutex.
 
 This patch replaces the full memory barriers on both read and write
 sides with a sequence counter with just read barriers on the fast path
 side.  This is much cheaper on some architectures, including x86.  The
 main bulk of the patch is the retry logic if the nodemask changes in a
 manner that can cause a false failure.
 
 While updating the nodemask, a check is made to see if a false failure
 is a risk.  If it is, the sequence number gets bumped and parallel
 allocators will briefly stall while the nodemask update takes place.
 
 In a page fault test microbenchmark, oprofile samples from
 __alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
 actual results were
 
  3.3.0-rc3  3.3.0-rc3
  rc3-vanillanobarrier-v2r1
 Clients   1 UserTime   0.07 (  0.00%)   0.08 (-14.19%)
 Clients   2 UserTime   0.07 (  0.00%)   0.07 (  2.72%)
 Clients   4 UserTime   0.08 (  0.00%)   0.07 (  3.29%)
 Clients   1 SysTime0.70 (  0.00%)   0.65 (  6.65%)
 Clients   2 SysTime0.85 (  0.00%)   0.82 (  3.65%)
 Clients   4 SysTime1.41 (  0.00%)   1.41 (  0.32%)
 Clients   1 WallTime   0.77 (  0.00%)   0.74 (  4.19%)
 Clients   2 WallTime   0.47 (  0.00%)   0.45 (  3.73%)
 Clients   4 WallTime   0.38 (  0.00%)   0.37 (  1.58%)
 Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
 Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
 Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
 Clients   1 Flt/sec  495161.39 (  0.00%) 517292.87 (  4.47%)
 Clients   2 Flt/sec  820325.95 (  0.00%) 850289.77 (  3.65%)
 Clients   4 Flt/sec  1020068.93 (  0.00%) 1022674.06 (  0.26%)
 MMTests Statistics: duration
 Sys Time Running Test (seconds) 135.68132.17
 User+Sys Time Running Test (seconds) 164.2160.13
 Total Elapsed Time (seconds)123.46120.87
 
 The overall improvement is small but the System CPU time is much
 improved and roughly in correlation to what oprofile reported (these
 performance figures are without profiling so skew is expected).  The
 actual number of page faults is noticeably improved.
 
 For benchmarks like kernel builds, the overall benefit is marginal but
 the system CPU time is slightly reduced.
 
 To test the actual bug the commit fixed I opened two terminals.  The
 first ran within a cpuset and continually ran a small program that
 faulted 100M of anonymous data.  In a second window, the nodemask of the
 cpuset was continually randomised in a loop.
 
 Without the commit, the program would fail every so often (usually
 within 10 seconds) and obviously with the commit everything worked fine.
 With this patch applied, it also worked fine so the fix should be
 functionally equivalent.
 
 Signed-off-by: Mel Gorman mgor...@suse.de
 Cc: Miao Xie mi...@cn.fujitsu.com
 Cc: David Rientjes rient...@google.com
 Cc: Peter Zijlstra a.p.zijls...@chello.nl
 Cc: Christoph Lameter c...@linux.com
 Signed-off-by: Andrew Morton a...@linux-foundation.org
 Signed-off-by: Linus Torvalds torva...@linux-foundation.org
 Signed-off-by: Mel Gorman mgor...@suse.de
 Signed-off-by: Greg Kroah-Hartman gre...@linuxfoundation.org
 
 
 ---
  include/linux/cpuset.h|   49 
 ++
  include/linux/init_task.h |8 +++
  include/linux/sched.h |2 -
  kernel/cpuset.c   |   43 +++-
  kernel/fork.c |3 ++
  mm/filemap.c  |   11 ++
  mm/hugetlb.c  |   15 ++
  mm/mempolicy.c|   28 

Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Mel Gorman
  --- a/mm/slub.c
  +++ b/mm/slub.c
  @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
  struct zone *zone;
  enum zone_type high_zoneidx = gfp_zone(flags);
  struct page *page;
  +   unsigned int cpuset_mems_cookie;
   
  /*
   * The defrag ratio allows a configuration of the tradeoffs between
  @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
  get_cycles() % 1024  s-remote_node_defrag_ratio)
  return NULL;
   
  -   get_mems_allowed();
  -   zonelist = node_zonelist(slab_node(current-mempolicy), flags);
  -   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
  -   struct kmem_cache_node *n;
  +   do {
  +   cpuset_mems_cookie = get_mems_allowed();
  +   zonelist = node_zonelist(slab_node(current-mempolicy), flags);
  +   for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
  +   struct kmem_cache_node *n;
   
  -   n = get_node(s, zone_to_nid(zone));
  +   n = get_node(s, zone_to_nid(zone));
   
  -   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
  -   n->nr_partial > s->min_partial) {
  -   page = get_partial_node(n);
  -   if (page) {
  -   put_mems_allowed();
  -   return page;
  +   if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
  +   n->nr_partial > s->min_partial) {
  +   page = get_partial_node(n);
  +   if (page) {
  +   /*
  +* Return the object even if
  +* put_mems_allowed indicated that
  +* the cpuset mems_allowed was
  +* updated in parallel. It's a
  +* harmless race between the alloc
  +* and the cpuset update.
  +*/
  +   put_mems_allowed(cpuset_mems_cookie);
  +   return page;
  +   }
  }
  }
  -   }
  +   } while (!put_mems_allowed(cpuset_mems_cookie));
  put_mems_allowed();
 
 This doesn't build on 3.0, the backport left the stray put_mems_allowed
 above:
 
 linux-stable/mm/slub.c: In function 'get_any_partial':
 linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
 'put_mems_allowed'
 linux-stable/include/linux/cpuset.h:108:20: note: declared here
 

That line should have been deleted and tests were based on slab. My
apologies.

---8<---
cpuset: mm: Reduce large amounts of memory barrier related damage fix

linux-stable/mm/slub.c: In function 'get_any_partial':
linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
'put_mems_allowed'
linux-stable/include/linux/cpuset.h:108:20: note: declared here

Reported-by: Herton Ronaldo Krzesinski herton.krzesin...@canonical.com
Signed-off-by: Mel Gorman mgor...@suse.de

diff --git a/mm/slub.c b/mm/slub.c
index 00ccf2c..ae6e80e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache *s, 
gfp_t flags)
}
}
} while (!put_mems_allowed(cpuset_mems_cookie));
-   put_mems_allowed();
 #endif
return NULL;
 }
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Greg Kroah-Hartman
On Fri, Jul 27, 2012 at 04:23:47PM +0100, Mel Gorman wrote:
   --- a/mm/slub.c
   +++ b/mm/slub.c
   @@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
 struct zone *zone;
 enum zone_type high_zoneidx = gfp_zone(flags);
 struct page *page;
   + unsigned int cpuset_mems_cookie;

 /*
  * The defrag ratio allows a configuration of the tradeoffs between
   @@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
 	get_cycles() % 1024 > s->remote_node_defrag_ratio)
 return NULL;

   - get_mems_allowed();
   - zonelist = node_zonelist(slab_node(current->mempolicy), flags);
   - for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
   - struct kmem_cache_node *n;
   + do {
   + cpuset_mems_cookie = get_mems_allowed();
   + zonelist = node_zonelist(slab_node(current->mempolicy), flags);
   + for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
   + struct kmem_cache_node *n;

   - n = get_node(s, zone_to_nid(zone));
   + n = get_node(s, zone_to_nid(zone));

   - if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
   - n->nr_partial > s->min_partial) {
   - page = get_partial_node(n);
   - if (page) {
   - put_mems_allowed();
   - return page;
   + if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
   + n->nr_partial > s->min_partial) {
   + page = get_partial_node(n);
   + if (page) {
   + /*
   +  * Return the object even if
   +  * put_mems_allowed indicated that
   +  * the cpuset mems_allowed was
   +  * updated in parallel. It's a
   +  * harmless race between the alloc
   +  * and the cpuset update.
   +  */
   + put_mems_allowed(cpuset_mems_cookie);
   + return page;
   + }
 }
 }
   - }
   + } while (!put_mems_allowed(cpuset_mems_cookie));
 put_mems_allowed();
  
  This doesn't build on 3.0, the backport left the stray put_mems_allowed
  above:
  
  linux-stable/mm/slub.c: In function 'get_any_partial':
  linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
  'put_mems_allowed'
  linux-stable/include/linux/cpuset.h:108:20: note: declared here
  
 
 That line should have been deleted and tests were based on slab. My
 apologies.
 
 ---8<---
 cpuset: mm: Reduce large amounts of memory barrier related damage fix
 
 linux-stable/mm/slub.c: In function 'get_any_partial':
 linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
 'put_mems_allowed'
 linux-stable/include/linux/cpuset.h:108:20: note: declared here
 
 Reported-by: Herton Ronaldo Krzesinski herton.krzesin...@canonical.com
 Signed-off-by: Mel Gorman mgor...@suse.de
 
 diff --git a/mm/slub.c b/mm/slub.c
 index 00ccf2c..ae6e80e 100644
 --- a/mm/slub.c
 +++ b/mm/slub.c
 @@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache 
 *s, gfp_t flags)
   }
   }
   } while (!put_mems_allowed(cpuset_mems_cookie));
 - put_mems_allowed();
  #endif
   return NULL;
  }

Thanks, I've merged this with the "original" in the tree, so all should
be good now.

greg k-h


Re: [ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-27 Thread Herton Ronaldo Krzesinski
On Fri, Jul 27, 2012 at 12:01:16PM -0700, Greg Kroah-Hartman wrote:
 On Fri, Jul 27, 2012 at 04:23:47PM +0100, Mel Gorman wrote:
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1457,6 +1457,7 @@ static struct page *get_any_partial(stru
 	struct zone *zone;
 	enum zone_type high_zoneidx = gfp_zone(flags);
 	struct page *page;
+	unsigned int cpuset_mems_cookie;
 
 	/*
 	 * The defrag ratio allows a configuration of the tradeoffs between
@@ -1480,22 +1481,32 @@ static struct page *get_any_partial(stru
 			get_cycles() % 1024 > s->remote_node_defrag_ratio)
 		return NULL;
 
-	get_mems_allowed();
-	zonelist = node_zonelist(slab_node(current->mempolicy), flags);
-	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
-		struct kmem_cache_node *n;
+	do {
+		cpuset_mems_cookie = get_mems_allowed();
+		zonelist = node_zonelist(slab_node(current->mempolicy), flags);
+		for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
+			struct kmem_cache_node *n;
 
-		n = get_node(s, zone_to_nid(zone));
+			n = get_node(s, zone_to_nid(zone));
 
-		if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
-				n->nr_partial > s->min_partial) {
-			page = get_partial_node(n);
-			if (page) {
-				put_mems_allowed();
-				return page;
+			if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
+					n->nr_partial > s->min_partial) {
+				page = get_partial_node(n);
+				if (page) {
+					/*
+					 * Return the object even if
+					 * put_mems_allowed indicated that
+					 * the cpuset mems_allowed was
+					 * updated in parallel. It's a
+					 * harmless race between the alloc
+					 * and the cpuset update.
+					 */
+					put_mems_allowed(cpuset_mems_cookie);
+					return page;
+				}
 			}
 		}
-	}
+	} while (!put_mems_allowed(cpuset_mems_cookie));
	put_mems_allowed();
   
   This doesn't build on 3.0, the backport left the stray put_mems_allowed
   above:
   
   linux-stable/mm/slub.c: In function 'get_any_partial':
   linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
   'put_mems_allowed'
   linux-stable/include/linux/cpuset.h:108:20: note: declared here
   
  
  That line should have been deleted and tests were based on slab. My
  apologies.
  
  ---8<---
  cpuset: mm: Reduce large amounts of memory barrier related damage fix
  
  linux-stable/mm/slub.c: In function 'get_any_partial':
  linux-stable/mm/slub.c:1510:2: error: too few arguments to function 
  'put_mems_allowed'
  linux-stable/include/linux/cpuset.h:108:20: note: declared here
  
  Reported-by: Herton Ronaldo Krzesinski herton.krzesin...@canonical.com
  Signed-off-by: Mel Gorman mgor...@suse.de
  
  diff --git a/mm/slub.c b/mm/slub.c
  index 00ccf2c..ae6e80e 100644
  --- a/mm/slub.c
  +++ b/mm/slub.c
  @@ -1507,7 +1507,6 @@ static struct page *get_any_partial(struct kmem_cache 
  *s, gfp_t flags)
  }
  }
  } while (!put_mems_allowed(cpuset_mems_cookie));
  -   put_mems_allowed();
   #endif
  return NULL;
   }
 
 Thanks, I've merged this with the original in the tree, so all should
 be good now.

Thanks. I saw what seems another issue now on the patch too, sorry for
not noticing earlier: this backport is lacking the
write_seqcount_{begin,end} on set_mems_allowed for the case with
CONFIG_CPUSETS, like in the original patch:

 static inline void set_mems_allowed(nodemask_t nodemask)
 {
task_lock(current);
+   write_seqcount_begin(&current->mems_allowed_seq);
    current->mems_allowed = nodemask;
+   write_seqcount_end(&current->mems_allowed_seq);
task_unlock(current);
 }



 
 greg k-h

-- 
[]'s
Herton

[ 39/40] cpuset: mm: reduce large amounts of memory barrier related damage v3

2012-07-26 Thread Greg Kroah-Hartman
From: Greg KH 

3.0-stable review patch.  If anyone has any objections, please let me know.

--

From: Mel Gorman 

commit cc9a6c8776615f9c194ccf0b63a0aa5628235545 upstream.

Stable note:  Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
expensive and severely impacted page allocator performance. This
is part of a series of patches that reduce page allocator overhead.

Commit c0ff7453bb5c ("cpuset,mm: fix no node to alloc memory when
changing cpuset's mems") wins a super prize for the largest number of
memory barriers entered into fast paths for one commit.

[get|put]_mems_allowed is incredibly heavy with pairs of full memory
barriers inserted into a number of hot paths.  This was detected while
investigating a large page allocator slowdown introduced some time
after 2.6.32.  The largest portion of this overhead was shown by
oprofile to be at an mfence introduced by this commit into the page
allocator hot path.

For extra style points, the commit introduced the use of yield() in an
implementation of what looks like a spinning mutex.

This patch replaces the full memory barriers on both read and write
sides with a sequence counter with just read barriers on the fast path
side.  This is much cheaper on some architectures, including x86.  The
main bulk of the patch is the retry logic if the nodemask changes in a
manner that can cause a false failure.

While updating the nodemask, a check is made to see if a false failure
is a risk.  If it is, the sequence number gets bumped and parallel
allocators will briefly stall while the nodemask update takes place.

In a page fault test microbenchmark, oprofile samples from
__alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
actual results were

                              3.3.0-rc3       3.3.0-rc3
                            rc3-vanilla  nobarrier-v2r1
Clients   1 UserTime   0.07 (  0.00%)   0.08 (-14.19%)
Clients   2 UserTime   0.07 (  0.00%)   0.07 (  2.72%)
Clients   4 UserTime   0.08 (  0.00%)   0.07 (  3.29%)
Clients   1 SysTime0.70 (  0.00%)   0.65 (  6.65%)
Clients   2 SysTime0.85 (  0.00%)   0.82 (  3.65%)
Clients   4 SysTime1.41 (  0.00%)   1.41 (  0.32%)
Clients   1 WallTime   0.77 (  0.00%)   0.74 (  4.19%)
Clients   2 WallTime   0.47 (  0.00%)   0.45 (  3.73%)
Clients   4 WallTime   0.38 (  0.00%)   0.37 (  1.58%)
Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
Clients   1 Flt/sec  495161.39 (  0.00%) 517292.87 (  4.47%)
Clients   2 Flt/sec  820325.95 (  0.00%) 850289.77 (  3.65%)
Clients   4 Flt/sec  1020068.93 (  0.00%) 1022674.06 (  0.26%)
MMTests Statistics: duration
Sys Time Running Test (seconds)        135.68    132.17
User+Sys Time Running Test (seconds)   164.2     160.13
Total Elapsed Time (seconds)           123.46    120.87

The overall improvement is small but the System CPU time is much
improved and roughly in correlation to what oprofile reported (these
performance figures are without profiling so skew is expected).  The
actual number of page faults is noticeably improved.

For benchmarks like kernel builds, the overall benefit is marginal but
the system CPU time is slightly reduced.

To test the actual bug the commit fixed I opened two terminals.  The
first ran within a cpuset and continually ran a small program that
faulted 100M of anonymous data.  In a second window, the nodemask of the
cpuset was continually randomised in a loop.

Without the commit, the program would fail every so often (usually
within 10 seconds) and obviously with the commit everything worked fine.
With this patch applied, it also worked fine so the fix should be
functionally equivalent.

Signed-off-by: Mel Gorman 
Cc: Miao Xie 
Cc: David Rientjes 
Cc: Peter Zijlstra 
Cc: Christoph Lameter 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
Signed-off-by: Mel Gorman 
Signed-off-by: Greg Kroah-Hartman 


---
 include/linux/cpuset.h    |   49 ++
 include/linux/init_task.h |    8 +++
 include/linux/sched.h     |    2 -
 kernel/cpuset.c           |   43 +++-
 kernel/fork.c             |    3 ++
 mm/filemap.c              |   11 ++
 mm/hugetlb.c              |   15 ++
 mm/mempolicy.c            |   28 +++---
 mm/page_alloc.c           |   33 +-
 mm/slab.c                 |   13 +++-
 mm/slub.c                 |   35 +---
 mm/vmscan.c               |    2 -
 12 files changed, 133 insertions(+), 109 deletions(-)

--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -89,36 +89,25 @@ extern void 
