Re: [PATCH] slub: fix off by one in number of slab tests

2014-06-24 Thread David Rientjes
On Tue, 24 Jun 2014, Joonsoo Kim wrote:

> min_partial means the minimum number of slabs cached in the node's
> partial list. So, if nr_partial is less than it, we keep a newly empty
> slab on the node's partial list rather than freeing it. But if
> nr_partial is equal to or greater than it, we have enough partial
> slabs and should free the newly empty slab. The current implementation
> misses the equal case, so if min_partial is set to 0, at least one
> slab can still be cached. This is a critical problem for the kmemcg
> destruction logic, which doesn't work properly if any slabs remain
> cached. This patch fixes the problem.
> 
> Signed-off-by: Joonsoo Kim 

Acked-by: David Rientjes 

Needed for 3.16 to fix commit 91cb69620284 ("slub: make dead memcg caches 
discard free slabs immediately").
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] slub: fix off by one in number of slab tests

2014-06-24 Thread Vladimir Davydov
On Tue, Jun 24, 2014 at 04:44:01PM +0900, Joonsoo Kim wrote:
> min_partial means the minimum number of slabs cached in the node's
> partial list. So, if nr_partial is less than it, we keep a newly empty
> slab on the node's partial list rather than freeing it. But if
> nr_partial is equal to or greater than it, we have enough partial
> slabs and should free the newly empty slab. The current implementation
> misses the equal case, so if min_partial is set to 0, at least one
> slab can still be cached. This is a critical problem for the kmemcg
> destruction logic, which doesn't work properly if any slabs remain
> cached. This patch fixes the problem.

Oops, my fault :-(

Thank you for catching this!

> Signed-off-by: Joonsoo Kim 

Acked-by: Vladimir Davydov 

> 
> diff --git a/mm/slub.c b/mm/slub.c
> index c567927..67da14d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1851,7 +1851,7 @@ redo:
>  
>   new.frozen = 0;
>  
> - if (!new.inuse && n->nr_partial > s->min_partial)
> + if (!new.inuse && n->nr_partial >= s->min_partial)
>   m = M_FREE;
>   else if (new.freelist) {
>   m = M_PARTIAL;
> @@ -1962,7 +1962,7 @@ static void unfreeze_partials(struct kmem_cache *s,
>   new.freelist, new.counters,
>   "unfreezing slab"));
>  
> - if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) {
> + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
>   page->next = discard_page;
>   discard_page = page;
>   } else {
> @@ -2595,7 +2595,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  return;
>  }
>  
> - if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
> + if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
>   goto slab_empty;
>  
>   /*
> -- 
> 1.7.9.5
> 


[PATCH] slub: fix off by one in number of slab tests

2014-06-24 Thread Joonsoo Kim
min_partial means the minimum number of slabs cached in the node's
partial list. So, if nr_partial is less than it, we keep a newly empty
slab on the node's partial list rather than freeing it. But if
nr_partial is equal to or greater than it, we have enough partial
slabs and should free the newly empty slab. The current implementation
misses the equal case, so if min_partial is set to 0, at least one
slab can still be cached. This is a critical problem for the kmemcg
destruction logic, which doesn't work properly if any slabs remain
cached. This patch fixes the problem.

Signed-off-by: Joonsoo Kim 

diff --git a/mm/slub.c b/mm/slub.c
index c567927..67da14d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1851,7 +1851,7 @@ redo:
 
new.frozen = 0;
 
-   if (!new.inuse && n->nr_partial > s->min_partial)
+   if (!new.inuse && n->nr_partial >= s->min_partial)
m = M_FREE;
else if (new.freelist) {
m = M_PARTIAL;
@@ -1962,7 +1962,7 @@ static void unfreeze_partials(struct kmem_cache *s,
new.freelist, new.counters,
"unfreezing slab"));
 
-   if (unlikely(!new.inuse && n->nr_partial > s->min_partial)) {
+   if (unlikely(!new.inuse && n->nr_partial >= s->min_partial)) {
page->next = discard_page;
discard_page = page;
} else {
@@ -2595,7 +2595,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 return;
 }
 
-   if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
+   if (unlikely(!new.inuse && n->nr_partial >= s->min_partial))
goto slab_empty;
 
/*
-- 
1.7.9.5


