Re: [PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately

2014-06-24 Thread Vladimir Davydov
On Tue, Jun 24, 2014 at 04:50:11PM +0900, Joonsoo Kim wrote:
> On Fri, Jun 13, 2014 at 12:38:21AM +0400, Vladimir Davydov wrote:
> > @@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
> > kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
> > unsigned long flags;
> >  
> > +   if (memcg_cache_dead(s))
> > +   s->min_partial = 0;
> > +
> > if (!slabs_by_inuse) {
> > /*
> >  * Do not fail shrinking empty slabs if allocation of the
> 
> I think that you should move the n->nr_partial test down to after taking
> the lock in __kmem_cache_shrink(). Accessing n->nr_partial without the
> node lock is racy and you can see a stale value. That may result in
> skipping the freeing of an empty slab, so your destruction logic could
> fail.

You're right! Will fix this.

And there seems to be the same problem in SLAB, where we check the
node->slabs_free list for emptiness without holding node->list_lock (see
drain_freelist) while it can be modified concurrently by free_block.
This will be fixed automatically once we make __kmem_cache_shrink unset
node->free_limit (which must be done under the lock), though.

Thank you!
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately

2014-06-24 Thread Joonsoo Kim
On Fri, Jun 13, 2014 at 12:38:21AM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches;
> otherwise they will hang around forever.
> 
> This patch makes SLUB discard dead memcg caches' slabs as soon as they
> become empty. To achieve that, it disables per-cpu partial lists for
> dead caches (see put_cpu_partial) and forbids keeping empty slabs on
> per-node partial lists by setting the cache's min_partial to 0 in
> kmem_cache_shrink, which is always called on memcg offline (see
> memcg_unregister_all_caches).
> 
> Signed-off-by: Vladimir Davydov 
> Thanks-to: Joonsoo Kim 
> ---
>  mm/slub.c |   11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 52565a9426ef..0d2d1978e62c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2064,6 +2064,14 @@ static void put_cpu_partial(struct kmem_cache *s, 
> struct page *page, int drain)
>  
>   } while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
>   != oldpage);
> +
> + if (memcg_cache_dead(s)) {
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> + local_irq_restore(flags);
> + }
>  #endif
>  }
>  
> @@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
>   kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
>   unsigned long flags;
>  
> + if (memcg_cache_dead(s))
> + s->min_partial = 0;
> +
>   if (!slabs_by_inuse) {
>   /*
>* Do not fail shrinking empty slabs if allocation of the

I think that you should move the n->nr_partial test down to after taking
the lock in __kmem_cache_shrink(). Accessing n->nr_partial without the
node lock is racy and you can see a stale value. That may result in
skipping the freeing of an empty slab, so your destruction logic could
fail.

Thanks.


Re: [PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately

2014-06-13 Thread Christoph Lameter
On Fri, 13 Jun 2014, Vladimir Davydov wrote:

> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches;
> otherwise they will hang around forever.

Looks good and clean.

Acked-by: Christoph Lameter 


[PATCH -mm v3 7/8] slub: make dead memcg caches discard free slabs immediately

2014-06-12 Thread Vladimir Davydov
Since a dead memcg cache is destroyed only after the last slab allocated
to it is freed, we must disable caching of empty slabs for such caches;
otherwise they will hang around forever.

This patch makes SLUB discard dead memcg caches' slabs as soon as they
become empty. To achieve that, it disables per-cpu partial lists for
dead caches (see put_cpu_partial) and forbids keeping empty slabs on
per-node partial lists by setting the cache's min_partial to 0 in
kmem_cache_shrink, which is always called on memcg offline (see
memcg_unregister_all_caches).

Signed-off-by: Vladimir Davydov 
Thanks-to: Joonsoo Kim 
---
 mm/slub.c |   11 +++
 1 file changed, 11 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 52565a9426ef..0d2d1978e62c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2064,6 +2064,14 @@ static void put_cpu_partial(struct kmem_cache *s, struct 
page *page, int drain)
 
} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page)
!= oldpage);
+
+   if (memcg_cache_dead(s)) {
+   unsigned long flags;
+
+   local_irq_save(flags);
+   unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
+   local_irq_restore(flags);
+   }
 #endif
 }
 
@@ -3409,6 +3417,9 @@ int __kmem_cache_shrink(struct kmem_cache *s)
kmalloc(sizeof(struct list_head) * objects, GFP_KERNEL);
unsigned long flags;
 
+   if (memcg_cache_dead(s))
+   s->min_partial = 0;
+
if (!slabs_by_inuse) {
/*
 * Do not fail shrinking empty slabs if allocation of the
-- 
1.7.10.4
