Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-17 Thread Christoph Lameter
On Wed, 17 Aug 2016, aruna.ramakris...@oracle.com wrote:

> I'll send out an updated slab counters patch with Joonsoo's suggested fix
> tomorrow (nr_slabs will be unsigned long for SLAB only, and there will be a
> separate definition for SLUB), and once that's in, I'll create a new patch
> that makes nr_slabs common for SLAB and SLUB, and also converts total_objects
> to unsigned long. Maybe it can include some more cleanup too. Does that sound
> acceptable?

That's fine.




Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-17 Thread aruna . ramakrishna


On 08/16/2016 08:52 AM, Christoph Lameter wrote:


On Tue, 16 Aug 2016, Joonsoo Kim wrote:


In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
operation should be used.


It could be moved under the node lock.



Christoph, Joonsoo,

I agree that nr_slabs could be common between SLAB and SLUB, but I think 
that should be a separate patch, since converting nr_slabs to unsigned 
long for SLUB will cause quite a bit of change in mm/slub.c that is not 
related to adding counters to SLAB.


I'll send out an updated slab counters patch with Joonsoo's suggested 
fix tomorrow (nr_slabs will be unsigned long for SLAB only, and there 
will be a separate definition for SLUB), and once that's in, I'll create 
a new patch that makes nr_slabs common for SLAB and SLUB, and also 
converts total_objects to unsigned long. Maybe it can include some more 
cleanup too. Does that sound acceptable?


Thanks,
Aruna


Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-16 Thread Christoph Lameter

On Tue, 16 Aug 2016, Joonsoo Kim wrote:

> In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
> operation should be used.

It could be moved under the node lock.


Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-15 Thread Joonsoo Kim
On Fri, Aug 05, 2016 at 09:21:56AM -0500, Christoph Lameter wrote:
> On Fri, 5 Aug 2016, Joonsoo Kim wrote:
> 
> > If my comments above are addressed, all counting would be done while
> > holding a lock. So, an atomic definition isn't needed for SLAB.
> 
> Ditto for slub. struct kmem_cache_node is already defined in mm/slab.h.
> Thus it is a common definition already and can be used by both.
> 
> Making nr_slabs and total_objects unsigned long would be great.

In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
operation should be used.

Anyway, Aruna, could you handle my comment?

Thanks.


Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-05 Thread Christoph Lameter
On Fri, 5 Aug 2016, Joonsoo Kim wrote:

> If my comments above are addressed, all counting would be done while
> holding a lock. So, an atomic definition isn't needed for SLAB.

Ditto for slub. struct kmem_cache_node is already defined in mm/slab.h.
Thus it is a common definition already and can be used by both.

Making nr_slabs and total_objects unsigned long would be great.




Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-05 Thread Christoph Lameter
On Thu, 4 Aug 2016, Aruna Ramakrishna wrote:

> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).

Acked-by: Christoph Lameter 



Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-04 Thread Joonsoo Kim
2016-08-05 4:01 GMT+09:00 Aruna Ramakrishna :
> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).
>
> This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
> total number of allocated slabs per node, per cache. This counter is
> updated when a slab is created or destroyed. This enables us to skip
> traversing the slabs_full list while gathering slabinfo statistics, and
> since slabs_full tends to be the biggest list when the cache is large, it
> results in a dramatic performance improvement. Getting slabinfo statistics
> now only requires walking the slabs_free and slabs_partial lists, and
> those lists are usually much smaller than slabs_full. We tested this after
> growing the dentry cache to 70GB, and the performance improved from 2s to
> 5ms.
>
> Signed-off-by: Aruna Ramakrishna 
> Cc: Mike Kravetz 
> Cc: Christoph Lameter 
> Cc: Pekka Enberg 
> Cc: David Rientjes 
> Cc: Joonsoo Kim 
> Cc: Andrew Morton 
> ---
> Note: this has been tested only on x86_64.
>
>  mm/slab.c | 25 -
>  mm/slab.h | 15 ++-
>  mm/slub.c | 19 +--
>  3 files changed, 31 insertions(+), 28 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 261147b..d683840 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -233,6 +233,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
> spin_lock_init(&parent->list_lock);
> parent->free_objects = 0;
> parent->free_touched = 0;
> +   atomic_long_set(&parent->nr_slabs, 0);
>  }
>
>  #define MAKE_LIST(cachep, listp, slab, nodeid) \
> @@ -2333,6 +2334,7 @@ static int drain_freelist(struct kmem_cache *cache,
> n->free_objects -= cache->num;
> spin_unlock_irq(&n->list_lock);
> slab_destroy(cache, page);
> +   atomic_long_dec(&n->nr_slabs);
> nr_freed++;
> }

Please decrease the counter when a slab is detached from the list.
Otherwise, there would be an inconsistency between the counter and the
number of slabs attached to the list.

>  out:
> @@ -2736,6 +2738,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
> if (gfpflags_allow_blocking(local_flags))
> local_irq_disable();
>
> +   atomic_long_inc(&n->nr_slabs);
> +
> return page;

Please increase the counter when a slab is attached to the list, in
cache_grow_end().

>  opps1:
> @@ -3455,6 +3459,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
>
> page = list_last_entry(&n->slabs_free, struct page, lru);
> list_move(&page->lru, list);
> +   atomic_long_dec(&n->nr_slabs);
> }
>  }
>
> @@ -4111,6 +4116,8 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
> unsigned long num_objs;
> unsigned long active_slabs = 0;
> unsigned long num_slabs, free_objects = 0, shared_avail = 0;
> +   unsigned long num_slabs_partial = 0, num_slabs_free = 0;
> +   unsigned long num_slabs_full = 0;
> const char *name;
> char *error = NULL;
> int node;
> @@ -4120,36 +4127,36 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
> num_slabs = 0;
> for_each_kmem_cache_node(cachep, node, n) {
>
> +   num_slabs += node_nr_slabs(n);
> check_irq_on();
> spin_lock_irq(&n->list_lock);
>
> -   list_for_each_entry(page, &n->slabs_full, lru) {
> -   if (page->active != cachep->num && !error)
> -   error = "slabs_full accounting error";
> -   active_objs += cachep->num;
> -   active_slabs++;
> -   }
> list_for_each_entry(page, &n->slabs_partial, lru) {
> if (page->active == cachep->num && !error)
> error = "slabs_partial accounting error";
> if (!page->active && !error)
> error = "slabs_partial accounting error";
> active_objs += page->active;
> -   active_slabs++;
> +   num_slabs_partial++;
> }
> +
> list_for_each_entry(page, &n->slabs_free, lru) {
> if (page->active && !error)
> error = "slabs_free accounting error";
> -   num_slabs++;
> +   num_slabs_free++;
> }
> +
> free_objects += n->free_objects;
> if (n->shared

Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-04 Thread Aruna Ramakrishna



On 08/04/2016 02:06 PM, Andrew Morton wrote:

On Thu, 4 Aug 2016 12:01:13 -0700 Aruna Ramakrishna wrote:


On large systems, when some slab caches grow to millions of objects (and
many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
During this time, interrupts are disabled while walking the slab lists
(slabs_full, slabs_partial, and slabs_free) for each node, and this
sometimes causes timeouts in other drivers (for instance, Infiniband).

This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
total number of allocated slabs per node, per cache. This counter is
updated when a slab is created or destroyed. This enables us to skip
traversing the slabs_full list while gathering slabinfo statistics, and
since slabs_full tends to be the biggest list when the cache is large, it
results in a dramatic performance improvement. Getting slabinfo statistics
now only requires walking the slabs_free and slabs_partial lists, and
those lists are usually much smaller than slabs_full. We tested this after
growing the dentry cache to 70GB, and the performance improved from 2s to
5ms.


I assume this is tested on both slab and slub?

It isn't the smallest of patches but given the seriousness of the
problem I think I'll tag it for -stable backporting.



This was only sanity-checked on slub. The performance tests were only 
run on slab.


Thanks,
Aruna


Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-04 Thread Andrew Morton
On Thu, 4 Aug 2016 12:01:13 -0700 Aruna Ramakrishna wrote:

> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).
> 
> This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
> total number of allocated slabs per node, per cache. This counter is
> updated when a slab is created or destroyed. This enables us to skip
> traversing the slabs_full list while gathering slabinfo statistics, and
> since slabs_full tends to be the biggest list when the cache is large, it
> results in a dramatic performance improvement. Getting slabinfo statistics
> now only requires walking the slabs_free and slabs_partial lists, and
> those lists are usually much smaller than slabs_full. We tested this after
> growing the dentry cache to 70GB, and the performance improved from 2s to
> 5ms.

I assume this is tested on both slab and slub?

It isn't the smallest of patches but given the seriousness of the
problem I think I'll tag it for -stable backporting.



[PATCH v2] mm/slab: Improve performance of gathering slabinfo stats

2016-08-04 Thread Aruna Ramakrishna
On large systems, when some slab caches grow to millions of objects (and
many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
During this time, interrupts are disabled while walking the slab lists
(slabs_full, slabs_partial, and slabs_free) for each node, and this
sometimes causes timeouts in other drivers (for instance, Infiniband).

This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
total number of allocated slabs per node, per cache. This counter is
updated when a slab is created or destroyed. This enables us to skip
traversing the slabs_full list while gathering slabinfo statistics, and
since slabs_full tends to be the biggest list when the cache is large, it
results in a dramatic performance improvement. Getting slabinfo statistics
now only requires walking the slabs_free and slabs_partial lists, and
those lists are usually much smaller than slabs_full. We tested this after
growing the dentry cache to 70GB, and the performance improved from 2s to
5ms.

Signed-off-by: Aruna Ramakrishna 
Cc: Mike Kravetz 
Cc: Christoph Lameter 
Cc: Pekka Enberg 
Cc: David Rientjes 
Cc: Joonsoo Kim 
Cc: Andrew Morton 
---
Note: this has been tested only on x86_64.

 mm/slab.c | 25 -
 mm/slab.h | 15 ++-
 mm/slub.c | 19 +--
 3 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 261147b..d683840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -233,6 +233,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
spin_lock_init(&parent->list_lock);
parent->free_objects = 0;
parent->free_touched = 0;
+   atomic_long_set(&parent->nr_slabs, 0);
 }
 
 #define MAKE_LIST(cachep, listp, slab, nodeid) \
@@ -2333,6 +2334,7 @@ static int drain_freelist(struct kmem_cache *cache,
n->free_objects -= cache->num;
spin_unlock_irq(&n->list_lock);
slab_destroy(cache, page);
+   atomic_long_dec(&n->nr_slabs);
nr_freed++;
}
 out:
@@ -2736,6 +2738,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
if (gfpflags_allow_blocking(local_flags))
local_irq_disable();
 
+   atomic_long_inc(&n->nr_slabs);
+
return page;
 
 opps1:
@@ -3455,6 +3459,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
 
page = list_last_entry(&n->slabs_free, struct page, lru);
list_move(&page->lru, list);
+   atomic_long_dec(&n->nr_slabs);
}
 }
 
@@ -4111,6 +4116,8 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
unsigned long num_objs;
unsigned long active_slabs = 0;
unsigned long num_slabs, free_objects = 0, shared_avail = 0;
+   unsigned long num_slabs_partial = 0, num_slabs_free = 0;
+   unsigned long num_slabs_full = 0;
const char *name;
char *error = NULL;
int node;
@@ -4120,36 +4127,36 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
num_slabs = 0;
for_each_kmem_cache_node(cachep, node, n) {
 
+   num_slabs += node_nr_slabs(n);
check_irq_on();
spin_lock_irq(&n->list_lock);
 
-   list_for_each_entry(page, &n->slabs_full, lru) {
-   if (page->active != cachep->num && !error)
-   error = "slabs_full accounting error";
-   active_objs += cachep->num;
-   active_slabs++;
-   }
list_for_each_entry(page, &n->slabs_partial, lru) {
if (page->active == cachep->num && !error)
error = "slabs_partial accounting error";
if (!page->active && !error)
error = "slabs_partial accounting error";
active_objs += page->active;
-   active_slabs++;
+   num_slabs_partial++;
}
+
list_for_each_entry(page, &n->slabs_free, lru) {
if (page->active && !error)
error = "slabs_free accounting error";
-   num_slabs++;
+   num_slabs_free++;
}
+
free_objects += n->free_objects;
if (n->shared)
shared_avail += n->shared->avail;
 
spin_unlock_irq(&n->list_lock);
}
-   num_slabs += active_slabs;
num_objs = num_slabs * cachep->num;
+   active_slabs = num_slabs - num_slabs_free;
+   num_slabs_full = num_slabs - (num_slabs_partial + num_slabs_free);
+   active_objs += (num_slabs_full * cachep->num);
+
if (num_objs - active_objs != free_objects && !error)
error = "free_objects accounting error";