Hi Joonsoo,
On 04/09/2013 09:21 AM, Joonsoo Kim wrote:
Currently, pages freed via RCU are not counted in reclaimed_slab, because
they are freed in RCU context rather than in the current task's context.
But this free is initiated by the task, so counting these pages in the
task's reclaimed_slab is meaningful when deciding whether to continue
reclaim or not.
So change the code to count these pages in this task's reclaimed_slab.

Cc: Christoph Lameter <c...@linux-foundation.org>
Cc: Pekka Enberg <penb...@kernel.org>
Cc: Matt Mackall <m...@selenic.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>

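The "decide whether we continue reclaim" point refers, as far as I understand,
to roughly this pattern in mm/vmscan.c (paraphrased from the code of this era,
not an exact copy):

	struct reclaim_state *reclaim_state = current->reclaim_state;

	shrink_slab(shrink, sc->nr_scanned, lru_pages);
	if (reclaim_state) {
		/* credit slab pages returned to the buddy allocator */
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}

For SLAB_DESTROY_BY_RCU caches the pages were previously freed later from RCU
context, where current->reclaim_state does not belong to the reclaiming task,
so they were never credited; moving the accounting into free_slab() charges
them to the task that initiated the free.
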
diff --git a/mm/slub.c b/mm/slub.c
index 4aec537..16fd2d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1409,8 +1409,6 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
        memcg_release_pages(s, order);
        page_mapcount_reset(page);
-       if (current->reclaim_state)
-               current->reclaim_state->reclaimed_slab += pages;
        __free_memcg_kmem_pages(page, order);
  }
@@ -1431,6 +1429,8 @@ static void rcu_free_slab(struct rcu_head *h)

 static void free_slab(struct kmem_cache *s, struct page *page)
  {
+       int pages = 1 << compound_order(page);

One question unrelated to this patch: why can a slab cache use compound pages (hugetlbfs/THP pages)? Aren't those just used by applications to reduce TLB misses?
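My rough understanding (sketch below, written from memory of mm/slub.c, so the
details may be off; the variable names are placeholders) is that SLUB itself
allocates any order > 0 slab as a compound page, independent of hugetlbfs/THP,
so that compound_order() can recover the slab's order from the head page at
free time:

	/*
	 * Sketch only, not the exact mm/slub.c code: an order > 0 slab is
	 * allocated with __GFP_COMP, so the head page records its order and
	 * compound_order() can read it back later (it returns 0 for an
	 * order-0 page, hence "1 << compound_order(page)" works either way).
	 */
	if (order > 0)
		flags |= __GFP_COMP;
	page = alloc_pages(flags, order);

	/* at free time */
	pages = 1 << compound_order(page);
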

+
        if (unlikely(s->flags & SLAB_DESTROY_BY_RCU)) {
                struct rcu_head *head;
@@ -1450,6 +1450,9 @@ static void free_slab(struct kmem_cache *s, struct page *page)
                call_rcu(head, rcu_free_slab);
        } else
                __free_slab(s, page);
+
+       if (current->reclaim_state)
+               current->reclaim_state->reclaimed_slab += pages;
  }
 static void discard_slab(struct kmem_cache *s, struct page *page)
