Commit:     fa1de9008c9bcce8ab5122529dd19b24c273eba2
Parent:     436c6541b13a73790646eb11429bdc8ee50eec41
Author:     Hugh Dickins <[EMAIL PROTECTED]>
AuthorDate: Thu Feb 7 00:14:13 2008 -0800
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Thu Feb 7 08:42:20 2008 -0800

    memcgroup: revert swap_state mods

    If we're charging rss and we're charging cache, it seems obvious that we
    should be charging swapcache - as has been done.  But in practice that
    doesn't work out so well: both swapin readahead and swapoff leave the
    majority of pages charged to the wrong cgroup (the cgroup that happened to
    read them in, rather than the cgroup to which they belong).

    (Which is why unuse_pte's GFP_KERNEL while holding pte lock never showed up
    as a problem: no allocation was ever done there, every page read being
    already charged to the cgroup which initiated the swapoff.)

    It all works rather better if we leave the charging to do_swap_page and
    unuse_pte, and do nothing for swapcache itself: revert mm/swap_state.c to
    what it was before the memory-controller patches.  This also significantly
    speeds up a contained process working at its limit, because it no longer
    needs to keep waiting for swap writeback to complete.

    Is it unfair that swap pages become uncharged once they're unmapped, even
    though they're still clearly private to particular cgroups?  For a short
    while, yes; but PageReclaim arranges for those pages to go to the end of
    the inactive list and be reclaimed soon if necessary.

    shmem/tmpfs pages are a distinct case: their charging also benefits from
    this change, but their second life on the lists as swapcache pages may
    prove more unfair - that I need to check next.
    Signed-off-by: Hugh Dickins <[EMAIL PROTECTED]>
    Cc: Pavel Emelianov <[EMAIL PROTECTED]>
    Acked-by: Balbir Singh <[EMAIL PROTECTED]>
    Cc: Paul Menage <[EMAIL PROTECTED]>
    Cc: Peter Zijlstra <[EMAIL PROTECTED]>
    Cc: "Eric W. Biederman" <[EMAIL PROTECTED]>
    Cc: Nick Piggin <[EMAIL PROTECTED]>
    Cc: Kirill Korotaev <[EMAIL PROTECTED]>
    Cc: Herbert Poetzl <[EMAIL PROTECTED]>
    Cc: David Rientjes <[EMAIL PROTECTED]>
    Cc: Vaidyanathan Srinivasan <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 mm/swap_state.c |   13 +------------
 1 files changed, 1 insertions(+), 12 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6ce0669..ec42f01 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -17,7 +17,6 @@
 #include <linux/backing-dev.h>
 #include <linux/pagevec.h>
 #include <linux/migrate.h>
-#include <linux/memcontrol.h>
 #include <asm/pgtable.h>
@@ -75,11 +74,6 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp_mask)
-       error = mem_cgroup_cache_charge(page, current->mm, gfp_mask);
-       if (error)
-               goto out;
        error = radix_tree_preload(gfp_mask);
        if (!error) {
@@ -92,14 +86,10 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp_mask)
                        __inc_zone_page_state(page, NR_FILE_PAGES);
-               } else {
-                       mem_cgroup_uncharge_page(page);
                }
                write_unlock_irq(&swapper_space.tree_lock);
                radix_tree_preload_end();
-       } else
-               mem_cgroup_uncharge_page(page);
+       }
-out:
        return error;
@@ -114,7 +104,6 @@ void __delete_from_swap_cache(struct page *page)
-       mem_cgroup_uncharge_page(page);
        radix_tree_delete(&swapper_space.page_tree, page_private(page));
        set_page_private(page, 0);