Commit:     32a4330d4156e55a4888a201f484dbafed9504ed
Parent:     8691f3a72f32f8b3ed535faa27140b3ae293c90b
Author:     Rik van Riel <[EMAIL PROTECTED]>
AuthorDate: Tue Oct 16 01:24:50 2007 -0700
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Tue Oct 16 09:42:54 2007 -0700

    mm: prevent kswapd from freeing excessive amounts of lowmem

    The current VM can get itself into trouble fairly easily on systems with a
    small ZONE_HIGHMEM, which is common on i686 computers with 1GB of memory.

    On one side, page_alloc() will allocate down to zone->pages_low, while on
    the other side, kswapd() and balance_pgdat() will try to free memory from
    every zone, until every zone has more free pages than zone->pages_high.

    Highmem can be filled up to zone->pages_low with page tables, ramfs,
    vmalloc allocations and other unswappable things quite easily and without
    many bad side effects, since we still have a huge ZONE_NORMAL to do future
    allocations from.

    However, as long as the number of free pages in the highmem zone is below
    zone->pages_high, kswapd will continue swapping things out from
    ZONE_NORMAL, too!

    Sami Farin managed to get his system into a state where kswapd had freed
    about 700MB of low memory and was still "going strong".

    The attached patch will make kswapd stop paging out data from zones when
    there is more than enough memory free.  We do go above zone->pages_high in
    order to keep pressure between zones equal in normal circumstances, but the
    patch should prevent the kind of excesses that made Sami's computer totally
    unusable.

    Signed-off-by: Rik van Riel <[EMAIL PROTECTED]>
    Cc: Nick Piggin <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 mm/vmscan.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a6e65d0..bc58802 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1371,7 +1371,13 @@ loop_again:
                        temp_priority[i] = priority;
                        sc.nr_scanned = 0;
                        note_zone_scanning_priority(zone, priority);
-                       nr_reclaimed += shrink_zone(priority, zone, &sc);
+                       /*
+                        * We put equal pressure on every zone, unless one
+                        * zone has way too many pages free already.
+                        */
+                       if (!zone_watermark_ok(zone, order, 8*zone->pages_high,
+                                               end_zone, 0))
+                               nr_reclaimed += shrink_zone(priority, zone, &sc);
                        reclaim_state->reclaimed_slab = 0;
                        nr_slab = shrink_slab(sc.nr_scanned, GFP_KERNEL,
                                                lru_pages);