This is a note to let you know that I've just added the patch titled
mm: vmscan: check if reclaim should really abort even if compaction_ready()
is true for one zone
to the 3.0-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
mm-vmscan-check-if-reclaim-should-really-abort-even-if-compaction_ready-is-true-for-one-zone.patch
and it can be found in the queue-3.0 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.
From 0cee34fd72c582b4f8ad8ce00645b75fb4168199 Mon Sep 17 00:00:00 2001
From: Mel Gorman <[email protected]>
Date: Thu, 12 Jan 2012 17:19:49 -0800
Subject: mm: vmscan: check if reclaim should really abort even if
compaction_ready() is true for one zone
From: Mel Gorman <[email protected]>
commit 0cee34fd72c582b4f8ad8ce00645b75fb4168199 upstream.
Stable note: Not tracked on Bugzilla. THP and compaction were found to
aggressively reclaim pages and stall systems under various conditions;
these issues were addressed piecemeal over time.
If compaction can proceed for a given zone, shrink_zones() does not
reclaim any more pages from it. After commit [e0c2327: vmscan: abort
reclaim/compaction if compaction can proceed], do_try_to_free_pages()
tries to finish as soon as possible once one zone can compact.
This was intended to prevent slabs being shrunk unnecessarily, but there
are side effects. One is that a small zone that is ready for compaction
will abort reclaim even if the chances of successfully allocating a THP
from that zone are small. It also means that reclaim can return too early,
before sc->nr_to_reclaim pages have been reclaimed.
This partially reverts the commit until it is proven that slabs are really
being shrunk unnecessarily but preserves the check to return 1 to avoid
OOM if reclaim was aborted prematurely.
[[email protected]: This patch replaces a revert from Andrea]
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Dave Jones <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Andy Isaacson <[email protected]>
Cc: Nai Xia <[email protected]>
Cc: Johannes Weiner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
mm/vmscan.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2129,7 +2129,8 @@ static inline bool compaction_ready(stru
*
* This function returns true if a zone is being reclaimed for a costly
* high-order allocation and compaction is ready to begin. This indicates to
- * the caller that it should retry the allocation or fail.
+ * the caller that it should consider retrying the allocation instead of
+ * further reclaim.
*/
static bool shrink_zones(int priority, struct zonelist *zonelist,
struct scan_control *sc)
@@ -2138,7 +2139,7 @@ static bool shrink_zones(int priority, s
struct zone *zone;
unsigned long nr_soft_reclaimed;
unsigned long nr_soft_scanned;
- bool should_abort_reclaim = false;
+ bool aborted_reclaim = false;
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(sc->gfp_mask), sc->nodemask) {
@@ -2164,7 +2165,7 @@ static bool shrink_zones(int priority, s
* allocations.
*/
if (compaction_ready(zone, sc)) {
- should_abort_reclaim = true;
+ aborted_reclaim = true;
continue;
}
}
@@ -2186,7 +2187,7 @@ static bool shrink_zones(int priority, s
shrink_zone(priority, zone, sc);
}
- return should_abort_reclaim;
+ return aborted_reclaim;
}
static bool zone_reclaimable(struct zone *zone)
@@ -2240,7 +2241,7 @@ static unsigned long do_try_to_free_page
struct zoneref *z;
struct zone *zone;
unsigned long writeback_threshold;
- bool should_abort_reclaim;
+ bool aborted_reclaim;
get_mems_allowed();
delayacct_freepages_start();
@@ -2252,9 +2253,7 @@ static unsigned long do_try_to_free_page
sc->nr_scanned = 0;
if (!priority)
disable_swap_token(sc->mem_cgroup);
- should_abort_reclaim = shrink_zones(priority, zonelist, sc);
- if (should_abort_reclaim)
- break;
+ aborted_reclaim = shrink_zones(priority, zonelist, sc);
/*
* Don't shrink slabs when reclaiming memory from
@@ -2320,8 +2319,8 @@ out:
if (oom_killer_disabled)
return 0;
- /* Aborting reclaim to try compaction? don't OOM, then */
- if (should_abort_reclaim)
+ /* Aborted reclaim to try compaction? don't OOM, then */
+ if (aborted_reclaim)
return 1;
/* top priority shrink_zones still had more to do? don't OOM, then */
Patches currently in stable-queue which might be from [email protected] are
queue-3.0/vmscan-clear-zone_congested-for-zone-with-good-watermark.patch
queue-3.0/mm-vmscan-when-reclaiming-for-compaction-ensure-there-are-sufficient-free-pages-available.patch
queue-3.0/mm-compaction-allow-compaction-to-isolate-dirty-pages.patch
queue-3.0/mm-page-allocator-do-not-call-direct-reclaim-for-thp-allocations-while-compaction-is-deferred.patch
queue-3.0/mm-vmscan-check-if-reclaim-should-really-abort-even-if-compaction_ready-is-true-for-one-zone.patch
queue-3.0/mm-zone_reclaim-make-isolate_lru_page-filter-aware.patch
queue-3.0/vmscan-add-shrink_slab-tracepoints.patch
queue-3.0/mm-change-isolate-mode-from-define-to-bitwise-type.patch
queue-3.0/mm-test-pageswapbacked-in-lumpy-reclaim.patch
queue-3.0/mm-migration-clean-up-unmap_and_move.patch
queue-3.0/mm-compaction-introduce-sync-light-migration-for-use-by-compaction.patch
queue-3.0/mm-vmscan.c-consider-swap-space-when-deciding-whether-to-continue-reclaim.patch
queue-3.0/kswapd-avoid-unnecessary-rebalance-after-an-unsuccessful-balancing.patch
queue-3.0/mm-compaction-trivial-clean-up-in-acct_isolated.patch
queue-3.0/mm-vmscan-do-not-oom-if-aborting-reclaim-to-start-compaction.patch
queue-3.0/kswapd-assign-new_order-and-new_classzone_idx-after-wakeup-in-sleeping.patch
queue-3.0/vmscan-promote-shared-file-mapped-pages.patch
queue-3.0/vmscan-reduce-wind-up-shrinker-nr-when-shrinker-can-t-do-work.patch
queue-3.0/vmscan-shrinker-nr-updates-race-and-go-wrong.patch
queue-3.0/mm-compaction-make-isolate_lru_page-filter-aware-again.patch
queue-3.0/mm-vmstat.c-cache-align-vm_stat.patch
queue-3.0/mm-compaction-determine-if-dirty-pages-can-be-migrated-without-blocking-within-migratepage.patch
queue-3.0/vmscan-activate-executable-pages-after-first-usage.patch
queue-3.0/mm-memory-hotplug-check-if-pages-are-correctly-reserved-on-a-per-section-basis.patch
queue-3.0/vmscan-limit-direct-reclaim-for-higher-order-allocations.patch
queue-3.0/mm-vmscan-fix-force-scanning-small-targets-without-swap.patch
queue-3.0/mm-compaction-make-isolate_lru_page-filter-aware.patch
queue-3.0/mm-reduce-the-amount-of-work-done-when-updating-min_free_kbytes.patch
queue-3.0/vmscan-abort-reclaim-compaction-if-compaction-can-proceed.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html