The kernel uses global_dirtyable_memory() to calculate the number of
dirtyable pages, i.e. pages that can be allocated to the page cache.
A bug in that calculation causes an unsigned underflow, which makes
the page count look like a huge unsigned number.  This in turn
confuses the dirty writeback throttling into aggressively writing
back pages as they become dirty (usually one page at a time).

Fix this by ensuring the subtraction cannot underflow.
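
For illustration only, a minimal userspace sketch (not kernel code; the
variable names and page counts are made up) of how the old math wraps
around and how the clamped version avoids it:

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical page counts: the reserve exceeds free plus
             * reclaimable pages, as can happen when much of memory is
             * unreclaimable (kernel memory, or anonymous memory with
             * no swap). */
            unsigned long nr_free = 100;
            unsigned long nr_reclaimable = 50;
            unsigned long dirty_balance_reserve = 200;

            /* Old math: the unsigned subtraction wraps to a huge value. */
            unsigned long old = nr_free + nr_reclaimable -
                                dirty_balance_reserve;

            /* Fixed math: clamp at zero instead of underflowing. */
            unsigned long new = 0;
            if (nr_free + nr_reclaimable >= dirty_balance_reserve)
                    new = nr_free + nr_reclaimable -
                          dirty_balance_reserve;

            printf("old = %lu, new = %lu\n", old, new);
            return 0;
    }

With these sample numbers, "old" prints a value close to ULONG_MAX
while "new" prints 0, which is the behaviour the patch below restores
in highmem_dirtyable_memory() and global_dirtyable_memory().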

Signed-off-by: Sonny Rao <sonny...@chromium.org>
Signed-off-by: Puneet Kumar <puneets...@chromium.org>
---
 mm/page-writeback.c |   17 +++++++++++++----
 1 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b..2a6356c 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -194,11 +194,19 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
        unsigned long x = 0;
 
        for_each_node_state(node, N_HIGH_MEMORY) {
+               unsigned long nr_pages;
                struct zone *z =
                        &NODE_DATA(node)->node_zones[ZONE_HIGHMEM];
 
-               x += zone_page_state(z, NR_FREE_PAGES) +
-                    zone_reclaimable_pages(z) - z->dirty_balance_reserve;
+               nr_pages = zone_page_state(z, NR_FREE_PAGES) +
+                       zone_reclaimable_pages(z);
+               /*
+                * Unreclaimable memory (kernel memory or anonymous memory
+                * without swap) can bring down the dirtyable pages below
+                * the zone's dirty balance reserve.
+                */
+               if (nr_pages >= z->dirty_balance_reserve)
+                       x += nr_pages - z->dirty_balance_reserve;
        }
        /*
         * Make sure that the number of highmem pages is never larger
@@ -222,8 +230,9 @@ static unsigned long global_dirtyable_memory(void)
 {
        unsigned long x;
 
-       x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
-           dirty_balance_reserve;
+       x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
+       if (x >= dirty_balance_reserve)
+               x -= dirty_balance_reserve;
 
        if (!vm_highmem_is_dirtyable)
                x -= highmem_dirtyable_memory(x);
-- 
1.7.7.3
