This is a note to let you know that I've just added the patch titled

    mm: PageBuddy and mapcount robustness

to the 2.6.38-stable tree, which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-pagebuddy-and-mapcount-robustness.patch
and it can be found in the queue-2.6.38 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.


From ef2b4b95a63a1d23958dcb99eb2c6898eddc87d0 Mon Sep 17 00:00:00 2001
From: Andrea Arcangeli <[email protected]>
Date: Fri, 18 Mar 2011 00:16:35 +0100
Subject: mm: PageBuddy and mapcount robustness

From: Andrea Arcangeli <[email protected]>

commit ef2b4b95a63a1d23958dcb99eb2c6898eddc87d0 upstream.

Change the _mapcount value indicating PageBuddy from -2 to -128 for
more robustness against page_mapcount() underflows.

Use reset_page_mapcount instead of __ClearPageBuddy in bad_page to
ignore the previous retval of PageBuddy().

Signed-off-by: Andrea Arcangeli <[email protected]>
Reported-by: Hugh Dickins <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 include/linux/mm.h |   11 +++++++++--
 mm/page_alloc.c    |    4 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -402,16 +402,23 @@ static inline void init_page_count(struc
 /*
  * PageBuddy() indicate that the page is free and in the buddy system
  * (see mm/page_alloc.c).
+ *
+ * PAGE_BUDDY_MAPCOUNT_VALUE must be <= -2 but better not too close to
+ * -2 so that an underflow of the page_mapcount() won't be mistaken
+ * for a genuine PAGE_BUDDY_MAPCOUNT_VALUE. -128 can be created very
+ * efficiently by most CPU architectures.
  */
+#define PAGE_BUDDY_MAPCOUNT_VALUE (-128)
+
 static inline int PageBuddy(struct page *page)
 {
-       return atomic_read(&page->_mapcount) == -2;
+       return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
 }
 
 static inline void __SetPageBuddy(struct page *page)
 {
        VM_BUG_ON(atomic_read(&page->_mapcount) != -1);
-       atomic_set(&page->_mapcount, -2);
+       atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
 }
 
 static inline void __ClearPageBuddy(struct page *page)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -286,7 +286,7 @@ static void bad_page(struct page *page)
 
        /* Don't complain about poisoned pages */
        if (PageHWPoison(page)) {
-               __ClearPageBuddy(page);
+               reset_page_mapcount(page); /* remove PageBuddy */
                return;
        }
 
@@ -317,7 +317,7 @@ static void bad_page(struct page *page)
        dump_stack();
 out:
        /* Leave bad fields for debug, except PageBuddy could make trouble */
-       __ClearPageBuddy(page);
+       reset_page_mapcount(page); /* remove PageBuddy */
        add_taint(TAINT_BAD_PAGE);
 }
 


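As a reading aid, and not part of the patch itself: the sketch below is a
minimal userspace C illustration of the rationale in the commit message,
assuming a simplified struct (here called demo_page) with a plain int
_mapcount standing in for the kernel's atomic_t, and hypothetical helpers
old_PageBuddy()/new_PageBuddy() that mimic the check before and after the
change. It is not kernel code and the names are invented for illustration.

	#include <stdio.h>

	/* Hypothetical, simplified stand-in for struct page; the kernel
	 * keeps _mapcount as an atomic_t and starts it at -1 for a page
	 * that is neither mapped nor in the buddy allocator. */
	struct demo_page {
		int _mapcount;
	};

	#define PAGE_BUDDY_MAPCOUNT_VALUE (-128)

	/* Old scheme: a buddy page was marked with _mapcount == -2. */
	static int old_PageBuddy(struct demo_page *page)
	{
		return page->_mapcount == -2;
	}

	/* New scheme: the marker is -128, far away from -1. */
	static int new_PageBuddy(struct demo_page *page)
	{
		return page->_mapcount == PAGE_BUDDY_MAPCOUNT_VALUE;
	}

	int main(void)
	{
		struct demo_page page = { ._mapcount = -1 };

		/* Simulate a single stray mapcount decrement: one
		 * underflow takes -1 straight to -2. */
		page._mapcount--;

		/* With the old marker the corrupted page now looks like a
		 * buddy page; with -128 a single off-by-one cannot be
		 * mistaken for the buddy marker. */
		printf("old scheme sees buddy: %d\n", old_PageBuddy(&page));
		printf("new scheme sees buddy: %d\n", new_PageBuddy(&page));

		/* bad_page() now clears the marker by resetting _mapcount
		 * to -1 unconditionally (reset_page_mapcount), so the
		 * cleanup no longer depends on what PageBuddy() returned
		 * for the corrupted page. */
		page._mapcount = -1;
		return 0;
	}

The larger gap between -1 and -128 is what buys the robustness: a small
accounting error near -1 can no longer land exactly on the buddy marker,
while -128 still fits in a signed byte immediate, which is why the in-tree
comment notes it can be generated efficiently on most CPU architectures.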
Patches currently in stable-queue which might be from [email protected] are

queue-2.6.38/mm-compaction-prevent-kswapd-compacting-memory-to-reduce-cpu-usage.patch
queue-2.6.38/mm-pagebuddy-and-mapcount-robustness.patch

_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable