Gitweb:     http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=81eabcbe0b991ddef5216f30ae91c4b226d54b6d
Commit:     81eabcbe0b991ddef5216f30ae91c4b226d54b6d
Parent:     8d936626dd00bd47cf574add458fea8a23b79611
Author:     Mel Gorman <[EMAIL PROTECTED]>
AuthorDate: Mon Dec 17 16:20:05 2007 -0800
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Mon Dec 17 19:28:16 2007 -0800

    mm: fix page allocation for larger I/O segments
    
    In some cases the IO subsystem is able to merge requests if the pages are
    adjacent in physical memory.  This was achieved in the allocator by having
    expand() return pages in physically contiguous order in situations where a
    large buddy was split.  However, list-based anti-fragmentation changed the
    order in which pages were returned, to avoid searching in
    buffered_rmqueue() for a page of the appropriate migrate type.
    
    This patch restores the behaviour of rmqueue_bulk(), preserving the
    physical order of pages returned by the allocator without incurring
    increased search costs for anti-fragmentation.
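    
    As a concrete illustration (the numbers here are hypothetical): if an
    order-2 buddy starting at pfn 840 is split for order-0 allocations,
    expand() hands back pfns 840, 841, 842 and 843 in that order.  With a
    fixed list head, successive list_add() calls leave them on the caller's
    list as 843, 842, 841, 840; advancing the head after each insertion
    keeps them in ascending physical order, as the sketch after the diff
    shows.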
    
    Signed-off-by: Mel Gorman <[EMAIL PROTECTED]>
    Cc: James Bottomley <[EMAIL PROTECTED]>
    Cc: Jens Axboe <[EMAIL PROTECTED]>
    Cc: Mark Lord <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
---
 mm/page_alloc.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b5a58d4..d73bfad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -847,8 +847,19 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                struct page *page = __rmqueue(zone, order, migratetype);
                if (unlikely(page == NULL))
                        break;
+
+               /*
+                * Split buddy pages returned by expand() are received here
+                * in physical page order. The page is added to the caller's
+                * list and the list head then moves forward. From the caller's
+                * perspective, the linked list is ordered by page number under
+                * some conditions. This is useful for IO devices that can
+                * merge IO requests if the physical pages are ordered
+                * properly.
+                */
                list_add(&page->lru, list);
                set_page_private(page, migratetype);
+               list = &page->lru;
        }
        spin_unlock(&zone->lock);
        return i;
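
The effect of the one-line cursor advance is easiest to see outside the
kernel.  Below is a minimal userspace sketch of the same trick; list_add()
mirrors the insert-after-head semantics of include/linux/list.h, while
fake_page, its pfn values, and the list walk are made up for the
illustration and are not taken from the patch.

#include <stddef.h>
#include <stdio.h>

/* Minimal circular doubly linked list, same shape as include/linux/list.h. */
struct list_head { struct list_head *next, *prev; };

/* Insert 'new' immediately after 'head', as the kernel's list_add() does. */
static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Stand-in for struct page: just a physical frame number and a list node. */
struct fake_page { unsigned long pfn; struct list_head lru; };

int main(void)
{
	/* Four pages as expand() would hand them back: ascending pfn. */
	struct fake_page pages[4] = {
		{ .pfn = 840 }, { .pfn = 841 }, { .pfn = 842 }, { .pfn = 843 },
	};
	struct list_head out = { &out, &out };	/* the caller's empty list */
	struct list_head *list = &out;		/* moving insertion cursor */
	int i;

	for (i = 0; i < 4; i++) {
		list_add(&pages[i].lru, list);
		list = &pages[i].lru;	/* the fix: advance the cursor */
	}

	/*
	 * Walk the caller's list.  Prints "840 841 842 843"; without the
	 * cursor advance every insertion would land at the fixed head and
	 * the walk would print "843 842 841 840" instead.
	 */
	for (struct list_head *p = out.next; p != &out; p = p->next) {
		struct fake_page *fp = (struct fake_page *)
			((char *)p - offsetof(struct fake_page, lru));
		printf("%lu ", fp->pfn);
	}
	printf("\n");
	return 0;
}

The key point is that list_add() at a moving cursor degenerates into an
in-order append, so rmqueue_bulk() keeps the physical ordering produced by
expand() without any extra searching.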