Page migration will fail for memory that is pinned, for example with
get_user_pages().  In this case, it is unnecessary to take
zone->lru_lock or to isolate the page and pass it to page migration,
which will ultimately fail.

This is a racy check; the page can still change from under us, but in
that case we'll just fail later when attempting to move the page.
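
For illustration only (not part of the patch): a minimal userspace
model of the check added below, assuming the usual invariant that each
page table mapping contributes one reference and one mapcount, while a
get_user_pages() pin raises only the reference count, so an anonymous
page with count > mapcount is likely pinned.

/*
 * Toy userspace model of the heuristic, not kernel code: a page mapped
 * by N page tables has count == mapcount == N; a get_user_pages() pin
 * adds a reference without a mapping, so count > mapcount suggests a
 * pin and migration would fail anyway.
 */
#include <stdbool.h>
#include <stdio.h>

struct mock_page {
	int count;	/* all references: mappings, pins, ... */
	int mapcount;	/* page table mappings only */
};

static bool looks_pinned(const struct mock_page *page)
{
	return page->count > page->mapcount;
}

int main(void)
{
	struct mock_page mapped = { .count = 1, .mapcount = 1 };
	struct mock_page pinned = { .count = 2, .mapcount = 1 };

	printf("mapped: %s\n", looks_pinned(&mapped) ? "skip" : "isolate");
	printf("pinned: %s\n", looks_pinned(&pinned) ? "skip" : "isolate");
	return 0;
}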

This avoids very expensive memory compaction when faulting transparent
hugepages after pinning a lot of memory with a Mellanox driver.

On a 128GB machine with ~120GB of memory pinned, before this patch we
see an enormous disparity between pages successfully migrated and page
migration failures caused by the pinning (from /proc/vmstat):

        compact_pages_moved 8450
        compact_pagemigrate_failed 15614415

Only 0.05% of isolated pages are successfully migrated, and explicitly
triggering memory compaction takes 102 seconds.  After the patch:

        compact_pages_moved 9197
        compact_pagemigrate_failed 7

99.9% of isolated pages are now successfully migrated in this
configuration, and memory compaction takes less than one second.
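
For reference, a sketch (not from this patch) of one way to collect
such numbers: read the two counters quoted above from /proc/vmstat
before and after writing to /proc/sys/vm/compact_memory, which triggers
compaction explicitly (requires root and CONFIG_COMPACTION; counter
names may differ on other kernel versions).

#include <stdio.h>
#include <string.h>

static long read_vmstat(const char *name)
{
	char key[64];
	long val;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", key, &val) == 2) {
		if (!strcmp(key, name)) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return -1;
}

int main(void)
{
	long moved = read_vmstat("compact_pages_moved");
	long failed = read_vmstat("compact_pagemigrate_failed");
	FILE *f = fopen("/proc/sys/vm/compact_memory", "w");

	if (!f) {
		perror("/proc/sys/vm/compact_memory");
		return 1;
	}
	fputs("1\n", f);
	fclose(f);

	printf("compact_pages_moved delta:        %ld\n",
	       read_vmstat("compact_pages_moved") - moved);
	printf("compact_pagemigrate_failed delta: %ld\n",
	       read_vmstat("compact_pagemigrate_failed") - failed);
	return 0;
}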

Signed-off-by: David Rientjes <rient...@google.com>
---
 v2: address page count issue per Joonsoo

 mm/compaction.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -578,6 +578,15 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
                        continue;
                }
 
+               /*
+                * Migration will fail if an anonymous page is pinned in memory,
+                * so avoid taking lru_lock and isolating it unnecessarily in an
+                * admittedly racy check.
+                */
+               if (!page_mapping(page) &&
+                   page_count(page) > page_mapcount(page))
+                       continue;
+
                /* Check if it is ok to still hold the lock */
                locked = compact_checklock_irqsave(&zone->lru_lock, &flags,
                                                                locked, cc);
--