Re: [PATCH v3 4/8] mm, page_alloc: count movable pages when stealing from pageblock

2017-03-29 Thread Vlastimil Babka
On 03/16/2017 02:53 AM, Joonsoo Kim wrote:
> On Tue, Mar 07, 2017 at 02:15:41PM +0100, Vlastimil Babka wrote:
>> When stealing pages from pageblock of a different migratetype, we count how
>> many free pages were stolen, and change the pageblock's migratetype if more
>> than half of the pageblock was free. This might be too conservative, as there
>> might be other pages that are not free, but were allocated with the same
>> migratetype as our allocation requested.
> 
> I think that being too conservative is good for the movable case. In my
> experiments, fragmentation spreads out when an unmovable/reclaimable
> pageblock is changed to a movable pageblock prematurely ('prematurely'
> means that allocated unmovable pages remain). As you said below, movable
> allocations falling back to other pageblocks don't cause permanent
> fragmentation. Therefore, we don't need to be less conservative for
> movable allocation. So, how about the following change to keep the
> criterion for movable allocation conservative even with this counting
> improvement?
> 
> 	threshold = (1 << (pageblock_order - 1));
> 	if (start_type == MIGRATE_MOVABLE)
> 		threshold += (1 << (pageblock_order - 2));
> 
> 	if (free_pages + alike_pages >= threshold)
> 		...

That could help, or it might not. Keeping more pageblocks marked as unmovable
also means that unmovable allocations will spread out across all of them, even
if they would fit within fewer pageblocks. MIGRATE_MIXED was an idea to help in
this case, as truly unmovable pageblocks would be preferred to the mixed ones.

Can't decide about such a change without testing :/

> Thanks.
> 



Re: [PATCH v3 4/8] mm, page_alloc: count movable pages when stealing from pageblock

2017-03-15 Thread Joonsoo Kim
On Tue, Mar 07, 2017 at 02:15:41PM +0100, Vlastimil Babka wrote:
> When stealing pages from pageblock of a different migratetype, we count how
> many free pages were stolen, and change the pageblock's migratetype if more
> than half of the pageblock was free. This might be too conservative, as there
> might be other pages that are not free, but were allocated with the same
> migratetype as our allocation requested.

I think that being too conservative is good for the movable case. In my
experiments, fragmentation spreads out when an unmovable/reclaimable
pageblock is changed to a movable pageblock prematurely ('prematurely'
means that allocated unmovable pages remain). As you said below, movable
allocations falling back to other pageblocks don't cause permanent
fragmentation. Therefore, we don't need to be less conservative for
movable allocation. So, how about the following change to keep the
criterion for movable allocation conservative even with this counting
improvement?

	threshold = (1 << (pageblock_order - 1));
	if (start_type == MIGRATE_MOVABLE)
		threshold += (1 << (pageblock_order - 2));

	if (free_pages + alike_pages >= threshold)
		...

Thanks.



[PATCH v3 4/8] mm, page_alloc: count movable pages when stealing from pageblock

2017-03-07 Thread Vlastimil Babka
When stealing pages from a pageblock of a different migratetype, we count
how many free pages were stolen, and change the pageblock's migratetype if
more than half of the pageblock was free. This might be too conservative, as
there might be other pages that are not free but were allocated with the
same migratetype as our allocation requested.

While we cannot determine the migratetype of allocated pages precisely (at
least without the page_owner functionality enabled), we can count pages that
compaction would try to isolate for migration - those are either on LRU or
__PageMovable(). The rest can be assumed to be MIGRATE_RECLAIMABLE or
MIGRATE_UNMOVABLE, which we cannot easily distinguish. This counting can be
done as part of free page stealing with little additional overhead.

The page stealing code is changed so that it considers free pages plus pages
of the "good" migratetype for the decision whether to change pageblock's
migratetype.

The result should be a more accurate migratetype of pageblocks with respect
to the actual pages they contain, when stealing from semi-occupied
pageblocks. This should help the efficiency of page grouping by mobility.

In testing based on 4.9 kernel with stress-highalloc from mmtests configured
for order-4 GFP_KERNEL allocations, this patch has reduced the number of
unmovable allocations falling back to movable pageblocks by 47%. The number
of movable allocations falling back to other pageblocks are increased by 55%,
but these events don't cause permanent fragmentation, so the tradeoff should
be positive. Later patches also offset the movable fallback increase to some
extent.

Signed-off-by: Vlastimil Babka 
Acked-by: Mel Gorman 
---
 include/linux/page-isolation.h |  5 +--
 mm/page_alloc.c                | 71 +-
 mm/page_isolation.c            |  5 +--
 3 files changed, 61 insertions(+), 20 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 047d64706f2a..d4cd2014fa6f 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -33,10 +33,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 bool skip_hwpoisoned_pages);
 void set_pageblock_migratetype(struct page *page, int migratetype);
 int move_freepages_block(struct zone *zone, struct page *page,
-   int migratetype);
-int move_freepages(struct zone *zone,
- struct page *start_page, struct page *end_page,
- int migratetype);
+   int migratetype, int *num_movable);
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eda7fedf6378..db96d1ebbed8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1836,9 +1836,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-int move_freepages(struct zone *zone,
+static int move_freepages(struct zone *zone,
  struct page *start_page, struct page *end_page,
- int migratetype)
+ int migratetype, int *num_movable)
 {
struct page *page;
unsigned int order;
@@ -1855,6 +1855,9 @@ int move_freepages(struct zone *zone,
VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
 #endif
 
+   if (num_movable)
+   *num_movable = 0;
+
for (page = start_page; page <= end_page;) {
if (!pfn_valid_within(page_to_pfn(page))) {
page++;
@@ -1865,6 +1868,15 @@ int move_freepages(struct zone *zone,
VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
 
if (!PageBuddy(page)) {
+   /*
+* We assume that pages that could be isolated for
+* migration are movable. But we don't actually try
+* isolating, as that would be expensive.
+*/
+   if (num_movable &&
+   (PageLRU(page) || __PageMovable(page)))
+   (*num_movable)++;
+
page++;
continue;
}
@@ -1880,7 +1892,7 @@ int move_freepages(struct zone *zone,
 }
 
 int move_freepages_block(struct zone *zone, struct page *page,
-   int migratetype)
+   int migratetype, int *num_movable)
 {
unsigned long start_pfn, end_pfn;
struct page *start_page, *end_page;
@@ -1897,7 +1909,8 @@ int move_freepages_block(struct zone *zone, struct page *page,
if (!zone_spans_pfn(zone, end_pfn))
return 0;

-   return move_freepages(zone,