Re: [PATCH v2 5/9] mm/compaction: allow scanning of non-movable pageblocks in depleted state

2015-09-30 Thread Joonsoo Kim
On Fri, Sep 25, 2015 at 12:32:02PM +0200, Vlastimil Babka wrote:
> On 08/24/2015 04:19 AM, Joonsoo Kim wrote:
> 
> [...]
> 
> > 
> > Because we only allow the freepage scanner to scan non-movable pageblocks
> > in a very limited situation, more scanning events happen. But allowing it
> > only in this limited situation brings an important benefit: memory isn't
> > fragmented more than before. The fragmentation effect is measured in a
> > following patch, so please refer to it.
> 
> AFAICS it's measured only for the whole series in the cover letter, no? Just
> to be sure I didn't overlook something.

It takes too much time, so measurement isn't done for every patch.
I will try to measure at least this patch in the next revision.

> 
> > Signed-off-by: Joonsoo Kim 
> > ---
> >  include/linux/mmzone.h |  1 +
> >  mm/compaction.c| 27 +--
> >  2 files changed, 26 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index e13b732..5cae0ad 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -545,6 +545,7 @@ enum zone_flags {
> >  */
> > ZONE_FAIR_DEPLETED, /* fair zone policy batch depleted */
> > ZONE_COMPACTION_DEPLETED,   /* compaction possibility depleted */
> > +   ZONE_COMPACTION_SCANALLFREE,/* scan all kinds of pageblocks */
> 
> "SCANALLFREE" is hard to read. Otherwise yeah, I agree scanning unmovable
> pageblocks is necessary sometimes, and this seems to make a reasonable 
> tradeoff.

Good! I will think of a better name.

Thanks.



Re: [PATCH v2 5/9] mm/compaction: allow scanning of non-movable pageblocks in depleted state

2015-09-25 Thread Vlastimil Babka
On 08/24/2015 04:19 AM, Joonsoo Kim wrote:

[...]

> 
> Because we only allow the freepage scanner to scan non-movable pageblocks
> in a very limited situation, more scanning events happen. But allowing it
> only in this limited situation brings an important benefit: memory isn't
> fragmented more than before. The fragmentation effect is measured in a
> following patch, so please refer to it.

AFAICS it's measured only for the whole series in the cover letter, no? Just to
be sure I didn't overlook something.

> Signed-off-by: Joonsoo Kim 
> ---
>  include/linux/mmzone.h |  1 +
>  mm/compaction.c| 27 +--
>  2 files changed, 26 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index e13b732..5cae0ad 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -545,6 +545,7 @@ enum zone_flags {
>*/
>   ZONE_FAIR_DEPLETED, /* fair zone policy batch depleted */
>   ZONE_COMPACTION_DEPLETED,   /* compaction possibility depleted */
> + ZONE_COMPACTION_SCANALLFREE,/* scan all kinds of pageblocks */

"SCANALLFREE" is hard to read. Otherwise yeah, I agree scanning unmovable
pageblocks is necessary sometimes, and this seems to make a reasonable tradeoff.



[PATCH v2 5/9] mm/compaction: allow scanning of non-movable pageblocks in depleted state

2015-08-23 Thread Joonsoo Kim
Currently, the freepage scanner doesn't scan non-movable pageblocks, because
if the freepages in non-movable pageblocks are exhausted, another movable
pageblock would be used for non-movable allocations and that could cause
fragmentation.

But we should note that the watermark check for compaction doesn't
distinguish where a freepage is. If all freepages are in non-movable
pageblocks then, even though the system has enough freepages and the
watermark check passes, the freepage scanner can't get any freepage and
compaction fails. There is no way to get a precise count of freepages in
movable pageblocks and no way to reclaim only the used pages in movable
pageblocks. Therefore, I think the best way to overcome this situation is
to let compaction use freepages in non-movable pageblocks.
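
To make the intended behaviour concrete, here is a minimal, stand-alone C
sketch of the decision this patch introduces. It is an illustration only,
not kernel code: struct zone_model and suitable_free_target() are names
invented for the sketch, and the real check (shown in the diff below) is
keyed on the ZONE_COMPACTION_SCANALLFREE zone flag inside
suitable_migration_target() and additionally requires a non-async
compaction mode.

#include <stdbool.h>
#include <stdio.h>

enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_CMA };

/* Simplified stand-in for struct zone: only the state the sketch needs. */
struct zone_model {
	bool compaction_depleted;	/* analogue of ZONE_COMPACTION_DEPLETED */
	unsigned long compact_success;	/* successful compactions since last reset */
};

/* Would the freepage scanner accept a pageblock of this migratetype? */
static bool suitable_free_target(const struct zone_model *z, enum migratetype mt)
{
	/* Normal rule: only movable (or CMA) pageblocks are scanned. */
	if (mt == MIGRATE_MOVABLE || mt == MIGRATE_CMA)
		return true;

	/* Last resort: depleted with no recent success -> scan every pageblock. */
	return z->compaction_depleted && z->compact_success == 0;
}

int main(void)
{
	struct zone_model healthy = { .compaction_depleted = false, .compact_success = 5 };
	struct zone_model stuck = { .compaction_depleted = true, .compact_success = 0 };

	printf("healthy zone, unmovable pageblock: %d\n",
	       (int)suitable_free_target(&healthy, MIGRATE_UNMOVABLE));	/* prints 0 */
	printf("stuck zone, unmovable pageblock:   %d\n",
	       (int)suitable_free_target(&stuck, MIGRATE_UNMOVABLE));	/* prints 1 */
	return 0;
}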

My test setup for this situation is:

Memory is artificially fragmented to make order-3 allocation hard, and
most pageblocks are changed to the unmovable migratetype.

  System: 512 MB with 32 MB zram
  Memory: 25% of memory is allocated to create fragmentation, and a kernel
build is running in the background.
  Fragmentation: successful order-3 allocation candidates number roughly
1500.
  Allocation attempts: roughly 3000 order-3 allocation attempts
with GFP_NORETRY. This value is chosen to saturate allocation
success.

Below is the result of this test.

Test: build-frag-unmovable

Kernel: Base vs Nonmovable

                             Base    Nonmovable
Success(N)                     37            64
compact_stall                 624          5056
compact_success               103           419
compact_fail                  521          4637
pgmigrate_success           22004        277106
compact_isolated            61021       1056863
compact_migrate_scanned   2609360      70252458
compact_free_scanned      4808989      23091292

The 'Success(N)' values are calculated by the following equation:

Success(N) = successful allocations * 100 / order-3 candidates

The result shows that the success rate roughly doubles in this case
because we can search a larger area.
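
For a rough sense of the absolute numbers implied by the equation (an
estimate, assuming the ~1500 order-3 candidates stated in the setup above):

  Base:       37 * 1500 / 100 = ~555 successful order-3 allocations
  Nonmovable: 64 * 1500 / 100 = ~960 successful order-3 allocations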

Because we only allow the freepage scanner to scan non-movable pageblocks
in a very limited situation, more scanning events happen. But allowing it
only in this limited situation brings an important benefit: memory isn't
fragmented more than before. The fragmentation effect is measured in a
following patch, so please refer to it.

Signed-off-by: Joonsoo Kim 
---
 include/linux/mmzone.h |  1 +
 mm/compaction.c        | 27 +++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e13b732..5cae0ad 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -545,6 +545,7 @@ enum zone_flags {
 */
ZONE_FAIR_DEPLETED, /* fair zone policy batch depleted */
ZONE_COMPACTION_DEPLETED,   /* compaction possibility depleted */
+   ZONE_COMPACTION_SCANALLFREE,/* scan all kinds of pageblocks */
 };
 
 static inline unsigned long zone_end_pfn(const struct zone *zone)
diff --git a/mm/compaction.c b/mm/compaction.c
index 1817564..b58f162 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -243,9 +243,17 @@ static void __reset_isolation_suitable(struct zone *zone)
zone->compact_cached_free_pfn = end_pfn;
zone->compact_blockskip_flush = false;
 
+   clear_bit(ZONE_COMPACTION_SCANALLFREE, &zone->flags);
if (compaction_depleted(zone)) {
if (test_bit(ZONE_COMPACTION_DEPLETED, &zone->flags))
zone->compact_depletion_depth++;
+
+   /* Last resort to make high-order page */
+   if (!zone->compact_success) {
+   set_bit(ZONE_COMPACTION_SCANALLFREE,
+   &zone->flags);
+   }
+
else {
set_bit(ZONE_COMPACTION_DEPLETED, &zone->flags);
zone->compact_depletion_depth = 0;
@@ -914,7 +922,8 @@ isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 #ifdef CONFIG_COMPACTION
 
 /* Returns true if the page is within a block suitable for migration to */
-static bool suitable_migration_target(struct page *page)
+static bool suitable_migration_target(struct compact_control *cc,
+   struct page *page)
 {
/* If the page is a large free page, then disallow migration */
if (PageBuddy(page)) {
@@ -931,6 +940,16 @@ static bool suitable_migration_target(struct page *page)
if (migrate_async_suitable(get_pageblock_migratetype(page)))
return true;
 
+   /*
+* Allow to scan all kinds of pageblock. Without this relaxation,
+* all freepage could be in non-movable pageblock and compaction
+* can be saturated and cannot make high-order page even if there
+* is enough freepage in the system.
+*/
+   if (cc->mode != MIGRATE_ASYNC &&
+   
