On Fri, Jun 20, 2014 at 05:49:37PM +0200, Vlastimil Babka wrote:
> Compaction scanners try to take the zone locks as late as possible by checking
> many page or pageblock properties opportunistically without lock and skipping
> them if unsuitable. For pages that pass the initial checks, some properties
> have to be checked again safely under lock. However, if the lock was already
> held from a previous iteration in the initial checks, the rechecks are
> unnecessary.
> 
> This patch therefore skips the rechecks when the lock was already held. This is
> now possible to do, since we don't (potentially) drop and reacquire the lock
> between the initial checks and the safe rechecks anymore.
> 
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Acked-by: Minchan Kim <minc...@kernel.org>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Michal Nazarewicz <min...@mina86.com>
> Cc: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> Cc: Christoph Lameter <c...@linux.com>
> Cc: Rik van Riel <r...@redhat.com>
> Acked-by: David Rientjes <rient...@google.com>

Reviewed-by: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>

> ---
>  mm/compaction.c | 53 +++++++++++++++++++++++++++++++----------------------
>  1 file changed, 31 insertions(+), 22 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 40da812..9f6e857 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -324,22 +324,30 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>                       goto isolate_fail;
>  
>               /*
> -              * The zone lock must be held to isolate freepages.
> -              * Unfortunately this is a very coarse lock and can be
> -              * heavily contended if there are parallel allocations
> -              * or parallel compactions. For async compaction do not
> -              * spin on the lock and we acquire the lock as late as
> -              * possible.
> +              * If we already hold the lock, we can skip some rechecking.
> +              * Note that if we hold the lock now, checked_pageblock was
> +              * already set in some previous iteration (or strict is true),
> +              * so it is correct to skip the suitable migration target
> +              * recheck as well.
>                */
> -             if (!locked)
> +             if (!locked) {
> +                     /*
> +                      * The zone lock must be held to isolate freepages.
> +                      * Unfortunately this is a very coarse lock and can be
> +                      * heavily contended if there are parallel allocations
> +                      * or parallel compactions. For async compaction do not
> +                      * spin on the lock and we acquire the lock as late as
> +                      * possible.
> +                      */
>                       locked = compact_trylock_irqsave(&cc->zone->lock,
>                                                               &flags, cc);
> -             if (!locked)
> -                     break;
> +                     if (!locked)
> +                             break;
>  
> -             /* Recheck this is a buddy page under lock */
> -             if (!PageBuddy(page))
> -                     goto isolate_fail;
> +                     /* Recheck this is a buddy page under lock */
> +                     if (!PageBuddy(page))
> +                             goto isolate_fail;
> +             }
>  
>               /* Found a free page, break it into order-0 pages */
>               isolated = split_free_page(page);
> @@ -623,19 +631,20 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>                   page_count(page) > page_mapcount(page))
>                       continue;
>  
> -             /* If the lock is not held, try to take it */
> -             if (!locked)
> +             /* If we already hold the lock, we can skip some rechecking */
> +             if (!locked) {
>                       locked = compact_trylock_irqsave(&zone->lru_lock,
>                                                               &flags, cc);
> -             if (!locked)
> -                     break;
> +                     if (!locked)
> +                             break;
>  
> -             /* Recheck PageLRU and PageTransHuge under lock */
> -             if (!PageLRU(page))
> -                     continue;
> -             if (PageTransHuge(page)) {
> -                     low_pfn += (1 << compound_order(page)) - 1;
> -                     continue;
> +                     /* Recheck PageLRU and PageTransHuge under lock */
> +                     if (!PageLRU(page))
> +                             continue;
> +                     if (PageTransHuge(page)) {
> +                             low_pfn += (1 << compound_order(page)) - 1;
> +                             continue;
> +                     }
>               }
>  
>               lruvec = mem_cgroup_page_lruvec(page, zone);
> -- 
> 1.8.4.5
> 