On Wed, Sep 21, 2016 at 11:20:11AM +0200, Vlastimil Babka wrote:
> On 08/29/2016 07:07 AM, js1...@gmail.com wrote:
> >From: Joonsoo Kim <iamjoonsoo....@lge.com>
> >
> >Until now, reserved pages for CMA have been managed in the ordinary
> >zones that their pfns belong to. This approach has numerous problems
> >and fixing them isn't easy. (They are mentioned in the previous patch.)
> >To fix this situation, ZONE_CMA was introduced in the previous patch,
> >but it is not yet populated. This patch implements population of
> >ZONE_CMA by stealing reserved pages from the ordinary zones.
> >
> >Unlike the previous implementation, where a kernel allocation request
> >with __GFP_MOVABLE could be serviced from the CMA region, in the new
> >approach only allocation requests with GFP_HIGHUSER_MOVABLE can be
> >serviced from the CMA region. This is an inevitable design decision of
> >the zone implementation because ZONE_CMA could contain highmem. Due to
> >this decision, ZONE_CMA will work like ZONE_HIGHMEM or ZONE_MOVABLE.
> >
> >I don't think this would be a problem, because most file cache pages
> >and anonymous pages are requested with GFP_HIGHUSER_MOVABLE. This is
> >backed by the fact that there are many systems with ZONE_HIGHMEM and
> >they work fine. A notable disadvantage is that we cannot use these
> >pages for blockdev file cache pages, because such allocations usually
> >have __GFP_MOVABLE but not __GFP_HIGHMEM and __GFP_USER. But this
> >case has both pros and cons. In my experience, blockdev file cache
> >pages are one of the top reasons that cma_alloc() fails temporarily.
> >So, by giving up that case, we get a better guarantee that
> >cma_alloc() will succeed.
> >
> >The implementation itself is easy to understand: steal the pages when
> >the CMA area is initialized and recalculate the various per-zone
> >stats/thresholds.
> >
> >Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
> 
> ...
> 
> >@@ -145,6 +145,28 @@ err:
> > static int __init cma_init_reserved_areas(void)
> > {
> >     int i;
> >+    struct zone *zone;
> >+    unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> >+
> >+    if (!cma_area_count)
> >+            return 0;
> >+
> >+    for (i = 0; i < cma_area_count; i++) {
> >+            if (start_pfn > cma_areas[i].base_pfn)
> >+                    start_pfn = cma_areas[i].base_pfn;
> >+            if (end_pfn < cma_areas[i].base_pfn + cma_areas[i].count)
> >+                    end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
> >+    }
> >+
> >+    for_each_zone(zone) {
> >+            if (!is_zone_cma(zone))
> >+                    continue;
> >+
> >+            /* ZONE_CMA doesn't need to exceed CMA region */
> >+            zone->zone_start_pfn = max(zone->zone_start_pfn, start_pfn);
> >+            zone->spanned_pages = min(zone_end_pfn(zone), end_pfn) -
> >+                                    zone->zone_start_pfn;
> >+    }
> 
> Hmm, so what happens on a system with multiple nodes? Each will have
> its own ZONE_CMA, and all will have the same start pfn and spanned
> pages?

Both zone_start_pfn and spanned_pages are initialized in
calculate_node_totalpages(), which takes the node boundaries into
account. So the nodes will not all have the same start pfn and spanned
pages. However, each node's ZONE_CMA would still contain unnecessary
holes.
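
To illustrate with a rough, hypothetical helper (not the actual
calculate_node_totalpages() code): each node effectively ends up with
the global CMA range clipped to its own boundaries, so the spans do
differ per node, but any non-CMA memory inside that clipped range is
still counted as spanned, which is where the holes come from:

static unsigned long __init
approx_zone_cma_span(unsigned long node_start_pfn,
		     unsigned long node_end_pfn,
		     unsigned long cma_start_pfn,
		     unsigned long cma_end_pfn,
		     unsigned long *zone_start_pfn)
{
	/* Clip the global CMA range to this node's boundaries */
	unsigned long start = max(node_start_pfn, cma_start_pfn);
	unsigned long end = min(node_end_pfn, cma_end_pfn);

	if (start >= end) {
		/* This node has no part of the CMA range */
		*zone_start_pfn = node_start_pfn;
		return 0;
	}

	/*
	 * Everything in [start, end) counts as spanned, including pfns
	 * belonging to non-CMA memory: those are the unnecessary holes.
	 */
	*zone_start_pfn = start;
	return end - start;
}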

> 
> > /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> > void __init init_cma_reserved_pageblock(struct page *page)
> > {
> >     unsigned i = pageblock_nr_pages;
> >+    unsigned long pfn = page_to_pfn(page);
> >     struct page *p = page;
> >+    int nid = page_to_nid(page);
> >+
> >+    /*
> >+     * ZONE_CMA will steal present pages from other zones by changing
> >+     * page links so page_zone() is changed. Before that,
> >+     * we need to adjust previous zone's page count first.
> >+     */
> >+    adjust_present_page_count(page, -pageblock_nr_pages);
> >
> >     do {
> >             __ClearPageReserved(p);
> >             set_page_count(p, 0);
> >-    } while (++p, --i);
> >+
> >+            /* Steal pages from other zones */
> >+            set_page_links(p, ZONE_CMA, nid, pfn);
> >+    } while (++p, ++pfn, --i);
> >+
> >+    adjust_present_page_count(page, pageblock_nr_pages);
> 
> This seems to assign pages to ZONE_CMA on the proper node, which is
> good. But then ZONE_CMA on multiple nodes will have unnecessary
> holes in the spanned pages, as each will contain only a subset.

True, I will fix it and respin the series.
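
For reference, a minimal sketch of one possible shape for that fix:
compute per-node bounds from only the CMA areas that live on that
node, instead of a single global pair. This assumes a single CMA area
never crosses a node boundary and uses pfn_to_nid() to attribute it to
a node; it is just a sketch of the idea, not the actual respin:

	/* Hypothetical rework of the for_each_zone() loop above */
	for_each_zone(zone) {
		unsigned long start_pfn = ULONG_MAX, end_pfn = 0;
		unsigned long zone_end = zone_end_pfn(zone);
		int nid = zone_to_nid(zone);

		if (!is_zone_cma(zone))
			continue;

		/* Only consider CMA areas on this zone's node */
		for (i = 0; i < cma_area_count; i++) {
			unsigned long base = cma_areas[i].base_pfn;

			/* Assumes one area never spans two nodes */
			if (pfn_to_nid(base) != nid)
				continue;

			start_pfn = min(start_pfn, base);
			end_pfn = max(end_pfn, base + cma_areas[i].count);
		}

		if (!end_pfn) {
			/* No CMA area on this node: leave the zone empty */
			zone->spanned_pages = 0;
			continue;
		}

		zone->zone_start_pfn = max(zone->zone_start_pfn, start_pfn);
		zone->spanned_pages = min(zone_end, end_pfn) -
					zone->zone_start_pfn;
	}

This still leaves holes between multiple CMA areas on the same node,
but the cross-node holes you pointed out would be gone.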

Thanks.
