On 09/22/2016 08:50 AM, Joonsoo Kim wrote:
> On Thu, Sep 22, 2016 at 02:45:46PM +0900, Joonsoo Kim wrote:
> > > >
> > > > /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> > > > void __init init_cma_reserved_pageblock(struct page *page)
> > > > {
> > > >       unsigned i = pageblock_nr_pages;
> > > >+      unsigned long pfn = page_to_pfn(page);
> > > >       struct page *p = page;
> > > >+      int nid = page_to_nid(page);
> > > >+
> > > >+      /*
> > > >+       * ZONE_CMA will steal present pages from other zones by changing
> > > >+       * page links so page_zone() is changed. Before that,
> > > >+       * we need to adjust previous zone's page count first.
> > > >+       */
> > > >+      adjust_present_page_count(page, -pageblock_nr_pages);
> > > >
> > > >       do {
> > > >               __ClearPageReserved(p);
> > > >               set_page_count(p, 0);
> > > >-      } while (++p, --i);
> > > >+
> > > >+              /* Steal pages from other zones */
> > > >+              set_page_links(p, ZONE_CMA, nid, pfn);
> > > >+      } while (++p, ++pfn, --i);
> > > >+
> > > >+      adjust_present_page_count(page, pageblock_nr_pages);
> > >
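As a reminder, adjust_present_page_count() comes from an earlier patch
in this series. A minimal sketch of what it is assumed to do here (the
two call sites above are from the patch; the body below is my guess):

	/*
	 * Hedged sketch, not the series' actual code: shift the
	 * present-page accounting of the zone that @page currently
	 * belongs to. It is called with a negative count while the
	 * pages still belong to their original zone, and with a
	 * positive count once set_page_links() has moved them to
	 * ZONE_CMA, so the block's pages end up accounted to ZONE_CMA.
	 */
	static void adjust_present_page_count(struct page *page, long count)
	{
		struct zone *zone = page_zone(page);

		zone->present_pages += count;
		zone->zone_pgdat->node_present_pages += count;
	}
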
> > > This seems to assign pages to ZONE_CMA on the proper node, which is
> > > good. But then ZONE_CMA on multiple nodes will have unnecessary
> > > holes in the spanned pages, as each will contain only a subset.
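
To make the concern concrete with made-up numbers: suppose node 0 has a
CMA area at pfns [0x1000, 0x1400) and node 1 has one at [0x80000,
0x80400). With a single global range, ZONE_CMA on each node would span
0x80400 - 0x1000 = 0x7f400 pfns while holding only 0x400 present pages;
everything in between is a hole. With per-node ranges, each node's
ZONE_CMA spans exactly its own 0x400 pages.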

> > True, I will fix it and respin the series.
> >
> > I now realize that it's too late to send the full series for the next
> > merge window. I will send the full series after the next merge window
> > closes.

I think there might still be an rc8, and thus another week.

> Anyway, I'd like to confirm that the following incremental patch will
> solve your concern.

Yeah, that should work, as long as a single CMA area doesn't include
multiple nodes?
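
If we ever wanted to enforce that assumption at init time, a
hypothetical check (not part of the series; struct cma fields as in
mm/cma.h) could compare the nodes of an area's first and last pages:

	/*
	 * Hypothetical sanity check: the per-node span logic below keys
	 * off an area's base pfn only, so flag any CMA area whose first
	 * and last pages sit on different nodes.
	 */
	static bool __init cma_area_on_one_node(const struct cma *cma)
	{
		unsigned long first = cma->base_pfn;
		unsigned long last = cma->base_pfn + cma->count - 1;

		return page_to_nid(pfn_to_page(first)) ==
			page_to_nid(pfn_to_page(last));
	}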

> Thanks.


> ------>8--------------
>  mm/cma.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index d69bdf7..8375554 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -146,22 +146,29 @@ static int __init cma_init_reserved_areas(void)
>  {
>         int i;
>         struct zone *zone;
> -       unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> +       pg_data_t *pgdat;
>
>         if (!cma_area_count)
>                 return 0;
>
> -       for (i = 0; i < cma_area_count; i++) {
> -               if (start_pfn > cma_areas[i].base_pfn)
> -                       start_pfn = cma_areas[i].base_pfn;
> -               if (end_pfn < cma_areas[i].base_pfn + cma_areas[i].count)
> -                       end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
> -       }
> +       for_each_online_pgdat(pgdat) {
> +               unsigned long start_pfn = UINT_MAX, end_pfn = 0;
>
> -       for_each_zone(zone) {
> -               if (!is_zone_cma(zone))
> +               for (i = 0; i < cma_area_count; i++) {
> +                       if (page_to_nid(pfn_to_page(cma_areas[i].base_pfn)) !=

We have pfn_to_nid() (although the implementation is just like this).
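
For reference, the generic CONFIG_SPARSEMEM definition in
include/linux/mmzone.h (under CONFIG_NUMA; the exact guards depend on
the configuration) is just that open-coded form wrapped in a macro:

	#define pfn_to_nid(pfn)						\
	({								\
		unsigned long __pfn_to_nid_pfn = (pfn);			\
		page_to_nid(pfn_to_page(__pfn_to_nid_pfn));		\
	})

so using it here is purely a readability improvement.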

> +                               pgdat->node_id)
> +                               continue;
> +
> +                       start_pfn = min(start_pfn, cma_areas[i].base_pfn);
> +                       end_pfn = max(end_pfn, cma_areas[i].base_pfn +
> +                                               cma_areas[i].count);
> +               }
> +
> +               if (!end_pfn)
>                         continue;
>
> +               zone = &pgdat->node_zones[ZONE_CMA];
> +
>                 /* ZONE_CMA doesn't need to exceed CMA region */
>                 zone->zone_start_pfn = max(zone->zone_start_pfn, start_pfn);
>                 zone->spanned_pages = min(zone_end_pfn(zone), end_pfn) -
>                                         zone->zone_start_pfn;

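A worked example of the clamping with made-up numbers: say ZONE_CMA was
initialized to span the whole node (zone_start_pfn = 0, spanned_pages =
0x100000) and this node's only CMA area sits at [0x8000, 0x8800). The
first assignment moves zone_start_pfn to max(0, 0x8000) = 0x8000; at
that point zone_end_pfn(zone) still uses the old spanned_pages, i.e.
0x8000 + 0x100000 = 0x108000, so spanned_pages becomes min(0x108000,
0x8800) - 0x8000 = 0x800. The zone ends up covering exactly the CMA
region.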
