RE: [Linaro-mm-sig] [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-18 Thread Marek Szyprowski
Hello,

On Tuesday, January 17, 2012 10:54 PM sandeep patil wrote:

 I am running a CMA test where I keep allocating from a CMA region until the
 allocation fails due to lack of space.
 
 However, I am seeing failures much earlier than I expect them to happen.
 When the allocation fails, I see a warning coming from __alloc_contig_range(),
 because test_pages_isolated() returned true.
 
 The new retry code does try a new range and eventually succeeds.

(snipped)

 From the log it looks like the warning showed up because page->private
 is set to MIGRATE_CMA instead of MIGRATE_ISOLATE.

 I've also had a test case where it failed because (page_count() != 0)

This means that the page is temporarily in use by someone else (for example,
the I/O subsystem or a driver).

 Have you or anyone else seen this during the CMA testing?

Yes, we observed such issues and we are also working on fixing them. However,
we gave higher priority to getting the basic CMA patches merged into mainline.
Once that happens, the above issues can be fixed incrementally.

 Also, could this be because we are finding a page within (start, end)
 that actually belongs to a higher-order buddy block?

No, such pages should be correctly handled.

Best regards
-- 
Marek Szyprowski
Samsung Poland R&D Center




Re: [Linaro-mm-sig] [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-17 Thread sandeep patil
Marek,

I am running a CMA test where I keep allocating from a CMA region until the
allocation fails due to lack of space.

However, I am seeing failures much earlier than I expect them to happen.
When the allocation fails, I see a warning coming from __alloc_contig_range(),
because test_pages_isolated() returned true.

The new retry code does try a new range and eventually succeeds.


 +
 +static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
 +{
 +
 +done:
 +       /* Make sure all pages are isolated. */
 +       if (!ret) {
 +               lru_add_drain_all();
 +               drain_all_pages();
 +               if (WARN_ON(test_pages_isolated(start, end)))
 +                       ret = -EBUSY;
 +       }

I tried to find out why this happened and added in a debug print inside
__test_page_isolated_in_pageblock(). Here's the resulting log ..

---
[  133.563140] !!! Found unexpected page(pfn=9aaab), (count=0),
(isBuddy=no), (private=0x0004), (flags=0x), (_mapcount=0)
!!!
[  133.576690] [ cut here ]
[  133.582489] WARNING: at mm/page_alloc.c:5804 alloc_contig_range+0x1a4/0x2c4()
[  133.594757] [c003e814] (unwind_backtrace+0x0/0xf0) from
[c0079c7c] (warn_slowpath_common+0x4c/0x64)
[  133.605468] [c0079c7c] (warn_slowpath_common+0x4c/0x64) from
[c0079cac] (warn_slowpath_null+0x18/0x1c)
[  133.616424] [c0079cac] (warn_slowpath_null+0x18/0x1c) from
[c00e0e84] (alloc_contig_range+0x1a4/0x2c4)
[  133.627471] EXT4-fs (mmcblk0p25): re-mounted. Opts: (null)
[  133.633728] [c00e0e84] (alloc_contig_range+0x1a4/0x2c4) from
[c0266690] (dma_alloc_from_contiguous+0x114/0x1c8)
[  133.697113] !!! Found unexpected page(pfn=9aaac), (count=0),
(isBuddy=no), (private=0x0004), (flags=0x), (_mapcount=0)
!!!
[  133.710510] EXT4-fs (mmcblk0p26): re-mounted. Opts: (null)
[  133.716766] [ cut here ]
[  133.721954] WARNING: at mm/page_alloc.c:5804 alloc_contig_range+0x1a4/0x2c4()
[  133.734100] Emergency Remount complete
[  133.742584] [c003e814] (unwind_backtrace+0x0/0xf0) from
[c0079c7c] (warn_slowpath_common+0x4c/0x64)
[  133.753448] [c0079c7c] (warn_slowpath_common+0x4c/0x64) from
[c0079cac] (warn_slowpath_null+0x18/0x1c)
[  133.764373] [c0079cac] (warn_slowpath_null+0x18/0x1c) from
[c00e0e84] (alloc_contig_range+0x1a4/0x2c4)
[  133.775299] [c00e0e84] (alloc_contig_range+0x1a4/0x2c4) from
[c0266690] (dma_alloc_from_contiguous+0x114/0x1c8)
---

From the log it looks like the warning showed up because page->private
is set to MIGRATE_CMA instead of MIGRATE_ISOLATE.
I've also had a test case where it failed because (page_count() != 0)

Have you or anyone else seen this during the CMA testing?

Also, could this be because we are finding a page within (start, end)
that actually belongs to a higher-order buddy block?


Thanks,
Sandeep


Re: [Linaro-mm-sig] [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-17 Thread Michal Nazarewicz

On Tue, 17 Jan 2012 22:54:28 +0100, sandeep patil psandee...@gmail.com wrote:


Marek,

I am running a CMA test where I keep allocating from a CMA region until the
allocation fails due to lack of space.

However, I am seeing failures much earlier than I expect them to happen.
When the allocation fails, I see a warning coming from __alloc_contig_range(),
because test_pages_isolated() returned true.


Yeah, we are wondering ourselves about that.  Could you try cherry-picking
commit ad10eb079c97e27b4d27bc755c605226ce1625de (update migrate type on pcp
when isolating) from git://github.com/mina86/linux-2.6.git?  It probably won't
apply cleanly but resolving the conflicts should not be hard (alternatively
you can try branch cma from the same repo but it is a work in progress at the
moment).


I tried to find out why this happened and added in a debug print inside
__test_page_isolated_in_pageblock(). Here's the resulting log ..


[...]


From the log it looks like the warning showed up because page->private
is set to MIGRATE_CMA instead of MIGRATE_ISOLATE.


My understanding of that situation is that the page is on a pcp list, in which
case its page_private is not updated.  Draining and the first patch in the
series (and also the commit I've pointed to above) are designed to fix that,
but I'm unsure why they don't work all the time.
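For reference, the rough idea behind that commit is: when a pageblock is switched
to MIGRATE_ISOLATE, also walk the per-cpu free lists and refresh the migratetype
recorded in page_private for free pages inside the block, so a later
test_pages_isolated() does not see a stale MIGRATE_CMA value.  A simplified sketch
of that idea only (the helper name and the locking context are assumptions here,
not the actual commit):

	static void update_pcp_migratetype(struct per_cpu_pages *pcp,
					   unsigned long start_pfn,
					   unsigned long end_pfn,
					   int migratetype)
	{
		struct page *page;
		int t;

		/* Sketch only: assumes the pcp lists are stable, i.e. that
		 * the caller runs on the owning CPU with IRQs disabled. */
		for (t = 0; t < MIGRATE_PCPTYPES; ++t) {
			list_for_each_entry(page, &pcp->lists[t], lru) {
				unsigned long pfn = page_to_pfn(page);

				/* Free pcp pages carry their migratetype in
				 * page_private; refresh it for pages that now
				 * belong to an isolated block. */
				if (pfn >= start_pfn && pfn < end_pfn)
					set_page_private(page, migratetype);
			}
		}
	}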


I've also had a test case where it failed because (page_count() != 0)





Have you or anyone else seen this during the CMA testing?

Also, could this be because we are finding a page within (start, end)
that actually belongs to a higher-order buddy block?


Higher order free buddy blocks are skipped in the “if (PageBuddy(page))”
path of __test_page_isolated_in_pageblock().  Then again, now that I think
of it, something fishy may be happening on the edges.  Moving the check
outside of __alloc_contig_migrate_range() after outer_start is calculated
in alloc_contig_range() could help.  I'll take a look at it.
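For anyone without the source at hand, the check being referred to looks roughly
like this (paraphrased from __test_page_isolated_in_pageblock() of that era, not
quoted verbatim):

	while (pfn < end_pfn) {
		struct page *page = pfn_to_page(pfn);

		if (PageBuddy(page)) {
			/* A free buddy block: skip the whole 2^order range,
			 * which is why higher-order free blocks pass. */
			pfn += 1 << page_order(page);
		} else if (page_count(page) == 0 &&
			   page_private(page) == MIGRATE_ISOLATE) {
			/* A free page on a pcp list of an isolated block. */
			pfn++;
		} else {
			/* Anything else means the range is not isolated. */
			break;
		}
	}
	return pfn == end_pfn;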

--
Best regards,
Michał “mina86” Nazarewicz  (email/xmpp: m...@google.com)


Re: [Linaro-mm-sig] [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-17 Thread sandeep patil
 Yeah, we are wondering ourselves about that.  Could you try cherry-picking
 commit ad10eb079c97e27b4d27bc755c605226ce1625de (update migrate type on pcp
 when isolating) from git://github.com/mina86/linux-2.6.git?  It probably won't
 apply cleanly but resolving the conflicts should not be hard (alternatively
 you can try branch cma from the same repo but it is a work in progress at the
 moment).


I'll try this patch and report back.


 is set to MIGRATE_CMA instead of MIGRATE_ISOLATE.


 My understanding of that situation is that the page is on a pcp list, in which
 case its page_private is not updated.  Draining and the first patch in the
 series (and also the commit I've pointed to above) are designed to fix that,
 but I'm unsure why they don't work all the time.



Will verify whether the page is found on the pcp list as well.

 I've also had a test case where it failed because (page_count() != 0)

With this, when it failed, page_count() returned a value of 2. I am not sure
why, but I will try and see if I can reproduce this.



 Have you or anyone else seen this during the CMA testing?

 Also, could this be because we are finding a page within (start, end)
 that actually belongs to a higher-order buddy block?


 Higher order free buddy blocks are skipped in the “if (PageBuddy(page))”
 path of __test_page_isolated_in_pageblock().  Then again, now that I think
 of it, something fishy may be happening on the edges.  Moving the check
 outside of __alloc_contig_migrate_range() after outer_start is calculated
 in alloc_contig_range() could help.  I'll take a look at it.

I was going to suggest that; moving the check until after outer_start is
calculated will definitely help, IMO. I am sure I've seen a case where

  page_count(page) = page->private = 0 and PageBuddy(page) was false.

I will try and reproduce this as well.

Thanks,
Sandeep


Re: [Linaro-mm-sig] [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-17 Thread Michal Nazarewicz

My understanding of that situation is that the page is on a pcp list, in which
case its page_private is not updated.  Draining and the first patch in the
series (and also the commit I've pointed to above) are designed to fix that,
but I'm unsure why they don't work all the time.


On Wed, 18 Jan 2012 01:46:37 +0100, sandeep patil psandee...@gmail.com wrote:

Will verify whether the page is found on the pcp list as well.


I was wondering in general if “!PageBuddy(page) && !page_count(page)” means the
page is on a PCP list.  From what I've seen in page_isolation.c it seems to be the case.
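In other words, the heuristic under discussion amounts to something like the
following debug check (not part of the series, just an illustration):

	/* Not in the buddy allocator, yet nobody holds a reference:
	 * most likely a free page still sitting on some CPU's pcp list. */
	if (!PageBuddy(page) && !page_count(page))
		pr_debug("pfn %lx looks like a pcp page (private=%lu)\n",
			 page_to_pfn(page), page_private(page));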


I've also had a test case where it failed because (page_count() != 0)



With this, when it failed, page_count() returned a value of 2.  I am not
sure why, but I will try and see if I can reproduce this.


If I'm not mistaken, page_count() != 0 means the page is allocated.  I can see
the following scenarios which can lead to the page being allocated when
test_pages_isolated() is called:

1. The page failed to migrate.  In this case, however, the code would abort
   earlier.

2. The page was migrated but then allocated.  This is not possible since
   migrated pages are freed, which puts the page on the MIGRATE_ISOLATE
   freelist and guarantees that the page will not be allocated again.

3. The page was removed from a PCP list but with migratetype == MIGRATE_CMA.
   This is something the first patch in the series, as well as the commit I've
   mentioned, tries to address, so hopefully it won't be an issue any more.

4. The page was allocated from a PCP list.  This may happen because draining
   of the PCP list happens after IRQs are enabled in set_migratetype_isolate().
   I don't have a solution for that just yet.  One is to alter
   update_pcp_isolate_block() to remove the page from the PCP list (a rough
   sketch of that idea follows below).  I haven't looked at the specifics of
   how to implement this just yet.
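Purely to illustrate option 4, something along the following lines could be
tried.  update_pcp_isolate_block() is the helper this series adds, but the body
below is an untested sketch with a hypothetical name, and the locking and
placement would need real review:

	static void pcp_remove_isolated_block(struct zone *zone,
					      struct per_cpu_pages *pcp,
					      unsigned long start_pfn,
					      unsigned long end_pfn)
	{
		struct page *page, *next;
		int t;

		/* Untested sketch: assumes zone->lock is held and IRQs are
		 * disabled so both the pcp lists and the buddy lists are
		 * stable while we walk them. */
		for (t = 0; t < MIGRATE_PCPTYPES; ++t) {
			list_for_each_entry_safe(page, next, &pcp->lists[t], lru) {
				unsigned long pfn = page_to_pfn(page);

				if (pfn < start_pfn || pfn >= end_pfn)
					continue;

				/* Pull the page off the pcp list and free it
				 * back to the buddy allocator as
				 * MIGRATE_ISOLATE so it cannot be handed out
				 * again. */
				list_del(&page->lru);
				pcp->count--;
				__free_one_page(page, zone, 0, MIGRATE_ISOLATE);
			}
		}
	}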


Moving the check outside of __alloc_contig_migrate_range() after outer_start
is calculated in alloc_contig_range() could help.


I was going to suggest that; moving the check until after outer_start is
calculated will definitely help, IMO. I am sure I've seen a case where

  page_count(page) = page->private = 0 and PageBuddy(page) was false.


Yep, I've pushed new content to my branch (git://github.com/mina86/linux-2.6.git,
branch cma) and will try to get Marek to test it some time soon (I'm currently
swamped with non-Linux-related work myself).

--
Best regards,
Michał “mina86” Nazarewicz  (email/xmpp: m...@google.com)


Re: [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-16 Thread Mel Gorman
On Fri, Jan 13, 2012 at 09:04:31PM +0100, Michal Nazarewicz wrote:
 On Thu, Dec 29, 2011 at 01:39:05PM +0100, Marek Szyprowski wrote:
 From: Michal Nazarewicz min...@mina86.com
 +   /* Make sure all pages are isolated. */
 +   if (!ret) {
 +   lru_add_drain_all();
 +   drain_all_pages();
 +   if (WARN_ON(test_pages_isolated(start, end)))
 +   ret = -EBUSY;
 +   }
 
 On Tue, 10 Jan 2012 15:16:13 +0100, Mel Gorman m...@csn.ul.ie wrote:
 Another global IPI seems overkill. Drain pages only from the local CPU
 (drain_pages(get_cpu()); put_cpu()) and test if the pages are isolated.
 
 Is get_cpu() + put_cpu() required? Won't drain_local_pages() work?
 

drain_local_pages() calls smp_processor_id() without preemption
disabled. 
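For completeness, the distinction is only about pinning the task to a CPU
before asking for its id; roughly:

	/* drain_local_pages() boils down to drain_pages(smp_processor_id()),
	 * which trips the DEBUG_PREEMPT check when preemption is enabled.
	 * The explicit form keeps the task on one CPU for the drain: */
	drain_pages(get_cpu());		/* disables preemption */
	put_cpu();			/* re-enables preemption */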

-- 
Mel Gorman
SUSE Labs


Re: [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-16 Thread Michal Nazarewicz

On Mon, 16 Jan 2012 10:01:10 +0100, Mel Gorman m...@csn.ul.ie wrote:


On Fri, Jan 13, 2012 at 09:04:31PM +0100, Michal Nazarewicz wrote:

On Thu, Dec 29, 2011 at 01:39:05PM +0100, Marek Szyprowski wrote:
From: Michal Nazarewicz min...@mina86.com
+   /* Make sure all pages are isolated. */
+   if (!ret) {
+   lru_add_drain_all();
+   drain_all_pages();
+   if (WARN_ON(test_pages_isolated(start, end)))
+   ret = -EBUSY;
+   }

On Tue, 10 Jan 2012 15:16:13 +0100, Mel Gorman m...@csn.ul.ie wrote:
Another global IPI seems overkill. Drain pages only from the local CPU
(drain_pages(get_cpu()); put_cpu()) and test if the pages are isolated.

Is get_cpu() + put_cpu() required? Won't drain_local_pages() work?



drain_local_pages() calls smp_processor_id() without preemption
disabled.


Thanks, I wasn't sure if preemption is an issue.

--
Best regards,
Michał “mina86” Nazarewicz  (email/xmpp: m...@google.com)


Re: [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-13 Thread Michal Nazarewicz

On Thu, Dec 29, 2011 at 01:39:05PM +0100, Marek Szyprowski wrote:

From: Michal Nazarewicz min...@mina86.com
+   /* Make sure all pages are isolated. */
+   if (!ret) {
+   lru_add_drain_all();
+   drain_all_pages();
+   if (WARN_ON(test_pages_isolated(start, end)))
+   ret = -EBUSY;
+   }


On Tue, 10 Jan 2012 15:16:13 +0100, Mel Gorman m...@csn.ul.ie wrote:

Another global IPI seems overkill. Drain pages only from the local CPU
(drain_pages(get_cpu()); put_cpu()) and test if the pages are isolated.


Is get_cpu() + put_cpu() required? Won't drain_local_pages() work?

--
Best regards,
Michał “mina86” Nazarewicz  (email/xmpp: m...@google.com)


Re: [PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2012-01-10 Thread Mel Gorman
On Thu, Dec 29, 2011 at 01:39:05PM +0100, Marek Szyprowski wrote:
 From: Michal Nazarewicz min...@mina86.com
 
 This commit adds the alloc_contig_range() function which tries
 to allocate a given range of pages.  It tries to migrate all
 already allocated pages that fall in the range, thus freeing them.
 Once all pages in the range are freed, they are removed from the
 buddy system, thus allocated for the caller to use.
 
 __alloc_contig_migrate_range() borrows some code from KAMEZAWA
 Hiroyuki's __alloc_contig_pages().
 
 Signed-off-by: Michal Nazarewicz min...@mina86.com
 Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
 ---
  include/linux/page-isolation.h |    3 +
  mm/page_alloc.c                |  190 ++++++++++++++++++++++++++++++++++++
  2 files changed, 193 insertions(+), 0 deletions(-)
 
 diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
 index 051c1b1..d305080 100644
 --- a/include/linux/page-isolation.h
 +++ b/include/linux/page-isolation.h
 @@ -33,5 +33,8 @@ test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
  extern int set_migratetype_isolate(struct page *page);
  extern void unset_migratetype_isolate(struct page *page);
  
 +/* The below functions must be run on a range from a single zone. */
 +int alloc_contig_range(unsigned long start, unsigned long end);
 +void free_contig_range(unsigned long pfn, unsigned nr_pages);
  
  #endif
 diff --git a/mm/page_alloc.c b/mm/page_alloc.c
 index f88b320..47b0a85 100644
 --- a/mm/page_alloc.c
 +++ b/mm/page_alloc.c
 @@ -57,6 +57,7 @@
  #include <linux/ftrace_event.h>
  #include <linux/memcontrol.h>
  #include <linux/prefetch.h>
 +#include <linux/migrate.h>
  
  #include <asm/tlbflush.h>
  #include <asm/div64.h>
 @@ -5711,6 +5712,195 @@ out:
   spin_unlock_irqrestore(&zone->lock, flags);
  }
  
 +static unsigned long pfn_align_to_maxpage_down(unsigned long pfn)
 +{
 + return pfn & ~(MAX_ORDER_NR_PAGES - 1);
 +}
 +
 +static unsigned long pfn_align_to_maxpage_up(unsigned long pfn)
 +{
 + return ALIGN(pfn, MAX_ORDER_NR_PAGES);
 +}
 +
 +static struct page *
 +__cma_migrate_alloc(struct page *page, unsigned long private, int **resultp)
 +{
 + return alloc_page(GFP_HIGHUSER_MOVABLE);
 +}
 +
 +static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
 +{

This is compiled in even if !CONFIG_CMA

 + /* This function is based on compact_zone() from compaction.c. */
 +
 + unsigned long pfn = start;
 + int ret = -EBUSY;
 + unsigned tries = 0;
 +
 + struct compact_control cc = {
 + .nr_migratepages = 0,
 + .order = -1,
 + .zone = page_zone(pfn_to_page(start)),
 + .sync = true,
 + };

Handle the case where start and end PFNs are in different zones. It
should never happen but it should be caught, warned about and an
error returned because someone will eventually get it wrong.
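A minimal form of that check might be the following (its placement and exact
shape are an assumption here, not what the series does):

	/* alloc_contig_range() must only be called on a range within a
	 * single zone; catch and warn about callers that get this wrong. */
	if (WARN_ON(page_zone(pfn_to_page(start)) !=
		    page_zone(pfn_to_page(end - 1))))
		return -EINVAL;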

 + INIT_LIST_HEAD(&cc.migratepages);
 +
 + migrate_prep_local();
 +
 + while (pfn < end || cc.nr_migratepages) {
 + /* Abort on signal */
 + if (fatal_signal_pending(current)) {
 + ret = -EINTR;
 + goto done;
 + }
 +
 + /* Get some pages to migrate. */
 + if (list_empty(cc.migratepages)) {
 + cc.nr_migratepages = 0;
 + pfn = isolate_migratepages_range(cc.zone, &cc,
 +  pfn, end);
 + if (!pfn) {
 + ret = -EINTR;
 + goto done;
 + }
 + tries = 0;
 + }
 +
 + /* Try to migrate. */
 + ret = migrate_pages(&cc.migratepages, __cma_migrate_alloc,
 + 0, false, true);
 +
 + /* Migrated all of them? Great! */
 + if (list_empty(cc.migratepages))
 + continue;
 +
 + /* Try five times. */
 + if (++tries == 5) {
 + ret = ret < 0 ? ret : -EBUSY;
 + goto done;
 + }
 +
 + /* Before each time drain everything and reschedule. */
 + lru_add_drain_all();
 + drain_all_pages();

Why drain everything on each migration failure? I do not see how it
would help.

 + cond_resched();

The cond_resched() should be outside the failure path if it exists at
all.

 + }
 + ret = 0;
 +
 +done:
 + /* Make sure all pages are isolated. */
 + if (!ret) {
 + lru_add_drain_all();
 + drain_all_pages();
 + if (WARN_ON(test_pages_isolated(start, end)))
 + ret = -EBUSY;
 + }

Another global IPI seems overkill. Drain pages only from the local CPU
(drain_pages(get_cpu()); put_cpu()) and test if the pages are isolated.
Then and only then do a global drain before trying again, 
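Spelled out, the suggested flow would be roughly the following (a sketch of the
suggestion above, not code from the series):

	/* Cheap local drain first, then test; only fall back to the
	 * global (IPI-based) drain if the local one was not enough. */
	lru_add_drain();
	drain_pages(get_cpu());
	put_cpu();

	if (test_pages_isolated(start, end)) {
		lru_add_drain_all();
		drain_all_pages();
		if (WARN_ON(test_pages_isolated(start, end)))
			ret = -EBUSY;
	}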

[PATCH 04/11] mm: page_alloc: introduce alloc_contig_range()

2011-12-29 Thread Marek Szyprowski
From: Michal Nazarewicz min...@mina86.com

This commit adds the alloc_contig_range() function which tries
to allocate a given range of pages.  It tries to migrate all
already allocated pages that fall in the range, thus freeing them.
Once all pages in the range are freed, they are removed from the
buddy system, thus allocated for the caller to use.

__alloc_contig_migrate_range() borrows some code from KAMEZAWA
Hiroyuki's __alloc_contig_pages().

Signed-off-by: Michal Nazarewicz min...@mina86.com
Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
---
 include/linux/page-isolation.h |    3 +
 mm/page_alloc.c                |  190 ++++++++++++++++++++++++++++++++++++
 2 files changed, 193 insertions(+), 0 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 051c1b1..d305080 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -33,5 +33,8 @@ test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
 extern int set_migratetype_isolate(struct page *page);
 extern void unset_migratetype_isolate(struct page *page);
 
+/* The below functions must be run on a range from a single zone. */
+int alloc_contig_range(unsigned long start, unsigned long end);
+void free_contig_range(unsigned long pfn, unsigned nr_pages);
 
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f88b320..47b0a85 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -57,6 +57,7 @@
 #include <linux/ftrace_event.h>
 #include <linux/memcontrol.h>
 #include <linux/prefetch.h>
+#include <linux/migrate.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -5711,6 +5712,195 @@ out:
	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
+static unsigned long pfn_align_to_maxpage_down(unsigned long pfn)
+{
+   return pfn & ~(MAX_ORDER_NR_PAGES - 1);
+}
+
+static unsigned long pfn_align_to_maxpage_up(unsigned long pfn)
+{
+   return ALIGN(pfn, MAX_ORDER_NR_PAGES);
+}
+
+static struct page *
+__cma_migrate_alloc(struct page *page, unsigned long private, int **resultp)
+{
+   return alloc_page(GFP_HIGHUSER_MOVABLE);
+}
+
+static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
+{
+   /* This function is based on compact_zone() from compaction.c. */
+
+   unsigned long pfn = start;
+   int ret = -EBUSY;
+   unsigned tries = 0;
+
+   struct compact_control cc = {
+   .nr_migratepages = 0,
+   .order = -1,
+   .zone = page_zone(pfn_to_page(start)),
+   .sync = true,
+   };
+   INIT_LIST_HEAD(&cc.migratepages);
+
+   migrate_prep_local();
+
+   while (pfn < end || cc.nr_migratepages) {
+   /* Abort on signal */
+   if (fatal_signal_pending(current)) {
+   ret = -EINTR;
+   goto done;
+   }
+
+   /* Get some pages to migrate. */
+   if (list_empty(cc.migratepages)) {
+   cc.nr_migratepages = 0;
+   pfn = isolate_migratepages_range(cc.zone, &cc,
+pfn, end);
+   if (!pfn) {
+   ret = -EINTR;
+   goto done;
+   }
+   tries = 0;
+   }
+
+   /* Try to migrate. */
+   ret = migrate_pages(&cc.migratepages, __cma_migrate_alloc,
+   0, false, true);
+
+   /* Migrated all of them? Great! */
+   if (list_empty(cc.migratepages))
+   continue;
+
+   /* Try five times. */
+   if (++tries == 5) {
+   ret = ret < 0 ? ret : -EBUSY;
+   goto done;
+   }
+
+   /* Before each time drain everything and reschedule. */
+   lru_add_drain_all();
+   drain_all_pages();
+   cond_resched();
+   }
+   ret = 0;
+
+done:
+   /* Make sure all pages are isolated. */
+   if (!ret) {
+   lru_add_drain_all();
+   drain_all_pages();
+   if (WARN_ON(test_pages_isolated(start, end)))
+   ret = -EBUSY;
+   }
+
+   /* Release pages */
+   putback_lru_pages(&cc.migratepages);
+
+   return ret;
+}
+
+/**
+ * alloc_contig_range() -- tries to allocate given range of pages
+ * @start: start PFN to allocate
+ * @end:   one-past-the-last PFN to allocate
+ *
+ * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
+ * aligned, however it's the caller's responsibility to guarantee that we
+ * are the only thread that changes migrate type of pageblocks the
+ * pages fall in.
+ *
+ * Returns zero on success or a negative error code.  On success, all
+ * pages whose PFN is in [start, end) are allocated for the caller and
+ * need to be freed with free_contig_range().
+ */
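
For a caller such as dma_alloc_from_contiguous() (which shows up in the
backtraces above), usage then boils down to something like this simplified
sketch; the real CMA allocator additionally tracks the region in a bitmap, and
the helper name below is hypothetical:

	/* Hypothetical helper, only to illustrate the API contract above. */
	static struct page *grab_contig_pages(unsigned long base_pfn,
					      unsigned long nr_pages)
	{
		/* Zero on success; the whole [base_pfn, base_pfn + nr_pages)
		 * range then belongs to us until free_contig_range(). */
		if (alloc_contig_range(base_pfn, base_pfn + nr_pages))
			return NULL;

		return pfn_to_page(base_pfn);
	}

	/* ... and later, hand the range back: */
	/* free_contig_range(base_pfn, nr_pages); */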