On Thu, Jul 09, 2020 at 09:57:10AM -0700, Ralph Campbell wrote:
> When migrating system memory to device private memory, if the source
> address range is a valid VMA range and there is no memory or a zero page,
> the source PFN array is marked as valid but with no PFN. This lets the
> device driver allocate private memory and clear it, then insert the new
> device private struct page into the CPU's page tables when
> migrate_vma_pages() is called. migrate_vma_pages() only inserts the
> new page if the VMA is an anonymous range. There is no point in telling
> the device driver to allocate device private memory and then not migrate
> the page. Instead, mark the source PFN array entries as not migrating to
> avoid this overhead.
> 
> Signed-off-by: Ralph Campbell <rcampb...@nvidia.com>
> ---
>  mm/migrate.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b0125c082549..8aa434691577 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
>  {
>       struct migrate_vma *migrate = walk->private;
>       unsigned long addr;
> +     unsigned long flags;
> +
> +     /* Only allow populating anonymous memory. */
> +     flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
>  
>       for (addr = start; addr < end; addr += PAGE_SIZE) {
> -             migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
> +             migrate->src[migrate->npages] = flags;

I see a few other such cases in migrate_vma_collect_pmd() where we directly
populate MIGRATE_PFN_MIGRATE without a PFN, and wonder why the
vma_is_anonymous() check couldn't help there as well:

1. pte_none() check in migrate_vma_collect_pmd()
2. is_zero_pfn() check in migrate_vma_collect_pmd()

Regards,
Bharata.
