On Mon, Feb 27, 2023 at 09:36:16AM -0800, Suren Baghdasaryan wrote:
> Write-lock VMA as locked before copying it and when copy_vma produces
> a new VMA.
> 
> Signed-off-by: Suren Baghdasaryan <sur...@google.com>
> Reviewed-by: Laurent Dufour <laurent.duf...@fr.ibm.com>
> ---
>  mm/mmap.c   | 1 +
>  mm/mremap.c | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index e73fbb84ce12..1f42b9a52b9b 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3189,6 +3189,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>                       get_file(new_vma->vm_file);
>               if (new_vma->vm_ops && new_vma->vm_ops->open)
>                       new_vma->vm_ops->open(new_vma);
> +             vma_start_write(new_vma);

Oh, I see -- write-locking the new VMA here prevents page faults from
being handled on it while move_page_tables() is running.


>               if (vma_link(mm, new_vma))
>                       goto out_vma_link;
>               *need_rmap_locks = false;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 1ddf7beb62e9..327c38eb132e 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -623,6 +623,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
>                       return -ENOMEM;
>       }
>  
> +     vma_start_write(vma);
>       new_pgoff = vma->vm_pgoff + ((old_addr - vma->vm_start) >> PAGE_SHIFT);
>       new_vma = copy_vma(&vma, new_addr, new_len, new_pgoff,
>                          &need_rmap_locks);
> -- 
> 2.39.2.722.g9855ee24e9-goog

Looks good to me.

Reviewed-by: Hyeonggon Yoo <42.hye...@gmail.com>
