Suren Baghdasaryan <[email protected]> writes:

> Now that we have vma_start_write_killable() we can replace most of the
> vma_start_write() calls with it, improving reaction time to the kill
> signal.
>
> There are several places which are left untouched by this patch:
>
> 1. free_pgtables(), because the function should free page tables even if
> a fatal signal is pending.
>
> 2. userfaultfd code, where some paths calling vma_start_write() can
> handle EINTR and some can't without deeper code refactoring.
>
> 3. vm_flags_{set|mod|clear} require refactoring that involves moving
> vma_start_write() out of these functions and replacing it with
> vma_assert_write_locked(), then callers of these functions should
> lock the vma themselves using vma_start_write_killable() whenever
> possible.
>
> Suggested-by: Matthew Wilcox <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> ---
>  arch/powerpc/kvm/book3s_hv_uvmem.c |  5 +-
>  include/linux/mempolicy.h          |  5 +-
>  mm/khugepaged.c                    |  5 +-
>  mm/madvise.c                       |  4 +-
>  mm/memory.c                        |  2 +
>  mm/mempolicy.c                     | 23 ++++++--
>  mm/mlock.c                         | 20 +++++--
>  mm/mprotect.c                      |  4 +-
>  mm/mremap.c                        |  4 +-
>  mm/pagewalk.c                      | 20 +++++--
>  mm/vma.c                           | 94 +++++++++++++++++++++---------
>  mm/vma_exec.c                      |  6 +-
>  12 files changed, 139 insertions(+), 53 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 7cf9310de0ec..69750edcf8d5 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -410,7 +410,10 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
>                       ret = H_STATE;
>                       break;
>               }
> -             vma_start_write(vma);
> +             if (vma_start_write_killable(vma)) {
> +                     ret = H_STATE;
> +                     break;
> +             }
>              /* Copy vm_flags to avoid partial modifications in ksm_madvise */
>               vm_flags = vma->vm_flags;
>               ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,


The above change to the powerpc error handling in
kvmppc_memslot_page_merge() looks good to me.

Please feel free to add:
Reviewed-by: Ritesh Harjani (IBM) <[email protected]> # powerpc

-ritesh
