Re: [RFC v5 02/11] mm: Prepare for FAULT_FLAG_SPECULATIVE

2017-08-08 Thread Peter Zijlstra
On Tue, Aug 08, 2017 at 03:54:01PM +0530, Anshuman Khandual wrote:
> On 06/16/2017 11:22 PM, Laurent Dufour wrote:
> > From: Peter Zijlstra 
> > 
> > When speculating faults (without holding mmap_sem) we need to validate
> > that the vma against which we loaded pages is still valid when we're
> > ready to install the new PTE.
> > 
> > Therefore, replace the pte_offset_map_lock() calls that (re)take the
> > PTL with pte_map_lock() which can fail in case we find the VMA changed
> > since we started the fault.
> 
> Where are we checking whether the VMA has changed since the fault started?

Not there yet; this is what you call a preparatory patch. Such patches
help review by letting you consider smaller steps.

> > diff --git a/mm/memory.c b/mm/memory.c
> > index fd952f05e016..4083ea0d 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -2240,6 +2240,12 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
> > 	pte_unmap_unlock(vmf->pte, vmf->ptl);
> >  }
> >  
> > +static bool pte_map_lock(struct vm_fault *vmf)
> > +{
> > +   vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
> > +   return true;
> > +}
> 
> This is always true? Then we should not have all these
> if (!pte_map_lock(vmf)) check blocks down below.

Later patches will make it possible to return false. This patch is about
placing this call. Having this in a separate patch makes it easier to
review all those new error conditions.
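
FWIW, the eventual failing variant could look roughly like the sketch
below. This is an illustration only, not the actual later patch: it
assumes a per-VMA sequence count (named vm_sequence here, which is an
assumption) that writers bump around VMA modifications, so the
speculative path can detect a concurrent change:

	static bool pte_map_lock(struct vm_fault *vmf)
	{
		unsigned int seq;

		if (!(vmf->flags & FAULT_FLAG_SPECULATIVE)) {
			vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
						       vmf->address, &vmf->ptl);
			return true;
		}

		/* Speculative path: validate the VMA around taking the PTL. */
		seq = raw_read_seqcount(&vmf->vma->vm_sequence);
		if (seq & 1)
			return false;		/* a writer is mid-update */
		vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
					       vmf->address, &vmf->ptl);
		if (read_seqcount_retry(&vmf->vma->vm_sequence, seq)) {
			pte_unmap_unlock(vmf->pte, vmf->ptl);
			return false;		/* VMA changed under us */
		}
		return true;
	}

A false return is what the call sites below translate into
VM_FAULT_RETRY.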


Re: [RFC v5 02/11] mm: Prepare for FAULT_FLAG_SPECULATIVE

2017-08-08 Thread Anshuman Khandual
On 06/16/2017 11:22 PM, Laurent Dufour wrote:
> From: Peter Zijlstra 
> 
> When speculating faults (without holding mmap_sem) we need to validate
> that the vma against which we loaded pages is still valid when we're
> ready to install the new PTE.
> 
> Therefore, replace the pte_offset_map_lock() calls that (re)take the
> PTL with pte_map_lock() which can fail in case we find the VMA changed
> since we started the fault.

Where are we checking whether the VMA has changed since the fault started?

> 
> Signed-off-by: Peter Zijlstra (Intel) 
> 
> [Port to 4.12 kernel]
> [Remove the comment about the fault_env structure which has been
>  implemented as the vm_fault structure in the kernel]
> Signed-off-by: Laurent Dufour 
> ---
>  include/linux/mm.h |  1 +
>  mm/memory.c        | 55 ++
>  2 files changed, 40 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b892e95d4929..6b7ec2a76953 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -286,6 +286,7 @@ extern pgprot_t protection_map[16];
>  #define FAULT_FLAG_USER		0x40	/* The fault originated in userspace */
>  #define FAULT_FLAG_REMOTE	0x80	/* faulting for non current tsk/mm */
>  #define FAULT_FLAG_INSTRUCTION	0x100	/* The fault was during an instruction fetch */
> +#define FAULT_FLAG_SPECULATIVE	0x200	/* Speculative fault, not holding mmap_sem */

We are not using this yet; maybe it can wait until later in the series.
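
For what it's worth, the expected consumer is a speculative entry point
added later in the series, which tries the fault without mmap_sem and
falls back to the classic path on failure. A hedged sketch of the
arch-side wiring (handle_speculative_fault() and its shape are
assumptions about the later patches, not part of this one):

	/*
	 * Sketch of an arch fault handler once the series is complete.
	 * handle_speculative_fault() would set FAULT_FLAG_SPECULATIVE
	 * internally before walking the fault path.
	 */
	fault = handle_speculative_fault(mm, address, flags);
	if (fault & VM_FAULT_RETRY) {
		/* Speculation failed: retry under mmap_sem as today. */
		down_read(&mm->mmap_sem);
		vma = find_vma(mm, address);
		if (vma && vma->vm_start <= address)
			fault = handle_mm_fault(vma, address, flags);
		up_read(&mm->mmap_sem);
	}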

>  
>  #define FAULT_FLAG_TRACE \
>   { FAULT_FLAG_WRITE, "WRITE" }, \
> diff --git a/mm/memory.c b/mm/memory.c
> index fd952f05e016..4083ea0d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2240,6 +2240,12 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
>   pte_unmap_unlock(vmf->pte, vmf->ptl);
>  }
>  
> +static bool pte_map_lock(struct vm_fault *vmf)
> +{
> +	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl);
> + return true;
> +}

This is always true? Then we should not have all these
if (!pte_map_lock(vmf)) check blocks down below.

> +
>  /*
>   * Handle the case of a page which we actually need to copy to a new page.
>   *
> @@ -2267,6 +2273,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>   const unsigned long mmun_start = vmf->address & PAGE_MASK;
>   const unsigned long mmun_end = mmun_start + PAGE_SIZE;
>   struct mem_cgroup *memcg;
> + int ret = VM_FAULT_OOM;
> 

If we remove the check blocks around pte_map_lock(), initializing ret
to VM_FAULT_OOM here becomes redundant.
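
To make the dependency explicit: ret only earns its keep because
pte_map_lock() will eventually be able to fail, at which point two
different failure codes share the existing unwind labels. Schematically
(a condensed view of the patched flow; allocation and copy steps are
elided as comments, so this is a sketch, not compilable code):

	int ret = VM_FAULT_OOM;

	if (unlikely(anon_vma_prepare(vma)))
		goto oom;			/* ret is still VM_FAULT_OOM */
	/* ... allocate new_page, charge memcg ... */
	if (!pte_map_lock(vmf)) {
		mem_cgroup_cancel_charge(new_page, memcg, false);
		ret = VM_FAULT_RETRY;		/* new failure kind */
		goto oom_free_new;		/* shared unwind path */
	}
	/* ... copy, install the PTE, return VM_FAULT_WRITE ... */
oom_free_new:
	put_page(new_page);
oom:
	if (old_page)
		put_page(old_page);
	return ret;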

>   if (unlikely(anon_vma_prepare(vma)))
>   goto oom;
> @@ -2294,7 +2301,11 @@ static int wp_page_copy(struct vm_fault *vmf)
>   /*
>* Re-check the pte - we dropped the lock
>*/
> - vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
> + if (!pte_map_lock(vmf)) {
> + mem_cgroup_cancel_charge(new_page, memcg, false);
> + ret = VM_FAULT_RETRY;
> + goto oom_free_new;
> + }
>   if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
>   if (old_page) {
>   if (!PageAnon(old_page)) {
> @@ -2382,7 +2393,7 @@ static int wp_page_copy(struct vm_fault *vmf)
>  oom:
>   if (old_page)
>   put_page(old_page);
> - return VM_FAULT_OOM;
> + return ret;
>  }
>  
>  /**
> @@ -2403,8 +2414,8 @@ static int wp_page_copy(struct vm_fault *vmf)
>  int finish_mkwrite_fault(struct vm_fault *vmf)
>  {
>   WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED));
> - vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address,
> -&vmf->ptl);
> + if (!pte_map_lock(vmf))
> + return VM_FAULT_RETRY;

Can't fail.

>   /*
>* We might have raced with another page fault while we released the
>* pte_offset_map_lock.
> @@ -2522,8 +2533,11 @@ static int do_wp_page(struct vm_fault *vmf)
>   get_page(vmf->page);
>   pte_unmap_unlock(vmf->pte, vmf->ptl);
>   lock_page(vmf->page);
> - vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> - vmf->address, &vmf->ptl);
> + if (!pte_map_lock(vmf)) {
> + unlock_page(vmf->page);
> + put_page(vmf->page);
> + return VM_FAULT_RETRY;
> + }

Same here.

>   if (!pte_same(*vmf->pte, vmf->orig_pte)) {
>   unlock_page(vmf->page);
>   pte_unmap_unlock(vmf->pte, vmf->ptl);
> @@ -2681,8 +2695,10 @@ int do_swap_page(struct vm_fault *vmf)
>* Back out if somebody else faulted in this pte
>* while we released the pte lock.
>*/