Re: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
On 08/02/2018 16:00, Matthew Wilcox wrote:
> On Thu, Feb 08, 2018 at 03:35:58PM +0100, Laurent Dufour wrote:
>> I reviewed that part of the code, and I think I can now change the way
>> pte_unmap_same() is checking the pte's value. Since we now have all the
>> needed details in the vm_fault structure, I will pass it to
>> pte_unmap_same() and deal with the VMA checks when locking the pte, as
>> is done in the other parts of the page fault handler, by calling
>> pte_spinlock().
>
> This does indeed look much better!  Thank you!
>
>> This means that this patch will be dropped, and pte_unmap_same() will
>> become:
>>
>> static inline int pte_unmap_same(struct vm_fault *vmf, int *same)
>> {
>>         int ret = 0;
>>
>>         *same = 1;
>> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>>         if (sizeof(pte_t) > sizeof(unsigned long)) {
>>                 if (pte_spinlock(vmf)) {
>>                         *same = pte_same(*vmf->pte, vmf->orig_pte);
>>                         spin_unlock(vmf->ptl);
>>                 } else
>>                         ret = VM_FAULT_RETRY;
>>         }
>> #endif
>>         pte_unmap(vmf->pte);
>>         return ret;
>> }
>
> I'm not a huge fan of auxiliary return values.  Perhaps we could do this
> instead:
>
>         ret = pte_unmap_same(vmf);
>         if (ret != VM_FAULT_NOTSAME) {
>                 if (page)
>                         put_page(page);
>                 goto out;
>         }
>         ret = 0;
>
> (we have a lot of unused bits in VM_FAULT_, so adding a new one shouldn't
> be a big deal)

I do agree: using an auxiliary return value is not a good idea. What about
the following changes, based on your suggestion?

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7de4323b9e89..0cd31a37bb3d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1212,6 +1212,7 @@ static inline void clear_page_pfmemalloc(struct page *page)
 #define VM_FAULT_NEEDDSYNC  0x2000      /* ->fault did not modify page tables
                                          * and needs fsync() to complete (for
                                          * synchronous page faults in DAX) */
+#define VM_FAULT_PTNOTSAME  0x4000      /* Page table entries have changed */
 
 #define VM_FAULT_ERROR  (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
                          VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
diff --git a/mm/memory.c b/mm/memory.c
index b7da99c74fef..c9b419f8e4c5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2433,21 +2433,30 @@ static inline bool pte_map_lock(struct vm_fault *vmf)
  * parts, do_swap_page must check under lock before unmapping the pte and
  * proceeding (but do_wp_page is only called after already making such a check;
  * and do_anonymous_page can safely check later on).
+ *
+ * pte_unmap_same() returns:
+ *      0                       if the PTEs are the same
+ *      VM_FAULT_PTNOTSAME      if the PTEs are different
+ *      VM_FAULT_RETRY          if the VMA has changed behind our back during
+ *                              speculative page fault handling
  */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-                                pte_t *page_table, pte_t orig_pte)
+static inline int pte_unmap_same(struct vm_fault *vmf)
 {
-       int same = 1;
+       int ret = 0;
+
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
        if (sizeof(pte_t) > sizeof(unsigned long)) {
-               spinlock_t *ptl = pte_lockptr(mm, pmd);
-               spin_lock(ptl);
-               same = pte_same(*page_table, orig_pte);
-               spin_unlock(ptl);
+               if (pte_spinlock(vmf)) {
+                       if (!pte_same(*vmf->pte, vmf->orig_pte))
+                               ret = VM_FAULT_PTNOTSAME;
+                       spin_unlock(vmf->ptl);
+               }
+               else
+                       ret = VM_FAULT_RETRY;
        }
 #endif
-       pte_unmap(page_table);
-       return same;
+       pte_unmap(vmf->pte);
+       return ret;
 }
 
 static inline void cow_user_page(struct page *dst, struct page *src,
                unsigned long va, struct vm_area_struct *vma)
@@ -3037,7 +3046,7 @@ int do_swap_page(struct vm_fault *vmf)
        pte_t pte;
        int locked;
        int exclusive = 0;
-       int ret = 0;
+       int ret;
        bool vma_readahead = swap_use_vma_readahead();
 
        if (vma_readahead) {
@@ -3045,9 +3054,16 @@ int do_swap_page(struct vm_fault *vmf)
                swapcache = page;
        }
 
-       if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
+       ret = pte_unmap_same(vmf);
+       if (ret) {
                if (page)
                        put_page(page);
+               /*
+                * If the PTEs are different, meaning that the page has
+                * already been processed by another CPU, we return 0.
+                */
+               if (ret == VM_FAULT_PTNOTSAME)
+                       ret = 0;
                goto out;
        }

Thanks,
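For readers following along: pte_spinlock() itself is never shown in this
thread. A minimal sketch of the contract it is described as providing -- take
the PTE lock, then revalidate the VMA so a speculative fault can bail out and
be retried under mmap_sem -- might look like the following. Note that
FAULT_FLAG_SPECULATIVE and the vma->vm_sequence / vmf->sequence fields are
assumed names for the series' speculative-fault bookkeeping, not confirmed
definitions.

/*
 * Sketch only: FAULT_FLAG_SPECULATIVE, vma->vm_sequence and
 * vmf->sequence are illustrative assumptions, not the series'
 * actual definitions.
 */
static bool pte_spinlock(struct vm_fault *vmf)
{
        vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
        spin_lock(vmf->ptl);

        /*
         * A speculative fault runs without mmap_sem, so the VMA may
         * have changed underneath us. Recheck its sequence count under
         * the PTE lock; on a mismatch, unlock and ask the caller to
         * retry the fault with mmap_sem held.
         */
        if ((vmf->flags & FAULT_FLAG_SPECULATIVE) &&
            read_seqcount_retry(&vmf->vma->vm_sequence, vmf->sequence)) {
                spin_unlock(vmf->ptl);
                return false;
        }
        return true;
}

With a contract like this, the pte_unmap_same() above can simply propagate the
VM_FAULT_RETRY to its caller.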
Re: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
On Thu, Feb 08, 2018 at 03:35:58PM +0100, Laurent Dufour wrote:
> I reviewed that part of the code, and I think I can now change the way
> pte_unmap_same() is checking the pte's value. Since we now have all the
> needed details in the vm_fault structure, I will pass it to
> pte_unmap_same() and deal with the VMA checks when locking the pte, as
> is done in the other parts of the page fault handler, by calling
> pte_spinlock().

This does indeed look much better!  Thank you!

> This means that this patch will be dropped, and pte_unmap_same() will
> become:
>
> static inline int pte_unmap_same(struct vm_fault *vmf, int *same)
> {
>         int ret = 0;
>
>         *same = 1;
> #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
>         if (sizeof(pte_t) > sizeof(unsigned long)) {
>                 if (pte_spinlock(vmf)) {
>                         *same = pte_same(*vmf->pte, vmf->orig_pte);
>                         spin_unlock(vmf->ptl);
>                 } else
>                         ret = VM_FAULT_RETRY;
>         }
> #endif
>         pte_unmap(vmf->pte);
>         return ret;
> }

I'm not a huge fan of auxiliary return values.  Perhaps we could do this
instead:

        ret = pte_unmap_same(vmf);
        if (ret != VM_FAULT_NOTSAME) {
                if (page)
                        put_page(page);
                goto out;
        }
        ret = 0;

(we have a lot of unused bits in VM_FAULT_, so adding a new one shouldn't
be a big deal)
Re: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
On 06/02/2018 21:28, Matthew Wilcox wrote:
> On Tue, Feb 06, 2018 at 05:49:50PM +0100, Laurent Dufour wrote:
>> From: Peter Zijlstra
>>
>> One of the side effects of speculating on faults (without holding
>> mmap_sem) is that we can race with free_pgtables() and therefore we
>> cannot assume the page-tables will stick around.
>>
>> Remove the reliance on the pte pointer.
>>
>> Signed-off-by: Peter Zijlstra (Intel)
>>
>> In most cases pte_unmap_same() was returning 1, which means that
>> do_swap_page() should do its processing. So in most cases there will
>> be no impact.
>>
>> Now regarding the case where pte_unmap_same() was returning 0, and thus
>> do_swap_page() returning 0 too: this happens when the page has already
>> been swapped back. This may happen before do_swap_page() gets called, or
>> while in the call to do_swap_page(). In that latter case, the check done
>> when swapin_readahead() returns will detect it.
>>
>> The worst case would be a page fault occurring on two threads at the
>> same time on the same swapped-out page. In that case one thread will
>> spend much time looping in __read_swap_cache_async(). But in the regular
>> page fault path this is even worse, since the thread would wait for the
>> semaphore to be released before doing anything.
>>
>> [Remove only if !CONFIG_SPECULATIVE_PAGE_FAULT]
>> Signed-off-by: Laurent Dufour
>
> I have a great deal of trouble connecting all of the words above to the
> contents of the patch.

Thanks for pushing forward here; this raised some doubts on my side.

I reviewed that part of the code, and I think I can now change the way
pte_unmap_same() is checking the pte's value. Since we now have all the
needed details in the vm_fault structure, I will pass it to
pte_unmap_same() and deal with the VMA checks when locking the pte, as
is done in the other parts of the page fault handler, by calling
pte_spinlock().

This means that this patch will be dropped, and pte_unmap_same() will
become:

static inline int pte_unmap_same(struct vm_fault *vmf, int *same)
{
        int ret = 0;

        *same = 1;
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
        if (sizeof(pte_t) > sizeof(unsigned long)) {
                if (pte_spinlock(vmf)) {
                        *same = pte_same(*vmf->pte, vmf->orig_pte);
                        spin_unlock(vmf->ptl);
                } else
                        ret = VM_FAULT_RETRY;
        }
#endif
        pte_unmap(vmf->pte);
        return ret;
}

Laurent.

>> +#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
>>  /*
>>   * handle_pte_fault chooses page fault handler according to an entry which was
>>   * read non-atomically. Before making any commitment, on those architectures
>> @@ -2311,6 +2312,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>>  	pte_unmap(page_table);
>>  	return same;
>>  }
>> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>>
>>  static inline void cow_user_page(struct page *dst, struct page *src,
>>  		unsigned long va, struct vm_area_struct *vma)
>>  {
>> @@ -2898,11 +2900,13 @@ int do_swap_page(struct vm_fault *vmf)
>>  		swapcache = page;
>>  	}
>>
>> +#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
>>  	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
>>  		if (page)
>>  			put_page(page);
>>  		goto out;
>>  	}
>> +#endif
>
> This feels to me like we want:
>
> #ifndef CONFIG_SPECULATIVE_PAGE_FAULT
> [current code]
> #else
> /*
>  * Some words here which explain why we always want to return this
>  * value if we support speculative page faults.
>  */
> static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>                                 pte_t *page_table, pte_t orig_pte)
> {
>         return 1;
> }
> #endif
>
> instead of cluttering do_swap_page with an ifdef.
Re: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
On Tue, Feb 06, 2018 at 05:49:50PM +0100, Laurent Dufour wrote:
> From: Peter Zijlstra
>
> One of the side effects of speculating on faults (without holding
> mmap_sem) is that we can race with free_pgtables() and therefore we
> cannot assume the page-tables will stick around.
>
> Remove the reliance on the pte pointer.
>
> Signed-off-by: Peter Zijlstra (Intel)
>
> In most cases pte_unmap_same() was returning 1, which means that
> do_swap_page() should do its processing. So in most cases there will
> be no impact.
>
> Now regarding the case where pte_unmap_same() was returning 0, and thus
> do_swap_page() returning 0 too: this happens when the page has already
> been swapped back. This may happen before do_swap_page() gets called, or
> while in the call to do_swap_page(). In that latter case, the check done
> when swapin_readahead() returns will detect it.
>
> The worst case would be a page fault occurring on two threads at the
> same time on the same swapped-out page. In that case one thread will
> spend much time looping in __read_swap_cache_async(). But in the regular
> page fault path this is even worse, since the thread would wait for the
> semaphore to be released before doing anything.
>
> [Remove only if !CONFIG_SPECULATIVE_PAGE_FAULT]
> Signed-off-by: Laurent Dufour

I have a great deal of trouble connecting all of the words above to the
contents of the patch.

> +#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
>  /*
>   * handle_pte_fault chooses page fault handler according to an entry which was
>   * read non-atomically. Before making any commitment, on those architectures
> @@ -2311,6 +2312,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
>  	pte_unmap(page_table);
>  	return same;
>  }
> +#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
>
>  static inline void cow_user_page(struct page *dst, struct page *src,
>  		unsigned long va, struct vm_area_struct *vma)
>  {
> @@ -2898,11 +2900,13 @@ int do_swap_page(struct vm_fault *vmf)
>  		swapcache = page;
>  	}
>
> +#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
>  	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
>  		if (page)
>  			put_page(page);
>  		goto out;
>  	}
> +#endif

This feels to me like we want:

#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
[current code]
#else
/*
 * Some words here which explain why we always want to return this
 * value if we support speculative page faults.
 */
static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
                                pte_t *page_table, pte_t orig_pte)
{
        return 1;
}
#endif

instead of cluttering do_swap_page with an ifdef.
[PATCH v7 04/24] mm: Don't assume page-table invariance during faults
From: Peter Zijlstra

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.

Signed-off-by: Peter Zijlstra (Intel)

In most cases pte_unmap_same() was returning 1, which means that
do_swap_page() should do its processing. So in most cases there will
be no impact.

Now regarding the case where pte_unmap_same() was returning 0, and thus
do_swap_page() returning 0 too: this happens when the page has already
been swapped back. This may happen before do_swap_page() gets called, or
while in the call to do_swap_page(). In that latter case, the check done
when swapin_readahead() returns will detect it.

The worst case would be a page fault occurring on two threads at the
same time on the same swapped-out page. In that case one thread will
spend much time looping in __read_swap_cache_async(). But in the regular
page fault path this is even worse, since the thread would wait for the
semaphore to be released before doing anything.

[Remove only if !CONFIG_SPECULATIVE_PAGE_FAULT]
Signed-off-by: Laurent Dufour
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 5ec6433d6a5c..32b9eb77d95c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
@@ -2311,6 +2312,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(page_table);
 	return same;
 }
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 static inline void cow_user_page(struct page *dst, struct page *src,
 		unsigned long va, struct vm_area_struct *vma)
 {
@@ -2898,11 +2900,13 @@ int do_swap_page(struct vm_fault *vmf)
 		swapcache = page;
 	}
 
+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
 		if (page)
 			put_page(page);
 		goto out;
 	}
+#endif
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
-- 
2.7.4
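To make the free_pgtables() race described in the commit message concrete,
here is one hypothetical interleaving. This is illustrative only; the CPU 1
column is the usual munmap() path, not code from this patch:

/*
 * Illustrative interleaving, not code from this series:
 *
 *      CPU 0 (speculative fault,       CPU 1 (e.g. munmap())
 *             no mmap_sem held)
 *      -------------------------       ---------------------
 *      vmf->pte = pte_offset_map(...)
 *                                      down_write(&mm->mmap_sem)
 *                                      unmap_vmas(...)
 *                                      free_pgtables(...)
 *                                        page-table pages are freed
 *                                      up_write(&mm->mmap_sem)
 *      pte_same(*vmf->pte, orig_pte)
 *        dereferences a freed page-table page
 *
 * Holding mmap_sem (the regular fault path), or revalidating the VMA
 * before touching the pte (the later pte_spinlock()-based rework),
 * closes the window in which CPU 1 can free the page tables.
 */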