[PATCH v11 25/26] powerpc/mm: add speculative page fault

2018-05-17 Thread Laurent Dufour
This patch enables the speculative page fault on the PowerPC
architecture.

It first tries a speculative page fault without holding the mmap_sem;
if that returns VM_FAULT_RETRY, the mmap_sem is acquired and the
traditional page fault processing is done.

The speculative path is only tried for multithreaded processes, as there
is no risk of contention on the mmap_sem otherwise.

Signed-off-by: Laurent Dufour 
---
 arch/powerpc/mm/fault.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ef268d5d9db7..d7b5742ffb2b 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -465,6 +465,21 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
if (is_exec)
flags |= FAULT_FLAG_INSTRUCTION;
 
+   /*
+    * Try speculative page fault before grabbing the mmap_sem.
+    * The page fault is done if VM_FAULT_RETRY is not returned.
+    * But if the memory protection keys are active, we don't know whether
+    * the fault is due to a key mismatch or to a classic protection check.
+    * To differentiate the two, we need the VMA, which we no longer have,
+    * so let's retry with the mmap_sem held.
+    */
+   fault = handle_speculative_fault(mm, address, flags);
+   if (fault != VM_FAULT_RETRY && !(IS_ENABLED(CONFIG_PPC_MEM_KEYS) &&
+                                    fault == VM_FAULT_SIGSEGV)) {
+       perf_sw_event(PERF_COUNT_SW_SPF, 1, regs, address);
+       goto done;
+   }
+
/* When running in the kernel we expect faults to occur only to
 * addresses in user space.  All other faults represent errors in the
 * kernel and should generate an OOPS.  Unfortunately, in the case of an
@@ -565,6 +580,7 @@ static int __do_page_fault(struct pt_regs *regs, unsigned long address,
 
up_read(&current->mm->mmap_sem);
 
+done:
if (unlikely(fault & VM_FAULT_ERROR))
return mm_fault_error(regs, address, fault);
 
-- 
2.7.4
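
For readers who want to trace the fallback decision outside the kernel
tree, below is a minimal standalone C sketch of the logic the first hunk
implements. Everything in it is a stand-in: speculative_fault_is_final()
is a hypothetical helper, the VM_FAULT_* values mirror the 4.x-era
<linux/mm.h> for illustration, and ppc_mem_keys mocks
IS_ENABLED(CONFIG_PPC_MEM_KEYS). It is a sketch of the retry decision,
not kernel code.

#include <stdio.h>
#include <stdbool.h>

/* Stand-ins for the kernel's fault codes; values mirror 4.x <linux/mm.h>. */
#define VM_FAULT_SIGSEGV 0x0040
#define VM_FAULT_RETRY   0x0400

/* Mock of IS_ENABLED(CONFIG_PPC_MEM_KEYS); flip it to see both behaviours. */
static const bool ppc_mem_keys = true;

/*
 * Hypothetical helper: can the speculative result be used as-is?
 *  - VM_FAULT_RETRY: the speculative path gave up, so take the classic
 *    path under the mmap_sem.
 *  - SIGSEGV with protection keys active: ambiguous (key mismatch vs.
 *    classic protection fault); resolving it needs the VMA, so it must
 *    also be retried under the mmap_sem.
 */
static bool speculative_fault_is_final(unsigned int fault)
{
	if (fault == VM_FAULT_RETRY)
		return false;
	if (ppc_mem_keys && fault == VM_FAULT_SIGSEGV)
		return false;
	return true;
}

int main(void)
{
	/* 0 stands for a fault the speculative path handled successfully. */
	const unsigned int results[] = { 0, VM_FAULT_SIGSEGV, VM_FAULT_RETRY };

	for (unsigned int i = 0; i < 3; i++)
		printf("fault=0x%04x -> %s\n", results[i],
		       speculative_fault_is_final(results[i])
		       ? "done" : "fall back, mmap_sem held");
	return 0;
}

With keys disabled, only VM_FAULT_RETRY falls back to the classic path;
with keys enabled, SIGSEGV falls back as well, which is the behaviour the
comment in the hunk describes.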


