Commit-ID:  07f146f53e8de826e4afa3a88ea65bdb13c24959
Gitweb:     http://git.kernel.org/tip/07f146f53e8de826e4afa3a88ea65bdb13c24959
Author:     Dave Hansen <[email protected]>
AuthorDate: Fri, 12 Feb 2016 13:02:22 -0800
Committer:  Ingo Molnar <[email protected]>
CommitDate: Thu, 18 Feb 2016 19:46:28 +0100
x86/mm/pkeys: Optimize fault handling in access_error()

We might not strictly have to make modifications to access_error()
to check the VMA here.  If we do not, we will do this:

 1. app sets VMA pkey to K
 2. app touches a !present page
 3. do_page_fault(), allocates and maps page, sets pte.pkey=K
 4. return to userspace
 5. touch instruction re-executes, but triggers PF_PK
 6. do PKEY signal

What happens with this patch applied:

 1. app sets VMA pkey to K
 2. app touches a !present page
 3. do_page_fault() notices that K is inaccessible
 4. do PKEY signal

We basically skip the fault that does an allocation.

So what this lets us do is protect areas from even being
*populated* unless they are accessible according to protection
keys.  That seems handy to me and makes protection keys work
more like an mprotect()'d mapping.

Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 arch/x86/mm/fault.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 319331a..68ecdff 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -900,10 +900,16 @@ bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
 static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 		struct vm_area_struct *vma)
 {
+	/* This code is always called on the current mm */
+	bool foreign = false;
+
 	if (!boot_cpu_has(X86_FEATURE_OSPKE))
 		return false;
 	if (error_code & PF_PK)
 		return true;
+	/* this checks permission keys on the VMA: */
+	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE), foreign))
+		return true;
 	return false;
 }

@@ -1091,6 +1097,8 @@ int show_unhandled_signals = 1;
 static inline int
 access_error(unsigned long error_code, struct vm_area_struct *vma)
 {
+	/* This is only called for the current mm, so: */
+	bool foreign = false;
 	/*
 	 * Access or read was blocked by protection keys.  We do
 	 * this check before any others because we do not want
@@ -1099,6 +1107,13 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 	 */
 	if (error_code & PF_PK)
 		return 1;
+	/*
+	 * Make sure to check the VMA so that we do not perform
+	 * faults just to hit a PF_PK as soon as we fill in a
+	 * page.
+	 */
+	if (!arch_vma_access_permitted(vma, (error_code & PF_WRITE), foreign))
+		return 1;

 	if (error_code & PF_WRITE) {
 		/* write, present and write, not present: */
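[ Editor's note: a minimal userspace sketch (not part of the patch)
  of the behavior change the changelog describes, assuming
  pkeys-capable hardware and the glibc pkey_alloc()/pkey_mprotect()
  wrappers (glibc 2.27+); error handling is abbreviated. ]

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *buf;
	int pkey;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Deny all data access through this key in PKRU: */
	pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS);
	if (pkey < 0) {
		perror("pkey_alloc");
		return 1;
	}

	/* Tag the still-unpopulated mapping with the key: */
	if (pkey_mprotect(buf, len, PROT_READ | PROT_WRITE, pkey)) {
		perror("pkey_mprotect");
		return 1;
	}

	/*
	 * Step 2 of the changelog lists above.  Before the patch,
	 * this fault would first allocate and map the page, return,
	 * re-execute, and only then trip PF_PK.  With the patch,
	 * access_error() rejects the access up front and the page
	 * is never populated.
	 */
	buf[0] = 1;	/* expected to die with SIGSEGV (SEGV_PKUERR) */

	return 0;
}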

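[ Editor's note: for context, the VMA-level check this patch adds
  boils down to comparing the VMA's protection key against the
  current task's PKRU register.  Below is a simplified paraphrase
  of the x86 arch_vma_access_permitted() of this era, not the
  literal source; helper names are as in the contemporaneous pkeys
  series, see arch/x86/include/asm/mmu_context.h for the
  authoritative version. ]

static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
					     bool write, bool foreign)
{
	/*
	 * PKRU is a per-task register, so keys are only enforced
	 * on accesses to the current task's own mm:
	 */
	if (foreign || vma_is_foreign(vma))
		return true;

	/*
	 * Check the VMA's key against PKRU's access-disable (and,
	 * for writes, write-disable) bits for that key:
	 */
	return __pkru_allows_pkey(vma_pkey(vma), write);
}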
