[tip:x86/pti] x86/mm: Remove extra filtering in pageattr code

2018-04-12 Thread tip-bot for Dave Hansen
Commit-ID:  1a54420aeb4da1ba5b28283aa5696898220c9a27
Gitweb: https://git.kernel.org/tip/1a54420aeb4da1ba5b28283aa5696898220c9a27
Author: Dave Hansen 
AuthorDate: Fri, 6 Apr 2018 13:55:11 -0700
Committer:  Ingo Molnar 
CommitDate: Thu, 12 Apr 2018 09:05:58 +0200

x86/mm: Remove extra filtering in pageattr code

The pageattr code has a mode where it can set or clear PTE bits in
existing PTEs, so the page protections of the *new* PTEs come from
one of two places:

  1. The set/clear masks: cpa->mask_clr / cpa->mask_set
  2. The existing PTE
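
To make the two sources above concrete, here is a minimal, self-contained sketch (not the kernel's code; pgprot_t and the cpa fields below are simplified stand-ins) of how a new protection value is composed from an existing PTE plus the set/clear masks:

#include <stdint.h>

typedef struct { uint64_t val; } pgprot_t;   /* simplified stand-in */

struct cpa_data_sketch {                     /* hypothetical mirror of cpa->mask_set/clr */
	pgprot_t mask_set;
	pgprot_t mask_clr;
};

/*
 * The new PTE's protection is the existing PTE's protection with the
 * clear-mask bits dropped and the set-mask bits added.
 */
static pgprot_t compose_prot(pgprot_t existing, const struct cpa_data_sketch *cpa)
{
	pgprot_t new_prot = existing;

	new_prot.val &= ~cpa->mask_clr.val;
	new_prot.val |=  cpa->mask_set.val;
	return new_prot;
}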

We filter ->mask_set/clr for supported PTE bits at entry to
__change_page_attr() so we never need to filter them again.

The only other place permissions can come from is an existing PTE, and
those presumably already have good bits.  We do not need to filter
them again.
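
For context, canon_pgprot() (the call removed throughout the diff below) filters a protection value down to the PTE bits the CPU actually supports. Below is a minimal sketch of that idea, and of the entry-point mask filtering the argument above relies on, using a stand-in supported_pte_mask in place of the kernel's __supported_pte_mask (illustrative only, not the kernel's implementation):

#include <stdint.h>

typedef struct { uint64_t val; } pgprot_t;   /* simplified stand-in */

/* Stand-in for the kernel's __supported_pte_mask: the PTE bits this CPU
 * can actually honour (e.g. NX would be cleared here on CPUs without it). */
static uint64_t supported_pte_mask = ~0ULL;

/* Sketch of what canon_pgprot() does: drop unsupported protection bits. */
static pgprot_t canon_pgprot_sketch(pgprot_t prot)
{
	prot.val &= supported_pte_mask;
	return prot;
}

/*
 * Sketch of the entry-point filtering: once the set/clear masks are
 * canonicalized here, every protection later derived from them, or taken
 * from an existing (already valid) PTE, contains only supported bits, so
 * the downstream canon_pgprot() calls removed by this patch are redundant.
 */
static void filter_masks_at_entry(pgprot_t *mask_set, pgprot_t *mask_clr)
{
	*mask_set = canon_pgprot_sketch(*mask_set);
	*mask_clr = canon_pgprot_sketch(*mask_clr);
}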

Signed-off-by: Dave Hansen 
Cc: Andrea Arcangeli 
Cc: Andy Lutomirski 
Cc: Arjan van de Ven 
Cc: Borislav Petkov 
Cc: Dan Williams 
Cc: David Woodhouse 
Cc: Greg Kroah-Hartman 
Cc: Hugh Dickins 
Cc: Josh Poimboeuf 
Cc: Juergen Gross 
Cc: Kees Cook 
Cc: Linus Torvalds 
Cc: Nadav Amit 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: linux...@kvack.org
Link: http://lkml.kernel.org/r/20180406205511.bc072...@viggo.jf.intel.com
Signed-off-by: Ingo Molnar 
---
 arch/x86/mm/pageattr.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index d3442dfdfced..968f51a2e39b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -598,7 +598,6 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
req_prot = pgprot_clear_protnone_bits(req_prot);
if (pgprot_val(req_prot) & _PAGE_PRESENT)
pgprot_val(req_prot) |= _PAGE_PSE;
-   req_prot = canon_pgprot(req_prot);
 
/*
 * old_pfn points to the large page base pfn. So we need
@@ -718,7 +717,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 */
pfn = ref_pfn;
for (i = 0; i < PTRS_PER_PTE; i++, pfn += pfninc)
-   set_pte(&pbase[i], pfn_pte(pfn, canon_pgprot(ref_prot)));
+   set_pte(&pbase[i], pfn_pte(pfn, ref_prot));
 
if (virt_addr_valid(address)) {
unsigned long pfn = PFN_DOWN(__pa(address));
@@ -935,7 +934,6 @@ static void populate_pte(struct cpa_data *cpa,
pte = pte_offset_kernel(pmd, start);
 
pgprot = pgprot_clear_protnone_bits(pgprot);
-   pgprot = canon_pgprot(pgprot);
 
while (num_pages-- && start < end) {
set_pte(pte, pfn_pte(cpa->pfn, pgprot));
@@ -1234,7 +1232,7 @@ repeat:
 * after all we're only going to change it's attributes
 * not the memory it points to
 */
-   new_pte = pfn_pte(pfn, canon_pgprot(new_prot));
+   new_pte = pfn_pte(pfn, new_prot);
cpa->pfn = pfn;
/*
 * Do we really change anything ?

