[tip:x86/pti] x86/pti: Fix !PCID and sanitize defines

2018-01-14 Thread tip-bot for Thomas Gleixner
Commit-ID:  f10ee3dcc9f0aba92a5c4c064628be5200765dc2
Gitweb: https://git.kernel.org/tip/f10ee3dcc9f0aba92a5c4c064628be5200765dc2
Author: Thomas Gleixner 
AuthorDate: Sun, 14 Jan 2018 00:23:57 +0100
Committer:  Thomas Gleixner 
CommitDate: Sun, 14 Jan 2018 10:45:53 +0100

x86/pti: Fix !PCID and sanitize defines

The switch to the user space page tables in the low level ASM code
unconditionally sets bit 12 and bit 11 of CR3. Bit 12 switches the base
address of the page directory to the user half, and bit 11 switches the
PCID to the PCID associated with the user page tables.

This fails on machines which lack PCID support, because bit 11 then ends
up set in CR3 even though bit 11 is reserved when PCID is inactive.

While the Intel SDM claims that the reserved bits are ignored when PCID is
disabled, the AMD APM states that they should be cleared.
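
To make the failure mode concrete, here is a minimal user space C sketch
(not kernel code) of the CR3 value the unpatched entry code builds; the
bit positions follow the description above and the kernel CR3 value is
made up:

  #include <stdint.h>
  #include <stdio.h>

  #define PTI_PGTABLE_BIT 12  /* selects the user half of the 8k PGD        */
  #define PTI_PCID_BIT    11  /* selects the user PCID; reserved w/o PCID   */

  int main(void)
  {
          uint64_t kern_cr3 = 0x1000000;  /* made-up kernel PGD address */

          /* What the unpatched ASM does: OR in both bits unconditionally. */
          uint64_t user_cr3 = kern_cr3 | (1ULL << PTI_PGTABLE_BIT)
                                       | (1ULL << PTI_PCID_BIT);

          /*
           * On a CPU without PCID (CR4.PCIDE = 0), bit 11 of CR3 is a
           * reserved bit, and the AMD APM requires reserved bits to be
           * clear, so this value must not be written to CR3 there.
           */
          printf("user CR3 %#llx, bit 11 %s\n",
                 (unsigned long long)user_cr3,
                 ((user_cr3 >> PTI_PCID_BIT) & 1) ?
                         "set (invalid without PCID)" : "clear");
          return 0;
  }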

This went unnoticed because the AMD APM was not consulted when the code
was developed and reviewed, and because the Intel based test systems never
failed to boot. The report is against a CentOS 6 host on which the guest
fails to boot, so it is not yet clear whether this is a virtualization
issue or whether it can happen on real hardware as well, but that is
irrelevant: the AMD APM clearly asks for the reserved bits to be cleared.

Make sure that the page table switching code does not set bit 11 on
non-PCID machines.

Andy suggested renaming the related bits and masks so that they clearly
describe what they are used for; that is done here as well for clarity.

The split could have been done with alternatives, but the resulting macro
hell is horrible and ugly. That can be done on top if someone cares enough
to remove the extra orq; for now this is the straightforward fix.
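
As a rough C model of the fixed logic, using the PTI_USER_* mask names
introduced by the patch (the real code is low level ASM in
arch/x86/entry/calling.h, which is where the extra orq lives); this sketch
is illustrative only:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PTI_USER_PGTABLE_BIT            12  /* PAGE_SHIFT with 4k pages   */
  #define PTI_USER_PGTABLE_MASK           (1ULL << PTI_USER_PGTABLE_BIT)
  #define PTI_USER_PCID_BIT               11  /* X86_CR3_PTI_PCID_USER_BIT  */
  #define PTI_USER_PCID_MASK              (1ULL << PTI_USER_PCID_BIT)
  #define PTI_USER_PGTABLE_AND_PCID_MASK  (PTI_USER_PCID_MASK | PTI_USER_PGTABLE_MASK)

  /*
   * Kernel -> user: always flip the PGD bit; flip the PCID bit only when
   * PCID is usable. The second OR is the "extra orq" mentioned above.
   */
  static uint64_t switch_to_user_cr3(uint64_t cr3, bool has_pcid)
  {
          if (has_pcid)
                  cr3 |= PTI_USER_PCID_MASK;
          cr3 |= PTI_USER_PGTABLE_MASK;
          return cr3;
  }

  /*
   * User -> kernel: clearing bit 11 is harmless even without PCID, so a
   * single combined mask can be used on the way back.
   */
  static uint64_t switch_to_kernel_cr3(uint64_t cr3)
  {
          return cr3 & ~PTI_USER_PGTABLE_AND_PCID_MASK;
  }

  int main(void)
  {
          uint64_t kern_cr3 = 0x1000000;  /* made-up kernel CR3 */
          uint64_t user_cr3 = switch_to_user_cr3(kern_cr3, true);

          printf("user CR3 (PCID):  %#llx\n", (unsigned long long)user_cr3);
          printf("user CR3 (!PCID): %#llx\n",
                 (unsigned long long)switch_to_user_cr3(kern_cr3, false));
          printf("back to kernel:   %#llx\n",
                 (unsigned long long)switch_to_kernel_cr3(user_cr3));
          return 0;
  }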

Fixes: 6fd166aae78c ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
Reported-by: Laura Abbott 
Signed-off-by: Thomas Gleixner 
Cc: Peter Zijlstra 
Cc: stable 
Cc: Borislav Petkov 
Cc: Andy Lutomirski 
Cc: Willy Tarreau 
Cc: David Woodhouse 
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801140009150.2371@nanos

---
 arch/x86/entry/calling.h               | 36 +++++++++++++++++++-----------------
 arch/x86/include/asm/processor-flags.h |  2 +-
 arch/x86/include/asm/tlbflush.h        |  6 +++---
 3 files changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 45a63e0..3f48f69 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -198,8 +198,11 @@ For 32-bit we have the following conventions - kernel is built with
  * PAGE_TABLE_ISOLATION PGDs are 8k.  Flip bit 12 to switch between the two
  * halves:
  */
-#define PTI_SWITCH_PGTABLES_MASK	(1<<PAGE_SHIFT)
-#define PTI_SWITCH_MASK		(PTI_SWITCH_PGTABLES_MASK|(1<<X86_CR3_PTI_SWITCH_BIT))
+#define PTI_USER_PGTABLE_BIT		PAGE_SHIFT
+#define PTI_USER_PGTABLE_MASK		(1 << PTI_USER_PGTABLE_BIT)
+#define PTI_USER_PCID_BIT		X86_CR3_PTI_PCID_USER_BIT
+#define PTI_USER_PCID_MASK		(1 << PTI_USER_PCID_BIT)
+#define PTI_USER_PGTABLE_AND_PCID_MASK  (PTI_USER_PCID_MASK | PTI_USER_PGTABLE_MASK)

[The remaining calling.h hunks (the CR3 switching macros) and the
processor-flags.h hunk renaming X86_CR3_PTI_SWITCH_BIT to
X86_CR3_PTI_PCID_USER_BIT were truncated by the list archive.]

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -81,13 +81,13 @@ static inline u16 kern_pcid(u16 asid)
-	BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_PTI_SWITCH_BIT));
+   BUILD_BUG_ON(TLB_NR_DYN_ASIDS >= (1 << X86_CR3_PTI_PCID_USER_BIT));
 
/*
 * The ASID being passed in here should have respected the
 * MAX_ASID_AVAILABLE and thus never have the switch bit set.
 */
-   VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_SWITCH_BIT));
+   VM_WARN_ON_ONCE(asid & (1 << X86_CR3_PTI_PCID_USER_BIT));
 #endif
/*
 * The dynamically-assigned ASIDs that get passed in are small
@@ -112,7 +112,7 @@ static inline u16 user_pcid(u16 asid)
 {
u16 ret = kern_pcid(asid);
 #ifdef CONFIG_PAGE_TABLE_ISOLATION
-   ret |= 1 << X86_CR3_PTI_SWITCH_BIT;
+   ret |= 1 << X86_CR3_PTI_PCID_USER_BIT;
 #endif
return ret;
 }
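
For reference, a stand-alone sketch of the PCID values these helpers
produce, assuming kern_pcid() maps dynamic ASID n to hardware PCID n + 1
(the part of kern_pcid() not visible in the hunk above); without
CONFIG_PAGE_TABLE_ISOLATION, user_pcid() simply returns kern_pcid():

  #include <stdint.h>
  #include <stdio.h>

  #define X86_CR3_PTI_PCID_USER_BIT 11

  /* Stand-alone copies for illustration; assumes kern_pcid() == asid + 1. */
  static uint16_t kern_pcid(uint16_t asid)
  {
          return asid + 1;
  }

  static uint16_t user_pcid(uint16_t asid)
  {
          return kern_pcid(asid) | (1u << X86_CR3_PTI_PCID_USER_BIT);
  }

  int main(void)
  {
          for (unsigned int asid = 0; asid < 3; asid++)
                  printf("asid %u -> kernel PCID %#x, user PCID %#x\n",
                         asid, (unsigned int)kern_pcid(asid),
                         (unsigned int)user_pcid(asid));
          return 0;
  }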