Commit-ID:  361b4b58ec4cf123e12a773909c6454dbd5e6dbc
Gitweb:     http://git.kernel.org/tip/361b4b58ec4cf123e12a773909c6454dbd5e6dbc
Author:     Kirill A. Shutemov <kirill.shute...@linux.intel.com>
AuthorDate: Thu, 30 Mar 2017 11:07:26 +0300
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 4 Apr 2017 08:22:33 +0200

x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert

We don't need the assert anymore, as:

  17be0aec74fb ("x86/asm/entry/64: Implement better check for canonical 
addresses")

made canonical address checks generic with respect to address width.

Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Andy Lutomirski <l...@amacapital.net>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Brian Gerst <brge...@gmail.com>
Cc: Dave Hansen <dave.han...@intel.com>
Cc: Denys Vlasenko <dvlas...@redhat.com>
Cc: H. Peter Anvin <h...@zytor.com>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Link: http://lkml.kernel.org/r/20170330080731.65421-3-kirill.shute...@linux.intel.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 arch/x86/entry/entry_64.S | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 044d18e..f07b4ef 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -265,12 +265,9 @@ return_from_SYSCALL_64:
         *
         * If width of "canonical tail" ever becomes variable, this will need
         * to be updated to remain correct on both old and new CPUs.
+        *
+        * Change top 16 bits to be the sign-extension of 47th bit
         */
-       .ifne __VIRTUAL_MASK_SHIFT - 47
-       .error "virtual address width changed -- SYSRET checks need update"
-       .endif
-
-       /* Change top 16 bits to be the sign-extension of 47th bit */
        shl     $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
        sar     $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
 

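For reference, a minimal userspace C sketch of the shl/sar sign-extension trick
used in the hunk above (the canonicalize() helper, the VIRTUAL_MASK_SHIFT macro
and the sample addresses are made up for this illustration; it assumes the usual
48-bit virtual address layout, i.e. __VIRTUAL_MASK_SHIFT == 47, and a compiler
that handles signed conversion and right shifts in the usual two's-complement,
arithmetic way, as GCC and Clang do):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the kernel constant; 47 means 48-bit
 * virtual addresses (bits 0..47 are significant). */
#define VIRTUAL_MASK_SHIFT	47

/* Mirror of the shl/sar pair in entry_64.S: shifting left and then
 * arithmetically right by the same amount copies bit 47 into the top
 * 16 bits, producing the canonical form of the address. */
static uint64_t canonicalize(uint64_t addr)
{
	unsigned int shift = 64 - (VIRTUAL_MASK_SHIFT + 1);

	return (uint64_t)(((int64_t)(addr << shift)) >> shift);
}

int main(void)
{
	uint64_t user = 0x00007fffdeadbeefULL;	/* bit 47 clear, already canonical */
	uint64_t high = 0x0000ffff80000000ULL;	/* bit 47 set, not canonical as-is */

	printf("%016" PRIx64 " -> %016" PRIx64 "\n", user, canonicalize(user));
	printf("%016" PRIx64 " -> %016" PRIx64 "\n", high, canonicalize(high));
	return 0;
}

The entry code treats the return address as canonical exactly when this
transformation leaves it unchanged: 0x00007fffdeadbeef comes back as-is, while
0x0000ffff80000000 is rewritten to 0xffffffff80000000 and would fail the
subsequent compare against the saved value.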