[tip:x86/mm] x86/asm: Fix comment in return_from_SYSCALL_64()

2017-06-13 Thread tip-bot for Kirill A. Shutemov
Commit-ID:  cbe0317bf10acf1f41811108ed0f9a316103c0f3
Gitweb: http://git.kernel.org/tip/cbe0317bf10acf1f41811108ed0f9a316103c0f3
Author: Kirill A. Shutemov 
AuthorDate: Tue, 6 Jun 2017 14:31:21 +0300
Committer:  Ingo Molnar 
CommitDate: Tue, 13 Jun 2017 08:56:51 +0200

x86/asm: Fix comment in return_from_SYSCALL_64()

On x86-64, __VIRTUAL_MASK_SHIFT now depends on the paging mode.

Signed-off-by: Kirill A. Shutemov 
Cc: Andrew Morton 
Cc: Andy Lutomirski 
Cc: Andy Lutomirski 
Cc: Borislav Petkov 
Cc: Brian Gerst 
Cc: Dave Hansen 
Cc: Denys Vlasenko 
Cc: H. Peter Anvin 
Cc: Josh Poimboeuf 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Link: http://lkml.kernel.org/r/20170606113133.22974-3-kirill.shute...@linux.intel.com
Signed-off-by: Ingo Molnar 
---
 arch/x86/entry/entry_64.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4a4c083..a9a8027 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -265,7 +265,8 @@ return_from_SYSCALL_64:
 * If width of "canonical tail" ever becomes variable, this will need
 * to be updated to remain correct on both old and new CPUs.
 *
-* Change top 16 bits to be the sign-extension of 47th bit
+* Change top bits to match most significant bit (47th or 56th bit
+* depending on paging mode) in the address.
 */
shl $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
sar $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx

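For readers unfamiliar with the trick the comment describes: the shl/sar pair
sign-extends the user return address from the most significant canonical bit
(bit 47 with 4-level paging, bit 56 with 5-level paging, per
__VIRTUAL_MASK_SHIFT) so the value stays canonical. Below is a minimal C
sketch of the same operation, not part of the patch; VIRTUAL_MASK_SHIFT is a
hypothetical stand-in for the kernel macro, fixed at 47 for illustration.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's __VIRTUAL_MASK_SHIFT: 47 with 4-level
 * paging, 56 with 5-level paging (assumed value for illustration). */
#define VIRTUAL_MASK_SHIFT 47

/* Mirror the shl/sar pair: shift the most significant canonical bit
 * up to bit 63, then arithmetic-shift back so the top bits become
 * copies of that bit. */
static uint64_t sign_extend_canonical(uint64_t addr)
{
	int shift = 64 - (VIRTUAL_MASK_SHIFT + 1);

	return (uint64_t)((int64_t)(addr << shift) >> shift);
}

int main(void)
{
	/* Already canonical: value is unchanged. */
	printf("%#llx\n",
	       (unsigned long long)sign_extend_canonical(0x00007fffffffffffULL));
	/* Bit 47 set: top 16 bits become 1s -> 0xffff800000000000. */
	printf("%#llx\n",
	       (unsigned long long)sign_extend_canonical(0x0000800000000000ULL));
	return 0;
}

With __VIRTUAL_MASK_SHIFT equal to 56, the same expression copies bit 56 into
the top 7 bits instead, which is exactly the case the updated comment covers.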
