[tip:x86/urgent] x86/asm: Add _ASM_ARG* constants for argument registers to <asm/asm.h>

2018-07-03 Thread tip-bot for H. Peter Anvin
Commit-ID:  0e2e160033283e20f688d8bad5b89460cc5bfcc4
Gitweb: https://git.kernel.org/tip/0e2e160033283e20f688d8bad5b89460cc5bfcc4
Author: H. Peter Anvin 
AuthorDate: Thu, 21 Jun 2018 09:23:23 -0700
Committer:  Ingo Molnar 
CommitDate: Tue, 3 Jul 2018 10:56:27 +0200

x86/asm: Add _ASM_ARG* constants for argument registers to <asm/asm.h>

i386 and x86-64 use different registers for arguments; make them
available so we don't have to #ifdef in the actual code.

Native size and specified size (q, l, w, b) versions are provided.

Signed-off-by: H. Peter Anvin 
Signed-off-by: Nick Desaulniers 
Reviewed-by: Sedat Dilek 
Acked-by: Juergen Gross 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: a...@redhat.com
Cc: akata...@vmware.com
Cc: a...@linux-foundation.org
Cc: andrea.pa...@amarulasolutions.com
Cc: ard.biesheu...@linaro.org
Cc: a...@arndb.de
Cc: aryabi...@virtuozzo.com
Cc: astrac...@google.com
Cc: boris.ostrov...@oracle.com
Cc: brijesh.si...@amd.com
Cc: caoj.f...@cn.fujitsu.com
Cc: ge...@linux-m68k.org
Cc: ghackm...@google.com
Cc: gre...@linuxfoundation.org
Cc: jan.kis...@siemens.com
Cc: jarkko.sakki...@linux.intel.com
Cc: j...@perches.com
Cc: jpoim...@redhat.com
Cc: keesc...@google.com
Cc: kirill.shute...@linux.intel.com
Cc: kstew...@linuxfoundation.org
Cc: linux-...@vger.kernel.org
Cc: linux-kbu...@vger.kernel.org
Cc: manojgu...@google.com
Cc: mawil...@microsoft.com
Cc: michal.l...@markovi.net
Cc: mj...@google.com
Cc: m...@chromium.org
Cc: pombreda...@nexb.com
Cc: rient...@google.com
Cc: rost...@goodmis.org
Cc: thomas.lenda...@amd.com
Cc: tstel...@redhat.com
Cc: tw...@google.com
Cc: virtualizat...@lists.linux-foundation.org
Cc: will.dea...@arm.com
Cc: yamada.masah...@socionext.com
Link: http://lkml.kernel.org/r/20180621162324.36656-3-ndesaulni...@google.com
Signed-off-by: Ingo Molnar 
---
 arch/x86/include/asm/asm.h | 59 ++
 1 file changed, 59 insertions(+)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index 219faaec51df..990770f9e76b 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -46,6 +46,65 @@
 #define _ASM_SI	__ASM_REG(si)
 #define _ASM_DI	__ASM_REG(di)
 
+#ifndef __x86_64__
+/* 32 bit */
+
+#define _ASM_ARG1  _ASM_AX
+#define _ASM_ARG2  _ASM_DX
+#define _ASM_ARG3  _ASM_CX
+
+#define _ASM_ARG1L eax
+#define _ASM_ARG2L edx
+#define _ASM_ARG3L ecx
+
+#define _ASM_ARG1W ax
+#define _ASM_ARG2W dx
+#define _ASM_ARG3W cx
+
+#define _ASM_ARG1B al
+#define _ASM_ARG2B dl
+#define _ASM_ARG3B cl
+
+#else
+/* 64 bit */
+
+#define _ASM_ARG1  _ASM_DI
+#define _ASM_ARG2  _ASM_SI
+#define _ASM_ARG3  _ASM_DX
+#define _ASM_ARG4  _ASM_CX
+#define _ASM_ARG5  r8
+#define _ASM_ARG6  r9
+
+#define _ASM_ARG1Q rdi
+#define _ASM_ARG2Q rsi
+#define _ASM_ARG3Q rdx
+#define _ASM_ARG4Q rcx
+#define _ASM_ARG5Q r8
+#define _ASM_ARG6Q r9
+
+#define _ASM_ARG1L edi
+#define _ASM_ARG2L esi
+#define _ASM_ARG3L edx
+#define _ASM_ARG4L ecx
+#define _ASM_ARG5L r8d
+#define _ASM_ARG6L r9d
+
+#define _ASM_ARG1W di
+#define _ASM_ARG2W si
+#define _ASM_ARG3W dx
+#define _ASM_ARG4W cx
+#define _ASM_ARG5W r8w
+#define _ASM_ARG6W r9w
+
+#define _ASM_ARG1B dil
+#define _ASM_ARG2B sil
+#define _ASM_ARG3B dl
+#define _ASM_ARG4B cl
+#define _ASM_ARG5B r8b
+#define _ASM_ARG6B r9b
+
+#endif
+
 /*
  * Macros to generate condition code outputs from inline assembly,
  * The output operand must be type "bool".


[tip:x86/urgent] x86: Mark hpa as a "Designated Reviewer" for the time being

2018-01-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  8a95b74d50825067fb6c8af7f9db03e711b1cb9d
Gitweb: https://git.kernel.org/tip/8a95b74d50825067fb6c8af7f9db03e711b1cb9d
Author: H. Peter Anvin 
AuthorDate: Thu, 25 Jan 2018 11:59:34 -0800
Committer:  Ingo Molnar 
CommitDate: Sat, 27 Jan 2018 10:11:00 +0100

x86: Mark hpa as a "Designated Reviewer" for the time being

Due to some unfortunate events, I have not been directly involved in
the x86 kernel patch flow for a while now.  I have also not been able
to ramp back up by now like I had hoped to, and after reviewing what I
will need to work on both internally at Intel and elsewhere in the near
term, it is clear that I am not going to be able to ramp back up until
late 2018 at the very earliest.

It is not acceptable to not recognize that this load is currently
taken by Ingo and Thomas without my direct participation, so I mark
myself as R: (designated reviewer) rather than M: (maintainer) until
further notice.  This is in fact recognizing the de facto situation
for the past few years.

I have obviously no intention of going away, and I will do everything
within my power to improve Linux on x86 and x86 for Linux.  This,
however, puts credit where it is due and reflects a change of focus.

This patch also removes stale entries for portions of the x86
architecture which have not been maintained separately from arch/x86
for a long time.  If there is a reason to re-introduce them then that
can happen later.

Signed-off-by: H. Peter Anvin 
Signed-off-by: Thomas Gleixner 
Cc: Bruce Schlobohm 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Link: http://lkml.kernel.org/r/20180125195934.5253-1-...@zytor.com
Signed-off-by: Ingo Molnar 
---
 MAINTAINERS | 12 +---
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index e358141..9497634 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6609,16 +6609,6 @@ L:   linux-...@vger.kernel.org
 S: Maintained
 F: drivers/i2c/i2c-stub.c
 
-i386 BOOT CODE
-M: "H. Peter Anvin" 
-S: Maintained
-F: arch/x86/boot/
-
-i386 SETUP CODE / CPU ERRATA WORKAROUNDS
-M: "H. Peter Anvin" 
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/hpa/linux-2.6-x86setup.git
-S: Maintained
-
 IA64 (Itanium) PLATFORM
 M: Tony Luck 
 M: Fenghua Yu 
@@ -14858,7 +14848,7 @@ F:  net/x25/
 X86 ARCHITECTURE (32-BIT AND 64-BIT)
 M: Thomas Gleixner 
 M: Ingo Molnar 
-M: "H. Peter Anvin" 
+R: "H. Peter Anvin" 
 M: x...@kernel.org
 L: linux-kernel@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core


[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() and static_cpu_has() in archrandom.h

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  3b290398638ee4e57f1fb2e35c02005cba9a737f
Gitweb: http://git.kernel.org/tip/3b290398638ee4e57f1fb2e35c02005cba9a737f
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:46 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: Use CC_SET()/CC_OUT() and static_cpu_has() in archrandom.h

Use CC_SET()/CC_OUT() and static_cpu_has().  This produces code good
enough to eliminate ad hoc use of alternatives in <asm/archrandom.h>,
greatly simplifying the code.

While we are at it, make x86_init_rdrand() compile out completely if
we don't need it.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-11-git-send-email-...@linux.intel.com

v2: fix a conflict between  and 
discovered by Ingo Molnar.  There are a few places in x86-specific
code where we need all of  even when
CONFIG_ARCH_RANDOM is disabled, so  does not
suffice.
---
 arch/x86/include/asm/archrandom.h | 128 ++
 arch/x86/kernel/cpu/rdrand.c  |   4 +-
 2 files changed, 62 insertions(+), 70 deletions(-)

diff --git a/arch/x86/include/asm/archrandom.h 
b/arch/x86/include/asm/archrandom.h
index ab6f599..5b0579a 100644
--- a/arch/x86/include/asm/archrandom.h
+++ b/arch/x86/include/asm/archrandom.h
@@ -25,8 +25,6 @@
 
 #include 
 #include 
-#include 
-#include 
 
 #define RDRAND_RETRY_LOOPS 10
 
@@ -40,97 +38,91 @@
 # define RDSEED_LONG   RDSEED_INT
 #endif
 
-#ifdef CONFIG_ARCH_RANDOM
+/* Unconditional execution of RDRAND and RDSEED */
 
-/* Instead of arch_get_random_long() when alternatives haven't run. */
 static inline bool rdrand_long(unsigned long *v)
 {
-   int ok;
-   asm volatile("1: " RDRAND_LONG "\n\t"
-"jc 2f\n\t"
-"decl %0\n\t"
-"jnz 1b\n\t"
-"2:"
-: "=r" (ok), "=a" (*v)
-: "0" (RDRAND_RETRY_LOOPS));
-   return !!ok;
+   bool ok;
+   unsigned int retry = RDRAND_RETRY_LOOPS;
+   do {
+   asm volatile(RDRAND_LONG "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   if (ok)
+   return true;
+   } while (--retry);
+   return false;
+}
+
+static inline bool rdrand_int(unsigned int *v)
+{
+   bool ok;
+   unsigned int retry = RDRAND_RETRY_LOOPS;
+   do {
+   asm volatile(RDRAND_INT "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   if (ok)
+   return true;
+   } while (--retry);
+   return false;
 }
 
-/* A single attempt at RDSEED */
 static inline bool rdseed_long(unsigned long *v)
 {
bool ok;
asm volatile(RDSEED_LONG "\n\t"
-"setc %0"
-: "=qm" (ok), "=a" (*v));
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
return ok;
 }
 
-#define GET_RANDOM(name, type, rdrand, nop)\
-static inline bool name(type *v)   \
-{  \
-   int ok; \
-   alternative_io("movl $0, %0\n\t"\
-  nop, \
-  "\n1: " rdrand "\n\t"\
-  "jc 2f\n\t"  \
-  "decl %0\n\t"\
-  "jnz 1b\n\t" \
-  "2:",\
-  X86_FEATURE_RDRAND,  \
-  ASM_OUTPUT2("=r" (ok), "=a" (*v)),   \
-  "0" (RDRAND_RETRY_LOOPS));   \
-   return !!ok;\
-}
-
-#define GET_SEED(name, type, rdseed, nop)  \
-static inline bool name(type *v)   \
-{  \
-   bool ok;\
-   alternative_io("movb $0, %0\n\t"\
-  nop, \
-  rdseed "\n\t"\
-  "setc %0",   \
-  X86_FEATURE_RDSEED,  \
-  ASM_OUTPUT2("=q" (ok), "=a" (*v)));  \
-   return ok;  \
+static inline bool rdseed_int(unsigned int *v)
+{
+   bool ok;
+   asm volatile(RDSEED_INT "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   return ok;
+}

[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/rwsem.h>

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  35ccfb7114e2f0f454f264c049b03c31f4c6bbc0
Gitweb: http://git.kernel.org/tip/35ccfb7114e2f0f454f264c049b03c31f4c6bbc0
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:44 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/rwsem.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/rwsem.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-9-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/rwsem.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index c508770..1e8be26 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -149,10 +149,10 @@ static inline bool __down_write_trylock(struct 
rw_semaphore *sem)
 LOCK_PREFIX "  cmpxchg  %2,%0\n\t"
 "  jnz  1b\n\t"
 "2:\n\t"
-"  sete %3\n\t"
+CC_SET(e)
 "# ending __down_write_trylock\n\t"
 : "+m" (sem->count), "=" (tmp0), "=" (tmp1),
-  "=qm" (result)
+  CC_OUT(e) (result)
 : "er" (RWSEM_ACTIVE_WRITE_BIAS)
 : "memory", "cc");
return result;


[tip:x86/asm] x86, asm, boot: Use CC_SET()/CC_OUT() in arch/x86/boot/boot.h

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  66928b4eb92dfb6d87c204238057b9278b36452b
Gitweb: http://git.kernel.org/tip/66928b4eb92dfb6d87c204238057b9278b36452b
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:45 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm, boot: Use CC_SET()/CC_OUT() in arch/x86/boot/boot.h

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
arch/x86/boot/boot.h.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-10-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/boot/boot.h | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
index 2edb2d5..7c1495f 100644
--- a/arch/x86/boot/boot.h
+++ b/arch/x86/boot/boot.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "bitops.h"
 #include "ctype.h"
 #include "cpuflags.h"
@@ -179,15 +180,15 @@ static inline void wrgs32(u32 v, addr_t addr)
 static inline bool memcmp_fs(const void *s1, addr_t s2, size_t len)
 {
bool diff;
-   asm volatile("fs; repe; cmpsb; setnz %0"
-: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
+   asm volatile("fs; repe; cmpsb" CC_SET(nz)
+: CC_OUT(nz) (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
 static inline bool memcmp_gs(const void *s1, addr_t s2, size_t len)
 {
bool diff;
-   asm volatile("gs; repe; cmpsb; setnz %0"
-: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
+   asm volatile("gs; repe; cmpsb" CC_SET(nz)
+: CC_OUT(nz) (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
 


[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/bitops.h>

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  86b61240d4c233b440cd29daf0baa440daf4a148
Gitweb: http://git.kernel.org/tip/86b61240d4c233b440cd29daf0baa440daf4a148
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:42 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/bitops.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/bitops.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-7-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/bitops.h | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index ed8f485..68557f52 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -233,8 +233,8 @@ static __always_inline bool __test_and_set_bit(long nr, 
volatile unsigned long *
bool oldbit;
 
asm("bts %2,%1\n\t"
-   "setc %0"
-   : "=qm" (oldbit), ADDR
+   CC_SET(c)
+   : CC_OUT(c) (oldbit), ADDR
: "Ir" (nr));
return oldbit;
 }
@@ -273,8 +273,8 @@ static __always_inline bool __test_and_clear_bit(long nr, 
volatile unsigned long
bool oldbit;
 
asm volatile("btr %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit), ADDR
+CC_SET(c)
+: CC_OUT(c) (oldbit), ADDR
 : "Ir" (nr));
return oldbit;
 }
@@ -285,8 +285,8 @@ static __always_inline bool __test_and_change_bit(long nr, 
volatile unsigned lon
bool oldbit;
 
asm volatile("btc %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit), ADDR
+CC_SET(c)
+: CC_OUT(c) (oldbit), ADDR
 : "Ir" (nr) : "memory");
 
return oldbit;
@@ -316,8 +316,8 @@ static __always_inline bool variable_test_bit(long nr, 
volatile const unsigned l
bool oldbit;
 
asm volatile("bt %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit)
+CC_SET(c)
+: CC_OUT(c) (oldbit)
 : "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;


[tip:x86/asm] x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  ba741e356c49bfce0adcfa851080666870867f6b
Gitweb: http://git.kernel.org/tip/ba741e356c49bfce0adcfa851080666870867f6b
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:41 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

Change the GEN_*_RMWcc() macros to use the CC_SET()/CC_OUT() macros
defined in <asm/asm.h>, and disable the use of asm goto if
__GCC_ASM_FLAG_OUTPUTS__ is enabled.  This allows gcc to receive the
flags output directly in gcc 6+.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-6-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/rmwcc.h | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
index e3264c4..661dd30 100644
--- a/arch/x86/include/asm/rmwcc.h
+++ b/arch/x86/include/asm/rmwcc.h
@@ -1,7 +1,9 @@
 #ifndef _ASM_X86_RMWcc
 #define _ASM_X86_RMWcc
 
-#ifdef CC_HAVE_ASM_GOTO
+#if !defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(CC_HAVE_ASM_GOTO)
+
+/* Use asm goto */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
@@ -19,13 +21,15 @@ cc_label:   
\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))
 
-#else /* !CC_HAVE_ASM_GOTO */
+#else /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+
+/* Use flags output or a set instruction */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
bool c; \
-   asm volatile (fullop "; set" #cc " %1"  \
-   : "+m" (var), "=qm" (c) \
+   asm volatile (fullop ";" CC_SET(cc) \
+   : "+m" (var), CC_OUT(cc) (c)\
: __VA_ARGS__ : "memory");  \
return c;   \
 } while (0)
@@ -36,6 +40,6 @@ do {  
\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
 
-#endif /* CC_HAVE_ASM_GOTO */
+#endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
 
 #endif /* _ASM_X86_RMWcc */


[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/percpu.h>

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  64be6d36f5674f3424d1901772f76e21874f4954
Gitweb: http://git.kernel.org/tip/64be6d36f5674f3424d1901772f76e21874f4954
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:43 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/percpu.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/percpu.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-8-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/percpu.h | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 184d7f3..e02e3f8 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -511,8 +511,9 @@ do {						\
 #define x86_test_and_clear_bit_percpu(bit, var)		\
 ({ \
bool old__; \
-   asm volatile("btr %2,"__percpu_arg(1)"\n\tsetc %0"  \
-: "=qm" (old__), "+m" (var)\
+   asm volatile("btr %2,"__percpu_arg(1)"\n\t" \
+CC_SET(c)  \
+: CC_OUT(c) (old__), "+m" (var)\
 : "dIr" (bit));\
old__;  \
 })
@@ -535,8 +536,8 @@ static inline bool x86_this_cpu_variable_test_bit(int nr,
bool oldbit;
 
asm volatile("bt "__percpu_arg(2)",%1\n\t"
-   "setc %0"
-   : "=qm" (oldbit)
+   CC_SET(c)
+   : CC_OUT(c) (oldbit)
: "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;


[tip:x86/asm] x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  ba741e356c49bfce0adcfa851080666870867f6b
Gitweb: http://git.kernel.org/tip/ba741e356c49bfce0adcfa851080666870867f6b
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:41 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

Change the GEN_*_RMWcc() macros to use the CC_SET()/CC_OUT() macros
defined in <asm/asm.h>, and disable the use of asm goto if
__GCC_ASM_FLAG_OUTPUTS__ is enabled.  This allows gcc to receive the
flags output directly in gcc 6+.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-6-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/rmwcc.h | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
index e3264c4..661dd30 100644
--- a/arch/x86/include/asm/rmwcc.h
+++ b/arch/x86/include/asm/rmwcc.h
@@ -1,7 +1,9 @@
 #ifndef _ASM_X86_RMWcc
 #define _ASM_X86_RMWcc
 
-#ifdef CC_HAVE_ASM_GOTO
+#if !defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(CC_HAVE_ASM_GOTO)
+
+/* Use asm goto */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
@@ -19,13 +21,15 @@ cc_label:				\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))
 
-#else /* !CC_HAVE_ASM_GOTO */
+#else /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+
+/* Use flags output or a set instruction */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
bool c; \
-   asm volatile (fullop "; set" #cc " %1"  \
-   : "+m" (var), "=qm" (c) \
+   asm volatile (fullop ";" CC_SET(cc) \
+   : "+m" (var), CC_OUT(cc) (c)\
: __VA_ARGS__ : "memory");  \
return c;   \
 } while (0)
@@ -36,6 +40,6 @@ do {					\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
 
-#endif /* CC_HAVE_ASM_GOTO */
+#endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
 
 #endif /* _ASM_X86_RMWcc */


[tip:x86/asm] x86, asm: define CC_SET() and CC_OUT() macros

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  ff3554b409b82d349f71e9d7082648b7b0a1a5bb
Gitweb: http://git.kernel.org/tip/ff3554b409b82d349f71e9d7082648b7b0a1a5bb
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:40 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: define CC_SET() and CC_OUT() macros

The CC_SET() and CC_OUT() macros can be used together to take
advantage of the new __GCC_ASM_FLAG_OUTPUTS__ feature in gcc 6+ while
remaining backwards compatible.  CC_SET() generates a SET instruction
on older compilers; CC_OUT() makes sure the output is received in the
correct variable.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-5-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/asm.h | 12 
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index f5063b6..7acb51c 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -42,6 +42,18 @@
 #define _ASM_SI	__ASM_REG(si)
 #define _ASM_DI	__ASM_REG(di)
 
+/*
+ * Macros to generate condition code outputs from inline assembly,
+ * The output operand must be type "bool".
+ */
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+# define CC_SET(c) "\n\t/* output condition code " #c "*/\n"
+# define CC_OUT(c) "=@cc" #c
+#else
+# define CC_SET(c) "\n\tset" #c " %[_cc_" #c "]\n"
+# define CC_OUT(c) [_cc_ ## c] "=qm"
+#endif
+
 /* Exception table entry */
 #ifdef __ASSEMBLY__
 # define _ASM_EXTABLE_HANDLE(from, to, handler)\



[tip:x86/asm] x86, asm: change the GEN_*_RMWcc() macros to not quote the condition

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  18fe58229d80c7f4f138a07e84ba608e1ebd232b
Gitweb: http://git.kernel.org/tip/18fe58229d80c7f4f138a07e84ba608e1ebd232b
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:39 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: change the GEN_*_RMWcc() macros to not quote the condition

Change the lexical definition of the GEN_*_RMWcc() macros to not take
the condition code as a quoted string.  This will help support
changing them to use the new __GCC_ASM_FLAG_OUTPUTS__ feature in a
subsequent patch.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-4-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/bitops.h  | 6 +++---
 arch/x86/include/asm/local.h   | 8 
 arch/x86/include/asm/preempt.h | 2 +-
 arch/x86/include/asm/rmwcc.h   | 4 ++--
 6 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 17d8812..7322c15 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -77,7 +77,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
  */
 static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
 }
 
 /**
@@ -114,7 +114,7 @@ static __always_inline void atomic_dec(atomic_t *v)
  */
 static __always_inline bool atomic_dec_and_test(atomic_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
 }
 
 /**
@@ -127,7 +127,7 @@ static __always_inline bool atomic_dec_and_test(atomic_t *v)
  */
 static __always_inline bool atomic_inc_and_test(atomic_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
 }
 
 /**
@@ -141,7 +141,7 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
  */
 static __always_inline bool atomic_add_negative(int i, atomic_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
 }
 
 /**
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 4f881d7..57bf925 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -72,7 +72,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
  */
 static inline bool atomic64_sub_and_test(long i, atomic64_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", "e");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
 }
 
 /**
@@ -111,7 +111,7 @@ static __always_inline void atomic64_dec(atomic64_t *v)
  */
 static inline bool atomic64_dec_and_test(atomic64_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
 }
 
 /**
@@ -124,7 +124,7 @@ static inline bool atomic64_dec_and_test(atomic64_t *v)
  */
 static inline bool atomic64_inc_and_test(atomic64_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
 }
 
 /**
@@ -138,7 +138,7 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
  */
 static inline bool atomic64_add_negative(long i, atomic64_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", "s");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
 }
 
 /**
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 8cbb7f4..ed8f485 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -203,7 +203,7 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
  */
 static __always_inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", c);
 }
 
 /**
@@ -249,7 +249,7 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
  */
 static __always_inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", c);
 }
 
 /**
@@ -302,7 +302,7 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
  */
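The lexical change matters because an unquoted condition token can be stringified (and token-pasted) by the preprocessor, while a quoted string cannot. A tiny, portable illustration independent of the kernel headers; `SET_INSN`, `CC_NAME` and `demo` are names local to this sketch.

```c
#include <assert.h>
#include <string.h>

/* With an unquoted cc argument the macro can build either form itself: */
#define SET_INSN(cc)	"set" #cc	/* string "sete", "setc", ... */
#define CC_NAME(cc)	op_ ## cc	/* identifier op_e, op_c, ...  */

static const char *op_e = "equal";
static const char *op_c = "carry";

static const char *demo(void)
{
	assert(strcmp(SET_INSN(e), "sete") == 0);
	return CC_NAME(c);	/* token-pastes to op_c */
}
```

Had the caller passed `"e"` instead of `e`, neither `#cc` nor `##` could produce these forms, which is exactly what the CC_SET()/CC_OUT() conversion needs.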
 

[tip:x86/asm] x86, asm: use bool for bitops and other assembly outputs

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  117780eef7740729e803bdcc0d5f2f48137ea8e3
Gitweb: http://git.kernel.org/tip/117780eef7740729e803bdcc0d5f2f48137ea8e3
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:38 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, asm: use bool for bitops and other assembly outputs

The gcc people have confirmed that using "bool" when combined with
inline assembly always is treated as a byte-sized operand that can be
assumed to be 0 or 1, which is exactly what the SET instruction
emits.  Change the output types and intermediate variables of as many
operations as practical to "bool".

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-3-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/boot/bitops.h |  8 +---
 arch/x86/boot/boot.h   |  8 
 arch/x86/boot/string.c |  2 +-
 arch/x86/include/asm/apm.h |  6 +++---
 arch/x86/include/asm/archrandom.h  | 16 
 arch/x86/include/asm/atomic.h  |  8 
 arch/x86/include/asm/atomic64_64.h | 10 +-
 arch/x86/include/asm/bitops.h  | 28 ++--
 arch/x86/include/asm/local.h   |  8 
 arch/x86/include/asm/percpu.h  |  8 
 arch/x86/include/asm/rmwcc.h   |  4 ++--
 arch/x86/include/asm/rwsem.h   | 17 +
 include/linux/random.h | 12 ++--
 13 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/arch/x86/boot/bitops.h b/arch/x86/boot/bitops.h
index 878e4b9..0d41d68 100644
--- a/arch/x86/boot/bitops.h
+++ b/arch/x86/boot/bitops.h
@@ -16,14 +16,16 @@
 #define BOOT_BITOPS_H
 #define _LINUX_BITOPS_H	/* Inhibit inclusion of <linux/bitops.h> */
 
-static inline int constant_test_bit(int nr, const void *addr)
+#include <linux/types.h>
+
+static inline bool constant_test_bit(int nr, const void *addr)
 {
const u32 *p = (const u32 *)addr;
return ((1UL << (nr & 31)) & (p[nr >> 5])) != 0;
 }
-static inline int variable_test_bit(int nr, const void *addr)
+static inline bool variable_test_bit(int nr, const void *addr)
 {
-   u8 v;
+   bool v;
const u32 *p = (const u32 *)addr;
 
asm("btl %2,%1; setc %0" : "=qm" (v) : "m" (*p), "Ir" (nr));
diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
index 9011a88..2edb2d5 100644
--- a/arch/x86/boot/boot.h
+++ b/arch/x86/boot/boot.h
@@ -176,16 +176,16 @@ static inline void wrgs32(u32 v, addr_t addr)
 }
 
 /* Note: these only return true/false, not a signed return value! */
-static inline int memcmp_fs(const void *s1, addr_t s2, size_t len)
+static inline bool memcmp_fs(const void *s1, addr_t s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm volatile("fs; repe; cmpsb; setnz %0"
 : "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
-static inline int memcmp_gs(const void *s1, addr_t s2, size_t len)
+static inline bool memcmp_gs(const void *s1, addr_t s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm volatile("gs; repe; cmpsb; setnz %0"
 : "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c
index 318b846..cc3bd58 100644
--- a/arch/x86/boot/string.c
+++ b/arch/x86/boot/string.c
@@ -17,7 +17,7 @@
 
 int memcmp(const void *s1, const void *s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm("repe; cmpsb; setnz %0"
: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
index 20370c6..93eebc63 100644
--- a/arch/x86/include/asm/apm.h
+++ b/arch/x86/include/asm/apm.h
@@ -45,11 +45,11 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
: "memory", "cc");
 }
 
-static inline u8 apm_bios_call_simple_asm(u32 func, u32 ebx_in,
-   u32 ecx_in, u32 *eax)
+static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+   u32 ecx_in, u32 *eax)
 {
int cx, dx, si;
-   u8  error;
+   bool    error;
 
/*
 * N.B. We do NOT need a cld after the BIOS call
diff --git a/arch/x86/include/asm/archrandom.h b/arch/x86/include/asm/archrandom.h
index 69f1366..ab6f599 100644
--- a/arch/x86/include/asm/archrandom.h
+++ b/arch/x86/include/asm/archrandom.h
@@ -43,7 +43,7 @@
 #ifdef CONFIG_ARCH_RANDOM
 
 /* Instead of arch_get_random_long() when alternatives haven't run. */
-static inline int rdrand_long(unsigned long *v)
+static inline bool rdrand_long(unsigned long *v)
 {
int ok;
asm volatile("1: " RDRAND_LONG "\n\t"
@@ -53,13 +53,13 @@ 

[tip:x86/asm] x86, bitops: remove use of "sbb" to return CF

2016-06-08 Thread tip-bot for H. Peter Anvin
Commit-ID:  2823d4da5d8a0c222747b24eceb65f5b30717d02
Gitweb: http://git.kernel.org/tip/2823d4da5d8a0c222747b24eceb65f5b30717d02
Author: H. Peter Anvin 
AuthorDate: Wed, 8 Jun 2016 12:38:37 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 8 Jun 2016 12:41:20 -0700

x86, bitops: remove use of "sbb" to return CF

Use SETC instead of SBB to return the value of CF from assembly. Using
SETcc enables uniformity with other flags-returning pieces of assembly
code.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465414726-197858-2-git-send-email-...@linux.intel.com
Reviewed-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Acked-by: Peter Zijlstra (Intel) 
---
 arch/x86/include/asm/bitops.h  | 24 
 arch/x86/include/asm/percpu.h  | 12 ++--
 arch/x86/include/asm/signal.h  |  6 +++---
 arch/x86/include/asm/sync_bitops.h | 18 +-
 arch/x86/kernel/vm86_32.c  |  5 +
 5 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 7766d1c..b2b797d 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -230,11 +230,11 @@ test_and_set_bit_lock(long nr, volatile unsigned long *addr)
  */
 static __always_inline int __test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm("bts %2,%1\n\t"
-   "sbb %0,%0"
-   : "=r" (oldbit), ADDR
+   "setc %0"
+   : "=qm" (oldbit), ADDR
: "Ir" (nr));
return oldbit;
 }
@@ -270,11 +270,11 @@ static __always_inline int test_and_clear_bit(long nr, volatile unsigned long *a
  */
 static __always_inline int __test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("btr %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit), ADDR
+"setc %0"
+: "=qm" (oldbit), ADDR
 : "Ir" (nr));
return oldbit;
 }
@@ -282,11 +282,11 @@ static __always_inline int __test_and_clear_bit(long nr, volatile unsigned long
 /* WARNING: non atomic and it can be reordered! */
 static __always_inline int __test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("btc %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit), ADDR
+"setc %0"
+: "=qm" (oldbit), ADDR
 : "Ir" (nr) : "memory");
 
return oldbit;
@@ -313,11 +313,11 @@ static __always_inline int constant_test_bit(long nr, const volatile unsigned lo
 
 static __always_inline int variable_test_bit(long nr, volatile const unsigned long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("bt %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit)
+"setc %0"
+: "=qm" (oldbit)
 : "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index e0ba66c..65039e9 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -510,9 +510,9 @@ do {						\
 /* This is not atomic against other CPUs -- CPU preemption needs to be off */
 #define x86_test_and_clear_bit_percpu(bit, var)		\
 ({ \
-   int old__;  \
-   asm volatile("btr %2,"__percpu_arg(1)"\n\tsbbl %0,%0"   \
-: "=r" (old__), "+m" (var) \
+   unsigned char old__;\
+   asm volatile("btr %2,"__percpu_arg(1)"\n\tsetc %0"  \
+: "=qm" (old__), "+m" (var)\
 : "dIr" (bit));\
old__;  \
 })
@@ -532,11 +532,11 @@ static __always_inline int x86_this_cpu_constant_test_bit(unsigned int nr,
 static inline int x86_this_cpu_variable_test_bit(int nr,
 const unsigned long __percpu *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("bt "__percpu_arg(2)",%1\n\t"
-   "sbb %0,%0"
-   : "=r" (oldbit)
+   "setc %0"
+   : "=qm" (oldbit)
: "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;
diff --git 

 ({ \
-   int old__;  \
-   asm volatile("btr %2,"__percpu_arg(1)"\n\tsbbl %0,%0"   \
-: "=r" (old__), "+m" (var) \
+   unsigned char old__;\
+   asm volatile("btr %2,"__percpu_arg(1)"\n\tsetc %0"  \
+: "=qm" (old__), "+m" (var)\
 : "dIr" (bit));\
old__;  \
 })
@@ -532,11 +532,11 @@ static __always_inline int 
x86_this_cpu_constant_test_bit(unsigned int nr,
 static inline int x86_this_cpu_variable_test_bit(int nr,
 const unsigned long __percpu *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("bt "__percpu_arg(2)",%1\n\t"
-   "sbb %0,%0"
-   : "=r" (oldbit)
+   "setc %0"
+   : "=qm" (oldbit)
: "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;
diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
index 2138c9a..dd1e7d6 100644
--- 

[tip:x86/asm] x86, asm, boot: Use CC_SET()/CC_OUT() in arch/x86/boot/boot.h

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  8d0d5a8abd88fa9671867b8b8ab4ee61b85c0c81
Gitweb: http://git.kernel.org/tip/8d0d5a8abd88fa9671867b8b8ab4ee61b85c0c81
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:09 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm, boot: Use CC_SET()/CC_OUT() in arch/x86/boot/boot.h

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
arch/x86/boot/boot.h.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-11-git-send-email-...@linux.intel.com
---
 arch/x86/boot/boot.h | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
index 2edb2d5..7c1495f 100644
--- a/arch/x86/boot/boot.h
+++ b/arch/x86/boot/boot.h
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include <asm/asm.h>
 #include "bitops.h"
 #include "ctype.h"
 #include "cpuflags.h"
@@ -179,15 +180,15 @@ static inline void wrgs32(u32 v, addr_t addr)
 static inline bool memcmp_fs(const void *s1, addr_t s2, size_t len)
 {
bool diff;
-   asm volatile("fs; repe; cmpsb; setnz %0"
-: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
+   asm volatile("fs; repe; cmpsb" CC_SET(nz)
+: CC_OUT(nz) (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
 static inline bool memcmp_gs(const void *s1, addr_t s2, size_t len)
 {
bool diff;
-   asm volatile("gs; repe; cmpsb; setnz %0"
-: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
+   asm volatile("gs; repe; cmpsb" CC_SET(nz)
+: CC_OUT(nz) (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
 



[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() and static_cpu_has() in archrandom.h

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  b0bdba9825feefa46998f29225736ef9bd77bd2e
Gitweb: http://git.kernel.org/tip/b0bdba9825feefa46998f29225736ef9bd77bd2e
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:08 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: Use CC_SET()/CC_OUT() and static_cpu_has() in archrandom.h

Use CC_SET()/CC_OUT() and static_cpu_has().  This produces code good
enough to eliminate ad hoc use of alternatives in <asm/archrandom.h>,
greatly simplifying the code.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-10-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/archrandom.h | 113 --
 1 file changed, 47 insertions(+), 66 deletions(-)

diff --git a/arch/x86/include/asm/archrandom.h b/arch/x86/include/asm/archrandom.h
index ab6f599..654da36 100644
--- a/arch/x86/include/asm/archrandom.h
+++ b/arch/x86/include/asm/archrandom.h
@@ -40,96 +40,77 @@
 # define RDSEED_LONG   RDSEED_INT
 #endif
 
-#ifdef CONFIG_ARCH_RANDOM
+/* Unconditional execution of RDRAND and RDSEED */
 
-/* Instead of arch_get_random_long() when alternatives haven't run. */
 static inline bool rdrand_long(unsigned long *v)
 {
-   int ok;
-   asm volatile("1: " RDRAND_LONG "\n\t"
-"jc 2f\n\t"
-"decl %0\n\t"
-"jnz 1b\n\t"
-"2:"
-: "=r" (ok), "=a" (*v)
-: "0" (RDRAND_RETRY_LOOPS));
-   return !!ok;
+   bool ok;
+   unsigned int retry = RDRAND_RETRY_LOOPS;
+   do {
+   asm volatile(RDRAND_LONG "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   if (ok)
+   return true;
+   } while (--retry);
+   return false;
+}
+
+static inline bool rdrand_int(unsigned int *v)
+{
+   bool ok;
+   unsigned int retry = RDRAND_RETRY_LOOPS;
+   do {
+   asm volatile(RDRAND_INT "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   if (ok)
+   return true;
+   } while (--retry);
+   return false;
 }
 
-/* A single attempt at RDSEED */
 static inline bool rdseed_long(unsigned long *v)
 {
bool ok;
asm volatile(RDSEED_LONG "\n\t"
-"setc %0"
-: "=qm" (ok), "=a" (*v));
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
return ok;
 }
 
-#define GET_RANDOM(name, type, rdrand, nop)\
-static inline bool name(type *v)   \
-{  \
-   int ok; \
-   alternative_io("movl $0, %0\n\t"\
-  nop, \
-  "\n1: " rdrand "\n\t"\
-  "jc 2f\n\t"  \
-  "decl %0\n\t"\
-  "jnz 1b\n\t" \
-  "2:",\
-  X86_FEATURE_RDRAND,  \
-  ASM_OUTPUT2("=r" (ok), "=a" (*v)),   \
-  "0" (RDRAND_RETRY_LOOPS));   \
-   return !!ok;\
-}
-
-#define GET_SEED(name, type, rdseed, nop)  \
-static inline bool name(type *v)   \
-{  \
-   bool ok;\
-   alternative_io("movb $0, %0\n\t"\
-  nop, \
-  rdseed "\n\t"\
-  "setc %0",   \
-  X86_FEATURE_RDSEED,  \
-  ASM_OUTPUT2("=q" (ok), "=a" (*v)));  \
-   return ok;  \
+static inline bool rdseed_int(unsigned int *v)
+{
+   bool ok;
+   asm volatile(RDSEED_INT "\n\t"
+CC_SET(c)
+: CC_OUT(c) (ok), "=a" (*v));
+   return ok;
 }
 
-#ifdef CONFIG_X86_64
-
-GET_RANDOM(arch_get_random_long, unsigned long, RDRAND_LONG, ASM_NOP5);
-GET_RANDOM(arch_get_random_int, unsigned int, RDRAND_INT, ASM_NOP4);
-
-GET_SEED(arch_get_random_seed_long, unsigned long, RDSEED_LONG, ASM_NOP5);
-GET_SEED(arch_get_random_seed_int, unsigned int, RDSEED_INT, ASM_NOP4);
-
-#else
-
-GET_RANDOM(arch_get_random_long, unsigned long, RDRAND_LONG, ASM_NOP3);

[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/rwsem.h>

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  a7b6cddc8c403195d0052da3bde776ffee2fed10
Gitweb: http://git.kernel.org/tip/a7b6cddc8c403195d0052da3bde776ffee2fed10
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:07 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/rwsem.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/rwsem.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-9-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/rwsem.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index c508770..1e8be26 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -149,10 +149,10 @@ static inline bool __down_write_trylock(struct 
rw_semaphore *sem)
 LOCK_PREFIX "  cmpxchg  %2,%0\n\t"
 "  jnz  1b\n\t"
 "2:\n\t"
-"  sete %3\n\t"
+CC_SET(e)
 "# ending __down_write_trylock\n\t"
 : "+m" (sem->count), "=&a" (tmp0), "=&r" (tmp1),
-  "=qm" (result)
+  CC_OUT(e) (result)
 : "er" (RWSEM_ACTIVE_WRITE_BIAS)
 : "memory", "cc");
return result;


[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/percpu.h>

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  ad5bf6a52b7fac127d126434fdf950e7bd6f7554
Gitweb: http://git.kernel.org/tip/ad5bf6a52b7fac127d126434fdf950e7bd6f7554
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:06 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/percpu.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/percpu.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-8-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/percpu.h | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 184d7f3..e02e3f8 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -511,8 +511,9 @@ do {
\
 #define x86_test_and_clear_bit_percpu(bit, var)
\
 ({ \
bool old__; \
-   asm volatile("btr %2,"__percpu_arg(1)"\n\tsetc %0"  \
-: "=qm" (old__), "+m" (var)\
+   asm volatile("btr %2,"__percpu_arg(1)"\n\t" \
+CC_SET(c)  \
+: CC_OUT(c) (old__), "+m" (var)\
 : "dIr" (bit));\
old__;  \
 })
@@ -535,8 +536,8 @@ static inline bool x86_this_cpu_variable_test_bit(int nr,
bool oldbit;
 
asm volatile("bt "__percpu_arg(2)",%1\n\t"
-   "setc %0"
-   : "=qm" (oldbit)
+   CC_SET(c)
+   : CC_OUT(c) (oldbit)
: "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;



[tip:x86/asm] x86, asm: Use CC_SET()/CC_OUT() in <asm/bitops.h>

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  a2d4cf46b22550edb4d43a46cfa478649ebfe1d7
Gitweb: http://git.kernel.org/tip/a2d4cf46b22550edb4d43a46cfa478649ebfe1d7
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:05 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: Use CC_SET()/CC_OUT() in <asm/bitops.h>

Remove open-coded uses of set instructions to use CC_SET()/CC_OUT() in
<asm/bitops.h>.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-7-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/bitops.h | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index ed8f485..68557f52 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -233,8 +233,8 @@ static __always_inline bool __test_and_set_bit(long nr, 
volatile unsigned long *
bool oldbit;
 
asm("bts %2,%1\n\t"
-   "setc %0"
-   : "=qm" (oldbit), ADDR
+   CC_SET(c)
+   : CC_OUT(c) (oldbit), ADDR
: "Ir" (nr));
return oldbit;
 }
@@ -273,8 +273,8 @@ static __always_inline bool __test_and_clear_bit(long nr, 
volatile unsigned long
bool oldbit;
 
asm volatile("btr %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit), ADDR
+CC_SET(c)
+: CC_OUT(c) (oldbit), ADDR
 : "Ir" (nr));
return oldbit;
 }
@@ -285,8 +285,8 @@ static __always_inline bool __test_and_change_bit(long nr, 
volatile unsigned lon
bool oldbit;
 
asm volatile("btc %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit), ADDR
+CC_SET(c)
+: CC_OUT(c) (oldbit), ADDR
 : "Ir" (nr) : "memory");
 
return oldbit;
@@ -316,8 +316,8 @@ static __always_inline bool variable_test_bit(long nr, 
volatile const unsigned l
bool oldbit;
 
asm volatile("bt %2,%1\n\t"
-"setc %0"
-: "=qm" (oldbit)
+CC_SET(c)
+: CC_OUT(c) (oldbit)
 : "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;



[tip:x86/asm] x86, asm: define CC_SET() and CC_OUT() macros

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  2d81d0e1bd0f48049a6a6e289c937bc24c98649e
Gitweb: http://git.kernel.org/tip/2d81d0e1bd0f48049a6a6e289c937bc24c98649e
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:03 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: define CC_SET() and CC_OUT() macros

The CC_SET() and CC_OUT() macros can be used together to take
advantage of the new __GCC_ASM_FLAG_OUTPUTS__ feature in gcc 6+ while
remaining backwards compatible.  CC_SET() generates a SET instruction
on older compilers; CC_OUT() makes sure the output is received in the
correct variable.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-5-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/asm.h | 12 
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index f5063b6..7acb51c 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -42,6 +42,18 @@
 #define _ASM_SI__ASM_REG(si)
 #define _ASM_DI__ASM_REG(di)
 
+/*
+ * Macros to generate condition code outputs from inline assembly,
+ * The output operand must be type "bool".
+ */
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+# define CC_SET(c) "\n\t/* output condition code " #c "*/\n"
+# define CC_OUT(c) "=@cc" #c
+#else
+# define CC_SET(c) "\n\tset" #c " %[_cc_" #c "]\n"
+# define CC_OUT(c) [_cc_ ## c] "=qm"
+#endif
+
 /* Exception table entry */
 #ifdef __ASSEMBLY__
 # define _ASM_EXTABLE_HANDLE(from, to, handler)\


[tip:x86/asm] x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  560a154bc1fa2143b25b66ca013424a280bd8377
Gitweb: http://git.kernel.org/tip/560a154bc1fa2143b25b66ca013424a280bd8377
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:04 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: change GEN_*_RMWcc() to use CC_SET()/CC_OUT()

Change the GEN_*_RMWcc() macros to use the CC_SET()/CC_OUT() macros
defined in <asm/asm.h>, and disable the use of asm goto if
__GCC_ASM_FLAG_OUTPUTS__ is enabled.  This allows gcc to receive the
flags output directly in gcc 6+.

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-6-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/rmwcc.h | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
index e3264c4..661dd30 100644
--- a/arch/x86/include/asm/rmwcc.h
+++ b/arch/x86/include/asm/rmwcc.h
@@ -1,7 +1,9 @@
 #ifndef _ASM_X86_RMWcc
 #define _ASM_X86_RMWcc
 
-#ifdef CC_HAVE_ASM_GOTO
+#if !defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(CC_HAVE_ASM_GOTO)
+
+/* Use asm goto */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
@@ -19,13 +21,15 @@ cc_label:   
\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %1, " arg0, var, cc, vcon (val))
 
-#else /* !CC_HAVE_ASM_GOTO */
+#else /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+
+/* Use flags output or a set instruction */
 
 #define __GEN_RMWcc(fullop, var, cc, ...)  \
 do {   \
bool c; \
-   asm volatile (fullop "; set" #cc " %1"  \
-   : "+m" (var), "=qm" (c) \
+   asm volatile (fullop ";" CC_SET(cc) \
+   : "+m" (var), CC_OUT(cc) (c)\
: __VA_ARGS__ : "memory");  \
return c;   \
 } while (0)
@@ -36,6 +40,6 @@ do {  
\
 #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
__GEN_RMWcc(op " %2, " arg0, var, cc, vcon (val))
 
-#endif /* CC_HAVE_ASM_GOTO */
+#endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
 
 #endif /* _ASM_X86_RMWcc */




[tip:x86/asm] x86, asm: use bool for bitops and other assembly outputs

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  d3f78b979e4e060c1b36402fa7096a36a9c266da
Gitweb: http://git.kernel.org/tip/d3f78b979e4e060c1b36402fa7096a36a9c266da
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:01 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: use bool for bitops and other assembly outputs

The gcc people have confirmed that using "bool" when combined with
inline assembly always is treated as a byte-sized operand that can be
assumed to be 0 or 1, which is exactly what the SET instruction
emits.  Change the output types and intermediate variables of as many
operations as practical to "bool".

Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/1465342269-492350-3-git-send-email-...@linux.intel.com
---
 arch/x86/boot/bitops.h |  8 +---
 arch/x86/boot/boot.h   |  8 
 arch/x86/boot/string.c |  2 +-
 arch/x86/include/asm/apm.h |  6 +++---
 arch/x86/include/asm/archrandom.h  | 16 
 arch/x86/include/asm/atomic.h  |  8 
 arch/x86/include/asm/atomic64_64.h | 10 +-
 arch/x86/include/asm/bitops.h  | 28 ++--
 arch/x86/include/asm/local.h   |  8 
 arch/x86/include/asm/percpu.h  |  8 
 arch/x86/include/asm/rmwcc.h   |  4 ++--
 arch/x86/include/asm/rwsem.h   | 17 +
 include/linux/random.h | 12 ++--
 13 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/arch/x86/boot/bitops.h b/arch/x86/boot/bitops.h
index 878e4b9..0d41d68 100644
--- a/arch/x86/boot/bitops.h
+++ b/arch/x86/boot/bitops.h
@@ -16,14 +16,16 @@
 #define BOOT_BITOPS_H
 #define _LINUX_BITOPS_H	/* Inhibit inclusion of <linux/bitops.h> */
 
-static inline int constant_test_bit(int nr, const void *addr)
+#include <linux/types.h>
+
+static inline bool constant_test_bit(int nr, const void *addr)
 {
const u32 *p = (const u32 *)addr;
return ((1UL << (nr & 31)) & (p[nr >> 5])) != 0;
 }
-static inline int variable_test_bit(int nr, const void *addr)
+static inline bool variable_test_bit(int nr, const void *addr)
 {
-   u8 v;
+   bool v;
const u32 *p = (const u32 *)addr;
 
asm("btl %2,%1; setc %0" : "=qm" (v) : "m" (*p), "Ir" (nr));
diff --git a/arch/x86/boot/boot.h b/arch/x86/boot/boot.h
index 9011a88..2edb2d5 100644
--- a/arch/x86/boot/boot.h
+++ b/arch/x86/boot/boot.h
@@ -176,16 +176,16 @@ static inline void wrgs32(u32 v, addr_t addr)
 }
 
 /* Note: these only return true/false, not a signed return value! */
-static inline int memcmp_fs(const void *s1, addr_t s2, size_t len)
+static inline bool memcmp_fs(const void *s1, addr_t s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm volatile("fs; repe; cmpsb; setnz %0"
 : "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
 }
-static inline int memcmp_gs(const void *s1, addr_t s2, size_t len)
+static inline bool memcmp_gs(const void *s1, addr_t s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm volatile("gs; repe; cmpsb; setnz %0"
 : "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c
index 318b846..cc3bd58 100644
--- a/arch/x86/boot/string.c
+++ b/arch/x86/boot/string.c
@@ -17,7 +17,7 @@
 
 int memcmp(const void *s1, const void *s2, size_t len)
 {
-   u8 diff;
+   bool diff;
asm("repe; cmpsb; setnz %0"
: "=qm" (diff), "+D" (s1), "+S" (s2), "+c" (len));
return diff;
diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
index 20370c6..93eebc63 100644
--- a/arch/x86/include/asm/apm.h
+++ b/arch/x86/include/asm/apm.h
@@ -45,11 +45,11 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, 
u32 ecx_in,
: "memory", "cc");
 }
 
-static inline u8 apm_bios_call_simple_asm(u32 func, u32 ebx_in,
-   u32 ecx_in, u32 *eax)
+static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+   u32 ecx_in, u32 *eax)
 {
int cx, dx, si;
-   u8  error;
+   boolerror;
 
/*
 * N.B. We do NOT need a cld after the BIOS call
diff --git a/arch/x86/include/asm/archrandom.h 
b/arch/x86/include/asm/archrandom.h
index 69f1366..ab6f599 100644
--- a/arch/x86/include/asm/archrandom.h
+++ b/arch/x86/include/asm/archrandom.h
@@ -43,7 +43,7 @@
 #ifdef CONFIG_ARCH_RANDOM
 
 /* Instead of arch_get_random_long() when alternatives haven't run. */
-static inline int rdrand_long(unsigned long *v)
+static inline bool rdrand_long(unsigned long *v)
 {
int ok;
asm volatile("1: " RDRAND_LONG "\n\t"
@@ -53,13 +53,13 @@ static inline int rdrand_long(unsigned long *v)
 "2:"
 : "=r" (ok), "=a" (*v)
 : "0" 

[tip:x86/asm] x86, asm: change the GEN_*_RMWcc() macros to not quote the condition

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  ebc2b1c0e47ee09960cd2474e3f4091733417f14
Gitweb: http://git.kernel.org/tip/ebc2b1c0e47ee09960cd2474e3f4091733417f14
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:02 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, asm: change the GEN_*_RMWcc() macros to not quote the condition

Change the lexical definition of the GEN_*_RMWcc() macros to not take
the condition code as a quoted string.  This will help support
changing them to use the new __GCC_ASM_FLAG_OUTPUTS__ feature in a
subsequent patch.

Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1465342269-492350-4-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/bitops.h  | 6 +++---
 arch/x86/include/asm/local.h   | 8 
 arch/x86/include/asm/preempt.h | 2 +-
 arch/x86/include/asm/rmwcc.h   | 4 ++--
 6 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 17d8812..7322c15 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -77,7 +77,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
  */
 static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", "e");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
 }
 
 /**
@@ -114,7 +114,7 @@ static __always_inline void atomic_dec(atomic_t *v)
  */
 static __always_inline bool atomic_dec_and_test(atomic_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
 }
 
 /**
@@ -127,7 +127,7 @@ static __always_inline bool atomic_dec_and_test(atomic_t *v)
  */
 static __always_inline bool atomic_inc_and_test(atomic_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
 }
 
 /**
@@ -141,7 +141,7 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
  */
 static __always_inline bool atomic_add_negative(int i, atomic_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", "s");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
 }
 
 /**
diff --git a/arch/x86/include/asm/atomic64_64.h 
b/arch/x86/include/asm/atomic64_64.h
index 4f881d7..57bf925 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -72,7 +72,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
  */
 static inline bool atomic64_sub_and_test(long i, atomic64_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", "e");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
 }
 
 /**
@@ -111,7 +111,7 @@ static __always_inline void atomic64_dec(atomic64_t *v)
  */
 static inline bool atomic64_dec_and_test(atomic64_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
 }
 
 /**
@@ -124,7 +124,7 @@ static inline bool atomic64_dec_and_test(atomic64_t *v)
  */
 static inline bool atomic64_inc_and_test(atomic64_t *v)
 {
-   GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", "e");
+   GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
 }
 
 /**
@@ -138,7 +138,7 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
  */
 static inline bool atomic64_add_negative(long i, atomic64_t *v)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", "s");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
 }
 
 /**
diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 8cbb7f4..ed8f485 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -203,7 +203,7 @@ static __always_inline void change_bit(long nr, volatile 
unsigned long *addr)
  */
 static __always_inline bool test_and_set_bit(long nr, volatile unsigned long 
*addr)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", "c");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "bts", *addr, "Ir", nr, "%0", c);
 }
 
 /**
@@ -249,7 +249,7 @@ static __always_inline bool __test_and_set_bit(long nr, 
volatile unsigned long *
  */
 static __always_inline bool test_and_clear_bit(long nr, volatile unsigned long 
*addr)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", "c");
+   GEN_BINARY_RMWcc(LOCK_PREFIX "btr", *addr, "Ir", nr, "%0", c);
 }
 
 /**
@@ -302,7 +302,7 @@ static __always_inline bool __test_and_change_bit(long nr, 
volatile unsigned lon
  */
 static __always_inline bool test_and_change_bit(long nr, volatile unsigned 
long *addr)
 {
-   GEN_BINARY_RMWcc(LOCK_PREFIX "btc", *addr, "Ir", nr, 

[tip:x86/asm] x86, bitops: remove use of "sbb" to return CF

2016-06-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  4ec1063787c26243ab709165bc7b7771a1c19bc6
Gitweb: http://git.kernel.org/tip/4ec1063787c26243ab709165bc7b7771a1c19bc6
Author: H. Peter Anvin 
AuthorDate: Tue, 7 Jun 2016 16:31:00 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 7 Jun 2016 16:36:42 -0700

x86, bitops: remove use of "sbb" to return CF

Use SETC instead of SBB to return the value of CF from assembly. Using
SETcc enables uniformity with other flags-returning pieces of assembly
code.

Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1465342269-492350-2-git-send-email-...@linux.intel.com
---
 arch/x86/include/asm/bitops.h  | 24 
 arch/x86/include/asm/percpu.h  | 12 ++--
 arch/x86/include/asm/signal.h  |  6 +++---
 arch/x86/include/asm/sync_bitops.h | 18 +-
 arch/x86/kernel/vm86_32.c  |  5 +
 5 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index 7766d1c..b2b797d 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -230,11 +230,11 @@ test_and_set_bit_lock(long nr, volatile unsigned long 
*addr)
  */
 static __always_inline int __test_and_set_bit(long nr, volatile unsigned long 
*addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm("bts %2,%1\n\t"
-   "sbb %0,%0"
-   : "=r" (oldbit), ADDR
+   "setc %0"
+   : "=qm" (oldbit), ADDR
: "Ir" (nr));
return oldbit;
 }
@@ -270,11 +270,11 @@ static __always_inline int test_and_clear_bit(long nr, 
volatile unsigned long *a
  */
 static __always_inline int __test_and_clear_bit(long nr, volatile unsigned 
long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("btr %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit), ADDR
+"setc %0"
+: "=qm" (oldbit), ADDR
 : "Ir" (nr));
return oldbit;
 }
@@ -282,11 +282,11 @@ static __always_inline int __test_and_clear_bit(long nr, 
volatile unsigned long
 /* WARNING: non atomic and it can be reordered! */
 static __always_inline int __test_and_change_bit(long nr, volatile unsigned 
long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("btc %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit), ADDR
+"setc %0"
+: "=qm" (oldbit), ADDR
 : "Ir" (nr) : "memory");
 
return oldbit;
@@ -313,11 +313,11 @@ static __always_inline int constant_test_bit(long nr, 
const volatile unsigned lo
 
 static __always_inline int variable_test_bit(long nr, volatile const unsigned 
long *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("bt %2,%1\n\t"
-"sbb %0,%0"
-: "=r" (oldbit)
+"setc %0"
+: "=qm" (oldbit)
 : "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index e0ba66c..65039e9 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -510,9 +510,9 @@ do {
\
 /* This is not atomic against other CPUs -- CPU preemption needs to be off */
 #define x86_test_and_clear_bit_percpu(bit, var)
\
 ({ \
-   int old__;  \
-   asm volatile("btr %2,"__percpu_arg(1)"\n\tsbbl %0,%0"   \
-: "=r" (old__), "+m" (var) \
+   unsigned char old__;\
+   asm volatile("btr %2,"__percpu_arg(1)"\n\tsetc %0"  \
+: "=qm" (old__), "+m" (var)\
 : "dIr" (bit));\
old__;  \
 })
@@ -532,11 +532,11 @@ static __always_inline int 
x86_this_cpu_constant_test_bit(unsigned int nr,
 static inline int x86_this_cpu_variable_test_bit(int nr,
 const unsigned long __percpu *addr)
 {
-   int oldbit;
+   unsigned char oldbit;
 
asm volatile("bt "__percpu_arg(2)",%1\n\t"
-   "sbb %0,%0"
-   : "=r" (oldbit)
+   "setc %0"
+   : "=qm" (oldbit)
: "m" (*(unsigned long *)addr), "Ir" (nr));
 
return oldbit;
diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
index 2138c9a..dd1e7d6 100644
--- a/arch/x86/include/asm/signal.h
+++ 

[tip:x86/apic] x86/apic/vsmp: Make is_vsmp_box() static

2014-08-01 Thread tip-bot for H. Peter Anvin
Commit-ID:  5e3bf215f4f2efc0af89e6dbc5da789744aeb5d7
Gitweb: http://git.kernel.org/tip/5e3bf215f4f2efc0af89e6dbc5da789744aeb5d7
Author: H. Peter Anvin 
AuthorDate: Fri, 1 Aug 2014 14:47:56 -0700
Committer:  H. Peter Anvin 
CommitDate: Fri, 1 Aug 2014 15:09:45 -0700

x86/apic/vsmp: Make is_vsmp_box() static

Since checkin

411cf9ee2946 x86, vsmp: Remove is_vsmp_box() from apic_is_clustered_box()

... is_vsmp_box() is only used in vsmp_64.c and does not have any
header file declaring it, so make it explicitly static.

Reported-by: kbuild test robot 
Cc: Shai Fultheim 
Cc: Oren Twaig 
Link: 
http://lkml.kernel.org/r/1404036068-11674-1-git-send-email-o...@scalemp.com
Signed-off-by: H. Peter Anvin 
---
 arch/x86/kernel/vsmp_64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index b99b9ad..ee22c1d 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -152,7 +152,7 @@ static void __init detect_vsmp_box(void)
is_vsmp = 1;
 }
 
-int is_vsmp_box(void)
+static int is_vsmp_box(void)
 {
if (is_vsmp != -1)
return is_vsmp;
@@ -166,7 +166,7 @@ int is_vsmp_box(void)
 static void __init detect_vsmp_box(void)
 {
 }
-int is_vsmp_box(void)
+static int is_vsmp_box(void)
 {
return 0;
 }
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[tip:x86/vdso] x86, vdso: Remove one final use of htole16()

2014-06-10 Thread tip-bot for H. Peter Anvin
Commit-ID:  4d048b0255e3dd4fb001c5f1f609fb67463d04d6
Gitweb: http://git.kernel.org/tip/4d048b0255e3dd4fb001c5f1f609fb67463d04d6
Author: H. Peter Anvin 
AuthorDate: Tue, 10 Jun 2014 14:25:26 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 10 Jun 2014 15:55:05 -0700

x86, vdso: Remove one final use of htole16()

One final use of the macros from <endian.h>, which are not available on
older systems.  In this case we had one sole case of *writing* a
littleendian number, but the number is SHN_UNDEF, which is the constant
zero, so rather than dealing with the general case of littleendian
puts here, just document that the constant is zero and be done with
it.

Reported-and-Tested-by: Andrew Morton 
Signed-off-by: H. Peter Anvin 
Cc: Andy Lutomirski 
Link: 
http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62...@linux-foundation.org
---
 arch/x86/vdso/vdso2c.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index 8a07463..d9f6f61 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -116,7 +116,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, 
const char *name)
hdr->e_shoff = 0;
hdr->e_shentsize = 0;
hdr->e_shnum = 0;
-   hdr->e_shstrndx = htole16(SHN_UNDEF);
+   hdr->e_shstrndx = SHN_UNDEF; /* SHN_UNDEF == 0 */
 
if (!name) {
fwrite(addr, load_size, 1, outfile);


[tip:x86/vdso] x86, vdso: Remove one final use of htole16()

2014-06-10 Thread tip-bot for H. Peter Anvin
Commit-ID:  15ea1a528e08c6bc322f10686ec8d73ba413b941
Gitweb: http://git.kernel.org/tip/15ea1a528e08c6bc322f10686ec8d73ba413b941
Author: H. Peter Anvin 
AuthorDate: Tue, 10 Jun 2014 14:25:26 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 10 Jun 2014 14:25:26 -0700

x86, vdso: Remove one final use of htole16()

One final use of the macros from <endian.h>, which are not available on
older systems.  In this case we had one sole case of *writing* a
littleendian number, but the number is SHN_UNDEF, which is the constant
zero, so rather than dealing with the general case of littleendian
puts here, just document that the constant is zero and be done with
it.

Reported-by: Andrew Morton 
Signed-off-by: H. Peter Anvin 
Cc: Andy Lutomirski 
Link: 
http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62...@linux-foundation.org
---
 arch/x86/vdso/vdso2c.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index 8a07463..d9f6f61 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -116,7 +116,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, 
const char *name)
hdr->e_shoff = 0;
hdr->e_shentsize = 0;
hdr->e_shnum = 0;
-   hdr->e_shstrndx = htole16(SHN_UNDEF);
+   hdr->e_shstrndx = SHN_UNDEF; /* SHN_UNDEF == 0 */
 
if (!name) {
fwrite(addr, load_size, 1, outfile);


[tip:x86/vdso] x86, vdso: Remove one final use of htole16()

2014-06-10 Thread tip-bot for H. Peter Anvin
Commit-ID:  15ea1a528e08c6bc322f10686ec8d73ba413b941
Gitweb: http://git.kernel.org/tip/15ea1a528e08c6bc322f10686ec8d73ba413b941
Author: H. Peter Anvin h...@zytor.com
AuthorDate: Tue, 10 Jun 2014 14:25:26 -0700
Committer:  H. Peter Anvin h...@zytor.com
CommitDate: Tue, 10 Jun 2014 14:25:26 -0700

x86, vdso: Remove one final use of htole16()

One final use of the macros from endian.h which are not available on
older system.  In this case we had one sole case of *writing* a
littleendian number, but the number is SHN_UNDEF which is the constant
zero, so rather than dealing with the general case of littleendian
puts here, just document that the constant is zero and be done with
it.

Reported-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: H. Peter Anvin h...@zytor.com
Cc: Andy Lutomirski l...@amacapital.net
Link: 
http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62...@linux-foundation.org
---
 arch/x86/vdso/vdso2c.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index 8a07463..d9f6f61 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -116,7 +116,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
hdr->e_shoff = 0;
hdr->e_shentsize = 0;
hdr->e_shnum = 0;
-   hdr->e_shstrndx = htole16(SHN_UNDEF);
+   hdr->e_shstrndx = SHN_UNDEF; /* SHN_UNDEF == 0 */
 
if (!name) {
fwrite(addr, load_size, 1, outfile);
--


[tip:x86/vdso] x86, vdso: Remove one final use of htole16()

2014-06-10 Thread tip-bot for H. Peter Anvin
Commit-ID:  4d048b0255e3dd4fb001c5f1f609fb67463d04d6
Gitweb: http://git.kernel.org/tip/4d048b0255e3dd4fb001c5f1f609fb67463d04d6
Author: H. Peter Anvin h...@zytor.com
AuthorDate: Tue, 10 Jun 2014 14:25:26 -0700
Committer:  H. Peter Anvin h...@zytor.com
CommitDate: Tue, 10 Jun 2014 15:55:05 -0700

x86, vdso: Remove one final use of htole16()

One final use of the macros from endian.h which are not available on
older systems.  In this case we had one sole case of *writing* a
littleendian number, but the number is SHN_UNDEF which is the constant
zero, so rather than dealing with the general case of littleendian
puts here, just document that the constant is zero and be done with
it.

Reported-and-Tested-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: H. Peter Anvin h...@zytor.com
Cc: Andy Lutomirski l...@amacapital.net
Link: 
http://lkml.kernel.org/r/20140610135051.c3c34165f73d67d218b62...@linux-foundation.org
---
 arch/x86/vdso/vdso2c.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index 8a07463..d9f6f61 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -116,7 +116,7 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
hdr->e_shoff = 0;
hdr->e_shentsize = 0;
hdr->e_shnum = 0;
-   hdr->e_shstrndx = htole16(SHN_UNDEF);
+   hdr->e_shstrndx = SHN_UNDEF; /* SHN_UNDEF == 0 */
 
if (!name) {
fwrite(addr, load_size, 1, outfile);
--


[tip:x86/vdso] x86, vdso: Use <tools/le_byteshift.h> for littleendian access

2014-06-06 Thread tip-bot for H. Peter Anvin
Commit-ID:  bdfb9bcc25005d06a9c301830bdeb7ca5a0b6ef7
Gitweb: http://git.kernel.org/tip/bdfb9bcc25005d06a9c301830bdeb7ca5a0b6ef7
Author: H. Peter Anvin 
AuthorDate: Fri, 6 Jun 2014 14:30:37 -0700
Committer:  H. Peter Anvin 
CommitDate: Fri, 6 Jun 2014 14:54:54 -0700

x86, vdso: Use <tools/le_byteshift.h> for littleendian access

There are no standard functions for littleendian data (unlike
bigendian data.)  Thus, use <tools/le_byteshift.h> to access
littleendian data members.  Those are fairly inefficient, but it
doesn't matter for this purpose (and can be optimized later.)  This
avoids portability problems.

Reported-by: Andrew Morton 
Signed-off-by: H. Peter Anvin 
Tested-by: Andy Lutomirski 
Link: 
http://lkml.kernel.org/r/20140606140017.afb7f91142f66cb3dd13c...@linux-foundation.org
---
 arch/x86/vdso/Makefile |  1 +
 arch/x86/vdso/vdso2c.c | 10 
 arch/x86/vdso/vdso2c.h | 62 +-
 3 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 895d4b1..9769df0 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -59,6 +59,7 @@ VDSO_LDFLAGS_vdso.lds = -m64 -Wl,-soname=linux-vdso.so.1 \
 $(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE
$(call if_changed,vdso)
 
+HOST_EXTRACFLAGS += -I$(srctree)/tools/include
 hostprogs-y+= vdso2c
 
 quiet_cmd_vdso2c = VDSO2C  $@
diff --git a/arch/x86/vdso/vdso2c.c b/arch/x86/vdso/vdso2c.c
index deabaf5..450ac6e 100644
--- a/arch/x86/vdso/vdso2c.c
+++ b/arch/x86/vdso/vdso2c.c
@@ -11,6 +11,8 @@
 #include <sys/mman.h>
 #include <sys/types.h>
 
+#include <tools/le_byteshift.h>
+
 #include <linux/elf.h>
 #include <linux/types.h>
 
@@ -56,12 +58,12 @@ static void fail(const char *format, ...)
  */
 #define GLE(x, bits, ifnot)\
__builtin_choose_expr(  \
-   (sizeof(x) == bits/8),  \
-   (__typeof__(x))le##bits##toh(x), ifnot)
+   (sizeof(*(x)) == bits/8),   \
+   (__typeof__(*(x)))get_unaligned_le##bits(x), ifnot)
 
-extern void bad_get_le(uint64_t);
+extern void bad_get_le(void);
 #define LAST_LE(x) \
-   __builtin_choose_expr(sizeof(x) == 1, (x), bad_get_le(x))
+   __builtin_choose_expr(sizeof(*(x)) == 1, *(x), bad_get_le())
 
 #define GET_LE(x)  \
GLE(x, 64, GLE(x, 32, GLE(x, 16, LAST_LE(x
diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index d1e99e1..8a07463 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -18,27 +18,27 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
const char *secstrings;
uint64_t syms[NSYMS] = {};
 
-   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(hdr->e_phoff));
+   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(&hdr->e_phoff));
 
/* Walk the segment table. */
-   for (i = 0; i < GET_LE(hdr->e_phnum); i++) {
-   if (GET_LE(pt[i].p_type) == PT_LOAD) {
+   for (i = 0; i < GET_LE(&hdr->e_phnum); i++) {
+   if (GET_LE(&pt[i].p_type) == PT_LOAD) {
if (found_load)
fail("multiple PT_LOAD segs\n");
 
-   if (GET_LE(pt[i].p_offset) != 0 ||
-   GET_LE(pt[i].p_vaddr) != 0)
+   if (GET_LE(&pt[i].p_offset) != 0 ||
+   GET_LE(&pt[i].p_vaddr) != 0)
fail("PT_LOAD in wrong place\n");
 
-   if (GET_LE(pt[i].p_memsz) != GET_LE(pt[i].p_filesz))
+   if (GET_LE(&pt[i].p_memsz) != GET_LE(&pt[i].p_filesz))
fail("cannot handle memsz != filesz\n");
 
-   load_size = GET_LE(pt[i].p_memsz);
+   load_size = GET_LE(&pt[i].p_memsz);
found_load = 1;
-   } else if (GET_LE(pt[i].p_type) == PT_DYNAMIC) {
-   dyn = addr + GET_LE(pt[i].p_offset);
-   dyn_end = addr + GET_LE(pt[i].p_offset) +
-   GET_LE(pt[i].p_memsz);
+   } else if (GET_LE(&pt[i].p_type) == PT_DYNAMIC) {
+   dyn = addr + GET_LE(&pt[i].p_offset);
+   dyn_end = addr + GET_LE(&pt[i].p_offset) +
+   GET_LE(&pt[i].p_memsz);
}
}
if (!found_load)
@@ -47,24 +47,24 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
 
/* Walk the dynamic table */
for (i = 0; dyn + i < dyn_end &&
-GET_LE(dyn[i].d_tag) != DT_NULL; i++) {
-   typeof(dyn[i].d_tag) tag = GET_LE(dyn[i].d_tag);
+     GET_LE(&dyn[i].d_tag) != DT_NULL; i++) {
+   typeof(dyn[i].d_tag) tag = GET_LE(&dyn[i].d_tag);
if (tag == DT_REL 

[tip:x86/build] x86, build: Change code16gcc.h from a C header to an assembly header

2014-06-04 Thread tip-bot for H. Peter Anvin
Commit-ID:  a9cfccee6604854aebc70215610b9788667f4fec
Gitweb: http://git.kernel.org/tip/a9cfccee6604854aebc70215610b9788667f4fec
Author: H. Peter Anvin 
AuthorDate: Wed, 4 Jun 2014 13:16:48 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 4 Jun 2014 13:16:48 -0700

x86, build: Change code16gcc.h from a C header to an assembly header

By changing code16gcc.h from a C header to an assembly header and using
the -Wa,... option to gcc to force it to be added to the assembly
input, we can avoid the problems with gcc reordering code bits on us.

If we have -m16, we still use it, of course.

Suggested-by: Kevin O'Connor 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/n/tip-xw8ibgdemucl9fz3i1bym...@git.kernel.org
---
 arch/x86/Makefile |  9 +++--
 arch/x86/boot/code16gcc.h | 24 ++--
 2 files changed, 13 insertions(+), 20 deletions(-)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 602f57e..a98cc90 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -15,12 +15,9 @@ endif
 # that way we can complain to the user if the CPU is insufficient.
 #
 # The -m16 option is supported by GCC >= 4.9 and clang >= 3.5. For
-# older versions of GCC, we need to play evil and unreliable tricks to
-# attempt to ensure that our asm(".code16gcc") is first in the asm
-# output.
-CODE16GCC_CFLAGS := -m32 -include $(srctree)/arch/x86/boot/code16gcc.h \
-   $(call cc-option, -fno-toplevel-reorder,\
- $(call cc-option, -fno-unit-at-a-time))
+# older versions of GCC, include an *assembly* header to make sure that
+# gcc doesn't play any games behind our back.
+CODE16GCC_CFLAGS := -m32 -Wa,$(srctree)/arch/x86/boot/code16gcc.h
 M16_CFLAGS  := $(call cc-option, -m16, $(CODE16GCC_CFLAGS))
 
 REALMODE_CFLAGS:= $(M16_CFLAGS) -g -Os -D__KERNEL__ \
diff --git a/arch/x86/boot/code16gcc.h b/arch/x86/boot/code16gcc.h
index d93e480..5ff4265 100644
--- a/arch/x86/boot/code16gcc.h
+++ b/arch/x86/boot/code16gcc.h
@@ -1,15 +1,11 @@
-/*
- * code16gcc.h
- *
- * This file is -include'd when compiling 16-bit C code.
- * Note: this asm() needs to be emitted before gcc emits any code.
- * Depending on gcc version, this requires -fno-unit-at-a-time or
- * -fno-toplevel-reorder.
- *
- * Hopefully gcc will eventually have a real -m16 option so we can
- * drop this hack long term.
- */
+#
+# code16gcc.h
+#
+# This file is added to the assembler via -Wa when compiling 16-bit C code.
+# This is done this way instead via asm() to make sure gcc does not reorder
+# things around us.
+#
+# gcc 4.9+ has a real -m16 option so we can drop this hack long term.
+#
 
-#ifndef __ASSEMBLY__
-asm(".code16gcc");
-#endif
+   .code16gcc
--



[tip:x86/vdso] x86/vdso, build: Make LE access macros clearer, host-safe

2014-05-31 Thread tip-bot for H. Peter Anvin
Commit-ID:  c191920f737a09a7252088f018f6747f0d2f484d
Gitweb: http://git.kernel.org/tip/c191920f737a09a7252088f018f6747f0d2f484d
Author: H. Peter Anvin 
AuthorDate: Fri, 30 May 2014 17:03:22 -0700
Committer:  H. Peter Anvin 
CommitDate: Sat, 31 May 2014 03:35:27 -0700

x86/vdso, build: Make LE access macros clearer, host-safe

Make it a little clearer what the littleendian access macros in
vdso2c.[ch] actually do.  This way they can probably also be moved to
a central location (e.g. tools/include) for the benefit of other host
tools.

We should avoid implementation namespace symbols when writing code
that is compiled for the compiler host, so avoid names starting with
double underscore or underscore-capital.

Signed-off-by: H. Peter Anvin 
Cc: Andy Lutomirski 
Link: 
http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.l...@amacapital.net
---
 arch/x86/vdso/vdso2c.c | 16 ++---
 arch/x86/vdso/vdso2c.h | 65 ++
 2 files changed, 42 insertions(+), 39 deletions(-)

diff --git a/arch/x86/vdso/vdso2c.c b/arch/x86/vdso/vdso2c.c
index de19ced..deabaf5 100644
--- a/arch/x86/vdso/vdso2c.c
+++ b/arch/x86/vdso/vdso2c.c
@@ -54,17 +54,17 @@ static void fail(const char *format, ...)
 /*
  * Evil macros to do a little-endian read.
  */
-#define __GET_TYPE(x, type, bits, ifnot)   \
+#define GLE(x, bits, ifnot)\
__builtin_choose_expr(  \
-   __builtin_types_compatible_p(typeof(x), type),  \
-   le##bits##toh((x)), ifnot)
+   (sizeof(x) == bits/8),  \
+   (__typeof__(x))le##bits##toh(x), ifnot)
 
-extern void bad_get(uint64_t);
+extern void bad_get_le(uint64_t);
+#define LAST_LE(x) \
+   __builtin_choose_expr(sizeof(x) == 1, (x), bad_get_le(x))
 
-#define GET(x) \
-   __GET_TYPE((x), __u32, 32, __GET_TYPE((x), __u64, 64,   \
-   __GET_TYPE((x), __s32, 32, __GET_TYPE((x), __s64, 64,   \
-   __GET_TYPE((x), __u16, 16, bad_get(x))
+#define GET_LE(x)  \
+   GLE(x, 64, GLE(x, 32, GLE(x, 16, LAST_LE(x
 
 #define NSYMS (sizeof(required_syms) / sizeof(required_syms[0]))
 
diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index f0475da..d1e99e1 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -18,27 +18,27 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
const char *secstrings;
uint64_t syms[NSYMS] = {};
 
-   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET(hdr->e_phoff));
+   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(hdr->e_phoff));
 
/* Walk the segment table. */
-   for (i = 0; i < GET(hdr->e_phnum); i++) {
-   if (GET(pt[i].p_type) == PT_LOAD) {
+   for (i = 0; i < GET_LE(hdr->e_phnum); i++) {
+   if (GET_LE(pt[i].p_type) == PT_LOAD) {
if (found_load)
fail("multiple PT_LOAD segs\n");
 
-   if (GET(pt[i].p_offset) != 0 ||
-   GET(pt[i].p_vaddr) != 0)
+   if (GET_LE(pt[i].p_offset) != 0 ||
+   GET_LE(pt[i].p_vaddr) != 0)
fail("PT_LOAD in wrong place\n");
 
-   if (GET(pt[i].p_memsz) != GET(pt[i].p_filesz))
+   if (GET_LE(pt[i].p_memsz) != GET_LE(pt[i].p_filesz))
fail("cannot handle memsz != filesz\n");
 
-   load_size = GET(pt[i].p_memsz);
+   load_size = GET_LE(pt[i].p_memsz);
found_load = 1;
-   } else if (GET(pt[i].p_type) == PT_DYNAMIC) {
-   dyn = addr + GET(pt[i].p_offset);
-   dyn_end = addr + GET(pt[i].p_offset) +
-   GET(pt[i].p_memsz);
+   } else if (GET_LE(pt[i].p_type) == PT_DYNAMIC) {
+   dyn = addr + GET_LE(pt[i].p_offset);
+   dyn_end = addr + GET_LE(pt[i].p_offset) +
+   GET_LE(pt[i].p_memsz);
}
}
if (!found_load)
@@ -46,48 +46,51 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
data_size = (load_size + 4095) / 4096 * 4096;
 
/* Walk the dynamic table */
-   for (i = 0; dyn + i < dyn_end && GET(dyn[i].d_tag) != DT_NULL; i++) {
-   typeof(dyn[i].d_tag) tag = GET(dyn[i].d_tag);
+   for (i = 0; dyn + i < dyn_end &&
+GET_LE(dyn[i].d_tag) != DT_NULL; i++) {
+   typeof(dyn[i].d_tag) tag = GET_LE(dyn[i].d_tag);
 


[tip:x86/xsave] x86/xsave: Make it clear that the XSAVE macros use (%edi)/(%rdi)

2014-05-30 Thread tip-bot for H. Peter Anvin
Commit-ID:  c9e5a5a7034146493386d985ff432aed8059929a
Gitweb: http://git.kernel.org/tip/c9e5a5a7034146493386d985ff432aed8059929a
Author: H. Peter Anvin 
AuthorDate: Fri, 30 May 2014 08:19:21 -0700
Committer:  H. Peter Anvin 
CommitDate: Fri, 30 May 2014 08:19:21 -0700

x86/xsave: Make it clear that the XSAVE macros use (%edi)/(%rdi)

The XSAVE instruction family takes a memory argument.  The macros use
(%edi)/(%rdi) as that memory argument - make that clear to the reader.

Signed-off-by: H. Peter Anvin 
Cc: Fenghua Yu 
Link: 
http://lkml.kernel.org/r/1401387164-43416-7-git-send-email-fenghua...@intel.com
---
 arch/x86/include/asm/xsave.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h
index 1ba577c..bbebd6e 100644
--- a/arch/x86/include/asm/xsave.h
+++ b/arch/x86/include/asm/xsave.h
@@ -52,6 +52,7 @@ extern void xsave_init(void);
 extern void update_regset_xstate_info(unsigned int size, u64 xstate_mask);
 extern int init_fpu(struct task_struct *child);
 
+/* These macros all use (%edi)/(%rdi) as the single memory argument. */
 #define XSAVE  ".byte " REX_PREFIX "0x0f,0xae,0x27"
 #define XSAVEOPT   ".byte " REX_PREFIX "0x0f,0xae,0x37"
 #define XSAVES ".byte " REX_PREFIX "0x0f,0xc7,0x2f"
--


[tip:x86/vdso] x86/vdso, build: Make LE access macros clearer, host-safe

2014-05-30 Thread tip-bot for H. Peter Anvin
Commit-ID:  e002e99ea4b07c8446d8e1ca892c60f44630643e
Gitweb: http://git.kernel.org/tip/e002e99ea4b07c8446d8e1ca892c60f44630643e
Author: H. Peter Anvin 
AuthorDate: Fri, 30 May 2014 17:03:22 -0700
Committer:  H. Peter Anvin 
CommitDate: Fri, 30 May 2014 17:03:22 -0700

x86/vdso, build: Make LE access macros clearer, host-safe

Make it a little clearer what the littleendian access macros in
vdso2c.[ch] actually do.  This way they can probably also be moved to
a central location (e.g. tools/include) for the benefit of other host
tools.

We should avoid implementation namespace symbols when writing code
that is compiled for the compiler host, so avoid names starting with
double underscore or underscore-capital.

Signed-off-by: H. Peter Anvin 
Cc: Andy Lutomirski 
Link: 
http://lkml.kernel.org/r/2cf258df123cb24bad63c274c8563c050547d99d.1401464755.git.l...@amacapital.net
---
 arch/x86/vdso/vdso2c.c | 16 ++---
 arch/x86/vdso/vdso2c.h | 65 ++
 2 files changed, 42 insertions(+), 39 deletions(-)

diff --git a/arch/x86/vdso/vdso2c.c b/arch/x86/vdso/vdso2c.c
index de19ced..a09d5a8 100644
--- a/arch/x86/vdso/vdso2c.c
+++ b/arch/x86/vdso/vdso2c.c
@@ -54,17 +54,17 @@ static void fail(const char *format, ...)
 /*
  * Evil macros to do a little-endian read.
  */
-#define __GET_TYPE(x, type, bits, ifnot)   \
+#define GLE(x, bits, ifnot)\
__builtin_choose_expr(  \
-   __builtin_types_compatible_p(typeof(x), type),  \
-   le##bits##toh((x)), ifnot)
+   (sizeof(x) == bits/8),  \
+   (__typeof__(x))le##bits##toh(x), ifnot)
 
-extern void bad_get(uint64_t);
+extern void bad_get_le(uint64_t);
+#define LAST_LE(x) \
+   __builtin_choose_expr(sizeof(x) == 1, (x), bad_le(x))
 
-#define GET(x) \
-   __GET_TYPE((x), __u32, 32, __GET_TYPE((x), __u64, 64,   \
-   __GET_TYPE((x), __s32, 32, __GET_TYPE((x), __s64, 64,   \
-   __GET_TYPE((x), __u16, 16, bad_get(x))
+#define GET_LE(x)  \
+   GLE(x, 64, GLE(x, 32, GLE(x, 16, LAST_LE(x
 
 #define NSYMS (sizeof(required_syms) / sizeof(required_syms[0]))
 
diff --git a/arch/x86/vdso/vdso2c.h b/arch/x86/vdso/vdso2c.h
index f0475da..d1e99e1 100644
--- a/arch/x86/vdso/vdso2c.h
+++ b/arch/x86/vdso/vdso2c.h
@@ -18,27 +18,27 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
const char *secstrings;
uint64_t syms[NSYMS] = {};
 
-   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET(hdr->e_phoff));
+   Elf_Phdr *pt = (Elf_Phdr *)(addr + GET_LE(hdr->e_phoff));
 
/* Walk the segment table. */
-   for (i = 0; i < GET(hdr->e_phnum); i++) {
-   if (GET(pt[i].p_type) == PT_LOAD) {
+   for (i = 0; i < GET_LE(hdr->e_phnum); i++) {
+   if (GET_LE(pt[i].p_type) == PT_LOAD) {
if (found_load)
fail("multiple PT_LOAD segs\n");
 
-   if (GET(pt[i].p_offset) != 0 ||
-   GET(pt[i].p_vaddr) != 0)
+   if (GET_LE(pt[i].p_offset) != 0 ||
+   GET_LE(pt[i].p_vaddr) != 0)
fail("PT_LOAD in wrong place\n");
 
-   if (GET(pt[i].p_memsz) != GET(pt[i].p_filesz))
+   if (GET_LE(pt[i].p_memsz) != GET_LE(pt[i].p_filesz))
fail("cannot handle memsz != filesz\n");
 
-   load_size = GET(pt[i].p_memsz);
+   load_size = GET_LE(pt[i].p_memsz);
found_load = 1;
-   } else if (GET(pt[i].p_type) == PT_DYNAMIC) {
-   dyn = addr + GET(pt[i].p_offset);
-   dyn_end = addr + GET(pt[i].p_offset) +
-   GET(pt[i].p_memsz);
+   } else if (GET_LE(pt[i].p_type) == PT_DYNAMIC) {
+   dyn = addr + GET_LE(pt[i].p_offset);
+   dyn_end = addr + GET_LE(pt[i].p_offset) +
+   GET_LE(pt[i].p_memsz);
}
}
if (!found_load)
@@ -46,48 +46,51 @@ static void GOFUNC(void *addr, size_t len, FILE *outfile, const char *name)
data_size = (load_size + 4095) / 4096 * 4096;
 
/* Walk the dynamic table */
-   for (i = 0; dyn + i < dyn_end && GET(dyn[i].d_tag) != DT_NULL; i++) {
-   typeof(dyn[i].d_tag) tag = GET(dyn[i].d_tag);
+   for (i = 0; dyn + i < dyn_end &&
+GET_LE(dyn[i].d_tag) != DT_NULL; i++) {
+   typeof(dyn[i].d_tag) tag = GET_LE(dyn[i].d_tag);


[tip:x86/xsave] x86/xsave: Make it clear that the XSAVE macros use (%edi)/(%rdi)

2014-05-30 Thread tip-bot for H. Peter Anvin
Commit-ID:  c9e5a5a7034146493386d985ff432aed8059929a
Gitweb: http://git.kernel.org/tip/c9e5a5a7034146493386d985ff432aed8059929a
Author: H. Peter Anvin h...@linux.intel.com
AuthorDate: Fri, 30 May 2014 08:19:21 -0700
Committer:  H. Peter Anvin h...@linux.intel.com
CommitDate: Fri, 30 May 2014 08:19:21 -0700

x86/xsave: Make it clear that the XSAVE macros use (%edi)/(%rdi)

The XSAVE instruction family takes a memory argument.  The macros use
(%edi)/(%rdi) as that memory argument - make that clear to the reader.

Signed-off-by: H. Peter Anvin h...@linux.intel.com
Cc: Fenghua Yu fenghua...@intel.com
Link: 
http://lkml.kernel.org/r/1401387164-43416-7-git-send-email-fenghua...@intel.com
---
 arch/x86/include/asm/xsave.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h
index 1ba577c..bbebd6e 100644
--- a/arch/x86/include/asm/xsave.h
+++ b/arch/x86/include/asm/xsave.h
@@ -52,6 +52,7 @@ extern void xsave_init(void);
 extern void update_regset_xstate_info(unsigned int size, u64 xstate_mask);
 extern int init_fpu(struct task_struct *child);
 
+/* These macros all use (%edi)/(%rdi) as the single memory argument. */
 #define XSAVE  .byte  REX_PREFIX 0x0f,0xae,0x27
 #define XSAVEOPT   .byte  REX_PREFIX 0x0f,0xae,0x37
 #define XSAVES .byte  REX_PREFIX 0x0f,0xc7,0x2f
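The trailing ModRM byte is what pins these hand-assembled encodings to (%edi)/(%rdi): 0x27, for instance, splits into mod 00, reg 100 (the /4 opcode extension selecting XSAVE) and rm 111, the register-indirect form through rDI. A minimal sketch of that field split, using the standard x86 ModRM layout (helper names are this sketch's own):

```c
#include <assert.h>

/* Standard x86 ModRM byte layout: mod[7:6] reg[5:3] rm[2:0]. */
static unsigned int modrm_mod(unsigned char m) { return m >> 6; }
static unsigned int modrm_reg(unsigned char m) { return (m >> 3) & 7; }
static unsigned int modrm_rm(unsigned char m)  { return m & 7; }

/* mod 00, rm 111 is the (%edi)/(%rdi) register-indirect form. */
static int is_rdi_indirect(unsigned char m)
{
        return modrm_mod(m) == 0 && modrm_rm(m) == 7;
}
```

Decoding the three bytes from the macros this way: 0x27 is /4 (XSAVE), 0x37 is /6 (XSAVEOPT) and 0x2f after 0x0f,0xc7 is /5 (XSAVES), all with rm selecting (%edi)/(%rdi).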
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[tip:x86/urgent] x86, rdrand: When nordrand is specified, disable RDSEED as well

2014-05-11 Thread tip-bot for H. Peter Anvin
Commit-ID:  7a5091d58419b4e5222abce58a40c072786ea1d6
Gitweb: http://git.kernel.org/tip/7a5091d58419b4e5222abce58a40c072786ea1d6
Author: H. Peter Anvin 
AuthorDate: Sun, 11 May 2014 20:25:20 -0700
Committer:  H. Peter Anvin 
CommitDate: Sun, 11 May 2014 20:25:20 -0700

x86, rdrand: When nordrand is specified, disable RDSEED as well

One can logically expect that when the user has specified "nordrand",
the user doesn't want any use of the CPU random number generator,
neither RDRAND nor RDSEED, so disable both.

Reported-by: Stephan Mueller 
Cc: Theodore Ts'o 
Link: http://lkml.kernel.org/r/21542339.0lfnpsy...@myon.chronox.de
Signed-off-by: H. Peter Anvin 
---
 Documentation/kernel-parameters.txt | 8 
 arch/x86/kernel/cpu/rdrand.c| 1 +
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/Documentation/kernel-parameters.txt 
b/Documentation/kernel-parameters.txt
index 4384217..30a8ad0d 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2218,10 +2218,10 @@ bytes respectively. Such letter suffixes can also be 
entirely omitted.
noreplace-smp   [X86-32,SMP] Don't replace SMP instructions
with UP alternatives
 
-   nordrand[X86] Disable the direct use of the RDRAND
-   instruction even if it is supported by the
-   processor.  RDRAND is still available to user
-   space applications.
+   nordrand[X86] Disable kernel use of the RDRAND and
+   RDSEED instructions even if they are supported
+   by the processor.  RDRAND and RDSEED are still
+   available to user space applications.
 
noresume[SWSUSP] Disables resume and restores original swap
space.
diff --git a/arch/x86/kernel/cpu/rdrand.c b/arch/x86/kernel/cpu/rdrand.c
index 384df51..136ac74 100644
--- a/arch/x86/kernel/cpu/rdrand.c
+++ b/arch/x86/kernel/cpu/rdrand.c
@@ -27,6 +27,7 @@
 static int __init x86_rdrand_setup(char *s)
 {
setup_clear_cpu_cap(X86_FEATURE_RDRAND);
+   setup_clear_cpu_cap(X86_FEATURE_RDSEED);
return 1;
 }
 __setup("nordrand", x86_rdrand_setup);
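For context, the two feature bits the handler now clears correspond to CPUID.01H:ECX[30] (RDRAND) and CPUID.(07H,ECX=0):EBX[18] (RDSEED). A user-space sketch of probing them, assuming a GCC/Clang toolchain that provides <cpuid.h> (the helper names are this sketch's own, not kernel API; on non-x86 builds the helpers simply report absence):

```c
#include <assert.h>

#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif

/* CPUID.01H:ECX bit 30 advertises RDRAND. */
static int cpu_has_rdrand(void)
{
#if defined(__x86_64__) || defined(__i386__)
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                return (ecx >> 30) & 1;
#endif
        return 0;
}

/* CPUID.(07H, ECX=0):EBX bit 18 advertises RDSEED. */
static int cpu_has_rdseed(void)
{
#if defined(__x86_64__) || defined(__i386__)
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                return (ebx >> 18) & 1;
#endif
        return 0;
}
```

Note that clearing X86_FEATURE_RDRAND/RDSEED only stops the kernel's own use; as the updated documentation says, user space can still execute the instructions if the CPU supports them.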


[tip:x86/urgent] x86, build: Don't get confused by local symbols

2014-05-05 Thread tip-bot for H. Peter Anvin
Commit-ID:  ac008fe0a3236729751ccde655c215b436dfdaeb
Gitweb: http://git.kernel.org/tip/ac008fe0a3236729751ccde655c215b436dfdaeb
Author: H. Peter Anvin 
AuthorDate: Mon, 5 May 2014 15:23:35 -0700
Committer:  H. Peter Anvin 
CommitDate: Mon, 5 May 2014 15:23:35 -0700

x86, build: Don't get confused by local symbols

arch/x86/crypto/sha1_avx2_x86_64_asm.S introduced _end as a local
symbol, which broke the build under certain circumstances.  Although
the wisdom of _end as a local symbol can definitely be questioned, the
build should not break for that reason.

Thus, filter the output of nm to only get global symbols of
appropriate type.

Reported-by: Andy Lutomirski 
Cc: Chandramouli Narayanan 
Cc: Herbert Xu 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/n/tip-uxm3j3w3odglcwhafwq5t...@git.kernel.org
---
 arch/x86/boot/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index abb9eba..dbe8dd2 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -71,7 +71,7 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
 
 SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
 
-sed-voffset := -e 's/^\([0-9a-fA-F]*\) . \(_text\|_end\)$$/\#define VO_\2 
0x\1/p'
+sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|_end\)$$/\#define 
VO_\2 0x\1/p'
 
 quiet_cmd_voffset = VOFFSET $@
   cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@
@@ -80,7 +80,7 @@ targets += voffset.h
 $(obj)/voffset.h: vmlinux FORCE
$(call if_changed,voffset)
 
-sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . 
\(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|input_data\|_end\|z_.*\)$$/\#define
 ZO_\2 0x\1/p'
+sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] 
\(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|input_data\|_end\|z_.*\)$$/\#define
 ZO_\2 0x\1/p'
 
 quiet_cmd_zoffset = ZOFFSET $@
   cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
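The tightened sed expressions accept only nm records whose type letter is one of the listed global codes, so a local `_end` (lowercase type) no longer shadows the real symbol. The same filter can be sketched in C; the sample nm lines in the assertions are illustrative, not taken from a real vmlinux:

```c
#include <assert.h>
#include <string.h>

/* Global nm(1) symbol-type codes accepted by the sed expressions. */
static int is_global_type(char type)
{
        return strchr("ABCDGRSTVW", type) != NULL;
}

/*
 * An nm line has the form "<hexaddr> <type> <name>".  Return 1 if the
 * line names the wanted symbol with a global type, mirroring what
 * sed-voffset/sed-zoffset match before emitting a #define.
 */
static int want_symbol(const char *nm_line, const char *name)
{
        const char *sp = strchr(nm_line, ' ');

        if (!sp || sp[1] == '\0' || sp[2] != ' ')
                return 0;       /* not "<addr> <type> <name>" shaped */
        return is_global_type(sp[1]) && strcmp(sp + 3, name) == 0;
}
```

With the old `.` wildcard in place of the type class, the second (local, lowercase-type) line below would also have matched, which is exactly the breakage the commit fixes.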



[tip:x86/espfix] x86, espfix: Make it possible to disable 16-bit support

2014-05-04 Thread tip-bot for H. Peter Anvin
Commit-ID:  34273f41d57ee8d854dcd2a1d754cbb546cb548f
Gitweb: http://git.kernel.org/tip/34273f41d57ee8d854dcd2a1d754cbb546cb548f
Author: H. Peter Anvin 
AuthorDate: Sun, 4 May 2014 10:36:22 -0700
Committer:  H. Peter Anvin 
CommitDate: Sun, 4 May 2014 12:27:37 -0700

x86, espfix: Make it possible to disable 16-bit support

Embedded systems, which may be very memory-size-sensitive, are
extremely unlikely to ever encounter any 16-bit software, so make it
a CONFIG_EXPERT option to turn off support for any 16-bit software
whatsoever.

Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
---
 arch/x86/Kconfig   | 23 ++-
 arch/x86/kernel/entry_32.S | 12 
 arch/x86/kernel/entry_64.S |  8 
 arch/x86/kernel/ldt.c  |  5 +
 4 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9a952a5..956c770 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -909,14 +909,27 @@ config VM86
default y
depends on X86_32
---help---
- This option is required by programs like DOSEMU to run 16-bit legacy
- code on X86 processors. It also may be needed by software like
- XFree86 to initialize some video cards via BIOS. Disabling this
- option saves about 6k.
+ This option is required by programs like DOSEMU to run
+ 16-bit real mode legacy code on x86 processors. It also may
+ be needed by software like XFree86 to initialize some video
+ cards via BIOS. Disabling this option saves about 6K.
+
+config X86_16BIT
+   bool "Enable support for 16-bit segments" if EXPERT
+   default y
+   ---help---
+ This option is required by programs like Wine to run 16-bit
+ protected mode legacy code on x86 processors.  Disabling
+ this option saves about 300 bytes on i386, or around 6K text
+ plus 16K runtime memory on x86-64,
+
+config X86_ESPFIX32
+   def_bool y
+   depends on X86_16BIT && X86_32
 
 config X86_ESPFIX64
def_bool y
-   depends on X86_64
+   depends on X86_16BIT && X86_64
 
 config TOSHIBA
tristate "Toshiba Laptop support"
diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index 2780b8f..98313ff 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -527,6 +527,7 @@ syscall_exit:
 restore_all:
TRACE_IRQS_IRET
 restore_all_notrace:
+#ifdef CONFIG_X86_ESPFIX32
movl PT_EFLAGS(%esp), %eax  # mix EFLAGS, SS and CS
# Warning: PT_OLDSS(%esp) contains the wrong/random values if we
# are returning to the kernel.
@@ -537,6 +538,7 @@ restore_all_notrace:
cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
CFI_REMEMBER_STATE
je ldt_ss   # returning to user-space with LDT SS
+#endif
 restore_nocheck:
RESTORE_REGS 4  # skip orig_eax/error_code
 irq_return:
@@ -549,6 +551,7 @@ ENTRY(iret_exc)
 .previous
_ASM_EXTABLE(irq_return,iret_exc)
 
+#ifdef CONFIG_X86_ESPFIX32
CFI_RESTORE_STATE
 ldt_ss:
 #ifdef CONFIG_PARAVIRT
@@ -592,6 +595,7 @@ ldt_ss:
lss (%esp), %esp/* switch to espfix segment */
CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck
+#endif
CFI_ENDPROC
 ENDPROC(system_call)
 
@@ -699,6 +703,7 @@ END(syscall_badsys)
  * the high word of the segment base from the GDT and swiches to the
  * normal stack and adjusts ESP with the matching offset.
  */
+#ifdef CONFIG_X86_ESPFIX32
/* fixup the stack */
mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
@@ -708,8 +713,10 @@ END(syscall_badsys)
pushl_cfi %eax
lss (%esp), %esp/* switch to the normal stack segment */
CFI_ADJUST_CFA_OFFSET -8
+#endif
 .endm
 .macro UNWIND_ESPFIX_STACK
+#ifdef CONFIG_X86_ESPFIX32
movl %ss, %eax
/* see if on espfix stack */
cmpw $__ESPFIX_SS, %ax
@@ -720,6 +727,7 @@ END(syscall_badsys)
/* switch to normal stack */
FIXUP_ESPFIX_STACK
 27:
+#endif
 .endm
 
 /*
@@ -1350,11 +1358,13 @@ END(debug)
 ENTRY(nmi)
RING0_INT_FRAME
ASM_CLAC
+#ifdef CONFIG_X86_ESPFIX32
pushl_cfi %eax
movl %ss, %eax
cmpw $__ESPFIX_SS, %ax
popl_cfi %eax
je nmi_espfix_stack
+#endif
cmpl $ia32_sysenter_target,(%esp)
je nmi_stack_fixup
pushl_cfi %eax
@@ -1394,6 +1404,7 @@ nmi_debug_stack_check:
FIX_STACK 24, nmi_stack_correct, 1
jmp nmi_stack_correct
 
+#ifdef CONFIG_X86_ESPFIX32
 nmi_espfix_stack:
/* We have a RING0_INT_FRAME here.
 *
@@ -1415,6 +1426,7 @@ nmi_espfix_stack:
lss 12+4(%esp), %esp# back to espfix stack
CFI_ADJUST_CFA_OFFSET -24
jmp irq_return
+#endif
CFI_ENDPROC
 END(nmi)
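The espfix32 paths guarded above fire when the kernel would IRET to a stack segment described by the LDT with user RPL. A simplified sketch of that selector test, using the standard x86 segment-selector layout (the helper name is this sketch's own, and the real entry code additionally mixes EFLAGS and the CS check into one comparison):

```c
#include <assert.h>

/*
 * x86 segment selector layout: bits 1:0 are the RPL, bit 2 is the
 * table indicator (0 = GDT, 1 = LDT), bits 15:3 the descriptor index.
 * entry_32.S effectively asks: is the saved SS an LDT selector with
 * user RPL?  ((SEGMENT_LDT << 8) | USER_RPL in the mixed-byte check.)
 */
static int needs_espfix32(unsigned short ss)
{
        unsigned int rpl = ss & 0x3;        /* requested privilege level */
        unsigned int ti  = (ss >> 2) & 1;   /* table indicator, 1 = LDT */

        return ti == 1 && rpl == 3;
}
```

Selector 0x0f (index 1, TI=1, RPL=3) is the classic 16-bit LDT stack case; 0x7b is a normal GDT data selector and takes the fast restore_nocheck path.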
 

[tip:x86/espfix] x86, espfix: Make it possible do disable 16-bit support

2014-05-04 Thread tip-bot for H. Peter Anvin
Commit-ID:  2179e94315ee3bcf406a682b8bbe2f27380bb7e9
Gitweb: http://git.kernel.org/tip/2179e94315ee3bcf406a682b8bbe2f27380bb7e9
Author: H. Peter Anvin 
AuthorDate: Sun, 4 May 2014 10:36:22 -0700
Committer:  H. Peter Anvin 
CommitDate: Sun, 4 May 2014 10:56:32 -0700

x86, espfix: Make it possible do disable 16-bit support

Embedded systems, which may be very memory-size-sensitive, are
extremely unlikely to ever encounter any 16-bit software, so make it
a CONFIG_EXPERT option to turn off support for any 16-bit software
whatsoever.

Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
---
 arch/x86/Kconfig   | 23 ++-
 arch/x86/kernel/entry_32.S | 12 
 arch/x86/kernel/entry_64.S |  8 
 arch/x86/kernel/ldt.c  |  5 +
 4 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9a952a5..956c770 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -909,14 +909,27 @@ config VM86
default y
depends on X86_32
---help---
- This option is required by programs like DOSEMU to run 16-bit legacy
- code on X86 processors. It also may be needed by software like
- XFree86 to initialize some video cards via BIOS. Disabling this
- option saves about 6k.
+ This option is required by programs like DOSEMU to run
+ 16-bit real mode legacy code on x86 processors. It also may
+ be needed by software like XFree86 to initialize some video
+ cards via BIOS. Disabling this option saves about 6K.
+
+config X86_16BIT
+   bool "Enable support for 16-bit segments" if EXPERT
+   default y
+   ---help---
+ This option is required by programs like Wine to run 16-bit
+ protected mode legacy code on x86 processors.  Disabling
+ this option saves about 300 bytes on i386, or around 6K text
+ plus 16K runtime memory on x86-64,
+
+config X86_ESPFIX32
+   def_bool y
+   depends on X86_16BIT && X86_32
 
 config X86_ESPFIX64
def_bool y
-   depends on X86_64
+   depends on X86_16BIT && X86_64
 
 config TOSHIBA
tristate "Toshiba Laptop support"
diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index 2780b8f..98313ff 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -527,6 +527,7 @@ syscall_exit:
 restore_all:
TRACE_IRQS_IRET
 restore_all_notrace:
+#ifdef CONFIG_X86_ESPFIX32
movl PT_EFLAGS(%esp), %eax  # mix EFLAGS, SS and CS
# Warning: PT_OLDSS(%esp) contains the wrong/random values if we
# are returning to the kernel.
@@ -537,6 +538,7 @@ restore_all_notrace:
cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
CFI_REMEMBER_STATE
je ldt_ss   # returning to user-space with LDT SS
+#endif
 restore_nocheck:
RESTORE_REGS 4  # skip orig_eax/error_code
 irq_return:
@@ -549,6 +551,7 @@ ENTRY(iret_exc)
 .previous
_ASM_EXTABLE(irq_return,iret_exc)
 
+#ifdef CONFIG_X86_ESPFIX32
CFI_RESTORE_STATE
 ldt_ss:
 #ifdef CONFIG_PARAVIRT
@@ -592,6 +595,7 @@ ldt_ss:
lss (%esp), %esp/* switch to espfix segment */
CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck
+#endif
CFI_ENDPROC
 ENDPROC(system_call)
 
@@ -699,6 +703,7 @@ END(syscall_badsys)
  * the high word of the segment base from the GDT and swiches to the
  * normal stack and adjusts ESP with the matching offset.
  */
+#ifdef CONFIG_X86_ESPFIX32
/* fixup the stack */
mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
@@ -708,8 +713,10 @@ END(syscall_badsys)
pushl_cfi %eax
lss (%esp), %esp/* switch to the normal stack segment */
CFI_ADJUST_CFA_OFFSET -8
+#endif
 .endm
 .macro UNWIND_ESPFIX_STACK
+#ifdef CONFIG_X86_ESPFIX32
movl %ss, %eax
/* see if on espfix stack */
cmpw $__ESPFIX_SS, %ax
@@ -720,6 +727,7 @@ END(syscall_badsys)
/* switch to normal stack */
FIXUP_ESPFIX_STACK
 27:
+#endif
 .endm
 
 /*
@@ -1350,11 +1358,13 @@ END(debug)
 ENTRY(nmi)
RING0_INT_FRAME
ASM_CLAC
+#ifdef CONFIG_X86_ESPFIX32
pushl_cfi %eax
movl %ss, %eax
cmpw $__ESPFIX_SS, %ax
popl_cfi %eax
je nmi_espfix_stack
+#endif
cmpl $ia32_sysenter_target,(%esp)
je nmi_stack_fixup
pushl_cfi %eax
@@ -1394,6 +1404,7 @@ nmi_debug_stack_check:
FIX_STACK 24, nmi_stack_correct, 1
jmp nmi_stack_correct
 
+#ifdef CONFIG_X86_ESPFIX32
 nmi_espfix_stack:
/* We have a RING0_INT_FRAME here.
 *
@@ -1415,6 +1426,7 @@ nmi_espfix_stack:
lss 12+4(%esp), %esp# back to espfix stack
CFI_ADJUST_CFA_OFFSET -24
jmp irq_return
+#endif
CFI_ENDPROC
 END(nmi)
 

[tip:x86/espfix] x86, espfix: Make espfix64 a Kconfig option, fix UML

2014-05-04 Thread tip-bot for H. Peter Anvin
Commit-ID:  197725de65477bc8509b41388157c1a2283542bb
Gitweb: http://git.kernel.org/tip/197725de65477bc8509b41388157c1a2283542bb
Author: H. Peter Anvin 
AuthorDate: Sun, 4 May 2014 10:00:49 -0700
Committer:  H. Peter Anvin 
CommitDate: Sun, 4 May 2014 10:00:49 -0700

x86, espfix: Make espfix64 a Kconfig option, fix UML

Make espfix64 a hidden Kconfig option.  This fixes the x86-64 UML
build which had broken due to the non-existence of init_espfix_bsp()
in UML: since UML uses its own Kconfig, this option does not appear in
the UML build.

This also makes it possible to make support for 16-bit segments a
configuration option, for the people who want to minimize the size of
the kernel.

Reported-by: Ingo Molnar 
Signed-off-by: H. Peter Anvin 
Cc: Richard Weinberger 
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
---
 arch/x86/Kconfig  | 4 
 arch/x86/kernel/Makefile  | 2 +-
 arch/x86/kernel/smpboot.c | 2 +-
 init/main.c   | 2 +-
 4 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 25d2c6f..9a952a5 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -914,6 +914,10 @@ config VM86
  XFree86 to initialize some video cards via BIOS. Disabling this
  option saves about 6k.
 
+config X86_ESPFIX64
+   def_bool y
+   depends on X86_64
+
 config TOSHIBA
tristate "Toshiba Laptop support"
depends on X86_32
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 1cc3789..491ef3e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -29,7 +29,7 @@ obj-$(CONFIG_X86_64)  += sys_x86_64.o x8664_ksyms_64.o
 obj-y  += syscall_$(BITS).o vsyscall_gtod.o
 obj-$(CONFIG_X86_64)   += vsyscall_64.o
 obj-$(CONFIG_X86_64)   += vsyscall_emu_64.o
-obj-$(CONFIG_X86_64)   += espfix_64.o
+obj-$(CONFIG_X86_ESPFIX64) += espfix_64.o
 obj-$(CONFIG_SYSFS)+= ksysfs.o
 obj-y  += bootflag.o e820.o
 obj-y  += pci-dma.o quirks.o topology.o kdebugfs.o
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 61a5350..5d93ac1 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -246,7 +246,7 @@ static void notrace start_secondary(void *unused)
/*
 * Enable the espfix hack for this CPU
 */
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_X86_ESPFIX64
init_espfix_ap();
 #endif
 
diff --git a/init/main.c b/init/main.c
index 70fc00e..58c132d 100644
--- a/init/main.c
+++ b/init/main.c
@@ -617,7 +617,7 @@ asmlinkage void __init start_kernel(void)
if (efi_enabled(EFI_RUNTIME_SERVICES))
efi_enter_virtual_mode();
 #endif
-#ifdef CONFIG_X86_64
+#ifdef CONFIG_X86_ESPFIX64
/* Should be run before the first non-init thread is created */
init_espfix_bsp();
 #endif




[tip:x86/espfix] x86, espfix: Make it possible to disable 16-bit support

2014-05-04 Thread tip-bot for H. Peter Anvin
Commit-ID:  34273f41d57ee8d854dcd2a1d754cbb546cb548f
Gitweb: http://git.kernel.org/tip/34273f41d57ee8d854dcd2a1d754cbb546cb548f
Author: H. Peter Anvin h...@zytor.com
AuthorDate: Sun, 4 May 2014 10:36:22 -0700
Committer:  H. Peter Anvin h...@zytor.com
CommitDate: Sun, 4 May 2014 12:27:37 -0700

x86, espfix: Make it possible to disable 16-bit support

Embedded systems, which may be very memory-size-sensitive, are
extremely unlikely to ever encounter any 16-bit software, so make it
a CONFIG_EXPERT option to turn off support for any 16-bit software
whatsoever.

Signed-off-by: H. Peter Anvin h...@zytor.com
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
---
 arch/x86/Kconfig   | 23 ++-
 arch/x86/kernel/entry_32.S | 12 
 arch/x86/kernel/entry_64.S |  8 
 arch/x86/kernel/ldt.c  |  5 +
 4 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9a952a5..956c770 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -909,14 +909,27 @@ config VM86
default y
depends on X86_32
---help---
- This option is required by programs like DOSEMU to run 16-bit legacy
- code on X86 processors. It also may be needed by software like
- XFree86 to initialize some video cards via BIOS. Disabling this
- option saves about 6k.
+ This option is required by programs like DOSEMU to run
+ 16-bit real mode legacy code on x86 processors. It also may
+ be needed by software like XFree86 to initialize some video
+ cards via BIOS. Disabling this option saves about 6K.
+
+config X86_16BIT
+   bool "Enable support for 16-bit segments" if EXPERT
+   default y
+   ---help---
+ This option is required by programs like Wine to run 16-bit
+ protected mode legacy code on x86 processors.  Disabling
+ this option saves about 300 bytes on i386, or around 6K text
+ plus 16K runtime memory on x86-64.
+
+config X86_ESPFIX32
+   def_bool y
+   depends on X86_16BIT && X86_32
 
 config X86_ESPFIX64
def_bool y
-   depends on X86_64
+   depends on X86_16BIT && X86_64
 
 config TOSHIBA
tristate Toshiba Laptop support
diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index 2780b8f..98313ff 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -527,6 +527,7 @@ syscall_exit:
 restore_all:
TRACE_IRQS_IRET
 restore_all_notrace:
+#ifdef CONFIG_X86_ESPFIX32
movl PT_EFLAGS(%esp), %eax  # mix EFLAGS, SS and CS
# Warning: PT_OLDSS(%esp) contains the wrong/random values if we
# are returning to the kernel.
@@ -537,6 +538,7 @@ restore_all_notrace:
cmpl $((SEGMENT_LDT << 8) | USER_RPL), %eax
CFI_REMEMBER_STATE
je ldt_ss   # returning to user-space with LDT SS
+#endif
 restore_nocheck:
RESTORE_REGS 4  # skip orig_eax/error_code
 irq_return:
@@ -549,6 +551,7 @@ ENTRY(iret_exc)
 .previous
_ASM_EXTABLE(irq_return,iret_exc)
 
+#ifdef CONFIG_X86_ESPFIX32
CFI_RESTORE_STATE
 ldt_ss:
 #ifdef CONFIG_PARAVIRT
@@ -592,6 +595,7 @@ ldt_ss:
lss (%esp), %esp/* switch to espfix segment */
CFI_ADJUST_CFA_OFFSET -8
jmp restore_nocheck
+#endif
CFI_ENDPROC
 ENDPROC(system_call)
 
@@ -699,6 +703,7 @@ END(syscall_badsys)
 * the high word of the segment base from the GDT and switches to the
  * normal stack and adjusts ESP with the matching offset.
  */
+#ifdef CONFIG_X86_ESPFIX32
/* fixup the stack */
mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
@@ -708,8 +713,10 @@ END(syscall_badsys)
pushl_cfi %eax
lss (%esp), %esp/* switch to the normal stack segment */
CFI_ADJUST_CFA_OFFSET -8
+#endif
 .endm
 .macro UNWIND_ESPFIX_STACK
+#ifdef CONFIG_X86_ESPFIX32
movl %ss, %eax
/* see if on espfix stack */
cmpw $__ESPFIX_SS, %ax
@@ -720,6 +727,7 @@ END(syscall_badsys)
/* switch to normal stack */
FIXUP_ESPFIX_STACK
 27:
+#endif
 .endm
 
 /*
@@ -1350,11 +1358,13 @@ END(debug)
 ENTRY(nmi)
RING0_INT_FRAME
ASM_CLAC
+#ifdef CONFIG_X86_ESPFIX32
pushl_cfi %eax
movl %ss, %eax
cmpw $__ESPFIX_SS, %ax
popl_cfi %eax
je nmi_espfix_stack
+#endif
cmpl $ia32_sysenter_target,(%esp)
je nmi_stack_fixup
pushl_cfi %eax
@@ -1394,6 +1404,7 @@ nmi_debug_stack_check:
FIX_STACK 24, nmi_stack_correct, 1
jmp nmi_stack_correct
 
+#ifdef CONFIG_X86_ESPFIX32
 nmi_espfix_stack:
/* We have a RING0_INT_FRAME here.
 *
@@ -1415,6 +1426,7 @@ nmi_espfix_stack:
lss 12+4(%esp), %esp# back to espfix stack
CFI_ADJUST_CFA_OFFSET -24
jmp irq_return
+#endif
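
[Editorial note] The check the new #ifdef brackets in restore_all_notrace mixes EFLAGS, the low byte of SS, and the low byte of CS into one word, masks it, and compares against (SEGMENT_LDT << 8) | USER_RPL. A C rendering of that predicate (an illustrative sketch, not kernel code; constant values as defined in arch/x86/include/asm/segment.h):

```c
#include <stdint.h>

#define X86_EFLAGS_VM    (1u << 17)  /* virtual-8086 mode flag */
#define SEGMENT_TI_MASK  0x4u        /* selector table indicator: 1 = LDT */
#define SEGMENT_RPL_MASK 0x3u        /* requested privilege level bits */
#define SEGMENT_LDT      0x4u
#define USER_RPL         0x3u

/* Nonzero when an IRET would return to user space (CS RPL == 3),
 * not in vm86 mode, with the stack segment in the LDT -- i.e. the
 * only case where the espfix32 path (ldt_ss) is taken. */
static int needs_espfix32(uint32_t eflags, uint16_t cs, uint16_t ss)
{
    /* mirror the asm: %ah = SS low byte, %al = CS low byte,
     * upper bits of %eax keep the EFLAGS bits (including VM) */
    uint32_t mixed = (eflags & ~0xffffu)
                   | ((uint32_t)(ss & 0xff) << 8)
                   | (cs & 0xff);
    mixed &= X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK;
    return mixed == ((SEGMENT_LDT << 8) | USER_RPL);
}
```

With a user return through an LDT stack selector such as 0x07 the predicate holds; a GDT selector such as 0x2b, a kernel-mode frame, or a vm86 frame all fall through to the normal restore_nocheck path.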
  

[tip:x86/espfix] x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack

2014-04-30 Thread tip-bot for H. Peter Anvin
Commit-ID:  3891a04aafd668686239349ea58f3314ea2af86b
Gitweb: http://git.kernel.org/tip/3891a04aafd668686239349ea58f3314ea2af86b
Author: H. Peter Anvin 
AuthorDate: Tue, 29 Apr 2014 16:46:09 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 30 Apr 2014 14:14:28 -0700

x86-64, espfix: Don't leak bits 31:16 of %esp returning to 16-bit stack

The IRET instruction, when returning to a 16-bit segment, only
restores the bottom 16 bits of the user space stack pointer.  This
causes some 16-bit software to break, but it also leaks kernel state
to user space.  We have a software workaround for that ("espfix") for
the 32-bit kernel, but it relies on a nonzero stack segment base which
is not available in 64-bit mode.

In checkin:

b3b42ac2cbae x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

we "solved" this by forbidding 16-bit segments on 64-bit kernels, with
the logic that 16-bit support is crippled on 64-bit kernels anyway (no
V86 support), but it turns out that people are doing stuff like
running old Win16 binaries under Wine and expect it to work.

This works around this by creating percpu "ministacks", each of which
is mapped 2^16 times 64K apart.  When we detect that the return SS is
on the LDT, we copy the IRET frame to the ministack and use the
relevant alias to return to userspace.  The ministacks are mapped
readonly, so if IRET faults we promote #GP to #DF which is an IST
vector and thus has its own stack; we then do the fixup in the #DF
handler.

(Making #GP an IST exception would make the msr_safe functions unsafe
in NMI/MC context, and quite possibly have other effects.)

Special thanks to:

- Andy Lutomirski, for the suggestion of using very small stack slots
  and copy (as opposed to map) the IRET frame there, and for the
  suggestion to mark them readonly and let the fault promote to #DF.
- Konrad Wilk for paravirt fixup and testing.
- Borislav Petkov for testing help and useful comments.

Reported-by: Brian Gerst 
Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
Cc: Konrad Rzeszutek Wilk 
Cc: Borislav Petkov 
Cc: Andrew Lutomriski 
Cc: Linus Torvalds 
Cc: Dirk Hohndel 
Cc: Arjan van de Ven 
Cc: comex 
Cc: Alexander van Heukelum 
Cc: Boris Ostrovsky 
Cc:  # consider after upstream merge
---
 Documentation/x86/x86_64/mm.txt |   2 +
 arch/x86/include/asm/pgtable_64_types.h |   2 +
 arch/x86/include/asm/setup.h|   3 +
 arch/x86/kernel/Makefile|   1 +
 arch/x86/kernel/entry_64.S  |  73 ++-
 arch/x86/kernel/espfix_64.c | 208 
 arch/x86/kernel/ldt.c   |  11 --
 arch/x86/kernel/smpboot.c   |   7 ++
 arch/x86/mm/dump_pagetables.c   |  44 +--
 init/main.c |   4 +
 10 files changed, 329 insertions(+), 26 deletions(-)

diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index c584a51..afe68dd 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -12,6 +12,8 @@ ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
+ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks
+... unused hole ...
 ffffffff80000000 - ffffffffa0000000 (=512 MB)  kernel text mapping, from phys 0
 ffffffffa0000000 - ffffffffff5fffff (=1525 MB) module mapping space
 ffffffffff600000 - ffffffffffdfffff (=8 MB) vsyscalls
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index c883bf7..7166e25 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -61,6 +61,8 @@ typedef struct { pteval_t pte; } pte_t;
 #define MODULES_VADDR    (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 #define MODULES_END      _AC(0xffffffffff000000, UL)
 #define MODULES_LEN   (MODULES_END - MODULES_VADDR)
+#define ESPFIX_PGD_ENTRY _AC(-2, UL)
+#define ESPFIX_BASE_ADDR (ESPFIX_PGD_ENTRY << PGDIR_SHIFT)
 
 #define EARLY_DYNAMIC_PAGE_TABLES  64
 
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 9264f04..9e3be33 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -57,6 +57,9 @@ extern void x86_ce4100_early_setup(void);
 static inline void x86_ce4100_early_setup(void) { }
 #endif
 
+extern void init_espfix_bsp(void);
+extern void init_espfix_ap(void);
+
 #ifndef _SETUP
 
 /*
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index f4d9600..1cc3789 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -29,6 +29,7 @@ obj-$(CONFIG_X86_64)  += sys_x86_64.o x8664_ksyms_64.o
 obj-y  += syscall_$(BITS).o vsyscall_gtod.o
 obj-$(CONFIG_X86_64)   += vsyscall_64.o
 obj-$(CONFIG_X86_64)   += vsyscall_emu_64.o
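
[Editorial note] The leak this commit fixes reduces to one line of arithmetic. A hedged C sketch (not kernel code; the kernel-stack value and ministack address below are invented for illustration, though the espfix region base matches the mm.txt hunk above):

```c
#include <stdint.h>

/* IRET to a 16-bit stack segment restores only %sp, the low 16 bits of
 * %esp; bits 31:16 keep whatever value came from the stack that held
 * the IRET frame. */
static uint32_t esp_after_iret16(uint64_t frame_stack, uint16_t user_sp16)
{
    return ((uint32_t)frame_stack & 0xffff0000u) | user_sp16;
}
```

With the frame on the real kernel stack (say 0xffff88001234abcd) and a user %sp of 0xbeef, user space observes 0x1234beef, i.e. kernel-stack bits 31:16. After the fix the frame is first copied to a read-only per-CPU ministack inside the espfix region (ffffff0000000000...), e.g. 0xffffff0000001fd8, and the same arithmetic yields 0x0000beef: the visible bits come from the fixed espfix alias rather than the kernel stack.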

[tip:x86/espfix] x86-32, espfix: Remove filter for espfix32 due to race

2014-04-30 Thread tip-bot for H. Peter Anvin
Commit-ID:  246f2d2ee1d715e1077fc47d61c394569c8ee692
Gitweb: http://git.kernel.org/tip/246f2d2ee1d715e1077fc47d61c394569c8ee692
Author: H. Peter Anvin 
AuthorDate: Wed, 30 Apr 2014 14:03:25 -0700
Committer:  H. Peter Anvin 
CommitDate: Wed, 30 Apr 2014 14:14:49 -0700

x86-32, espfix: Remove filter for espfix32 due to race

It is not safe to use LAR to filter when to go down the espfix path,
because the LDT is per-process (rather than per-thread) and another
thread might change the descriptors behind our back.  Fortunately it
is always *safe* (if a bit slow) to go down the espfix path, and a
32-bit LDT stack segment is extremely rare.

Signed-off-by: H. Peter Anvin 
Link: 
http://lkml.kernel.org/r/1398816946-3351-1-git-send-email-...@linux.intel.com
Cc:  # consider after upstream merge
---
 arch/x86/kernel/entry_32.S | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index a2a4f46..2780b8f 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -551,11 +551,6 @@ ENTRY(iret_exc)
 
CFI_RESTORE_STATE
 ldt_ss:
-   larl PT_OLDSS(%esp), %eax
-   jnz restore_nocheck
-   testl $0x00400000, %eax # returning to 32bit stack?
-   jnz restore_nocheck # allright, normal return
-
 #ifdef CONFIG_PARAVIRT
/*
 * The kernel can't run on a non-flat stack if paravirt mode
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
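
[Editorial note] The deleted filter read the descriptor's access rights with `larl` and tested bit 22, the D/B (default operand size) bit -- the `$0x00400000` mask in the removed `testl`. The race is structural: a sketch (hypothetical names; not kernel code) of why a check against a shared, mutable LDT cannot be trusted at IRET time:

```c
#include <stdint.h>

#define AR_DB (1u << 22)   /* LAR access-rights bit 22: D/B, 1 = 32-bit */

static int seg_is_32bit(uint32_t access_rights)
{
    return (access_rights & AR_DB) != 0;
}

/* TOCTOU sketch: the LDT is per-process, not per-thread, so a sibling
 * thread can rewrite the descriptor between the check and the IRET. */
static int racy_filter(volatile uint32_t *shared_ldt_ar)
{
    int ok = seg_is_32bit(*shared_ldt_ar);   /* time of check */
    /* ... another thread may store a 16-bit descriptor here ... */
    return ok;                               /* stale at time of use */
}
```

Hence the fix: always take the (safe, merely slower) espfix path instead of filtering.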


[tip:x86/urgent] x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

2014-04-11 Thread tip-bot for H. Peter Anvin
Commit-ID:  b3b42ac2cbae1f3cecbb6229964a4d48af31d382
Gitweb: http://git.kernel.org/tip/b3b42ac2cbae1f3cecbb6229964a4d48af31d382
Author: H. Peter Anvin 
AuthorDate: Sun, 16 Mar 2014 15:31:54 -0700
Committer:  H. Peter Anvin 
CommitDate: Fri, 11 Apr 2014 10:10:09 -0700

x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels

The IRET instruction, when returning to a 16-bit segment, only
restores the bottom 16 bits of the user space stack pointer.  We have
a software workaround for that ("espfix") for the 32-bit kernel, but
it relies on a nonzero stack segment base which is not available in
64-bit mode.

Since 16-bit support is somewhat crippled anyway on a 64-bit kernel
(no V86 mode), and most (if not quite all) 64-bit processors support
virtualization for the users who really need it, simply reject
attempts at creating a 16-bit segment when running on top of a 64-bit
kernel.

Cc: Linus Torvalds 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/n/tip-kicdm89kzw9lldryb1br9...@git.kernel.org
Cc: 
---
 arch/x86/kernel/ldt.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
index ebc9873..af1d14a 100644
--- a/arch/x86/kernel/ldt.c
+++ b/arch/x86/kernel/ldt.c
@@ -229,6 +229,17 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
}
}
 
+   /*
+* On x86-64 we do not support 16-bit segments due to
+* IRET leaking the high bits of the kernel stack address.
+*/
+#ifdef CONFIG_X86_64
+   if (!ldt_info.seg_32bit) {
+   error = -EINVAL;
+   goto out_unlock;
+   }
+#endif
+
fill_ldt(&ldt, &ldt_info);
if (oldmode)
ldt.avl = 0;
--
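
[Editorial note] The hunk above rejects any LDT entry whose seg_32bit flag is clear when running on a 64-bit kernel. A C sketch of that guard (simplified: the real code sets `error` and jumps to out_unlock, and the arch test is a CONFIG_X86_64 #ifdef rather than a parameter):

```c
#include <errno.h>

/* Sketch of the check added to write_ldt(); `kernel_is_64bit` stands in
 * for the CONFIG_X86_64 compile-time condition. */
static int check_ldt_seg(int kernel_is_64bit, int seg_32bit)
{
    if (kernel_is_64bit && !seg_32bit)
        return -EINVAL;     /* 16-bit segments rejected on x86-64 */
    return 0;               /* 32-bit entries still allowed */
}
```

On a 32-bit kernel the guard compiles away and 16-bit segments keep working via espfix.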



[tip:x86/vdso] x86, vdso: Actually discard the .discard sections

2014-03-25 Thread tip-bot for H. Peter Anvin
Commit-ID:  26f5ef2e3c3c18f1dc31461ddf1db00b014edcd4
Gitweb: http://git.kernel.org/tip/26f5ef2e3c3c18f1dc31461ddf1db00b014edcd4
Author: H. Peter Anvin 
AuthorDate: Tue, 25 Mar 2014 13:41:36 -0700
Committer:  H. Peter Anvin 
CommitDate: Tue, 25 Mar 2014 13:41:36 -0700

x86, vdso: Actually discard the .discard sections

The .discard/.discard.* sections are used to generate intermediate
results for the assembler (effectively "test assembly".)  The output
is waste and should not be retained.

Cc: Stefani Seibold 
Cc: Andy Lutomirski 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/n/tip-psizrnant8x3nrhbgvq2v...@git.kernel.org
---
 arch/x86/vdso/vdso-layout.lds.S | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/vdso/vdso-layout.lds.S b/arch/x86/vdso/vdso-layout.lds.S
index c6d0e1b..2e263f3 100644
--- a/arch/x86/vdso/vdso-layout.lds.S
+++ b/arch/x86/vdso/vdso-layout.lds.S
@@ -62,6 +62,11 @@ SECTIONS
. = ALIGN(0x100);
 
.text   : { *(.text*) } :text   =0x90909090
+
+   /DISCARD/ : {
+   *(.discard)
+   *(.discard.*)
+   }
 }
 
 /*
--



[tip:x86/vdso] x86, vdso32: Disable stack protector, adjust optimizations

2014-03-18 Thread tip-bot for H. Peter Anvin
Commit-ID:  008cc907de327d83a0be609cd495fccb0e5dfa4c
Gitweb: http://git.kernel.org/tip/008cc907de327d83a0be609cd495fccb0e5dfa4c
Author: H. Peter Anvin 
AuthorDate: Mon, 17 Mar 2014 23:22:12 +0100
Committer:  H. Peter Anvin 
CommitDate: Tue, 18 Mar 2014 12:52:48 -0700

x86, vdso32: Disable stack protector, adjust optimizations

For the 32-bit VDSO, match the 64-bit VDSO in:

1. Disable the stack protector.
2. Use -fno-omit-frame-pointer for user space debugging sanity.
3. Use -foptimize-sibling-calls like the 64-bit VDSO does.

Reported-by: Ingo Molnar 
Signed-off-by: Stefani Seibold 
Link: 
http://lkml.kernel.org/r/1395094933-14252-13-git-send-email-stef...@seibold.net
Signed-off-by: H. Peter Anvin 
---
 arch/x86/vdso/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 6cef7a1..a2de5fc 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -151,6 +151,9 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
+KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
 $(vdso32-images:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
 
 $(vdso32-images:%=$(obj)/%.dbg): $(obj)/vdso32-%.so.dbg: FORCE \
--



[tip:x86/vdso] x86, vdso, xen: Remove stray reference to FIX_VDSO

2014-03-13 Thread tip-bot for H. Peter Anvin
Commit-ID:  1f2cbcf648962cdcf511d234cb39745baa9f5d07
Gitweb: http://git.kernel.org/tip/1f2cbcf648962cdcf511d234cb39745baa9f5d07
Author: H. Peter Anvin 
AuthorDate: Thu, 13 Mar 2014 19:44:47 -0700
Committer:  H. Peter Anvin 
CommitDate: Thu, 13 Mar 2014 19:44:47 -0700

x86, vdso, xen: Remove stray reference to FIX_VDSO

Checkin

b0b49f2673f0 x86, vdso: Remove compat vdso support

... removed the VDSO from the fixmap, and thus FIX_VDSO; remove a
stray reference in Xen.

Found by Fengguang Wu's test robot.

Reported-by: Fengguang Wu 
Cc: Andy Lutomirski 
Cc: Konrad Rzeszutek Wilk 
Cc: Boris Ostrovsky 
Cc: David Vrabel 
Link: 
http://lkml.kernel.org/r/4bb4690899106eb11430b1186d5cc66ca9d1660c.1394751608.git.l...@amacapital.net
Signed-off-by: H. Peter Anvin 
---
 arch/x86/xen/mmu.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 256282e..21c6a42 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2058,7 +2058,6 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
case FIX_RO_IDT:
 #ifdef CONFIG_X86_32
case FIX_WP_TEST:
-   case FIX_VDSO:
 # ifdef CONFIG_HIGHMEM
case FIX_KMAP_BEGIN ... FIX_KMAP_END:
 # endif
--



[tip:x86/urgent] x86: Ignore NMIs that come in during early boot

2014-03-07 Thread tip-bot for H. Peter Anvin
Commit-ID:  5fa10196bdb5f190f595ebd048490ee52dddea0f
Gitweb: http://git.kernel.org/tip/5fa10196bdb5f190f595ebd048490ee52dddea0f
Author: H. Peter Anvin 
AuthorDate: Fri, 7 Mar 2014 15:05:20 -0800
Committer:  H. Peter Anvin 
CommitDate: Fri, 7 Mar 2014 15:08:14 -0800

x86: Ignore NMIs that come in during early boot

Don Zickus reports:

A customer generated an external NMI using their iLO to test kdump
worked.  Unfortunately, the machine hung.  Disabling the nmi_watchdog
made things work.

I speculated the external NMI fired, caused the machine to panic (as
expected) and the perf NMI from the watchdog came in and was latched.
My guess was this somehow caused the hang.

   

It appears that the latched NMI stays latched until early page
table generation on 64 bits, which causes exceptions that end in
IRET, which in turn re-enables NMI.  Therefore, ignore NMIs that
come in during early execution, until we have proper exception
handling.

Reported-and-tested-by: Don Zickus 
Link: http://lkml.kernel.org/r/1394221143-29713-1-git-send-email-dzic...@redhat.com
Signed-off-by: H. Peter Anvin 
Cc:  # v3.5+, older with some backport effort
---
 arch/x86/kernel/head_32.S | 7 ++-
 arch/x86/kernel/head_64.S | 6 +-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 81ba276..d2a2159 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -544,6 +544,10 @@ ENDPROC(early_idt_handlers)
/* This is global to keep gas from relaxing the jumps */
 ENTRY(early_idt_handler)
cld
+
+   cmpl $X86_TRAP_NMI,(%esp)
+   je is_nmi   # Ignore NMI
+
cmpl $2,%ss:early_recursion_flag
je hlt_loop
incl %ss:early_recursion_flag
@@ -594,8 +598,9 @@ ex_entry:
pop %edx
pop %ecx
pop %eax
-   addl $8,%esp/* drop vector number and error code */
decl %ss:early_recursion_flag
+is_nmi:
+   addl $8,%esp/* drop vector number and error code */
iret
 ENDPROC(early_idt_handler)
 
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index e1aabdb..33f36c7 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -343,6 +343,9 @@ early_idt_handlers:
 ENTRY(early_idt_handler)
cld
 
+   cmpl $X86_TRAP_NMI,(%rsp)
+   je is_nmi   # Ignore NMI
+
cmpl $2,early_recursion_flag(%rip)
jz  1f
incl early_recursion_flag(%rip)
@@ -405,8 +408,9 @@ ENTRY(early_idt_handler)
popq %rdx
popq %rcx
popq %rax
-   addq $16,%rsp   # drop vector number and error code
decl early_recursion_flag(%rip)
+is_nmi:
+   addq $16,%rsp   # drop vector number and error code
INTERRUPT_RETURN
 ENDPROC(early_idt_handler)
 


[tip:x86/vdso] x86, vdso32: Disable stack protector, adjust optimizations

2014-03-06 Thread tip-bot for H. Peter Anvin
Commit-ID:  7ed5ee279499a02bf35c77f0a91d657c24f6474e
Gitweb: http://git.kernel.org/tip/7ed5ee279499a02bf35c77f0a91d657c24f6474e
Author: H. Peter Anvin 
AuthorDate: Thu, 6 Mar 2014 09:47:20 -0800
Committer:  H. Peter Anvin 
CommitDate: Thu, 6 Mar 2014 09:47:20 -0800

x86, vdso32: Disable stack protector, adjust optimizations

For the 32-bit VDSO, match the 64-bit VDSO in:

1. Disable the stack protector.
2. Use -fno-omit-frame-pointer for user space debugging sanity.
3. Use -foptimize-sibling-calls like the 64-bit VDSO does.

Reported-by: Ingo Molnar 
Cc: Stefani Seibold 
Cc: Andy Lutomirski 
Link: http://lkml.kernel.org/r/1393881143-3569-13-git-send-email-stef...@seibold.net
Signed-off-by: H. Peter Anvin 
---
 arch/x86/vdso/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/vdso/Makefile b/arch/x86/vdso/Makefile
index 6cef7a1..55e76eb 100644
--- a/arch/x86/vdso/Makefile
+++ b/arch/x86/vdso/Makefile
@@ -151,6 +151,9 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
+KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector) 
+KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
+KBUILD_CFLAGS_32 += -fno-omit-frame-pointer
 $(vdso32-images:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
 
 $(vdso32-images:%=$(obj)/%.dbg): $(obj)/vdso32-%.so.dbg: FORCE \


[tip:x86/reboot] x86, reboot: Only use CF9_COND automatically, not CF9

2014-03-05 Thread tip-bot for H. Peter Anvin
Commit-ID:  fb3bd7b19b2b6ef779d18573c10c00c53cd8add6
Gitweb: http://git.kernel.org/tip/fb3bd7b19b2b6ef779d18573c10c00c53cd8add6
Author: H. Peter Anvin 
AuthorDate: Wed, 5 Mar 2014 15:41:15 -0800
Committer:  H. Peter Anvin 
CommitDate: Wed, 5 Mar 2014 15:41:15 -0800

x86, reboot: Only use CF9_COND automatically, not CF9

Only CF9_COND is appropriate for inclusion in the default chain, not
CF9; the latter will poke that register unconditionally, whereas
CF9_COND will at least look for PCI configuration method #1 or #2
first (a weak check, but better than nothing.)

CF9 should be used for explicit system configuration (command line or
DMI) only.

Cc: Aubrey Li 
Cc: Matthew Garrett 
Link: http://lkml.kernel.org/r/53130a46.1010...@linux.intel.com
Signed-off-by: H. Peter Anvin 
---
 arch/x86/kernel/reboot.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index f601295..654b465 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -535,7 +535,7 @@ static void native_machine_emergency_restart(void)
 EFI_RESET_WARM :
 EFI_RESET_COLD,
 EFI_SUCCESS, 0, NULL);
-   reboot_type = BOOT_CF9;
+   reboot_type = BOOT_CF9_COND;
break;
 
case BOOT_CF9:


[tip:x86/cpufeature] x86, cpufeature: Rename X86_FEATURE_CLFLSH to X86_FEATURE_CLFLUSH

2014-02-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  840d2830e6e56b8fdacc7ff12915dd91bf91566b
Gitweb: http://git.kernel.org/tip/840d2830e6e56b8fdacc7ff12915dd91bf91566b
Author: H. Peter Anvin 
AuthorDate: Thu, 27 Feb 2014 08:31:30 -0800
Committer:  H. Peter Anvin 
CommitDate: Thu, 27 Feb 2014 08:31:30 -0800

x86, cpufeature: Rename X86_FEATURE_CLFLSH to X86_FEATURE_CLFLUSH

We call this "clflush" in /proc/cpuinfo, and have
cpu_has_clflush()... let's be consistent and just call it that.

Cc: Gleb Natapov 
Cc: Paolo Bonzini 
Cc: Alan Cox 
Link: http://lkml.kernel.org/n/tip-mlytfzjkvuf739okyn40p...@git.kernel.org
---
 arch/x86/include/asm/cpufeature.h | 4 ++--
 arch/x86/kernel/cpu/common.c  | 2 +-
 arch/x86/kernel/smpboot.c | 2 +-
 arch/x86/kvm/cpuid.c  | 2 +-
 drivers/gpu/drm/gma500/mmu.c  | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index bc507d7..63211ef 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -37,7 +37,7 @@
 #define X86_FEATURE_PAT(0*32+16) /* Page Attribute Table */
 #define X86_FEATURE_PSE36  (0*32+17) /* 36-bit PSEs */
 #define X86_FEATURE_PN (0*32+18) /* Processor serial number */
-#define X86_FEATURE_CLFLSH (0*32+19) /* "clflush" CLFLUSH instruction */
+#define X86_FEATURE_CLFLUSH(0*32+19) /* CLFLUSH instruction */
 #define X86_FEATURE_DS (0*32+21) /* "dts" Debug Store */
 #define X86_FEATURE_ACPI   (0*32+22) /* ACPI via MSR */
 #define X86_FEATURE_MMX(0*32+23) /* Multimedia Extensions */
@@ -318,7 +318,7 @@ extern const char * const x86_power_flags[32];
 #define cpu_has_pmm_enabledboot_cpu_has(X86_FEATURE_PMM_EN)
 #define cpu_has_ds boot_cpu_has(X86_FEATURE_DS)
 #define cpu_has_pebs   boot_cpu_has(X86_FEATURE_PEBS)
-#define cpu_has_clflushboot_cpu_has(X86_FEATURE_CLFLSH)
+#define cpu_has_clflushboot_cpu_has(X86_FEATURE_CLFLUSH)
 #define cpu_has_btsboot_cpu_has(X86_FEATURE_BTS)
 #define cpu_has_gbpagesboot_cpu_has(X86_FEATURE_GBPAGES)
 #define cpu_has_arch_perfmon   boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8e28bf2..2c6ac6f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1025,7 +1025,7 @@ __setup("show_msr=", setup_show_msr);
 
 static __init int setup_noclflush(char *arg)
 {
-   setup_clear_cpu_cap(X86_FEATURE_CLFLSH);
+   setup_clear_cpu_cap(X86_FEATURE_CLFLUSH);
return 1;
 }
 __setup("noclflush", setup_noclflush);
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index a32da80..ffc78c3 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1379,7 +1379,7 @@ static inline void mwait_play_dead(void)
 
if (!this_cpu_has(X86_FEATURE_MWAIT))
return;
-   if (!this_cpu_has(X86_FEATURE_CLFLSH))
+   if (!this_cpu_has(X86_FEATURE_CLFLUSH))
return;
if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
return;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c697625..e5503d8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -263,7 +263,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
F(TSC) | F(MSR) | F(PAE) | F(MCE) |
F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
-   F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLSH) |
+   F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
0 /* Reserved, DS, ACPI */ | F(MMX) |
F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
0 /* HTT, TM, Reserved, PBE */;
diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index 49bac41..c3e67ba 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -520,7 +520,7 @@ struct psb_mmu_driver *psb_mmu_driver_init(uint8_t __iomem * registers,
 
driver->has_clflush = 0;
 
-   if (boot_cpu_has(X86_FEATURE_CLFLSH)) {
+   if (boot_cpu_has(X86_FEATURE_CLFLUSH)) {
uint32_t tfms, misc, cap0, cap4, clflush_size;
 
/*


[tip:x86/cpufeature] x86, cpufeature: If we disable CLFLUSH, we should disable CLFLUSHOPT

2014-02-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  da4aaa7d860c63a1bfd3c73cf8309afe2840c5b9
Gitweb: http://git.kernel.org/tip/da4aaa7d860c63a1bfd3c73cf8309afe2840c5b9
Author: H. Peter Anvin 
AuthorDate: Thu, 27 Feb 2014 08:36:31 -0800
Committer:  H. Peter Anvin 
CommitDate: Thu, 27 Feb 2014 08:36:31 -0800

x86, cpufeature: If we disable CLFLUSH, we should disable CLFLUSHOPT

If we explicitly disable the use of CLFLUSH, we should disable the use
of CLFLUSHOPT as well.

Cc: Ross Zwisler 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/n/tip-jtdv7btppr4jgzxm3sxx1...@git.kernel.org
---
 arch/x86/kernel/cpu/common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2c6ac6f..cca53d8 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1026,6 +1026,7 @@ __setup("show_msr=", setup_show_msr);
 static __init int setup_noclflush(char *arg)
 {
setup_clear_cpu_cap(X86_FEATURE_CLFLUSH);
+   setup_clear_cpu_cap(X86_FEATURE_CLFLUSHOPT);
return 1;
 }
 __setup("noclflush", setup_noclflush);


[tip:x86/nuke-platforms] x86, platforms: Remove NUMAQ

2014-02-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  b5660ba76b41af69a0c09d434927bb4b4cadd4b1
Gitweb: http://git.kernel.org/tip/b5660ba76b41af69a0c09d434927bb4b4cadd4b1
Author: H. Peter Anvin 
AuthorDate: Tue, 25 Feb 2014 12:14:06 -0800
Committer:  H. Peter Anvin 
CommitDate: Thu, 27 Feb 2014 08:07:39 -0800

x86, platforms: Remove NUMAQ

The NUMAQ support seems to be unmaintained; remove it.

Cc: Paul Gortmaker 
Cc: David Rientjes 
Acked-by: Paul E. McKenney 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/n/530cfd6c.7040...@zytor.com
---
 arch/x86/Kconfig |  36 +--
 arch/x86/Kconfig.cpu |   2 +-
 arch/x86/include/asm/mmzone_32.h |   3 -
 arch/x86/include/asm/mpspec.h|   6 -
 arch/x86/include/asm/numaq.h | 171 -
 arch/x86/kernel/apic/Makefile|   1 -
 arch/x86/kernel/apic/numaq_32.c  | 524 ---
 arch/x86/kernel/cpu/intel.c  |   4 -
 arch/x86/mm/numa.c   |   4 -
 arch/x86/pci/Makefile|   1 -
 arch/x86/pci/numaq_32.c  | 165 
 11 files changed, 9 insertions(+), 908 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3c7f6db..e1d0c9a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -346,7 +346,6 @@ config X86_EXTENDED_PLATFORM
  for the following (non-PC) 32 bit x86 platforms:
Goldfish (Android emulator)
AMD Elan
-   NUMAQ (IBM/Sequent)
RDC R-321x SoC
SGI 320/540 (Visual Workstation)
STA2X11-based (e.g. Northville)
@@ -487,32 +486,18 @@ config X86_32_NON_STANDARD
depends on X86_32 && SMP
depends on X86_EXTENDED_PLATFORM
---help---
- This option compiles in the NUMAQ, bigsmp, and STA2X11 default
- subarchitectures.  It is intended for a generic binary kernel. If you
- select them all, kernel will probe it one by one and will fallback to
- default.
+ This option compiles in the bigsmp and STA2X11 default
+ subarchitectures.  It is intended for a generic binary
+ kernel. If you select them all, kernel will probe it one by
+ one and will fallback to default.
 
 # Alphabetically sorted list of Non standard 32 bit platforms
 
-config X86_NUMAQ
-   bool "NUMAQ (IBM/Sequent)"
-   depends on X86_32_NON_STANDARD
-   depends on PCI
-   select NUMA
-   select X86_MPPARSE
-   ---help---
- This option is used for getting Linux to run on a NUMAQ (IBM/Sequent)
- NUMA multiquad box. This changes the way that processors are
- bootstrapped, and uses Clustered Logical APIC addressing mode instead
- of Flat Logical.  You will need a new lynxer.elf file to flash your
- firmware with - send email to .
-
 config X86_SUPPORTS_MEMORY_FAILURE
def_bool y
# MCE code calls memory_failure():
depends on X86_MCE
# On 32-bit this adds too big of NODES_SHIFT and we run out of page flags:
-   depends on !X86_NUMAQ
# On 32-bit SPARSEMEM adds too big of SECTIONS_WIDTH:
depends on X86_64 || !SPARSEMEM
select ARCH_SUPPORTS_MEMORY_FAILURE
@@ -783,7 +768,7 @@ config NR_CPUS
range 2 8192 if SMP && !MAXSMP && CPUMASK_OFFSTACK && X86_64
default "1" if !SMP
default "8192" if MAXSMP
-   default "32" if SMP && (X86_NUMAQ || X86_BIGSMP)
+   default "32" if SMP && X86_BIGSMP
default "8" if SMP
---help---
  This allows you to specify the maximum number of CPUs which this
@@ -1064,13 +1049,11 @@ config X86_CPUID
 
 choice
prompt "High Memory Support"
-   default HIGHMEM64G if X86_NUMAQ
default HIGHMEM4G
depends on X86_32
 
 config NOHIGHMEM
bool "off"
-   depends on !X86_NUMAQ
---help---
  Linux can use up to 64 Gigabytes of physical memory on x86 systems.
  However, the address space of 32-bit x86 processors is only 4
@@ -1107,7 +1090,6 @@ config NOHIGHMEM
 
 config HIGHMEM4G
bool "4GB"
-   depends on !X86_NUMAQ
---help---
  Select this if you have a 32-bit processor and between 1 and 4
  gigabytes of physical RAM.
@@ -1199,8 +1181,8 @@ config DIRECT_GBPAGES
 config NUMA
bool "Numa Memory Allocation and Scheduler Support"
depends on SMP
-   depends on X86_64 || (X86_32 && HIGHMEM64G && (X86_NUMAQ || X86_BIGSMP))
-   default y if (X86_NUMAQ || X86_BIGSMP)
+   depends on X86_64 || (X86_32 && HIGHMEM64G && X86_BIGSMP)
+   default y if X86_BIGSMP
---help---
  Enable NUMA (Non Uniform Memory Access) support.
 
@@ -1211,8 +1193,7 @@ config NUMA
  For 64-bit this is recommended if the system is Intel Core i7
  (or later), AMD Opteron, or EM64T NUMA.
 
- For 32-bit this is only needed on (rare) 32-bit-only platforms
- that support NUMA topologies, such as NUMAQ, or if you boot a 32-bit
+  

[tip:x86/cpufeature] x86, cpufeature: If we disable CLFLUSH, we should disable CLFLUSHOPT

2014-02-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  da4aaa7d860c63a1bfd3c73cf8309afe2840c5b9
Gitweb: http://git.kernel.org/tip/da4aaa7d860c63a1bfd3c73cf8309afe2840c5b9
Author: H. Peter Anvin h...@linux.intel.com
AuthorDate: Thu, 27 Feb 2014 08:36:31 -0800
Committer:  H. Peter Anvin h...@linux.intel.com
CommitDate: Thu, 27 Feb 2014 08:36:31 -0800

x86, cpufeature: If we disable CLFLUSH, we should disable CLFLUSHOPT

If we explicitly disable the use of CLFLUSH, we should disable the use
of CLFLUSHOPT as well.

Cc: Ross Zwisler ross.zwis...@linux.intel.com
Signed-off-by: H. Peter Anvin h...@linux.intel.com
Link: http://lkml.kernel.org/n/tip-jtdv7btppr4jgzxm3sxx1...@git.kernel.org
---
 arch/x86/kernel/cpu/common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2c6ac6f..cca53d8 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1026,6 +1026,7 @@ __setup(show_msr=, setup_show_msr);
 static __init int setup_noclflush(char *arg)
 {
setup_clear_cpu_cap(X86_FEATURE_CLFLUSH);
+   setup_clear_cpu_cap(X86_FEATURE_CLFLUSHOPT);
return 1;
 }
 __setup(noclflush, setup_noclflush);
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[tip:x86/cpufeature] x86, cpufeature: Rename X86_FEATURE_CLFLSH to X86_FEATURE_CLFLUSH

2014-02-27 Thread tip-bot for H. Peter Anvin
Commit-ID:  840d2830e6e56b8fdacc7ff12915dd91bf91566b
Gitweb: http://git.kernel.org/tip/840d2830e6e56b8fdacc7ff12915dd91bf91566b
Author: H. Peter Anvin h...@linux.intel.com
AuthorDate: Thu, 27 Feb 2014 08:31:30 -0800
Committer:  H. Peter Anvin h...@linux.intel.com
CommitDate: Thu, 27 Feb 2014 08:31:30 -0800

x86, cpufeature: Rename X86_FEATURE_CLFLSH to X86_FEATURE_CLFLUSH

We call this clflush in /proc/cpuinfo, and have
cpu_has_clflush()... let's be consistent and just call it that.

Cc: Gleb Natapov g...@kernel.org
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Alan Cox a...@linux.intel.com
Link: http://lkml.kernel.org/n/tip-mlytfzjkvuf739okyn40p...@git.kernel.org
---
 arch/x86/include/asm/cpufeature.h | 4 ++--
 arch/x86/kernel/cpu/common.c  | 2 +-
 arch/x86/kernel/smpboot.c | 2 +-
 arch/x86/kvm/cpuid.c  | 2 +-
 drivers/gpu/drm/gma500/mmu.c  | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h 
b/arch/x86/include/asm/cpufeature.h
index bc507d7..63211ef 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -37,7 +37,7 @@
 #define X86_FEATURE_PAT		(0*32+16) /* Page Attribute Table */
 #define X86_FEATURE_PSE36	(0*32+17) /* 36-bit PSEs */
 #define X86_FEATURE_PN		(0*32+18) /* Processor serial number */
-#define X86_FEATURE_CLFLSH	(0*32+19) /* "clflush" CLFLUSH instruction */
+#define X86_FEATURE_CLFLUSH	(0*32+19) /* CLFLUSH instruction */
 #define X86_FEATURE_DS		(0*32+21) /* "dts" Debug Store */
 #define X86_FEATURE_ACPI	(0*32+22) /* ACPI via MSR */
 #define X86_FEATURE_MMX		(0*32+23) /* Multimedia Extensions */
@@ -318,7 +318,7 @@ extern const char * const x86_power_flags[32];
 #define cpu_has_pmm_enabled	boot_cpu_has(X86_FEATURE_PMM_EN)
 #define cpu_has_ds		boot_cpu_has(X86_FEATURE_DS)
 #define cpu_has_pebs		boot_cpu_has(X86_FEATURE_PEBS)
-#define cpu_has_clflush		boot_cpu_has(X86_FEATURE_CLFLSH)
+#define cpu_has_clflush		boot_cpu_has(X86_FEATURE_CLFLUSH)
 #define cpu_has_bts		boot_cpu_has(X86_FEATURE_BTS)
 #define cpu_has_gbpages		boot_cpu_has(X86_FEATURE_GBPAGES)
 #define cpu_has_arch_perfmon	boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8e28bf2..2c6ac6f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1025,7 +1025,7 @@ __setup("show_msr=", setup_show_msr);
 
 static __init int setup_noclflush(char *arg)
 {
-   setup_clear_cpu_cap(X86_FEATURE_CLFLSH);
+   setup_clear_cpu_cap(X86_FEATURE_CLFLUSH);
return 1;
 }
 __setup("noclflush", setup_noclflush);
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index a32da80..ffc78c3 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1379,7 +1379,7 @@ static inline void mwait_play_dead(void)
 
if (!this_cpu_has(X86_FEATURE_MWAIT))
return;
-   if (!this_cpu_has(X86_FEATURE_CLFLSH))
+   if (!this_cpu_has(X86_FEATURE_CLFLUSH))
return;
	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
return;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c697625..e5503d8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -263,7 +263,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
F(TSC) | F(MSR) | F(PAE) | F(MCE) |
F(CX8) | F(APIC) | 0 /* Reserved */ | F(SEP) |
F(MTRR) | F(PGE) | F(MCA) | F(CMOV) |
-   F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLSH) |
+   F(PAT) | F(PSE36) | 0 /* PSN */ | F(CLFLUSH) |
0 /* Reserved, DS, ACPI */ | F(MMX) |
F(FXSR) | F(XMM) | F(XMM2) | F(SELFSNOOP) |
0 /* HTT, TM, Reserved, PBE */;
diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index 49bac41..c3e67ba 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -520,7 +520,7 @@ struct psb_mmu_driver *psb_mmu_driver_init(uint8_t __iomem * registers,
 
	driver->has_clflush = 0;
 
-   if (boot_cpu_has(X86_FEATURE_CLFLSH)) {
+   if (boot_cpu_has(X86_FEATURE_CLFLUSH)) {
uint32_t tfms, misc, cap0, cap4, clflush_size;
 
/*
--


[tip:x86/nuke-platforms] x86, platforms: Remove SGI Visual Workstation

2014-02-25 Thread tip-bot for H. Peter Anvin
Commit-ID:  10f032c61d12fc4df9c9632ee08e71f1152e1691
Gitweb: http://git.kernel.org/tip/10f032c61d12fc4df9c9632ee08e71f1152e1691
Author: H. Peter Anvin 
AuthorDate: Tue, 25 Feb 2014 12:05:34 -0800
Committer:  H. Peter Anvin 
CommitDate: Tue, 25 Feb 2014 13:38:27 -0800

x86, platforms: Remove SGI Visual Workstation

The SGI Visual Workstation seems to be dead; remove support so we
don't have to continue maintaining it.

Cc: Andrey Panin 
Link: http://lkml.kernel.org/r/530cfd6c.7040...@zytor.com
Signed-off-by: H. Peter Anvin 
---
 Documentation/sgi-visws.txt|  13 -
 MAINTAINERS|   7 -
 arch/x86/Kconfig   |  13 -
 arch/x86/include/asm/visws/cobalt.h| 127 ---
 arch/x86/include/asm/visws/lithium.h   |  53 ---
 arch/x86/include/asm/visws/piix4.h | 107 --
 arch/x86/include/asm/visws/sgivw.h |   5 -
 arch/x86/pci/Makefile  |   2 -
 arch/x86/pci/visws.c   |  87 -
 arch/x86/platform/Makefile |   1 -
 arch/x86/platform/visws/Makefile   |   1 -
 arch/x86/platform/visws/visws_quirks.c | 608 -
 12 files changed, 1024 deletions(-)

diff --git a/Documentation/sgi-visws.txt b/Documentation/sgi-visws.txt
deleted file mode 100644
index 7ff0811..000
--- a/Documentation/sgi-visws.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-
-The SGI Visual Workstations (models 320 and 540) are based around
-the Cobalt, Lithium, and Arsenic ASICs.  The Cobalt ASIC is the
-main system ASIC which interfaces the 1-4 IA32 cpus, the memory
-system, and the I/O system in the Lithium ASIC.  The Cobalt ASIC
-also contains the 3D gfx rendering engine which renders to main
-system memory -- part of which is used as the frame buffer which
-is DMA'ed to a video connector using the Arsenic ASIC.  A PIIX4
-chip and NS87307 are used to provide legacy device support (IDE,
-serial, floppy, and parallel).
-
-The Visual Workstation chipset largely conforms to the PC architecture
-with some notable exceptions such as interrupt handling.
diff --git a/MAINTAINERS b/MAINTAINERS
index b2cf5cf..7f9bc84 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7757,13 +7757,6 @@ F:   Documentation/ia64/serial.txt
 F: drivers/tty/serial/ioc?_serial.c
 F: include/linux/ioc?.h
 
-SGI VISUAL WORKSTATION 320 AND 540
-M: Andrey Panin 
-L: linux-visws-de...@lists.sf.net
-W: http://linux-visws.sf.net
-S: Maintained for 2.6.
-F: Documentation/sgi-visws.txt
-
 SGI XP/XPC/XPNET DRIVER
 M: Cliff Whickman 
 M: Robin Holt 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4c33fc2..2aa5d42 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -517,19 +517,6 @@ config X86_SUPPORTS_MEMORY_FAILURE
depends on X86_64 || !SPARSEMEM
select ARCH_SUPPORTS_MEMORY_FAILURE
 
-config X86_VISWS
-   bool "SGI 320/540 (Visual Workstation)"
-   depends on X86_32 && PCI && X86_MPPARSE && PCI_GODIRECT
-   depends on X86_32_NON_STANDARD
-   ---help---
- The SGI Visual Workstation series is an IA32-based workstation
- based on SGI systems chips with some legacy PC hardware attached.
-
- Say Y here to create a kernel to run on the SGI 320 or 540.
-
- A kernel compiled for the Visual Workstation will run on general
- PCs as well. See <file:Documentation/sgi-visws.txt> for details.
-
 config STA2X11
bool "STA2X11 Companion Chip Support"
depends on X86_32_NON_STANDARD && PCI
diff --git a/arch/x86/include/asm/visws/cobalt.h b/arch/x86/include/asm/visws/cobalt.h
deleted file mode 100644
index 2edb376..000
--- a/arch/x86/include/asm/visws/cobalt.h
+++ /dev/null
@@ -1,127 +0,0 @@
-#ifndef _ASM_X86_VISWS_COBALT_H
-#define _ASM_X86_VISWS_COBALT_H
-
-#include <asm/fixmap.h>
-
-/*
- * Cobalt SGI Visual Workstation system ASIC
- */ 
-
-#define CO_CPU_NUM_PHYS 0x1e00
-#define CO_CPU_TAB_PHYS (CO_CPU_NUM_PHYS + 2)
-
-#define CO_CPU_MAX 4
-
-#define CO_CPU_PHYS		0xc200
-#define CO_APIC_PHYS		0xc400
-
-/* see set_fixmap() and asm/fixmap.h */
-#define CO_CPU_VADDR		(fix_to_virt(FIX_CO_CPU))
-#define CO_APIC_VADDR		(fix_to_virt(FIX_CO_APIC))
-
-/* Cobalt CPU registers -- relative to CO_CPU_VADDR, use co_cpu_*() */
-#define CO_CPU_REV		0x08
-#define CO_CPU_CTRL		0x10
-#define CO_CPU_STAT		0x20
-#define CO_CPU_TIMEVAL		0x30
-
-/* CO_CPU_CTRL bits */
-#define CO_CTRL_TIMERUN		0x04	/* 0 == disabled */
-#define CO_CTRL_TIMEMASK	0x08	/* 0 == unmasked */
-
-/* CO_CPU_STATUS bits */
-#define CO_STAT_TIMEINTR	0x02	/* (r) 1 == int pend, (w) 0 == clear */
-
-/* CO_CPU_TIMEVAL value */
-#define CO_TIME_HZ		1	/* Cobalt core rate */
-
-/* Cobalt APIC registers -- relative to CO_APIC_VADDR, use co_apic_*() */
-#defineCO_APIC_HI(n)   

[tip:x86/nuke-platforms] x86, platforms: Remove NUMAQ

2014-02-25 Thread tip-bot for H. Peter Anvin
Commit-ID:  c0be2a85e483e2ccaec02a5a68a41e33911fb630
Gitweb: http://git.kernel.org/tip/c0be2a85e483e2ccaec02a5a68a41e33911fb630
Author: H. Peter Anvin 
AuthorDate: Tue, 25 Feb 2014 12:14:06 -0800
Committer:  H. Peter Anvin 
CommitDate: Tue, 25 Feb 2014 13:38:29 -0800

x86, platforms: Remove NUMAQ

The NUMAQ support seems to be unmaintained, remove it.

Cc: Paul Gortmaker 
Cc: David Rientjes 
Acked-by: Paul E. McKenney 
Signed-off-by: H. Peter Anvin 
Link: http://lkml.kernel.org/r/n/530cfd6c.7040...@zytor.com
---
 arch/x86/include/asm/mmzone_32.h |   3 -
 arch/x86/include/asm/numaq.h | 171 -
 arch/x86/kernel/apic/Makefile|   1 -
 arch/x86/kernel/apic/numaq_32.c  | 524 ---
 arch/x86/pci/Makefile|   1 -
 arch/x86/pci/numaq_32.c  | 165 
 6 files changed, 865 deletions(-)

diff --git a/arch/x86/include/asm/mmzone_32.h b/arch/x86/include/asm/mmzone_32.h
index 8a9b3e2..1ec990b 100644
--- a/arch/x86/include/asm/mmzone_32.h
+++ b/arch/x86/include/asm/mmzone_32.h
@@ -11,9 +11,6 @@
 #ifdef CONFIG_NUMA
 extern struct pglist_data *node_data[];
 #define NODE_DATA(nid) (node_data[nid])
-
-#include <asm/numaq.h>
-
 #endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_DISCONTIGMEM
diff --git a/arch/x86/include/asm/numaq.h b/arch/x86/include/asm/numaq.h
deleted file mode 100644
index c3b3c32..000
--- a/arch/x86/include/asm/numaq.h
+++ /dev/null
@@ -1,171 +0,0 @@
-/*
- * Written by: Patricia Gaughen, IBM Corporation
- *
- * Copyright (C) 2002, IBM Corp.
- *
- * All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
- * NON INFRINGEMENT.  See the GNU General Public License for more
- * details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
- * Send feedback to 
- */
-
-#ifndef _ASM_X86_NUMAQ_H
-#define _ASM_X86_NUMAQ_H
-
-#ifdef CONFIG_X86_NUMAQ
-
-extern int found_numaq;
-extern int numaq_numa_init(void);
-extern int pci_numaq_init(void);
-
-extern void *xquad_portio;
-
-#define XQUAD_PORTIO_BASE 0xfe40
-#define XQUAD_PORTIO_QUAD 0x4  /* 256k per quad. */
-#define XQUAD_PORT_ADDR(port, quad) (xquad_portio + (XQUAD_PORTIO_QUAD*quad) + port)
-
-/*
- * SYS_CFG_DATA_PRIV_ADDR, struct eachquadmem, and struct sys_cfg_data are the
- */
-#define SYS_CFG_DATA_PRIV_ADDR 0x0009d000 /* place for scd in private
- quad space */
-
-/*
- * Communication area for each processor on lynxer-processor tests.
- *
- * NOTE: If you change the size of this eachproc structure you need
- *   to change the definition for EACH_QUAD_SIZE.
- */
-struct eachquadmem {
-	unsigned int	priv_mem_start;		/* Starting address of this */
-						/* quad's private memory. */
-						/* This is always 0. */
-						/* In MB. */
-	unsigned int	priv_mem_size;		/* Size of this quad's */
-						/* private memory. */
-						/* In MB. */
-	unsigned int	low_shrd_mem_strp_start;/* Starting address of this */
-						/* quad's low shared block */
-						/* (untranslated). */
-						/* In MB. */
-	unsigned int	low_shrd_mem_start;	/* Starting address of this */
-						/* quad's low shared memory */
-						/* (untranslated). */
-						/* In MB. */
-	unsigned int	low_shrd_mem_size;	/* Size of this quad's low */
-						/* shared memory. */
-						/* In MB. */
-	unsigned int	lmmio_copb_start;	/* Starting address of this */
-						/* quad's local memory */
-						/* mapped I/O in the */
-						/* compatibility OPB. */
-						/* In MB. */
-	unsigned int	lmmio_copb_size;	/* Size of this quad's local */
-   /* memory mapped I/O in the 

[tip:x86/vdso] mm: Clean up style in install_special_mapping()

2014-02-19 Thread tip-bot for H. Peter Anvin
Commit-ID:  3af7111e2066a641510c16a4e9e82dd81550115b
Gitweb: http://git.kernel.org/tip/3af7111e2066a641510c16a4e9e82dd81550115b
Author: H. Peter Anvin 
AuthorDate: Wed, 19 Feb 2014 20:46:57 -0800
Committer:  H. Peter Anvin 
CommitDate: Wed, 19 Feb 2014 20:46:57 -0800

mm: Clean up style in install_special_mapping()

We can clean up the style in install_special_mapping(), and make it
use PTR_ERR_OR_ZERO().

Reported-by: kbuild test robot 
Link: 
http://lkml.kernel.org/r/1392587568-7325-3-git-send-email-stef...@seibold.net
Signed-off-by: H. Peter Anvin 
---
 mm/mmap.c | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 81ba54f..6b78a77 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2959,12 +2959,10 @@ int install_special_mapping(struct mm_struct *mm,
unsigned long addr, unsigned long len,
unsigned long vm_flags, struct page **pages)
 {
-   struct vm_area_struct *vma = _install_special_mapping(mm,
-   addr, len, vm_flags, pages);
+   struct vm_area_struct *vma;
 
-   if (IS_ERR(vma))
-   return PTR_ERR(vma);
-   return 0;
+   vma = _install_special_mapping(mm, addr, len, vm_flags, pages);
+   return PTR_ERR_OR_ZERO(vma);
 }
 
 static DEFINE_MUTEX(mm_all_locks_mutex);
--



