3.16.56-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: David Woodhouse <d...@amazon.co.uk>

commit 5096732f6f695001fa2d6f1335a2680b37912c69 upstream.

Convert all indirect jumps in the 32-bit checksum assembly code to
non-speculative sequences when CONFIG_RETPOLINE is enabled.
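
For reference, with CONFIG_RETPOLINE=y the JMP_NOSPEC macro used below
expands to an inline retpoline along these lines. This is a simplified
sketch, not the literal expansion; the real definition lives in
arch/x86/include/asm/nospec-branch.h:

	call	2f		# push &2f as return address, jump to 2f
1:	pause			# speculation trap: a mispredicted
	lfence			# return lands here and spins harmlessly
	jmp	1b
2:	mov	%ebx, (%esp)	# overwrite return address with real target
	ret			# architectural path jumps to *%ebx

The CPU's return-stack predictor assumes the ret goes back to label 1,
so speculative execution is captured in the pause/lfence loop, while the
architectural path reaches the intended indirect target.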

Signed-off-by: David Woodhouse <d...@amazon.co.uk>
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Acked-by: Arjan van de Ven <ar...@linux.intel.com>
Acked-by: Ingo Molnar <mi...@kernel.org>
Cc: gno...@lxorguk.ukuu.org.uk
Cc: Rik van Riel <r...@redhat.com>
Cc: Andi Kleen <a...@linux.intel.com>
Cc: Josh Poimboeuf <jpoim...@redhat.com>
Cc: thomas.lenda...@amd.com
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Jiri Kosina <ji...@kernel.org>
Cc: Andy Lutomirski <l...@amacapital.net>
Cc: Dave Hansen <dave.han...@intel.com>
Cc: Kees Cook <keesc...@google.com>
Cc: Tim Chen <tim.c.c...@linux.intel.com>
Cc: Greg Kroah-Hartman <gre...@linux-foundation.org>
Cc: Paul Turner <p...@google.com>
Link: https://lkml.kernel.org/r/1515707194-20531-11-git-send-email-d...@amazon.co.uk
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
---
 arch/x86/lib/checksum_32.S | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -29,7 +29,8 @@
 #include <asm/dwarf2.h>
 #include <asm/errno.h>
 #include <asm/asm.h>
-                               
+#include <asm/nospec-branch.h>
+
 /*
  * computes a partial checksum, e.g. for TCP/UDP fragments
  */
@@ -165,7 +166,7 @@ ENTRY(csum_partial)
        negl %ebx
        lea 45f(%ebx,%ebx,2), %ebx
        testl %esi, %esi
-       jmp *%ebx
+       JMP_NOSPEC %ebx
 
        # Handle 2-byte-aligned regions
 20:    addw (%esi), %ax
@@ -463,7 +464,7 @@ ENTRY(csum_partial_copy_generic)
        andl $-32,%edx
        lea 3f(%ebx,%ebx), %ebx
        testl %esi, %esi 
-       jmp *%ebx
+       JMP_NOSPEC %ebx
 1:     addl $64,%esi
        addl $64,%edi 
        SRC(movb -32(%edx),%bl) ; SRC(movb (%edx),%bl)
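
When CONFIG_RETPOLINE is not set, JMP_NOSPEC degrades to the original
indirect jump, so this change is a no-op for unaffected configurations.
A simplified sketch of the macro's shape (the real macro additionally
uses runtime alternatives to select an lfence-based variant on AMD):

	.macro JMP_NOSPEC reg:req
	#ifdef CONFIG_RETPOLINE
		RETPOLINE_JMP \reg	/* call/pause/lfence/ret sequence */
	#else
		jmp	*\reg		/* plain indirect jump */
	#endif
	.endm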
