Hello community,

Here is the log from the commit of package kernel-source for openSUSE:Factory,
checked in at 2018-03-16 10:36:15.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kernel-source (Old)
 and      /work/SRC/openSUSE:Factory/.kernel-source.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "kernel-source"

Fri Mar 16 10:36:15 2018 rev:413 rq:586504 version:4.15.9

Changes:
--------
--- /work/SRC/openSUSE:Factory/kernel-source/dtb-aarch64.changes       2018-03-13 10:23:05.459807302 +0100
+++ /work/SRC/openSUSE:Factory/.kernel-source.new/dtb-aarch64.changes  2018-03-16 10:36:15.703785005 +0100
@@ -1,0 +2,31 @@
+Sun Mar 11 23:30:25 CET 2018 - [email protected]
+
+- Linux 4.15.9 (bnc#1012628).
+- bpf: fix mlock precharge on arraymaps (bnc#1012628).
+- bpf: fix memory leak in lpm_trie map_free callback function
+  (bnc#1012628).
+- bpf: fix rcu lockdep warning for lpm_trie map_free callback
+  (bnc#1012628).
+- bpf, x64: implement retpoline for tail call (bnc#1012628).
+- bpf, arm64: fix out of bounds access in tail call (bnc#1012628).
+- bpf: add schedule points in percpu arrays management
+  (bnc#1012628).
+- bpf: allow xadd only on aligned memory (bnc#1012628).
+- bpf, ppc64: fix out of bounds access in tail call (bnc#1012628).
+- scsi: mpt3sas: fix oops in error handlers after shutdown/unload
+  (bnc#1012628).
+- scsi: mpt3sas: wait for and flush running commands on
+  shutdown/unload (bnc#1012628).
+- KVM: x86: fix backward migration with async_PF (bnc#1012628).
+- Refresh
+  patches.suse/0002-x86-speculation-Add-inlines-to-control-Indirect-Bran.patch.
+- commit 23fae4b
+
+-------------------------------------------------------------------
+Sat Mar 10 16:25:53 CET 2018 - [email protected]
+
+- Refresh to upstream patch (bsc#1083694)
+  patches.suse/Documentation-sphinx-Fix-Directive-import-error.patch
+- commit f3b4992
+
+-------------------------------------------------------------------
dtb-armv6l.changes: same change
dtb-armv7l.changes: same change
kernel-64kb.changes: same change
kernel-debug.changes: same change
kernel-default.changes: same change
kernel-docs.changes: same change
kernel-lpae.changes: same change
kernel-obs-build.changes: same change
kernel-obs-qa.changes: same change
kernel-pae.changes: same change
kernel-source.changes: same change
kernel-syms.changes: same change
kernel-syzkaller.changes: same change
kernel-vanilla.changes: same change
kernel-zfcpdump.changes: same change

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ dtb-aarch64.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:30.803241326 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:30.807241183 +0100
@@ -17,7 +17,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 
 %include %_sourcedir/kernel-spec-macros
@@ -29,9 +29,9 @@
 %(chmod +x 
%_sourcedir/{guards,apply-patches,check-for-config-changes,group-source-files.pl,split-modules,modversions,kabi.pl,mkspec,compute-PATCHVERSION.sh,arch-symbols,log.sh,try-disable-staging-driver,compress-vmlinux.sh,mkspec-dtb})
 
 Name:           dtb-aarch64
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

dtb-armv6l.spec: same change
dtb-armv7l.spec: same change
++++++ kernel-64kb.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:30.915237294 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:30.923237006 +0100
@@ -18,7 +18,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -58,9 +58,9 @@
 Summary:        Kernel with 64kb PAGE_SIZE
 License:        GPL-2.0
 Group:          System/Kernel
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

kernel-debug.spec: same change
kernel-default.spec: same change
++++++ kernel-docs.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.055232253 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.059232109 +0100
@@ -17,7 +17,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 
 %include %_sourcedir/kernel-spec-macros
@@ -31,9 +31,9 @@
 Summary:        Kernel Documentation
 License:        GPL-2.0
 Group:          Documentation/Man
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-lpae.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.099230668 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.107230380 +0100
@@ -18,7 +18,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -58,9 +58,9 @@
 Summary:        Kernel for LPAE enabled systems
 License:        GPL-2.0
 Group:          System/Kernel
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-obs-build.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.143229084 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.147228940 +0100
@@ -19,7 +19,7 @@
 
 #!BuildIgnore: post-build-checks
 
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -64,9 +64,9 @@
 Summary:        package kernel and initrd for OBS VM builds
 License:        GPL-2.0
 Group:          SLES
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-obs-qa.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.191227356 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.199227068 +0100
@@ -17,7 +17,7 @@
 # needsrootforbuild
 
 
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 
 %include %_sourcedir/kernel-spec-macros
@@ -36,9 +36,9 @@
 Summary:        Basic QA tests for the kernel
 License:        GPL-2.0
 Group:          SLES
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-pae.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.251225196 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.255225052 +0100
@@ -18,7 +18,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -58,9 +58,9 @@
 Summary:        Kernel with PAE Support
 License:        GPL-2.0
 Group:          System/Kernel
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-source.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.299223467 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.311223035 +0100
@@ -18,7 +18,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -30,9 +30,9 @@
 Summary:        The Linux Kernel Sources
 License:        GPL-2.0
 Group:          Development/Sources
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-syms.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.363221163 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.383220443 +0100
@@ -24,10 +24,10 @@
 Summary:        Kernel Symbol Versions (modversions)
 License:        GPL-2.0
 Group:          Development/Sources
-Version:        4.15.8
+Version:        4.15.9
 %if %using_buildservice
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

++++++ kernel-syzkaller.spec ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:31.439218427 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:31.443218283 +0100
@@ -18,7 +18,7 @@
 
 
 %define srcversion 4.15
-%define patchversion 4.15.8
+%define patchversion 4.15.9
 %define variant %{nil}
 %define vanilla_only 0
 
@@ -58,9 +58,9 @@
 Summary:        Kernel used for fuzzing by syzkaller
 License:        GPL-2.0
 Group:          System/Kernel
-Version:        4.15.8
+Version:        4.15.9
 %if 0%{?is_kotd}
-Release:        <RELEASE>.g67f0889
+Release:        <RELEASE>.g2c1b8ee
 %else
 Release:        0
 %endif

kernel-vanilla.spec: same change
kernel-zfcpdump.spec: same change
++++++ patches.kernel.org.tar.bz2 ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-001-bpf-fix-mlock-precharge-on-arraymaps.patch new/patches.kernel.org/4.15.9-001-bpf-fix-mlock-precharge-on-arraymaps.patch
--- old/patches.kernel.org/4.15.9-001-bpf-fix-mlock-precharge-on-arraymaps.patch       1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-001-bpf-fix-mlock-precharge-on-arraymaps.patch       2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,98 @@
+From: Daniel Borkmann <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:42 +0100
+Subject: [PATCH] bpf: fix mlock precharge on arraymaps
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 9c2d63b843a5c8a8d0559cc067b5398aa5ec3ffc
+
+[ upstream commit 9c2d63b843a5c8a8d0559cc067b5398aa5ec3ffc ]
+
+syzkaller recently triggered OOM during percpu map allocation;
+while there is work in progress by Dennis Zhou to add __GFP_NORETRY
+semantics for percpu allocator under pressure, there seems also a
+missing bpf_map_precharge_memlock() check in array map allocation.
+
+Given today the actual bpf_map_charge_memlock() happens after the
+find_and_alloc_map() in syscall path, the bpf_map_precharge_memlock()
+is there to bail out early before we go and do the map setup work
+when we find that we hit the limits anyway. Therefore add this for
+array map as well.
+
+Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
+Fixes: a10423b87a7e ("bpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map")
+Reported-by: [email protected]
+Signed-off-by: Daniel Borkmann <[email protected]>
+Cc: Dennis Zhou <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ kernel/bpf/arraymap.c | 28 ++++++++++++++++------------
+ 1 file changed, 16 insertions(+), 12 deletions(-)
+
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index ab94d304a634..e76aa6756fc9 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -52,11 +52,11 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
+ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ {
+       bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
+-      int numa_node = bpf_map_attr_numa_node(attr);
++      int ret, numa_node = bpf_map_attr_numa_node(attr);
+       u32 elem_size, index_mask, max_entries;
+       bool unpriv = !capable(CAP_SYS_ADMIN);
++      u64 cost, array_size, mask64;
+       struct bpf_array *array;
+-      u64 array_size, mask64;
+ 
+       /* check sanity of attributes */
+       if (attr->max_entries == 0 || attr->key_size != 4 ||
+@@ -101,8 +101,19 @@ static struct bpf_map *array_map_alloc(union bpf_attr 
*attr)
+               array_size += (u64) max_entries * elem_size;
+ 
+       /* make sure there is no u32 overflow later in round_up() */
+-      if (array_size >= U32_MAX - PAGE_SIZE)
++      cost = array_size;
++      if (cost >= U32_MAX - PAGE_SIZE)
+               return ERR_PTR(-ENOMEM);
++      if (percpu) {
++              cost += (u64)attr->max_entries * elem_size * 
num_possible_cpus();
++              if (cost >= U32_MAX - PAGE_SIZE)
++                      return ERR_PTR(-ENOMEM);
++      }
++      cost = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
++
++      ret = bpf_map_precharge_memlock(cost);
++      if (ret < 0)
++              return ERR_PTR(ret);
+ 
+       /* allocate all map elements and zero-initialize them */
+       array = bpf_map_area_alloc(array_size, numa_node);
+@@ -118,20 +129,13 @@ static struct bpf_map *array_map_alloc(union bpf_attr 
*attr)
+       array->map.max_entries = attr->max_entries;
+       array->map.map_flags = attr->map_flags;
+       array->map.numa_node = numa_node;
++      array->map.pages = cost;
+       array->elem_size = elem_size;
+ 
+-      if (!percpu)
+-              goto out;
+-
+-      array_size += (u64) attr->max_entries * elem_size * num_possible_cpus();
+-
+-      if (array_size >= U32_MAX - PAGE_SIZE ||
+-          bpf_array_alloc_percpu(array)) {
++      if (percpu && bpf_array_alloc_percpu(array)) {
+               bpf_map_area_free(array);
+               return ERR_PTR(-ENOMEM);
+       }
+-out:
+-      array->map.pages = round_up(array_size, PAGE_SIZE) >> PAGE_SHIFT;
+ 
+       return &array->map;
+ }
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-002-bpf-fix-memory-leak-in-lpm_trie-map_free-callb.patch new/patches.kernel.org/4.15.9-002-bpf-fix-memory-leak-in-lpm_trie-map_free-callb.patch
--- old/patches.kernel.org/4.15.9-002-bpf-fix-memory-leak-in-lpm_trie-map_free-callb.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-002-bpf-fix-memory-leak-in-lpm_trie-map_free-callb.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,70 @@
+From: Yonghong Song <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:43 +0100
+Subject: [PATCH] bpf: fix memory leak in lpm_trie map_free callback function
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 9a3efb6b661f71d5675369ace9257833f0e78ef3
+
+[ upstream commit 9a3efb6b661f71d5675369ace9257833f0e78ef3 ]
+
+There is a memory leak happening in lpm_trie map_free callback
+function trie_free. The trie structure itself does not get freed.
+
+Also, trie_free function did not do synchronize_rcu before freeing
+various data structures. This is incorrect as some rcu_read_lock
+region(s) for lookup, update, delete or get_next_key may not complete yet.
+The fix is to add synchronize_rcu in the beginning of trie_free.
+The useless spin_lock is removed from this function as well.
+
+Fixes: b95a5c4db09b ("bpf: add a longest prefix match trie map implementation")
+Reported-by: Mathieu Malaterre <[email protected]>
+Reported-by: Alexei Starovoitov <[email protected]>
+Tested-by: Mathieu Malaterre <[email protected]>
+Signed-off-by: Yonghong Song <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ kernel/bpf/lpm_trie.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 885e45479680..61c0b530c443 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -560,7 +560,10 @@ static void trie_free(struct bpf_map *map)
+       struct lpm_trie_node __rcu **slot;
+       struct lpm_trie_node *node;
+ 
+-      raw_spin_lock(&trie->lock);
++      /* Wait for outstanding programs to complete
++       * update/lookup/delete/get_next_key and free the trie.
++       */
++      synchronize_rcu();
+ 
+       /* Always start at the root and walk down to a node that has no
+        * children. Then free that node, nullify its reference in the parent
+@@ -574,7 +577,7 @@ static void trie_free(struct bpf_map *map)
+                       node = rcu_dereference_protected(*slot,
+                                       lockdep_is_held(&trie->lock));
+                       if (!node)
+-                              goto unlock;
++                              goto out;
+ 
+                       if (rcu_access_pointer(node->child[0])) {
+                               slot = &node->child[0];
+@@ -592,8 +595,8 @@ static void trie_free(struct bpf_map *map)
+               }
+       }
+ 
+-unlock:
+-      raw_spin_unlock(&trie->lock);
++out:
++      kfree(trie);
+ }
+ 
+ static int trie_get_next_key(struct bpf_map *map, void *key, void *next_key)
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-003-bpf-fix-rcu-lockdep-warning-for-lpm_trie-map_f.patch new/patches.kernel.org/4.15.9-003-bpf-fix-rcu-lockdep-warning-for-lpm_trie-map_f.patch
--- old/patches.kernel.org/4.15.9-003-bpf-fix-rcu-lockdep-warning-for-lpm_trie-map_f.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-003-bpf-fix-rcu-lockdep-warning-for-lpm_trie-map_f.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,65 @@
+From: Yonghong Song <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:44 +0100
+Subject: [PATCH] bpf: fix rcu lockdep warning for lpm_trie map_free callback
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 6c5f61023c5b0edb0c8a64c902fe97c6453b1852
+
+[ upstream commit 6c5f61023c5b0edb0c8a64c902fe97c6453b1852 ]
+
+Commit 9a3efb6b661f ("bpf: fix memory leak in lpm_trie map_free callback 
function")
+fixed a memory leak and removed unnecessary locks in map_free callback 
function.
+Unfortrunately, it introduced a lockdep warning. When lockdep checking is 
turned on,
+running tools/testing/selftests/bpf/test_lpm_map will have:
+
+  [   98.294321] =============================
+  [   98.294807] WARNING: suspicious RCU usage
+  [   98.295359] 4.16.0-rc2+ #193 Not tainted
+  [   98.295907] -----------------------------
+  [   98.296486] /home/yhs/work/bpf/kernel/bpf/lpm_trie.c:572 suspicious 
rcu_dereference_check() usage!
+  [   98.297657]
+  [   98.297657] other info that might help us debug this:
+  [   98.297657]
+  [   98.298663]
+  [   98.298663] rcu_scheduler_active = 2, debug_locks = 1
+  [   98.299536] 2 locks held by kworker/2:1/54:
+  [   98.300152]  #0:  ((wq_completion)"events"){+.+.}, at: 
[<00000000196bc1f0>] process_one_work+0x157/0x5c0
+  [   98.301381]  #1:  ((work_completion)(&map->work)){+.+.}, at: 
[<00000000196bc1f0>] process_one_work+0x157/0x5c0
+
+Since actual trie tree removal happens only after no other
+accesses to the tree are possible, replacing
+  rcu_dereference_protected(*slot, lockdep_is_held(&trie->lock))
+with
+  rcu_dereference_protected(*slot, 1)
+fixed the issue.
+
+Fixes: 9a3efb6b661f ("bpf: fix memory leak in lpm_trie map_free callback 
function")
+Reported-by: Eric Dumazet <[email protected]>
+Suggested-by: Eric Dumazet <[email protected]>
+Signed-off-by: Yonghong Song <[email protected]>
+Reviewed-by: Eric Dumazet <[email protected]>
+Acked-by: David S. Miller <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ kernel/bpf/lpm_trie.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 61c0b530c443..424f89ac4adc 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -574,8 +574,7 @@ static void trie_free(struct bpf_map *map)
+               slot = &trie->root;
+ 
+               for (;;) {
+-                      node = rcu_dereference_protected(*slot,
+-                                      lockdep_is_held(&trie->lock));
++                      node = rcu_dereference_protected(*slot, 1);
+                       if (!node)
+                               goto out;
+ 
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-004-bpf-x64-implement-retpoline-for-tail-call.patch new/patches.kernel.org/4.15.9-004-bpf-x64-implement-retpoline-for-tail-call.patch
--- old/patches.kernel.org/4.15.9-004-bpf-x64-implement-retpoline-for-tail-call.patch   1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-004-bpf-x64-implement-retpoline-for-tail-call.patch   2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,186 @@
+From: Daniel Borkmann <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:45 +0100
+Subject: [PATCH] bpf, x64: implement retpoline for tail call
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: a493a87f38cfa48caaa95c9347be2d914c6fdf29
+
+[ upstream commit a493a87f38cfa48caaa95c9347be2d914c6fdf29 ]
+
+Implement a retpoline [0] for the BPF tail call JIT'ing that converts
+the indirect jump via jmp %rax that is used to make the long jump into
+another JITed BPF image. Since this is subject to speculative execution,
+we need to control the transient instruction sequence here as well
+when CONFIG_RETPOLINE is set, and direct it into a pause + lfence loop.
+The latter aligns also with what gcc / clang emits (e.g. [1]).
+
+JIT dump after patch:
+
+  # bpftool p d x i 1
+   0: (18) r2 = map[id:1]
+   2: (b7) r3 = 0
+   3: (85) call bpf_tail_call#12
+   4: (b7) r0 = 2
+   5: (95) exit
+
+With CONFIG_RETPOLINE:
+
+  # bpftool p d j i 1
+  [...]
+  33:  cmp    %edx,0x24(%rsi)
+  36:  jbe    0x0000000000000072  |*
+  38:  mov    0x24(%rbp),%eax
+  3e:  cmp    $0x20,%eax
+  41:  ja     0x0000000000000072  |
+  43:  add    $0x1,%eax
+  46:  mov    %eax,0x24(%rbp)
+  4c:  mov    0x90(%rsi,%rdx,8),%rax
+  54:  test   %rax,%rax
+  57:  je     0x0000000000000072  |
+  59:  mov    0x28(%rax),%rax
+  5d:  add    $0x25,%rax
+  61:  callq  0x000000000000006d  |+
+  66:  pause                      |
+  68:  lfence                     |
+  6b:  jmp    0x0000000000000066  |
+  6d:  mov    %rax,(%rsp)         |
+  71:  retq                       |
+  72:  mov    $0x2,%eax
+  [...]
+
+  * relative fall-through jumps in error case
+  + retpoline for indirect jump
+
+Without CONFIG_RETPOLINE:
+
+  # bpftool p d j i 1
+  [...]
+  33:  cmp    %edx,0x24(%rsi)
+  36:  jbe    0x0000000000000063  |*
+  38:  mov    0x24(%rbp),%eax
+  3e:  cmp    $0x20,%eax
+  41:  ja     0x0000000000000063  |
+  43:  add    $0x1,%eax
+  46:  mov    %eax,0x24(%rbp)
+  4c:  mov    0x90(%rsi,%rdx,8),%rax
+  54:  test   %rax,%rax
+  57:  je     0x0000000000000063  |
+  59:  mov    0x28(%rax),%rax
+  5d:  add    $0x25,%rax
+  61:  jmpq   *%rax               |-
+  63:  mov    $0x2,%eax
+  [...]
+
+  * relative fall-through jumps in error case
+  - plain indirect jump as before
+
+  [0] https://support.google.com/faqs/answer/7625886
+  [1] 
https://github.com/gcc-mirror/gcc/commit/a31e654fa107be968b802786d747e962c2fcdb2b
+
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ arch/x86/include/asm/nospec-branch.h | 37 ++++++++++++++++++++++++++++++++++++
+ arch/x86/net/bpf_jit_comp.c          |  9 +++++----
+ 2 files changed, 42 insertions(+), 4 deletions(-)
+
+diff --git a/arch/x86/include/asm/nospec-branch.h 
b/arch/x86/include/asm/nospec-branch.h
+index 76b058533e47..81a1be326571 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -177,4 +177,41 @@ static inline void 
indirect_branch_prediction_barrier(void)
+ }
+ 
+ #endif /* __ASSEMBLY__ */
++
++/*
++ * Below is used in the eBPF JIT compiler and emits the byte sequence
++ * for the following assembly:
++ *
++ * With retpolines configured:
++ *
++ *    callq do_rop
++ *  spec_trap:
++ *    pause
++ *    lfence
++ *    jmp spec_trap
++ *  do_rop:
++ *    mov %rax,(%rsp)
++ *    retq
++ *
++ * Without retpolines configured:
++ *
++ *    jmp *%rax
++ */
++#ifdef CONFIG_RETPOLINE
++# define RETPOLINE_RAX_BPF_JIT_SIZE   17
++# define RETPOLINE_RAX_BPF_JIT()                              \
++      EMIT1_off32(0xE8, 7);    /* callq do_rop */             \
++      /* spec_trap: */                                        \
++      EMIT2(0xF3, 0x90);       /* pause */                    \
++      EMIT3(0x0F, 0xAE, 0xE8); /* lfence */                   \
++      EMIT2(0xEB, 0xF9);       /* jmp spec_trap */            \
++      /* do_rop: */                                           \
++      EMIT4(0x48, 0x89, 0x04, 0x24); /* mov %rax,(%rsp) */    \
++      EMIT1(0xC3);             /* retq */
++#else
++# define RETPOLINE_RAX_BPF_JIT_SIZE   2
++# define RETPOLINE_RAX_BPF_JIT()                              \
++      EMIT2(0xFF, 0xE0);       /* jmp *%rax */
++#endif
++
+ #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 0554e8aef4d5..940aac70b4da 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -13,6 +13,7 @@
+ #include <linux/if_vlan.h>
+ #include <asm/cacheflush.h>
+ #include <asm/set_memory.h>
++#include <asm/nospec-branch.h>
+ #include <linux/bpf.h>
+ 
+ int bpf_jit_enable __read_mostly;
+@@ -287,7 +288,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+       EMIT2(0x89, 0xD2);                        /* mov edx, edx */
+       EMIT3(0x39, 0x56,                         /* cmp dword ptr [rsi + 16], 
edx */
+             offsetof(struct bpf_array, map.max_entries));
+-#define OFFSET1 43 /* number of bytes to jump */
++#define OFFSET1 (41 + RETPOLINE_RAX_BPF_JIT_SIZE) /* number of bytes to jump 
*/
+       EMIT2(X86_JBE, OFFSET1);                  /* jbe out */
+       label1 = cnt;
+ 
+@@ -296,7 +297,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+        */
+       EMIT2_off32(0x8B, 0x85, 36);              /* mov eax, dword ptr [rbp + 
36] */
+       EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT);     /* cmp eax, MAX_TAIL_CALL_CNT 
*/
+-#define OFFSET2 32
++#define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE)
+       EMIT2(X86_JA, OFFSET2);                   /* ja out */
+       label2 = cnt;
+       EMIT3(0x83, 0xC0, 0x01);                  /* add eax, 1 */
+@@ -310,7 +311,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+        *   goto out;
+        */
+       EMIT3(0x48, 0x85, 0xC0);                  /* test rax,rax */
+-#define OFFSET3 10
++#define OFFSET3 (8 + RETPOLINE_RAX_BPF_JIT_SIZE)
+       EMIT2(X86_JE, OFFSET3);                   /* je out */
+       label3 = cnt;
+ 
+@@ -323,7 +324,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+        * rdi == ctx (1st arg)
+        * rax == prog->bpf_func + prologue_size
+        */
+-      EMIT2(0xFF, 0xE0);                        /* jmp rax */
++      RETPOLINE_RAX_BPF_JIT();
+ 
+       /* out: */
+       BUILD_BUG_ON(cnt - label1 != OFFSET1);
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-005-bpf-arm64-fix-out-of-bounds-access-in-tail-cal.patch new/patches.kernel.org/4.15.9-005-bpf-arm64-fix-out-of-bounds-access-in-tail-cal.patch
--- old/patches.kernel.org/4.15.9-005-bpf-arm64-fix-out-of-bounds-access-in-tail-cal.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-005-bpf-arm64-fix-out-of-bounds-access-in-tail-cal.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,163 @@
+From: Daniel Borkmann <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:46 +0100
+Subject: [PATCH] bpf, arm64: fix out of bounds access in tail call
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 16338a9b3ac30740d49f5dfed81bac0ffa53b9c7
+
+[ upstream commit 16338a9b3ac30740d49f5dfed81bac0ffa53b9c7 ]
+
+I recently noticed a crash on arm64 when feeding a bogus index
+into BPF tail call helper. The crash would not occur when the
+interpreter is used, but only in case of JIT. Output looks as
+follows:
+
+  [  347.007486] Unable to handle kernel paging request at virtual address 
fffb850e96492510
+  [...]
+  [  347.043065] [fffb850e96492510] address between user and kernel address 
ranges
+  [  347.050205] Internal error: Oops: 96000004 [#1] SMP
+  [...]
+  [  347.190829] x13: 0000000000000000 x12: 0000000000000000
+  [  347.196128] x11: fffc047ebe782800 x10: ffff808fd7d0fd10
+  [  347.201427] x9 : 0000000000000000 x8 : 0000000000000000
+  [  347.206726] x7 : 0000000000000000 x6 : 001c991738000000
+  [  347.212025] x5 : 0000000000000018 x4 : 000000000000ba5a
+  [  347.217325] x3 : 00000000000329c4 x2 : ffff808fd7cf0500
+  [  347.222625] x1 : ffff808fd7d0fc00 x0 : ffff808fd7cf0500
+  [  347.227926] Process test_verifier (pid: 4548, stack limit = 
0x000000007467fa61)
+  [  347.235221] Call trace:
+  [  347.237656]  0xffff000002f3a4fc
+  [  347.240784]  bpf_test_run+0x78/0xf8
+  [  347.244260]  bpf_prog_test_run_skb+0x148/0x230
+  [  347.248694]  SyS_bpf+0x77c/0x1110
+  [  347.251999]  el0_svc_naked+0x30/0x34
+  [  347.255564] Code: 9100075a d280220a 8b0a002a d37df04b (f86b694b)
+  [...]
+
+In this case the index used in BPF r3 is the same as in r1
+at the time of the call, meaning we fed a pointer as index;
+here, it had the value 0xffff808fd7cf0500 which sits in x2.
+
+While I found tail calls to be working in general (also for
+hitting the error cases), I noticed the following in the code
+emission:
+
+  # bpftool p d j i 988
+  [...]
+  38:   ldr     w10, [x1,x10]
+  3c:   cmp     w2, w10
+  40:   b.ge    0x000000000000007c              <-- signed cmp
+  44:   mov     x10, #0x20                      // #32
+  48:   cmp     x26, x10
+  4c:   b.gt    0x000000000000007c
+  50:   add     x26, x26, #0x1
+  54:   mov     x10, #0x110                     // #272
+  58:   add     x10, x1, x10
+  5c:   lsl     x11, x2, #3
+  60:   ldr     x11, [x10,x11]                  <-- faulting insn (f86b694b)
+  64:   cbz     x11, 0x000000000000007c
+  [...]
+
+Meaning, the tests passed because commit ddb55992b04d ("arm64:
+bpf: implement bpf_tail_call() helper") was using signed compares
+instead of unsigned which as a result had the test wrongly passing.
+
+Change this but also the tail call count test both into unsigned
+and cap the index as u32. Latter we did as well in 90caccdd8cc0
+("bpf: fix bpf_tail_call() x64 JIT") and is needed in addition here,
+too. Tested on HiSilicon Hi1616.
+
+Result after patch:
+
+  # bpftool p d j i 268
+  [...]
+  38:  ldr     w10, [x1,x10]
+  3c:  add     w2, w2, #0x0
+  40:  cmp     w2, w10
+  44:  b.cs    0x0000000000000080
+  48:  mov     x10, #0x20                      // #32
+  4c:  cmp     x26, x10
+  50:  b.hi    0x0000000000000080
+  54:  add     x26, x26, #0x1
+  58:  mov     x10, #0x110                     // #272
+  5c:  add     x10, x1, x10
+  60:  lsl     x11, x2, #3
+  64:  ldr     x11, [x10,x11]
+  68:  cbz     x11, 0x0000000000000080
+  [...]
+
+Fixes: ddb55992b04d ("arm64: bpf: implement bpf_tail_call() helper")
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ arch/arm64/net/bpf_jit_comp.c               |  5 +++--
+ tools/testing/selftests/bpf/test_verifier.c | 26 ++++++++++++++++++++++++++
+ 2 files changed, 29 insertions(+), 2 deletions(-)
+
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index bb32f7f6dd0f..be155f70f108 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -238,8 +238,9 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+       off = offsetof(struct bpf_array, map.max_entries);
+       emit_a64_mov_i64(tmp, off, ctx);
+       emit(A64_LDR32(tmp, r2, tmp), ctx);
++      emit(A64_MOV(0, r3, r3), ctx);
+       emit(A64_CMP(0, r3, tmp), ctx);
+-      emit(A64_B_(A64_COND_GE, jmp_offset), ctx);
++      emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+ 
+       /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+        *     goto out;
+@@ -247,7 +248,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+        */
+       emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx);
+       emit(A64_CMP(1, tcc, tmp), ctx);
+-      emit(A64_B_(A64_COND_GT, jmp_offset), ctx);
++      emit(A64_B_(A64_COND_HI, jmp_offset), ctx);
+       emit(A64_ADD_I(1, tcc, tcc, 1), ctx);
+ 
+       /* prog = array->ptrs[index];
+diff --git a/tools/testing/selftests/bpf/test_verifier.c 
b/tools/testing/selftests/bpf/test_verifier.c
+index 5ed4175c4ff8..13036a145318 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -2254,6 +2254,32 @@ static struct bpf_test tests[] = {
+               .result_unpriv = REJECT,
+               .result = ACCEPT,
+       },
++      {
++              "runtime/jit: pass negative index to tail_call",
++              .insns = {
++                      BPF_MOV64_IMM(BPF_REG_3, -1),
++                      BPF_LD_MAP_FD(BPF_REG_2, 0),
++                      BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++                                   BPF_FUNC_tail_call),
++                      BPF_MOV64_IMM(BPF_REG_0, 0),
++                      BPF_EXIT_INSN(),
++              },
++              .fixup_prog = { 1 },
++              .result = ACCEPT,
++      },
++      {
++              "runtime/jit: pass > 32bit index to tail_call",
++              .insns = {
++                      BPF_LD_IMM64(BPF_REG_3, 0x100000000ULL),
++                      BPF_LD_MAP_FD(BPF_REG_2, 0),
++                      BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++                                   BPF_FUNC_tail_call),
++                      BPF_MOV64_IMM(BPF_REG_0, 0),
++                      BPF_EXIT_INSN(),
++              },
++              .fixup_prog = { 2 },
++              .result = ACCEPT,
++      },
+       {
+               "stack pointer arithmetic",
+               .insns = {
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-006-bpf-add-schedule-points-in-percpu-arrays-manag.patch new/patches.kernel.org/4.15.9-006-bpf-add-schedule-points-in-percpu-arrays-manag.patch
--- old/patches.kernel.org/4.15.9-006-bpf-add-schedule-points-in-percpu-arrays-manag.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-006-bpf-add-schedule-points-in-percpu-arrays-manag.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,55 @@
+From: Eric Dumazet <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:47 +0100
+Subject: [PATCH] bpf: add schedule points in percpu arrays management
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 32fff239de37ef226d5b66329dd133f64d63b22d
+
+[ upstream commit 32fff239de37ef226d5b66329dd133f64d63b22d ]
+
+syszbot managed to trigger RCU detected stalls in
+bpf_array_free_percpu()
+
+It takes time to allocate a huge percpu map, but even more time to free
+it.
+
+Since we run in process context, use cond_resched() to yield cpu if
+needed.
+
+Fixes: a10423b87a7e ("bpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map")
+Signed-off-by: Eric Dumazet <[email protected]>
+Reported-by: syzbot <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ kernel/bpf/arraymap.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index e76aa6756fc9..8596aa31c75e 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -26,8 +26,10 @@ static void bpf_array_free_percpu(struct bpf_array *array)
+ {
+       int i;
+ 
+-      for (i = 0; i < array->map.max_entries; i++)
++      for (i = 0; i < array->map.max_entries; i++) {
+               free_percpu(array->pptrs[i]);
++              cond_resched();
++      }
+ }
+ 
+ static int bpf_array_alloc_percpu(struct bpf_array *array)
+@@ -43,6 +45,7 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
+                       return -ENOMEM;
+               }
+               array->pptrs[i] = ptr;
++              cond_resched();
+       }
+ 
+       return 0;
+-- 
+2.16.2
+
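The change above follows the usual rule for long loops running in process context: yield the CPU periodically so very large per-CPU maps can be allocated and freed without tripping RCU stall detection. A minimal illustrative sketch of that pattern (hypothetical function and parameter names, not taken from the patch):

  #include <linux/percpu.h>        /* free_percpu() */
  #include <linux/sched.h>         /* cond_resched() */

  /* Free a large number of per-CPU objects from process context.
   * Calling cond_resched() once per iteration lets the scheduler run,
   * which is what the patch above adds to bpf_array_free_percpu() and
   * bpf_array_alloc_percpu(). */
  static void example_free_all(void __percpu *ptrs[], unsigned int n)
  {
          unsigned int i;

          for (i = 0; i < n; i++) {
                  free_percpu(ptrs[i]);
                  cond_resched();  /* may sleep; process context only */
          }
  }
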
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-007-bpf-allow-xadd-only-on-aligned-memory.patch new/patches.kernel.org/4.15.9-007-bpf-allow-xadd-only-on-aligned-memory.patch
--- old/patches.kernel.org/4.15.9-007-bpf-allow-xadd-only-on-aligned-memory.patch       1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-007-bpf-allow-xadd-only-on-aligned-memory.patch       2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,263 @@
+From: Daniel Borkmann <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:48 +0100
+Subject: [PATCH] bpf: allow xadd only on aligned memory
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: ca36960211eb228bcbc7aaebfa0d027368a94c60
+
+[ upstream commit ca36960211eb228bcbc7aaebfa0d027368a94c60 ]
+
+The requirements around atomic_add() / atomic64_add() resp. their
+JIT implementations differ across architectures. E.g. while x86_64
+seems just fine with BPF's xadd on unaligned memory, on arm64 it
+triggers via interpreter but also JIT the following crash:
+
+  [  830.864985] Unable to handle kernel paging request at virtual address 
ffff8097d7ed6703
+  [...]
+  [  830.916161] Internal error: Oops: 96000021 [#1] SMP
+  [  830.984755] CPU: 37 PID: 2788 Comm: test_verifier Not tainted 4.16.0-rc2+ 
#8
+  [  830.991790] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.29 
07/17/2017
+  [  830.998998] pstate: 80400005 (Nzcv daif +PAN -UAO)
+  [  831.003793] pc : __ll_sc_atomic_add+0x4/0x18
+  [  831.008055] lr : ___bpf_prog_run+0x1198/0x1588
+  [  831.012485] sp : ffff00001ccabc20
+  [  831.015786] x29: ffff00001ccabc20 x28: ffff8017d56a0f00
+  [  831.021087] x27: 0000000000000001 x26: 0000000000000000
+  [  831.026387] x25: 000000c168d9db98 x24: 0000000000000000
+  [  831.031686] x23: ffff000008203878 x22: ffff000009488000
+  [  831.036986] x21: ffff000008b14e28 x20: ffff00001ccabcb0
+  [  831.042286] x19: ffff0000097b5080 x18: 0000000000000a03
+  [  831.047585] x17: 0000000000000000 x16: 0000000000000000
+  [  831.052885] x15: 0000ffffaeca8000 x14: 0000000000000000
+  [  831.058184] x13: 0000000000000000 x12: 0000000000000000
+  [  831.063484] x11: 0000000000000001 x10: 0000000000000000
+  [  831.068783] x9 : 0000000000000000 x8 : 0000000000000000
+  [  831.074083] x7 : 0000000000000000 x6 : 000580d428000000
+  [  831.079383] x5 : 0000000000000018 x4 : 0000000000000000
+  [  831.084682] x3 : ffff00001ccabcb0 x2 : 0000000000000001
+  [  831.089982] x1 : ffff8097d7ed6703 x0 : 0000000000000001
+  [  831.095282] Process test_verifier (pid: 2788, stack limit = 
0x0000000018370044)
+  [  831.102577] Call trace:
+  [  831.105012]  __ll_sc_atomic_add+0x4/0x18
+  [  831.108923]  __bpf_prog_run32+0x4c/0x70
+  [  831.112748]  bpf_test_run+0x78/0xf8
+  [  831.116224]  bpf_prog_test_run_xdp+0xb4/0x120
+  [  831.120567]  SyS_bpf+0x77c/0x1110
+  [  831.123873]  el0_svc_naked+0x30/0x34
+  [  831.127437] Code: 97fffe97 17ffffec 00000000 f9800031 (885f7c31)
+
+Reason for this is because memory is required to be aligned. In
+case of BPF, we always enforce alignment in terms of stack access,
+but not when accessing map values or packet data when the underlying
+arch (e.g. arm64) has CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set.
+
+xadd on packet data that is local to us anyway is just wrong, so
+forbid this case entirely. The only place where xadd makes sense in
+fact are map values; xadd on stack is wrong as well, but it's been
+around for much longer. Specifically enforce strict alignment in case
+of xadd, so that we handle this case generically and avoid such crashes
+in the first place.
+
+Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ kernel/bpf/verifier.c                       | 42 +++++++++++++--------
+ tools/testing/selftests/bpf/test_verifier.c | 58 +++++++++++++++++++++++++++++
+ 2 files changed, 84 insertions(+), 16 deletions(-)
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 13551e623501..7125ddbb24df 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -985,6 +985,13 @@ static bool is_ctx_reg(struct bpf_verifier_env *env, int 
regno)
+       return reg->type == PTR_TO_CTX;
+ }
+ 
++static bool is_pkt_reg(struct bpf_verifier_env *env, int regno)
++{
++      const struct bpf_reg_state *reg = cur_regs(env) + regno;
++
++      return type_is_pkt_pointer(reg->type);
++}
++
+ static int check_pkt_ptr_alignment(struct bpf_verifier_env *env,
+                                  const struct bpf_reg_state *reg,
+                                  int off, int size, bool strict)
+@@ -1045,10 +1052,10 @@ static int check_generic_ptr_alignment(struct 
bpf_verifier_env *env,
+ }
+ 
+ static int check_ptr_alignment(struct bpf_verifier_env *env,
+-                             const struct bpf_reg_state *reg,
+-                             int off, int size)
++                             const struct bpf_reg_state *reg, int off,
++                             int size, bool strict_alignment_once)
+ {
+-      bool strict = env->strict_alignment;
++      bool strict = env->strict_alignment || strict_alignment_once;
+       const char *pointer_desc = "";
+ 
+       switch (reg->type) {
+@@ -1108,9 +1115,9 @@ static void coerce_reg_to_size(struct bpf_reg_state 
*reg, int size)
+  * if t==write && value_regno==-1, some unknown value is stored into memory
+  * if t==read && value_regno==-1, don't care what we read from memory
+  */
+-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 
regno, int off,
+-                          int bpf_size, enum bpf_access_type t,
+-                          int value_regno)
++static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 
regno,
++                          int off, int bpf_size, enum bpf_access_type t,
++                          int value_regno, bool strict_alignment_once)
+ {
+       struct bpf_verifier_state *state = env->cur_state;
+       struct bpf_reg_state *regs = cur_regs(env);
+@@ -1122,7 +1129,7 @@ static int check_mem_access(struct bpf_verifier_env 
*env, int insn_idx, u32 regn
+               return size;
+ 
+       /* alignment checks will add in reg->off themselves */
+-      err = check_ptr_alignment(env, reg, off, size);
++      err = check_ptr_alignment(env, reg, off, size, strict_alignment_once);
+       if (err)
+               return err;
+ 
+@@ -1265,21 +1272,23 @@ static int check_xadd(struct bpf_verifier_env *env, 
int insn_idx, struct bpf_ins
+               return -EACCES;
+       }
+ 
+-      if (is_ctx_reg(env, insn->dst_reg)) {
+-              verbose(env, "BPF_XADD stores into R%d context is not 
allowed\n",
+-                      insn->dst_reg);
++      if (is_ctx_reg(env, insn->dst_reg) ||
++          is_pkt_reg(env, insn->dst_reg)) {
++              verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
++                      insn->dst_reg, is_ctx_reg(env, insn->dst_reg) ?
++                      "context" : "packet");
+               return -EACCES;
+       }
+ 
+       /* check whether atomic_add can read the memory */
+       err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+-                             BPF_SIZE(insn->code), BPF_READ, -1);
++                             BPF_SIZE(insn->code), BPF_READ, -1, true);
+       if (err)
+               return err;
+ 
+       /* check whether atomic_add can write into the same memory */
+       return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+-                              BPF_SIZE(insn->code), BPF_WRITE, -1);
++                              BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+ }
+ 
+ /* Does this register contain a constant zero? */
+@@ -1763,7 +1772,8 @@ static int check_call(struct bpf_verifier_env *env, int 
func_id, int insn_idx)
+        * is inferred from register state.
+        */
+       for (i = 0; i < meta.access_size; i++) {
+-              err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B, 
BPF_WRITE, -1);
++              err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
++                                     BPF_WRITE, -1, false);
+               if (err)
+                       return err;
+       }
+@@ -3933,7 +3943,7 @@ static int do_check(struct bpf_verifier_env *env)
+                        */
+                       err = check_mem_access(env, insn_idx, insn->src_reg, 
insn->off,
+                                              BPF_SIZE(insn->code), BPF_READ,
+-                                             insn->dst_reg);
++                                             insn->dst_reg, false);
+                       if (err)
+                               return err;
+ 
+@@ -3985,7 +3995,7 @@ static int do_check(struct bpf_verifier_env *env)
+                       /* check that memory (dst_reg + off) is writeable */
+                       err = check_mem_access(env, insn_idx, insn->dst_reg, 
insn->off,
+                                              BPF_SIZE(insn->code), BPF_WRITE,
+-                                             insn->src_reg);
++                                             insn->src_reg, false);
+                       if (err)
+                               return err;
+ 
+@@ -4020,7 +4030,7 @@ static int do_check(struct bpf_verifier_env *env)
+                       /* check that memory (dst_reg + off) is writeable */
+                       err = check_mem_access(env, insn_idx, insn->dst_reg, 
insn->off,
+                                              BPF_SIZE(insn->code), BPF_WRITE,
+-                                             -1);
++                                             -1, false);
+                       if (err)
+                               return err;
+ 
+diff --git a/tools/testing/selftests/bpf/test_verifier.c 
b/tools/testing/selftests/bpf/test_verifier.c
+index 13036a145318..0694527acaa0 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -8852,6 +8852,64 @@ static struct bpf_test tests[] = {
+               .result = REJECT,
+               .prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
+       },
++      {
++              "xadd/w check unaligned stack",
++              .insns = {
++                      BPF_MOV64_IMM(BPF_REG_0, 1),
++                      BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
++                      BPF_STX_XADD(BPF_W, BPF_REG_10, BPF_REG_0, -7),
++                      BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
++                      BPF_EXIT_INSN(),
++              },
++              .result = REJECT,
++              .errstr = "misaligned stack access off",
++              .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++      },
++      {
++              "xadd/w check unaligned map",
++              .insns = {
++                      BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++                      BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++                      BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++                      BPF_LD_MAP_FD(BPF_REG_1, 0),
++                      BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++                                   BPF_FUNC_map_lookup_elem),
++                      BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++                      BPF_EXIT_INSN(),
++                      BPF_MOV64_IMM(BPF_REG_1, 1),
++                      BPF_STX_XADD(BPF_W, BPF_REG_0, BPF_REG_1, 3),
++                      BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 3),
++                      BPF_EXIT_INSN(),
++              },
++              .fixup_map1 = { 3 },
++              .result = REJECT,
++              .errstr = "misaligned value access off",
++              .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++      },
++      {
++              "xadd/w check unaligned pkt",
++              .insns = {
++                      BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++                                  offsetof(struct xdp_md, data)),
++                      BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++                                  offsetof(struct xdp_md, data_end)),
++                      BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++                      BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++                      BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 2),
++                      BPF_MOV64_IMM(BPF_REG_0, 99),
++                      BPF_JMP_IMM(BPF_JA, 0, 0, 6),
++                      BPF_MOV64_IMM(BPF_REG_0, 1),
++                      BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
++                      BPF_ST_MEM(BPF_W, BPF_REG_2, 3, 0),
++                      BPF_STX_XADD(BPF_W, BPF_REG_2, BPF_REG_0, 1),
++                      BPF_STX_XADD(BPF_W, BPF_REG_2, BPF_REG_0, 2),
++                      BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 1),
++                      BPF_EXIT_INSN(),
++              },
++              .result = REJECT,
++              .errstr = "BPF_XADD stores into R2 packet",
++              .prog_type = BPF_PROG_TYPE_XDP,
++      },
+ };
+ 
+ static int probe_filter_length(const struct bpf_insn *fp)
+-- 
+2.16.2
+
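From a BPF program author's point of view, the verifier change above means the xadd instruction (what LLVM emits for __sync_fetch_and_add()) is only accepted on naturally aligned memory such as stack slots and map values; packet data and misaligned offsets are rejected. A small illustrative BPF C program in the selftests style of that era (the map, section and function names and the bpf_helpers.h header are assumptions, not part of the patch):

  #include <linux/bpf.h>
  #include "bpf_helpers.h"                      /* SEC(), bpf_map_lookup_elem() */

  struct bpf_map_def SEC("maps") counters = {   /* hypothetical map */
          .type        = BPF_MAP_TYPE_ARRAY,
          .key_size    = sizeof(__u32),
          .value_size  = sizeof(__u64),         /* 8-byte aligned value */
          .max_entries = 1,
  };

  SEC("xdp")
  int count_packets(struct xdp_md *ctx)
  {
          __u32 key = 0;
          __u64 *cnt = bpf_map_lookup_elem(&counters, &key);

          /* __sync_fetch_and_add() compiles to BPF_XADD; with the patch
           * above it passes the verifier here because the map value is
           * aligned, while an xadd into packet data is rejected with
           * "BPF_XADD stores into R2 packet is not allowed". */
          if (cnt)
                  __sync_fetch_and_add(cnt, 1);
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";
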
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-008-bpf-ppc64-fix-out-of-bounds-access-in-tail-cal.patch new/patches.kernel.org/4.15.9-008-bpf-ppc64-fix-out-of-bounds-access-in-tail-cal.patch
--- old/patches.kernel.org/4.15.9-008-bpf-ppc64-fix-out-of-bounds-access-in-tail-cal.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-008-bpf-ppc64-fix-out-of-bounds-access-in-tail-cal.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,45 @@
+From: Daniel Borkmann <[email protected]>
+Date: Thu, 8 Mar 2018 13:16:49 +0100
+Subject: [PATCH] bpf, ppc64: fix out of bounds access in tail call
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: d269176e766c71c998cb75b4ea8cbc321cc0019d
+
+[ upstream commit d269176e766c71c998cb75b4ea8cbc321cc0019d ]
+
+While working on 16338a9b3ac3 ("bpf, arm64: fix out of bounds access in
+tail call") I noticed that ppc64 JIT is partially affected as well. While
+the bound checking is correctly performed as unsigned comparison, the
+register with the index value however, is never truncated into 32 bit
+space, so e.g. a index value of 0x100000000ULL with a map of 1 element
+would pass with PPC_CMPLW() whereas we later on continue with the full
+64 bit register value. Therefore, as we do in interpreter and other JITs
+truncate the value to 32 bit initially in order to fix access.
+
+Fixes: ce0761419fae ("powerpc/bpf: Implement support for tail calls")
+Signed-off-by: Daniel Borkmann <[email protected]>
+Reviewed-by: Naveen N. Rao <[email protected]>
+Tested-by: Naveen N. Rao <[email protected]>
+Signed-off-by: Alexei Starovoitov <[email protected]>
+Signed-off-by: Daniel Borkmann <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ arch/powerpc/net/bpf_jit_comp64.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
b/arch/powerpc/net/bpf_jit_comp64.c
+index d183b4801bdb..35591fb09042 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -242,6 +242,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct 
codegen_context *ctx, u32
+        *   goto out;
+        */
+       PPC_LWZ(b2p[TMP_REG_1], b2p_bpf_array, offsetof(struct bpf_array, 
map.max_entries));
++      PPC_RLWINM(b2p_index, b2p_index, 0, 0, 31);
+       PPC_CMPLW(b2p_index, b2p[TMP_REG_1]);
+       PPC_BCC(COND_GE, out);
+ 
+-- 
+2.16.2
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-009-scsi-mpt3sas-fix-oops-in-error-handlers-after-.patch new/patches.kernel.org/4.15.9-009-scsi-mpt3sas-fix-oops-in-error-handlers-after-.patch
--- old/patches.kernel.org/4.15.9-009-scsi-mpt3sas-fix-oops-in-error-handlers-after-.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-009-scsi-mpt3sas-fix-oops-in-error-handlers-after-.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,93 @@
+From: Mauricio Faria de Oliveira <[email protected]>
+Date: Fri, 16 Feb 2018 20:39:57 -0200
+Subject: [PATCH] scsi: mpt3sas: fix oops in error handlers after
+ shutdown/unload
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 9ff549ffb4fb4cc9a4b24d1de9dc3e68287797c4
+
+commit 9ff549ffb4fb4cc9a4b24d1de9dc3e68287797c4 upstream.
+
+This patch adds checks for 'ioc->remove_host' in the SCSI error handlers, so
+not to access pointers/resources potentially freed in the PCI shutdown/module
+unload path.  The error handlers may be invoked after shutdown/unload,
+depending on other components.
+
+This problem was observed with kexec on a system with a mpt3sas based adapter
+and an infiniband adapter which takes long enough to shutdown:
+
+The mpt3sas driver finished shutting down / disabled interrupt handling, thus
+some commands have not finished and timed out.
+
+Since the system was still running (waiting for the infiniband adapter to
+shutdown), the scsi error handler for task abort of mpt3sas was invoked, and
+hit an oops -- either in scsih_abort() because 'ioc->scsi_lookup' was NULL
+without commit dbec4c9040ed ("scsi: mpt3sas: lockless command submission"), or
+later up in scsih_host_reset() (with or without that commit), because it
+eventually called mpt3sas_base_get_iocstate().
+
+After the above commit, the oops in scsih_abort() does not occur anymore
+(_scsih_scsi_lookup_find_by_scmd() is no longer called), but that commit is
+too big and out of the scope of linux-stable, where this patch might help, so
+still go for the changes.
+
+Also, this might help to prevent similar errors in the future, in case code
+changes and possibly tries to access freed stuff.
+
+Note the fix in scsih_host_reset() is still important anyway.
+
+Signed-off-by: Mauricio Faria de Oliveira <[email protected]>
+Acked-by: Sreekanth Reddy <[email protected]>
+Signed-off-by: Martin K. Petersen <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ drivers/scsi/mpt3sas/mpt3sas_scsih.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c 
b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index b258f210120a..4adc7c77a4df 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -2998,7 +2998,8 @@ scsih_abort(struct scsi_cmnd *scmd)
+       _scsih_tm_display_info(ioc, scmd);
+ 
+       sas_device_priv_data = scmd->device->hostdata;
+-      if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++      if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++          ioc->remove_host) {
+               sdev_printk(KERN_INFO, scmd->device,
+                       "device been deleted! scmd(%p)\n", scmd);
+               scmd->result = DID_NO_CONNECT << 16;
+@@ -3060,7 +3061,8 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
+       _scsih_tm_display_info(ioc, scmd);
+ 
+       sas_device_priv_data = scmd->device->hostdata;
+-      if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++      if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++          ioc->remove_host) {
+               sdev_printk(KERN_INFO, scmd->device,
+                       "device been deleted! scmd(%p)\n", scmd);
+               scmd->result = DID_NO_CONNECT << 16;
+@@ -3122,7 +3124,8 @@ scsih_target_reset(struct scsi_cmnd *scmd)
+       _scsih_tm_display_info(ioc, scmd);
+ 
+       sas_device_priv_data = scmd->device->hostdata;
+-      if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++      if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++          ioc->remove_host) {
+               starget_printk(KERN_INFO, starget, "target been deleted! scmd(%p)\n",
+                       scmd);
+               scmd->result = DID_NO_CONNECT << 16;
+@@ -3179,7 +3182,7 @@ scsih_host_reset(struct scsi_cmnd *scmd)
+           ioc->name, scmd);
+       scsi_print_command(scmd);
+ 
+-      if (ioc->is_driver_loading) {
++      if (ioc->is_driver_loading || ioc->remove_host) {
+               pr_info(MPT3SAS_FMT "Blocking the host reset\n",
+                   ioc->name);
+               r = FAILED;
+-- 
+2.16.2
+
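
The hunks above show the actual mpt3sas change. Purely as an illustration of the pattern, here is a minimal, self-contained userspace C sketch of guarding an error-handler entry point with a removal flag set by the shutdown/unload path; every name in it (struct fake_ioc, eh_abort, per_device_data) is invented for this sketch and is not driver code.

/*
 * Minimal userspace sketch (not the mpt3sas code) of the guard added above:
 * an error handler checks a "remove_host" flag set by the shutdown/unload
 * path and bails out early instead of touching per-device state that may
 * already have been freed.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_ioc {
        bool  remove_host;      /* set by the shutdown/unload path      */
        void *per_device_data;  /* freed once the host is being removed */
};

/* Stand-in for an error handler such as scsih_abort(). */
static const char *eh_abort(struct fake_ioc *ioc)
{
        if (!ioc->per_device_data || ioc->remove_host) {
                /* Device gone or host being removed: complete the command
                 * early (the real driver uses DID_NO_CONNECT). */
                return "completed early, no freed state touched";
        }
        /* Normal path: per-device data is still valid here. */
        return "task abort issued";
}

int main(void)
{
        struct fake_ioc ioc = { .remove_host = false };

        ioc.per_device_data = &ioc;
        printf("before shutdown: %s\n", eh_abort(&ioc));

        ioc.remove_host = true;         /* shutdown/unload path ran */
        ioc.per_device_data = NULL;
        printf("after shutdown:  %s\n", eh_abort(&ioc));
        return 0;
}

The sketch only illustrates the early-bailout idea; as the hunks show, the real handlers set scmd->result to DID_NO_CONNECT << 16 and return instead of dereferencing freed state.
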
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-010-scsi-mpt3sas-wait-for-and-flush-running-comman.patch new/patches.kernel.org/4.15.9-010-scsi-mpt3sas-wait-for-and-flush-running-comman.patch
--- old/patches.kernel.org/4.15.9-010-scsi-mpt3sas-wait-for-and-flush-running-comman.patch      1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-010-scsi-mpt3sas-wait-for-and-flush-running-comman.patch      2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,119 @@
+From: Sreekanth Reddy <[email protected]>
+Date: Fri, 16 Feb 2018 20:39:58 -0200
+Subject: [PATCH] scsi: mpt3sas: wait for and flush running commands on
+ shutdown/unload
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: c666d3be99c000bb889a33353e9be0fa5808d3de
+
+commit c666d3be99c000bb889a33353e9be0fa5808d3de upstream.
+
+This patch finishes all outstanding SCSI IO commands (but not other commands,
+e.g., task management) in the shutdown and unload paths.
+
+It first waits for the commands to complete (this is done after setting
+'ioc->remove_host = 1', which prevents new commands from being queued), then
+it flushes commands that might still be running.
+
+This avoids triggering error handling (e.g., an abort command) for commands
+that the adapter may have completed after interrupts were disabled.
+
+[mauricfo: introduced something in commit message.]
+
+Signed-off-by: Sreekanth Reddy <[email protected]>
+Tested-by: Mauricio Faria de Oliveira <[email protected]>
+Signed-off-by: Mauricio Faria de Oliveira <[email protected]>
+Signed-off-by: Martin K. Petersen <[email protected]>
+[mauricfo: backport to linux-4.15.y (a few updates to context lines)]
+Signed-off-by: Mauricio Faria de Oliveira <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ drivers/scsi/mpt3sas/mpt3sas_base.c  |  8 ++++----
+ drivers/scsi/mpt3sas/mpt3sas_base.h  |  3 +++
+ drivers/scsi/mpt3sas/mpt3sas_scsih.c | 10 +++++++++-
+ 3 files changed, 16 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 8027de465d47..f43b51452596 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -6289,14 +6289,14 @@ _base_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase)
+ }
+ 
+ /**
+- * _wait_for_commands_to_complete - reset controller
++ * mpt3sas_wait_for_commands_to_complete - reset controller
+  * @ioc: Pointer to MPT_ADAPTER structure
+  *
+  * This function waiting(3s) for all pending commands to complete
+  * prior to putting controller in reset.
+  */
+-static void
+-_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc)
++void
++mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc)
+ {
+       u32 ioc_state;
+       unsigned long flags;
+@@ -6375,7 +6375,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
+                       is_fault = 1;
+       }
+       _base_reset_handler(ioc, MPT3_IOC_PRE_RESET);
+-      _wait_for_commands_to_complete(ioc);
++      mpt3sas_wait_for_commands_to_complete(ioc);
+       _base_mask_interrupts(ioc);
+       r = _base_make_ioc_ready(ioc, type);
+       if (r)
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index 60f42ca3954f..69022b10a3d8 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1435,6 +1435,9 @@ void mpt3sas_base_update_missing_delay(struct MPT3SAS_ADAPTER *ioc,
+ 
+ int mpt3sas_port_enable(struct MPT3SAS_ADAPTER *ioc);
+ 
++void
++mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc);
++
+ 
+ /* scsih shared API */
+ u8 mpt3sas_scsih_event_callback(struct MPT3SAS_ADAPTER *ioc, u8 msix_index,
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 4adc7c77a4df..741b0a28c2e3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -4614,7 +4614,7 @@ _scsih_flush_running_cmds(struct MPT3SAS_ADAPTER *ioc)
+               _scsih_set_satl_pending(scmd, false);
+               mpt3sas_base_free_smid(ioc, smid);
+               scsi_dma_unmap(scmd);
+-              if (ioc->pci_error_recovery)
++              if (ioc->pci_error_recovery || ioc->remove_host)
+                       scmd->result = DID_NO_CONNECT << 16;
+               else
+                       scmd->result = DID_RESET << 16;
+@@ -9904,6 +9904,10 @@ static void scsih_remove(struct pci_dev *pdev)
+       unsigned long flags;
+ 
+       ioc->remove_host = 1;
++
++      mpt3sas_wait_for_commands_to_complete(ioc);
++      _scsih_flush_running_cmds(ioc);
++
+       _scsih_fw_event_cleanup_queue(ioc);
+ 
+       spin_lock_irqsave(&ioc->fw_event_lock, flags);
+@@ -9980,6 +9984,10 @@ scsih_shutdown(struct pci_dev *pdev)
+       unsigned long flags;
+ 
+       ioc->remove_host = 1;
++
++      mpt3sas_wait_for_commands_to_complete(ioc);
++      _scsih_flush_running_cmds(ioc);
++
+       _scsih_fw_event_cleanup_queue(ioc);
+ 
+       spin_lock_irqsave(&ioc->fw_event_lock, flags);
+-- 
+2.16.2
+
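
For readers skimming the diff, a minimal userspace C sketch of the shutdown ordering described in the commit message follows. The names (fake_ioc, wait_for_commands, flush_running_cmds) and the polling loop are invented for illustration; the real driver's mpt3sas_wait_for_commands_to_complete() waits on the IOC state for roughly 3 seconds, as its kernel-doc above notes.

/*
 * Minimal userspace sketch (not the mpt3sas code) of the shutdown ordering
 * described above: mark the host as being removed first, give outstanding
 * commands a bounded amount of time to complete, then flush whatever is
 * still pending so the SCSI layer never runs error handling for it.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_ioc {
        bool remove_host;   /* once set, no new I/O is accepted           */
        int  outstanding;   /* commands still owned by the (fake) adapter */
};

/* Stand-in for mpt3sas_wait_for_commands_to_complete(): bounded wait only. */
static void wait_for_commands(struct fake_ioc *ioc)
{
        for (int tick = 0; tick < 3 && ioc->outstanding > 0; tick++) {
                printf("tick %d: %d command(s) outstanding\n",
                       tick, ioc->outstanding);
                ioc->outstanding--;  /* pretend one completes per tick */
        }
}

/* Stand-in for _scsih_flush_running_cmds(): complete leftovers right away. */
static void flush_running_cmds(struct fake_ioc *ioc)
{
        while (ioc->outstanding > 0) {
                printf("flushing a leftover command\n");
                ioc->outstanding--;
        }
}

static void fake_shutdown(struct fake_ioc *ioc)
{
        ioc->remove_host = true;   /* 1. stop accepting new commands        */
        wait_for_commands(ioc);    /* 2. let in-flight commands finish      */
        flush_running_cmds(ioc);   /* 3. flush anything that did not finish */
}

int main(void)
{
        struct fake_ioc ioc = { .remove_host = false, .outstanding = 5 };

        fake_shutdown(&ioc);
        return 0;
}

The ordering is the point: setting the flag first closes the door to new I/O, the wait covers the common case, and the flush covers commands the adapter can no longer complete.
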
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-011-KVM-x86-fix-backward-migration-with-async_PF.patch new/patches.kernel.org/4.15.9-011-KVM-x86-fix-backward-migration-with-async_PF.patch
--- old/patches.kernel.org/4.15.9-011-KVM-x86-fix-backward-migration-with-async_PF.patch        1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-011-KVM-x86-fix-backward-migration-with-async_PF.patch        2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,114 @@
+From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= <[email protected]>
+Date: Thu, 1 Feb 2018 22:16:21 +0100
+Subject: [PATCH] KVM: x86: fix backward migration with async_PF
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: fe2a3027e74e40a3ece3a4c1e4e51403090a907a
+
+commit fe2a3027e74e40a3ece3a4c1e4e51403090a907a upstream.
+
+Guests on new hypervisors might set the KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT
+bit when enabling async_PF, but this bit is reserved on old hypervisors,
+which results in a failure upon migration.
+
+To avoid breaking the different cases, only check for the CPUID feature bit
+before enabling the feature, and nothing else.
+
+Fixes: 52a5c155cf79 ("KVM: async_pf: Let guest support delivery of async_pf from guest mode")
+Cc: <[email protected]>
+Reviewed-by: Wanpeng Li <[email protected]>
+Reviewed-by: David Hildenbrand <[email protected]>
+Signed-off-by: Radim Krčmář <[email protected]>
+Signed-off-by: Paolo Bonzini <[email protected]>
+[jwang: port to 4.14]
+Signed-off-by: Jack Wang <[email protected]>
+Signed-off-by: Greg Kroah-Hartman <[email protected]>
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ Documentation/virtual/kvm/cpuid.txt  | 4 ++++
+ Documentation/virtual/kvm/msr.txt    | 3 ++-
+ arch/x86/include/uapi/asm/kvm_para.h | 1 +
+ arch/x86/kernel/kvm.c                | 8 ++++----
+ arch/x86/kvm/cpuid.c                 | 3 ++-
+ 5 files changed, 13 insertions(+), 6 deletions(-)
+
+diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
+index 3c65feb83010..a81c97a4b4a5 100644
+--- a/Documentation/virtual/kvm/cpuid.txt
++++ b/Documentation/virtual/kvm/cpuid.txt
+@@ -54,6 +54,10 @@ KVM_FEATURE_PV_UNHALT              ||     7 || guest checks this feature bit
+                                    ||       || before enabling paravirtualized
+                                    ||       || spinlock support.
+ ------------------------------------------------------------------------------
++KVM_FEATURE_ASYNC_PF_VMEXIT        ||    10 || paravirtualized async PF VM exit
++                                   ||       || can be enabled by setting bit 2
++                                   ||       || when writing to msr 0x4b564d02
++------------------------------------------------------------------------------
+ KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
+                                    ||       || per-cpu warps are expected in
+                                    ||       || kvmclock.
+diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
+index 1ebecc115dc6..f3f0d57ced8e 100644
+--- a/Documentation/virtual/kvm/msr.txt
++++ b/Documentation/virtual/kvm/msr.txt
+@@ -170,7 +170,8 @@ MSR_KVM_ASYNC_PF_EN: 0x4b564d02
+       when asynchronous page faults are enabled on the vcpu 0 when
+       disabled. Bit 1 is 1 if asynchronous page faults can be injected
+       when vcpu is in cpl == 0. Bit 2 is 1 if asynchronous page faults
+-      are delivered to L1 as #PF vmexits.
++      are delivered to L1 as #PF vmexits.  Bit 2 can be set only if
++      KVM_FEATURE_ASYNC_PF_VMEXIT is present in CPUID.
+ 
+       First 4 byte of 64 byte memory location will be written to by
+       the hypervisor at the time of asynchronous page fault (APF)
+diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
+index 09cc06483bed..989db885de97 100644
+--- a/arch/x86/include/uapi/asm/kvm_para.h
++++ b/arch/x86/include/uapi/asm/kvm_para.h
+@@ -25,6 +25,7 @@
+ #define KVM_FEATURE_STEAL_TIME                5
+ #define KVM_FEATURE_PV_EOI            6
+ #define KVM_FEATURE_PV_UNHALT         7
++#define KVM_FEATURE_ASYNC_PF_VMEXIT   10
+ 
+ /* The last 8 bits are used to indicate how to interpret the flags field
+  * in pvclock structure. If no bits are set, all flags are ignored.
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index b40ffbf156c1..0a93e83b774a 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -341,10 +341,10 @@ static void kvm_guest_cpu_init(void)
+ #endif
+               pa |= KVM_ASYNC_PF_ENABLED;
+ 
+-              /* Async page fault support for L1 hypervisor is optional */
+-              if (wrmsr_safe(MSR_KVM_ASYNC_PF_EN,
+-                      (pa | KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT) & 0xffffffff, pa >> 32) < 0)
+-                      wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
++              if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF_VMEXIT))
++                      pa |= KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT;
++
++              wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
+               __this_cpu_write(apf_reason.enabled, 1);
+               printk(KERN_INFO"KVM setup async PF for cpu %d\n",
+                      smp_processor_id());
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 13f5d4217e4f..4f544f2a7b06 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -597,7 +597,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+                            (1 << KVM_FEATURE_ASYNC_PF) |
+                            (1 << KVM_FEATURE_PV_EOI) |
+                            (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
+-                           (1 << KVM_FEATURE_PV_UNHALT);
++                           (1 << KVM_FEATURE_PV_UNHALT) |
++                           (1 << KVM_FEATURE_ASYNC_PF_VMEXIT);
+ 
+               if (sched_info_on())
+                       entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
+-- 
+2.16.2
+
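
A minimal userspace C sketch of the guest-side logic this patch introduces: the "deliver async PF as #PF vmexit" bit is added to the enable value only when the hypervisor advertises the matching CPUID feature bit. The constants mirror the patch, but has_feature() and write_async_pf_msr() are stubs invented here; this is illustrative only, not the kernel code.

/*
 * Minimal userspace sketch (not the kernel code) of the guest-side logic
 * described above: only set the "deliver async PF as #PF vmexit" bit if the
 * hypervisor advertised the matching CPUID feature; otherwise leave it clear
 * so the MSR write does not fail on old hypervisors that reserve the bit.
 */
#include <stdint.h>
#include <stdio.h>

#define KVM_FEATURE_ASYNC_PF_VMEXIT        10   /* CPUID feature bit number */
#define KVM_ASYNC_PF_ENABLED               (1ULL << 0)
#define KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT (1ULL << 2)

/* Stub: a real guest reads the KVM feature leaf via CPUID. */
static int has_feature(uint32_t feature_leaf, unsigned int bit)
{
        return (feature_leaf >> bit) & 1;
}

/* Stub: a real guest executes WRMSR on MSR_KVM_ASYNC_PF_EN (0x4b564d02). */
static void write_async_pf_msr(uint64_t value)
{
        printf("MSR_KVM_ASYNC_PF_EN <- 0x%llx\n", (unsigned long long)value);
}

static void enable_async_pf(uint64_t pa, uint32_t feature_leaf)
{
        pa |= KVM_ASYNC_PF_ENABLED;

        /* Bit 2 is reserved on old hypervisors, so set it only when the
         * KVM_FEATURE_ASYNC_PF_VMEXIT feature bit is advertised. */
        if (has_feature(feature_leaf, KVM_FEATURE_ASYNC_PF_VMEXIT))
                pa |= KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT;

        write_async_pf_msr(pa);
}

int main(void)
{
        enable_async_pf(0x1000, 0);                                 /* old hypervisor */
        enable_async_pf(0x1000, 1u << KVM_FEATURE_ASYNC_PF_VMEXIT); /* new hypervisor */
        return 0;
}

Compared with the previous "try the bit, fall back on failure" approach removed in kvm.c, gating on the advertised feature keeps the MSR contents identical before and after migration to an older hypervisor.
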
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.kernel.org/4.15.9-012-Linux-4.15.9.patch new/patches.kernel.org/4.15.9-012-Linux-4.15.9.patch
--- old/patches.kernel.org/4.15.9-012-Linux-4.15.9.patch        1970-01-01 01:00:00.000000000 +0100
+++ new/patches.kernel.org/4.15.9-012-Linux-4.15.9.patch        2018-03-11 23:30:25.000000000 +0100
@@ -0,0 +1,28 @@
+From: Greg Kroah-Hartman <[email protected]>
+Date: Sun, 11 Mar 2018 16:25:15 +0100
+Subject: [PATCH] Linux 4.15.9
+References: bnc#1012628
+Patch-mainline: 4.15.9
+Git-commit: 3eae9e93d49241dfb30be1d706b68d056b1ad29c
+
+Signed-off-by: Jiri Slaby <[email protected]>
+---
+ Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile b/Makefile
+index eb18d200a603..0420f9a0c70f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+ 
+-- 
+2.16.2
+

++++++ patches.suse.tar.bz2 ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.suse/0002-x86-speculation-Add-inlines-to-control-Indirect-Bran.patch new/patches.suse/0002-x86-speculation-Add-inlines-to-control-Indirect-Bran.patch
--- old/patches.suse/0002-x86-speculation-Add-inlines-to-control-Indirect-Bran.patch    2018-03-07 16:09:53.000000000 +0100
+++ new/patches.suse/0002-x86-speculation-Add-inlines-to-control-Indirect-Bran.patch    2018-03-11 23:31:16.000000000 +0100
@@ -28,7 +28,7 @@
 
 --- a/arch/x86/include/asm/nospec-branch.h
 +++ b/arch/x86/include/asm/nospec-branch.h
-@@ -174,5 +174,41 @@ static inline void indirect_branch_predi
+@@ -174,6 +174,42 @@ static inline void indirect_branch_predi
                     : "eax", "ecx", "edx", "memory");
  }
  
@@ -69,4 +69,5 @@
 +}
 +
  #endif /* __ASSEMBLY__ */
- #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+ 
+ /*
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/patches.suse/Documentation-sphinx-Fix-Directive-import-error.patch new/patches.suse/Documentation-sphinx-Fix-Directive-import-error.patch
--- old/patches.suse/Documentation-sphinx-Fix-Directive-import-error.patch      2018-03-07 16:09:53.000000000 +0100
+++ new/patches.suse/Documentation-sphinx-Fix-Directive-import-error.patch      2018-03-11 23:31:16.000000000 +0100
@@ -1,34 +1,42 @@
-From: Takashi Iwai <[email protected]>
+From ff690eeed804f112242f9a0614eafdf559f9276a Mon Sep 17 00:00:00 2001
+From: Matthew Wilcox <[email protected]>
+Date: Fri, 2 Mar 2018 10:40:14 -0800
 Subject: [PATCH] Documentation/sphinx: Fix Directive import error
-Date: Fri, 02 Mar 2018 12:49:03 +0100
-Message-id: <[email protected]>
-Patch-mainline: Submitted, linux-doc ML
+Patch-mainline: v4.16-rc5
+Git-commit: ff690eeed804f112242f9a0614eafdf559f9276a
 References: bsc#1083694
 
-The sphinx.util.compat Directive stuff was deprecated in the recent
-Sphinx version, and now we get a build error.
-
-Let's take a fallback to the newer one, from docutils.parsers.rst.
+Sphinx 1.7 removed sphinx.util.compat.Directive so people
+who have upgraded cannot build the documentation.  Switch to
+docutils.parsers.rst.Directive which has been available since
+docutils 0.5 released in 2009.
 
 Bugzilla: https://bugzilla.opensuse.org/show_bug.cgi?id=1083694
-Signed-off-by: Takashi Iwai <[email protected]>
+Co-developed-by: Takashi Iwai <[email protected]>
+Acked-by: Jani Nikula <[email protected]>
+Cc: [email protected]
+Signed-off-by: Matthew Wilcox <[email protected]>
+Signed-off-by: Jonathan Corbet <[email protected]>
+Acked-by: Takashi Iwai <[email protected]>
 
 ---
----
- Documentation/sphinx/kerneldoc.py |    5 ++++-
- 1 file changed, 4 insertions(+), 1 deletion(-)
+ Documentation/sphinx/kerneldoc.py | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
 
+diff --git a/Documentation/sphinx/kerneldoc.py b/Documentation/sphinx/kerneldoc.py
+index 39aa9e8697cc..fbedcc39460b 100644
 --- a/Documentation/sphinx/kerneldoc.py
 +++ b/Documentation/sphinx/kerneldoc.py
-@@ -37,7 +37,10 @@ import glob
+@@ -36,8 +36,7 @@ import glob
+ 
  from docutils import nodes, statemachine
  from docutils.statemachine import ViewList
- from docutils.parsers.rst import directives
+-from docutils.parsers.rst import directives
 -from sphinx.util.compat import Directive
-+try:
-+    from sphinx.util.compat import Directive
-+except ImportError:
-+    from docutils.parsers.rst import directives, Directive
++from docutils.parsers.rst import directives, Directive
  from sphinx.ext.autodoc import AutodocReporter
  
  __version__  = '1.0'
+-- 
+2.16.2
+

++++++ series.conf ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:33.427146848 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:33.431146704 +0100
@@ -772,6 +772,18 @@
        patches.kernel.org/4.15.8-122-md-only-allow-remove_and_add_spares-when-no-sy.patch
        patches.kernel.org/4.15.8-123-platform-x86-dell-laptop-fix-kbd_get_state-s-r.patch
        patches.kernel.org/4.15.8-124-Linux-4.15.8.patch
+       patches.kernel.org/4.15.9-001-bpf-fix-mlock-precharge-on-arraymaps.patch
+       patches.kernel.org/4.15.9-002-bpf-fix-memory-leak-in-lpm_trie-map_free-callb.patch
+       patches.kernel.org/4.15.9-003-bpf-fix-rcu-lockdep-warning-for-lpm_trie-map_f.patch
+       patches.kernel.org/4.15.9-004-bpf-x64-implement-retpoline-for-tail-call.patch
+       patches.kernel.org/4.15.9-005-bpf-arm64-fix-out-of-bounds-access-in-tail-cal.patch
+       patches.kernel.org/4.15.9-006-bpf-add-schedule-points-in-percpu-arrays-manag.patch
+       patches.kernel.org/4.15.9-007-bpf-allow-xadd-only-on-aligned-memory.patch
+       patches.kernel.org/4.15.9-008-bpf-ppc64-fix-out-of-bounds-access-in-tail-cal.patch
+       patches.kernel.org/4.15.9-009-scsi-mpt3sas-fix-oops-in-error-handlers-after-.patch
+       patches.kernel.org/4.15.9-010-scsi-mpt3sas-wait-for-and-flush-running-comman.patch
+       patches.kernel.org/4.15.9-011-KVM-x86-fix-backward-migration-with-async_PF.patch
+       patches.kernel.org/4.15.9-012-Linux-4.15.9.patch
 
        ########################################################
        # Build fixes that apply to the vanilla kernel too.

++++++ source-timestamp ++++++
--- /var/tmp/diff_new_pack.hZ1czk/_old  2018-03-16 10:36:33.479144976 +0100
+++ /var/tmp/diff_new_pack.hZ1czk/_new  2018-03-16 10:36:33.483144832 +0100
@@ -1,3 +1,3 @@
-2018-03-09 20:01:21 +0100
-GIT Revision: 67f0889645bebd7d1275c3815c3680fdde20f520
+2018-03-11 23:31:16 +0100
+GIT Revision: 2c1b8ee0db3a5bf9a7e1b357a479171911a603cb
 GIT Branch: stable

