[PATCH 4.10 30/62] netpoll: Check for skb->queue_mapping

2017-05-01 Thread Greg Kroah-Hartman
4.10-stable review patch.  If anyone has any objections, please let me know.

--

From: Tushar Dave 


[ Upstream commit c70b17b775edb21280e9de7531acf6db3b365274 ]

Reducing real_num_tx_queues needs to be kept in sync with the skb
queue_mapping, otherwise skbs with a queue_mapping greater than or equal
to real_num_tx_queues can be handed to the underlying driver and can
result in a kernel panic.

One such scenario is running netconsole and enabling a VF on the same
device, or running netconsole and changing the number of tx queues via
ethtool on the same device.

e.g.
Unable to handle kernel NULL pointer dereference
tsk->{mm,active_mm}->context = 1525
tsk->{mm,active_mm}->pgd = fff800130ff9a000
  \|/  \|/
  "@'/ .. \`@"
  /_| \__/ |_\
 \__U_/
kworker/48:1(475): Oops [#1]
CPU: 48 PID: 475 Comm: kworker/48:1 Tainted: G   OE
4.11.0-rc3-davem-net+ #7
Workqueue: events queue_process
task: fff80013113299c0 task.stack: fff800131132c000
TSTATE: 004480e01600 TPC: 103f9e3c TNPC: 103f9e40 Y:
Tainted: G   OE
TPC: 
g0:  g1: 3fff g2:  g3:
0001
g4: fff80013113299c0 g5: fff8001fa6808000 g6: fff800131132c000 g7:
00c0
o0: fff8001fa760c460 o1: fff8001311329a50 o2: fff8001fa7607504 o3:
0003
o4: fff8001f96e63a40 o5: fff8001311d77ec0 sp: fff800131132f0e1 ret_pc:
0049ed94
RPC: 
l0:  l1: 0800 l2:  l3:

l4: 000b2aa30e34b10d l5:  l6:  l7:
fff8001fa7605028
i0: fff80013111a8a00 i1: fff80013155a0780 i2:  i3:

i4:  i5: 0010 i6: fff800131132f1a1 i7:
103fa4b0
I7: 
Call Trace:
 [103fa4b0] ixgbe_xmit_frame+0x30/0xa0 [ixgbe]
 [00998c74] netpoll_start_xmit+0xf4/0x200
 [00998e10] queue_process+0x90/0x160
 [00485fa8] process_one_work+0x188/0x480
 [00486410] worker_thread+0x170/0x4c0
 [0048c6b8] kthread+0xd8/0x120
 [00406064] ret_from_fork+0x1c/0x2c
 []   (null)
Disabling lock debugging due to kernel taint
Caller[103fa4b0]: ixgbe_xmit_frame+0x30/0xa0 [ixgbe]
Caller[00998c74]: netpoll_start_xmit+0xf4/0x200
Caller[00998e10]: queue_process+0x90/0x160
Caller[00485fa8]: process_one_work+0x188/0x480
Caller[00486410]: worker_thread+0x170/0x4c0
Caller[0048c6b8]: kthread+0xd8/0x120
Caller[00406064]: ret_from_fork+0x1c/0x2c
Caller[]:   (null)

Signed-off-by: Tushar Dave 
Signed-off-by: David S. Miller 
Signed-off-by: Greg Kroah-Hartman 
---
 net/core/netpoll.c |   10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -105,15 +105,21 @@ static void queue_process(struct work_st
 	while ((skb = skb_dequeue(&npinfo->txq))) {
 		struct net_device *dev = skb->dev;
 		struct netdev_queue *txq;
+		unsigned int q_index;
 
 		if (!netif_device_present(dev) || !netif_running(dev)) {
 			kfree_skb(skb);
 			continue;
 		}
 
-		txq = skb_get_tx_queue(dev, skb);
-
 		local_irq_save(flags);
+		/* check if skb->queue_mapping is still valid */
+		q_index = skb_get_queue_mapping(skb);
+		if (unlikely(q_index >= dev->real_num_tx_queues)) {
+			q_index = q_index % dev->real_num_tx_queues;
+			skb_set_queue_mapping(skb, q_index);
+		}
+		txq = netdev_get_tx_queue(dev, q_index);
 		HARD_TX_LOCK(dev, txq, smp_processor_id());
 		if (netif_xmit_frozen_or_stopped(txq) ||
 		    netpoll_start_xmit(skb, dev, txq) != NETDEV_TX_OK) {



