This is a note to let you know that I've just added the patch titled
bnx2x: fix panic when TX ring is full
to the 3.4-stable tree, which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
bnx2x-fix-panic-when-tx-ring-is-full.patch
and it can be found in the queue-3.4 subdirectory.
If you, or anyone else, feel it should not be added to the stable tree,
please let <[email protected]> know about it.
From c2ef397e0290d9b56a4568c0f47d0cf9626bdb19 Mon Sep 17 00:00:00 2001
From: Eric Dumazet <[email protected]>
Date: Wed, 13 Jun 2012 09:45:16 +0000
Subject: bnx2x: fix panic when TX ring is full
From: Eric Dumazet <[email protected]>
[ Upstream commit bc14786a100cc6a81cd060e8031ec481241b418c ]
There is an off-by-one error in the minimal number of BDs in
bnx2x_start_xmit() and bnx2x_tx_int() before stopping/resuming the tx queue.
A full-size GSO packet, with data included in skb->head, really needs
(MAX_SKB_FRAGS + 4) BDs, because of bnx2x_tx_split().
This error triggers if BQL is disabled and heavy TCP transmit traffic
occurs.
bnx2x_tx_split() definitely can be called, so remove the wrong comment.
Reported-by: Tomas Hruby <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Eilon Greenstein <[email protected]>
Cc: Yaniv Rosner <[email protected]>
Cc: Merav Sicron <[email protected]>
Cc: Tom Herbert <[email protected]>
Cc: Robert Evans <[email protected]>
Cc: Willem de Bruijn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -191,7 +191,7 @@ int bnx2x_tx_int(struct bnx2x *bp, struc
if ((netif_tx_queue_stopped(txq)) &&
(bp->state == BNX2X_STATE_OPEN) &&
- (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3))
+ (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4))
netif_tx_wake_queue(txq);
__netif_tx_unlock(txq);
@@ -2349,8 +2349,6 @@ int bnx2x_poll(struct napi_struct *napi,
/* we split the first BD into headers and data BDs
* to ease the pain of our fellow microcode engineers
* we use one mapping for both BDs
- * So far this has only been observed to happen
- * in Other Operating Systems(TM)
*/
static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
struct bnx2x_fp_txdata *txdata,
@@ -3002,7 +3000,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_b
txdata->tx_bd_prod += nbd;
- if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 3)) {
+ if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 4)) {
netif_tx_stop_queue(txq);
/* paired memory barrier is in bnx2x_tx_int(), we have to keep
@@ -3011,7 +3009,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_b
smp_mb();
fp->eth_q_stats.driver_xoff++;
- if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3)
+ if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4)
netif_tx_wake_queue(txq);
}
txdata->tx_pkt++;
Patches currently in stable-queue which might be from [email protected] are
queue-3.4/be2net-fix-a-race-in-be_xmit.patch
queue-3.4/drop_monitor-dont-sleep-in-atomic-context.patch
queue-3.4/lpc_eth-fix-tx-completion.patch
queue-3.4/bonding-fix-corrupted-queue_mapping.patch
queue-3.4/bnx2x-fix-checksum-validation.patch
queue-3.4/net-neighbour-fix-neigh_dump_info.patch
queue-3.4/l2tp-fix-a-race-in-l2tp_ip_sendmsg.patch
queue-3.4/bnx2x-fix-panic-when-tx-ring-is-full.patch
queue-3.4/lpc_eth-add-missing-ndo_change_mtu.patch
queue-3.4/bql-avoid-possible-inconsistent-calculation.patch
queue-3.4/net-remove-skb_orphan_try.patch
queue-3.4/bql-avoid-unneeded-limit-decrement.patch
queue-3.4/netpoll-fix-netpoll_send_udp-bugs.patch
queue-3.4/dummy-fix-rcu_sched-self-detected-stalls.patch
queue-3.4/net-l2tp_eth-fix-kernel-panic-on-rmmod-l2tp_eth.patch
queue-3.4/bql-fix-posdiff-to-integer-overflow-aware.patch
queue-3.4/inetpeer-fix-a-race-in-inetpeer_gc_worker.patch