From: Xin Long <lucien....@gmail.com>

[ Upstream commit bab9693a9a8c6dd19f670408ec1e78e12a320682 ]

A deadlock was triggered in the thunderx driver:

        CPU0                    CPU1
        ----                    ----
   [01] lock(&(&nic->rx_mode_wq_lock)->rlock);
                           [11] lock(&(&mc->mca_lock)->rlock);
                           [12] lock(&(&nic->rx_mode_wq_lock)->rlock);
   [02] <Interrupt> lock(&(&mc->mca_lock)->rlock);

The path for each is:

  [01] worker_thread() -> process_one_work() -> nicvf_set_rx_mode_task()
  [02] mld_ifc_timer_expire()
  [11] ipv6_add_dev() -> ipv6_dev_mc_inc() -> igmp6_group_added() ->
  [12] dev_mc_add() -> __dev_set_rx_mode() -> nicvf_set_rx_mode()

To fix it, BH needs to be disabled in [01], so that the timer in [02]
cannot fire until rx_mode_wq_lock is released. Do this by switching
from spin_lock() to spin_lock_bh().
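
A minimal sketch of that pattern in isolation (hypothetical names, not
the driver code): a lock that is also part of a dependency chain
reachable from softirq context must be taken with the _bh variant in
process context.

  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  static DEFINE_SPINLOCK(example_lock);  /* plays the role of rx_mode_wq_lock */

  /* Process context (workqueue). Taking the lock with the _bh variant
   * keeps softirqs (e.g. the MLD timer in [02]) off this CPU while the
   * lock is held, so the interrupt-side acquisition can never
   * interleave with [01].
   */
  static void example_work_handler(struct work_struct *work)
  {
          spin_lock_bh(&example_lock);
          /* ... snapshot the shared state ... */
          spin_unlock_bh(&example_lock);
  }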

Thanks to Paolo for helping with this.

v1->v2:
  - post to netdev.

Reported-by: Rafael P. <rparr...@redhat.com>
Tested-by: Dean Nelson <dnel...@redhat.com>
Fixes: 469998c861fa ("net: thunderx: prevent concurrent data re-writing by nicvf_set_rx_mode")
Signed-off-by: Xin Long <lucien....@gmail.com>
Signed-off-by: David S. Miller <da...@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 drivers/net/ethernet/cavium/thunder/nicvf_main.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -2047,11 +2047,11 @@ static void nicvf_set_rx_mode_task(struc
        /* Save message data locally to prevent them from
         * being overwritten by next ndo_set_rx_mode call().
         */
-       spin_lock(&nic->rx_mode_wq_lock);
+       spin_lock_bh(&nic->rx_mode_wq_lock);
        mode = vf_work->mode;
        mc = vf_work->mc;
        vf_work->mc = NULL;
-       spin_unlock(&nic->rx_mode_wq_lock);
+       spin_unlock_bh(&nic->rx_mode_wq_lock);
 
        __nicvf_set_rx_mode_task(mode, mc, nic);
 }

