On initial driver load, alloc_etherdev_mqs is called with whatever max
queue values are provided by the control plane. However, if the driver
is loaded on a system where num_online_cpus() returns fewer than the max
queues, the netdev will think there are more queues than are actually
available. Only num_online_cpus() queues will be allocated, but
skb_get_queue_mapping(skb) could return an index beyond the range of
allocated queues. Consequently, the packet is silently dropped and it
appears as if TX is broken.

Set the real number of queues during open so the netdev knows how many
queues will be allocated.

v2:
- call idpf_set_real_num_queues in idpf_open. The previous version
  called idpf_set_real_num_queues in idpf_up_complete, but it is
  possible for up_complete to be called without holding the RTNL lock.
  If the user brings up the interface and then issues a reset, the
  init_task will call idpf_vport_open->idpf_up_complete. Since this is
  initiated by the driver, the RTNL lock is not taken.
- adjust title to reflect new changes.

Signed-off-by: Joshua Hay <[email protected]>
Fixes: 1c325aac10a8 ("idpf: configure resources for TX queues")
Reviewed-by: Madhu Chittim <[email protected]>
---
 drivers/net/ethernet/intel/idpf/idpf_lib.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
index 6df7f125ebde..9dc806411002 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -2159,8 +2159,13 @@ static int idpf_open(struct net_device *netdev)
        idpf_vport_ctrl_lock(netdev);
        vport = idpf_netdev_to_vport(netdev);
 
+       err = idpf_set_real_num_queues(vport);
+       if (err)
+               goto unlock;
+
        err = idpf_vport_open(vport);
 
+unlock:
        idpf_vport_ctrl_unlock(netdev);
 
        return err;
-- 
2.39.2