From: 'Simon Horman' <[email protected]>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
Full review at: https://sashiko.dev
---
net: mana: Query device capabilities and configure MSI-X sharing for EQs

When querying the device, this adjusts the max number of queues to allow
dedicated MSI-X vectors for each vPort, clamping to no less than
MANA_DEF_NUM_QUEUES. MSI-X sharing among vPorts is disabled by default
and enabled only when there are not enough vectors. It also renames
mana_query_device_cfg() to mana_gd_query_device_cfg().

> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c

[ ... ]

> +     /*
> +      * Adjust gc->max_num_queues returned from the SOC to allow dedicated
> +      * MSIx for each vPort. Clamp to no less than MANA_DEF_NUM_QUEUES.
> +      */
> +     max_num_queues = (gc->num_msix_usable - 1) / num_ports;
> +     max_num_queues = roundup_pow_of_two(max(max_num_queues, 1U));

Does rounding up here inflate max_num_queues and unnecessarily force MSI-X
sharing?

For example, if there are 63 usable MSIs and 2 ports, the division yields 31.
Rounding up gives 32. The subsequent check below:

> +     /*
> +      * Use dedicated MSIx for EQs whenever possible, use MSIx sharing for
> +      * Ethernet EQs when (max_num_queues * num_ports > num_msix_usable - 1)
> +      */
> +     max_num_queues = min(gc->max_num_queues, max_num_queues);
> +     if (max_num_queues * num_ports > gc->num_msix_usable - 1)
> +             gc->msi_sharing = true;

would then evaluate to true (32 * 2 = 64 > 62, where 62 is num_msix_usable - 1)
and force the driver into MSI-X shared mode. This seems to contradict the
intent to use dedicated MSI-X whenever possible.

Would it be better to use rounddown_pow_of_two() instead to ensure the
calculated queues fit within the available dedicated vectors?

[ ... ]

> @@ -1968,20 +2008,30 @@ static int mana_gd_setup(struct pci_dev *pdev)
>       if (err)
>               goto destroy_hwc;
>  
> -     err = mana_gd_query_max_resources(pdev);
> +     err = mana_gd_detect_devices(pdev);
>       if (err)
>               goto destroy_hwc;
>  
> -     err = mana_gd_setup_remaining_irqs(pdev);
> -     if (err) {
> -             dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
> -             goto destroy_hwc;
> -     }
> -
> -     err = mana_gd_detect_devices(pdev);
> +     err = mana_gd_query_max_resources(pdev);
>       if (err)
>               goto destroy_hwc;
>  
> +     if (!gc->msi_sharing) {
> +             gc->msi_bitmap = bitmap_zalloc(gc->num_msix_usable, GFP_KERNEL);
> +             if (!gc->msi_bitmap) {
> +                     err = -ENOMEM;
> +                     goto destroy_hwc;
> +             }
> +             /* Set bit for HWC */
> +             set_bit(0, gc->msi_bitmap);
> +     } else {
> +             err = mana_gd_setup_remaining_irqs(pdev);
> +             if (err) {
> +                     dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
> +                     goto destroy_hwc;
> +             }
> +     }

If msi_sharing is disabled, we allocate the msi_bitmap but skip calling
mana_gd_setup_remaining_irqs().

Since mana_gd_setup_hwc_irqs() only allocates a single vector for the hardware
channel when dynamic allocation is supported, does this leave the device
without interrupts for its Ethernet queues?

If so, this could lead to queue-creation failures when the driver attempts to
map uninitialized vectors. I notice this is addressed by a later patch in the
series ("net: mana: Allocate interrupt context for each EQ when creating
vPort"), but doesn't deferring the fix to that patch break bisectability?
