On Tue, Apr 08, 2025 at 02:47:52PM +0200, Larysa Zaremba wrote:
> From: Phani R Burra <[email protected]>
> 
> All send control queue messages are allocated and freed in libeth itself
> and tracked with unique transaction (Xn) IDs until a response arrives or
> the transaction times out. Responses can be received out of order, so
> transactions are stored in an array and tracked through a bitmap.
> 
> Pre-allocated DMA memory is used where possible, which reduces driver
> overhead in handling memory allocation/freeing and message timeouts.
> 
> Reviewed-by: Maciej Fijalkowski <[email protected]>
> Signed-off-by: Phani R Burra <[email protected]>
> Co-developed-by: Victor Raj <[email protected]>
> Signed-off-by: Victor Raj <[email protected]>
> Co-developed-by: Pavan Kumar Linga <[email protected]>
> Signed-off-by: Pavan Kumar Linga <[email protected]>
> Co-developed-by: Larysa Zaremba <[email protected]>
> Signed-off-by: Larysa Zaremba <[email protected]>
> ---
>  drivers/net/ethernet/intel/libeth/controlq.c | 578 +++++++++++++++++++
>  include/net/libeth/controlq.h                | 169 ++++++
>  2 files changed, 747 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/libeth/controlq.c b/drivers/net/ethernet/intel/libeth/controlq.c

...

> +/**
> + * libeth_ctlq_xn_deinit - deallocate and free the transaction manager resources
> + * @xnm: pointer to the transaction manager
> + * @ctx: controlq context structure
> + *
> + * All Rx processing must be stopped beforehand.
> + */
> +void libeth_ctlq_xn_deinit(struct libeth_ctlq_xn_manager *xnm,
> +                        struct libeth_ctlq_ctx *ctx)
> +{
> +     bool must_wait = false;
> +     u32 i;
> +
> +     /* Should be no new clear bits after this */
> +     spin_lock(&xnm->free_xns_bm_lock);
> +             xnm->shutdown = true;

nit: the "xnm->shutdown = true;" line above has one indentation level too
many; it should align with the spin_lock() call.

     Flagged by Smatch.
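
Concretely, the same code with the extra tab dropped:

```diff
 	spin_lock(&xnm->free_xns_bm_lock);
-		xnm->shutdown = true;
+	xnm->shutdown = true;
```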

> +
> +     for_each_clear_bit(i, xnm->free_xns_bm, LIBETH_CTLQ_MAX_XN_ENTRIES) {
> +             struct libeth_ctlq_xn *xn = &xnm->ring[i];
> +
> +             spin_lock(&xn->xn_lock);
> +
> +             if (xn->state == LIBETH_CTLQ_XN_WAITING ||
> +                 xn->state == LIBETH_CTLQ_XN_IDLE) {
> +                     complete(&xn->cmd_completion_event);
> +                     must_wait = true;
> +             } else if (xn->state == LIBETH_CTLQ_XN_ASYNC) {
> +                     __libeth_ctlq_xn_push_free(xnm, xn);
> +             }
> +
> +             spin_unlock(&xn->xn_lock);
> +     }
> +
> +     spin_unlock(&xnm->free_xns_bm_lock);
> +
> +     if (must_wait)
> +             wait_for_completion(&xnm->can_destroy);
> +
> +     libeth_ctlq_xn_deinit_dma(&ctx->mmio_info.pdev->dev, xnm,
> +                               LIBETH_CTLQ_MAX_XN_ENTRIES);
> +     kfree(xnm);
> +     libeth_ctlq_deinit(ctx);
> +}
> +EXPORT_SYMBOL_NS_GPL(libeth_ctlq_xn_deinit, "LIBETH_CP");

...
