Hi,

> -----Original Message-----
> From: dev <dev-boun...@dpdk.org> On Behalf Of Michael Baum
> Sent: Tuesday, July 21, 2020 3:04 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <ma...@mellanox.com>; Slava Ovsiienko
> <viachesl...@mellanox.com>
> Subject: [dpdk-dev] [PATCH] net/mlx5: optimize critical section in device free
> 
> When the PMD releases a shared IB device context, it holds the
> mlx5_dev_ctx_list_mutex lock for the whole function so that no other
> process can insert a device into the global list while a device is
> being removed from it.
> However, once the device has been removed from the list, the function
> no longer needs to synchronize with other processes, even if not all
> of the device's resources have been released yet, so the lock can be
> dropped.
> 
> Nevertheless, the PMD keeps the lock held and performs a number of
> further operations, some of which may sleep and take a long time.
> To improve this, shorten the lock hold time to the minimum necessary.
> 
> Signed-off-by: Michael Baum <michae...@mellanox.com>
> Acked-by: Matan Azrad <ma...@mellanox.com>
> ---
>  drivers/net/mlx5/mlx5.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
> index 846398d..70338e5 100644
> --- a/drivers/net/mlx5/mlx5.c
> +++ b/drivers/net/mlx5/mlx5.c
> @@ -939,6 +939,7 @@ struct mlx5_dev_ctx_shared *
>       mlx5_mr_release_cache(&sh->share_cache);
>       /* Remove context from the global device list. */
>       LIST_REMOVE(sh, next);
> +     pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
>       /*
>        *  Ensure there is no async event handler installed.
>        *  Only primary process handles async device events.
> @@ -968,6 +969,7 @@ struct mlx5_dev_ctx_shared *
>               mlx5_flow_id_pool_release(sh->flow_id_pool);
>       pthread_mutex_destroy(&sh->txpp.mutex);
>       mlx5_free(sh);
> +     return;
>  exit:
>       pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
>  }
> --
> 1.8.3.1

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
