In the mlx5 PMD, the MLX5_IPOOL_JUMP ipool configuration is used to initialize the ipool containing either:
- flow table entry, when the DV flow engine is chosen, or
- group table entry, when the HW steering flow engine is chosen.

The default configuration for the MLX5_IPOOL_JUMP ipool specified the entry
size as the size of the mlx5_flow_tbl_data_entry struct, which is used with
the DV flow engine. This could lead to memory corruption whenever the
mlx5_flow_group struct (used with the HW steering flow engine) was bigger
than mlx5_flow_tbl_data_entry.

This patch fixes that: the entry size for the MLX5_IPOOL_JUMP ipool is now
chosen dynamically, based on the device configuration.

Fixes: d1559d66ed2d ("net/mlx5: add table management")
Cc: suanmi...@nvidia.com
Cc: sta...@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnow...@nvidia.com>
---
 drivers/net/mlx5/mlx5.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b373306f98..7c79cbb7be 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -241,7 +241,12 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.type = "mlx5_port_id_ipool",
 	},
 	[MLX5_IPOOL_JUMP] = {
-		.size = sizeof(struct mlx5_flow_tbl_data_entry),
+		/*
+		 * MLX5_IPOOL_JUMP ipool entry size depends on selected flow engine.
+		 * When HW steering is enabled mlx5_flow_group struct is used.
+		 * Otherwise mlx5_flow_tbl_data_entry struct is used.
+		 */
+		.size = 0,
 		.trunk_size = 64,
 		.grow_trunk = 3,
 		.grow_shift = 2,
@@ -904,6 +909,14 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh)
 				sizeof(struct mlx5_flow_handle) :
 				MLX5_FLOW_HANDLE_VERBS_SIZE;
 			break;
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
+		/* Set MLX5_IPOOL_JUMP ipool entry size depending on selected flow engine. */
+		case MLX5_IPOOL_JUMP:
+			cfg.size = sh->config.dv_flow_en == 2 ?
+				   sizeof(struct mlx5_flow_group) :
+				   sizeof(struct mlx5_flow_tbl_data_entry);
+			break;
+#endif
 		}
 		if (sh->config.reclaim_mode) {
 			cfg.release_mem_en = 1;
-- 
2.25.1
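
For reviewers who want a standalone illustration of the bug class: below is a
minimal C sketch, not DPDK code, showing how a pool whose slot size is fixed
from the smaller struct gets corrupted when the larger struct is stored in its
slots, and how picking the size at runtime avoids it. The structs, the pool
type, and pool_get() are all hypothetical stand-ins for the mlx5 indexed pool.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for mlx5_flow_tbl_data_entry / mlx5_flow_group. */
struct small_entry { uint64_t a; };                          /*  8 bytes */
struct big_entry   { uint64_t a; uint64_t b; uint64_t c; };  /* 24 bytes */

/* A trivial fixed-slot pool; entry_size plays the role of cfg.size. */
struct pool {
	size_t entry_size;
	char slots[4 * sizeof(struct small_entry)]; /* sized for small_entry */
};

static void *pool_get(struct pool *p, unsigned int idx)
{
	return &p->slots[idx * p->entry_size];
}

int main(void)
{
	/* Bug: slot size taken from the smaller struct. */
	struct pool p = { .entry_size = sizeof(struct small_entry) };
	struct big_entry *e0 = pool_get(&p, 0);
	struct small_entry *e1 = pool_get(&p, 1);

	e1->a = 0x1111;
	memset(e0, 0xff, sizeof(*e0)); /* writing entry 0 overruns into slot 1 */
	printf("entry 1 after writing entry 0: 0x%llx\n",
	       (unsigned long long)e1->a); /* no longer 0x1111: corrupted */

	/* Fix, analogous to the patch: choose the slot size at runtime. */
	int hw_steering = 1; /* stands in for sh->config.dv_flow_en == 2 */
	p.entry_size = hw_steering ? sizeof(struct big_entry) :
				     sizeof(struct small_entry);
	return 0;
}

With the runtime-selected size, consecutive slots no longer overlap for the
larger struct, which mirrors the cfg.size selection the patch adds in
mlx5_flow_ipool_create().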