Hi,

> -----Original Message-----
> From: Yajun Wu <yaj...@nvidia.com>
> Sent: Monday, February 14, 2022 8:03 AM
> To: Ori Kam <or...@nvidia.com>; Slava Ovsiienko
> <viachesl...@nvidia.com>; Matan Azrad <ma...@nvidia.com>
> Cc: dev@dpdk.org; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <tho...@monjalon.net>; Raslan Darawsheh <rasl...@nvidia.com>; Roni
> Bar Yanai <ron...@nvidia.com>; sta...@dpdk.org
> Subject: [PATCH] common/mlx5: fix QP ack timeout configuration
> 
> The vDPA driver creates two QPs (each queue pair includes one send queue
> and one receive queue) per virtio queue to get traffic events from the
> NIC to SW. The two QPs (called FW QP and SW QP) are created as loopback
> QPs, and the FW QP's SQ is connected to the SW QP's RQ internally.
> 
> When a packet is received or sent out, HW sends a WQE via the FW QP's
> SQ, and SW then gets a CQE from the CQ of the SW QP.
> 
> At large scale and under heavy traffic, the SQ's request may fail to
> get an ACK from the RQ HW because the HW is busy. The SQ retries the
> request up to qpc.retry_count times, each time waiting
> 4.096 us * 2^(ack_timeout) for the response. If it still gets no
> response from the RQ's HW, the SQ goes to an error state.
> 
> An ack_timeout of 16 is an empirically chosen value; it should be
> neither too high nor too low. Too high makes the QP wait too long when
> a packet is dropped; too low makes the QP go to an error state
> (retry exceeded) easily (see the worked calculation after the quoted
> message).
> 
> Fixes: 15c3807e86a ("common/mlx5: support DevX QP operations")
> Cc: sta...@dpdk.org
> 
> Signed-off-by: Yajun Wu <yaj...@nvidia.com>
> Acked-by: Matan Azrad <ma...@nvidia.com>
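
As a sanity check on the numbers above, here is a minimal standalone C
sketch of the wait the commit message describes,
4.096 us * 2^(ack_timeout) per retry. It is illustrative only and not
part of the patch or the mlx5 driver; retry_count = 7 is an assumed
example value, and qp_ack_wait_us is a hypothetical helper.

    #include <stdio.h>

    /* Per-retry ACK wait, per the commit message:
     * 4.096 us * 2^ack_timeout.
     */
    static double
    qp_ack_wait_us(unsigned int ack_timeout)
    {
        return 4.096 * (double)(1ULL << ack_timeout);
    }

    int
    main(void)
    {
        unsigned int ack_timeout = 16; /* value chosen by the patch */
        unsigned int retry_count = 7;  /* assumed example, not from the patch */
        double per_retry_us = qp_ack_wait_us(ack_timeout);

        /* 4.096 us * 2^16 = 268435.456 us, i.e. ~268 ms per retry. */
        printf("per-retry wait: %.3f us (~%.0f ms)\n",
               per_retry_us, per_retry_us / 1000.0);
        /* Worst case before the SQ enters the error state. */
        printf("worst case: ~%.1f s over %u retries\n",
               per_retry_us * retry_count / 1e6, retry_count);
        return 0;
    }

With ack_timeout = 16, each retry waits about 268 ms, so even several
retries resolve within a couple of seconds; a much larger value would
stall the QP far longer on a dropped packet, while a much smaller one
would exhaust the retries too quickly.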

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
