On 4/16/21 11:29 PM, Lorenzo Bianconi wrote:

Took your patches for a test run with the AF_XDP sample xdpsock on an
i40e card, and the throughput degradation is between 2 and 6% depending
on the setup and the microbenchmark within xdpsock that is executed. And
this is without sending any multi-frame packets, just single-frame
ones. Tirtha made changes to the i40e driver to support this new
interface, so that is included in the measurements.

Thanks for working on it. Assuming the fragmented part is only initialized/accessed
if mb is set (so only for multi-frame packets), I would not expect any throughput
degradation in the single-frame scenario. Can you please share the i40e
support added by Tirtha?
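
To make that assumption concrete, here is a minimal user-space sketch of the
pattern I have in mind (hypothetical names, not the actual i40e or mvneta patch
code): all fragment metadata is only touched behind the mb check, so a
single-frame packet pays for one predictable branch on the fast path and
nothing else.

/* Hypothetical sketch, not driver code: gate all fragment handling behind mb. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct frag_info {                 /* hypothetical per-packet fragment metadata */
	size_t nr_frags;
	size_t frag_len[16];
};

struct pkt {                       /* hypothetical stand-in for an xdp_buff-like descriptor */
	void *data;
	size_t len;
	bool mb;                   /* set only when the packet spans multiple buffers */
	struct frag_info frags;
};

static void init_frags(struct pkt *p)
{
	/* slow path: only ever reached for multi-frame packets */
	p->frags.nr_frags = 0;
}

static void rx_one(struct pkt *p)
{
	/* fast path: fragment state is neither read nor written here */
	if (p->mb)                 /* the only extra cost for single-frame packets */
		init_frags(p);

	printf("len=%zu mb=%d\n", p->len, (int)p->mb);
}

int main(void)
{
	struct pkt single_frame = { .data = NULL, .len = 64,   .mb = false };
	struct pkt multi_frame  = { .data = NULL, .len = 9000, .mb = true  };

	rx_one(&single_frame);
	rx_one(&multi_frame);
	return 0;
}

Under that shape, the single-frame xdpsock runs should only pay for the branch
itself, which is why I would like to look at the actual i40e changes.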

Thanks Tirtha & Magnus for adding and testing mb support for i40e, and for
sharing those data points; a degradation of 2-6% when mb is not used would
definitely not be acceptable. It would be great to root-cause and debug this
further with Lorenzo; there really should be close to /zero/ additional
overhead, to avoid regressing existing performance-sensitive workloads like
load balancers, etc. once they upgrade their kernels/drivers.

What performance do you see with the mvneta card? How much are we
willing to pay for this feature when it is not being used, or can we in
some way selectively turn it on only when needed?

IIRC I did not see any noticeable throughput degradation on mvneta, but I will
re-run the tests against an updated bpf-next tree.

But compared to i40e, mvneta is also only 1-2.5 Gbps, so a degradation is
potentially less visible, right [0]? Either way, it's definitely good to get
more data points from benchmarking, given these were lacking before for
higher-speed NICs in particular.

Thanks everyone,
Daniel

  [0] https://doc.dpdk.org/guides/nics/mvneta.html
