On 2/9/23 18:32, dheeraj wrote:
>
> Hi Ilya Maximets,
>
> I have a few more queries on your tests.
> What is the traffic rate?
Maximum that OVS can handle. Basically, testpmd app generates
as many packets as OVS is able to receive. On my setup this
is about 8 Mpps.
> Please provide some more details on the traffic type (TCP or UDP), packet
> size, and traffic duration.
UDP, 64B packets, non-stop traffic.
> Is OVS started in DPDK mode or kernel mode?
I don't quite understand that question. We're obviously running with
the userspace datapath here, no kernel involved.
> How are you measuring the performance?
I'm checking the output of the pmd-perf-show appctl command, e.g.:
$ ovs-appctl dpif-netdev/pmd-stats-clear \
&& sleep 10 && ovs-appctl dpif-netdev/pmd-perf-show
Among other things, it provides average packet rate per PMD thread.
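If it's useful, the per-thread rate can be picked out of that output with
something like the following (the exact line labels may differ between
OVS versions):
$ ovs-appctl dpif-netdev/pmd-perf-show | grep -E 'core_id|Rx packets'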
Best regards, Ilya Maximets.
>
> Regards,
> Dheeraj
>
> -----Original Message-----
> From: Ilya Maximets <[email protected]>
> Sent: 09 February 2023 22:38
> To: dheeraj <[email protected]>; [email protected]
> Cc: [email protected]
> Subject: Re: [PATCH v3] dpif-netdev: Optimize flushing of output packet
> buffers
>
> On 2/9/23 17:57, dheeraj wrote:
>> Hi Ilya Maximets,
>>
>> I did internal performance benchmarking tests with this patch. I tried
>> different traffic types (UDP and VXLAN) and different packet sizes (64 and
>> 1024 bytes), and I didn't observe any performance degradation after applying
>> the patch. Even from the code changes, it doesn't look like it could cause
>> a performance degradation of 4%.
>>
>> Can you please elaborate on how you tested?
>
> Hi. I was using my usual setup: OVS with 2 PMD threads and 2 testpmd
> applications with virtio-user ports, one in txonly mode and the other in
> mac forwarding mode. So the traffic is almost bi-directional:
>
> txonly --> vhost0 --> PMD#0 --> vhost1 --> mac --> vhost1 --> PMD#1 --> vhost0 --> drop
>
> Since the first testpmd is in txonly mode, it doesn't receive any packets and
> they end up dropped on a send attempt by PMD#1.
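>
> In case it helps to reproduce, the setup can be brought up roughly like
> this (a sketch only; the bridge name, socket paths, core lists and the
> exact EAL/testpmd options below are placeholders, not necessarily what
> I used):
>
>   $ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
>   $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
>   $ ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
>   $ ovs-vsctl add-port br0 vhost0 -- set Interface vhost0 type=dpdkvhostuser
>   $ ovs-vsctl add-port br0 vhost1 -- set Interface vhost1 type=dpdkvhostuser
>
>   # First testpmd: pure packet generator attached to vhost0.
>   $ dpdk-testpmd -l 0,2 --no-pci --in-memory \
>         --vdev=net_virtio_user0,path=/var/run/openvswitch/vhost0 \
>         -- --forward-mode=txonly --auto-start
>
>   # Second testpmd: mac forwarding, attached to vhost1.
>   $ dpdk-testpmd -l 1,3 --no-pci --in-memory \
>         --vdev=net_virtio_user1,path=/var/run/openvswitch/vhost1 \
>         -- --forward-mode=mac --auto-start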
>
> OpenFlow rules are either a simple NORMAL action or a direct
> in_port=A,output:B + in_port=B,output:A.
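>
> For reference, the two variants can be installed roughly as follows
> (assuming the ports are named vhost0/vhost1 as above):
>
>   # Variant 1: plain L2 forwarding.
>   $ ovs-ofctl del-flows br0
>   $ ovs-ofctl add-flow br0 actions=NORMAL
>
>   # Variant 2: direct port-to-port rules.
>   $ ovs-ofctl del-flows br0
>   $ ovs-ofctl add-flow br0 in_port=vhost0,actions=output:vhost1
>   $ ovs-ofctl add-flow br0 in_port=vhost1,actions=output:vhost0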
>
> Best regards, Ilya Maximets.
>
>>
>> Regards,
>> Dheeraj
>>
>> -----Original Message-----
>> From: Ilya Maximets <[email protected]>
>> Sent: 17 January 2023 01:41
>> To: Dheeraj Kumar <[email protected]>; [email protected]
>> Cc: [email protected]
>> Subject: Re: [PATCH v3] dpif-netdev: Optimize flushing of output
>> packet buffers
>>
>> On 1/13/23 13:20, Dheeraj Kumar wrote:
>>> Problem Statement:
>>> Before OVS 2.12 the OVS-DPDK datapath transmitted processed rx packet
>>> batches directly to the intended tx queues. In OVS 2.12 each PMD stores
>>> the processed packets in an intermediate buffer per output port and
>>> flushes these output buffers in a separate step. This buffering was
>>> introduced to allow better batching of packets for transmit.
>>>
>>> The current implementation of the function that flushes the output
>>> buffers performs a full scan over all output ports, even if only a
>>> single packet was buffered for a single output port. In systems with
>>> hundreds of ports this can take a long time and degrade OVS-DPDK
>>> performance significantly.
>>>
>>> Solution:
>>> Maintain a list of output ports with buffered packets for each PMD
>>> thread and only iterate over that list when flushing output buffers.
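>>>
>>> A minimal standalone sketch of that bookkeeping (illustrative only, not
>>> the actual patch code; all names below are made up):
>>>
>>> #include <stdio.h>
>>>
>>> #define MAX_PORTS 4
>>>
>>> struct out_port {
>>>     int port_no;
>>>     int n_buffered;                /* Packets waiting in this buffer. */
>>>     struct out_port *next_pending; /* Link in the PMD's pending list. */
>>> };
>>>
>>> struct pmd_thread {
>>>     struct out_port ports[MAX_PORTS];
>>>     struct out_port *pending;      /* Ports that need a flush. */
>>> };
>>>
>>> static void
>>> buffer_packet(struct pmd_thread *pmd, int port_no)
>>> {
>>>     struct out_port *p = &pmd->ports[port_no];
>>>
>>>     if (p->n_buffered++ == 0) {
>>>         /* First buffered packet for this port: remember it. */
>>>         p->next_pending = pmd->pending;
>>>         pmd->pending = p;
>>>     }
>>> }
>>>
>>> static void
>>> flush_output_buffers(struct pmd_thread *pmd)
>>> {
>>>     /* Visit only the ports that actually buffered something,
>>>      * instead of scanning all MAX_PORTS output ports. */
>>>     while (pmd->pending) {
>>>         struct out_port *p = pmd->pending;
>>>
>>>         pmd->pending = p->next_pending;
>>>         printf("flush %d packet(s) to port %d\n",
>>>                p->n_buffered, p->port_no);
>>>         p->n_buffered = 0;
>>>     }
>>> }
>>>
>>> int
>>> main(void)
>>> {
>>>     struct pmd_thread pmd = { .ports = { {0}, {1}, {2}, {3} } };
>>>
>>>     buffer_packet(&pmd, 2);
>>>     buffer_packet(&pmd, 2);
>>>     flush_output_buffers(&pmd);    /* Touches only port 2. */
>>>     return 0;
>>> }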
>>>
>>> Signed-off-by: Dheeraj Kumar <[email protected]>
>>> ---
>>> lib/dpif-netdev-private-thread.h | 7 ++++---
>>> lib/dpif-netdev.c | 24 ++++++++++++------------
>>> 2 files changed, 16 insertions(+), 15 deletions(-)
>>
>> Hi, Dheeraj. Thanks for the updated version.
>> Unfortunately, on a quick test with bidirectional traffic between
>> 2 vhost-user ports I see a noticeable 4% performance degradation with this
>> change applied. Could you please check why that is happening or if we can
>> avoid this?
>>
>> Best regards, Ilya Maximets.
>>
>