On 1/5/23 14:31, David Marchand wrote:
> Hi Ilya,
> 
> On Thu, Dec 22, 2022 at 12:18 AM Ilya Maximets <[email protected]> wrote:
>>
>> On 12/19/22 16:03, David Marchand wrote:
>>> The DPDK vhost-user library maintains more granular per queue stats
>>> which can replace what OVS was providing for vhost-user ports.
>>>
>>> The benefits for OVS:
>>> - OVS can skip parsing packet sizes on the rx side,
>>> - vhost-user is aware of which packets are transmitted to the guest,
>>>   so per *transmitted* packet size stats can be reported,
>>> - more internal stats from vhost-user may be exposed, without OVS
>>>   needing to understand them,
>>
>> Hi, David and Maxime.  Thanks for the patch!
>>
>> The change looks good to me in general.  I would like to avoid some
>> of the code duplication it introduces, but I'm not sure how to actually
>> do that, so it's fine as is.
>>
>> However, while testing the change I see a noticeable performance
>> degradation in a simple V-to-V scenario with testpmd and virtio-user
>> ports.  Performance dips by 1-3%.  It looks like the code added
>> to DPDK is for some reason a bit heavier than the code removed
>> from OVS, so the packet rate does not even out with the current
>> master.
>>
>> If I comment out enabling of the RTE_VHOST_USER_NET_STATS_ENABLE
>> flag, I can get the performance back.  It's even a bit higher, but
>> not high enough to compensate for the stats accounting in the vhost
>> library when stats are enabled.
> 
> Sorry it took me a while, but I can't see such a difference in the numbers.
> 
> 
> I did some experiments with a simple mono-directional setup like:
> virtio-user testpmd/txonly --> ovs --> virtio-user testpmd/rxonly
> I had to restart testpmd between each restart of OVS: it looks like
> virtio-user (as server) reconnection has a bug.
> 
> I took care to dedicate one HT per datapath thread (i.e. all physical
> cores are isolated, and the ovs pmd thread runs on a HT whose sibling
> is idle; same for the testpmd rx / tx threads).
> I rely on testpmd tx stats, grabbing stats for 10 runs of a few
> seconds each.
> 
> When OVS drops everything (no forwarding to the other port, which
> means we are only counting received packets, from the OVS point of
> view):
> - master: 12.189Mpps
> - series: 12.226Mpps (+0.3%)
> 
> I also tried with enabling/disabling stats support in vhost library
> (+stats means adding RTE_VHOST_USER_NET_STATS_ENABLE, resp. -stats
> means removing it):
> - master+stats: 11.962Mpps (-1.9%)
> - series-stats: 12.453Mpps (+2.1%)
> 
> 
> When OVS forwards (which should be what you tested, correct?):
> - master: 7.830Mpps
> - series: 7.795Mpps (-0.5%)
> 
> - master+stats: 7.641Mpps (-2.5%)
> - series-stats: 7.967Mpps (+1.7%)


My setup is a bit different, because it's bidirectional.  I have 2
testpmd apps: one in txonly mode, the other in mac mode.
On the OVS side I have 2 PMD threads.  Each port has a single queue
pinned to its own PMD thread.  All the cores used are on the same
NUMA node, but are not siblings.  Having 2 PMD threads doing RX and
TX on the same port is an important factor, because of the stats_lock
contention.

My numbers are:

actions=NORMAL
  ---
  master: 7.496 Mpps
  series: 7.383 Mpps (-1.5%)

in_port=1,actions=output:2
in_port=2,actions=output:1
  ---
  master: 8.296 Mpps
  series: 8.072 Mpps (-2.7%)
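For reproducibility, the two flow tables above correspond to something
like the following ovs-ofctl commands (the bridge name "br0" is an
assumption about the local setup):

```shell
# Case 1: MAC-learning datapath.
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 "actions=NORMAL"

# Case 2: static port-to-port cross.
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
ovs-ofctl add-flow br0 "in_port=2,actions=output:1"
```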

> 
> I noticed though that if ovs pmd thread runs on a HT whose sibling is
> busy polling a physical port, performance numbers are far more
> unstable, and I can see differences up to +/- 5%, regardless of code
> changes.
