Hi Ian,

Thank you for your answers!

It is correct that I am using the ovs-vsctl set Interface dpdk0 options:n_rxq=8
commands for the queues.
Could you please expand on the sentence "It will be split between queues
based on des tip so it’s important that test traffic varies if you want
traffic to be dispersed evenly among the queues at this level."?
It might be a typo, or I might just not know what you mean by "des tip";
could you please clarify for me?
Additionally, what do you mean by varying the traffic? Do you mean to
somehow not send the packets at a constant frame rate?

Regarding the vhost-user queues: I am using OpenStack and I have not yet
found a way to create multiple queues (I updated the image's metadata with
hw_vif_multiqueue_enabled=true), but I don't know how to set the number of
queues, especially since the VM I am running does not have ethtool.
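
For reference, my understanding of how the OpenStack side is supposed to work
is roughly the following (the image and interface names are only placeholders,
so please correct me if this sketch is wrong); the ethtool step is the one I
cannot do in this guest:

# tag the Glance image so Nova requests multiqueue virtio interfaces
openstack image set --property hw_vif_multiqueue_enabled=true my-guest-image

# inside the guest, enable the extra queues (the queue count is expected to
# follow the number of guest vCPUs)
ethtool -L eth0 combined 4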

Regarding multiple queues while using one core for the PMD:
I did get much better performance when I had two cores for the PMD;
however, I do not have the luxury of being able to use two cores.
It is puzzling to me that when I use multiple queues I do get better
performance (still not enough, but much better) than when I use only one.
I am sorry, but this is confusing for me.

As for the core isolation, I have only core zero isolated for the kernel. I
checked with htop and saw that the emulator thread (emulatorpin) of the VM was
probably running there, so I moved it, but that decreased performance.
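
For reference, the kind of command I mean for moving the emulator thread is
something like the following, where the domain name is just a placeholder:

virsh emulatorpin my-guest 0 --live
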
When I use only n_rxq=1 and n_txq=1 I get performance close to 60MB with
64-byte packets.

Thank you again,
Michael





On Thu, Apr 19, 2018 at 11:10 AM, Stokes, Ian <ian.sto...@intel.com> wrote:

> Hi Michael,
>
>
>
> So there are a few issues here we need to address.
>
>
>
> Queues for phy devices:
>
>
>
> I assume you have set the queues for dpdk0 and dpdk1 yourself using
>
>
>
> ovs-vsctl set Interface dpdk0 options:n_rxq=8
>
> ovs-vsctl set Interface dpdk1 options:n_rxq=8
>
>
>
> Receive Side Scaling (RSS) is used to distribute ingress traffic among the
> queues on the NIC at a hardware level. It will be split between queues
> based on des tip so it’s important that test traffic varies if you want
> traffic to be dispersed evenly among the queues at this level.
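>
> As a rough illustration (the addresses are just examples): if the test
> generator sends flows with different destination IPs, e.g. one flow to
> 10.0.0.1 and another to 10.0.0.2, RSS can hash them to different queues,
> whereas a single unvarying flow will land on one queue only.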
>
>
>
> Vhost user queues:
>
>
>
> You do not have to set the number of queues for vhost ports with n_rxq as
> done above since OVS 2.6, but you do have to include the number of
> supported queues in the QEMU command line argument that launches the VM by
> specifying the argument queues=’Num_Queues’ for the vhost port. If using
> kernel virtio interfaces within the VM you will need to enable the extra
> queues as well using ethtool -L. Seeing that there is only 1 queue for your
> vhost user port I think you are missing one of these steps.
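>
> As a rough sketch (the socket path, MAC address and queue count below are
> just examples), the relevant QEMU arguments would look something like:
>
> -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhu018b3f01-39
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=4
> -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mq=on,vectors=10
>
> where vectors is typically 2 * queues + 2, and then inside the guest:
>
> ethtool -L eth0 combined 4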
>
>
>
> PMD configuration:
>
>
>
> Since you’re only using 1 PMD I don’t see much point in using multiple
> queues. Typically you match the number of queues to the number of PMDs to
> ensure an even distribution.
>
> If using 1 PMD, as in your case, the traffic will always be enqueued to
> queue 0 of the vhost device even if there are multiple queues available.
> This is related to the implementation within OVS.
>
>
>
> As a starting point it might be easier to start with 2 PMDs and 2 rxqs for
> each of the phy and vhost ports that you have and ensure that works first.
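>
> As a rough sketch (the core mask and port names are just examples), that
> could look like:
>
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
> ovs-vsctl set Interface dpdk0 options:n_rxq=2
> ovs-vsctl set Interface dpdk1 options:n_rxq=2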
>
>
>
> Also, are you isolating the cores the PMD runs on? If not, then processes
> could be scheduled to that core, which would interrupt the PMD processing;
> this could be related to the traffic drops you see.
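>
> A minimal sketch of that, assuming the PMD stays pinned to core 1, would be
> to add isolcpus to the kernel command line (e.g. in /etc/default/grub) and
> reboot:
>
> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=1"
> update-grub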
>
>
>
> Below is a link to a blog that discusses vhost MQ. It uses OVS 2.5, but a
> lot of the core concepts still apply even if some of the configuration
> commands may have changed:
>
>
>
> https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk
>
>
>
> Ian
>
>
>
> *From:* michael me [mailto:1michaelmesgu...@gmail.com]
> *Sent:* Wednesday, April 18, 2018 2:23 PM
> *To:* Stokes, Ian <ian.sto...@intel.com>
> *Cc:* ovs-discuss@openvswitch.org
> *Subject:* Re: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Ian,
>
>
>
> In the deployment I do have vhost-user ports; below is the full output of
> the ovs-appctl dpif-netdev/pmd-rxq-show command.
>
> root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
>         isolated : false
>
>         port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
>
>         port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
>
>         port: vhu1cbd23fd-82    queue-id: 0
>
>         port: vhu018b3f01-39    queue-id: 0
>
>
>
> What is strange for me, and what I don't understand, is why I have only one
> queue on the vhost side and eight on the dpdk side. I understood that QEMU
> automatically used the same amount, though I am using only one core for the
> VM and one core for the PMD.
>
> In this setup I have eight cores in the system; is that the reason that I
> see eight possible queues?
>
> The setup is North/South (VM to physical network).
>
> As for pinning the PMD, I always pin it to core 1 (mask=0x2).
>
>
>
> When I set n_rxq and n_txq to high values (even 64 or above) I see no
> drops for around a minute or two, and then suddenly bursts of drops as if
> a cache had filled up. Have you seen something similar?
>
> I tried to play with the "max-idle" setting, but it didn't seem to help.
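>
> For reference, the kind of command I mean is something like the following,
> where the value (the flow idle time in milliseconds) is just an example:
>
> ovs-vsctl set Open_vSwitch . other_config:max-idle=30000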
>
>
>
> Originally, I had a setup with 2.9 and 17.11 and I was not able to get
> better performance, but it could be that I didn't tweak it as much. However,
> I am trying to deploy a setup that I can install without needing to build
> from source with make.
>
>
>
> Thank you for any input,
>
> Michael
>
>
>
> On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian <ian.sto...@intel.com> wrote:
>
> Hi Michael,
>
>
>
> Are you using dpdk vhostuser ports in this deployment?
>
>
>
> I would expect to see them listed in the output of ovs-appctl
> dpif-netdev/pmd-rxq-show you posted below.
>
>
>
> Can you describe the expected traffic flow (is it North/South using DPDK
> phy devices as well as vhost devices, or East/West between VM interfaces
> only)?
>
>
>
> OVS 2.6 also has the ability to isolate and pin rxqs for dpdk devices to
> specific PMDs. This can help provide more stable throughput and defined
> behavior. Without doing this, I believe the distribution of rxqs is dealt
> with in a round-robin manner, which could change between deployments. This
> could explain what you are seeing, i.e. sometimes the traffic runs without
> drops.
>
>
>
> You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic
> is dropping and then again when traffic is passing without issue. This
> output, along with the flows in each case, might provide a clue as to what
> is happening. If there is a difference between the two you could investigate
> pinning the rxqs to the specific setup, although you will only benefit from
> this when you have at least 2 PMDs instead of 1.
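>
> As a rough sketch (the queue and core numbers are just examples), rxq
> pinning in OVS 2.6 is done per interface with something like:
>
> ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1,1:2"
>
> which requests queue 0 on the PMD running on core 1 and queue 1 on the PMD
> running on core 2.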
>
>
>
> Also, OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK; have
> you tried the same tests using the latest OVS 2.9 and DPDK 17.11?
>
>
>
> Ian
>
>
>
> *From:* ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-bounces@
> openvswitch.org] *On Behalf Of *michael me
> *Sent:* Tuesday, April 17, 2018 10:42 AM
> *To:* ovs-discuss@openvswitch.org
> *Subject:* [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Everyone,
>
>
>
> I would greatly appreciate any input.
>
>
>
> The setting that i am working with is a host with ovs-dpdk connected to a
> VM.
>
>
>
> What I see when I do a performance test is that after about a minute or two
> I suddenly have many drops, as if a cache was full and was flushed
> improperly.
>
> I tried to play with the n_rxq and n_txq settings, which helps, but probably
> only until the cache is filled, and then I have drops.
>
> The thing is that sometimes, rarely, as if by chance, the performance just
> continues.
>
>
>
> My settings are as follows:
>
> OVS version: 2.6.1
> DPDK version: 16.07.2
> NIC model: Ethernet controller: Intel Corporation Ethernet Connection I354
> (rev 03)
> pmd-cpu-mask: core 1 (mask=0x2)
> lcore mask: core zero ("dpdk-lcore-mask=1")
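>
> In case it helps, my understanding is that these are set roughly as follows:
>
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2
> ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x1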
>
>
>
> Port "dpdk0"
>
>             Interface "dpdk0"
>
>                 type: dpdk
>
>                 options: {n_rxq="8", n_rxq_desc="2048", n_txq="9",
> n_txq_desc="2048"}
>
>
>
> ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
>         isolated : false
>
>         port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
>
>         port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
>
>
>
> Thanks,
>
> Michael
>
>
>
