Hi Ian and everyone,

Thank you for clarifying; I was just trying to understand :)

My bad about the 1 queue, though; I changed it to two queues and the
performance was still poor, around 60 mpps.
My findings are:
1. I moved the PMD to a core where no other services were running.
2. I added many queues (around 64).
3. After a few tests I would see traffic loss; if I reset the VM, I would
again get double the throughput for a test or two (each test lasting 3
minutes).

Thank you for answering,
Michael



On Fri, Apr 20, 2018 at 12:25 PM, Mooney, Sean K <[email protected]>
wrote:

>
>
>
>
> *From:* Stokes, Ian
> *Sent:* Thursday, April 19, 2018 9:51 PM
> *To:* michael me <[email protected]>
> *Cc:* [email protected]; Mooney, Sean K <[email protected]>
> *Subject:* RE: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Michael,
>
>
>
> “It will be split between queues based on des tip so it’s important that
> test traffic varies if you want traffic to be dispersed evenly among the
> queues at this level."
>
>
>
> “Des tip” should be destination IP (apologies, I replied before having a
> morning coffee :)).
>
>
>
> By varying the traffic I mean changing the destination IP; if the same IP
> is used, I believe the RSS hash will evaluate to the same queue on the NIC.
>
>
>
> I’m not an expert on OpenStack, so I’m not too sure how to enable
> multiqueue for vhost interfaces in that case.
>
>
>
> @Sean (cc’d): Is there a specific way to enable vhost multiqueue for
> OpenStack?
>
> *[Mooney, Sean K] Yes, to enable vhost multiqueue in OpenStack you need to
> set an image metadata key to request it. That will result in 1 queue per
> vCPU of the guest.*
>
> *The key should be defined here
> https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json
> but it is missing the key you want to add:*
>
> *hw_vif_multiqueue_enabled. It is documented here
> https://docs.openstack.org/neutron/pike/admin/config-ovs-dpdk.html. I
> should probably open a bug to add it to the glance default metadata refs.*
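>
> *For example (just a sketch; the image name below is a placeholder), the
> property is set on the image with the OpenStack client:*
>
> *openstack image set --property hw_vif_multiqueue_enabled=true my-image*
>
> *The guest then gets one queue per vCPU of its flavor, as noted above.*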
>
>
>
> I haven’t run MQ with a single PMD, so I’m not sure why you have better
> performance. Leave this with me to investigate further. I suspect that as
> you have multiple queues, more traffic is enqueued at the NIC level.
>
> *[Mooney, Sean K] For kernel virtio-net in the guest I believe there is a
> performance improvement due to a reduction in internal contention from
> locks in the guest kernel networking stack, but with DPDK in the guest I
> think the performance would normally be the same.*
>
> *However, if the bottleneck you are hitting is on vswitch tx to the guest,
> then perhaps that will also benefit from multiqueue. Still, unless your
> guest has more queues/cores than the host has PMDs, you would still have to
> use spin locks in the vhost PMD, as you could not set up a 1:1 PMD mapping
> to allow lockless enqueue into the guest.*
>
>
>
> The problem with only 1 queue for the VM is that it creates a bottleneck
> in terms of transmitting traffic from the host to the VM (in your case 8
> queues trying to enqueue to 1 queue).
>
>
>
> How are you isolating core 0? Are you using isolcpus? Normally I would
> suggest isolating core 2 (i.e. the PMD core) with isolcpus.
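>
> For example (a sketch assuming a GRUB-based distro and that core 2 is the
> PMD core; adjust the core number to match your pmd-cpu-mask), you would add
> it to the kernel command line in /etc/default/grub and reboot:
>
> GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2"
>
> followed by update-grub (or the grub2-mkconfig equivalent on other distros).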
>
>
>
> When you say you set txq=1, why is that?
>
>
>
> Typically txq is set automatically; it will be the number of PMDs + 1 (in
> your case 2 txqs in total). The +1 is to account for traffic from kernel space.
>
>
>
> Thanks
>
> Ian
>
>
>
> *From:* michael me [mailto:[email protected]
> <[email protected]>]
> *Sent:* Thursday, April 19, 2018 7:12 PM
> *To:* Stokes, Ian <[email protected]>
> *Cc:* [email protected]
> *Subject:* Re: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Ian,
>
>
>
> Thank you for your answers!
>
>
>
> It is correct that I am using ovs-vsctl set Interface dpdk0
> options:n_rxq=8 commands for the queues.
>
> Could you please expand on the sentence "It will be split between queues
> based on des tip so it’s important that test traffic varies if you want
> traffic to be dispersed evenly among the queues at this level."
>
> It might be a typo, or I might just not know what you mean by "des tip";
> could you please clarify for me?
>
> Additionally, what do you mean by varying the traffic? Do you mean
> somehow not sending the packets at a constant frame rate?
>
>
>
> Regarding the vhost-user queues: I am using OpenStack and I have not yet
> found a way to create multiple queues (I updated the image's metadata with
> hw_vif_multiqueue_enabled=true), but I don't know how to set the number of
> queues, especially since the VM I am running does not have ethtool.
>
>
>
> Regarding multiple queues while using one core for the PMD:
>
> I did get much better performance when I had two cores for the PMD;
> however, I do not have the luxury of using two cores.
>
> It is puzzling to me that when I use multiple queues I do get better
> performance (not enough, but much better than when I use only one).
>
> I am sorry, but this is confusing for me.
>
>
>
> As for the core isolation, I have only core zero isolated for the kernel.
> I checked with htop and saw that the emulatorpin thread of the VM was
> probably running there, so I moved it, but that decreased performance.
>
> When I use only n_rxq=1 and n_txq=1 I get performance close to 60MB with
> 64-byte packets.
>
>
>
> Thank you again,
>
> Michael
>
>
>
>
>
>
>
>
>
>
>
> On Thu, Apr 19, 2018 at 11:10 AM, Stokes, Ian <[email protected]>
> wrote:
>
> Hi Michael,
>
>
>
> So there are a few issues here we need to address.
>
>
>
> Queues for phy devices:
>
>
>
> I assume you have set the queues for dpdk0 and dpdk1 yourself using
>
>
>
> ovs-vsctl set Interface dpdk0 options:n_rxq=8
>
> ovs-vsctl set Interface dpdk1 options:n_rxq=8
>
>
>
> Receive Side Scaling (RSS) is used to distribute ingress traffic among the
> queues on the NIC at a hardware level. It will be split between queues
> based on des tip so it’s important that test traffic varies if you want
> traffic to be dispersed evenly among the queues at this level.
>
>
>
> Vhost user queues:
>
>
>
> Since OVS 2.6 you do not have to set the number of queues for vhost ports
> with n_rxq as done above, but you do have to include the number of
> supported queues in the QEMU command line that launches the VM by
> specifying the argument queues='Num_Queues' for the vhost port. If using
> kernel virtio interfaces within the VM you will also need to enable the
> extra queues using ethtool -L. Seeing that there is only 1 queue for your
> vhost user port, I think you are missing one of these steps.
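>
> As a rough sketch (the socket path, id names and queue count below are
> placeholders, not taken from your setup), the relevant QEMU arguments look
> something like:
>
> -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhu0
> -netdev type=vhost-user,id=mynet0,chardev=char0,queues=2
> -device virtio-net-pci,netdev=mynet0,mq=on,vectors=6
>
> (vectors should be 2*queues+2), and inside the VM, for a kernel virtio-net
> interface:
>
> ethtool -L eth0 combined 2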
>
>
>
> PMD configuration:
>
>
>
> Since you're only using 1 PMD I don't see much point in using multiple
> queues. Typically you match the number of PMDs to the number of queues to
> ensure an even distribution.
>
> If using 1 PMD, as in your case, the traffic will always be enqueued to
> queue 0 of the vhost device even if there are multiple queues available.
> This is related to the implementation within OVS.
>
>
>
> As a starting point it might be easier to begin with 2 PMDs and 2 rxqs
> for each of the phy and vhost ports that you have and ensure that works first.
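>
> For example (assuming cores 1 and 2 are available for the PMDs; adjust the
> mask for your machine):
>
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
>
> ovs-vsctl set Interface dpdk0 options:n_rxq=2
>
> ovs-vsctl set Interface dpdk1 options:n_rxq=2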
>
>
>
> Also, are you isolating the cores the PMD runs on? If not, then processes
> could be scheduled to that core, which would interrupt the PMD processing;
> this could be related to the traffic drops you see.
>
>
>
> Below is a link to a blog that discusses vhost MQ. It uses OVS 2.5, but a
> lot of the core concepts still apply even if some of the configuration
> commands may have changed:
>
>
>
> https://software.intel.com/en-us/articles/configure-vhost-user-multiqueue-for-ovs-with-dpdk
>
>
>
> Ian
>
>
>
> *From:* michael me [mailto:[email protected]]
> *Sent:* Wednesday, April 18, 2018 2:23 PM
> *To:* Stokes, Ian <[email protected]>
> *Cc:* [email protected]
> *Subject:* Re: [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Ian,
>
>
>
> In the deployment I do have vhost-user ports; below is the full output of
> the ovs-appctl dpif-netdev/pmd-rxq-show command.
>
> root@W:/# ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
>         isolated : false
>
>         port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
>
>         port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
>
>         port: vhu1cbd23fd-82    queue-id: 0
>
>         port: vhu018b3f01-39    queue-id: 0
>
>
>
> What is strange to me, and what I don't understand, is why I have only
> one queue on the vhost side and eight on the dpdk side. I understood that
> QEMU automatically used the same amount, though I am using only one core
> for the VM and one core for the PMD.
>
> In this setup I have eight cores in the system; is that the reason I see
> eight possible queues?
>
> The setup is North/South (VM to physical network).
>
> As for pinning the PMD, I always pin it to core 1 (mask=0x2).
>
>
>
> When I set n_rxq and n_txq to high values (even 64 or above) I see no
> drops for around a minute or two, and then suddenly bursts of drops, as if
> a cache filled up. Have you seen something similar?
>
> I tried to play with "max-idle", but it didn't seem to help.
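>
> (The knob I was adjusting is the datapath flow idle time, i.e. something
> along the lines of:
>
> ovs-vsctl set Open_vSwitch . other_config:max-idle=30000
>
> with the value in milliseconds.)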
>
>
>
> Originally, I had a setup with OVS 2.9 and DPDK 17.11 and I was not able
> to get better performance, but it could be that I didn't tweak it as much.
> However, I am trying to deploy a setup that I can install without needing
> to build from source (i.e. run make).
>
>
>
> Thank you for any input,
>
> Michael
>
>
>
> On Tue, Apr 17, 2018 at 6:28 PM, Stokes, Ian <[email protected]> wrote:
>
> Hi Michael,
>
>
>
> Are you using dpdk vhostuser ports in this deployment?
>
>
>
> I would expect to see them listed in the output of ovs-appctl
> dpif-netdev/pmd-rxq-show you posted below.
>
>
>
> Can you describe the expected traffic flow (is it North/South using DPDK
> phy devices as well as vhost devices, or East/West between VM interfaces
> only)?
>
>
>
> OVS 2.6 also has the ability to isolate and pin rxqs for dpdk devices to
> specific PMDs. This can help provide more stable throughput and defined
> behavior. Without doing this, I believe the distribution of rxqs is handled
> in a round-robin manner, which could change between deployments. This could
> explain what you are seeing, i.e. sometimes the traffic runs without drops.
>
>
>
> You could try to examine ovs-appctl dpif-netdev/pmd-rxq-show when traffic
> is dropping and then again when traffic is passing without issue. This
> output, along with the flows in each case, might provide a clue as to what
> is happening. If there is a difference between the two you could
> investigate pinning the rxqs to the specific setup, although you will only
> benefit from this when you have at least 2 PMDs instead of 1.
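>
> For reference, pinning is done per interface with pmd-rxq-affinity; a
> sketch (the queue:core pairs are only illustrative, assuming PMDs on cores
> 1 and 2) would be:
>
> ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:1,1:2"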
>
>
>
> Also, OVS 2.6 and DPDK 16.07 aren’t the latest releases of OVS & DPDK;
> have you tried the same tests using the latest OVS 2.9 and DPDK 17.11?
>
>
>
> Ian
>
>
>
> *From:* [email protected] [mailto:ovs-discuss-bounces@
> openvswitch.org] *On Behalf Of *michael me
> *Sent:* Tuesday, April 17, 2018 10:42 AM
> *To:* [email protected]
> *Subject:* [ovs-discuss] ovs-dpdk performance not stable
>
>
>
> Hi Everyone,
>
>
>
> I would greatly appreciate any input.
>
>
>
> The setup that I am working with is a host with OVS-DPDK connected to a
> VM.
>
>
>
> What I see when I run a performance test is that after about a minute or
> two I suddenly have many drops, as if a cache filled up and was flushed
> improperly.
>
> I tried to play with the n_rxq and n_txq values, which helps, but probably
> only until the cache is filled, and then I have drops.
>
> The thing is that sometimes, rarely, as if by chance, the performance just
> continues without drops.
>
>
>
> My settings are as follows:
>
> OVS version: 2.6.1
> DPDK version: 16.07.2
> NIC model: Ethernet controller: Intel Corporation Ethernet Connection I354
> (rev 03)
> pmd-cpu-mask: core 1 (mask=0x2)
> lcore mask: core zero ("dpdk-lcore-mask=1")
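>
> (These are configured through ovs-vsctl, i.e. roughly:
>
> ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
>
> ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2 )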
>
>
>
> Port "dpdk0"
>
>             Interface "dpdk0"
>
>                 type: dpdk
>
>                 options: {n_rxq="8", n_rxq_desc="2048", n_txq="9",
> n_txq_desc="2048"}
>
>
>
> ovs-appctl dpif-netdev/pmd-rxq-show
>
> pmd thread numa_id 0 core_id 1:
>
>         isolated : false
>
>         port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
>
>         port: dpdk1     queue-id: 0 1 2 3 4 5 6 7
>
>
>
> Thanks,
>
> Michael
>
>
>
>
>