Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-12-06 Thread Lam, Tiago
On 03/12/2018 10:18, LIU Yulong wrote:
> 
> 
On Sat, Dec 1, 2018 at 1:17 AM LIU Yulong wrote:
> 
> 
> 
On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago wrote:
> 
> On 30/11/2018 02:07, LIU Yulong wrote:
> > Hi,
> >
> > Thanks for the reply, please see my inline comments below.
> >
> >
> > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago wrote:
> >
> >     On 29/11/2018 08:24, LIU Yulong wrote:
> >     > Hi,
> >     >
> >     > We recently tested ovs-dpdk, but we met some bandwidth
> issue. The
> >     bandwidth
> >     > from VM to VM was not close to the physical NIC, it's about
> >     4.3Gbps on a
> >     > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3
> test can easily
> >     > reach 9.3Gbps. We enabled the virtio multiqueue for all
> guest VMs.
> >     In the
> >     > dpdk vhostuser guest, we noticed that the interrupts are
> >     centralized to
> >     > only one queue. But for no dpdk VM, interrupts can hash
> to all queues.
> >     > For those dpdk vhostuser VMs, we also noticed that the
> PMD usages were
> >     > also centralized to one no matter server(tx) or
> client(rx). And no
> >     matter
> >     > one PMD or multiple PMDs, this behavior always exists.
> >     >
> >     > Furthermore, my colleague added some systemtap hooks on the openvswitch
> >     > function, he found something interesting. The function
> >     > __netdev_dpdk_vhost_send will send all the packets to one
> >     virtionet-queue.
> >     > It seems that some algorithm/hash table/logic does not do the hash
> >     > very well.
> >     >
> >
> >     Hi,
> >
> >     When you say "no dpdk VMs", you mean that within your VM
> you're relying
> >     on the Kernel to get the packets, using virtio-net. And
> when you say
> >     "dpdk vhostuser guest", you mean you're using DPDK inside
> the VM to get
> >     the packets. Is this correct?
> >
> >
> > Sorry for the inaccurate description. I'm really new to DPDK. 
> > No DPDK inside VM, all these settings are for host only.
> > (`host` means the hypervisor physical machine in the
> perspective of
> > virtualization.
> > On the other hand `guest` means the virtual machine.)
> > "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
> > traditional way),
> > the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?
> 
> Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel,
> while
> your "dpdk vhostuser guest" is referred to as OvS-DPDK.
> 
> >
> >     If so, could you also tell us which DPDK app you're using
> inside of
> >     those VMs? Is it testpmd? If so, how are you setting the
> `--rxq` and
> >     `--txq` args? Otherwise, how are you setting those in your
> app when
> >     initializing DPDK?
> >
> >
> > Inside VM, there is no DPDK app, VM kernel also
> > does not set any config related to DPDK. `iperf3` is the tool for
> > bandwidth testing.
> >
> >     The information below is useful in telling us how you're
> setting your
> >     configurations in OvS, but we are still missing the
> configurations
> >     inside the VM.
> >
> >     This should help us in getting more information,
> >
> >
> > Maybe you have noticed that, we only setup one PMD in the pasted
> > configurations.
> > But VM has 8 queues. Should the pmd quantity match the queues?
> 
> It shouldn't match the queues inside the VM per se. But in this
> case,
> since you have configured 8 rx queues on your physical NICs as
> well, and
> since you're looking for higher throughputs, you should increase
> that
> number of PMDs and pin those rxqs - take a look at [1] on how to do
> that. Later on, increasing the size of your queues could also help.
> 
> 
> I'll test it. 
> Yes, as you noticed, the vhostuserclient port has n_rxq="8",
> options:
> {n_rxq="8",vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}.
> And the physical NIC has both n_rxq="8", n_txq="8".
> options: {dpdk-devargs=":01:00.0", n_rxq="8", n_txq="8"}
> options: 

Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-12-03 Thread LIU Yulong
On Sat, Dec 1, 2018 at 1:17 AM LIU Yulong  wrote:

>
>
> On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago  wrote:
>
>> On 30/11/2018 02:07, LIU Yulong wrote:
>> > Hi,
>> >
>> > Thanks for the reply, please see my inline comments below.
>> >
>> >
>> > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago wrote:
>> >
>> > On 29/11/2018 08:24, LIU Yulong wrote:
>> > > Hi,
>> > >
>> > > We recently tested ovs-dpdk, but we met some bandwidth issue. The
>> > bandwidth
>> > > from VM to VM was not close to the physical NIC, it's about
>> > 4.3Gbps on a
>> > > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test can
>> easily
>> > > reach 9.3Gbps. We enabled the virtio multiqueue for all guest VMs.
>> > In the
>> > > dpdk vhostuser guest, we noticed that the interrupts are
>> > centralized to
>> > > only one queue. But for no dpdk VM, interrupts can hash to all
>> queues.
>> > > For those dpdk vhostuser VMs, we also noticed that the PMD usages
>> were
>> > > also centralized to one no matter server(tx) or client(rx). And no
>> > matter
>> > > one PMD or multiple PMDs, this behavior always exists.
>> > >
>> > > Furthermore, my colleague added some systemtap hooks on the openvswitch
>> > > function, he found something interesting. The function
>> > > __netdev_dpdk_vhost_send will send all the packets to one
>> > virtionet-queue.
>> > > It seems that some algorithm/hash table/logic does not do the hash
>> > > very well.
>> > >
>> >
>> > Hi,
>> >
>> > When you say "no dpdk VMs", you mean that within your VM you're
>> relying
>> > on the Kernel to get the packets, using virtio-net. And when you say
>> > "dpdk vhostuser guest", you mean you're using DPDK inside the VM to
>> get
>> > the packets. Is this correct?
>> >
>> >
>> > Sorry for the inaccurate description. I'm really new to DPDK.
>> > No DPDK inside VM, all these settings are for host only.
>> > (`host` means the hypervisor physical machine in the perspective of
>> > virtualization.
>> > On the other hand `guest` means the virtual machine.)
>> > "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
>> > traditional way),
>> > the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?
>>
>> Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel, while
>> your "dpdk vhostuser guest" is referred to as OvS-DPDK.
>>
>> >
>> > If so, could you also tell us which DPDK app you're using inside of
>> > those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
>> > `--txq` args? Otherwise, how are you setting those in your app when
>> > initializing DPDK?
>> >
>> >
>> > Inside VM, there is no DPDK app, VM kernel also
>> > does not set any config related to DPDK. `iperf3` is the tool for
>> > bandwidth testing.
>> >
>> > The information below is useful in telling us how you're setting
>> your
>> > configurations in OvS, but we are still missing the configurations
>> > inside the VM.
>> >
>> > This should help us in getting more information,
>> >
>> >
>> > Maybe you have noticed that, we only setup one PMD in the pasted
>> > configurations.
>> > But VM has 8 queues. Should the pmd quantity match the queues?
>>
>> It shouldn't match the queues inside the VM per se. But in this case,
>> since you have configured 8 rx queues on your physical NICs as well, and
>> since you're looking for higher throughputs, you should increase that
>> number of PMDs and pin those rxqs - take a look at [1] on how to do
>> that. Later on, increasing the size of your queues could also help.
>>
>>
> I'll test it.
> Yes, as you noticed, the vhostuserclient port has n_rxq="8",
> options:
> {n_rxq="8",vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}.
> And the physical NIC has both n_rxq="8", n_txq="8".
> options: {dpdk-devargs=":01:00.0", n_rxq="8", n_txq="8"}
> options: {dpdk-devargs=":05:00.1", n_rxq="8", n_txq="8"}
> But, furthermore, when removing such configuration from the vhostuserclient
> port and the physical NIC,
> the bandwidth stays at the same 4.3Gbps no matter one PMD or multiple PMDs.
>

Bad news: the bandwidth does not increase by much, it's about ~4.9Gbps -
5.3Gbps.
The following are the new configurations. The VM still has 8 queues,
but now I have 4 PMDs.

# ovs-vsctl get interface nic-10G-1 other_config
{pmd-rxq-affinity="0:2,1:4,3:20"}
# ovs-vsctl get interface nic-10G-2 other_config
{pmd-rxq-affinity="0:2,1:4,3:20"}
# ovs-vsctl get interface vhuc8febeff-56 other_config
{pmd-rxq-affinity="0:2,1:4,3:20"}

# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 2:
isolated : true
port: nic-10G-1        queue-id:  0  pmd usage:  0 %
port: nic-10G-2        queue-id:  0  pmd usage:  0 %
port: vhuc8febeff-56   queue-id:  0  pmd usage:  0 %
pmd thread numa_id 0 core_id 4:
isolated : true
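
For completeness, the per-PMD load behind the output above can also be
cross-checked with the stock pmd-stats commands (nothing beyond a default
OVS 2.9 install is assumed here); clearing first gives a clean sample window:

# ovs-appctl dpif-netdev/pmd-stats-clear
# ovs-appctl dpif-netdev/pmd-stats-show

If one PMD still reports almost all of the processing cycles while the others
stay idle, that would match the single-queue behaviour described earlier in
the thread.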

Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-12-03 Thread Lam, Tiago
On 30/11/2018 17:17, LIU Yulong wrote:
> 
> 
On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago wrote:
> 
> On 30/11/2018 02:07, LIU Yulong wrote:
> > Hi,
> >
> > Thanks for the reply, please see my inline comments below.
> >
> >
> > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago wrote:
> >
> >     On 29/11/2018 08:24, LIU Yulong wrote:
> >     > Hi,
> >     >
> >     > We recently tested ovs-dpdk, but we met some bandwidth
> issue. The
> >     bandwidth
> >     > from VM to VM was not close to the physical NIC, it's about
> >     4.3Gbps on a
> >     > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test
> can easily
> >     > reach 9.3Gbps. We enabled the virtio multiqueue for all
> guest VMs.
> >     In the
> >     > dpdk vhostuser guest, we noticed that the interrupts are
> >     centralized to
> >     > only one queue. But for no dpdk VM, interrupts can hash to
> all queues.
> >     > For those dpdk vhostuser VMs, we also noticed that the PMD
> usages were
> >     > also centralized to one no matter server(tx) or client(rx).
> And no
> >     matter
> >     > one PMD or multiple PMDs, this behavior always exists.
> >     >
> >     > Furthermore, my colleague added some systemtap hooks on the openvswitch
> >     > function, he found something interesting. The function
> >     > __netdev_dpdk_vhost_send will send all the packets to one
> >     virtionet-queue.
> >     > It seems that some algorithm/hash table/logic does not do the hash
> >     > very well.
> >     >
> >
> >     Hi,
> >
> >     When you say "no dpdk VMs", you mean that within your VM
> you're relying
> >     on the Kernel to get the packets, using virtio-net. And when
> you say
> >     "dpdk vhostuser guest", you mean you're using DPDK inside the
> VM to get
> >     the packets. Is this correct?
> >
> >
> > Sorry for the inaccurate description. I'm really new to DPDK. 
> > No DPDK inside VM, all these settings are for host only.
> > (`host` means the hypervisor physical machine in the perspective of
> > virtualization.
> > On the other hand `guest` means the virtual machine.)
> > "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
> > traditional way),
> > the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?
> 
> Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel, while
> your "dpdk vhostuser guest" is referred to as OvS-DPDK.
> 
> >
> >     If so, could you also tell us which DPDK app you're using
> inside of
> >     those VMs? Is it testpmd? If so, how are you setting the
> `--rxq` and
> >     `--txq` args? Otherwise, how are you setting those in your app
> when
> >     initializing DPDK?
> >
> >
> > Inside VM, there is no DPDK app, VM kernel also
> > does not set any config related to DPDK. `iperf3` is the tool for
> > bandwidth testing.
> >
> >     The information below is useful in telling us how you're
> setting your
> >     configurations in OvS, but we are still missing the configurations
> >     inside the VM.
> >
> >     This should help us in getting more information,
> >
> >
> > Maybe you have noticed that, we only setup one PMD in the pasted
> > configurations.
> > But VM has 8 queues. Should the pmd quantity match the queues?
> 
> It shouldn't match the queues inside the VM per se. But in this case,
> since you have configured 8 rx queues on your physical NICs as well, and
> since you're looking for higher throughputs, you should increase that
> number of PMDs and pin those rxqs - take a look at [1] on how to do
> that. Later on, increasing the size of your queues could also help.
> 
> 
> I'll test it. 
> Yes, as you noticed, the vhostuserclient port has n_rxq="8",
> options:
> {n_rxq="8",vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}.
> And the physical NIC has both n_rxq="8", n_txq="8".
> options: {dpdk-devargs=":01:00.0", n_rxq="8", n_txq="8"}
> options: {dpdk-devargs=":05:00.1", n_rxq="8", n_txq="8"}
> But, furthermore, when removing such configuration from the vhostuserclient
> port and the physical NIC,
> the bandwidth stays at the same 4.3Gbps no matter one PMD or multiple PMDs.
>  
> 
> Just as a curiosity, I see you have a configured MTU of 1500B on the
> physical interfaces. Is that the same MTU you're using inside the VM?
> And are you using the same configurations (including that 1500B MTU)
> when running your OvS-Kernel setup?
> 
> 
> MTU inside VM is 1450. Is that OK for the high throughput?

This will depend on what you're trying to 

Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-11-30 Thread LIU Yulong
On Fri, Nov 30, 2018 at 5:36 PM Lam, Tiago  wrote:

> On 30/11/2018 02:07, LIU Yulong wrote:
> > Hi,
> >
> > Thanks for the reply, please see my inline comments below.
> >
> >
> > On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago wrote:
> >
> > On 29/11/2018 08:24, LIU Yulong wrote:
> > > Hi,
> > >
> > > We recently tested ovs-dpdk, but we met some bandwidth issue. The
> > bandwidth
> > > from VM to VM was not close to the physical NIC, it's about
> > 4.3Gbps on a
> > > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test can
> easily
> > > reach 9.3Gbps. We enabled the virtio multiqueue for all guest VMs.
> > In the
> > > dpdk vhostuser guest, we noticed that the interrupts are
> > centralized to
> > > only one queue. But for no dpdk VM, interrupts can hash to all
> queues.
> > > For those dpdk vhostuser VMs, we also noticed that the PMD usages
> were
> > > also centralized to one no matter server(tx) or client(rx). And no
> > matter
> > > one PMD or multiple PMDs, this behavior always exists.
> > >
> > > Furthermore, my colleague added some systemtap hooks on the openvswitch
> > > function, he found something interesting. The function
> > > __netdev_dpdk_vhost_send will send all the packets to one
> > virtionet-queue.
> > > It seems that some algorithm/hash table/logic does not do the hash
> > > very well.
> > >
> >
> > Hi,
> >
> > When you say "no dpdk VMs", you mean that within your VM you're
> relying
> > on the Kernel to get the packets, using virtio-net. And when you say
> > "dpdk vhostuser guest", you mean you're using DPDK inside the VM to
> get
> > the packets. Is this correct?
> >
> >
> > Sorry for the inaccurate description. I'm really new to DPDK.
> > No DPDK inside VM, all these settings are for host only.
> > (`host` means the hypervisor physical machine in the perspective of
> > virtualization.
> > On the other hand `guest` means the virtual machine.)
> > "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
> > traditional way),
> > the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?
>
> Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel, while
> your "dpdk vhostuser guest" is referred to as OvS-DPDK.
>
> >
> > If so, could you also tell us which DPDK app you're using inside of
> > those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
> > `--txq` args? Otherwise, how are you setting those in your app when
> > initializing DPDK?
> >
> >
> > Inside VM, there is no DPDK app, VM kernel also
> > does not set any config related to DPDK. `iperf3` is the tool for
> > bandwidth testing.
> >
> > The information below is useful in telling us how you're setting your
> > configurations in OvS, but we are still missing the configurations
> > inside the VM.
> >
> > This should help us in getting more information,
> >
> >
> > Maybe you have noticed that, we only setup one PMD in the pasted
> > configurations.
> > But VM has 8 queues. Should the pmd quantity match the queues?
>
> It shouldn't match the queues inside the VM per se. But in this case,
> since you have configured 8 rx queues on your physical NICs as well, and
> since you're looking for higher throughputs, you should increase that
> number of PMDs and pin those rxqs - take a look at [1] on how to do
> that. Later on, increasing the size of your queues could also help.
>
>
I'll test it.
Yes, as you noticed, the vhostuserclient port has n_rxq="8",
options:
{n_rxq="8",vhost-server-path="/var/lib/vhost_sockets/vhu76f9a623-9f"}.
And the physical NIC has both n_rxq="8", n_txq="8".
options: {dpdk-devargs=":01:00.0", n_rxq="8", n_txq="8"}
options: {dpdk-devargs=":05:00.1", n_rxq="8", n_txq="8"}
But, furthermore, when removing such configuration from the vhostuserclient
port and the physical NIC,
the bandwidth stays at the same 4.3Gbps no matter one PMD or multiple PMDs.
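
On the "increasing the size of your queues" suggestion, a minimal sketch for
the physical dpdk ports would be along these lines (4096 is only an example
value, it has to be a power of two; the dpif/show output earlier already
reports 2048 descriptors):

# ovs-vsctl set Interface nic-10G-1 options:n_rxq_desc=4096 options:n_txq_desc=4096
# ovs-vsctl set Interface nic-10G-2 options:n_rxq_desc=4096 options:n_txq_desc=4096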


> Just as a curiosity, I see you have a configured MTU of 1500B on the
> physical interfaces. Is that the same MTU you're using inside the VM?
> And are you using the same configurations (including that 1500B MTU)
> when running your OvS-Kernel setup?
>
>
The MTU inside the VM is 1450. Is that OK for high throughput?



> Hope this helps,
>
>



> Tiago.
>
> [1]
>
> http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/#port-rx-queue-assigment-to-pmd-threads
>
> >
> > Tiago.
> >
> > > So I'd like to find some help from the community. Maybe I'm missing some
> > > configurations.
> > >
> > > Thanks.
> > >
> > >
> > > Here is the list of the environment and some configurations:
> > > # uname -r
> > > 3.10.0-862.11.6.el7.x86_64
> > > # rpm -qa|grep dpdk
> > > dpdk-17.11-11.el7.x86_64
> > > # rpm -qa|grep openvswitch
> > > openvswitch-2.9.0-3.el7.x86_64
> > > # ovs-vsctl list 

Re: [ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-11-30 Thread Lam, Tiago
On 30/11/2018 02:07, LIU Yulong wrote:
> Hi,
> 
> Thanks for the reply, please see my inline comments below.
> 
> 
> On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago wrote:
> 
> On 29/11/2018 08:24, LIU Yulong wrote:
> > Hi,
> >
> > We recently tested ovs-dpdk, but we met some bandwidth issue. The
> bandwidth
> > from VM to VM was not close to the physical NIC, it's about
> 4.3Gbps on a
> > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test can easily
> > reach 9.3Gbps. We enabled the virtio multiqueue for all guest VMs.
> In the
> > dpdk vhostuser guest, we noticed that the interrupts are
> centralized to
> > only one queue. But for no dpdk VM, interrupts can hash to all queues.
> > For those dpdk vhostuser VMs, we also noticed that the PMD usages were
> > also centralized to one no matter server(tx) or client(rx). And no
> matter
> > one PMD or multiple PMDs, this behavior always exists.
> >
> > Furthermore, my colleague added some systemtap hooks on the openvswitch
> > function, he found something interesting. The function
> > __netdev_dpdk_vhost_send will send all the packets to one
> virtionet-queue.
> > It seems that some algorithm/hash table/logic does not do the hash
> > very well.
> >
> 
> Hi,
> 
> When you say "no dpdk VMs", you mean that within your VM you're relying
> on the Kernel to get the packets, using virtio-net. And when you say
> "dpdk vhostuser guest", you mean you're using DPDK inside the VM to get
> the packets. Is this correct?
> 
> 
> Sorry for the inaccurate description. I'm really new to DPDK. 
> No DPDK inside VM, all these settings are for host only.
> (`host` means the hypervisor physical machine in the perspective of
> virtualization.
> On the other hand `guest` means the virtual machine.)
> "no dpdk VMs" means the host does not set up DPDK (ovs is working in the
> traditional way),
> the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?

Got it. Your "no dpdk VMs" really is referred to as OvS-Kernel, while
your "dpdk vhostuser guest" is referred to as OvS-DPDK.

> 
> If so, could you also tell us which DPDK app you're using inside of
> those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
> `--txq` args? Otherwise, how are you setting those in your app when
> initializing DPDK?
> 
> 
> Inside VM, there is no DPDK app, VM kernel also
> does not set any config related to DPDK. `iperf3` is the tool for
> bandwidth testing.
> 
> The information below is useful in telling us how you're setting your
> configurations in OvS, but we are still missing the configurations
> inside the VM.
> 
> This should help us in getting more information,
> 
> 
> Maybe you have noticed that, we only setup one PMD in the pasted
> configurations.
> But VM has 8 queues. Should the pmd quantity match the queues?

It shouldn't match the queues inside the VM per se. But in this case,
since you have configured 8 rx queues on your physical NICs as well, and
since you're looking for higher throughputs, you should increase that
number of PMDs and pin those rxqs - take a look at [1] on how to do
that. Later on, increasing the size of your queues could also help.
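
For illustration only, the two knobs involved are pmd-cpu-mask and the
per-Interface pmd-rxq-affinity; the mask and core IDs below are placeholders
rather than a recommendation for this particular host:

# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c
# ovs-vsctl set Interface nic-10G-1 \
    other_config:pmd-rxq-affinity="0:2,1:3,2:4,3:5"
# ovs-appctl dpif-netdev/pmd-rxq-show

0x3c enables PMD threads on cores 2-5, and the affinity string pins rx queues
0-3 of nic-10G-1 to those cores; see [1] below for the full syntax.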

Just as a curiosity, I see you have a configured MTU of 1500B on the
physical interfaces. Is that the same MTU you're using inside the VM?
And are you using the same configurations (including that 1500B MTU)
when running your OvS-Kernel setup?
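
If in doubt, the host-side MTUs can be checked directly (interface names taken
from this thread):

# ovs-vsctl get Interface nic-10G-1 mtu mtu_request
# ovs-vsctl get Interface vhuc8febeff-56 mtu mtu_request

and inside the guest with something like "ip link show dev eth0" (eth0 being a
placeholder name). Changing the OvS-DPDK side would then be a matter of, e.g.,
"ovs-vsctl set Interface nic-10G-1 mtu_request=1500".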

Hope this helps,

Tiago.

[1]
http://docs.openvswitch.org/en/latest/topics/dpdk/pmd/#port-rx-queue-assigment-to-pmd-threads

> 
> Tiago.
> 
> > So I'd like to find some help from the community. Maybe I'm missing some
> > configurations.
> >
> > Thanks.
> >
> >
> > Here is the list of the environment and some configurations:
> > # uname -r
> > 3.10.0-862.11.6.el7.x86_64
> > # rpm -qa|grep dpdk
> > dpdk-17.11-11.el7.x86_64
> > # rpm -qa|grep openvswitch
> > openvswitch-2.9.0-3.el7.x86_64
> > # ovs-vsctl list open_vswitch
> > _uuid               : a6a3d9eb-28a8-4bf0-a8b4-94577b5ffe5e
> > bridges             : [531e4bea-ce12-402a-8a07-7074c31b978e,
> > 5c1675e2-5408-4c1f-88bc-6d9c9b932d47]
> > cur_cfg             : 1305
> > datapath_types      : [netdev, system]
> > db_version          : "7.15.1"
> > external_ids        : {hostname="cq01-compute-10e112e5e140",
> > rundir="/var/run/openvswitch",
> > system-id="e2cc84fe-a3c8-455f-8c64-260741c141ee"}
> > iface_types         : [dpdk, dpdkr, dpdkvhostuser,
> dpdkvhostuserclient,
> > geneve, gre, internal, lisp, patch, stt, system, tap, vxlan]
> > manager_options     : [43803994-272b-49cb-accc-ab672d1eefc8]
> > next_cfg            : 1305
> > other_config        : {dpdk-init="true", dpdk-lcore-mask="0x1",
> > 

[ovs-discuss] Fwd: [ovs-dpdk] bandwidth issue of vhostuserclient virtio ovs-dpdk

2018-11-29 Thread LIU Yulong
Hi,

Thanks for the reply, please see my inline comments below.


On Thu, Nov 29, 2018 at 6:00 PM Lam, Tiago  wrote:

> On 29/11/2018 08:24, LIU Yulong wrote:
> > Hi,
> >
> > We recently tested ovs-dpdk, but we met some bandwidth issue. The
> bandwidth
> > from VM to VM was not close to the physical NIC, it's about 4.3Gbps on a
> > 10Gbps NIC. For no dpdk (virtio-net) VMs, the iperf3 test can easily
> > reach 9.3Gbps. We enabled the virtio multiqueue for all guest VMs. In the
> > dpdk vhostuser guest, we noticed that the interrupts are centralized to
> > only one queue. But for no dpdk VM, interrupts can hash to all queues.
> > For those dpdk vhostuser VMs, we also noticed that the PMD usages were
> > also centralized to one no matter server(tx) or client(rx). And no matter
> > one PMD or multiple PMDs, this behavior always exists.
> >
> > Furthermore, my colleague added some systemtap hooks on the openvswitch
> > function, he found something interesting. The function
> > __netdev_dpdk_vhost_send will send all the packets to one
> virtionet-queue.
> > It seems that some algorithm/hash table/logic does not do the hash
> > very well.
> >
>
> Hi,
>
> When you say "no dpdk VMs", you mean that within your VM you're relying
> on the Kernel to get the packets, using virtio-net. And when you say
> "dpdk vhostuser guest", you mean you're using DPDK inside the VM to get
> the packets. Is this correct?


Sorry for the inaccurate description. I'm really new to DPDK.
No DPDK inside VM, all these settings are for host only.
(`host` means the hypervisor physical machine in the perspective of
virtualization.
On the other hand `guest` means the virtual machine.)
"no dpdk VMs" means the host does not set up DPDK (ovs is working in the
traditional way),
the VMs were booted on that. Maybe a new name `VMs-on-NO-DPDK-host`?

If so, could you also tell us which DPDK app you're using inside of
> those VMs? Is it testpmd? If so, how are you setting the `--rxq` and
> `--txq` args? Otherwise, how are you setting those in your app when
> initializing DPDK?
>

Inside the VM there is no DPDK app, and the VM kernel does not set any config
related to DPDK either. `iperf3` is the tool for bandwidth testing.
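
Worth noting: a single iperf3 stream is a single flow, and a single flow will
typically hash to a single queue, so a parallel run may be more representative
when chasing aggregate throughput. The address below is just a placeholder:

# iperf3 -s                              # on the receiving VM
# iperf3 -c 192.168.0.10 -P 8 -t 60      # on the sending VM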

The information below is useful in telling us how you're setting your
> configurations in OvS, but we are still missing the configurations
> inside the VM.
>
> This should help us in getting more information,
>
>
Maybe you have noticed that, we only setup one PMD in the pasted
configurations.
But VM has 8 queues. Should the pmd quantity match the queues?

Tiago.
>
> > So I'd like to find some help from the community. Maybe I'm missing some
> > configurations.
> >
> > Thanks.
> >
> >
> > Here is the list of the environment and some configurations:
> > # uname -r
> > 3.10.0-862.11.6.el7.x86_64
> > # rpm -qa|grep dpdk
> > dpdk-17.11-11.el7.x86_64
> > # rpm -qa|grep openvswitch
> > openvswitch-2.9.0-3.el7.x86_64
> > # ovs-vsctl list open_vswitch
> > _uuid   : a6a3d9eb-28a8-4bf0-a8b4-94577b5ffe5e
> > bridges : [531e4bea-ce12-402a-8a07-7074c31b978e,
> > 5c1675e2-5408-4c1f-88bc-6d9c9b932d47]
> > cur_cfg : 1305
> > datapath_types  : [netdev, system]
> > db_version  : "7.15.1"
> > external_ids: {hostname="cq01-compute-10e112e5e140",
> > rundir="/var/run/openvswitch",
> > system-id="e2cc84fe-a3c8-455f-8c64-260741c141ee"}
> > iface_types : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient,
> > geneve, gre, internal, lisp, patch, stt, system, tap, vxlan]
> > manager_options : [43803994-272b-49cb-accc-ab672d1eefc8]
> > next_cfg: 1305
> > other_config: {dpdk-init="true", dpdk-lcore-mask="0x1",
> > dpdk-socket-mem="1024,1024", pmd-cpu-mask="0x10",
> > vhost-iommu-support="true"}
> > ovs_version : "2.9.0"
> > ssl : []
> > statistics  : {}
> > system_type : centos
> > system_version  : "7"
> > # lsmod |grep vfio
> > vfio_pci   41312  2
> > vfio_iommu_type1   22300  1
> > vfio   32695  7 vfio_iommu_type1,vfio_pci
> > irqbypass  13503  23 kvm,vfio_pci
> >
> > # ovs-appctl dpif/show
> > netdev@ovs-netdev: hit:759366335 missed:754283
> > br-ex:
> > bond1108 4/6: (tap)
> > br-ex 65534/3: (tap)
> > nic-10G-1 5/4: (dpdk: configured_rx_queues=8,
> > configured_rxq_descriptors=2048, configured_tx_queues=2,
> > configured_txq_descriptors=2048, mtu=1500, requested_rx_queues=8,
> > requested_rxq_descriptors=2048, requested_tx_queues=2,
> > requested_txq_descriptors=2048, rx_csum_offload=true)
> > nic-10G-2 6/5: (dpdk: configured_rx_queues=8,
> > configured_rxq_descriptors=2048, configured_tx_queues=2,
> > configured_txq_descriptors=2048, mtu=1500, requested_rx_queues=8,
> > requested_rxq_descriptors=2048, requested_tx_queues=2,
> > requested_txq_descriptors=2048, rx_csum_offload=true)
> > phy-br-ex 3/none: (patch: peer=int-br-ex)
> > br-int:
> > br-int 65534/2: (tap)
> > int-br-ex 1/none: (patch: