Hi, Flavio and Ilya

After checking further, I'm quite sure this is not caused by a bandwidth limit: the 
issue is still there after I completely removed the bandwidth limit ("tc qdisc show" 
confirms it is gone).
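
For reference, this is roughly how I checked it (the device names below are just 
placeholders for the VM tap device and the physical NIC on my compute node):

$ tc qdisc show dev <vm-tap-device>    # only the default qdisc left, no rate-limiting qdisc
$ tc qdisc show dev <physical-nic>     # same on the physical side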

The issue becomes more serious when we start more VMs in a subnet. We do see 
datapath actions that output to many ports; here is the output of "sudo ovs-appctl 
dpif/dump-flows br-int" while iperf3 is running:

recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),eth_type(0x0800),ipv4(tos=0/0x3,frag=no),
 packets:11012944, bytes:726983412, used:0.000s, flags:SP., 
actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10.3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18,19
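
To map the numeric output ports in that flow back to interfaces, I used the datapath 
port listing; the output below is only a sketch of the format with placeholder 
interface names, not my real mapping:

$ sudo ovs-appctl dpctl/show
system@ovs-system:
        ...
        port 12: tap<iperf3-client-vm>
        port 9: tap<idle-vm>
        ...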

The number of output ports grows linearly with the number of VMs.

Obviously, the MAC FDBs OVS has learned do not include this destination MAC, which 
is the MAC of the iperf3 server VM on another compute node:

$ sudo ovs-vsctl show | grep Bridge
    Bridge br-floating
    Bridge br-int
    Bridge "br-bond1"
    Bridge br-tun
$ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51
$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51
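
For completeness, these are the bridge MAC-table knobs I am aware of that could also 
lead to flooding if the table is too small or entries age out too fast (the defaults 
are 300 s aging and 2048 entries, as far as I know; the values below are only 
examples, not something we have set):

$ sudo ovs-vsctl list Bridge br-int | grep other_config
$ sudo ovs-vsctl set Bridge br-int other_config:mac-table-size=8192   # example: enlarge the MAC table
$ sudo ovs-vsctl set Bridge br-int other_config:mac-aging-time=3600   # example: lengthen the aging time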

This is indeed done by the default "NORMAL" action in br-int. My questions are: why 
does the "NORMAL" action output the packets to so many other ports? Why can't it 
learn the MACs of VMs on other compute nodes over VXLAN? Is there a good way to fix it?
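
Just to confirm I understood Ilya's earlier suggestion: as a workaround (ignoring for 
a moment that Neutron normally owns these flows), would pinning the destination MAC 
to the right port look roughly like the sketch below? The priority and the output 
port are placeholders:

$ sudo ovs-ofctl add-flow br-int "priority=100,dl_dst=fa:16:3e:49:26:51,actions=output:<patch-port-to-br-tun>"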

By the way, all the other VMs in the same subnet as the iperf3 client VM receive the 
iperf3 client's packets ("ifconfig eth0" inside these VMs shows the RX packet counter 
increasing very quickly, and tcpdump sees the packets). The destination MAC of these 
packets is fa:16:3e:49:26:51, not the broadcast MAC, and the VM interfaces are not in 
promiscuous mode.
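
This is roughly how I confirmed it inside one of the idle VMs (eth0 is the guest 
interface name in my images):

$ sudo tcpdump -ni eth0 'ether dst fa:16:3e:49:26:51 and udp'
# the iperf3 UDP stream shows up here even though this VM is not the destination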

Looking forward to your guidance. Is this the default behavior of OVS? Can you 
explain this "NORMAL" action in more detail?

-----Original Message-----
From: Flavio Leitner [mailto:f...@sysclose.org] 
Sent: February 13, 2020 22:21
To: Ilya Maximets <i.maxim...@ovn.org>
Cc: Yi Yang (杨燚)-Cloud Service Group <yangy...@inspur.com>; ovs-disc...@openvswitch.org; 
ovs-dev@openvswitch.org
Subject: Re: Re: [ovs-dev] OVS performance issue: why small udp packet pps 
performance between VMs is highly related with number of ovs ports and number 
of VMs?

On Thu, Feb 13, 2020 at 03:07:33PM +0100, Ilya Maximets wrote:
> On 2/13/20 2:52 PM, Yi Yang (杨燚)-Cloud Service Group wrote:
> > Thanks Ilya. The iperf3 UDP test should be single-direction; the source and 
> > destination IP addresses are the two VMs' IPs, and the UDP bandwidth would be 0 
> > if they were wrong. But the UDP loss rate is obviously 0, so it isn't the case 
> > you're describing. Do we have a way to disable MAC learning or MAC broadcast?
> 
> NORMAL action acts like an L2 learning switch.  If you don't want to 
> use MAC learning, remove flow with NORMAL action and add direct 
> forwarding flow like output:<desired port>.  But I don't think that 
> you want to do that in OpenStack setup.

Also iperf3 establishes the control connection which uses TCP in both 
directions. So, in theory, the FDB should be updated.

> > Is NORMAL action or MAC learning slow path process? If so, ovs-vswitchd 
> > daemon should have high cpu utilization.
> 
> It's not a slow path, so there will be no cpu usage by ovs-vswitchd 
> userspace process.  To confirm that you're flooding packets, you may 
> dump installed datapath flows with the following command:
> 
>     ovs-appctl dpctl/dump-flows
> 
> In case of flood, you will see datapath flow with big number of output 
> ports like this:
> 
>     <...>  actions:<port #1>,<port #2>,<port #3>...

I'd suggest to look at the fdb: ovs-appctl fdb/show <br> and port stats to see 
if there is traffic moving as well.
Maybe it's not your UDP test packet, but another unrelated traffic in the 
network.

HTH,
fbl


> 
> > 
> > -----Original Message-----
> > From: Ilya Maximets [mailto:i.maxim...@ovn.org]
> > Sent: February 13, 2020 21:23
> > To: Flavio Leitner <f...@sysclose.org>; Yi Yang (杨燚)-Cloud Service Group 
> > <yangy...@inspur.com>
> > Cc: ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org; Ilya 
> > Maximets <i.maxim...@ovn.org>
> > Subject: Re: [ovs-dev] OVS performance issue: why small udp packet pps 
> > performance between VMs is highly related with number of ovs ports and 
> > number of VMs?
> > 
> > On 2/13/20 12:48 PM, Flavio Leitner wrote:
> >> On Thu, Feb 13, 2020 at 09:18:38AM +0000, Yi Yang (杨燚)-Cloud Service Group wrote:
> >>> Hi, all
> >>>
> >>> We find OVS has a serious performance issue. We launch only one VM 
> >>> on each compute node and run an iperf small-UDP pps performance test 
> >>> between these two VMs; we see about 180000 pps (packets per 
> >>> second, -l 16), but:
> >>>
> >>> 1) If we add 100 veth ports to the br-int bridge, the pps 
> >>> performance drops to about 50000 pps.
> >>> 2) If we launch one more VM on every compute node, without running 
> >>> any workload in it, the pps performance drops to about 90000 pps (note: 
> >>> no extra veth ports in this test).
> >>> 3) If we launch two more VMs on every compute node (3 VMs per 
> >>> compute node in total), without running any workload, the pps 
> >>> performance drops to about 50000 pps (note: no extra veth ports in 
> >>> this test).
> >>>
> >>> Can anybody help explain why this is so? Is there any known way to 
> >>> optimize this? I really think OVS performance is bad (at least we can 
> >>> draw that conclusion from our test results); I don't want to 
> >>> defame OVS ☺
> >>>
> >>> BTW, we used the OVS kernel datapath and vhost. We can see that every port 
> >>> has a vhost kernel thread; it runs at 100% CPU utilization when we run 
> >>> iperf in the VM, but for the idle VMs the corresponding vhost thread still 
> >>> has about 30% CPU utilization, and I don't understand why.
> >>>
> >>> In addition, we find UDP performance is also very bad for small UDP 
> >>> packets on the physical NIC, but it can reach 260000 pps for -l 80, which 
> >>> is enough to cover the VXLAN header (8 bytes) + inner Ethernet header (14) + 
> >>> IP/UDP headers (28) + 16 = 66 bytes. Even considering the overhead the OVS 
> >>> bridge introduces, pps performance between VMs should reach at least 200000 
> >>> pps; the other VMs and ports shouldn't hurt it so much, 
> >>> because they are idle, with no workload at all.
> >>
> >> What do you have in the flow table?  It sounds like the traffic is 
> >> being broadcast to all ports. Check the FDB to see if OvS is 
> >> learning the mac addresses.
> >>
> >> It's been a while since I don't run performance tests with kernel 
> >> datapath, but it should be no different than Linux bridge with just 
> >> action NORMAL in the flow table.
> >>
> > 
> > I agree that if your performance heavily depends on the number of ports 
> > than you're most likely just flooding all the packets to all the ports.  
> > Since you're using UDP traffic, please, be sure that you're sending some 
> > packets in backward direction, so OVS and all other switches (if any) will 
> > learn/not forget to which port packets should be sent.  Also, check if your 
> > IP addresses are correct.  If for some reason it's not possible for OVS to 
> > learn MAC addresses correctly, avoid using action:NORMAL.
> > 
> > Best regards, Ilya Maximets.
> > 
> 

--
fbl
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
