> Hi Flavio, thanks for reporting this issue.
> 
> Saturday, May 19, 2018 1:47 AM, Flavio Leitner:
> > Subject: Re: [ovs-dev] [PATCH v9 0/7] OVS-DPDK flow offload with rte_flow
> >
> >
> >
> > Hello,
> >
> > I looked at the patchset (v9) and I found no obvious problems, but I am
> > missing some instrumentation to understand what is going on. For example,
> > how many flows are offloaded, or how many per second, etc. We can
> > definitely work on that as a follow-up.
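> >
> > (Sketching what I have in mind, with a made-up counter name: if the
> > offload path bumped a coverage counter, say "flow_offload", then
> >
> >    ovs-appctl coverage/show | grep offload
> >
> > would already give totals plus 5s/min/hour rates for free, since
> > coverage/show reports those averages.)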
> >
> > I have an MLX5 (16.20.1010) connected to a traffic generator.
> > The results are unexpected and I don't know why yet. I will continue
> > working on it next week.
> >
> > The flow is pretty simple, just echo the packet back:
> >
> >    ovs-ofctl add-flow ovsbr0 in_port=10,action=output:in_port
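> >
> > (For completeness: HW offloading in the runs below is toggled with
> >
> >    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
> >
> > and, if I read the patchset docs right, vswitchd needs a restart after
> > flipping it.)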
> >
> > This is the result without HW offloading enabled:
> > Partial:  14619675.00 pps  7485273570.00 bps
> > Partial:  14652351.00 pps  7502003940.00 bps
> > Partial:  14655548.00 pps  7503640570.00 bps
> > Partial:  14679556.00 pps  7515932630.00 bps
> > Partial:  14681188.00 pps  7516768670.00 bps
> > Partial:  14597427.00 pps  7473882390.00 bps
> > Partial:  14712617.00 pps  7532860090.00 bps
> >
> > pmd thread numa_id 0 core_id 2:
> >         packets received: 53859055
> >         packet recirculations: 0
> >         avg. datapath passes per packet: 1.00
> >         emc hits: 53859055
> >         megaflow hits: 0
> >         avg. subtable lookups per megaflow hit: 0.00
> >         miss with success upcall: 0
> >         miss with failed upcall: 0
> >         avg. packets per output batch: 28.20
> >         idle cycles: 0 (0.00%)
> >         processing cycles: 12499399115 (100.00%)
> >         avg cycles per packet: 232.08 (12499399115/53859055)
> >         avg processing cycles per packet: 232.08 (12499399115/53859055)
> >
> >
> > Based on the stats, it seems 14.7 Mpps is the maximum a core can do.
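> >
> > (Sanity check: 14.7e6 pkts/s * 232.08 cycles/pkt is roughly 3.4e9
> > cycles/s, i.e. one core at ~3.4 GHz running flat out, so the PMD stats
> > and the generator numbers agree.)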
> >
> > This is the result with HW offloading enabled:
> >
> > Partial:  10713500.00 pps  5485312330.00 bps
> > Partial:  10672185.00 pps  5464158240.00 bps
> > Partial:  10747756.00 pps  5502850960.00 bps
> > Partial:  10713404.00 pps  5485267400.00 bps
> >
> >
> > pmd thread numa_id 0 core_id 2:
> >         packets received: 25902718
> >         packet recirculations: 0
> >         avg. datapath passes per packet: 1.00
> >         emc hits: 25902697
> >         megaflow hits: 0
> >         avg. subtable lookups per megaflow hit: 0.00
> >         miss with success upcall: 0
> >         miss with failed upcall: 0
> >         avg. packets per output batch: 28.11
> >         idle cycles: 0 (0.00%)
> >         processing cycles: 12138284463 (100.00%)
> >         avg cycles per packet: 468.61 (12138284463/25902718)
> >         avg processing cycles per packet: 468.61 (12138284463/25902718)
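> >
> > (Worth noting: the per-packet cost roughly doubled with offloading on,
> > 232.08 -> 468.61 cycles, so the slowdown shows up in the PMD itself and
> > not only at the generator.)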
> >
> > 2018-05-18T22:34:57.865Z|00001|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid caaf720e-5dfe-4879-adb9-155bd92f9b40 was not found
> >
> > 2018-05-18T22:35:02.920Z|00002|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid c75ae6c5-1d14-40ce-b4c7-6d5001a4584c was not found
> >
> 
> Everything up to this point makes sense. The first packet should trigger
> the flow insertion to HW.
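> 
> (I assume the "Mark id for ufid ... was not found" WARNs at this stage
> are benign; there was simply no mark associated with those flows yet.)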
> 
> > 2018-05-18T22:35:05.160Z|00105|memory|INFO|109700 kB peak resident set size after 10.4 seconds
> >
> > 2018-05-18T22:35:05.160Z|00106|memory|INFO|handlers:1 ports:3 revalidators:1 rules:5 udpif keys:2
> >
> > 2018-05-18T22:35:21.910Z|00003|dpif_netdev(dp_netdev_flow_8)|WARN|Mark id for ufid 73bdddc9-b12f-4007-9f12-b66b4bc1893e was not found
> >
> > 2018-05-18T22:35:21.924Z|00004|netdev_dpdk(dp_netdev_flow_8)|ERR|rte flow creat error: 2 : message : flow rule creation failure
> 
> This error is unexpected.
> I was able to reproduce the issue with the latest stable 17.11. It looks
> like either flow modification is broken on the DPDK side, or it is an
> OVS-DPDK issue that popped up due to some commit on stable DPDK.
> I will investigate it further and provide a fix.
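> 
> If it really is a stable-branch regression, a bisect between the two tags
> should pinpoint the commit, e.g.:
> 
>    git bisect start v17.11.2 v17.11
> 
> rebuilding DPDK and relinking OVS at each step.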

Currently we recommend DPDK 17.11.2, as it includes a number of CVE fixes for 
vhost user. 

If the bug is specific to the DPDK side, you could list it as a known issue 
under "Limitations" in Documentation/intro/install/dpdk.rst until it is fixed 
in a future release of 17.11.
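
Something short under that heading would do, e.g. (wording is only a 
sketch):

  - Flow modification with rte_flow hardware offload is known to fail on 
    stable DPDK 17.11 releases with MLX5 NICs; adding the OpenFlow rule 
    before traffic starts avoids the modify path.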

Ian
> 
> For the sake of testing, can you try with 17.11 instead of stable?
> With it, I found it works correctly:
> 
> 2018-05-21T14:20:32.954Z|00120|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.9.90
> 2018-05-21T14:20:33.906Z|00121|connmgr|INFO|br0<->unix#3: 1 flow_mods in the last 0 s (1 adds)
> 2018-05-21T14:20:33.923Z|00002|dpif_netdev(dp_netdev_flow_4)|DBG|succeed to modify netdev flow
> 
> BTW - if you want to avoid the flow modification, try inserting the rule
> before you start the traffic on the traffic gen.
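> For example:
> 
>    ovs-ofctl add-flow ovsbr0 in_port=10,action=output:in_port
>    # ...and only then start the generator
> 
> That way the datapath flow gets a clean flow_add instead of a modify.
> 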
> The output below is the expected one (in both the LTS and 17.11 versions):
> 
> 
> 2018-05-21T13:19:18.055Z|00120|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.9.90
> 2018-05-21T13:19:23.560Z|00121|memory|INFO|294204 kB peak resident set size after 10.4 seconds
> 2018-05-21T13:19:23.560Z|00122|memory|INFO|handlers:17 ports:2 revalidators:7 rules:5
> 2018-05-21T13:19:25.726Z|00123|connmgr|INFO|br0<->unix#3: 1 flow_mods in the last 0 s (1 adds)
> 2018-05-21T13:19:38.378Z|00001|dpif_netdev(pmd30)|DBG|ovs-netdev: miss upcall:
> skb_priority(0),skb_mark(0),ct_state(0),ct_zone(0),ct_mark(0),ct_label(0),recirc_id(0),dp_hash(0),in_port(1),packet_type(ns=0,id=0),eth(src=24:8a:07:91:8d:68,dst=02:00:00:00:00:00),eth_type(0x0800),ipv4(src=192.168.0.1,dst=192.168.0.2,proto=17,tos=0,ttl=64,frag=no),udp(src=1024,dst=1024)
> udp,vlan_tci=0x0000,dl_src=24:8a:07:91:8d:68,dl_dst=02:00:00:00:00:00,nw_src=192.168.0.1,nw_dst=192.168.0.2,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=1024,tp_dst=1024 udp_csum:0
> 2018-05-21T13:19:38.379Z|00002|dpif_netdev(pmd30)|DBG|Creating dpcls 0x7f29fc005310 for in_port 1
> 2018-05-21T13:19:38.379Z|00003|dpif_netdev(pmd30)|DBG|Creating 1. subtable 0x7f29fc006150 for in_port 1
> 2018-05-21T13:19:38.379Z|00001|dpif_netdev(dp_netdev_flow_4)|WARN|Mark id for ufid 3bad389c-507e-4b02-a1cf-c0e084703d5f was not found
> 2018-05-21T13:19:38.379Z|00004|dpif_netdev(pmd30)|DBG|flow_add: ufid:dd9f6d14-f085-4706-a1ac-19d458109a79 recirc_id(0),in_port(1),packet_type(ns=0,id=0),eth_type(0x0800),ipv4(frag=no), actions:1
> 2018-05-21T13:19:38.379Z|00005|dpif_netdev(pmd30)|DBG|flow match: recirc_id=0,eth,ip,vlan_tci=0x0000,nw_frag=no
> 2018-05-21T13:19:38.387Z|00002|dpif_netdev(dp_netdev_flow_4)|DBG|Associated dp_netdev flow 0x7f29fc005a90 with mark 0
> 2018-05-21T13:19:38.387Z|00003|dpif_netdev(dp_netdev_flow_4)|DBG|succeed to add netdev flow
> 
> 
> >
> >
> > Looks like offloading didn't work, and now the throughput rate is
> > lower, which is not expected either.
> >
> > I plan to continue on this, and I would appreciate any ideas about
> > what is going on.
> >
> > Thanks,
> > fbl

_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
