On Tue, May 22, 2018 at 10:27:07AM -0300, Flavio Leitner wrote:
> On Mon, May 21, 2018 at 02:25:56PM +0000, Shahaf Shuler wrote:
> > Hi Flavio, thanks for reporting this issue.
> > 
> > Saturday, May 19, 2018 1:47 AM, Flavio Leitner:
> > > Subject: Re: [ovs-dev] [PATCH v9 0/7] OVS-DPDK flow offload with rte_flow
> > > 
> > > 
> > > 
> > > Hello,
> > > 
> > > I looked at the patchset (v9) and found no obvious problems, but I'm
> > > missing some instrumentation to understand what is going on: for
> > > example, how many flows are offloaded, or how many per second. We can
> > > definitely work on that as a follow-up.
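> > >
> > > Even a coverage counter would do; an untested sketch against the OVS
> > > tree (the counter name is made up):
> > >
> > >     #include "coverage.h"
> > >
> > >     COVERAGE_DEFINE(flow_mark_offloaded);   /* hypothetical name */
> > >
> > >     /* Bump once per flow successfully installed in HW; totals and
> > >      * per-second rates then show up in "ovs-appctl coverage/show". */
> > >     static void
> > >     note_hw_offload(void)
> > >     {
> > >         COVERAGE_INC(flow_mark_offloaded);
> > >     }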
> > > 
> > > I have an MLX5 (16.20.1010) connected to a traffic generator.
> > > The results are unexpected and I don't know why yet. I will continue
> > > with it next week.
> > > 
> > > The flow is pretty simple, just echo the packet back:
> > > 
> > >    ovs-ofctl add-flow ovsbr0 in_port=10,action=output:in_port
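> > >
> > > HW offloading in the runs below is toggled with the usual knob:
> > >
> > >    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true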
> > > 
> > > This is the result without HW offloading enabled:
> > > Partial:  14619675.00 pps  7485273570.00 bps
> > > Partial:  14652351.00 pps  7502003940.00 bps
> > > Partial:  14655548.00 pps  7503640570.00 bps
> > > Partial:  14679556.00 pps  7515932630.00 bps
> > > Partial:  14681188.00 pps  7516768670.00 bps
> > > Partial:  14597427.00 pps  7473882390.00 bps
> > > Partial:  14712617.00 pps  7532860090.00 bps
> > > 
> > > pmd thread numa_id 0 core_id 2:
> > >         packets received: 53859055
> > >         packet recirculations: 0
> > >         avg. datapath passes per packet: 1.00
> > >         emc hits: 53859055
> > >         megaflow hits: 0
> > >         avg. subtable lookups per megaflow hit: 0.00
> > >         miss with success upcall: 0
> > >         miss with failed upcall: 0
> > >         avg. packets per output batch: 28.20
> > >         idle cycles: 0 (0.00%)
> > >         processing cycles: 12499399115 (100.00%)
> > >         avg cycles per packet: 232.08 (12499399115/53859055)
> > >         avg processing cycles per packet: 232.08 (12499399115/53859055)
> > > 
> > > 
> > > Based on the stats, it seems 14.7Mpps is the maximum a single core can
> > > do here (idle cycles are at 0%, so the core is saturated).
> > > 
> > > This is the result with HW Offloading enabled:
> > > 
> > > Partial:  10713500.00 pps  5485312330.00 bps
> > > Partial:  10672185.00 pps  5464158240.00 bps
> > > Partial:  10747756.00 pps  5502850960.00 bps
> > > Partial:  10713404.00 pps  5485267400.00 bps
> > > 
> > > 
> > > pmd thread numa_id 0 core_id 2:
> > >         packets received: 25902718
> > >         packet recirculations: 0
> > >         avg. datapath passes per packet: 1.00
> > >         emc hits: 25902697
> > >         megaflow hits: 0
> > >         avg. subtable lookups per megaflow hit: 0.00
> > >         miss with success upcall: 0
> > >         miss with failed upcall: 0
> > >         avg. packets per output batch: 28.11
> > >         idle cycles: 0 (0.00%)
> > >         processing cycles: 12138284463 (100.00%)
> > >         avg cycles per packet: 468.61 (12138284463/25902718)
> > >         avg processing cycles per packet: 468.61 (12138284463/25902718)
> > > 
> > > 2018-05-18T22:34:57.865Z|00001|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
> > > id for ufid caaf720e-5dfe-4879-adb9-155bd92f9b40 was not found
> > >
> > > 2018-05-18T22:35:02.920Z|00002|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
> > > id for ufid c75ae6c5-1d14-40ce-b4c7-6d5001a4584c was not found
> > > 
> > 
> > Up till now this makes sense. The first packet should trigger the flow
> > insertion into HW.
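> >
> > Roughly, per flow, something like this happens (a sketch from memory,
> > not the exact patchset code; the real rule matches the extracted flow
> > instead of bare eth, and also appends an RSS action, omitted here):
> >
> >     #include <rte_flow.h>
> >
> >     /* Install a HW rule that tags matching packets with a mark id,
> >      * so the PMD can map them back to the datapath flow without a
> >      * full classification. */
> >     static struct rte_flow *
> >     offload_flow(uint16_t port_id, uint32_t mark_id)
> >     {
> >         struct rte_flow_attr attr = { .ingress = 1 };
> >         struct rte_flow_item pattern[] = {
> >             { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* placeholder match */
> >             { .type = RTE_FLOW_ITEM_TYPE_END },
> >         };
> >         struct rte_flow_action_mark mark = { .id = mark_id };
> >         struct rte_flow_action actions[] = {
> >             { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
> >             { .type = RTE_FLOW_ACTION_TYPE_END },
> >         };
> >         struct rte_flow_error error;
> >
> >         return rte_flow_create(port_id, &attr, pattern, actions, &error);
> >     }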
> > 
> > > 2018-05-18T22:35:05.160Z|00105|memory|INFO|109700 kB peak resident set
> > > size after 10.4 seconds
> > > 
> > > 2018-05-18T22:35:05.160Z|00106|memory|INFO|handlers:1 ports:3
> > > revalidators:1 rules:5 udpif keys:2
> > > 
> > > 2018-05-18T22:35:21.910Z|00003|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
> > > id for ufid 73bdddc9-b12f-4007-9f12-b66b4bc1893e was not found
> > >
> > > 2018-05-18T22:35:21.924Z|00004|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
> > > flow creat error: 2 : message : flow rule creation failure
> > 
> > This error is unexpected.
> > I was able to reproduce the issue with the latest 17.11 stable. It looks
> > like either flow modification is broken on the DPDK side, or it is an
> > OVS-DPDK issue that popped up due to some commit in stable DPDK.
> > I will investigate further and provide a fix.
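> >
> > For context, modification on the OVS side is basically destroy and
> > re-create (a sketch, not the literal patchset code):
> >
> >     #include <rte_flow.h>
> >
> >     /* Replace an offloaded rule: tear down the old rte_flow, then
> >      * create the new one.  If the create fails, we end up on the
> >      * "flow rule creation failure" path seen in the logs. */
> >     static int
> >     modify_offloaded(uint16_t port_id, struct rte_flow **flowp,
> >                      const struct rte_flow_attr *attr,
> >                      const struct rte_flow_item pattern[],
> >                      const struct rte_flow_action actions[])
> >     {
> >         struct rte_flow_error error;
> >
> >         if (*flowp && rte_flow_destroy(port_id, *flowp, &error)) {
> >             return -1;    /* bail out; the old rule is still in place */
> >         }
> >         *flowp = rte_flow_create(port_id, attr, pattern, actions, &error);
> >         return *flowp ? 0 : -1;
> >     }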
> 
> Cool, thanks.

I used 17.11 with the patch that identifies the card by PCI address only:
commit 547009bd967ec4017aa7d24593a2e9853ba690ad (HEAD -> v17.11)
Author: Yuanhan Liu <y...@fridaylinux.org>
Date:   Mon Jan 22 17:30:06 2018 +0800

    net/mlx5: use PCI address as port name

I have OVS master plus the v9 patchset, with the following reverted:
   Revert "netdev-dpdk: Free mempool only when no in-use mbufs."
   This reverts commit 91fccdad72a253a3892dcb3c4453a31833851bb7.

Results didn't change much:
pmd thread numa_id 0 core_id 2:
        packets received: 22110691
        packet recirculations: 0
        avg. datapath passes per packet: 1.00
        emc hits: 22110690
        megaflow hits: 1
        avg. subtable lookups per megaflow hit: 1.00
        miss with success upcall: 0
        miss with failed upcall: 0
        avg. packets per output batch: 28.24
        idle cycles: 0 (0.00%)
        processing cycles: 6929337967 (100.00%)
        avg cycles per packet: 313.39 (6929337967/22110691)
        avg processing cycles per packet: 313.39 (6929337967/22110691)


# dmesg
[  373.169791] mlx5_core 0000:06:00.1:
check_conflicting_ftes:1451:(pid 1320): FTE flow tag 65536 already
exists with different flow tag 131072
[  391.944133] mlx5_core 0000:06:00.1:
check_conflicting_ftes:1451:(pid 1320): FTE flow tag 65536 already
exists with different flow tag 196608
[  455.952162] mlx5_core 0000:06:00.1:
check_conflicting_ftes:1451:(pid 1320): FTE flow tag 65536 already
exists with different flow tag 196608
[  498.808709] mlx5_core 0000:06:00.0:
check_conflicting_ftes:1451:(pid 1320): FTE flow tag 131072 already
exists with different flow tag 196608
[  498.823007] mlx5_core 0000:06:00.1:
check_conflicting_ftes:1451:(pid 1320): FTE flow tag 65536 already
exists with different flow tag 196608
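The dmesg lines read like the same match being re-installed with a
different MARK value while the old rule is still there.  On the OVS
side all we get is the rte_flow_error dump; presumably something like
this (a sketch, not the exact code in the patchset):

    #include <rte_flow.h>
    #include "openvswitch/vlog.h"

    VLOG_DEFINE_THIS_MODULE(offload_sketch);

    /* Where the "rte flow creat error: 2 : message : ..." lines come
     * from: rte_flow_create() failed, so we print the rte_flow_error
     * filled in by the PMD. */
    static struct rte_flow *
    create_and_log(uint16_t port_id, const struct rte_flow_attr *attr,
                   const struct rte_flow_item pattern[],
                   const struct rte_flow_action actions[])
    {
        struct rte_flow_error error;
        struct rte_flow *flow = rte_flow_create(port_id, attr, pattern,
                                                actions, &error);
        if (!flow) {
            VLOG_ERR("rte flow creat error: %u : message : %s",
                     (unsigned int) error.type,
                     error.message ? error.message : "(no stated reason)");
        }
        return flow;
    }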

2018-05-22T20:36:27.585Z|00103|memory|INFO|95948 kB peak resident set
size after 10.4 seconds
2018-05-22T20:36:27.585Z|00104|memory|INFO|handlers:1 ports:3
revalidators:1 rules:5 udpif keys:1
2018-05-22T20:36:31.830Z|00002|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 466bafc3-5f75-4da6-a28c-ae385b93f279 was not found
2018-05-22T20:36:31.844Z|00003|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
flow creat error: 2 : message : flow rule creation failure
2018-05-22T20:36:32.640Z|00004|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 1ba87b97-c49f-4fab-bb6d-edf896b77eb5 was not found
2018-05-22T20:36:50.604Z|00005|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 8b2fbed6-94fd-419e-bf94-d936dbee4619 was not found
2018-05-22T20:36:50.618Z|00006|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
flow creat error: 2 : message : flow rule creation failure
2018-05-22T20:37:54.613Z|00007|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 8b2fbed6-94fd-419e-bf94-d936dbee4619 was not found
2018-05-22T20:37:54.627Z|00008|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
flow creat error: 2 : message : flow rule creation failure
2018-05-22T20:38:37.469Z|00009|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 15163171-b084-4fb3-a061-5e3b2481741b was not found
2018-05-22T20:38:37.484Z|00010|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
flow creat error: 2 : message : flow rule creation failure
2018-05-22T20:38:37.484Z|00011|dpif_netdev(dp_netdev_flow_8)|WARN|Mark
id for ufid 3b61e29f-8ec3-4767-9976-6a37e52c30d9 was not found
2018-05-22T20:38:37.498Z|00012|netdev_dpdk(dp_netdev_flow_8)|ERR|rte
flow creat error: 2 : message : flow rule creation failure
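
The "Mark id for ufid ... was not found" warnings are the mark<->flow
bookkeeping failing a lookup.  My mental model of it, as a sketch with
made-up names (the patchset keeps cmaps for this):

    #include "cmap.h"
    #include "hash.h"
    #include "openvswitch/types.h"

    /* Hypothetical ufid -> mark association; offloaded flows need it
     * so a later modify/delete can find the HW rule again. */
    struct ufid_to_mark {
        struct cmap_node node;
        ovs_u128 ufid;
        uint32_t mark;
    };

    static struct cmap ufid_to_mark_map = CMAP_INITIALIZER;

    /* The WARN fires when this comes back empty: a modify/delete
     * arrived for a ufid that was never associated (or was already
     * disassociated). */
    static int
    find_mark_for_ufid(const ovs_u128 *ufid, uint32_t *mark)
    {
        struct ufid_to_mark *e;
        uint32_t hash = hash_bytes(ufid, sizeof *ufid, 0);

        CMAP_FOR_EACH_WITH_HASH (e, node, hash, &ufid_to_mark_map) {
            if (ovs_u128_equals(e->ufid, *ufid)) {
                *mark = e->mark;
                return 0;
            }
        }
        return -1;
    }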


fbl
