Hi Ian,

Thank you for answering!

OVS Version. 2.6.1
DPDK Version. 16.07.2
NIC Model. Ethernet controller: Intel Corporation Ethernet Connection I354
(rev 03)
pmd-cpu-mask. core 1 (mask=0x2)
lcore mask. core zero ("dpdk-lcore-mask=1")

Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {n_rxq="8", n_rxq_desc="2048", n_txq="9",
n_txq_desc="2048"}

ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 1:
        isolated : false
        port: dpdk0     queue-id: 0 1 2 3 4 5 6 7
        port: dpdk1     queue-id: 0 1 2 3 4 5 6 7


Port configuration for all ports: see the options above.

It seems I now understand what was going on; it comes down to two separate
issues:
1. There was high load on the kernel core because of an issue with my Linux
bridge mappings, which kept looking for an interface that was already bound
to the DPDK driver.
2. The second instance of the same process at high CPU appeared because I
use only one core for my PMD and "dpdk-lcore-mask" would default to one of
the same cores. As you may know, this causes many issues, since the handler
and revalidator threads should not interrupt the OVS-DPDK datapath packet
processing. (A non-overlapping mask layout is sketched below.)
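
In case it is useful to others, here is a minimal sketch of keeping the two
masks on separate cores (core numbers are only an example and may differ
from my final setup):

    # Pin the DPDK housekeeping (non-PMD) lcore threads to core 0.
    ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x1
    # Pin the PMD datapath poll threads to core 1, away from the lcore mask.
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x2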

FYI: below is the htop output showing the two ovs-vswitchd entries at high
CPU.
CPU   PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 2 12650 root       10 -10 4741M  104M 10560 R 94.0  1.3  1h12:12
ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err
-vfile:info --mlockall --no-chdir
--log-file=/var/log/openvswitch/ovs-vswitchd.log
--pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
 1 11687 root       10 -10 4741M  104M 10560 S 94.0  1.3  1h12:26
ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err
-vfile:info --mlockall --no-chdir
--log-file=/var/log/openvswitch/ovs-vswitchd.log
--pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
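
FYI, to check which named OVS thread each of those entries corresponds to
(as Ben suggested), something like this should work, assuming a single
ovs-vswitchd process:

    # List the ovs-vswitchd threads with their name, current core and CPU use.
    ps -T -o tid,psr,pcpu,comm -p $(pidof ovs-vswitchd)
    # Or interactively, one line per thread:
    top -H -p $(pidof ovs-vswitchd)

The PMD threads should show up with names starting with "pmd".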

Thank you all for your help.
Michael

On Wed, Apr 11, 2018 at 12:44 PM, Stokes, Ian <ian.sto...@intel.com> wrote:

> > One of the OVS-DPDK maintainers will have to speak up about the flow
> > control messages.  I don't know.
>
> Hi Michael,
>
> can you provide the following to help debug this:
>
> OVS Version.
> DPDK Version.
> NIC Model.
> pmd-cpu-mask.
> lcore mask.
> Port configuration for all ports (including flow control, queue creation
> etc).
>
>
> You should note that DPDK uses a Poll Mode Driver (PMD); in essence it
> polls continuously (regardless of whether traffic is received) on the core
> to which a device's queue is assigned. It is expected in this case that
> the assigned CPU appears 100% utilized from the point of view of tools
> such as htop.
>
> Can you post the output of 'ovs-appctl dpif-netdev/pmd-rxq-show' as well?
> This will help debug performance. A few more comments inline below.
>
> >
> > Do you see log messages reporting high CPU usage?  That would ordinarily
> > be the case, if threads other than the PMD threads are using excessive
> > CPU.
> >
> > "top" and other tools can show CPU usage by thread, and OVS gives its
> > threads helpful names.  Which threads are using high CPU?
> >
> > On Sun, Apr 08, 2018 at 10:15:11AM +0300, michael me wrote:
> > > Hi Ben,
> > >
> > > Thank you so much for your reply.
> > > here below are some of the log from ovs-vswitchd.
> > >
> > > 2018-04-08T09:52:34.897Z|00333|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:34.897Z|00334|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:35.025Z|00335|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:35.025Z|00336|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:36.370Z|00337|rconn|INFO|br-int<->tcp:127.0.0.1:6633:
> > > connected
> > > 2018-04-08T09:52:36.370Z|00338|rconn|INFO|br-eth1<->tcp:127.0.0.1:6633
> :
> > > connected
> > > 2018-04-08T09:52:36.370Z|00339|rconn|INFO|br-eth2<->tcp:127.0.0.1:6633
> :
> > > connected
> > > 2018-04-08T09:52:37.102Z|00340|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:37.102Z|00341|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:37.225Z|00342|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:37.225Z|00343|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:37.298Z|00344|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:37.298Z|00345|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:37.426Z|00346|dpdk|WARN|Failed to enable flow control
> > > on device 0 2018-04-08T09:52:37.426Z|00347|dpdk|WARN|Failed to enable
> > > flow control on device 1
> > > 2018-04-08T09:52:47.041Z|00348|connmgr|INFO|br-int<->tcp:127.0.0.1:663
> > > 3: 7 flow_mods in the 7 s starting 10 s ago (7 adds)
> > > 2018-04-08T09:52:47.245Z|00349|connmgr|INFO|br-eth1<->tcp:127.0.0.1:66
> > > 33: 3 flow_mods in the 7 s starting 10 s ago (3 adds)
> > > 2018-04-08T09:52:47.444Z|00350|connmgr|INFO|br-eth2<->tcp:127.0.0.1:66
> > > 33: 3 flow_mods in the 7 s starting 10 s ago (3 adds)
> > >
> > > is the  "Failed to enable flow control on device" related to my high
> > > CPU load?
>
> This looks like you're trying to enable the flow control feature on a
> device that does not support it.
> Are you enabling flow control for rx or tx on your devices? You could
> possibly have auto-negotiation enabled when adding the port.
>
> For completeness, the options regarding flow control are documented in
>
> http://docs.openvswitch.org/en/latest/howto/dpdk/
>
> > > Just to be clear, I do get traffic through, though performance is not
> > > great, so it does make sense that there is an issue with the flow;
> > > I just don't know how to verify this.
>
> This could be related to the queue pinning. The command 'ovs-appctl
> dpif-netdev/pmd-rxq-show' could help diagnose this if you can share its
> output.
>
> Thanks
> Ian
>
> > >
> > > Below are the flows that i could find:
> > > root@dpdkApt:/# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4):
> > >  cookie=0xb6de486b197e713c, duration=640.295s, table=0, n_packets=0,
> > > n_bytes=0, idle_age=802, priority=3,in_port=3,vlan_tci=0x0000/0x1fff
> > > actions=mod_vlan_vid:2,NORMAL
> > >  cookie=0xb6de486b197e713c, duration=640.188s, table=0, n_packets=0,
> > > n_bytes=0, idle_age=801, priority=3,in_port=4,vlan_tci=0x0000/0x1fff
> > > actions=mod_vlan_vid:3,NORMAL
> > >  cookie=0xb6de486b197e713c, duration=647.450s, table=0, n_packets=0,
> > > n_bytes=0, idle_age=911, priority=2,in_port=3 actions=drop
> > > cookie=0xb6de486b197e713c, duration=647.251s, table=0, n_packets=0,
> > > n_bytes=0, idle_age=910, priority=2,in_port=4 actions=drop
> > > cookie=0xb6de486b197e713c, duration=647.670s, table=0, n_packets=947,
> > > n_bytes=39930, idle_age=0, priority=0 actions=NORMAL
> > > cookie=0xb6de486b197e713c, duration=647.675s, table=23, n_packets=0,
> > > n_bytes=0, idle_age=911, priority=0 actions=drop
> > > cookie=0xb6de486b197e713c, duration=647.667s, table=24, n_packets=0,
> > > n_bytes=0, idle_age=911, priority=0 actions=drop
> > >
> > > root@dpdkApt:/# ovs-ofctl dump-flows br-eth1 NXST_FLOW reply
> > > (xid=0x4):
> > >  cookie=0x990e120f393a065e, duration=645.802s, table=0, n_packets=4,
> > > n_bytes=198, idle_age=776, priority=4,in_port=1,dl_vlan=2
> > > actions=strip_vlan,NORMAL  cookie=0x990e120f393a065e,
> > > duration=652.940s, table=0, n_packets=950, n_bytes=40026, idle_age=0,
> > > priority=2,in_port=1 actions=drop  cookie=0x990e120f393a065e,
> > > duration=652.967s, table=0, n_packets=0, n_bytes=0, idle_age=916,
> > > priority=0 actions=NORMAL
> > >
> > > root@dpdkApt:/# ovs-ofctl dump-flows br-eth2 NXST_FLOW reply
> > > (xid=0x4):
> > >  cookie=0xb5b43b868c9fcb5f, duration=671.350s, table=0, n_packets=4,
> > > n_bytes=198, idle_age=800, priority=4,in_port=2,dl_vlan=3
> > > actions=strip_vlan,NORMAL  cookie=0xb5b43b868c9fcb5f,
> > > duration=678.397s, table=0, n_packets=977, n_bytes=41160, idle_age=2,
> > > priority=2,in_port=2 actions=drop  cookie=0xb5b43b868c9fcb5f,
> > > duration=678.424s, table=0, n_packets=0, n_bytes=0, idle_age=941,
> > > priority=0 actions=NORMAL
> > >
> > > Do you see anything that is wrong with my setup?
> > > I would greatly appreciate your input.
> > >
> > > Thanks,
> > > Michael
> > >
> > >
> > >
> > > On Wed, Apr 4, 2018 at 7:51 PM, Ben Pfaff <b...@ovn.org> wrote:
> > >
> > > > On Wed, Apr 04, 2018 at 06:19:32PM +0300, michael me wrote:
> > > > > The setup I am working with is OVS-DPDK and I have the PMD mask
> > > > > on core 1, but I see that the ovs-vswitchd service is almost at
> > > > > 100% on both core zero and core one.
> > > > > In the setup I have an OpenStack VM running and it is pinned to
> > > > > core two.
> > > >
> > > > OVS normally logs some helpful messages when it uses excessive CPU.
> > > > Can you check the log for those?
> > > >
>