Re: [ovs-discuss] ovs-vswitchd 2.7.3 crash

2018-05-03 Thread Nicolas Bouliane via discuss
>
>
> > Let me know if you need more information. We are curious to know if it
> > is a known bug, or if it is something new.
>
> It's not obviously a known bug.  It might be related to memory
> corruption, because SIGBUS can be caused by reading or writing a wild
> pointer and calloc(), which is at the top of the backtrace, can
> definitely do either one if there's been memory corruption.  OVS 2.7.4
> has a few memory corruption fixes, relative to OVS 2.7.3, so possibly
> it's a bug that's already fixed.
>
> Is this something that you see repeatedly?
>

We don't see it repeatedly for now, but I will keep an eye on it.
If we start seeing it more, we might indeed upgrade to OVS 2.7.4.
Thanks for your answer!

Cheers,
Nicolas Bouliane
Digital Ocean


Re: [ovs-discuss] Source NAT with OpenVSwitch failed

2018-05-03 Thread Guru Shetty
No, OVS NAT cannot do that. OVS NAT in your situation is more useful with a
controller that programs the OVS. When a packet comes in that needs to reach
the gateway, the controller needs to create an ARP request for the gateway's
IP, collect the reply, and update the flows so that future packets know the
MAC address.

On 2 May 2018 at 20:52, Wei-Yu Chen  wrote:

> Hi Guru,
>
> Thanks for your reply, but I can't be sure what the MAC address for the
> gateway is. Shouldn't this be done automatically by the OVS NAT function?
>
>
>
> ---
> Best Regards,
>
> Wei-Yu Chen
> Wireless Internet Laboratory
> Department of Computer Science
> National Chiao Tung University
>
> On 30 April 2018 at 11:49:29 PM, Guru Shetty (g...@ovn.org) wrote:
>
>
>
> On 26 April 2018 at 06:41, Wei-Yu Chen  wrote:
>
>> Hello all,
>>
>> Recently, I have been trying out SNAT with OVS. I tried to apply all
>> possible flows to OVS, but SNAT still doesn't work, so I am posting this
>> message to ask for your help.
>>
>> In my experiment environment, I used Ubuntu 16.04 with kernel version
>> 4.10.0-28-generic, and OVS version 2.9.0.
>>
>> I have a VM on my PC, with the VM and OVS connected through a Linux
>> bridge, as illustrated below:
>>
>> +----------------------------------+
>> |                                  |
>> |  +------+          +-----+       |
>> |  |  br  +----------+ OVS |       |
>> |  +--+---+   vnet2  +--+--+       |
>> |     |                 |          |
>> |  +--+------+          |          |
>> |  |   VM    |          |          |
>> |  | 10.1.1.2|          |          |
>> |  +---------+      +---+----+     |
>> |  Ubuntu 16.04     | enp2s0 |     |
>> +-------------------+--------+-----+
>>
>> And OVS has 2 IP addresses: 10.1.1.1/24 and the public IP
>> address (140.113.x.x) that enp2s0 originally had. I attached vnet2 and
>> enp2s0 to my OVS.
>>
>> I referred to many posts and wrote the following script:
>>
>> #!/bin/sh
>> BR="br0"       # bridge name used below (assumed; $BR was otherwise unset)
>> IN="vnet2"
>> OUT="enp2s0"
>>
>> flow1="in_port=$IN,ip,actions=ct(commit,zone=1,nat(src=10.1.1.1)),$OUT"
>> flow2="in_port=$OUT,ip,ct_state=-trk,actions=ct(zone=1,nat)"
>> flow3="in_port=$OUT,ip,ct_state=+trk,ct_zone=1,actions=$IN"
>>
>> # Add Flows
>> sudo ovs-ofctl add-flow $BR $flow1
>> sudo ovs-ofctl add-flow $BR $flow2
>> sudo ovs-ofctl add-flow $BR $flow3
>>
>> But I found that for an ICMP echo from the VM to Google DNS (nw_src=10.1.1.2,
>> nw_dst=8.8.8.8), when it passes to enp2s0, only the source IP address changes
>> to 10.1.1.1; the source MAC address stays the VM's MAC, and the destination
>> MAC address stays OVS's MAC address. (The VM's default gateway is
>> 10.1.1.1/24, OVS's vnet2 interface.)
>>
> You need to change the MAC addresses too.
>
>
>
>> Tcpdump's log:
>>
>> 21:12:09.413082 52:54:00:fd:d6:ce > 70:4d:7b:6e:16:e0, ethertype IPv4
>> (0x0800), length 98: (tos 0x0, ttl 64, id 41649, offset 0, flags [DF],
>> proto ICMP (1), length 84)
>>     10.1.1.1 > 8.8.8.8: ICMP echo request, id 725, seq 1, length 64
>>
>> I also tried to find the reason with the conntrack tool, but it shows only
>> that 10.1.1.2 has a NEW connection to 8.8.8.8 and never gets a reply.
>>
>> I can't figure out why OVS's SNAT didn't work. Is something wrong with my
>> flows? Any suggestions and ideas are appreciated. Thanks very much.
>>
>> P.S. The attachment is a snapshot of the illustration; if the diagram is
>> broken in your mail viewer, please take a look at the attachment.
>>
>>
>> ---
>> Best Regards,
>>
>> Wei-Yu Chen
>> Wireless Internet Laboratory
>> Department of Computer Science
>> National Chiao Tung University
>>


Re: [ovs-discuss] OVS misconfiguration issue

2018-05-03 Thread Gregory Rose

On 5/1/2018 10:35 PM, Neelakantam Gaddam wrote:

Hi All,

The issue here is that we are trying to send packets on the same device
in a loop. While sending on a device, a spinlock for the tx queue has
to be acquired in the dev_queue_xmit function. This is where we try to
acquire the same lock again, which leads to the kernel crash.
The issue becomes worse if internal ports are involved in the
configuration.


I think we should avoid these loops in the vport send functions. In
the case of tunneling, the tunnel send function should take care of
these checks.


Please share your thoughts on this issue.


Hello Neelakantam,

Please provide the full kernel panic backtrace and I'll have a look at 
the problem.  I'm rather busy
dealing with some other critical issues but will try to get back to you 
next week.


Thanks,

- Greg



On Mon, Apr 30, 2018 at 11:17 AM, Neelakantam Gaddam wrote:


Hi All,

OVS misconfiguration leading to spinlock recursion in dev_queue_xmit.

We are running ovs-2.8.1 with openvswitch kernel modules on two
hosts connected back to back. We are running OVS on MIPS64 platform.



We are using the below configuration.

ovs-vsctl add-br br0

ovs-vsctl add-bond br0 bond0 p1p1 p1p2

ovs-vsctl set port bond0 lacp=active bond_mode=balance-tcp

ifconfig br0 100.0.0.1 up

ovs-vsctl add-port br0 veth0

ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan
options:local_ip=100.0.0.1 options:remote_ip=100.0.0.2 options:key=flow

ovs-ofctl add-flow br0 "table=0, priority=1, cookie=100,
tun_id=100, in_port=4, actions=output:3"

ovs-ofctl add-flow br0 "table=0, priority=1, cookie=100,
in_port=3, actions=set_field:100->tun_id,output:4"

When this configuration is applied on both hosts, we are seeing
the below spinlock recursion bug.

[] show_stack+0x6c/0xf8
[] do_raw_spin_lock+0x168/0x170
[] dev_queue_xmit+0x43c/0x470
[] ip_finish_output+0x250/0x490
[] rpl_iptunnel_xmit+0x134/0x218 [openvswitch]
[] rpl_vxlan_xmit+0x430/0x538 [openvswitch]
[] do_execute_actions+0x18f8/0x19e8 [openvswitch]
[] ovs_execute_actions+0x90/0x208 [openvswitch]
[] ovs_dp_process_packet+0xb0/0x1a8 [openvswitch]
[] ovs_vport_receive+0x78/0x130 [openvswitch]
[] internal_dev_xmit+0x34/0x98 [openvswitch]
[] dev_hard_start_xmit+0x2e8/0x4f8
[] sch_direct_xmit+0xf0/0x238
[] dev_queue_xmit+0x1d8/0x470
[] arp_process+0x614/0x628
[] __netif_receive_skb_core+0x2e8/0x5d8
[] process_backlog+0xc0/0x1b0
[] net_rx_action+0x154/0x240
[] __do_softirq+0x1d0/0x218
[] do_softirq+0x68/0x70
[] local_bh_enable+0xa8/0xb0
[] netif_rx_ni+0x20/0x30

The traced packet path is: netif_rx -> arp -> dev_queue_xmit (internal
port) -> vxlan_xmit -> dev_queue_xmit (internal port). According to
the configuration, this packet path is valid, but we should not hit
the crash.
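
One way to confirm the looping path without triggering the crash might be
ofproto/trace; a sketch, tracing an ARP frame injected at the bridge's
internal port:

ovs-appctl ofproto/trace br0 in_port=LOCAL,dl_type=0x0806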


Questions:


  * Is it a kernel bug or an OVS bug?
  * How does OVS handle this kind of misconfiguration, especially
when packet loops are involved?

Any suggestion or help is greatly appreciated.



Thanks



--
Thanks & Regards
Neelakantam Gaddam




Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread O'Reilly, Darragh


> -Original Message-
> From: alp.ars...@xflowresearch.com [mailto:alp.ars...@xflowresearch.com]
> 
> The number of flows and the spread of incoming traffic are really high. The
> number of rules is over 0.5 million, and the number of distinct IPs in the
> incoming traffic is also in the millions. But right now I am sending the same
> pcap file to all interfaces using PKTGEN. There is no slow path, as this is
> OVS-DPDK; everything is in userspace.
> 
By slow path I mean the ofproto classifier:
https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier-part-2
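
A quick way to see how much traffic misses the datapath caches and hits that
classifier (counter names may differ slightly across OVS versions):

ovs-appctl dpif-netdev/pmd-stats-show | grep -Ei 'hit|miss|lost'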



Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread alp.arslan
The number of flows and the spread of incoming traffic are really high. The
number of rules is over 0.5 million, and the number of distinct IPs in the
incoming traffic is also in the millions. But right now I am sending the same
pcap file to all interfaces using PKTGEN. There is no slow path, as this is
OVS-DPDK; everything is in userspace.

-Original Message-
From: O'Reilly, Darragh [mailto:darragh.orei...@hpe.com] 
Sent: Thursday, May 3, 2018 8:05 PM
To: alp.ars...@xflowresearch.com; disc...@openvswitch.org
Subject: RE: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9



> -Original Message-
> From: alp.ars...@xflowresearch.com 
> [mailto:alp.ars...@xflowresearch.com]
> 
> Enabling/disabling EMC has no effect on this scenario. As far as I
> know there is one EMC per PMD thread, so the interfaces have their own
> EMCs. The bigger question is why traffic on one interface affects
> the performance of the other. Are they sharing anything? The only
> thing I can think of is the datapath and the megaflow table, and I am
> looking for some way to separate them. If this doesn't work, my only
> other option is to have 4 VMs with pass-through interfaces and run
> OVS-DPDK inside VMs.
> 
Could it be the nature of the test traffic, flows and total rate of received
packets with 4 NICs? Is the slow path thread ovs-vswitchd maxed out?
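
A quick way to check that, for example:

top -H -p "$(pidof ovs-vswitchd)"   # look for any thread pinned near 100% CPU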



Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread alp.arslan
If "ovs-vswitchd" manages the datapaths, why does it have a utility that
lets me create more of them? And when I create them, I cannot use them. I am
stuck in a loop :) .

-Original Message-
From: Ben Pfaff [mailto:b...@ovn.org] 
Sent: Thursday, May 3, 2018 4:41 PM
To: alp.ars...@xflowresearch.com
Cc: disc...@openvswitch.org
Subject: Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

On Wed, May 02, 2018 at 10:02:04PM +0500, alp.ars...@xflowresearch.com
wrote:
> I am trying to create multiple dpdk-netdev based data paths with OVS 
> 2.9 and DPDK 16.11 running on CentOS 7.4. I am able to create multiple 
> data paths using "ovs-appctl dpctl/add-dp netdev@netdev1" and I can 
> see a new data path created with "ovs-appctl dpctl/show". However I 
> cannot add any interfaces (dpdk or otherwise), and I cannot set this 
> data path as datapath_type to any bridge.

That's not useful or a good idea.  ovs-vswitchd manages datapaths itself.
Adding and removing them yourself will not help.

> Just a recap of why I am trying to do this: I am working with a lot
> of OVS OpenFlow rules (around 0.5 million) matching layer 3 and layer
> 4 fields. The incoming traffic is more than 40G (4 x 10G Intel X520s)
> and has many parallel flows (over a million IPs). With this, OVS
> performance decreases and each port forwards only around 250
> Mb/s. I am using multiple RX queues (4-6); with a single RX queue it
> drops to 70 Mb/s. Now if I shut down three 10G interfaces, an
> interesting thing happens, and OVS starts forwarding over 7 Gb/s on
> that single interface. That got me thinking: maybe the reason for low
> performance is 40G of traffic hitting a single bridge's flow tables,
> so how about creating multiple bridges with multiple flow tables? With
> this setup the situation remained the same, and now the only thing the
> 4 interfaces have in common is the datapath. They are not sharing
> anything else. They are polled by dedicated vCPUs, and they are in
> different tables.
> 
>  
> 
> Can anyone explain this bizarre scenario: why is OVS able to
> forward more traffic over a single interface polled by 6 vCPUs than
> over 4 interfaces polled by 24 vCPUs?
> 
> Also, is there a way to create multiple datapaths and remove this
> dependency as well?

You can create multiple bridges with "ovs-vsctl add-br".  OVS doesn't use
multiple datapaths.
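
With the userspace datapath, each new bridge just points at that one datapath,
e.g. (the bridge name here is only an example):

ovs-vsctl add-br br-test -- set bridge br-test datapath_type=netdev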

Maybe someone who understands the DPDK port better can suggest some reason
for the performance characteristics that you see.



Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread alp.arslan
Enabling/disabling EMC has no effect on this scenario. As far as I know
there is one EMC per PMD thread, so the interfaces have their own EMCs. The
bigger question is why traffic on one interface affects the performance
of the other. Are they sharing anything? The only thing I can think of is
the datapath and the megaflow table, and I am looking for some way to
separate them. If this doesn't work, my only other option is to have 4 VMs
with pass-through interfaces and run OVS-DPDK inside VMs.
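
For reference, the knob for turning off EMC insertion (present in OVS 2.9; an
inverse insertion probability of 0 stops new EMC entries):

ovs-vsctl --no-wait set Open_vSwitch . other_config:emc-insert-inv-prob=0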


-Original Message-
From: O'Reilly, Darragh [mailto:darragh.orei...@hpe.com] 
Sent: Thursday, May 3, 2018 5:49 PM
To: alp.ars...@xflowresearch.com; disc...@openvswitch.org
Subject: RE: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

On Wed, May 02, 2018 at 10:02:04PM +0500, alp.ars...@xflowresearch.com
wrote:

> Can anyone explain this bizarre scenario: why is OVS able to
> forward more traffic over a single interface polled by 6 vCPUs than
> over 4 interfaces polled by 24 vCPUs?

Not really, but I would look at the cache stats: ovs-appctl
dpif-netdev/pmd-stats-show




Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread O'Reilly, Darragh


> -Original Message-
> From: alp.ars...@xflowresearch.com [mailto:alp.ars...@xflowresearch.com]
> 
> Enabling/disabling EMC has no effect on this scenario. As far as I know
> there is one EMC per PMD thread, so the interfaces have their own EMCs. The
> bigger question is why traffic on one interface affects the performance of
> the other. Are they sharing anything? The only thing I can think of is the
> datapath and the megaflow table, and I am looking for some way to separate
> them. If this doesn't work, my only other option is to have 4 VMs with
> pass-through interfaces and run OVS-DPDK inside VMs.
> 
Could it be the nature of the test traffic, flows and total rate of received 
packets with 4 NICs? Is the slow path thread ovs-vswitchd maxed out?


Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread O'Reilly, Darragh
On Wed, May 02, 2018 at 10:02:04PM +0500, alp.ars...@xflowresearch.com wrote:

> Can anyone explain this bizarre scenario: why is OVS able to
> forward more traffic over a single interface polled by 6 vCPUs than
> over 4 interfaces polled by 24 vCPUs?

Not really, but I would look at the cache stats: ovs-appctl 
dpif-netdev/pmd-stats-show
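
Clearing the counters first gives a cleaner sample, e.g.:

ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show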



Re: [ovs-discuss] openvswitch database save

2018-05-03 Thread Chevy Stroker via discuss
I seem to be replying to my own thread. Setting STARTMODE=nfsroot
resolves this issue since wicked does not perform ifdown on that type
of interface.
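
For the archives, the working ifcfg now looks like this (other settings
unchanged from my earlier mail):

kvm:/etc/sysconfig/network # more ifcfg-ovsbr0
STARTMODE=nfsroot
BOOTPROTO=static
IPADDR=10.0.0.254
NETMASK=255.255.255.0
LINK_REQUIRED=no
OVS_BRIDGE=yes
OVS_BRIDGE_PORT_DEVICE=port0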

On Wed, 2018-05-02 at 03:57 -0500, Chevy Stroker via discuss wrote:
> My apologies in advance if this has been asked and answered. I would
> appreciate instructions if there is a way to search the archive.
> 
> Also, to the developers thank you for this software.
> 
> I am building a development environment on a laptop. 
> OS: openSUSE LEAP 42.3
> openvswitch version: 2.7.0-7.1
> 
> The wlan0 interface is configured through wicked for ease of
> configuration while developing on the move or even while off the
> network.
> 
> I followed these instructions to try and configure openvswitch
> correctly:
> https://en.opensuse.org/Portal:Wicked/OpenvSwitch
> 
> I have also read the current openvswitch documents and those are well
> written. Again, thank you.
> 
> This is what I have done (shortened for brevity):
> ovs-vsctl add-br ovsbr0
> ovs-vsctl add-port ovsbr0 port0 -- set Interface port0
> type=internal
> ovs-vsctl add-port ovsbr0 port1 -- set Interface port1
> type=internal
> 
> ifcfg* files are:
> kvm:/etc/sysconfig/network # more ifcfg-ovsbr0 
> STARTMODE=auto
> BOOTPROTO=static
> IPADDR=10.0.0.254
> NETMASK=255.255.255.0
> LINK_REQUIRED=no
> OVS_BRIDGE=yes
> OVS_BRIDGE_PORT_DEVICE=port0
> 
> kvm:/etc/sysconfig/network # more ifcfg-port0
> STARTMODE=auto
> BOOTPROTO=none
> 
> kvm:/etc/sysconfig/network # more ifcfg-port1
> STARTMODE=auto
> BOOTPROTO=none
> 
> Run ifup all and start my KVM VM guest assigned to port1, and
> everything works.
> 
> Reboot and the openvswitch database is empty (e.g. ovs-vsctl show
> displays nothing)
> 
> I assumed that I had a timing conflict between wicked and
> openvswitch.
> Scripting the openvswitch build in bash and re-doing my configuration
> is not a big deal, but I did investigate further.
> 
> These were my steps:
> systemctl disable openvswitch.service
> systemctl stop openvswitch.service (this also stops
> ovs-vswitchd.service and ovsdb-server.service)
> systemctl start openvswitch.service
> ovs-vsctl show (everything is there)
> 
> systemctl stop openvswitch.service (hoping to not lose my database)
> reboot
> 
> Once the system is restarted I:
> systemctl start openvswitch.service (this also starts
> ovs-vswitchd.service and ovsdb-server.service)
> ovs-vsctl show (the database is empty)
> 
> Luckily I backed up the database so:
> systemctl stop openvswitch.service
> cd /etc/openvswitch
> mv conf.db.backup conf.db
> systemctl start openvswitch.service
> ovs-vsctl show (everything is there)
> ifup all (I'm back in business)
> 
> Any idea why the database ends up empty or is there a configuration I
> have missed?
> 
> Thank you in advance for any suggestions, and if there are none, that
> is OK as well. I am happy scripting the rebuild steps.
> 
> Sorry for the long post. It just seemed easier to cover all of the
> details thoroughly than to do a back-and-forth on the list, flooding
> people's inboxes.


Re: [ovs-discuss] Multiple dpdk-netdev datapath with OVS 2.9

2018-05-03 Thread Ben Pfaff
On Wed, May 02, 2018 at 10:02:04PM +0500, alp.ars...@xflowresearch.com wrote:
> I am trying to create multiple dpdk-netdev based data paths with OVS 2.9 and
> DPDK 16.11 running on CentOS 7.4. I am able to create multiple data paths
> using "ovs-appctl dpctl/add-dp netdev@netdev1" and I can see a new data path
> created with "ovs-appctl dpctl/show". However I cannot add any interfaces
> (dpdk or otherwise), and I cannot set this data path as datapath_type to any
> bridge. 

That's not useful or a good idea.  ovs-vswitchd manages datapaths
itself.  Adding and removing them yourself will not help.

> Just a recap of why I am trying to do this: I am working with a lot of OVS
> OpenFlow rules (around 0.5 million) matching layer 3 and layer 4 fields. The
> incoming traffic is more than 40G (4 x 10G Intel X520s) and has many
> parallel flows (over a million IPs). With this, OVS performance decreases
> and each port forwards only around 250 Mb/s. I am using multiple RX
> queues (4-6); with a single RX queue it drops to 70 Mb/s. Now if I shut down
> three 10G interfaces, an interesting thing happens, and OVS starts forwarding
> over 7 Gb/s on that single interface. That got me thinking: maybe the reason
> for low performance is 40G of traffic hitting a single bridge's flow tables,
> so how about creating multiple bridges with multiple flow tables? With this
> setup the situation remained the same, and now the only thing the
> 4 interfaces have in common is the datapath. They are not sharing anything
> else. They are polled by dedicated vCPUs, and they are in different tables.
> 
>  
> 
> Can anyone explain this bizarre scenario: why is OVS able to forward
> more traffic over a single interface polled by 6 vCPUs than over 4
> interfaces polled by 24 vCPUs?
> 
> Also, is there a way to create multiple datapaths and remove this
> dependency as well?

You can create multiple bridges with "ovs-vsctl add-br".  OVS doesn't
use multiple datapaths.

Maybe someone who understands the DPDK port better can suggest some
reason for the performance characteristics that you see.


Re: [ovs-discuss] Reg IPv6 Neighbor Advertisement Message fields

2018-05-03 Thread Vishal Deep Ajmera
>> Zak is working on that feature; I expect patches will hit the mailing list 
>> in the next week or two.
>> 
>> Hi Justin,
>> 
>> As part of this feature, will it also enable us to rewrite the option's
>> tlv:type field from SLL (1) to TLL (2)? Or maybe a set-field option to
>> rewrite these fields in the options TLV? This will help in supporting
>> multicast solicitations.

>> I believe you should already be able to set the SLL and TLL options.  Does
>> that not work for you?
In order to set up an IPv6 NA responder, we will need to turn an incoming
solicitation carrying an SLL option (a multicast solicitation) into an
advertisement carrying a TLL option. My understanding is that the current
support can only modify an existing TLL option, if one is already present
in the packet.
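
For example, the rewrite that should already work on a packet carrying the
option might look like this (a sketch; the MAC is a placeholder, and the field
and action names should be double-checked against ovs-fields(7)):

ovs-ofctl add-flow br0 "icmp6,icmp_type=136,actions=set_field:00:11:22:33:44:55->nd_tll,NORMAL"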

Regards,
Vishal