[ovs-discuss] packet flow in the OVS

2019-09-25 Thread Ramana Reddy
Hi All,

I want to debug the packet path in OVS and would like to know
a list of functions where we can inspect the packet data, both for
incoming packets entering OVS through the local port and for outgoing
packets sent over the VXLAN tunnel.
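Not an authoritative answer, but for the userspace slow path, `ovs-appctl
ofproto/trace` is usually the quickest way to see the path a packet would
take, and in the kernel datapath, functions such as ovs_vport_receive(),
ovs_dp_process_packet(), ovs_execute_actions(), and ovs_vport_send() (in
net/openvswitch/) are common places to add trace points; verify the names
against your own kernel tree. A rough sketch (bridge name and packet fields
are illustrative):

```shell
# Trace the flow-table path a hypothetical packet would take through br0:
ovs-appctl ofproto/trace br0 in_port=LOCAL,tcp,nw_src=10.0.0.1,nw_dst=10.0.0.2

# Dump the datapath flows actually being hit by live traffic:
ovs-appctl dpctl/dump-flows
```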

Looking forward to the reply.

Regards,
Ramana
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Openvswitch-Assigning MAC to the interfaces

2019-09-25 Thread Ben Pfaff
On Wed, Sep 04, 2019 at 01:07:46PM +0530, V Sai Surya Laxman Rao Bellala wrote:
> How to assign the MAC address to each interface/port which is present in
> the Openvswitch?
> I have installed openvswitch in GNS3 and connected the three hosts H1, H2,
> H3.
> When I send a message from  H1 to H2, the same message is delivered to H3.
> I want to avoid that. I am interested to know more about Layer 2
> operations. Please let me know.

When this happens, has H2 sent any packets through the switch yet?  If
not, this is probably normal MAC learning behavior: until the switch has
learned H2's MAC address from H2's own traffic, it floods unicast frames
to all ports.
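If flooding before H2's first transmission must be avoided, one option (a
sketch; the bridge name, MAC address, and port number below are
illustrative, not taken from this thread) is to pre-populate a flow that
pins H2's MAC to its port:

```shell
# Forward frames destined for H2's MAC straight to H2's port instead of flooding:
ovs-ofctl add-flow br0 "dl_dst=00:00:00:00:00:02,actions=output:2"
```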


Re: [ovs-discuss] [ovs-dev] Regarding TSO using AF_PACKET in OVS

2019-09-25 Thread William Tu
On Thu, Sep 19, 2019 at 2:16 AM Ramana Reddy  wrote:
>
> Hi William,
> Thanks for your reply. Please find the inline comments.
>
> On Fri, Aug 30, 2019 at 9:26 PM William Tu  wrote:
>>
>> Hi Ramana,
>>
>> I'm trying to understand your setup.
>>
>> On Wed, Aug 28, 2019 at 4:11 AM Ramana Reddy  wrote:
>> >
>> > Hi Ben, Justin, Jesse and All,
>> >
>> > Hope everyone doing great.
>> >
>> > During my work, I create a socket using AF_PACKET with a virtio_net_hdr,
>> > fill in all the fields of the virtio_net_hdr, and set the flag to
>> > VIRTIO_NET_HDR_GSO_TCPV4 to enable TSO over af_packet:
>> >
>> > vnet->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
>> >
>> > The code does not work when I try to send large packets over an OVS
>> > VXLAN tunnel. It drops the large packets, and the application ends up
>> > resending MTU-sized packets. The code works fine in the non-OVS case
>> > (sending directly without OVS).

By the way, does 'dmesg' show OVS complaining that the packet size is
larger than the MTU, causing it to drop the packet?
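For reference, the kind of setup described above -- an AF_PACKET socket
with PACKET_VNET_HDR enabled so that a virtio_net_hdr prepended to each
write can request TSO -- can be sketched roughly as below. This is a
minimal sketch, not the poster's actual code; the interface name, header
lengths, and MSS are illustrative assumptions.

```c
/* Sketch: AF_PACKET socket with PACKET_VNET_HDR for TSO.
 * Requires CAP_NET_RAW to actually open the socket. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/virtio_net.h>
#include <net/if.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Fill the vnet header for a TCPv4 GSO packet.  hdr_len is the combined
 * length of the Ethernet+IP+TCP headers; gso_size is the MSS the kernel
 * should segment the payload into.  Assumes a 20-byte TCP header. */
static void fill_tso_vnet_hdr(struct virtio_net_hdr *vh,
                              uint16_t hdr_len, uint16_t gso_size)
{
    memset(vh, 0, sizeof *vh);
    vh->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
    vh->hdr_len = hdr_len;
    vh->gso_size = gso_size;
    vh->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
    vh->csum_start = hdr_len - 20;  /* start of TCP header (20-byte TCP hdr assumed) */
    vh->csum_offset = 16;           /* offset of the checksum field within TCP */
}

/* Open an AF_PACKET socket bound to ifname with PACKET_VNET_HDR enabled,
 * so every buffer passed to send() must begin with a struct virtio_net_hdr.
 * Returns the fd, or -1 on error. */
static int open_vnet_packet_socket(const char *ifname)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    int one = 1;
    if (setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR, &one, sizeof one) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof sll);
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex = if_nametoindex(ifname);
    if (sll.sll_ifindex == 0
        || bind(fd, (struct sockaddr *) &sll, sizeof sll) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The point of the thread is that a plain NIC driver honors this vnet header
while the OVS datapath (at the time of writing) does not, so large
GSO-marked frames entering OVS this way get dropped rather than segmented.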

>>
>> Do you create AF_PACKET and bind its socket to OVS's vxlan device,
>> ex: vxlan_sys_4789?
>
> No. We create an AF_PACKET socket and bind it to our own interface in the
> container. The packet takes the help of the routing table in the container
> and is routed to the OVS veth interface; from there, the OVS rule selects
> the respective out_port based on the destination IP address.
> We can't select the vxlan_sys_4789 interface, as it is in the host
> namespace. It's up to OVS to select the respective interface (out_port),
> and we are not aware of, and need not know about, the underlying network
> (OVS bridge, Linux bridge, or some Docker bridge) on the host.
>
>> In the non-ovs working case, do you bind AF_PACKET to the vxlan device 
>> created
>> by using ip link command?
>>
>> >
>> > I understand that UDP tunnel (VXLAN) offloading is not happening, perhaps
>> > because the NIC does not support that offload feature. But when I send
>> > iperf (AF_INET) traffic, even though the offloading does not happen, the
>> > large packets are fragmented and the VXLAN tunnel sends the fragmented
>> > packets. Why do we see this different behavior?
>>
>> OVS's vxlan device is a light-weight tunnel device, so it might not
>> have all the
>> features.
>
> This is the same question in my mind. OVS may not be handling AF_PACKET with 
> vnet_hdr.

Right, I couldn't find any source code related to vnet_hdr in OVS kernel module.

>>
>>
>> >
>> > What is the expected behavior in AF_PACKET in OVS? Does OVS support
>> > AF_PACKET offloading mechanism?
>>
>> AF_PACKET (see net/packet/af_packet.c) just sends packets from userspace
>> into the kernel and on to the device you bind to.  It creates an skb and
>> invokes the device's xmit function.
>
> This I understood. But OVS should recognize this and offload the packet to
> the driver or to the NIC when virtio_net_hdr is set. If we send the packets
> to a normal interface (eth0) without OVS, the kernel is able to recognize
> the TSO packets with virtio_net_hdr and hand the offloading over to the
> driver (GSO) or the NIC (TSO).

I see your point. Yes, currently OVS does not understand this offload type;
it might be a good feature to add to OVS.

Regards,
William


Re: [ovs-discuss] OVS - Netflow/IPFix export of custom metadata.

2019-09-25 Thread Ben Pfaff
I'm sure the OVS project would accept IPFix improvements to export more
fields or metadata, if anyone would like to prepare patches for that.
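For reference, the existing per-bridge IPFIX export is enabled roughly like
this (a sketch; the bridge name, collector address, and sampling values are
illustrative):

```shell
# Attach an IPFIX exporter to bridge br0, sending to a collector at
# 192.0.2.10:4739, sampling 1 in 64 packets:
ovs-vsctl -- set Bridge br0 ipfix=@i \
          -- --id=@i create IPFIX targets=\"192.0.2.10:4739\" \
             sampling=64 obs_domain_id=123 obs_point_id=456
```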

On Fri, Sep 13, 2019 at 10:41:22PM +0900, Motonori Shindo wrote:
> Rajesh,
> 
> OVS supports NetFlow V5 and IPFIX (a.k.a. NetFlow v10). NetFlow V5 as a
> protocol can't export information other than the pre-defined information
> elements such as src/dst IP address, src/dst port number, protocol, nexthop
> address, input/output interface, etc. IPFIX, on the other hand, is
> extensible and can export additional information. However, IPFIX in OVS
> only exports information similar to NetFlow V5; there is no configuration
> knob to make it export additional information elements.
> 
> Regards,
> 
> Motonori Shindo
> 
> 2019年9月13日(金) 21:01 Rajesh Kumar :
> 
> >
> > Hi,
> >
> > The built-in OVS NetFlow export feature seems to export only the basic
> > information present in the packet (src/dst MAC, src/dst IP, port).
> > Is there any option to export other data present in a packet?
> >
> > Without OVS, I was able to do NetFlow export using a tool called pmacctd.
> > It supported exporting basic NetFlow data as well as custom metadata
> > (present as part of the packet itself at a specific byte location); there
> > was a way to specify custom data by packet byte offset and length. I just
> > wanted to check whether any such option exists in the current OVS setup.
> >
> >
> >
> >
> > Thanks,
> > Rajesh kumar S R


Re: [ovs-discuss] Issue porting openvswitch-ipsec on XCP-ng

2019-09-25 Thread Ansis
On Mon, 9 Sep 2019 at 02:36, Benjamin  wrote:
>
> Hello everyone,
>
> I'm Benjamin, a French developer working at Vates (the editor of XCP-ng,
> a XenServer fork).
> I've been working on the network side of XCP-ng in order to create an SDN
> controller that manages Open vSwitch on several hosts.
>
> Everything has been working great so far!
>
> I am using Open vSwitch v2.11.0.
> However, I'm now trying to add IPsec support to XCP-ng and I'm facing an issue.
>
> I've successfully installed libreswan version 3.26, and the
> openvswitch-ipsec service from rhel and the python script ovs-monitor-ipsec.
> I'm using Pre-Shared Key for IPSEC.
>
> When I attempt to create tunnels, everything seems to go smoothly:
> - there's no error in ovs-vswitchd.log nor in ovs-monitor-ipsec.log
> - ovs-appctl -t ovs-monitor-ipsec tunnels/show shows me the tunnels with
> correct configurations and active connections.

Share your ovs-appctl output here.

>
> But there's no traffic passing on the tunnels created by openvswitch and
> since there's no helpful log I don't know how to investigate the issue.
> I hoped you could point me in the right direction.

Did a plain tunnel work in your setup? E.g., if you are using Geneve
with IPsec, simply try plain Geneve first.
>
> Here's what appears in ovs-vswitchd.log after tunnels creation:
>
> 2019-09-09T08:16:49.311Z|00018|tunnel(handler7)|WARN|receive tunnel port
> not found
> (pkt_mark=0x1,udp,tun_id=0x3,tun_src=192.168.5.28,tun_dst=192.168.5.27,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_flags=key,in_port=4,vlan_tci=0x,dl_src=b2:bc:3c:29:bd:fd,dl_dst=ff:ff:ff:ff:ff:ff,nw_src=0.0.0.0,nw_dst=255.255.255.255,nw_tos=16,nw_ecn=0,nw_ttl=128,tp_src=68,tp_dst=67)
> 2019-09-09T08:16:49.311Z|00019|ofproto_dpif_upcall(handler7)|INFO|Dropped
> 1 log messages in last 214 seconds (most recently, 214 seconds ago) due
> to excessive rate
> 2019-09-09T08:16:49.311Z|00020|ofproto_dpif_upcall(handler7)|INFO|received
> packet on unassociated datapath port 4
> 2019-09-09T08:16:49.914Z|3|tunnel(revalidator6)|WARN|receive tunnel
> port not found
> (pkt_mark=0x1,udp,tun_id=0x3,tun_src=192.168.5.28,tun_dst=192.168.5.27,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_flags=key,in_port=4,vlan_tci=0x,dl_src=b2:bc:3c:29:bd:fd,dl_dst=ff:ff:ff:ff:ff:ff,nw_src=0.0.0.0,nw_dst=255.255.255.255,nw_tos=16,nw_ecn=0,nw_ttl=128,tp_src=68,tp_dst=67)
>
> There's plenty of errors like this after the tunnels are created and I
> attempt to ping through the tunnels.
>
> Does that ring a bell to anyone?

IIRC, I have seen that "receive tunnel port not found" error in the
following cases:
1. The skb mark is not set, so OVS userspace cannot tell which tunnel
the packet belongs to (this is specific to IPsec).
2. The openvswitch kernel module does not support that particular tunnel
flavor even in plain mode (this is not specific to IPsec).

Checking the ofport value for the tunnel in OVSDB's Interface table
would probably help pinpoint which case it is.
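For example (a sketch): a tunnel interface whose ofport shows as -1 failed
to instantiate in the datapath, which would point at case 2 above.

```shell
# List every interface's name alongside its OpenFlow port number;
# an ofport of -1 means the port could not be created in the datapath.
ovs-vsctl --columns=name,ofport list Interface
```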

>
> Do not hesitate to ask me anything that can help debug this issue.
>
> Thank you,
> Benjamin Reis


Re: [ovs-discuss] Issue porting openvswitch-ipsec on XCP-ng

2019-09-25 Thread Ben Pfaff
Ansis (added to this message) knows the most about IPsec.  If he has
the time for it, I imagine he can help you figure this out.

On Mon, Sep 09, 2019 at 09:20:40AM +, Benjamin wrote:
> Hello everyone,
> 
> I'm Benjamin, a french developer working at Vates (the editor of XCP-ng a
> XenServer fork).
> I've been working in the network area of XCP-ng in order to create a SDN
> Controller controlling openvswitch on several hosts.
> 
> Everything is working great as for now!
> 
> I am using openvswitch v2.11.0.
> However I'm trying to add IPSEC support into XCP-ng and I'm facing an issue.
> 
> I've successfully installed libreswan version 3.26, and the
> openvswitch-ipsec service from rhel and the python script ovs-monitor-ipsec.
> I'm using Pre-Shared Key for IPSEC.
> 
> When I attempt to create tunnels, everything seems to go smoothly:
> - there's no error in ovs-vswitchd.log nor in ovs-monitor-ipsec.log
> - ovs-appctl -t ovs-monitor-ipsec tunnels/show shows me the tunnels with
> correct configurations and active connections.
> 
> But there's no traffic passing on the tunnels created by openvswitch and
> since there's no helpful log I don't know how to investigate the issue.
> I hoped you could point me in the right direction.
> 
> Here's what appears in ovs-vswitchd.log after tunnels creation:
> 
> 2019-09-09T08:16:49.311Z|00018|tunnel(handler7)|WARN|receive tunnel port not
> found 
> (pkt_mark=0x1,udp,tun_id=0x3,tun_src=192.168.5.28,tun_dst=192.168.5.27,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_flags=key,in_port=4,vlan_tci=0x,dl_src=b2:bc:3c:29:bd:fd,dl_dst=ff:ff:ff:ff:ff:ff,nw_src=0.0.0.0,nw_dst=255.255.255.255,nw_tos=16,nw_ecn=0,nw_ttl=128,tp_src=68,tp_dst=67)
> 2019-09-09T08:16:49.311Z|00019|ofproto_dpif_upcall(handler7)|INFO|Dropped 1
> log messages in last 214 seconds (most recently, 214 seconds ago) due to
> excessive rate
> 2019-09-09T08:16:49.311Z|00020|ofproto_dpif_upcall(handler7)|INFO|received
> packet on unassociated datapath port 4
> 2019-09-09T08:16:49.914Z|3|tunnel(revalidator6)|WARN|receive tunnel port
> not found 
> (pkt_mark=0x1,udp,tun_id=0x3,tun_src=192.168.5.28,tun_dst=192.168.5.27,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_flags=key,in_port=4,vlan_tci=0x,dl_src=b2:bc:3c:29:bd:fd,dl_dst=ff:ff:ff:ff:ff:ff,nw_src=0.0.0.0,nw_dst=255.255.255.255,nw_tos=16,nw_ecn=0,nw_ttl=128,tp_src=68,tp_dst=67)
> 
> There's plenty of errors like this after the tunnels are created and I
> attempt to ping through the tunnels.
> 
> Does that ring a bell to anyone?
> 
> Do not hesitate to ask me anything that can help debug this issue.
> 
> Thank you,
> Benjamin Reis


Re: [ovs-discuss] Query related to invalid_ttl support

2019-09-25 Thread Ben Pfaff
On Mon, Sep 09, 2019 at 10:25:02PM +0530, bindiya Kurle wrote:
> Hi ,
> I was going through the code when ttl =0 and action is set as dec_ttl
> for ex:  TTL of the packet is 1 and rule in ovs is ,
> ovs-ofctl add-flow vnf-net0 "table=1,in_port=1 actions=dec_ttl,output:2"
>  from the 2.11 code.
> for this case it calls
> for (i = 0; i < ids->n_controllers; i++) {
> xlate_controller_action(ctx, UINT16_MAX, OFPR_INVALID_TTL,
> ids->cnt_ids[i], UINT32_MAX, NULL, 0);
> this in turn calls,
> put_controller_user_action
> 
> This in turn will create user-space action. I am not getting where in the
> code, it is generating message towards controller for invalid ttl.

The userspace action sends a Netlink message to userspace, which in turn
gets classified in classify_upcall() in ofproto-dpif-upcall.c as a
CONTROLLER_UPCALL, which gets handled in process_upcall() by appending
to the async message queue, which run() in ofproto-dpif.c eventually
processes via connmgr_send_async_msg().
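As a usage-side note, a controller (or ovs-ofctl acting as one) has to ask
for invalid-TTL packets explicitly; a quick way to observe them is the
following (bridge name illustrative):

```shell
# Ask the switch to deliver packets with invalid TTL to this monitor
# connection, then print each one as it arrives:
ovs-ofctl monitor br0 invalid_ttl
```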


Re: [ovs-discuss] unable to get all packets from the physical interface to OVS bridge

2019-09-25 Thread Stokes, Ian




On 9/18/2019 5:22 AM, Anish M wrote:

Hello,
I'm trying to forward/mirror VM packets between two OVS-DPDK compute
nodes. With the help of
http://docs.openvswitch.org/en/latest/howto/userspace-tunneling/, I was
able to set up an additional NIC for forwarding packets from the source
OVS-DPDK compute node to the destination compute node. I can see all the
VXLAN-forwarded packets at the destination compute node's additional
NIC, but the same packets are not visible at the OVS bridge to which
that additional NIC is attached.


Hi, can you please clarify the sentence above? Do you mean that you can
see the packets arrive at the NIC, but they are not seen transiting the
bridge?




At both compute nodes, I have the same type of 2-port 10G NIC: ens1f0 &
ens1f1.


ens1f0 is acting as a DPDK port, and I'm using it inside OpenStack for
DPDK VMs at both compute nodes.


Just to clarify, ens1f0 is a dpdk port type on each node and ens1f1 is 
not? I.e. is ens1f1 a netdev linux device?




In order to mirror DPDK VM traffic from one compute node to another, I
followed the above userspace-tunneling link and am able to forward VM
traffic from the source compute node towards ens1f1 (172.28.41.101) on
the destination compute node.


Can you list the mirroring commands you used to set this up?



ovs-vsctl --may-exist add-br br-phy \
     -- set Bridge br-phy datapath_type=netdev \
     -- br-set-external-id br-phy bridge-id br-phy \
     -- set bridge br-phy fail-mode=standalone \
          other_config:hwaddr=48:df:37:7e:c2:08

ovs-vsctl --timeout 10 add-port br-phy ens1f1
ip addr add 172.28.41.101/24 dev br-phy
ip link set br-phy up
ip addr flush dev ens1f1 2>/dev/null
ip link set ens1f1 up
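Note that the commands above only create the bridge and attach the NIC; a
port mirror itself is configured separately. A typical OVS mirror looks
roughly like this (a sketch; the mirror name is illustrative, the
bridge/port names are taken from the setup above):

```shell
# Mirror all traffic crossing br-phy out of the ens1f1 port:
ovs-vsctl -- --id=@p get Port ens1f1 \
          -- --id=@m create Mirror name=mirror0 select-all=true output-port=@p \
          -- set Bridge br-phy mirrors=@m

# Inspect the mirror's statistics (tx_packets counts mirrored frames):
ovs-vsctl list Mirror mirror0
```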

Even though I am receiving all the mirrored VXLAN packets at the ens1f1
port (checked using tcpdump), the same number of packets does not show
up inside the OVS br-phy bridge (only ~10% of the mirrored traffic is
visible inside br-phy).


Just to be aware: low volumes of traffic are fine for mirroring, but at
high volumes you would not expect to see the same amount of traffic
mirrored, due to the overhead associated with mirroring. In this case,
are you sending high volumes of traffic? Have you tested with smaller
bursts of traffic?




[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-flows br-phy
  cookie=0x3435, duration=46778.969s, table=0, n_packets=30807, 
n_bytes=3889444, priority=0 actions=NORMAL


[root@overcloud-hpcomputeovsdpdk-0 ~]# ovs-ofctl dump-ports br-phy
OFPST_PORT reply (xid=0x2): 3 ports
  port LOCAL: rx pkts=77, bytes=5710, drop=0, errs=0, frame=0, over=0, crc=0
              tx pkts=30107, bytes=3604094, drop=28077, errs=0, coll=0
  port ens1f1: rx pkts=4385104, bytes=596066321, drop=0, errs=0, frame=0, over=0, crc=0
               tx pkts=33606, bytes=4021526, drop=0, errs=0, coll=0
  port "patch-tap-bint": rx pkts=30056, bytes=3591859, drop=?, errs=?, frame=?, over=?, crc=?
                         tx pkts=739, bytes=294432, drop=?, errs=?, coll=?

On ens1f1 I can see lots of rx packets, but on the br-phy flow I'm
seeing only a few.


Please provide any advice on how I can mirror/forward packets between
two OVS-DPDK compute nodes.


A diagram of your setup would be useful to help debug/understand the use 
case. I'm slightly confused with the mix of DPDK/non-DPDK ports you have 
vs what you want to achieve with mirroring. If you could provide more 
info on this as well as the expected flow of a packet as it is 
mirrored/forwarded it would be helpful.


Thanks
Ian


Best Regards,
Anish





Re: [ovs-discuss] OVS - PDUMP: Pdump initialization failure in different container

2019-09-25 Thread Stokes, Ian




On 9/16/2019 8:40 AM, Rajesh Kumar wrote:

Hi,

Sorry, I didn't complete my previous mail.



Hi Rajesh, apologies for the delay on my part in responding, I've been 
out of office the past few weeks.



The errors I was getting are
1)
root@basepdump-67b4b8-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_12_19ffcc06b54
EAL: Probing VFIO support...
EAL: Cannot initialize tailq: RTE_EVENT_RING
Tailq 0: qname:, tqh_first:(nil), tqh_last:0x7fda1b17d47c
Tailq 1: qname:, tqh_first:(nil), 
tqh_last:0x7fda1b17d4ac

Tailq 2: qname:, tqh_first:0x108064900, tqh_last:0x108064900
Tailq 3: qname:, tqh_first:(nil), tqh_last:0x7fda1b17d50c
.
EAL: FATAL: Cannot init tail queues for objects
EAL: Cannot init tail queues for objects
PANIC in main():
Cannot init EAL
5: [pdump(+0x2e2a) [0x557832863e2a]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) 
[0x7fe128dd809b]]

3: [pdump(+0x233a) [0x55783286333a]]
2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(__rte_panic+0xbd) 
[0x7fe1292b0ca5]]
1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(rte_dump_stack+0x2e) 
[0x7fe1292c65be]]

Aborted (core dumped)


2)
root@basepdump-67b4b8-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
PANIC in rte_eal_config_reattach():
Cannot mmap memory for rte_config at [(nil)], got [0x7...] - please 
use '--base-virtaddr' option

6: [./dpdk-pdump(start+0x2a) [0x559c7aa]]
5:[/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7fe128dd809b]]
4: [./dpdk-pdump(main+0xe2) [0x55597dd2]]
3: [./dpdk-pdump(rte_eal_init+0xc06) [0x55678416]]
..
Aborted (core dumped)



From the logs above, it looks like the secondary process is unable to
access the primary process's config across the pods. I'm unsure whether
this setup is possible, as I haven't tried it with pdump before.


Can I ask whether you are explicitly sharing the process configs between
the pods? Also, are you sharing hugepages between the pods, and if so,
what steps were taken to ensure this?




Attached the same errors also.

I need in help in figuring out where I'm going wrong.



We'll try to recreate this in our lab setup as well, since in theory
this should work.
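For reference, the non-containerized invocation that works is roughly the
one from the DPDK/OVS pdump documentation (a sketch; the socket path and
pcap file are illustrative, and both the runtime directory and the
hugepage mounts have to be shared between the two pods for the secondary
process to attach):

```shell
# Run dpdk-pdump as a secondary process against ovs-vswitchd, capturing
# port 0's rx traffic to a pcap file:
dpdk-pdump -- --pdump port=0,queue=*,rx-dev=/tmp/pkts.pcap \
           --server-socket-path=/usr/local/var/run/openvswitch
```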


Regards
Ian





Thanks,
Rajesh kumar S R



*From:* ovs-discuss-boun...@openvswitch.org 
 on behalf of Rajesh Kumar 


*Sent:* Monday, September 16, 2019 1:00:56 PM
*To:* ovs-discuss@openvswitch.org
*Subject:* [ovs-discuss] OVS - PDUMP: Pdump initialization failure in 
different container


In our kubernetes setup, we are running OVS in a pod with dpdk enabled.

Using 18.11.2.

I wanted to use dpdk-pdump as a packet capture tool and am trying to run
pdump in a separate pod.


As pdump is a secondary process, it maps to the hugepages allocated by
the primary process (ovs-vswitchd).


I'm getting these two errors while starting pdump as a secondary process
in a separate pod.




Without the container setup, I was able to bring up pdump with OVS.




