We found the source of the problem. The NSH NextProtocol field was not being 
set and was 0 in the packet.
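For reference, the Next Protocol value sits in the last byte of the 4-byte NSH base header (in both the NSH draft current at the time and the later RFC 8300), so it is easy to check in a capture. A minimal sketch — the example header bytes are illustrative, not taken from the actual packets:

```python
import struct

def nsh_next_protocol(nsh_bytes: bytes) -> int:
    """Return the Next Protocol field from an NSH base header.

    The Next Protocol value occupies the fourth byte of the
    4-byte NSH base header.
    """
    base, = struct.unpack("!I", nsh_bytes[:4])
    return base & 0xFF

# An illustrative header with Next Protocol set ...
assert nsh_next_protocol(bytes.fromhex("0fc60103")) == 0x3
# ... versus the buggy packets, where the field was left at 0.
assert nsh_next_protocol(bytes.fromhex("0fc60100")) == 0x0
```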

Here's the bugzilla:

https://bugs.opendaylight.org/show_bug.cgi?id=8375

And the Netvirt patches:

master:

    https://git.opendaylight.org/gerrit/56531

stable/carbon:

    https://git.opendaylight.org/gerrit/56532

Regards,

Brady


-----Original Message-----
From: Manuel Buil <[email protected]<mailto:manuel%20buil%20%[email protected]%3e>>
To: "Yang, Yi Y" 
<[email protected]<mailto:%22Yang,%20yi%20y%22%20%[email protected]%3e>>
Cc: [email protected] 
<[email protected]<mailto:%[email protected]%22%20%[email protected]%3e>>
Subject: Re: [sfc-dev] Problem with vxlan-gpe interface in ovs2.6
Date: Thu, 4 May 2017 09:39:50 +0200

Hi,

Not exactly the sfc103 demo, but something similar. During the Carbon release 
the SFC team developed a tool called dovs which is very useful for testing 
different topologies. It uses namespaces to emulate VNFs, which are connected 
to docker containers where OVS is running. The OVS instance running inside the 
containers is OVS 2.6 with the NSH patch.

To reproduce exactly what I am doing, do the following:

1 - Download the latest ODL build from: 
https://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/org/opendaylight/integration/distribution-karaf/0.6.0-SNAPSHOT/
 and start it with the following features: 
config,standard,region,package,kar,ssh,management,odl-netvirt-openstack,odl-netvirt-sfc,odl-sfc-genius,odl-sfc-model,odl-sfc-provider,odl-sfc-provider-rest,odl-sfc-ovs,odl-sfc-openflow-renderer

2 - git clone the odl-sfc repo and go to the directory sfc-test/sfc-docker/. 
There you will find a Vagrantfile; execute "vagrant up"

3 - Once the deployment is finished, access the VBox VM created by vagrant with 
"vagrant ssh"

4 - Check that from the VM you can ping the host and thus reach the ODL REST 
API. If that is the case and ODL is listening on 172.28.128.1, execute: 
sudo dovs sfc-config --chains "[['client1', 'firewall', 'server1']]" 
--odl 172.28.128.1

5 - Create the rsp: sudo dovs sfc-config --create-rsp-from-id 1 --odl 
172.28.128.1

6 - Create the classifier:

curl -i -H "Content-Type: application/json" -H "Cache-Control: no-cache" \
  --data '{
    "acl": [{
      "acl-name": "ACL1",
      "acl-type": "ietf-access-control-list:ipv4-acl",
      "access-list-entries": {
        "ace": [{
          "rule-name": "ACE1",
          "actions": { "netvirt-sfc-acl:rsp-name": "RSP1" },
          "matches": {
            "network-uuid": "177bef73-514e-4922-990f-d7aba0f3b0f4",
            "source-ipv4-network": "10.0.0.0/24",
            "protocol": "6",
            "source-port-range": { "lower-port": 0 },
            "destination-port-range": { "lower-port": 80 }
          }
        }]
      }
    }]
  }' \
  -X PUT --user admin:admin \
  http://172.28.128.1:8181/restconf/config/ietf-access-control-list:access-lists/acl/ietf-access-control-list:ipv4-acl/ACL1
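If it is easier to read, the same ACL payload can be assembled in Python and sent with the stdlib — a sketch carrying the same fields, endpoint, and admin:admin credentials as the curl command in step 6:

```python
import json
import urllib.request

ODL = "http://172.28.128.1:8181"  # adjust to where your ODL is listening

# Same ACL body as the curl command above.
acl = {
    "acl": [{
        "acl-name": "ACL1",
        "acl-type": "ietf-access-control-list:ipv4-acl",
        "access-list-entries": {
            "ace": [{
                "rule-name": "ACE1",
                "actions": {"netvirt-sfc-acl:rsp-name": "RSP1"},
                "matches": {
                    "network-uuid": "177bef73-514e-4922-990f-d7aba0f3b0f4",
                    "source-ipv4-network": "10.0.0.0/24",
                    "protocol": "6",
                    "source-port-range": {"lower-port": 0},
                    "destination-port-range": {"lower-port": 80},
                },
            }]
        },
    }]
}

def put_acl() -> int:
    """PUT the ACL to the RESTCONF endpoint; returns the HTTP status."""
    url = (ODL + "/restconf/config/ietf-access-control-list:access-lists"
           "/acl/ietf-access-control-list:ipv4-acl/ACL1")
    req = urllib.request.Request(
        url, data=json.dumps(acl).encode(), method="PUT",
        headers={"Content-Type": "application/json",
                 # base64 of "admin:admin", as in the curl command
                 "Authorization": "Basic YWRtaW46YWRtaW4="})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(put_acl())
```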


7 - You should see that three docker containers were created. If you check the 
tables of the OVS in the client container, you will see that table 223 is 
missing even though table 222 sends packets to 223. That is a bug in Genius 
which is being fixed. To fix it manually, execute in that container:


ovs-ofctl -O OpenFlow13 add-flow br-int 
"table=223,priority=260,tun_dst=172.19.0.2 actions=output:1"



I am assuming that 172.19.0.2 is the local IP of the OVS in the firewall 
container and output 1 is pointing to the vxlan-gpe interface.


Regards,

Manuel

On Thu, 2017-05-04 at 01:38 +0000, Yang, Yi Y wrote:
Manuel, do you mean the sfc103 demo? dmesg should show an "invalid vxlan 
flags" message if vxlangpe receives invalid flags. I didn’t encounter such an 
issue in sfc104; please let me know how I can reproduce your issue on my 
machine.

From: Manuel Buil [mailto:[email protected]]
Sent: Thursday, May 4, 2017 4:30 AM
To: Yang, Yi Y <[email protected]>
Cc: [email protected]
Subject: Problem with vxlan-gpe interface in ovs2.6

Hello Yi Yang,

We are using your docker image docker-ovs:yyang to test stuff in ODL-SFC 
Carbon. Today we found out that packets sent through a vxlan-gpe tunnel are 
not accepted by the other VTEP.

This is the port sending the packets in OVS 1.

       Port "tundaae6e77d14"
            Interface "tundaae6e77d14"
                type: vxlan
                options: {dst_port="4880", exts=gpe, key=flow, 
local_ip="172.19.0.3", "nshc1"=flow, "nshc2"=flow, "nshc3"=flow, "nshc4"=flow, 
nsi=flow, nsp=flow, remote_ip=flow}

Here is the output of tcpdump where the packets can be seen:

16:53:11.114196 IP 172.19.0.3.58481 > 172.19.0.2.4880: UDP, length 120
16:53:12.113785 IP 172.19.0.3.58481 > 172.19.0.2.4880: UDP, length 120

And this is the port in OVS 2 which receives the packets but, for unknown 
reasons, does not accept them:

      Port "tun28619095a5a"
            Interface "tun28619095a5a"
                type: vxlan
                options: {dst_port="4880", exts=gpe, key=flow, 
local_ip="172.19.0.2", "nshc1"=flow, "nshc2"=flow, "nshc3"=flow, "nshc4"=flow, 
nsi=flow, nsp=flow, remote_ip=flow}

The traffic going through other vxlan ports (not gpe, using destination port 
4789) works. I am a bit lost about what the problem might be, so any hint you 
can provide will be appreciated. I found one thing which is perhaps the 
source of the problem. In the working packets this is the vxlan header:

0800 0000 0000 0000

The packets not working have the following vxlan header:

0c00 0003 0000 0800

Look at the first byte: the working packets have 00001000 and the failing ones 
00001100. I checked the VXLAN spec and it says:


   VXLAN Header:  This is an 8-byte field that has:



      - Flags (8 bits): where the I flag MUST be set to 1 for a valid

        VXLAN Network ID (VNI).  The other 7 bits (designated "R") are

        reserved fields and MUST be set to zero on transmission and

        ignored on receipt.



Not all of them are set to 0! Could it be that the docker container where OVS 
2 is running drops the packets because it expects all flag bits to be 0 except 
for the one signalling a valid VNI?
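For what it's worth, the two flag bytes can be compared like this; note that in the VXLAN-GPE draft the 0x04 bit is the P ("Next Protocol present") flag, so it is expected on a GPE tunnel, but a receiver treating the port as plain VXLAN would see it as a reserved bit:

```python
# Flag bytes of the two observed VXLAN headers, copied from the dumps above.
working = bytes.fromhex("0800000000000000")  # plain vxlan packets that work
failing = bytes.fromhex("0c00000300000800")  # vxlan-gpe packets being dropped

def flag_bits(header: bytes) -> str:
    """Render the first (flags) byte of a VXLAN header as a bit string."""
    return format(header[0], "08b")

print(flag_bits(working))   # 00001000 -> only the I (valid VNI) flag, 0x08
print(flag_bits(failing))   # 00001100 -> I flag plus the 0x04 (GPE "P") bit
```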



Thanks,

Manuel


_______________________________________________
sfc-dev mailing list
[email protected]
https://lists.opendaylight.org/mailman/listinfo/sfc-dev
