Hi, Jesse,
Thanks for your comments on this series. Below are our responses and our
modification plan for this patch series:
1. First, we agree that the VxLAN-GPE header and the NSH header should be
handled at different protocol layers, which avoids a lot of unnecessary
conflation. In our modified high-level design we will therefore decouple
VxLAN/VxLAN-GPE tunnel header push/pop from NSH header push/pop. Since the NSH
header is handled as an independent layer, new actions "push_nsh"/"pop_nsh"
will be added to push and pop the NSH header in both the kernel and userspace
datapaths. This way the NSH header is no longer part of the tunnel header; it
is independent and is added/removed by NSH-specific actions, for example:
ovs-ofctl add-flow br-int "in_port=1, actions=set_nsi:1, set_nsp:12345,
push_nsh, output:2" (port 2 is the VxLAN-GPE port)
The "push_nsh" action adds the NSH header, and "output:2" adds the tunnel
header via the VxLAN-GPE port.
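To make this concrete, here is a rough sketch of how the new actions could be
encoded for the datapath; the attribute names, numbering and struct layout are
purely our proposal, not existing OVS definitions:

#include <stdint.h>

/* Sketch only: hypothetical datapath attributes for the new actions. */
enum {
    OVS_ACTION_ATTR_PUSH_NSH = 100,   /* arg: struct ovs_action_push_nsh */
    OVS_ACTION_ATTR_POP_NSH,          /* no argument */
};

struct ovs_action_push_nsh {
    uint8_t  md_type;        /* NSH MD-type to emit (1 or 2) */
    uint8_t  next_protocol;  /* inner protocol, e.g. Ethernet */
    uint32_t path_hdr;       /* NSP (24 bits) | NSI (8 bits), from the flow */
};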
One question we want to discuss here is how to implement the VxLAN-GPE port;
we have two proposals:
1) Add a new configuration option to the current VxLAN vport type,
"options:enable_GPE=true". A virtual port created with this option enabled
would be VxLAN-GPE rather than plain VxLAN, and the corresponding actions
would be VxLAN-GPE specific.
2) Add a new vport type "VxLAN-GPE" in parallel to "VxLAN".
Which proposal would you prefer?
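For illustration, from the configuration side the two alternatives would look
roughly like this (the option name and the new port type name are only our
proposal):
Proposal 1): ovs-vsctl add-port br-int gpe0 -- set interface gpe0 type=vxlan
options:remote_ip=192.168.1.100 options:key=flow options:enable_GPE=true
Proposal 2): ovs-vsctl add-port br-int gpe0 -- set interface gpe0 type=vxlan_gpe
options:remote_ip=192.168.1.100 options:key=flow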
2. As discussed above, the NSH layer will be an independent protocol layer
rather than part of the VxLAN tunnel. For the lightweight tunneling support in
the kernel, the change we need to make is to add support for VxLAN-GPE
tunneling. The options here are again to add a new vport type for VxLAN-GPE or
to extend the current VxLAN port to support VxLAN-GPE.
Besides the VxLAN-GPE support in lightweight tunneling, we will also add NSH
protocol support to the kernel datapath code for Linux upstream.
By the way, could you give us some guidance on the model for upstreaming the
lightweight tunneling changes to the kernel?
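For reference, our current understanding of the lightweight tunnel model is
that a flow-based tunnel device strips the outer headers on receive and
attaches them to the skb as metadata, which the OVS datapath then consumes.
A minimal sketch (the VxLAN-GPE next-protocol handling is exactly the open
point here):

#include <linux/skbuff.h>
#include <net/dst_metadata.h>

/* Sketch: consuming tunnel metadata attached by a flow-based
 * (lightweight) VxLAN-GPE device on the receive path. */
static void example_read_tunnel_metadata(struct sk_buff *skb)
{
    struct ip_tunnel_info *tun_info = skb_tunnel_info(skb);

    if (tun_info) {
        __be64 vni = tun_info->key.tun_id;   /* outer VNI */
        (void)vni;
        /* For VxLAN-GPE we would additionally need the GPE
         * next-protocol value here; how best to carry it is part of
         * what we would like to discuss. */
    }
}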
3. Regarding TLV support: VxLAN-GPE does not have option TLVs in its current
definition, so for VxLAN-GPE we do not need to implement TLV option fields as
is done for Geneve.
VxLAN-GPE header format:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|Ver|I|P|R|O|           Reserved            | Next Protocol |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        VXLAN Network Identifier (VNI)         |   Reserved    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
(No variable-length option header for VxLAN-GPE)
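For reference, this 8-byte header maps onto a simple fixed-size struct (field
names below are illustrative only, not taken from an existing header file):

#include <stdint.h>

/* VxLAN-GPE header as drawn above; no variable-length options. */
struct vxlan_gpe_hdr {
    uint8_t flags;          /* R|R|Ver|I|P|R|O                  */
    uint8_t reserved[2];    /* 16 reserved bits                 */
    uint8_t next_protocol;  /* e.g. Ethernet, IPv4, IPv6 or NSH */
    uint8_t vni[3];         /* VXLAN Network Identifier         */
    uint8_t reserved2;      /* trailing reserved byte           */
};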
But for the NSH header, if the MD-type is 2 as shown below, we do need option
TLV support:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Ver|O|C|R|R|R|R|R|R|  Length   |  MD-type=0x2  | Next Protocol |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                Service Path ID                | Service Index |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
~            Variable Length Context Headers (opt.)             ~
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
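In C terms, the fixed part of the MD-type 2 header above is just two 32-bit
words followed by the variable-length TLVs (names are again illustrative only):

#include <stdint.h>

/* Fixed 8-byte NSH base header (MD-type 2) as drawn above. */
struct nsh_base_hdr {
    uint16_t ver_flags_len;  /* Ver(2) | O | C | R(6) | Length(6), network order */
    uint8_t  md_type;        /* 0x2: variable-length context headers follow      */
    uint8_t  next_protocol;  /* protocol of the inner packet                     */
    uint32_t path_hdr;       /* Service Path ID (24) | Service Index (8)         */
    /* Variable-length context header TLVs follow when MD-type == 2. */
};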
So for the NSH header, we do need to add TLV support. For the fixed fields of
the NSH header, such as NSI and NSP, we would like to add specific meta-flow
fields, for example:
MFF_NSP Type: NXM_NX_NSP (105), Length: 4 bytes
MFF_NSI Type: NXM_NX_NSI (106), Length: 1 byte
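On the implementation side this would roughly mean carrying the two values in
the flow/match structures, something like the following (placement and names
are only our proposal):

#include <stdint.h>

/* Sketch: fixed NSH match fields carried alongside the flow key. */
struct flow_nsh {
    uint32_t nsp;   /* Service Path ID; only the lower 24 bits are used */
    uint8_t  nsi;   /* Service Index */
};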
For the variable-length context headers of NSH, we would like to use fields
like "tun_metadata" (tnl->metadata) to support them. Here we also have two
options:
1) Reuse "tun_metadata" for the NSH variable context headers; this is similar
to the current Geneve TLV support. However, it is a little weird, because the
NSH header is now an independent protocol layer and does not belong to the
tunnel layer.
2) Define new "nsh_metadata" fields for the NSH variable context headers.
Which one do you prefer? Please give us your input on our modification plan.
Best Regards
Mengke
-----Original Message-----
From: Jesse Gross [mailto:[email protected]]
Sent: Thursday, October 1, 2015 5:24 AM
To: Liu, Mengke <[email protected]>
Cc: [email protected]; Pritesh Kothari (pritkoth) <[email protected]>; Li,
Ricky <[email protected]>; [email protected]
Subject: Re: [ovs-dev] [PATCH 0/7] Enable NSH based Service Function Chaining
support in OVS
On Wed, Sep 30, 2015 at 2:47 AM, mengke
<[email protected]> wrote:
> This patch adds support for NSH function implementation which can be
> used for service function chaining. It covers control plane, kernel
> dataplane and DPDK-netdev dataplane.
I have several high level comments on this series that I think it would be best
to discuss before going through the code level details.
First and foremost, the kernel tunneling code that you are modifying is
compatibility code. Upstream Linux is now using a different model of tunnels
and any new features need to be added to that rather than to the old model as
is done here. The new model is called "lightweight tunnels" so I would
encourage you to look at what has been going on with that upstream.
That being said, I have significant concerns with the layering throughout this
patchset and I suspect if that was resolved then at least some of the above
changes would not be necessary. In particular, there is a great deal of
conflation between VXLAN-GPE and NSH. As a theoretical matter, I believe that
these are supposed to operate independently of each other at different protocol
layers. And more practically, this seems to result in a lot of special "modes"
to handle various specific cases.
It looks like the best way to handle this is by creating a push/pop mechanism
for inserting and removing NSH headers over various protocols. As it is, I
really don't like that there are a variety of specific operations such as for
converting between VXLAN+NSH and
Ethernet+NSH - this should be decomposed. I believe that this
decomposition would also help address some other issues. For example, it's not
OK to hard code the UDP port of VXLAN-GPE in the kernel - this needs to be
configurable. However, the port for vanilla VXLAN is already configurable so if
you can add support for GPE and then layer NSH on top then you don't need to
introduce a new configuration mechanism.
The last thing that I wanted to mention is that I would strongly recommend
looking at supporting TLVs now. There has been extensive work in supporting
them for Geneve and not only should it now be much easier to do than it would
have been a little while back, some of these public interfaces have not been
part of a released version of OVS yet. Therefore, if there are changes that
need to be made to make them support multiple protocols, now would be the time
to do it before they can no longer be modified.