Hi Tom,

Based on your comments, it seems you have not followed the discussion. It
would be good if you went through the thread in the archive.

First, we started the discussion by announcing that we have open sourced a
working static tunnel L2TPv3 implementation as an overlay at the vNIC level
(allowing for off-host switching, direct overlay-to-physical and direct
VM-to-VM overlay). Based on this discussion I will add back the
"application-specific data between header and payload" feature (I had
removed it because I saw it as unnecessary).

Second, I have been very specific about using _STATIC_ tunnels. What you are 
saying in your mail is mostly invalid in a static tunnel context.

Static tunnels are just PWEs - the same as any other encapsulation.

1. There is no control plane. RFC 4719 does not apply.

2. For an example, see
http://tools.ietf.org/html/draft-mkonstan-l2tpext-keyed-ipv6-tunnel-00 . This
is one example use case; the limitations specified in it are not necessary in
most others.

3. L2TPv3 static tunnels do not specify what the payload is and carry no
in-band information about it. You can have PPP, Ethernet, IP, ZigBee, RFC 1149
or whatever else you may like. If you want a special packet type in the static
tunnel case, that is up to you. L2TPv3 will carry it for you.
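To illustrate (a minimal sketch of my own, not from any draft; the function
name is hypothetical): per RFC 3931, a static L2TPv3-over-IP data header is
just a 32-bit Session ID plus an optional cookie agreed out of band, and the
bytes after that can be anything.

```python
import struct

def l2tpv3_encap(session_id, cookie, payload):
    """Sketch: build a static L2TPv3-over-IP packet body.

    Per RFC 3931 the data header is a 4-byte Session ID plus an optional
    cookie of 0, 4 or 8 bytes agreed out of band. Nothing in the header
    says what the payload is -- Ethernet, PPP, IP or anything else the
    endpoints have configured.
    """
    assert len(cookie) in (0, 4, 8)
    return struct.pack("!I", session_id) + cookie + payload

# Carrying an IP packet or an Ethernet frame looks identical on the wire:
pkt = l2tpv3_encap(0x2A, b"\xde\xad\xbe\xef", b"<any payload bytes>")
```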

4. There is no issue with offload in static tunnels, because there is nothing
in-band in a static tunnel to signal the actual header length. In fact, even
the cookie size is unknown. If you want an offload, you have to specify where
the offload should start looking. This is no different from any other case of
an arbitrary variable-length header. So if a NIC or NPU can offload starting
from an arbitrary offset into the packet (e.g. Geneve), it should also be
possible to make it offload L2TPv3.
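A sketch of what that means for an offload engine (hypothetical function,
assuming the per-session configuration is available): since the cookie size is
not signalled in-band, the inner-packet offset is pure configuration
arithmetic.

```python
def inner_packet_offset(l2tp_offset, cookie_len):
    """Sketch: offset of the encapsulated packet in a static
    L2TPv3-over-IP frame. cookie_len (0, 4 or 8 bytes) is per-session
    configuration -- it is not signalled in-band, so an offload engine
    has to be programmed with it, just as for any other arbitrary
    variable-length header.
    """
    SESSION_ID_LEN = 4  # 32-bit Session ID, RFC 3931
    return l2tp_offset + SESSION_ID_LEN + cookie_len

# e.g. L2TPv3 directly over IPv4 with no IP options: inner packet at byte 32
assert inner_packet_offset(20, 8) == 32
```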

5. I am not surprised by what you saw with GRE. However, it does not apply
here - see point 4. You saw that because GRE provides information in its
header about what the payload is. So an offload implementation is entitled to
know where to find the payload packet and how to treat it. Protocols that do
not provide this information in the header (such as L2TPv3) do not have this
problem. With these, you need to configure explicitly where to look for the
packet and how to offload it.
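For contrast (again a hypothetical sketch of my own): GRE carries the
payload's EtherType in-band in its Protocol Type field (RFC 2784), so
hardware parses the payload type from the packet itself - and is thrown off
when an unknown optional field shifts the payload - whereas for static
L2TPv3 the payload type exists only in configuration.

```python
import struct

def gre_inner_ethertype(gre_header):
    """Sketch: the GRE base header (RFC 2784) is a 16-bit flags/version
    field followed by a 16-bit Protocol Type (an EtherType). Because the
    payload type is in-band, hardware can parse past GRE unaided -- and
    breaks when an unrecognised optional field moves the payload. A
    static L2TPv3 header carries no such field at all.
    """
    flags_ver, proto = struct.unpack("!HH", gre_header[:4])
    return proto

# 0x0800 is the EtherType for IPv4
assert gre_inner_ethertype(b"\x00\x00\x08\x00") == 0x0800
```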

I am not a VXLAN expert. However, I suspect that most of this is applicable 
there (or can be made to apply) too.

So I am going to repeat what has been said quite a few times - the world does
not need another encapsulation; the existing one(s) can do the job. Please use
them.

A

On 01/03/14 17:13, Tom Herbert wrote:

On Fri, Feb 28, 2014 at 11:01 PM, Anton Ivanov (antivano)
<[email protected]><mailto:[email protected]> wrote:


On 01/03/14 01:51, Tom Herbert wrote:


On Fri, Feb 28, 2014 at 3:24 PM, Sam Aldrin 
<[email protected]><mailto:[email protected]> wrote:


Hi all,

Read the draft but have a few questions along the same lines others have asked.

- Is this draft intended for standardization within the NVo3 WG? The status
indicates it as informational. Also, it would be good to have it as
draft-nvo3...., if it is meant for the NVo3 WG.
- I fail to find good reasoning, in the current version of the draft, on why
the design of the encap transport header should be closely associated with
metadata, or closely tied together with it. Could you add more details to
clarify?


The draft alludes to the general need for extensibility, but does not
provide any example uses, so maybe I can suggest one. We have a real
use case for an encapsulation protocol with security to allow
validation of the virtual network identifier. In their current form,
vxlan and nvgre have no provisions for authenticating or integrity
checking of the vni; existing mechanisms in the network were not deemed
robust enough to guarantee integrity of the vni and ensure strict
isolation between tenants. UDP checksum is not sufficient for this. We
need a mechanism to at least enforce an unpredictable security
token, or possibly stronger authentication using something like a
message digest. This is intrinsic to the encapsulation; we cannot
deploy network virtualization without this security, hence an
extensible protocol is desirable. Additionally, as the network scales
and new threats emerge, we may need further extensions to adapt.
All of this needs to be efficient and amenable to HW performance
optimizations.





Tom, you are describing the L2TPv3 cookie.
http://tools.ietf.org/html/rfc3931#section-4.1.1 That has already been
defined and standardized in 2005.



That's great, and I would certainly want to adapt that to a data
center encapsulation protocol, but L2TP is *not* an equivalent
protocol to encapsulations like GRE. It is a tunnel protocol more
than an encapsulation. It is circuit based, requiring negotiation, and
there is no way to specify an Ethertype or IP protocol. As I mentioned
before, I'm not going to artificially force IP packets into Ethernet
frames just to satisfy the needs of an encapsulation protocol-- this
needs to work the other way around: we need an encapsulation that is
generic enough to directly encapsulate IP packets and other protocols.



As quite a few people said - we do not need to invent a new
encapsulation for the goals of this draft or for the goals of NVO3 for
that matter. This just proves the point.



Saying we don't need a new encapsulation is not proof we don't need one.  :-)



We can copy that option to VXLAN or NVGRE as an extension if we wish to.



Unfortunately, that's not feasible. In the optional-fields model of vxlan
and nvgre, in order to compute the offset of the next header, an
implementation needs to know the lengths of all the optional fields
present. So if a new optional field is used, a device that doesn't know
about it won't be able to skip over it. This manifests itself when
hardware devices implement forwarding based on parsing the encapsulated
headers. We saw exactly this problem when trying to add the security
token to GRE: it broke ECMP in network switches as well as LRO on the
NIC. So once vxlan and nvgre are deployed in HW, there really is no way
to extend them without breaking compatibility-- for all intents and
purposes these protocols are not extensible. The solution, which we
advocate in GUE, is that protocols with variable length headers need
to have a header length field to allow devices to skip over unknown
fields.

Another deficiency that I see in the current encapsulations that
really needs to be addressed is the interaction with IPsec. Just
saying we can use IPsec with any of these encapsulations to provide
security is *not* sufficient! For instance, we can secure vxlan
packets with IPsec by encrypting the UDP packet. This provides packet
security, but the network then has no visibility into the
encapsulation, so we can't route or firewall based on the vni, and
we've lost the value. For this reason we really want the encapsulation
in the outside header (which actually would be the same property if
vlan were used). I don't see a reasonable way to do this with
protocols encapsulating by Ethertype, which is a reason why GUE uses
IP protocol.




As far as metadata extensions go - I believe there is agreement that we
should do them. Similarly, there is a consensus that they should not be
welded into the network header. That particular aspect of the design has
no function other than to be a "mono-culture monopoly license".




Unless metadata extensions, or for that matter TLVs, are defined
to be intrinsic to the operation of the protocol, it's exceedingly
likely that hardware vendors will implement their fast path assuming
no extensions or options. This is precisely why IP options have been
rendered useless, and it does not bode well for metadata extensions or
TLVs. Besides, what is important enough to go directly in the header
versus what should be in an extension seems arbitrary to me.

Any options we deploy associated with encapsulation will be important
and may very well appear in *all* packets sent, so they need to be
super efficient to process. Neither do we want any baked-in
restrictions on which fields we might want to route or firewall on;
for instance, some day we might add a new QoS classification field for
special tenants or groups. The encapsulation protocol should support
this; it should not kill HW optimizations, and I would expect that
we can program switches to perform QoS routing based on the new field
without needing a HW change.

Tom



A.
_______________________________________________
nvo3 mailing list
[email protected]<mailto:[email protected]>
https://www.ietf.org/mailman/listinfo/nvo3


