Hello Erik and Tom,

from Erik's signature:

   Perfection is achieved, not when there is nothing more to add,
   but when there is nothing left to take away.

I think this is what Erik tries to achieve :-)  Reading the document, there is 
some good background/motivation, but sometimes many options too - maybe too 
many?


To pick up some of the points:

VNI: we live with "flat" IP addresses, and yet they support the rich structure 
of the name space. I don't see why this should be different for overlay 
headers: the control plane (or the configuration) will know about any 
structure and will program the data plane accordingly; VNIs are then just 
reference numbers (read: flat).

If you plan for something fancy, e.g. copy-VlanID-into-VNI-bits, I would 
still leave it to the control/config plane to negotiate such a capability, 
and to an additional document to describe such features. No need to discuss 
all potential options or ideas now. We just need enough VNI bits.
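To make the "structure lives in the control plane, the data plane sees flat 
numbers" point concrete, here is a minimal sketch; the class and key names are 
purely illustrative, not anything from the draft:

```python
# Sketch: the control plane keeps whatever hierarchy exists (tenant, segment,
# VLAN, ...) and hands the data plane nothing but a flat VNI.
class ControlPlane:
    def __init__(self):
        self._next_vni = 1
        self._by_key = {}            # structured key -> flat VNI

    def vni_for(self, tenant, segment):
        """Allocate (or look up) a flat VNI for a structured key."""
        key = (tenant, segment)
        if key not in self._by_key:
            self._by_key[key] = self._next_vni
            self._next_vni += 1
        return self._by_key[key]

cp = ControlPlane()
vni = cp.vni_for("tenant-a", "blue")  # the data plane only ever sees this int
```

The data plane then treats the VNI as an opaque reference number; any meaning 
behind it stays in the mapping above.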


QoS: I would consider additional QoS bits in the NVO3 overlay header 
redundant. Either the tenant frame and underlay header already carry some 
QoS, in which case the requirement is for the data plane to be able to map 
QoS values (probably a small table). Or the tenant frame has no QoS - which 
sounds like a fixed mapping then.

Otherwise it's hard to re-use existing underlay forwarding technology (well, 
IPv4/v6).
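The "small table" I have in mind is really just this; the DSCP values and the 
default are illustrative assumptions, not proposed mappings:

```python
# Sketch: map tenant-frame DSCP -> underlay DSCP via a small table.
# If the tenant frame carries no QoS marking, fall back to a fixed default.
QOS_MAP = {46: 46, 34: 34, 0: 0}   # e.g. EF / AF41 / best effort (illustrative)
DEFAULT_DSCP = 0

def underlay_dscp(tenant_dscp):
    if tenant_dscp is None:        # tenant frame has no QoS at all
        return DEFAULT_DSCP        # -> the "fixed mapping" case
    return QOS_MAP.get(tenant_dscp, DEFAULT_DSCP)
```

Nothing here needs extra bits in the overlay header; the overlay inherits the 
underlay's existing QoS machinery.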


Security: could we re-use IPsec ESP/AH? In tunnel mode, since we would 
already add an underlay IPv4/IPv6 header?
(I'm no expert in this area, but why not re-use other people's work.)


ECMP: with leaf-spine topologies in mind and IP as an underlay, I would say 
being able to use already existing IP ECMP methods is a plus that simplifies 
deployments. I would make it a requirement.
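The usual trick for this (as in VXLAN and GRE/UDP) is to put flow entropy into 
the outer UDP source port, so unmodified routers ECMP on the ordinary 5-tuple. 
A sketch, with the hash and port range as assumptions:

```python
# Sketch: derive the outer UDP source port from a hash of the inner flow,
# so existing underlay routers spread flows via their normal 5-tuple ECMP.
import zlib

def entropy_src_port(inner_five_tuple):
    # Any stable hash works; CRC32 is just a convenient stand-in here.
    h = zlib.crc32(repr(inner_five_tuple).encode())
    return 49152 + (h % 16384)     # keep the port in the dynamic range

port = entropy_src_port(("10.0.0.1", "10.0.0.2", 6, 12345, 80))
```

Packets of one inner flow always get the same outer port (no reordering), 
while different flows spread across paths.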


Meta-Data: I probably missed some discussions (sorry!), but what data would 
this be?
Anyway, it sounds TLV-like, and a variable overhead length may be a problem 
for the overlay MTU. Assuming that this Meta-Data is orthogonal to the VNI, 
would another "MD-ID" field help? The control/config plane could then map 
this MD-ID to the Meta-Data and program the data plane accordingly.
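The MD-ID indirection I am suggesting amounts to this little sketch (all names 
hypothetical): the packet carries only a fixed-size ID, and the possibly large 
TLV-like data lives in a table the control plane programs.

```python
# Sketch: fixed-size MD-ID in the header, variable meta-data in a table.
md_table = {}                        # MD-ID -> opaque meta-data

def program_md(md_id, metadata):     # control-/config-plane side
    md_table[md_id] = metadata

def resolve_md(md_id):               # data-plane side: one table lookup
    return md_table.get(md_id)

program_md(7, {"userid": "[email protected]", "class": "gold"})
```

That keeps the per-packet overhead constant regardless of how large the 
meta-data itself grows.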


I think another point, which was mentioned on the list, is the 
fragmentation/reassembly or MTU problem. For simplicity I would prefer that 
the NVO3 header has no support for this. If the tenant frame is IPv4/v6, then 
fragmentation/reassembly should happen at that level. For Ethernet tenant 
frames - no idea, but I assume Ethernet networks solve the MTU problem by 
"correct configuration"? So the NVO3 "link" would just be another interface 
with an unusual MTU (?).
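The "unusual MTU" is simply the underlay MTU minus the encapsulation 
overhead; the header sizes below are illustrative (outer IPv4 + UDP plus an 
assumed 8-byte NVO3 header, which is not a settled number):

```python
# Sketch: the NVO3 "link" is just an interface whose MTU is the underlay
# MTU minus the overlay encapsulation overhead.
OUTER_IPV4, OUTER_UDP, NVO3_HDR = 20, 8, 8   # bytes; NVO3_HDR is assumed

def overlay_mtu(underlay_mtu):
    return underlay_mtu - (OUTER_IPV4 + OUTER_UDP + NVO3_HDR)

print(overlay_mtu(1500))   # -> 1464
```

So "correct configuration" would mean configuring the tenant-facing interface 
with that reduced MTU instead of fragmenting in the overlay.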


The document also mentions the "learning bridge" behaviour. I would see the 
details of MAC learning as "control plane" (albeit not necessarily the 
"centralized authority" of the charter). For the data plane it is a 
requirement to punt packets to the control plane - well, actually to forward 
the packet and punt a copy to the control plane. I wonder if we have other 
requirements that trigger such a copy/punt? (e.g. an OAM/alert flag, as 
discussed in VXLAN-GPE)
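The "forward and punt a copy" split I mean looks roughly like this (tables and 
names are a hypothetical sketch, not a proposed design):

```python
# Sketch: on an unknown source MAC the data plane still forwards the frame,
# but also queues a copy for the control plane to do the actual learning.
mac_table = {}                       # learned MAC -> remote NVE
punted = []                          # stand-in for the punt queue

def handle_frame(src_mac, dst_mac, frame):
    if src_mac not in mac_table:
        punted.append((src_mac, frame))    # copy to control plane ...
    return mac_table.get(dst_mac, "flood") # ... while forwarding proceeds

mac_table["aa:bb"] = "nve-2"
```

The key point is that forwarding never blocks on learning; the control plane 
consumes the punt queue at its own pace.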


Regards, Marc





On Tue, 21 Oct 2014 14:27:44 -0700, Erik Nordmark wrote:
> On 10/21/14 12:33 PM, Tom Herbert wrote:
>> Agreed, but I think there are a few more probably. 
> 
> Tom,
> I think so too - just need to get the specific requirements written down 
> and get some consensus.
> 
>>>   - MUST contain a VNID field. This field MUST be large enough to scale
>>> to 100's of thousands of virtual networks
>> In even moderate sized deployments hierarchical assignment of VNIDs,
>> sub-VNs, fragmentation in the space, classed VNIDs might be used--
>> this probably should be reflected in requirements to allow for that (I
>> have pointed this out previously).
> 
> Are you suggesting that the numeric requirement should be higher? 32 bit? 
> 64 bit?
> 
> I'm trying to tease out some fairly specific requirements that can 
> influence the dataplane encapsulation. Could be to have a larger initial 
> number of bits, or the ability to expand the size of the VNID over time. 
> Part of this ties in with the security question below.
>> 
>> I believe the bigger item missing around VNID is security. Isolation
>> between VNs is a critical requirement, so it follows that
>> the VNID must be adequately secured to protect against spoofing or
>> corruption.
> 
> I understand your concern, but I'm trying to get some actual requirements 
> written down.
> An example requirement in this space would be that the encapsulation format 
> should have the option of providing some keyed hash (or similar) which 
> covers at least the VNID to make it harder for an attacker to spoof NVO3 
> packets.
> 
> One can get similar security by having a larger VNID space which is 
> sparsely populated. A 32 bit VNID plus 64 bits of hash isn't that different 
> from a 96 bit VNID unless the hash also covers the inner packet (which 
> would be expensive).
> 
> In the world of solutions one can envision a variable length VNID which can 
> be broken up into flexible parts:
>  - ID
>  - key ID (to allow for smooth rekeying)
>  - keyed hash
> But that might be overkill.
> 
> Currently I don't quite know how to phrase a requirement without talking 
> about potential solutions. We could have a nebulous requirement that the 
> dataplane encapsulation format MUST allow the addition of higher assurance 
> mechanisms that prevent off-path attackers (those that can not intercept 
> NVO3 packets flying by) from injecting NVO3 packets.
> 
>>>   - ??? QoS field inside the NVO3 overlay header or not ???
>> I would add that a congestion control mechanism in the encapsulation
>> layer might also be considered.
> 
> "considered" doesn't sound like a requirement. Did you have something 
> specific in mind?
> 
> There has been discussion in the IETF about requiring at least some form of 
> circuit breaker for tunneling protocols. But the proposals I've seen (from 
> Bob Briscoe and folks) seem to need nothing from the tunnel encaps protocol 
> (might use the ECN bits in the outer IP header). We could list some "heads 
> up" non-requirements in the draft such as "The encapsulation protocol 
> should not prevent adding circuit breakers for congestion control" even 
> though we do not yet know what that would imply.
> 
>>>   - MUST/SHOULD facilitate ECMP in unmodified routers in the underlay
>> Is it reasonable to require UDP based encapsulation to support this,
>> or at least that any protocol can easily be encapsulated within UDP
>> (like being done for GRE/UDP)?
> 
> A UDP encaps would be the easy way to solve it. But first of all the WG 
> needs to decide whether this should be a requirement on the encapsulation.
> 
>> 
>>> (Others participants might have other requirements on the encapsulation
>>> format. My main message is the focus on the encaps requirements which 
>>> seems
>>> to be quite few.)
>> With respect to encapsulation protocol requirements, I would propose
>> that extensibility (ability to associate meta data with the
>> encapsulation) is also a requirement.
> 
> I guess we would need a bit more detail to get to a requirement which helps 
> in the selection.
> For instance, is it sufficient to have 2 bits of meta-data? Or do we need 
> the ability for a large variable-length list of meta-data (with e.g. a 
> userid in the form of a NAI plus a certificate and CRLs ...) I'm 
> exaggerating to push on the size question.
> 
> Also, some of the meta-data discussion has been about vendor-specific 
> meta-data.
> If we have a multi-protocol encaps (see below) then vendor-specific 
> meta-data can be handled by the vendors getting a code point and doing 
> their own header between the NVO3 and the inner Ethernet/IP header. That 
> avoids us having a standardization discussion about how much size to 
> allocate for some proprietary meta-data that we don't even know about.
> 
> Thus if there is meta-data that needs to be standardized, then I think we 
> should discuss that specific meta-data and the resulting requirements on 
> the dataplane encapsulation.
>> 
>>> The implementation and operational requirements in
>>> draft-ietf-nvo3-dataplane-requirements should IMHO belong in a different
>>> document. Some might already be covered in the architecture document. And
>>> others might need to be refined as we specify more details for the NVO3
>>> solution.
>>> 
>> One other thing that should be clarified: The protocol of the
>> encapsulated packet is described in the req. draft as
>> "Tenant frame: Ethernet or IP based upon the VNI type".
>> 
>> This is a requirement for a multi-protocol encapsulation which is
>> good, but seems restrictive in the protocols carried and implies that
>> protocol is inferred from VNID as opposed to requiring/allowing
>> the encapsulation layer to have a protocol field.
> 
> The WG should entertain whether supporting some protocol type in the 
> encapsulation is a requirement or not.
> The short list of requirements is silent on that front.
> 
> And as you've pointed out, if there is such a requirement, then the next 
> question is whether it is an Ethernet type or an IP protocol field (or 
> both).
> 
>    Erik
> 
> _______________________________________________
> nvo3 mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/nvo3
> 
