Pedro Roque Marques <[email protected]> writes: > On Jun 29, 2012, at 2:52 PM, Thomas Narten wrote:
> > Pedro Roque Marques <[email protected]> writes:
> >
> >> I object to the document on the following points:
> >
> >> 1) It doesn't appear to recognize that there are multiple types of
> >> data-center deployments with different requirements.
> >
> > I'd like to hear more from others on this point.
> >
> > Please provide some examples of differences in DC deployments and
> > *how* they impact NVO3, either from a problem statement perspective or
> > from a framework perspective.

> 1) Whether L2 headers are preserved (including for instance VLAN
> tags) or L2 headers are rewritten and only IP packets are delivered.

The goal all along with L2/ethernet service emulation has been to
provide a simple ethernet service. It has never been a goal (as far as
I can tell) to preserve, e.g., VLAN tags, or other specialized L2
features. The intention has been to carry L2 frames that have a
src/dest/Ethertype, and that's about it.

One of the tasks for this WG is to flesh out the above in more precise
detail (I know this came up during chartering and possibly the BOF as
well). But, IMO, we should keep it simple, covering common case
deployments. In the case where a workload is using multiple VLANs, the
logical starting point would be to map each VLAN into a separate VN.

> 2) Whether the problem statement is about building a "bigger vlan"
> or about routing between IP "subnets" where there is an ACL that
> controls traffic exchanges between subnets.

What is meant by "bigger VLAN"? More nodes attached to an individual
VLAN? I'm not sure this is a goal per se. I.e., how many individual
nodes can you put on a single VLAN today? Is that considered a
limitation? One thing we do need to do is support many more VLANs than
the current 4096 limit.

> 3) Whether the underlying network can be assumed to support
> broadcast/multicast services.

It cannot. At least, this is what everyone I've asked has said. I.e.,
use it if it is available and makes sense, but one cannot assume it's
present.
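Since the underlay cannot be assumed to provide multicast, one fallback raised later in this thread is ingress replication: the ingress NVE sends a unicast copy of a broadcast/multicast frame to every other NVE with members in the VN. A minimal sketch, assuming hypothetical names (`ingress_replicate`, `send_unicast`, NVE identifiers) that come from no NVO3 document:

```python
def ingress_replicate(frame, vn_members, local_nve, send_unicast):
    """Emulate VN-level broadcast over a unicast-only underlay by
    sending one copy of `frame` to every other NVE in the VN."""
    receivers = []
    for nve in vn_members:
        if nve == local_nve:
            continue  # never reflect the frame back to the ingress NVE
        send_unicast(nve, frame)
        receivers.append(nve)
    return receivers

# Example: nve-a floods a broadcast frame to the two other NVEs in the VN.
log = []
sent = ingress_replicate(b"\xff" * 6 + b"payload",
                         ["nve-a", "nve-b", "nve-c"], "nve-a",
                         lambda nve, frame: log.append(nve))
```

The scaling concern raised below is visible even in this sketch: the ingress NVE does O(N) work per broadcast frame, where N is the number of NVEs in the VN.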
There are numerous DCs where multicast is not supported.

> 4) Whether it is assumed that the hypervisor switch can or cannot be
> modified.

It can. The hypervisor will need to support NVE functionality.

> 5) Whether "unknown unicast" frames are broadcasted or communication
> requires the end-points to be known a-priori.

I think both models can be supported. For smaller DCs (and I'm not
defining "smaller" precisely, for obvious reasons), flooding unknown
unicast frames would appear to be a viable approach. I.e., that is the
approach VXLAN seems to be taking. But my sense, from talking with
various folks, is that flooding per the above is not sufficient for
larger DCs and that an alternate model needs to be pursued: one where
"unknown unicast" frames trigger a request to an oracle to provide the
necessary mapping.

> > I agree that support for multicast in the underlay cannot be
> > assumed. Pretty much everyone I've asked on this point says the same
> > thing.
>
> My understanding is that there is a significant number of proposals
> in this space that are based on data-plane learning. These tend to
> assume an underlying multicast service. Are you suggesting that
> these need to also support ingress replication? How is the
> membership discovered? How does ingress replication "fit" with
> scaling requirements?

Yes, ingress replication (or equivalent) will IMO need to be
supported. How else will you deliver broadcast/multicast if the
underlay doesn't provide such functionality? As mentioned above, I
think there are two different target environments we should support:
smaller and larger DCs.

> The problem space that I'm personally interested in is not one where
> mac-in-mac style solutions are applicable. There are however DC
> designs where this type of solution is valid, and I suspect that
> the proponents of this style of solution will disagree with your
> statement.

Available support for multicast/broadcast in the DC is NOT just a
technology statement.
It's also about whether the DC operator is willing to enable the
functionality. There are plenty of DCs where multicast is not enabled
on the underlay (beyond link-local), even if the deployed hardware is
capable of supporting it.

> > So I think one of the requirements will need to be emulation of
> > multicast (at the VN level) across underlays that do not support (or
> > want to enable) multicast. But that is a requirements discussion, not
> > a problem statement issue.
>
> I'm not sure I agree. If you define the problem as supporting
> reasonably small-scale DCs that follow a rather conventional
> architecture, which some participants in this WG do (reasonably so),
> then emulation of multicast is no longer an issue.

Not sure what you are saying here. If you are saying some DCs have
multicast enabled on the underlay, I agree. But it's also the case
that some do not, and we need an approach that works in such
deployments as well. So we can't just say "the underlay is assumed to
support multicast".

> > I believe it is perfectly reasonable for an overlay to provide two
> > almost identical services: an L2 service (which preserves ethernet
> > frames at an e2e level) and an L3 service, for those cases where a
> > deployment is happy with IP-only emulation.
>
> These are very different problems. An L2 service is expected to
> solve complex issues relating to interaction with IEEE bridging
> semantics. Solutions such as TRILL, etc. spend very considerable
> effort dealing with this interaction. An L3 service, on the other
> hand, is mostly about dealing with traffic that crosses subnets.

As said above, it is absolutely not a goal to perfectly emulate all L2
features. TRILL is operating in a very different design space than
NVO3.

> My understanding is that DC operators that are focused on an L3
> service have no need for an L2 service and its associated
> complexity. Saddling a possible solution with the need to provide
> an L2 service is IMHO highly counter-productive.
I keep hearing that *both* are needed: L3 when you can, L2 as a
fallback when, for some reason, L3-only emulation isn't good enough.

> > While there clearly are some differences in those two service models,
> > I think both can be accommodated by the same NVO3 framework (and
> > problem statement).
>
> I completely disagree. You can write a problem statement that
> contains both, but it will impose unnecessary complexity on the
> solutions.

The frameworks and core operations for both are 90% the same. The
differences will be in packet formats, address formats, etc. I.e.,
minor details, not the concepts.

And a solution doesn't have to do both. If you think L3-only is good
enough for your product, then implement only that. Likewise for
L2-only. On the other hand, doing both in a completely independent
manner, without using a common framework, seems like a missed
opportunity, or worse.

> > Earlier versions of the charter had explicit text about this, that
> > was later removed. But I see no reason why it should not continue to
> > be a goal of this WG, even though not explicitly called out in the
> > current charter.
>
> Is it possible that it was removed because it is actually
> counter-productive to attempt to build both an L2 overlay and an L3
> overlay with the same solution?

No. It was removed for other reasons, mostly having to do with keeping
the charter focused and simple.

Thomas

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
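The two "unknown unicast" models debated in the thread (flood, VXLAN-style, for smaller DCs; query an oracle/mapping service for larger DCs) can be sketched side by side. This is an illustrative sketch only; `resolve_unknown_unicast`, `fib`, and `oracle` are hypothetical names, not terms from any NVO3 draft:

```python
def resolve_unknown_unicast(dst_mac, fib, mode, oracle=None, all_peers=None):
    """Decide which NVE(s) should receive a frame whose destination MAC
    is not yet in the forwarding table `fib`."""
    if dst_mac in fib:
        return [fib[dst_mac]]          # already learned: plain unicast
    if mode == "flood":
        return list(all_peers)         # smaller-DC model: flood to all NVEs
    if mode == "oracle":
        nve = oracle(dst_mac)          # larger-DC model: ask the oracle
        if nve is None:
            return []                  # no mapping known: drop, don't flood
        fib[dst_mac] = nve             # cache the answer for later frames
        return [nve]
    raise ValueError("unknown mode: %r" % mode)

# Flood model: every peer NVE gets a copy of the unknown-unicast frame.
flooded = resolve_unknown_unicast("00:11:22:33:44:55", {}, "flood",
                                  all_peers=["nve-b", "nve-c"])

# Oracle model: one lookup, and the result is cached in the FIB.
fib = {}
resolved = resolve_unknown_unicast("00:11:22:33:44:55", fib, "oracle",
                                   oracle=lambda mac: "nve-c")
```

The design difference the thread turns on is visible here: the flood path scales with the number of NVEs per frame, while the oracle path costs one control-plane lookup and then forwards as ordinary unicast.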
