Thomas,

On Jun 29, 2012, at 2:52 PM, Thomas Narten wrote:
> Pedro Roque Marques <[email protected]> writes:
>
>> I object to the document on the following points:
>
>> 1) It doesn't appear to recognize that there are multiple types of
>> data-center deployments with different requirements.
>
> I'd like to hear more from others on this point.
>
> Please provide some examples of differences in DC deployments and
> *how* they impact NVO3, either from a problem statement perspective or
> from a framework perspective.

1) Whether L2 headers are preserved (including, for instance, VLAN
tags) or L2 headers are rewritten and only IP packets are delivered.

2) Whether the problem statement is about building a "bigger VLAN" or
about routing between IP "subnets", where an ACL controls the traffic
exchanged between subnets.

3) Whether the underlying network can be assumed to support
broadcast/multicast services.

4) Whether it is assumed that the hypervisor switch can be modified or
not.

5) Whether "unknown unicast" frames are broadcast or communication
requires the end-points to be known a priori.

These factors depend on the DC design.

> In another message:
>
> Pedro Roque Marques <[email protected]> writes:
>
>> On Jun 26, 2012, at 7:25 AM, <[email protected]> <[email protected]>
>> wrote:
>
>>> Hi Pedro,
>>>
>>> Could you provide some examples of the requirements differences
>>> between the types of data centers?
>
>> Differences in aggregate bandwidth and locality of data access drive
>> very different network designs.
>
>> To give you one example: large scale/large aggregate throughput DC
>> networks typically do not support any form of broadcast / multicast.
>> Solutions that rely on broadcast / IP multicast for learning are not
>> very useful for this category of DCs.
>
> I agree that support for multicast in the underlay cannot be
> assumed. Pretty much everyone I've asked on this point says the same
> thing.

My understanding is that there is a significant number of proposals in
this space that are based on data-plane learning. These tend to assume
an underlying multicast service. Are you suggesting that these also
need to support ingress replication? How is the membership discovered?
How does ingress replication "fit" with the scaling requirements?

The problem space that I'm personally interested in is not one where
mac-in-mac style solutions are applicable. There are, however, DC
designs where this type of solution is valid, and I suspect that the
proponents of this style of solution will disagree with your statement.

> So I think one of the requirements will need to be emulation of
> multicast (at the VN level) across underlays that do not support (or
> want to enable) multicast. But that is a requirements discussion, not
> a problem statement issue.

I'm not sure I agree. If you define the problem as supporting
reasonably small-scale DCs that follow a rather conventional
architecture, as some participants in this WG do (reasonably so), then
emulation of multicast is no longer an issue.

> Section 4.2.3 of the framework draft covers this.
>
> I also agree that solutions that rely on multicast for learning are
> only useful for smallish data centers. But we've been saying that all
> along. That is, one of the key work areas (as described in the
> problem statement) is having a separate oracle that can be queried for
> mappings precisely so NVEs do not have to do learning.
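To make the contrast concrete, here is a rough sketch of the two
BUM-delivery models as I understand them (all names are invented for
illustration; nothing here is taken from any draft or implementation):

    # Hypothetical sketch of the two BUM-delivery models under discussion.

    class Oracle:
        """Stand-in for the mapping service ("oracle") the problem
        statement describes: given a VN, return the underlay addresses
        of the remote NVEs participating in it."""

        def __init__(self):
            self.membership = {}  # vn_id -> set of NVE underlay addresses

        def members(self, vn_id):
            return self.membership.get(vn_id, set())

    def deliver_bum(vn_id, frame, send, oracle=None, mcast_group=None):
        """Deliver a broadcast/unknown-unicast frame on virtual network
        vn_id; send(dst, frame) encapsulates and transmits on the underlay."""
        if mcast_group is not None:
            # Data-plane-learning model: one copy onto an underlay
            # multicast group; egress NVEs learn inner-MAC -> ingress-NVE
            # bindings from the traffic itself.
            send(mcast_group, frame)
        elif oracle is not None:
            # Ingress replication: no underlay multicast assumed, but
            # membership must be known a priori (hence the oracle), and
            # the ingress NVE pays one unicast copy per remote NVE.
            for nve in oracle.members(vn_id):
                send(nve, frame)
        else:
            raise ValueError("need either an oracle or underlay multicast")

The replication loop is where my scaling question comes from: the cost
per BUM frame grows with the number of remote NVEs in the VN.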
>> Another example: "traditional / legacy" DCs may require the
>> transport of unmodified ethernet frames; large scale/low cost DCs do
>> not have any use for ethernet headers since applications are not
>> allowed to send / receive non-IP packets.
>
> I believe it is perfectly reasonable for an overlay to provide two
> almost identical services: an L2 service (which preserves ethernet
> frames at an e2e level) and an L3 service, for those cases where a
> deployment is happy with IP-only emulation.

These are very different problems. An L2 service is expected to solve
complex issues arising from interaction with IEEE bridging semantics;
solutions such as TRILL spend very considerable effort dealing with
this interaction. An L3 service, on the other hand, is mostly about
dealing with traffic that crosses subnets. My understanding is that DC
operators focused on an L3 service have no need for an L2 service and
its associated complexity. Saddling a possible solution with the need
to provide an L2 service is, IMHO, highly counter-productive.

> While there clearly are some differences in those two service models,
> I think both can be accommodated by the same NVO3 framework (and
> problem statement).

I completely disagree. You can write a problem statement that contains
both, but it will impose unnecessary complexity on the solutions.

> Indeed, that is IMO very much a goal of this
> WG.

That is unfortunate.

> Earlier versions of the charter had explicit text about this, that
> was later removed. But I see no reason why it should not continue to
> be a goal of this WG, even though not explicitly called out in the
> current charter.

Is it possible that it was removed because it is actually
counter-productive to attempt to build both an L2 overlay and an L3
overlay with the same solution? I'm under the impression that there is
enough history in the IETF around the desire for different approaches,
specifically since an L2 overlay is not really about just the ethernet
headers (which no end-system really cares about) but about IEEE
bridging semantics. The problem space where an L2 overlay is desired is
one where there is an expectation of interoperability between a bridged
network (e.g. the part of the network that has not been "upgraded") and
the overlay itself.
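To illustrate just the encapsulation half of that difference (a
deliberately naive sketch of my own; it says nothing about the bridging
semantics, which are the actual source of the L2 complexity):

    # Naive sketch: what the NVE tunnels under each service model. Real
    # frames carry VLAN tags and other complications this ignores.

    ETH_HDR_LEN = 14  # dst MAC (6) + src MAC (6) + ethertype (2), untagged

    def tunnel_payload(frame: bytes, l2_service: bool) -> bytes:
        """Return the bytes the ingress NVE carries across the overlay
        for a frame received from a tenant system."""
        if l2_service:
            # L2 service: the entire ethernet frame must survive end to
            # end, and the solution inherits IEEE bridging behaviour
            # (flooding, learning, loop avoidance) along with it.
            return frame
        # L3 service: only the inner IP packet is carried; ethernet
        # headers are consumed at ingress and regenerated at egress, so
        # no bridging semantics cross the overlay.
        return frame[ETH_HDR_LEN:]

The one-line difference in what is carried is misleading: everything
that makes an L2 service hard lives in the first branch's implied
bridging behaviour, not in the framing.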
> Thomas

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3