Hi Pedro,

Could you provide some examples of the requirements differences between the types of data centers? I'm not sure that a data center taxonomy is an important addition to the draft, but I'm definitely interested in what may be missing in the area of requirements. Specific requirements belong in requirements drafts, but I want to understand how you see the differences affecting what the WG needs to support.
I don't understand your assertion that Section 2.7 appears to "dismiss" RFC 4364, when the latter part of your message appears to describe how RFC 4364 (BGP/MPLS VPNs) meets the requirements in that section. What did I miss?

I will admit that I don't expect to see BGP implemented/deployed in hypervisor softswitches (although OSPF as the edge protocol for BGP/MPLS VPNs seems more realistic for that implementation location), and hence I'm interested in a standard protocol for NVEs to talk to an "oracle", which could be a set of network nodes that use a routing protocol to distribute information among themselves. That possibility of a distributed structure is not easy to infer from the current draft, so we probably should add text to describe it ... and knock out the one parenthetical remark that the "oracle" is directory-based. I don't care how the "oracle" is structured internally (a routing protocol is a fine way to distribute information), although NVEs that don't directly participate in the "oracle" probably view it as some sort of directory on which they perform address lookups.

Inter-VN traffic (what you refer to as inter-CUG traffic) is handled by a straightforward application of IP routing to the inner IP headers; this is similar to the well-understood application of IP routing to forward traffic across VLANs. We should talk about VRFs as something other than a limitation of current approaches - for VLANs, VRFs (separate instances of routing) are definitely a feature, and I expect this to carry forward to nvo3 VNs. In addition, we need to make changes to address Dimitri's comments about problems with the current VRF text.
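To make the inter-VN point concrete: the idea of VRFs as separate routing instances, with inter-VN traffic resolved by an ordinary longest-prefix-match lookup on the inner IP destination, can be sketched as follows. This is a minimal, purely illustrative model; all class and next-hop names here are hypothetical and do not come from any draft.

```python
import ipaddress

class Vrf:
    """One routing instance per virtual network (VN) - a toy VRF."""
    def __init__(self, name):
        self.name = name
        self.routes = {}  # prefix -> next hop (an NVE, or a hand-off to another VRF)

    def add_route(self, prefix, next_hop):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, dst_ip):
        """Longest-prefix match on the inner destination address."""
        dst = ipaddress.ip_address(dst_ip)
        best = None
        for prefix, nh in self.routes.items():
            if dst in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
                best = (prefix, nh)
        return best[1] if best else None

# Two VNs, each with its own instance of routing. A route pointing from
# "red" into "blue" makes inter-VN (inter-CUG) traffic a plain IP
# forwarding decision on the inner header, just as with inter-VLAN routing.
red = Vrf("red")
blue = Vrf("blue")
red.add_route("10.1.0.0/16", "nve-a")            # intra-VN route
red.add_route("10.2.0.0/16", ("via-vrf", blue))  # inter-VN: handled by IP routing
blue.add_route("10.2.0.0/16", "nve-b")
```

The point of the sketch is that nothing VN-specific happens at the inter-VN boundary: it is the same lookup mechanics, just performed in a second routing instance.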
Thanks,
--David

From: [email protected] [mailto:[email protected]] On Behalf Of Pedro Roque Marques
Sent: Friday, June 22, 2012 10:57 AM
To: Benson Schliesser
Cc: Matthew (Matthew) Bocci; [email protected]; [email protected]
Subject: Re: [nvo3] call for adoption: draft-narten-nvo3-overlay-problem-statement-02

Benson,
I object to the document on the following points:
1) It doesn't appear to recognize that there are multiple types of data-center deployments with different requirements.
2) It appears to "dismiss" RFC 4364 based on the requirements of Section 2.7.
3) It does not discuss the requirements for inter-CUG traffic.

On the first point:
- One can place data-center design approaches on a continuum. At one extreme one encounters data-centers that in general:
o Have a small number of physical servers per cluster (orchestration system scope).
o Rely on hardware resiliency.
o Use a single VM per application tier.
o Have VMs with very long lifespans.
o Use a SAN or local disks for storage.
o Use an oversubscribed network that is L2 based.
o Tend to exchange traffic within a "VLAN".

At the other end of the spectrum you have data-centers in which the design points are in general:
o Very large number of physical servers.
o Rely on software resiliency.
o Use multiple VMs per app tier, often 10s or 100s.
o Have short VM lifespans, since VMs are created and deleted according to load requirements.
o Use distributed block storage and distributed NAS file-systems.
o Use a non-oversubscribed network from the TOR up that is L3 only and typically does not support multicast.
o Most of the traffic crosses "closed user group" boundaries.

Between these two extremes there are often no common elements: server hardware, storage, software architectures and network are completely different.
The document in question seems to be focusing on solutions at the left end of the spectrum (the first approach) and does not at all reflect the requirements of data-centers at the middle or at the right end of the spectrum (the latter approach).

On the second point:
- Section 2.7 of the document states that the overlay technology must take into account:
1. Highly distributed systems.
2. Many highly distributed virtual networks with sparse membership.
3. Highly dynamic end systems.
4. Work with existing [...] routers and switches.
5. Administered by a single administrative domain.

L3VPN has proven to work in highly distributed systems, with many distributed virtual networks with sparse membership and with highly dynamic reachability information. It can be encapsulated as MPLS over GRE, which implies IP packets, and will work with any existing routers and switches. It can be administered by a single administrative domain. Among the related work that is cited, I don't believe that TRILL, 802.1aq and ARMD can claim to work with existing routers and switches. And none of those approaches has proven the same sort of route scaling as L3VPN.

thank you,
Pedro.

On Jun 16, 2012, at 6:01 PM, Benson Schliesser wrote:

Dear NVO3 Participants -

This message begins a two week Call for Adoption of http://tools.ietf.org/html/draft-narten-nvo3-overlay-problem-statement-02 by the NVO3 working group, ending on 30-June-2012. Please respond to the NVO3 mailing list with any statements of approval or disapproval, along with any additional comments that might explain your position. Also, if any NVO3 participant is aware of IPR associated with this draft, please inform the mailing list and/or the NVO3 chairs.

Thanks,
-Benson & Matthew

_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
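[Editor's note on the encapsulation discussed in this thread: the MPLS-over-GRE framing that Pedro refers to (specified in RFC 4023) can be sketched as below. This is a hedged illustration only; the field values, label number, and helper names are examples, not taken from the thread or from any draft.]

```python
import struct

GRE_PROTO_MPLS = 0x8847  # EtherType for MPLS unicast, carried in the GRE header

def mpls_label_entry(label, tc=0, bottom=True, ttl=64):
    """One 4-byte MPLS label stack entry: label(20) | TC(3) | S(1) | TTL(8)."""
    word = (label << 12) | (tc << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def gre_mpls(payload, label):
    """Minimal GRE header (no checksum/key/sequence options) + MPLS stack + payload.

    The result is carried in a plain outer IP packet, which is why the
    encapsulation transits existing routers and switches unchanged.
    """
    gre = struct.pack("!HH", 0, GRE_PROTO_MPLS)  # flags/version word = 0
    return gre + mpls_label_entry(label) + payload

# Example: the VPN label (here 100, an arbitrary illustrative value)
# is what selects the VRF at the egress PE/NVE.
pkt = gre_mpls(b"inner-ip-packet", label=100)
```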
