Janos,

It is well understood that PBB and SPBM address traditional Ethernet limitations.
However, our intent is not to focus the nvo3 framework towards specific L2 technologies. In fact, nvo3 targets IP and/or MPLS as the underlying transport, not Ethernet-based transport.

Thanks,
Marc

> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of János Farkas
> Sent: Friday, June 29, 2012 3:35 PM
> To: [email protected]
> Subject: Re: [nvo3] call for adoption: draft-lasserre-nvo3-framework-02
>
> Hi,
>
> I have some comments on the draft. Please find them below under the section header they belong to.
>
> 1. Introduction
>
> "Limited VLAN space"
> The 12-bit overlay network ID limitation of the Ethernet header has not been there for several years. There are different types of VLANs. The I-SID can be considered a 24-bit VLAN ID, which is sometimes called an I-VLAN. Furthermore, B-VLANs and S-VLANs exist alongside the C-VLANs that were defined in the first place. The combination of these VLANs provides flexibility for network virtualization; altogether, the Ethernet header today provides 60 bits for network virtualization. It would be better to remove this bullet because the limitation is not there today.
>
> "FIB explosion due to handling of large number of MACs/IP addresses"
> FIB explosion might occur in an L2 network if the already available L2 network virtualization tools are not used. FIB explosion is not likely in a PBB network. It would be better to update the bullet accordingly or remove it.
>
> "Spanning Tree limitations"
> L2 networks have not been limited to a single spanning tree for several years. Shortest Path Bridging (SPB) provides shortest path forwarding and uses physical links that might have been blocked by a single spanning tree instance. Moreover, MSTP also provides multiple spanning tree instances, which allow the use of the physical links. It would be better to remove the bullet.
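[As a side note on the 60-bit figure in the comment above: it is simply the sum of the stacked Ethernet tag spaces. A minimal sketch of the arithmetic, using the customary IEEE 802.1Q/802.1ah field names for illustration:]

```python
# Width in bits of each Ethernet network-virtualization identifier,
# per the customary IEEE 802.1Q/802.1ah tag layout:
FIELD_BITS = {
    "C-VID": 12,   # customer VLAN ID
    "S-VID": 12,   # service (provider) VLAN ID
    "B-VID": 12,   # backbone VLAN ID
    "I-SID": 24,   # backbone service instance ID (the "I-VLAN")
}

total_bits = sum(FIELD_BITS.values())
print(total_bits)                    # 60 bits available in total
print(2 ** FIELD_BITS["I-SID"])      # 16777216 distinct I-SIDs alone
```

[Whether all of these fields are freely combinable in a given deployment is another matter, so the 60 bits are best read as an upper bound.]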
> "Broadcast storms"
> Broadcast storms are not necessarily present in today's L2 networks; e.g., there are no broadcast storms at all in an SPBM network. Furthermore, when the existing L2 network virtualization techniques are applied, the broadcast due to an unknown destination is kept within the virtual network, so it is not a broadcast storm. It would be better to update the bullet such that it specifies the problem exactly, or to remove it.
>
> "Inefficient Broadcast/Multicast handling"
> It is not clear what is meant here; it would be better to be more specific. (If it is about L2, then the statement is not valid. Handling of multicast is very efficient in L2; both shortest path trees and shared trees can be used today.)
>
> "Limited mobility/portability support"
> It is not clear what is meant here. Is it about VM migration? If so, then the statement is not exact for L2; VDP supports VM migration. (Furthermore, L2 networks typically learn the new location of a station automatically after its movement.)
>
> "Lack of service auto-discovery"
> If it is about L2, then that is not the case: SPB has built-in service auto-discovery. It would be better to be more specific in the bullet or to remove it.
>
> The issue list is introduced as "Existing virtual network models used for data center networks have known limitations, specifically in the context of multiple tenants. These issues can be summarized as". As the comments on the bullets show, the list is not accurate given the existing virtual network models available for data centers today. Therefore, it would be better to bring this issue list up to date with today's networking technologies. (I understand that the WG aims to provide a solution over L3. Nevertheless, it seems to me that it would be better to give a motivation for it other than that L2 has issues, given that no issue has been pointed out that is still valid for Ethernet today.)
>
> 4.1. Pros & Cons
>
> "Unicast tunneling state management is handled at the edge of the network. Intermediate transport nodes are unaware of such state. Note that this is not the case when multicast is enabled in the core network."
> This statement may depend on the solution applied, or the terms used here might not be entirely clear. Even if the core only provides point-to-point tunnels, those tunnels have to be established in the core; hence, maintenance of some state is required in the core nodes. If the core provides multipoint tunnels as well, then of course more state is required to maintain a multipoint tunnel in the core than a point-to-point tunnel. The type of connectivity that needs to be provided by the core depends on the actual location of the VMs belonging to the same group/tenant. If the VMs of a group are only behind two NVEs, then it is enough for the core to provide a point-to-point connectivity/tunnel, independently of whether the traffic among the VMs is unicast or multicast. If the VMs of a group are behind more than two NVEs, then the core has to provide multipoint connectivity between those NVEs; the way it is provided is solution dependent.
>
> "Load balancing may not be optimal as the hash algorithm may not work well due to the limited number of combinations of tunnel source and destination addresses"
> There are other possibilities for load balancing besides hashing. It would be better to update the sentence to "Hash-based load balancing may not be optimal as the hash algorithm may not work well due to the limited number of combinations of tunnel source and destination addresses".
>
> 4.2.1. Data plane vs Control plane driven
>
> "Multicasting in the core network for dynamic learning can lead to significant scalability limitations."
> Multicast should be kept within the virtual network even while it is transmitted in the core, which can be aided by service auto-discovery.
> Having these features, the scalability limitations should not be significant.
>
> "Specific forwarding rules must be enforced to prevent loops from happening. This can be achieved using a spanning tree protocol or a shortest path tree, or using a split-horizon mesh."
> "A spanning tree protocol" is not in the same category as "a shortest path tree" or "a split-horizon mesh" here. The first one is a protocol, while the latter two are forwarding rules, as stated at the beginning of the sentence. It would be better to remove "protocol" from the sentence or to resolve it in another way.
>
> "the amount of state to be distributed is a function of the number of virtual machines."
> This 'function' is not that easy to draw; it depends on many factors. Furthermore, it is most likely that the state to be maintained does not depend on the way the state is installed, i.e. it is the same for both the control plane and the data plane driven cases. For example, there is no need to maintain any state at all if the connectivity through the core is point to point, i.e. a group of VMs is only behind two NVEs. We can take a look at mapping tables, e.g. with an example of three NVEs: NVE-A, NVE-B and NVE-C providing the connectivity through the core for a group of VMs. If VM1 behind NVE-A communicates with VM2 behind NVE-B, then the mapping table in NVE-A contains VM2->NVE-B, i.e. NVE-A has to address the packets to NVE-B for the transmission through the core if any VM behind NVE-A sends them to VM2. Similarly, the mapping table in NVE-B contains VM1->NVE-A. It does not matter how these mapping tables are maintained; the same states should be there. That is, the states are the same for both data and control plane driven mapping tables. If "state" here means a mapping state, then it would be better to update the cited sentence. Moreover, maybe this section is not the best location for it. One way out is to remove the paragraph.
> Alternatively, it can be updated, e.g. to "the amount of state to be distributed depends on the network scenario, the grouping and the number of virtual machines." If "state" means something else, then it would be useful to clarify what exactly is meant here.
>
> 4.2.6. Interaction between network overlays and underlays
>
> Is it really the case that the DC provider wants to give visibility of its infrastructure to its Tenants? Isn't it that the Tenant gets its overlay and the underlay should be completely hidden? For example, if the Tenant buys L2aaS, does/should it bother with the fact that the L2 overlay happens to be provided by an L3 underlay? Maybe the direction of SLAs would be more appropriate here than providing visibility of the underlay.
>
> Best regards,
> János
>
> On 6/18/2012 11:51 PM, Benson Schliesser wrote:
> > Dear NVO3 Participants -
> >
> > This message begins a two week Call for Adoption of http://tools.ietf.org/html/draft-lasserre-nvo3-framework-02 by the NVO3 working group, ending on 02-July-2012.
> >
> > Please respond to the NVO3 mailing list with any statements of approval or disapproval, along with any additional comments that might explain your position. Also, if any NVO3 participant is aware of IPR associated with this draft, please inform the mailing list and/or the NVO3 chairs.
> >
> > Thanks,
> > -Benson & Matthew
> >
> > _______________________________________________
> > nvo3 mailing list
> > [email protected]
> > https://www.ietf.org/mailman/listinfo/nvo3
