Aldrin,

That's what I thought but Joel seemed adamant.  I am happy to use either term.
Yours irrespectively,

John

From: Aldrin Isaac [mailto:[email protected]]
Sent: Thursday, September 20, 2012 5:57 AM
To: John E Drake
Cc: Kireeti Kompella; Thomas Nadeau; [email protected]; Balus, Florin Stelian (Florin); Joel M. Halpern
Subject: Re: [nvo3] draft-drake-nvo3-evpn-control-plane

Generically, when we discuss the need for different forms of NVE to communicate, wouldn't we describe that as a need to interwork them?

On Thursday, September 20, 2012, John E Drake wrote:

I had an offline discussion with Joel and he suggests using the term 'encapsulation selection' rather than 'interworking'.

Yours irrespectively,

John

> -----Original Message-----
> From: Kireeti Kompella [mailto:[email protected]]
> Sent: Wednesday, September 19, 2012 5:47 PM
> To: Thomas Nadeau
> Cc: Kireeti Kompella; Balus, Florin Stelian (Florin); John E Drake;
> Joel M. Halpern; [email protected]
> Subject: Re: [nvo3] draft-drake-nvo3-evpn-control-plane
>
> Hi Tom,
>
> On Sep 19, 2012, at 4:17 PM, Thomas Nadeau <[email protected]> wrote:
>
> > On Sep 19, 2012, at 11:28 AM, "Balus, Florin Stelian (Florin)"
> > <[email protected]> wrote:
> >
> >> John,
> >>
> >> I think more details need to be added here.  What happens if one PE
> >> advertises NVGRE encap while the other advertises only VXLAN?  Do
> >> you allow asymmetric encapsulations?  What if one NVE supports all
> >> three, which one is chosen and advertised?  Just a few examples....
> >
> > That is just not how data centers are built today, so that is
> > unlikely to happen in the wild.  With that in mind, this is an
> > interesting corner case that we should handle in case something is
> > misconfigured or someone in the future decides to build such a DC.
>
> As I've said, I like this draft.  However, "interworking" is fraught
> with misinterpretations and pitfalls, and perhaps at this stage
> distracts from other more pressing concerns.
>
> Might I suggest the following reworking of Section 4:
>
> 4.  Multiple Encapsulations
>
>    The Tunnel Encapsulation attribute enables a single control plane
>    to oversee a number of different data plane encapsulations.  This
>    can manifest itself in several ways:
>
>    a) a data center may use a single common encapsulation for all
>       EVIs, but different data centers may make independent choices.
>
>    b) within a single data center, a given EVI may use a single
>       encapsulation, but different EVIs may choose different
>       encapsulations.
>
>    c) a single EVI may use multiple encapsulations, one for each
>       PE-PE pair, and maybe even use a different encapsulation in
>       each direction.
>
>    Going from (a) to (c) increases generality, but also increases
>    complexity.  The initial focus will be on (a) and (b); further
>    details for (c) will be added if there is sufficient interest.
>
>    In all cases, a PE within a given EVI knows which encapsulations
>    the other PEs in that EVI support, and, when sending unicast
>    traffic, it MUST choose one of the encapsulations advertised by
>    the egress PE.
>
>    For case (c), an ingress PE that uses shared multicast trees for
>    sending Broadcast and Multicast traffic must maintain distinct
>    trees for each different encapsulation type.  Further details
>    will be given in a future version.
>
>    The topic of interworking encapsulations and "gateway" functions
>    will also be addressed in a future version.
>
> Kireeti.
>
> > --Tom
> >
> >> Thanks,
> >> Florin
> >>
> >> On Sep 19, 2012, at 9:04 AM, John E Drake <[email protected]> wrote:
> >>
> >>> Joel,
> >>>
> >>> From section 4, the section you referenced in your note below:
> >>>
> >>> "Note that an ingress PE must use the data plane encapsulation
> >>> specified by a given egress PE in the subject MAC Advertisement or
> >>> Per EVI Ethernet AD route when sending a packet to that PE.
> >>> Further, an ingress node that uses shared multicast trees for
> >>> sending Broadcast and Multicast traffic must maintain distinct
> >>> trees for each different encapsulation type."
> >>>
> >>> Aldrin also recast this into English in his reply to Lucy:
> >>>
> >>> "The imported E-VPN route will determine what the next hop entry
> >>> in the EVI will look like -- whether it will have encapsulation A
> >>> or encapsulation B.  That is determined by the sender of the E-VPN
> >>> route.  This is like having a PPP interface and an Ethernet
> >>> interface connected to the same VRF."
> >>>
> >>> Yours irrespectively,
> >>>
> >>> John
> >>>
> >>>> -----Original Message-----
> >>>> From: Joel M. Halpern [mailto:[email protected]]
> >>>> Sent: Wednesday, September 19, 2012 6:52 AM
> >>>> To: John E Drake
> >>>> Cc: [email protected]
> >>>> Subject: Re: [nvo3] draft-drake-nvo3-evpn-control-plane
> >>>>
> >>>> Looking at the draft, there seems to be a very reasonable question
> >>>> about section 4.  The text starts by noting that the presence of
> >>>> the Tunnel Encapsulation attribute allows for supporting a range
> >>>> of tunnel encapsulations.  Sounds good.  It then asserts that this
> >>>> allows interoperability across the encapsulations.  That does not
> >>>> seem to follow.
> >>>>
> >>>> Normally, when we allow multiple encapsulations, we specify one as
> >>>> mandatory to implement in order to enable interoperability of the
> >>>> devices.  Communicating the encapsulation type does not magically
> >>>> enable a device that uses one encapsulation to communicate with a
> >>>> device that only supports some other encapsulation.
> >>>>
> >>>> I presume that there are steps missing in section 4.  Could you
> >>>> elaborate?
> >>>>
> >>>> Yours,
> >>>> Joel
> >>>>
> >>>> On 9/19/2012 4:11 AM, John E Drake wrote:
> >>>>> Lucy,
> >>>>>
> >>>>> Why didn't you ask your question of the authors?
> >>>>> I had taken it as a given that an EVI could span MPLS, VXLAN,
> >>>>> and NVGRE endpoints.  If the network operator does not want to
> >>>>> deploy this capability, that is their option.
> >>>>>
> >>>>> Yours irrespectively,
> >>>>>
> >>>>> John
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: [email protected] [mailto:[email protected]]
> >>>>>> On Behalf Of Lucy yong
> >>>>>> Sent: Tuesday, September 18, 2012 1:19 PM
> >>>>>> To: Kireeti Kompella
> >>>>>> Cc: [email protected]
> >>>>>> Subject: Re: [nvo3] draft-drake-nvo3-evpn-control-plane
> >>>>>>
> >>>>>> Hi Kireeti,
> >>>>>>
> >>>>>> Regarding interworking capability, is "a given EVI can support
> >>>>>> multiple data plane encapsulations" equivalent to saying
> >>>>>> "individual NVEs need to support multiple encapsulation
> >>>>>> schemes"?  If one NVE only supports VXLAN and another NVE only
> >>>>>> supports MPLS-in-GRE, the two will not be able to work in the
> >>>>>> same EVI, is that right?  Will this give more benefit than
> >>>>>> having one encapsulation per EVI, or just make things more
> >>>>>> complex?
> >>>>>>
> >>>>>> Regards,
> >>>>>> Lucy
> >>>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Kireeti Kompella [mailto:[email protected]]
> >>>>>> Sent: Monday, September 17, 2012 8:18 PM
> >>>>>> To: Lucy yong
> >>>>>> Cc: [email protected]
> >>>>>> Subject: Re: [nvo3] draft-drake-nvo3-evpn-control-plane
> >>>>>>
> >>>>>> Hi Lucy,
> >>>>>>
> >>>>>> On Sep 17, 2012, at 3:36 PM, Lucy yong <[email protected]> wrote:
> >>>>>>
> >>>>>>> Read this draft.
> >>>>>>>
> >>>>>>> RFC 5512 applies to a case where two BGP speakers are in a
> >>>>>>> BGP-free core.  Using an encapsulation tunnel between the two
> >>>>>>> speakers enables one speaker to send a packet to the other
> >>>>>>> speaker as the next-hop.
> >>>>>>>
> >>>>>>> Using this approach in nvo3 may raise a high scalability
> >>>>>>> concern, because every pair of NVEs in an NVO will need to
> >>>>>>> maintain state for the tunnel encapsulation.
> >>>>>>
> >>>>>> They would have to in any case.  The tunnel encap is a couple
> >>>>>> of bits; the "tenant id" is also needed.
> >>>>>>
> >>>>>>> If some NVEs support VXLAN and some support NVGRE, to build a
> >>>>>>> mcast tree for BUM, it has to build two distinct sub-trees,
> >>>>>>> one for each, which is more complex.
> >>>>>>>
> >>>>>>> "This memo specifies that an egress PE must use the sender MAC
> >>>>>>> address to determine whether to send a received Broadcast or
> >>>>>>> Multicast packet to a given Ethernet Segment.  I.e., if the
> >>>>>>> sender MAC address is associated with a given Ethernet
> >>>>>>> Segment, the egress PE must not send the packet to that
> >>>>>>> Ethernet Segment."
> >>>>>>>
> >>>>>>> Does it mean using BGP to exchange the NVE MAC addresses that
> >>>>>>> belong to an Ethernet segment first?  How does this impact
> >>>>>>> other EVPN features?
> >>>>>>
> >>>>>> Yes to the first question; not at all (imo) to the second.
> >>>>>>
> >>>>>>> This needs to be cooked more.
> >>>>>>
> >>>>>> I think it's pretty well cooked, although I must confess a
> >>>>>> predilection for sushi.  In effect, these very capable authors
> >>>>>> saved me the trouble of writing pretty much th
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From:
> >>>>>>> Of Aldrin Isaac
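[Editor's note: the split-horizon rule quoted in Lucy's question above -- an egress PE must not send a received Broadcast/Multicast packet back onto the Ethernet Segment that the sender MAC address is associated with -- can be sketched as follows. This is a minimal illustration, assuming a hypothetical MAC-to-segment table and function names; the draft itself specifies no API, and the association would be learned from the BGP MAC Advertisement routes discussed in the thread.]

```python
# Hypothetical sketch of the sender-MAC split-horizon check: if the
# sender MAC address is associated with a given Ethernet Segment, the
# egress PE must NOT flood the BUM packet back to that segment.
# Table contents and names are illustrative, not from the draft.

mac_to_segment = {
    "00:11:22:33:44:55": "ES-1",
    "66:77:88:99:aa:bb": "ES-2",
}

def segments_to_flood(sender_mac, local_segments):
    """Return the Ethernet Segments to which an egress PE floods a
    received Broadcast/Multicast packet, excluding the segment the
    sender MAC belongs to (if any)."""
    origin = mac_to_segment.get(sender_mac)
    return [es for es in local_segments if es != origin]

# A BUM frame whose sender MAC belongs to ES-1 is flooded to ES-2 only;
# a frame from an unknown MAC is flooded to every local segment.
print(segments_to_flood("00:11:22:33:44:55", ["ES-1", "ES-2"]))  # ['ES-2']
print(segments_to_flood("de:ad:be:ef:00:01", ["ES-1", "ES-2"]))  # ['ES-1', 'ES-2']
```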
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3
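[Editor's note: the "encapsulation selection" rule at the heart of this thread -- Kireeti's proposed Section 4 text says an ingress PE MUST choose one of the encapsulations advertised by the egress PE -- can be sketched as follows. The function and the preference order are hypothetical illustrations; the draft defines no such API, and note that, as Joel points out, signaling alone does not make two PEs with disjoint encapsulation sets interoperate.]

```python
# Hypothetical sketch of encapsulation selection: intersect the ingress
# PE's supported encapsulations with those the egress PE advertised
# (e.g. via the Tunnel Encapsulation attribute), then pick one by a
# local preference order.  Names and ordering are illustrative only.

LOCAL_PREFERENCE = ["mpls", "vxlan", "nvgre"]  # ingress PE's own ranking

def select_encapsulation(ingress_supported, egress_advertised):
    """Return the encapsulation the ingress PE uses toward this egress
    PE, or None if the two PEs share no encapsulation (in which case
    only a gateway function, not signaling, could bridge them)."""
    candidates = set(ingress_supported) & set(egress_advertised)
    for encap in LOCAL_PREFERENCE:
        if encap in candidates:
            return encap
    return None

# Florin's corner case: one PE advertises only NVGRE, the other supports
# only VXLAN -- there is no common encapsulation to select.
print(select_encapsulation(["vxlan"], ["nvgre"]))                 # None
print(select_encapsulation(["mpls", "vxlan"], ["vxlan", "nvgre"]))  # vxlan
```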
