I would also refer to RFC 8365, specifically the local-bias explanation for multi-homing split-horizon, and also the NVE residing in the hypervisor. That's usually the reference here.

My two cents. Thx, Jorge
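For concreteness, a minimal Python sketch of the local-bias rule from RFC 8365 (Section 8.3.1); the data structures, names, and addresses are illustrative assumptions, not any implementation's API. Local bias is also what stands in for the split-horizon label that Jim notes is absent in EVPN/VXLAN: the egress check keys on the source VTEP IP rather than on an MPLS label.

    # Local-bias BUM forwarding sketch (RFC 8365, Section 8.3.1).
    # All names, addresses, and structures here are illustrative.

    # ES attachment per NVE (keyed by VTEP IP), as learned from EVPN
    # Ethernet Segment (Type-4) routes.
    es_membership = {
        "10.0.0.1": {"ES-1"},          # this NVE
        "10.0.0.2": {"ES-1", "ES-2"},  # remote NVE sharing ES-1 with us
    }

    LOCAL_VTEP = "10.0.0.1"
    local_ports = {"eth1": "ES-1", "eth2": None}  # port -> ES (None = single-homed)

    def flood_from_access(in_port):
        """Ingress rule: BUM from a local CE floods to every other local
        port (including ports on shared segments) and to all remote VTEPs."""
        ports = [p for p in local_ports if p != in_port]
        tunnels = [v for v in es_membership if v != LOCAL_VTEP]
        return ports, tunnels

    def flood_from_tunnel(src_vtep):
        """Egress rule (the local bias): BUM arriving over a VXLAN tunnel is
        not replicated onto any local port whose ES is shared with the source
        VTEP -- that VTEP has already delivered the frame to the segment."""
        shared = es_membership.get(src_vtep, set())
        return [p for p, es in local_ports.items() if es not in shared]

    # BUM from 10.0.0.2 reaches eth2 only; eth1 (shared ES-1) is skipped.
    print(flood_from_tunnel("10.0.0.2"))   # -> ['eth2']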
From: BESS <[email protected]> on behalf of "UTTARO, JAMES" <[email protected]>
Date: Wednesday, March 4, 2020 at 4:57 PM
To: Jeff Tantsura <[email protected]>, Gyan Mishra <[email protected]>, BESS <[email protected]>
Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

Gyan,

The following draft may also be of interest. AT&T (A. Lingala) has co-authored a draft that addresses unequal load balancing within a data center. This draft intends to optimize the use of links of different sizes within the data center, to fully utilize the capacity of the links from the leafs to the servers (a toy sketch of the weighting appears below the thread).

https://tools.ietf.org/html/draft-ietf-bess-evpn-unequal-lb-03

Thanks,
Jim Uttaro

From: Jeff Tantsura <[email protected]>
Sent: Wednesday, March 04, 2020 10:47 AM
To: Gyan Mishra <[email protected]>; BESS <[email protected]>; UTTARO, JAMES <[email protected]>
Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

James,

ESI multihoming (load-sharing) works just fine with VXLAN encapsulation (when supported); there's no need for additional (proprietary) mechanisms (at least with basic synchronization).

Gyan - the devil is in the details (as always). I'm looking at multivendor EVPN VXLAN ESI designs as we speak, and I'm yet to figure out how ESI type 3 (the only ESI type supported in NX-OS) is going to work with ESI types 0/1 supported in Junos and Arista (a byte-level sketch of these encodings appears below the thread). I'd assume upcoming open source implementations will support type 0 (manual) only.

To second James - replacing MLAG with ESI multihoming could be a really big deal in terms of simplification and normalization of the fabric (and you could finally remove peer-links!). The L2 vs. L3 discussion is somewhat orthogonal to that: if your services require stretched L2, then whether your VTEPs are on a server or a switch, you'd still be doing L2-over-L3.

I still wouldn't dare to deploy multivendor leafs though, but one step at a time ;-)

Cheers,
Jeff

On Mar 4, 2020, 10:17 AM -0500, UTTARO, JAMES <[email protected]>, wrote:

Gyan,

One of the big advantages of EVPN is the MLAG capability without the need for proprietary MLAG solutions. We have been actively testing EV-LAG to accomplish this in the WAN for L2 services. That being said, we use EVPN/MPLS, where MH (EV-LAG) is conveyed via labels. My understanding is that when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work: there is no split-horizon (SH) label.

Thanks,
Jim Uttaro

From: BESS <[email protected]> On Behalf Of Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS <[email protected]>
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM

Dear BESS WG,

Is anyone aware of any IETF BGP development in the data center arena to extend BGP VXLAN EVPN to a blade-server hypervisor, making the hypervisor part of the VXLAN fabric? This could eliminate the use of MLAG on the leaf switches and eliminate L2 completely from the VXLAN fabric, thereby maximizing stability.
Kind regards,

Gyan

--
Gyan Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: [email protected]
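On the unequal load-balancing draft Jim references: the idea is weighted rather than equal aliasing toward a multihomed ES, with each PE signaling the bandwidth of its attachment circuit (a link-bandwidth extended community in the draft). A toy sketch of the resulting distribution, with invented PE names and bandwidths:

    # Toy weighted-aliasing sketch for draft-ietf-bess-evpn-unequal-lb.
    # PE names and bandwidths are invented; the draft signals the weight via
    # a link-bandwidth extended community, modeled here as a plain number.
    import hashlib
    from collections import Counter

    es_peers = {"PE1": 10, "PE2": 40}  # advertised Gbps toward the shared ES

    def pick_egress(flow_id: str) -> str:
        """Hash a flow onto the ES peers in proportion to advertised bandwidth."""
        total = sum(es_peers.values())
        h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % total
        for pe, bw in sorted(es_peers.items()):
            if h < bw:
                return pe
            h -= bw
        raise AssertionError("unreachable")

    # Expect roughly a 1:4 split (about 20% PE1 / 80% PE2) instead of 50/50.
    print(Counter(pick_egress(f"flow-{i}") for i in range(10000)))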
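And on Jeff's ESI type interop point, a byte-level sketch of the relevant encodings from RFC 7432, Section 5 (helper names and example values are mine): an ESI is 10 octets, one type octet plus nine value octets, and multihoming only works if every PE on the segment carries the byte-identical ESI, which is why a mixed type-0/1/3 estate tends to fall back to manually assigned type 0.

    # ESI encodings per RFC 7432, Section 5: 10 octets = 1 type + 9 value.
    # Helper names and example values are illustrative.

    def esi_type0(value9: bytes) -> bytes:
        """Type 0: arbitrary, operator-configured 9-octet value ("manual")."""
        assert len(value9) == 9
        return bytes([0x00]) + value9

    def esi_type1(lacp_sys_mac: bytes, port_key: int) -> bytes:
        """Type 1: CE's LACP System MAC (6 octets) + Port Key (2 octets) + 0x00."""
        assert len(lacp_sys_mac) == 6
        return bytes([0x01]) + lacp_sys_mac + port_key.to_bytes(2, "big") + b"\x00"

    def esi_type3(system_mac: bytes, local_discriminator: int) -> bytes:
        """Type 3: PE System MAC (6 octets) + 3-octet local discriminator."""
        assert len(system_mac) == 6
        return bytes([0x03]) + system_mac + local_discriminator.to_bytes(3, "big")

    # Same segment, three encodings -- none of these compare equal, so mixed
    # vendors must agree on one type (in practice, a manual type-0 value).
    print(esi_type0(bytes.fromhex("0011223344556677aa")).hex())
    print(esi_type1(bytes.fromhex("aabbccddeeff"), 0x0001).hex())
    print(esi_type3(bytes.fromhex("aabbccddeeff"), 1).hex())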
_______________________________________________
BESS mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/bess
