we're weighing in at 161 pages in the last IESG review now, which is however
not very meaningful ;-) since probably 25 pages are just the IANA request, and
it's not just the spec, it's the whole outline of design/narrative/walk-through
(folks preferred it that way). With FSMs/algorithms and a water-tight
interoperable protocol like this, it can't be done as a 10-easy-pages breeze-through
(well, one can do that, but then customers spend 5 years figuring out how to
interop anything and finding corner cases, and that's not the way the RIFT WG is
run ;-)

I'll ping you 1:1 otherwise ;-)

thanks

--- tony

On Mon, Mar 9, 2020 at 9:15 PM Gyan Mishra <[email protected]> wrote:

> Hi Tony
>
> I am actually a member of RIFT; the WG charter description was intriguing,
> so I joined to learn more about the RIFT technology.
>
> I downloaded the draft but never made it through, as it's a lengthy 100+
> page draft.  Now I really have to read it.
>
> Yes that would be great.  Please unicast me on [email protected].
>
> Kind regards
>
> Gyan
>
> On Mon, Mar 9, 2020 at 11:31 AM Tony Przygienda <[email protected]>
> wrote:
>
>> Gyan, the technology that you are looking for exists and is about to become a
>> standards-track RFC (i.e. _scalable_ L3 multihoming to the host, including
>> bandwidth load-balancing and many other things that need consideration in
>> such a case, e.g. host ASIC table scalability). Please look at
>> https://datatracker.ietf.org/doc/draft-ietf-rift-rift/  If you need more
>> info/code, more than happy to talk out of band.
>>
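>> To give a feel for the bandwidth load-balancing idea, here is a minimal
>> Python sketch of weighted next-hop selection; the hop names and weights are
>> made up, and the actual procedure is the one specified in the draft:
>>
>>     import random
>>
>>     def pick_next_hop(next_hops):
>>         """Pick a next-hop with probability proportional to its available
>>         bandwidth (illustrative weighted ECMP, not the RIFT spec's exact
>>         procedure)."""
>>         total = sum(bw for _, bw in next_hops)
>>         r = random.uniform(0, total)
>>         upto = 0.0
>>         for hop, bw in next_hops:
>>             upto += bw
>>             if r <= upto:
>>                 return hop
>>         return next_hops[-1][0]
>>
>>     # e.g. two uplinks, one degraded to half capacity
>>     print(pick_next_hop([("leaf1", 100.0), ("leaf2", 50.0)]))
>>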
>> thanks
>>
>> -- tony
>>
>> On Thu, Mar 5, 2020 at 2:04 PM Gyan Mishra <[email protected]> wrote:
>>
>>> Hi Sandy
>>>
>>> Thank you for the feedback on your use cases.
>>>
>>> For now we will remain with L2 MLAG to the host.
>>>
>>> BESS WG,
>>>
>>> If anyone hears of any new developments on VXLAN fabric extension to the
>>> hypervisor, or of any vendors supporting even 3rd-party open-source
>>> implementations of fabric extensions, please reach out to me.
>>>
>>> As an aside, when reading RFC 8365 (NVO3) section 7, I kind of got
>>> excited by the section heading “single-homed NVE residing in hypervisor”, and
>>> by section 8, multihomed NVE in a ToR, thinking that maybe fabric extension to
>>> the hypervisor does actually exist, but of course the letdown is that no vendor
>>> supports it.
>>>
>>> It does seem there would be a lot of customers wanting this
>>> fabric-extension-to-hypervisor capability for multihomed load balancing, and it
>>> is very surprising that it does not exist.
>>>
>>> https://tools.ietf.org/html/rfc8365#section-7
>>>
>>>
>>>
>>> Kind regards
>>>
>>> Gyan Mishra
>>> Verizon
>>> Cell 301 502-1347
>>>
>>> On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze <[email protected]>
>>> wrote:
>>>
>>>> Hi Gyan,
>>>>
>>>>
>>>>
>>>> Here, we use a mix of NXOS & Arista EOS for leaf and spine, with ASR9k DCI
>>>> doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
>>>> also confess to pockets of proprietary NXOS vPC and pockets of Arista
>>>> ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
>>>> capabilities, and that worked alright.  Customers like us have long been
>>>> promised that the vendor silicon will catch up, though we’ve yet to see it
>>>> happen.
>>>>
>>>>
>>>>
>>>> Ours is entirely a home-grown automation/orchestration stack, with many
>>>> years behind it.  Have a look at some talks we’ve given publicly for the
>>>> background on our ‘GOLF’ project.
>>>> https://www.youtube.com/watch?v=jXOrdHfBqb0&t=
>>>>
>>>>
>>>>
>>>> > Has this ever been done before, and with which hypervisors?
>>>>
>>>> We had the same requirements as you 2-3 years ago.  Have we solved it?
>>>> TL;DR: not really!  We did make an interesting proof of concept though…
>>>>
>>>> Being largely a VMware NSX house, we found VMware closed to offering
>>>> any type of EVPN-orchestrated VTEP extension into the physical fabric.
>>>> Quite simply, its model of extending to a physical VTEP is OVSDB.  That’s not
>>>> exactly compatible with an EVPN ToR, for various reasons.  So, we had the
>>>> idea that we’d build something which spoke both OVSDB (to NSX) and EVPN (to
>>>> the physical ToR).  The high-level idea was that we’d use this to create
>>>> routes in EVPN when they were seen from OVSDB, and vice versa, only we’d
>>>> maintain the original next-hop.  Kinda like an iBGP RR, only between 2
>>>> different control planes.  Internally, we called it VxGlue.
>>>>
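>>>> In rough terms, each reconciliation pass looked something like this
>>>> minimal Python sketch; the function names and table shapes are hypothetical
>>>> placeholders for illustration, not our actual code:
>>>>
>>>>     def sync_macs(ovsdb_macs, evpn_macs, advertise_evpn, insert_ovsdb):
>>>>         """Mirror MAC-to-VTEP bindings between the two control planes,
>>>>         preserving the original VTEP next-hop, like an iBGP RR between
>>>>         two different control planes."""
>>>>         for mac, vtep in ovsdb_macs.items():
>>>>             if mac not in evpn_macs:
>>>>                 advertise_evpn(mac, vtep)  # EVPN route, next-hop unchanged
>>>>         for mac, vtep in evpn_macs.items():
>>>>             if mac not in ovsdb_macs:
>>>>                 insert_ovsdb(mac, vtep)    # OVSDB row, next-hop unchanged
>>>>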
>>>>
>>>>
>>>> At a high level, it worked well in achieving nice load balancing of
>>>> traffic into the hypervisor.  It would no doubt scale fairly well too.  Is
>>>> it the right thing to do in a production network?  We decided not.  For
>>>> one, it’s very hard to keep up with the pace of development of the
>>>> underlying VMware APIs.  As I say, this was a few years ago and things have
>>>> moved on.  Maybe we’ll revisit.
>>>>
>>>>
>>>>
>>>> Sandy
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 04/03/2020, 21:03, Gyan Mishra <[email protected]> wrote:
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thank you all for the feedback!!
>>>>
>>>>
>>>>
>>>> Our requirement: we are looking for a way to increase stability without
>>>> sacrificing bandwidth availability and convergence for data center host
>>>> connectivity to an existing VXLAN fabric.  Our server traffic volume is
>>>> higher bandwidth east-west than north-south.
>>>>
>>>>
>>>>
>>>> We have a Cisco Nexus-based VXLAN EVPN fabric with the multi-site feature
>>>> connecting all of our PODs intra-DC, and we use BGP EVPN over an MPLS core
>>>> for inter-data-center DCI connectivity.
>>>>
>>>>
>>>>
>>>> We have orchestration via Cisco NSO and DCNM for network
>>>> programmability.  Typical Cisco shop...
>>>>
>>>>
>>>>
>>>> Our data center host attachment model has been MLAG using Cisco’s
>>>> vPC.  That L2 extension has been problematic, so we would like to find a
>>>> better way: maybe leverage our existing VXLAN fabric and extend it to the
>>>> server hypervisor, if at all possible.
>>>>
>>>>
>>>>
>>>> So with a hypervisor connected to two leaf switches in a VXLAN fabric,
>>>> it sounds like it may be possible, with Cisco’s IETF-standards-based
>>>> implementation of the VXLAN overlay following NVO3 (RFC 8365) and BGP EVPN
>>>> (RFC 7432), to extend the fabric to the server hypervisor.
>>>>
>>>>
>>>>
>>>> The question of L3 ECMP versus L2 MLAG becomes moot, as with the
>>>> existing BGP EVPN load-balancing procedures and EVPN Type 4 (Ethernet
>>>> Segment route) DF election we can achieve all-active multihomed load
>>>> balancing from the hypervisor.  As was mentioned, per RFC 8365, since VXLAN
>>>> EVPN does not use ESI, the local-bias feature would have to be used for
>>>> split horizon...
>>>>
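>>>> For reference, a minimal Python sketch of the default DF election from
>>>> RFC 7432 section 8.5 (service carving); the VTEP addresses and VLAN value
>>>> are illustrative:
>>>>
>>>>     import ipaddress
>>>>
>>>>     def elect_df(pe_ips, ethernet_tag):
>>>>         # Sort the multihoming PEs by IP address; the PE at index
>>>>         # (ethernet_tag mod N) is the designated forwarder.
>>>>         ordered = sorted(pe_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
>>>>         return ordered[ethernet_tag % len(ordered)]
>>>>
>>>>     # e.g. a hypervisor multihomed to two leaf VTEPs, VLAN 100
>>>>     print(elect_df(["192.0.2.1", "192.0.2.2"], 100))  # -> 192.0.2.1
>>>>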
>>>>
>>>>
>>>> Of course, with the extension to the hypervisor we would use our
>>>> existing orchestration of the fabric to manage the server hypervisor layer.
>>>>
>>>>
>>>>
>>>> Has this ever been done before, and with which hypervisors?
>>>>
>>>>
>>>>
>>>> Kind regards
>>>>
>>>>
>>>>
>>>> Gyan
>>>>
>>>>
>>>>
>>>> On Mon, Mar 2, 2020 at 7:58 PM Jeff Tantsura <[email protected]>
>>>> wrote:
>>>>
>>>> Gyan,
>>>>
>>>>
>>>>
>>>> On the open-source side of things, FRR supports EVPN on the host.
>>>>
>>>> Any vendor virtualized NOS would provide you the same (at least
>>>> Junos/cRPD or XRv).
>>>>
>>>> EVPN ESI load-sharing eliminates the need for MLAG (basic thought, the
>>>> devil is in the details :))
>>>>
>>>> ECMP vs. LAG load balancing: the algorithms supported are quite
>>>> similar, in some code bases actually the same, so this statement is not
>>>> really correct.
>>>>
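>>>> To illustrate: both ECMP next-hop and LAG member selection typically
>>>> reduce to hashing the flow 5-tuple modulo the number of members; a minimal
>>>> Python sketch (the member names and flow values are made up):
>>>>
>>>>     import hashlib
>>>>
>>>>     def pick_member(members, src_ip, dst_ip, proto, src_port, dst_port):
>>>>         # Hash the 5-tuple and pick a member modulo N; the same scheme
>>>>         # serves for ECMP next-hops or LAG member links.
>>>>         key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
>>>>         digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
>>>>         return members[digest % len(members)]
>>>>
>>>>     # a given flow always lands on the same member (flow-based, not per-packet)
>>>>     print(pick_member(["leaf1", "leaf2"], "10.0.0.1", "10.0.0.2", 6, 40000, 443))
>>>>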
>>>>
>>>>
>>>> Would be glad to better understand your requirements and help you!
>>>>
>>>> Regards,
>>>>
>>>> Jeff
>>>>
>>>>
>>>>
>>>> On Mar 2, 2020, at 16:00, Gyan Mishra <[email protected]> wrote:
>>>>
>>>>
>>>>
>>>> Thanks Robert for the quick response
>>>>
>>>>
>>>>
>>>> Just thinking out loud: I can see there may be some advantages to
>>>> eliminating L2 to the host, but the one major disadvantage is that BGP
>>>> multipath provides flow-based, uneven load balancing, so it is less
>>>> desirable from that standpoint compared to an MLAG bundle’s XOR
>>>> src/dest/port hash.
>>>>
>>>>
>>>>
>>>> The other big downside is that most enterprises have the hypervisor
>>>> managed by server admins, but if you run BGP there, that management ends up
>>>> shifting to the network team.  More complicated.
>>>>
>>>>
>>>>
>>>> Kind regards
>>>>
>>>>
>>>>
>>>> Gyan
>>>>
>>>>
>>>>
>>>> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk <[email protected]> wrote:
>>>>
>>>> Hi Gyan,
>>>>
>>>>
>>>>
>>>> A similar architecture has been invented and shipped by the Contrail team.
>>>> After they were acquired by Juniper, that project was renamed Tungsten
>>>> Fabric (https://tungsten.io/), while Juniper kept the original project’s
>>>> name for the commercial flavor of it. No guarantees of any product quality
>>>> at this point.
>>>>
>>>>
>>>>
>>>> Btw, no need for VXLAN nor BGP to the host. The alternatives proposed
>>>> above were well thought out and turned out to work in ways far more
>>>> efficient and practical, if you zoom into the details.
>>>>
>>>>
>>>>
>>>> Best,
>>>>
>>>> Robert.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra <[email protected]>
>>>> wrote:
>>>>
>>>>
>>>>
>>>> Dear BESS WG
>>>>
>>>>
>>>>
>>>> Is anyone aware of any IETF BGP development in the data center arena to
>>>> extend BGP VXLAN EVPN to a blade-server hypervisor, making the hypervisor
>>>> part of the VXLAN fabric?  This could eliminate the use of MLAG on the leaf
>>>> switches and eliminate L2 completely from the VXLAN fabric, thereby
>>>> maximizing stability.
>>>>
>>>>
>>>>
>>>> Kind regards,
>>>>
>>>>
>>>>
>>>> Gyan
>>>>
>>>> --
>>>>
>>>> Gyan  Mishra
>>>>
>>>> Network Engineering & Technology
>>>>
>>>> Verizon
>>>>
>>>> Silver Spring, MD 20904
>>>>
>>>> Phone: 301 502-1347
>>>>
>>>> Email: [email protected] <[email protected]>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> BESS mailing list
>>>> [email protected]
>>>> https://www.ietf.org/mailman/listinfo/bess
>>>>
>>>> --
>>>>
>>>> Gyan  Mishra
>>>>
>>>> Network Engineering & Technology
>>>>
>>>> Verizon
>>>>
>>>> Silver Spring, MD 20904
>>>>
>>>> Phone: 301 502-1347
>>>>
>>>> Email: [email protected]
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> BESS mailing list
>>>> [email protected]
>>>> https://www.ietf.org/mailman/listinfo/bess
>>>>
>>>> --
>>>>
>>>> Gyan  Mishra
>>>>
>>>> Network Engineering & Technology
>>>>
>>>> Verizon
>>>>
>>>> Silver Spring, MD 20904
>>>>
>>>> Phone: 301 502-1347
>>>>
>>>> Email: [email protected]
>>>>
>>>>
>>>>
>>>>
>>>>
>>> --
>>>
>>> Gyan  Mishra
>>>
>>> Network Engineering & Technology
>>>
>>> Verizon
>>>
>>> Silver Spring, MD 20904
>>>
>>> Phone: 301 502-1347
>>>
>>> Email: [email protected]
>>>
>>>
>>>
>>> _______________________________________________
>>> BESS mailing list
>>> [email protected]
>>> https://www.ietf.org/mailman/listinfo/bess
>>>
>>
>
> --
>
> Gyan  Mishra
>
> Network Engineering & Technology
>
> Verizon
>
> Silver Spring, MD 20904
>
> Phone: 301 502-1347
>
> Email: [email protected]
>
>
>
>
_______________________________________________
BESS mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/bess
