Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-13 Thread Gyan Mishra
After doing some research, I found a few vendors that have solutions that
extend the VXLAN fabric to the server hypervisor.

Cisco VTS supports 3rd-party x86 servers and is available now for VXLAN fabric extension.

An Arista and Nuage collaboration on a VXLAN fabric extension solution is
available now.

VMware also has a solution

Kind regards

Gyan

On Tue, Mar 10, 2020 at 3:04 AM Tony Przygienda  wrote:

> we're weighing in 161 pages in the last IESG reviews now which is however
> not very meaningful ;-) since it's probably 25 pages just IANA request and
> it's not just the spec, it's the whole outline of design/narrative/walk
> through (folks preferred it that way). With FSMs/algorithms and water-tight
> interoperable protocol like it can't be done in 10 easy pages breeze
> through (well, one can do that but then customers spend 5 years figuring
> out how to interop anything and finding cornercases, not the way RIFT WG is
> run ;-)
>
> I'll ping you 1:1 otherwise ;-)
>
> thanks
>
> --- tony
>
> On Mon, Mar 9, 2020 at 9:15 PM Gyan Mishra  wrote:
>
>> Hi Tony
>>
>> I am actually a member of RIFT, as the WG charter description
>> was intriguing so I joined - to learn more about the RIFT technology.
>>
>> I downloaded the draft but never made it through as its a lengthy 100+
>> page draft.  Now I really have to read it.
>>
>> Yes that would be great.  Please unicast me on hayabusa...@gmail.com.
>>
>> Kind regards
>>
>> Gyan
>>
>> On Mon, Mar 9, 2020 at 11:31 AM Tony Przygienda 
>> wrote:
>>
>>> Gyan, the technology that you look for exists and is about to go
>>> standards RFC (i.e. _scalable_ L3 multihoming to the host including
>>> bandwidth load-balancing and many other things that need consideration in
>>> such case, e.g. host asic table scalability). Please look @
>>> https://datatracker.ietf.org/doc/draft-ietf-rift-rift/  If you need
>>> more info/code, more than happy to talk out of band
>>>
>>> thanks
>>>
>>> -- tony
>>>
>>> On Thu, Mar 5, 2020 at 2:04 PM Gyan Mishra 
>>> wrote:
>>>
 Hi Sandy

 Thank you for the feedback on your use cases.

 For now will remain L2 MLAG to the host.

 BESS WG,

 If anyone hears of any new developments on vxlan fabric extension to
 hypervisor and any vendors supporting even 3rd party open source
 implementations of fabric extensions please reach out to me.

 As an aside when reading RFC 8365 NVO3 section 7  it kind of gets
 excited by the subject heading “single homed NVE residing in Hypervisor” or
 section 8 MH NVE in TOR server - thinking that maybe fabric extension to
 hypervisor does actually exist but of course the let down that no vendor
 supports it.

 It does seem there would be a lot of customers wanting this fabric
 extension to hypervisor capabilities for MH load balancing- and very
 surprising that it does not exist.

 https://tools.ietf.org/html/rfc8365#section-7



 Kind regards

 Gyan Mishra
 Verizon
 Cell 301 502-1347

 On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze >>> > wrote:

> Hi Gyan,
>
>
>
> Here, we use a mix of NXOS & Arista EOS for leaf and spine, ASR9k DCI
> doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
> also confess to pockets of proprietary NXOS vPC, and, pockets of Arista
> ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
> capabilities and that worked alright.  Customers, like us, have been long
> promised the vendor silicon will catch up, though we’ve yet to see it
> happen.
>
>
>
> We’re entirely a home grown automation / orchestration stack, with
> many years behind it.  Have a look at some talks we’ve given publicly for
> the background in our ‘GOLF’ project.
> https://www.youtube.com/watch?v=jXOrdHfBqb0&t=
>
>
>
> > Has this ever been done before and with which hypervisors ?
>
> We had the same requirements as you 2-3 years ago.  Have we solved
> it?  TL;DR: not really!  We did make an interesting proof of concept
> though…
>
> Being largely a VMWare NSX house, we found VMWare closed to offering
> any type of EVPN orchestrated VTEP extension into the physical fabric.
> Quite simply, its model of extending to physical VTEP is OVSDB.  That’s 
> not
> exactly compatible with an EVPN ToR, for various reasons.  So, we had the
> idea that we’d build something which spoke both OVSDB (to NSX) and EVPN 
> (to
> the physical ToR).  The high level idea being, we’d use this to create
> routes in EVPN if they were seen from OVSDB, and vice versa, only we’d
> maintain the original next-hop.  Kinda like an iBGP RR, only between 2
> different control-planes.  Internally, we called it VxGlue.
>
>
>
> At a high level, it worked well in achieving nice loadbalancing of
> traffic into the hypervisor.  It would no doubt scale fair

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-10 Thread Tony Przygienda
we're weighing in at 161 pages in the last IESG reviews now, which is however
not very meaningful ;-) since it's probably 25 pages of just the IANA request,
and it's not just the spec, it's the whole outline of design/narrative/walk-through
(folks preferred it that way). With FSMs/algorithms and a water-tight,
interoperable protocol like this, it can't be done in a 10-page easy breeze-through
(well, one can do that, but then customers spend 5 years figuring out how to
interop anything and finding corner cases, which is not the way the RIFT WG is
run ;-)

I'll ping you 1:1 otherwise ;-)

thanks

--- tony

On Mon, Mar 9, 2020 at 9:15 PM Gyan Mishra  wrote:

> Hi Tony
>
> I am actually a member of RIFT, as the WG charter description
> was intriguing so I joined - to learn more about the RIFT technology.
>
> I downloaded the draft but never made it through as its a lengthy 100+
> page draft.  Now I really have to read it.
>
> Yes that would be great.  Please unicast me on hayabusa...@gmail.com.
>
> Kind regards
>
> Gyan
>
> On Mon, Mar 9, 2020 at 11:31 AM Tony Przygienda 
> wrote:
>
>> Gyan, the technology that you look for exists and is about to go
>> standards RFC (i.e. _scalable_ L3 multihoming to the host including
>> bandwidth load-balancing and many other things that need consideration in
>> such case, e.g. host asic table scalability). Please look @
>> https://datatracker.ietf.org/doc/draft-ietf-rift-rift/  If you need more
>> info/code, more than happy to talk out of band
>>
>> thanks
>>
>> -- tony
>>
>> On Thu, Mar 5, 2020 at 2:04 PM Gyan Mishra  wrote:
>>
>>> Hi Sandy
>>>
>>> Thank you for the feedback on your use cases.
>>>
>>> For now will remain L2 MLAG to the host.
>>>
>>> BESS WG,
>>>
>>> If anyone hears of any new developments on vxlan fabric extension to
>>> hypervisor and any vendors supporting even 3rd party open source
>>> implementations of fabric extensions please reach out to me.
>>>
>>> As an aside when reading RFC 8365 NVO3 section 7  it kind of gets
>>> excited by the subject heading “single homed NVE residing in Hypervisor” or
>>> section 8 MH NVE in TOR server - thinking that maybe fabric extension to
>>> hypervisor does actually exist but of course the let down that no vendor
>>> supports it.
>>>
>>> It does seem there would be a lot of customers wanting this fabric
>>> extension to hypervisor capabilities for MH load balancing- and very
>>> surprising that it does not exist.
>>>
>>> https://tools.ietf.org/html/rfc8365#section-7
>>>
>>>
>>>
>>> Kind regards
>>>
>>> Gyan Mishra
>>> Verizon
>>> Cell 301 502-1347
>>>
>>> On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze >> > wrote:
>>>
 Hi Gyan,



 Here, we use a mix of NXOS & Arista EOS for leaf and spine, ASR9k DCI
 doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
 also confess to pockets of proprietary NXOS vPC, and, pockets of Arista
 ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
 capabilities and that worked alright.  Customers, like us, have been long
 promised the vendor silicon will catch up, though we’ve yet to see it
 happen.



 We’re entirely a home grown automation / orchestration stack, with many
 years behind it.  Have a look at some talks we’ve given publicly for the
 background in our ‘GOLF’ project.
 https://www.youtube.com/watch?v=jXOrdHfBqb0&t=



 > Has this ever been done before and with which hypervisors ?

 We had the same requirements as you 2-3 years ago.  Have we solved it?
 TL;DR: not really!  We did make an interesting proof of concept though…

 Being largely a VMWare NSX house, we found VMWare closed to offering
 any type of EVPN orchestrated VTEP extension into the physical fabric.
 Quite simply, its model of extending to physical VTEP is OVSDB.  That’s not
 exactly compatible with an EVPN ToR, for various reasons.  So, we had the
 idea that we’d build something which spoke both OVSDB (to NSX) and EVPN (to
 the physical ToR).  The high level idea being, we’d use this to create
 routes in EVPN if they were seen from OVSDB, and vice versa, only we’d
 maintain the original next-hop.  Kinda like an iBGP RR, only between 2
 different control-planes.  Internally, we called it VxGlue.



 At a high level, it worked well in achieving nice loadbalancing of
 traffic into the hypervisor.  It would no doubt scale fairly well too.  Is
 it the right thing to do in a production network?  We decided not.  For
 one, its very hard to keep up with pace of development of underlying VMWare
 API’s.  As I say, this was a few years ago and things have moved on.  Maybe
 we’ll revisit.



 Sandy





 On 04/03/2020, 21:03, "BESS on behalf of Gyan Mishra" <
 bess-boun...@ietf.org on behalf of hayabusa...@gmail.com> wrote:





 Thank you all for the feedback!!



 Our req

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-09 Thread Gyan Mishra
Hi Tony

I am actually a member of the RIFT WG; the charter description
was intriguing, so I joined to learn more about the RIFT technology.

I downloaded the draft but never made it through, as it's a lengthy 100+ page
draft.  Now I really have to read it.

Yes, that would be great.  Please unicast me at hayabusa...@gmail.com.

Kind regards

Gyan

On Mon, Mar 9, 2020 at 11:31 AM Tony Przygienda  wrote:

> Gyan, the technology that you look for exists and is about to go standards
> RFC (i.e. _scalable_ L3 multihoming to the host including bandwidth
> load-balancing and many other things that need consideration in such case,
> e.g. host asic table scalability). Please look @
> https://datatracker.ietf.org/doc/draft-ietf-rift-rift/  If you need more
> info/code, more than happy to talk out of band
>
> thanks
>
> -- tony
>
> On Thu, Mar 5, 2020 at 2:04 PM Gyan Mishra  wrote:
>
>> Hi Sandy
>>
>> Thank you for the feedback on your use cases.
>>
>> For now will remain L2 MLAG to the host.
>>
>> BESS WG,
>>
>> If anyone hears of any new developments on vxlan fabric extension to
>> hypervisor and any vendors supporting even 3rd party open source
>> implementations of fabric extensions please reach out to me.
>>
>> As an aside when reading RFC 8365 NVO3 section 7  it kind of gets excited
>> by the subject heading “single homed NVE residing in Hypervisor” or section
>> 8 MH NVE in TOR server - thinking that maybe fabric extension to hypervisor
>> does actually exist but of course the let down that no vendor supports it.
>>
>> It does seem there would be a lot of customers wanting this fabric
>> extension to hypervisor capabilities for MH load balancing- and very
>> surprising that it does not exist.
>>
>> https://tools.ietf.org/html/rfc8365#section-7
>>
>>
>>
>> Kind regards
>>
>> Gyan Mishra
>> Verizon
>> Cell 301 502-1347
>>
>> On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze > > wrote:
>>
>>> Hi Gyan,
>>>
>>>
>>>
>>> Here, we use a mix of NXOS & Arista EOS for leaf and spine, ASR9k DCI
>>> doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
>>> also confess to pockets of proprietary NXOS vPC, and, pockets of Arista
>>> ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
>>> capabilities and that worked alright.  Customers, like us, have been long
>>> promised the vendor silicon will catch up, though we’ve yet to see it
>>> happen.
>>>
>>>
>>>
>>> We’re entirely a home grown automation / orchestration stack, with many
>>> years behind it.  Have a look at some talks we’ve given publicly for the
>>> background in our ‘GOLF’ project.
>>> https://www.youtube.com/watch?v=jXOrdHfBqb0&t=
>>>
>>>
>>>
>>> > Has this ever been done before and with which hypervisors ?
>>>
>>> We had the same requirements as you 2-3 years ago.  Have we solved it?
>>> TL;DR: not really!  We did make an interesting proof of concept though…
>>>
>>> Being largely a VMWare NSX house, we found VMWare closed to offering any
>>> type of EVPN orchestrated VTEP extension into the physical fabric.  Quite
>>> simply, its model of extending to physical VTEP is OVSDB.  That’s not
>>> exactly compatible with an EVPN ToR, for various reasons.  So, we had the
>>> idea that we’d build something which spoke both OVSDB (to NSX) and EVPN (to
>>> the physical ToR).  The high level idea being, we’d use this to create
>>> routes in EVPN if they were seen from OVSDB, and vice versa, only we’d
>>> maintain the original next-hop.  Kinda like an iBGP RR, only between 2
>>> different control-planes.  Internally, we called it VxGlue.
>>>
>>>
>>>
>>> At a high level, it worked well in achieving nice loadbalancing of
>>> traffic into the hypervisor.  It would no doubt scale fairly well too.  Is
>>> it the right thing to do in a production network?  We decided not.  For
>>> one, its very hard to keep up with pace of development of underlying VMWare
>>> API’s.  As I say, this was a few years ago and things have moved on.  Maybe
>>> we’ll revisit.
>>>
>>>
>>>
>>> Sandy
>>>
>>>
>>>
>>>
>>>
>>> On 04/03/2020, 21:03, "BESS on behalf of Gyan Mishra" <
>>> bess-boun...@ietf.org on behalf of hayabusa...@gmail.com> wrote:
>>>
>>>
>>>
>>>
>>>
>>> Thank you all for the feedback!!
>>>
>>>
>>>
>>> Our requirements we ar looking for a way to increase stability without
>>> sacrificing bandwidth availability and convergence with Data Center host
>>> connectivity to an existing vxlan fabric.  Our server traffic volume is
>>> higher bandwidth East to West then North to South.
>>>
>>>
>>>
>>> We have a Cisco Nexus based vxlan evpn fabric with multi site feature
>>> connecting all of our PODs intra DC and use BGP EVPN over an MPLS core for
>>> DCI inter connect inter Data Center connectivity.
>>>
>>>
>>>
>>> We have orchestration via Cisco NSO and DCNM for network
>>> programmability.  Typical Cisco shop.
>>>
>>>
>>>
>>> Our Data Center host attachment model as been with MLAG using Cisco’s
>>> vPC.  That has been problematic having that L2 extension so we would lik

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-09 Thread Tony Przygienda
Gyan, the technology that you are looking for exists and is about to become a
standards-track RFC (i.e. _scalable_ L3 multihoming to the host, including
bandwidth load-balancing and many other things that need consideration in such
a case, e.g. host ASIC table scalability). Please look @
https://datatracker.ietf.org/doc/draft-ietf-rift-rift/  If you need more
info/code, I'm more than happy to talk out of band.

thanks

-- tony

On Thu, Mar 5, 2020 at 2:04 PM Gyan Mishra  wrote:

> Hi Sandy
>
> Thank you for the feedback on your use cases.
>
> For now will remain L2 MLAG to the host.
>
> BESS WG,
>
> If anyone hears of any new developments on vxlan fabric extension to
> hypervisor and any vendors supporting even 3rd party open source
> implementations of fabric extensions please reach out to me.
>
> As an aside when reading RFC 8365 NVO3 section 7  it kind of gets excited
> by the subject heading “single homed NVE residing in Hypervisor” or section
> 8 MH NVE in TOR server - thinking that maybe fabric extension to hypervisor
> does actually exist but of course the letdown that no vendor supports it.
>
> It does seem there would be a lot of customers wanting this fabric
> extension to hypervisor capabilities for MH load balancing- and very
> surprising that it does not exist.
>
> https://tools.ietf.org/html/rfc8365#section-7
>
>
>
> Kind regards
>
> Gyan Mishra
> Verizon
> Cell 301 502-1347
>
> On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze  > wrote:
>
>> Hi Gyan,
>>
>>
>>
>> Here, we use a mix of NXOS & Arista EOS for leaf and spine, ASR9k DCI
>> doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
>> also confess to pockets of proprietary NXOS vPC, and, pockets of Arista
>> ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
>> capabilities and that worked alright.  Customers, like us, have been long
>> promised the vendor silicon will catch up, though we’ve yet to see it
>> happen.
>>
>>
>>
>> We’re entirely a home grown automation / orchestration stack, with many
>> years behind it.  Have a look at some talks we’ve given publicly for the
>> background in our ‘GOLF’ project.
>> https://www.youtube.com/watch?v=jXOrdHfBqb0&t=
>>
>>
>>
>> > Has this ever been done before and with which hypervisors ?
>>
>> We had the same requirements as you 2-3 years ago.  Have we solved it?
>> TL;DR: not really!  We did make an interesting proof of concept though…
>>
>> Being largely a VMWare NSX house, we found VMWare closed to offering any
>> type of EVPN orchestrated VTEP extension into the physical fabric.  Quite
>> simply, its model of extending to physical VTEP is OVSDB.  That’s not
>> exactly compatible with an EVPN ToR, for various reasons.  So, we had the
>> idea that we’d build something which spoke both OVSDB (to NSX) and EVPN (to
>> the physical ToR).  The high level idea being, we’d use this to create
>> routes in EVPN if they were seen from OVSDB, and vice versa, only we’d
>> maintain the original next-hop.  Kinda like an iBGP RR, only between 2
>> different control-planes.  Internally, we called it VxGlue.
>>
>>
>>
>> At a high level, it worked well in achieving nice loadbalancing of
>> traffic into the hypervisor.  It would no doubt scale fairly well too.  Is
>> it the right thing to do in a production network?  We decided not.  For
>> one, its very hard to keep up with pace of development of underlying VMWare
>> API’s.  As I say, this was a few years ago and things have moved on.  Maybe
>> we’ll revisit.
>>
>>
>>
>> Sandy
>>
>>
>>
>>
>>
>> On 04/03/2020, 21:03, "BESS on behalf of Gyan Mishra" <
>> bess-boun...@ietf.org on behalf of hayabusa...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Thank you all for the feedback!!
>>
>>
>>
>> Our requirements we ar looking for a way to increase stability without
>> sacrificing bandwidth availability and convergence with Data Center host
>> connectivity to an existing vxlan fabric.  Our server traffic volume is
>> higher bandwidth East to West then North to South.
>>
>>
>>
>> We have a Cisco Nexus based vxlan evpn fabric with multi site feature
>> connecting all of our PODs intra DC and use BGP EVPN over an MPLS core for
>> DCI inter connect inter Data Center connectivity.
>>
>>
>>
>> We have orchestration via Cisco NSO and DCNM for network
>> programmability.  Typical Cisco shop.
>>
>>
>>
>> Our Data Center host attachment model as been with MLAG using Cisco’s
>> vPC.  That has been problematic having that L2 extension so we would like
>> to find a better way to maybe leverage our existing vxlan fabric and extend
>> to server hypervisor if at all possible.
>>
>>
>>
>> So with a hypervisor connected to two leaf switches in a vxlan fabric  it
>> sounds like it maybe possible with Cisco’s IETF standards based
>> implementation of vxlan overlay following NVO3 RFC 8365 and BGP EVPN RFC
>> 7432 that we could extend the fabric to server hypervisor.
>>
>>
>>
>> The question related to L3 ECMP versus L2 MLAG become moot as with
>> existing BGP EVPN load balancing procedures wi

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-05 Thread Gyan Mishra
Hi Sandy

Thank you for the feedback on your use cases.

For now we will remain L2 MLAG to the host.

BESS WG,

If anyone hears of any new developments on VXLAN fabric extension to the
hypervisor, or of any vendors supporting even 3rd-party open source
implementations of fabric extension, please reach out to me.

As an aside, when reading RFC 8365 (NVO3) Section 7, I got excited by the
subject heading “single-homed NVE residing in Hypervisor” (and Section 8,
MH NVE residing in the ToR switch), thinking that maybe fabric extension to
the hypervisor does actually exist, but of course the letdown is that no
vendor supports it.

It does seem there would be a lot of customers wanting this
fabric-extension-to-hypervisor capability for MH load balancing, and it is
very surprising that it does not exist.

https://tools.ietf.org/html/rfc8365#section-7



Kind regards

Gyan Mishra
Verizon
Cell 301 502-1347

On Wed, Mar 4, 2020 at 4:57 PM Sandy Breeze 
wrote:

> Hi Gyan,
>
>
>
> Here, we use a mix of NXOS & Arista EOS for leaf and spine, ASR9k DCI
> doing L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We
> also confess to pockets of proprietary NXOS vPC, and, pockets of Arista
> ESI-MH.  Incidentally, the older Broadcom NXOS boxes do have ESI-MH
> capabilities and that worked alright.  Customers, like us, have been long
> promised the vendor silicon will catch up, though we’ve yet to see it
> happen.
>
>
>
> We’re entirely a home grown automation / orchestration stack, with many
> years behind it.  Have a look at some talks we’ve given publicly for the
> background in our ‘GOLF’ project.
> https://www.youtube.com/watch?v=jXOrdHfBqb0&t=
>
>
>
> > Has this ever been done before and with which hypervisors ?
>
> We had the same requirements as you 2-3 years ago.  Have we solved it?
> TL;DR: not really!  We did make an interesting proof of concept though…
>
> Being largely a VMWare NSX house, we found VMWare closed to offering any
> type of EVPN orchestrated VTEP extension into the physical fabric.  Quite
> simply, its model of extending to physical VTEP is OVSDB.  That’s not
> exactly compatible with an EVPN ToR, for various reasons.  So, we had the
> idea that we’d build something which spoke both OVSDB (to NSX) and EVPN (to
> the physical ToR).  The high level idea being, we’d use this to create
> routes in EVPN if they were seen from OVSDB, and vice versa, only we’d
> maintain the original next-hop.  Kinda like an iBGP RR, only between 2
> different control-planes.  Internally, we called it VxGlue.
>
>
>
> At a high level, it worked well in achieving nice loadbalancing of traffic
> into the hypervisor.  It would no doubt scale fairly well too.  Is it the
> right thing to do in a production network?  We decided not.  For one, its
> very hard to keep up with pace of development of underlying VMWare API’s.
> As I say, this was a few years ago and things have moved on.  Maybe we’ll
> revisit.
>
>
>
> Sandy
>
>
>
>
>
> On 04/03/2020, 21:03, "BESS on behalf of Gyan Mishra" <
> bess-boun...@ietf.org on behalf of hayabusa...@gmail.com> wrote:
>
>
>
>
>
> Thank you all for the feedback!!
>
>
>
> Our requirements we ar looking for a way to increase stability without
> sacrificing bandwidth availability and convergence with Data Center host
> connectivity to an existing vxlan fabric.  Our server traffic volume is
> higher bandwidth East to West then North to South.
>
>
>
> We have a Cisco Nexus based vxlan evpn fabric with multi site feature
> connecting all of our PODs intra DC and use BGP EVPN over an MPLS core for
> DCI inter connect inter Data Center connectivity.
>
>
>
> We have orchestration via Cisco NSO and DCNM for network programmability.
> Typical Cisco shop.
>
>
>
> Our Data Center host attachment model as been with MLAG using Cisco’s
> vPC.  That has been problematic having that L2 extension so we would like
> to find a better way to maybe leverage our existing vxlan fabric and extend
> to server hypervisor if at all possible.
>
>
>
> So with a hypervisor connected to two leaf switches in a vxlan fabric  it
> sounds like it maybe possible with Cisco’s IETF standards based
> implementation of vxlan overlay following NVO3 RFC 8365 and BGP EVPN RFC
> 7432 that we could extend the fabric to server hypervisor.
>
>
>
> The question related to L3 ECMP versus L2 MLAG become moot as with
> existing BGP EVPN load balancing procedures with EVPN type 4 default
> gateway DF election we can achieve all active multi homed load balancing
> from the hypervisor.  As was mentioned RFC 8365 since vxlan evpn does not
> use ESI the local bias feature would have to be used for split horizon.
>
>
>
> Of course with the extension to the hypervisor we would use are existing
> orchestration of the fabric to manage the server hypervisor layer.
>
>
>
> Has this ever been done before and with which hypervisors ?
>
>
>
> Kind regards
>
>
>
> Gyan
>
>
>
> On Mon, Mar 2, 2020 at 7:58 PM Jeff Tantsura 
> wrote:
>
> Gyan,
>
>
>
> On open source side of things - F

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Gyan Mishra
Thank you Yuya!!

On Wed, Mar 4, 2020 at 11:58 AM Yuya KAWAKAMI  wrote:

> Hi Gyan and Robert,
>
>  > This could eliminate use of MLAG on the leaf switches
>
> Contrail/Tungsten just provide L2/L3 overlay routes to Compute via
> XMPP(MP-BGP), not support underlay multipath.
> At this time, vRouter of Contrail/Tungsten does not follow changes in the
> routing table of host OS.
> So there are still MLAG or Virtual-Chassis or VRRP and I'm struggling with
> them.
> This is just implementation issue and will be resolved in the future.
>
> Just for your information,
> Yuya
> SDN Tech Lead, NTT
>
> On 2020/03/03 9:07, Robert Raszuk wrote:
> > Hi Gyan,
> >
> > You are touching subject close to me so let me share my perspective on
> your doubts below ;)
> >
> >  > maybe some advantages of elimination of L2 to the host
> >
> > Not some but huge !
> >
> >  >  BGP multipath provides flow based uneven load balancing
> >
> > First Contrail/Tungsten does not use BGP to the hypervisor but XMPP. But
> this is opaque to your concern.
> >
> > Load balancing and hashing construction is your choice, BGP or XMPP only
> deliver you next hops .. how you spread traffic to them is 100% up to your
> choice. That is the same on hypervisor or on any decent router. LAGs also
> build hash in the way you configure them to do so.
> >
> >  >  hypervisor managed by server admins
> >
> > In any decent network or for that matter even in my lab this is all 100%
> automated. You run one template and execute it. Ansible works pretty well,
> but there are other choices too.
> >
> > Many thx,
> > R.
> >
> >
> > On Tue, Mar 3, 2020 at 1:00 AM Gyan Mishra  > wrote:
> >
> >
> > Thanks Robert for the quick response
> >
> > Just thinking out loud -  I can see there maybe some advantages of
> elimination of L2 to the host but the one major disadvantage is that BGP
> multipath provides flow based uneven load balancing so not as desirable
> from that standpoint compared to an L3 MLAG bundle XOR Src/Dest/Port hash.
> >
> > Other big down side is most enterprises have the hypervisor managed
> by server admins but if you run BGP now that ends up shifting to network.
> More complicated.
> >
> > Kind regards
> >
> > Gyan
> >
> > On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  > wrote:
> >
> > Hi Gyan,
> >
> > Similar architecture has been invented and shipped by Contrail
> team. Now that project after they got acquired by Juniper has been renamed
> to Tungsten Fabric https://tungsten.io/  while
> Juniper continued to keep the original project's name and commercial flavor
> of it. No guarantees of any product quality at this point.
> >
> > Btw, no need for VXLAN nor BGP to the host. The alternatives proposed
> above were well thought out and turned out to work in ways far more
> efficient and practical if you zoom into the details.
> >
> > Best,
> > Robert.
> >
> >
> > On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra <
> hayabusa...@gmail.com > wrote:
> >
> >
> > Dear BESS WG
> >
> > Is anyone aware of any IETF BGP development in the Data
> Center arena to extend BGP VXLAN EVPN to a blade server Hypervisor making
> the Hypervisor part of the  vxlan fabric.  This could eliminate use of MLAG
> on the leaf switches and eliminate L2 completely from the vxlan fabric
> thereby maximizing  stability.
> >
> > Kind regards,
> >
> > Gyan
> > --
> >
> > Gyan  Mishra
> >
> > Network Engineering & Technology
> >
> > Verizon
> >
> > Silver Spring, MD 20904
> >
> > Phone: 301 502-1347
> >
> > Email: gyan.s.mis...@verizon.com  gyan.s.mis...@verizon.com>
> >
> >
> >
> > ___
> > BESS mailing list
> > BESS@ietf.org 
> > https://www.ietf.org/mailman/listinfo/bess
> >
> > --
> >
> > Gyan  Mishra
> >
> > Network Engineering & Technology
> >
> > Verizon
> >
> > Silver Spring, MD 20904
> >
> > Phone: 301 502-1347
> >
> > Email: gyan.s.mis...@verizon.com 
> >
> >
> >
> >
> > ___
> > BESS mailing list
> > BESS@ietf.org
> > https://www.ietf.org/mailman/listinfo/bess
> >
>
> ___
> BESS mailing list
> BESS@ietf.org
> https://www.ietf.org/mailman/listinfo/bess
>
-- 

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Sandy Breeze
Hi Gyan,

Here, we use a mix of NXOS & Arista EOS for leaf and spine, with ASR9k DCI doing
L2 + L3 VTEP and stitching to MPLS EVPN / traditional VPNv4/6.  We also confess to
pockets of proprietary NXOS vPC and pockets of Arista ESI-MH.  Incidentally,
the older Broadcom NXOS boxes do have ESI-MH capabilities, and that worked
alright.  Customers like us have long been promised that the vendor silicon will
catch up, though we’ve yet to see it happen.

We’re entirely a home-grown automation / orchestration stack, with many years
behind it.  Have a look at some talks we’ve given publicly for the background
on our ‘GOLF’ project: https://www.youtube.com/watch?v=jXOrdHfBqb0&t=

> Has this ever been done before and with which hypervisors ?
We had the same requirements as you 2-3 years ago.  Have we solved it?  TL;DR:
not really!  We did make an interesting proof of concept though…
Being largely a VMware NSX house, we found VMware closed to offering any type
of EVPN-orchestrated VTEP extension into the physical fabric.  Quite simply,
its model of extending to a physical VTEP is OVSDB.  That’s not exactly
compatible with an EVPN ToR, for various reasons.  So, we had the idea that
we’d build something which spoke both OVSDB (to NSX) and EVPN (to the physical
ToR).  The high-level idea being, we’d use this to create routes in EVPN if
they were seen from OVSDB, and vice versa, only we’d maintain the original
next-hop.  Kinda like an iBGP RR, only between 2 different control-planes.
Internally, we called it VxGlue.
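
As a rough illustration of the VxGlue idea above, here is a minimal Python
sketch of mirroring MAC routes between an OVSDB-fed plane and an EVPN-fed
plane while preserving the original next-hop VTEP; every class and method
name below is hypothetical and merely stands in for real OVSDB/EVPN sessions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MacRoute:
        vni: int     # VXLAN network identifier
        mac: str     # host MAC address
        vtep: str    # next-hop VTEP IP, preserved when reflected
        origin: str  # "ovsdb" or "evpn": which plane first learned it

    class VxGlue:
        """Reflect routes between two control planes without looping them back."""

        def __init__(self, ovsdb_client, evpn_speaker):
            # Both clients are assumed to expose advertise()/withdraw();
            # a real integration would speak OVSDB and BGP EVPN instead.
            self.ovsdb = ovsdb_client
            self.evpn = evpn_speaker
            self.table = {}  # (vni, mac) -> MacRoute

        def learn(self, route: MacRoute) -> None:
            key = (route.vni, route.mac)
            if key in self.table:
                return  # already synced; do not re-advertise (loop prevention)
            self.table[key] = route
            # Reflect into the *other* plane only, keeping the original next-hop.
            target = self.evpn if route.origin == "ovsdb" else self.ovsdb
            target.advertise(vni=route.vni, mac=route.mac, vtep=route.vtep)

        def withdraw(self, vni: int, mac: str) -> None:
            route = self.table.pop((vni, mac), None)
            if route is not None:
                target = self.evpn if route.origin == "ovsdb" else self.ovsdb
                target.withdraw(vni=vni, mac=mac)

    class _PrintPlane:  # stand-in for the real OVSDB / EVPN sessions
        def __init__(self, name): self.name = name
        def advertise(self, **kw): print(self.name, "advertise", kw)
        def withdraw(self, **kw): print(self.name, "withdraw", kw)

    glue = VxGlue(_PrintPlane("ovsdb"), _PrintPlane("evpn"))
    glue.learn(MacRoute(vni=10100, mac="02:55:aa:bb:cc:01",
                        vtep="10.1.1.5", origin="ovsdb"))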

At a high level, it worked well in achieving nice load balancing of traffic into
the hypervisor.  It would no doubt scale fairly well too.  Is it the right
thing to do in a production network?  We decided not.  For one, it’s very hard
to keep up with the pace of development of the underlying VMware APIs.  As I say,
this was a few years ago and things have moved on.  Maybe we’ll revisit.

Sandy


On 04/03/2020, 21:03, "BESS on behalf of Gyan Mishra" 
mailto:bess-boun...@ietf.org> on behalf of 
hayabusa...@gmail.com> wrote:


Thank you all for the feedback!!

Our requirements we ar looking for a way to increase stability without 
sacrificing bandwidth availability and convergence with Data Center host 
connectivity to an existing vxlan fabric.  Our server traffic volume is higher 
bandwidth East to West then North to South.

We have a Cisco Nexus based vxlan evpn fabric with multi site feature 
connecting all of our PODs intra DC and use BGP EVPN over an MPLS core for DCI 
inter connect inter Data Center connectivity.

We have orchestration via Cisco NSO and DCNM for network programmability.  
Typical Cisco shop.

Our Data Center host attachment model as been with MLAG using Cisco’s vPC.  
That has been problematic having that L2 extension so we would like to find a 
better way to maybe leverage our existing vxlan fabric and extend to server 
hypervisor if at all possible.

So with a hypervisor connected to two leaf switches in a vxlan fabric  it 
sounds like it maybe possible with Cisco’s IETF standards based implementation 
of vxlan overlay following NVO3 RFC 8365 and BGP EVPN RFC 7432 that we could 
extend the fabric to server hypervisor.

The question related to L3 ECMP versus L2 MLAG become moot as with existing BGP 
EVPN load balancing procedures with EVPN type 4 default gateway DF election we 
can achieve all active multi homed load balancing from the hypervisor.  As was 
mentioned RFC 8365 since vxlan evpn does not use ESI the local bias feature 
would have to be used for split horizon.

Of course with the extension to the hypervisor we would use are existing 
orchestration of the fabric to manage the server hypervisor layer.

Has this ever been done before and with which hypervisors ?

Kind regards

Gyan

On Mon, Mar 2, 2020 at 7:58 PM Jeff Tantsura 
mailto:jefftant.i...@gmail.com>> wrote:
Gyan,

On open source side of things - FRR supports EVPN on the host.
Any vendor virtualized NOS would provide you the same (at least Junos/cRPD or  
XRv).
EVPN ESI load-sharing eliminates need for MLAG (basic thought, the devil is in 
the details :))
ECMP vs LAG load-balancing - the algorithms supported are quite similar, in 
some code bases actually the same, so this statement is not really correct.

Would be glad to better understand your requirements and help you!
Regards,
Jeff


On Mar 2, 2020, at 16:00, Gyan Mishra 
mailto:hayabusa...@gmail.com>> wrote:

Thanks Robert for the quick response

Just thinking out loud -  I can see there maybe some advantages of elimination 
of L2 to the host but the one major disadvantage is that BGP multipath provides 
flow based uneven load balancing so not as desirable from that standpoint 
compare to L3 MLAG bundle XOR Src/Dest/Port hash.

Other big down side is most enterprises have the hypervisor managed by server 
admins but if you run BGP now that ends up shifting to network.  More 
complicated.

Kind regards

Gyan

On Mon, Mar 2, 2020 at 

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Gyan Mishra
Thank you all for the feedback!!

Regarding our requirements: we are looking for a way to increase stability
without sacrificing bandwidth availability and convergence for Data Center host
connectivity to an existing VXLAN fabric.  Our server traffic volume is
higher bandwidth East-West than North-South.

We have a Cisco Nexus-based VXLAN EVPN fabric with the multi-site feature
connecting all of our PODs intra-DC, and we use BGP EVPN over an MPLS core for
DCI interconnect (inter-Data Center) connectivity.

We have orchestration via Cisco NSO and DCNM for network programmability.
Typical Cisco shop.

Our Data Center host attachment model has been MLAG using Cisco’s vPC.
That has been problematic given that L2 extension, so we would like to find
a better way, maybe leveraging our existing VXLAN fabric and extending it to
the server hypervisor if at all possible.

So, with a hypervisor connected to two leaf switches in a VXLAN fabric, it
sounds like it may be possible, with Cisco’s IETF standards-based
implementation of the VXLAN overlay following NVO3 RFC 8365 and BGP EVPN RFC
7432, that we could extend the fabric to the server hypervisor.

The question of L3 ECMP versus L2 MLAG becomes moot, as with existing
BGP EVPN load-balancing procedures and the EVPN Type 4 (Ethernet Segment
route) DF election we can achieve all-active multihomed load balancing from
the hypervisor.  As was mentioned, per RFC 8365, since VXLAN EVPN does not
carry the ESI label, the local-bias feature would have to be used for split
horizon.
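
Purely as an illustration of the DF election mentioned above, here is a small
Python sketch of the default modulo-based service carving from RFC 7432
Section 8.5, which the Ethernet Segment (Type 4) routes feed; the VTEP
addresses are made up, and vendor implementations may offer other DF
election algorithms.

    import ipaddress

    def elect_df(candidate_vteps, vlan_or_vni):
        """Return the VTEP acting as Designated Forwarder for this VLAN/VNI."""
        # Every PE/NVE builds the same ordered candidate list (numerically by
        # IP address), so they all agree on the DF without extra signaling.
        ordered = sorted(candidate_vteps,
                         key=lambda ip: int(ipaddress.ip_address(ip)))
        return ordered[vlan_or_vni % len(ordered)]

    # Two leaf VTEPs multihoming the same hypervisor-facing segment:
    leafs = ["10.0.0.2", "10.0.0.1"]
    print(elect_df(leafs, 100))  # 10.0.0.1 (100 mod 2 == 0)
    print(elect_df(leafs, 101))  # 10.0.0.2 (101 mod 2 == 1)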

Of course, with the extension to the hypervisor we would use our existing
orchestration of the fabric to manage the server hypervisor layer.

Has this ever been done before and with which hypervisors ?

Kind regards

Gyan

On Mon, Mar 2, 2020 at 7:58 PM Jeff Tantsura 
wrote:

> Gyan,
>
> On open source side of things - FRR supports EVPN on the host.
> Any vendor virtualized NOS would provide you the same (at least Junos/cRPD
> or  XRv).
> EVPN ESI load-sharing eliminates need for MLAG (basic thought, the devil
> is in the details :))
> ECMP vs LAG load-balancing - the algorithms supported are quite similar,
> in some code bases actually the same, so this statement is not really
> correct.
>
> Would be glad to better understand your requirements and help you!
>
> Regards,
> Jeff
>
> On Mar 2, 2020, at 16:00, Gyan Mishra  wrote:
>
> 
>
> Thanks Robert for the quick response
>
> Just thinking out loud -  I can see there maybe some advantages of
> elimination of L2 to the host but the one major disadvantage is that BGP
> multipath provides flow based uneven load balancing so not as desirable
> from that standpoint compare to L3 MLAG bundle XOR Src/Dest/Port hash.
>
> Other big down side is most enterprises have the hypervisor managed by
> server admins but if you run BGP now that ends up shifting to network.
> More complicated.
>
> Kind regards
>
> Gyan
>
> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  wrote:
>
>> Hi Gyan,
>>
>> Similar architecture has been invented and shipped by Contrail team. Now
>> that project after they got acquired by Juniper has been renamed to
>> Tungsten Fabric https://tungsten.io/ while Juniper continued to keep the
>> original project's name and commercial flavor of it. No guarantees of any
>> product quality at this point.
>>
>> Btw, no need for VXLAN nor BGP to the host. The alternatives proposed above
>> were well thought out and turned out to work in ways far more
>> efficient and practical if you zoom into the details.
>>
>> Best,
>> Robert.
>>
>>
>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra 
>> wrote:
>>
>>>
>>> Dear BESS WG
>>>
>>> Is anyone aware of any IETF BGP development in the Data Center arena to
>>> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
>>> part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf
>>> switches and eliminate L2 completely from the vxlan fabric thereby
>>> maximizing  stability.
>>>
>>> Kind regards,
>>>
>>> Gyan
>>> --
>>>
>>> Gyan  Mishra
>>>
>>> Network Engineering & Technology
>>>
>>> Verizon
>>>
>>> Silver Spring, MD 20904
>>>
>>> Phone: 301 502-1347
>>>
>>> Email: gyan.s.mis...@verizon.com 
>>>
>>>
>>>
>>> ___
>>> BESS mailing list
>>> BESS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/bess
>>>
>> --
>
> Gyan  Mishra
>
> Network Engineering & Technology
>
> Verizon
>
> Silver Spring, MD 20904
>
> Phone: 301 502-1347
>
> Email: gyan.s.mis...@verizon.com
>
>
>
> ___
> BESS mailing list
> BESS@ietf.org
> https://www.ietf.org/mailman/listinfo/bess
>
> --

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Gyan Mishra
Thanks Jorge!!

https://tools.ietf.org/html/rfc8365#page-16

Since VXLAN and NVGRE encapsulations do not include the ESI label,
   other means of performing the split-horizon filtering function must
   be devised for these encapsulations.  The following approach is
   recommended for split-horizon filtering when VXLAN (or NVGRE)
   encapsulation is used.

   Every NVE tracks the IP address(es) associated with the other NVE(s)
   with which it has shared multihomed ESs.  When the NVE receives a
   multi-destination frame from the overlay network, it examines the
   source IP address in the tunnel header (which corresponds to the
   ingress NVE) and filters out the frame on all local interfaces
   connected to ESs that are shared with the ingress NVE.  With this
   approach, it is required that the ingress NVE perform replication
   locally to all directly attached Ethernet segments (regardless of the
   DF election state) for all flooded traffic ingress from the access
   interfaces (i.e., from the hosts).  This approach is referred to as
   "Local Bias", and has the advantage that only a single IP address
   need be used per NVE for split-horizon filtering, as opposed to
   requiring an IP address per Ethernet segment per NVE.
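
To make the quoted "Local Bias" procedure concrete, here is a small Python
sketch of the filtering decision only; the Ethernet Segment name, interface
names, and peer NVE address below are hypothetical, and a real NVE would learn
this state from EVPN routes rather than a static table.

    # Per shared Ethernet Segment: the local attachment circuits and the peer
    # NVE IPs that multihome the same segment.
    shared_es = {
        "ES-hypervisor-42": {
            "local_ifaces": {"Ethernet1/1"},
            "peer_nves": {"192.0.2.11"},  # the other leaf on this segment
        },
    }

    def flood_targets(ingress_nve_ip, all_local_ifaces):
        """Local interfaces a flooded overlay frame may still be sent to."""
        blocked = set()
        for es in shared_es.values():
            if ingress_nve_ip in es["peer_nves"]:
                # The frame came from an NVE we share this ES with; local bias
                # says the ingress NVE already replicated it to that segment,
                # so we must not deliver it there a second time.
                blocked |= es["local_ifaces"]
        return all_local_ifaces - blocked

    print(flood_targets("192.0.2.11", {"Ethernet1/1", "Ethernet1/2"}))
    # {'Ethernet1/2'} - the interface on the shared segment is filtered out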


On Wed, Mar 4, 2020 at 11:01 AM Rabadan, Jorge (Nokia - US/Mountain View) <
jorge.raba...@nokia.com> wrote:

> I would also refer to RFC8365, specifically the local-bias explanation for
> multi-homing Split-Horizon, and also the NVE residing in the hypervisor.
>
> That’s usually the reference here.
>
>
>
> My two cents.
>
> Thx
>
> Jorge
>
>
>
> *From: *BESS  on behalf of "UTTARO, JAMES" <
> ju1...@att.com>
> *Date: *Wednesday, March 4, 2020 at 4:57 PM
> *To: *Jeff Tantsura , Gyan Mishra <
> hayabusa...@gmail.com>, BESS 
> *Subject: *Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM
>
>
>
> *Gyan,*
>
>
>
> *  The following draft may also be of interest. AT&T (
> A.Lingala ) has co-authored a draft that addresses unequal load balancing
> within a data center. This draft intends to optimize the use of links of
> different size within the data center to fully utilize the capacity of the
> links from the leafs to the servers. *
>
>
>
> *https://tools.ietf.org/html/draft-ietf-bess-evpn-unequal-lb-03
> <https://tools.ietf.org/html/draft-ietf-bess-evpn-unequal-lb-03> *
>
>
>
> *Thanks,*
>
> *      Jim Uttaro*
>
>
>
> *From:* Jeff Tantsura 
> *Sent:* Wednesday, March 04, 2020 10:47 AM
> *To:* Gyan Mishra ; BESS ; UTTARO,
> JAMES 
> *Subject:* Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM
>
>
>
> James,
>
>
>
> ESI multihoming (load-sharing) works just fine with VxLAN encapsulation
> (when supported), there’s no need for additional (proprietary) mechanisms
> (at least with basic synchronization).
>
>
>
> Gyan - the devil is in the details (as always) - I’m looking at
> multivendor EVPN VxLAN ESI designs as we speak, I’m yet to figure out how
> ESI type 3 (only ESI type supported in NX-OS) is going to work with ESI
> types 0/1 supported in Junos and Arista. I’d assume upcoming open source
> implementations will support type 0 (manual) only.
>
>
>
> To second James - replacing MLAG with ESI multihoming could be a really
> big deal in terms of simplification and normalization of the fabric (and
> you could finally remove peer-links!).
>
> L2 vs L3 discussion is somewhat orthogonal to that, if your services
> require stretched L2, whether your VTEPs are on a server or switch - you’d
> still be doing L2overL3.
>
>
>
> I still wouldn’t dare to deploy multivendor leafs though, but one step at
> a time ;-)
>
>
>
> Cheers,
>
> Jeff
>
> On Mar 4, 2020, 10:17 AM -0500, UTTARO, JAMES , wrote:
>
>
> *Gyan,*
>
>
>
> *  One of the big advantages of EVPN is the MLAG capability
> without the need for proprietary MLAG solutions. We have been actively
> testing EV-LAG to accomplish this in the WAN for L2 services. That being
> said, we use EVPN/MPLS where MH ( EV-LAG ) is conveyed via labels. My
> understanding is that when using EVPN/VXLAN, proprietary mechanisms are needed
> to make EV-LAG work. There is no SH label.*
>
>
>
> *Thanks,*
>
> *  Jim Uttaro*
>
>
>
> *From:* BESS  *On Behalf Of* Gyan Mishra
> *Sent:* Monday, March 02, 2020 6:26 PM
> *To:* BESS 
> *Subject:* [bess] VXLAN EVPN fabric extension to Hypervisor VM
>
>
>
>
>
> Dear BESS WG
>
>
>
> Is anyone aware of any IETF BGP development in the Data Center arena to
> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
> part of the 

Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Yuya KAWAKAMI

Hi Gyan and Robert,

> This could eliminate use of MLAG on the leaf switches

Contrail/Tungsten just provides L2/L3 overlay routes to the compute node via
XMPP (MP-BGP); it does not support underlay multipath.
At this time, the vRouter of Contrail/Tungsten does not follow changes in the
routing table of the host OS.
So there is still MLAG or Virtual-Chassis or VRRP, and I'm struggling with them.
This is just an implementation issue and will be resolved in the future.

Just for your information,
Yuya
SDN Tech Lead, NTT

On 2020/03/03 9:07, Robert Raszuk wrote:

Hi Gyan,

You are touching subject close to me so let me share my perspective on your 
doubts below ;)

 > maybe some advantages of elimination of L2 to the host

Not some but huge !

 >  BGP multipath provides flow based uneven load balancing

First Contrail/Tungsten does not use BGP to the hypervisor but XMPP. But this 
is opaque to your concern.

Load balancing and hashing construction is your choice, BGP or XMPP only 
deliver you next hops .. how you spread traffic to them is 100% up to your 
choice. That is the same on hypervisor or on any decent router. LAGs also build 
hash in the way you configure them to do so.
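
As a toy illustration of that point, the per-flow spreading is the same local
computation whether the next-hop set comes from BGP/XMPP ECMP or from LAG
member links: hash the 5-tuple and index into the set. Real ASICs use their
own hash functions and seeds; the Python below is only a sketch.

    import hashlib

    def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port, next_hops):
        """Hash the 5-tuple and pick one next-hop (ECMP path or LAG member)."""
        key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
        return next_hops[bucket % len(next_hops)]

    # The same function works for two ECMP leaf next-hops or two LAG members:
    paths = ["10.0.0.1 (leaf1)", "10.0.0.2 (leaf2)"]
    print(pick_next_hop("192.0.2.10", "198.51.100.7", 6, 49152, 443, paths))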

 >  hypervisor managed by server admins

In any decent network or for that matter even in my lab this is all 100% 
automated. You run one template and execute it. Ansible works pretty well, but 
there are other choices too.

Many thx,
R.


On Tue, Mar 3, 2020 at 1:00 AM Gyan Mishra mailto:hayabusa...@gmail.com>> wrote:


Thanks Robert for the quick response

Just thinking out loud -  I can see there maybe some advantages of 
elimination of L2 to the host but the one major disadvantage is that BGP 
multipath provides flow based uneven load balancing so not as desirable from 
that standpoint compared to an L3 MLAG bundle XOR Src/Dest/Port hash.

Other big down side is most enterprises have the hypervisor managed by 
server admins but if you run BGP now that ends up shifting to network.  More 
complicated.

Kind regards

Gyan

On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk mailto:rob...@raszuk.net>> wrote:

Hi Gyan,

Similar architecture has been invented and shipped by Contrail team. Now that 
project after they got acquired by Juniper has been renamed to Tungsten Fabric 
https://tungsten.io/  while Juniper continued to keep 
the original project's name and commercial flavor of it. No guarantees of any product 
quality at this point.

Btw, no need for VXLAN nor BGP to the host. The alternatives proposed above
were well thought out and turned out to work in ways far more efficient
and practical if you zoom into the details.

Best,
Robert.


On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra mailto:hayabusa...@gmail.com>> wrote:


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center 
arena to extend BGP VXLAN EVPN to a blade server Hypervisor making the 
Hypervisor part of the  vxlan fabric.  This could eliminate use of MLAG on the 
leaf switches and eliminate L2 completely from the vxlan fabric thereby 
maximizing  stability.

Kind regards,

Gyan
-- 


Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com 



___
BESS mailing list
BESS@ietf.org 
https://www.ietf.org/mailman/listinfo/bess

-- 


Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com 




___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess



___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Rabadan, Jorge (Nokia - US/Mountain View)
I would also refer to RFC8365, specifically the local-bias explanation for 
multi-homing Split-Horizon, and also the NVE residing in the hypervisor.
That’s usually the reference here.

My two cents.
Thx
Jorge

From: BESS  on behalf of "UTTARO, JAMES" 
Date: Wednesday, March 4, 2020 at 4:57 PM
To: Jeff Tantsura , Gyan Mishra 
, BESS 
Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

Gyan,

  The following draft may also be of interest. AT&T ( A.Lingala ) 
has co-authored a draft that addresses unequal load balancing within a data 
center. This draft intends to optimize the use of links of different size
within the data center to fully utilize the capacity of the links from the
leafs to the servers.

https://tools.ietf.org/html/draft-ietf-bess-evpn-unequal-lb-03

Thanks,
  Jim Uttaro

From: Jeff Tantsura 
Sent: Wednesday, March 04, 2020 10:47 AM
To: Gyan Mishra ; BESS ; UTTARO, JAMES 

Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

James,

ESI multihoming (load-sharing) works just fine with VxLAN encapsulation (when 
supported), there’s no need for additional (proprietary) mechanisms (at least 
with basic synchronization).

Gyan - the devil is in the details (as always) - I’m looking at multivendor 
EVPN VxLAN ESI designs as we speak, I’m yet to figure out how ESI type 3 (only 
ESI type supported in NX-OS) is going to work with ESI types 0/1 supported in 
Junos and Arista. I’d assume upcoming open source implementations will support 
type 0 (manual) only.

To second James - replacing MLAG with ESI multihoming could be a really big 
deal in terms of simplification and normalization of the fabric (and you could 
finally remove peer-links!).
L2 vs L3 discussion is somewhat orthogonal to that, if your services require 
stretched L2, whether your VTEPs are on a server or switch - you’d still be 
doing L2overL3.

I still wouldn’t dare to deploy multivendor leafs though, but one step at a 
time ;-)

Cheers,
Jeff
On Mar 4, 2020, 10:17 AM -0500, UTTARO, JAMES 
mailto:ju1...@att.com>>, wrote:


Gyan,

  One of the big advantages of EVPN is the MLAG capability without 
the need for proprietary MLAG solutions. We have been actively testing EV-LAG 
to accomplish this in the WAN for L2 services. That being said, we use
EVPN/MPLS where MH ( EV-LAG ) is conveyed via labels. My understanding is that
when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work.
There is no SH label.

Thanks,
  Jim Uttaro

From: BESS mailto:bess-boun...@ietf.org>> On Behalf Of 
Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS mailto:bess@ietf.org>>
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center arena to extend 
BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part of the  
vxlan fabric.  This could eliminate use of MLAG on the leaf switches and 
eliminate L2 completely from the vxlan fabric thereby maximizing  stability.

Kind regards,

Gyan
--
Gyan  Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: gyan.s.mis...@verizon.com<mailto:gyan.s.mis...@verizon.com>


___
BESS mailing list
BESS@ietf.org<mailto:BESS@ietf.org>
https://www.ietf.org/mailman/listinfo/bess
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread UTTARO, JAMES
Jeffrey,

  I was unaware that local bias had been standardized. Will take a 
read.

Thanks,
  Jim Uttaro

From: Jeffrey (Zhaohui) Zhang 
Sent: Wednesday, March 04, 2020 10:54 AM
To: UTTARO, JAMES ; Gyan Mishra ; BESS 

Subject: RE: [bess] VXLAN EVPN fabric extension to Hypervisor VM

EVPN/VXLAN uses the Local Bias method for MH split horizon, which is specified in 
RFC 8365.

Extending EVPN to servers don’t require new IETF standards – the servers just 
need to support existing relevant standards. Having said that, with many 
servers in the underlay, routing in the underlay needs to be able to scale 
well. For that you can run BGP in the underlay (RFC 7938), or RIFT 
(draft-ietf-rift-rift), or LSVR (draft-ietf-lsvr-bgp-spf).

Jeffrey

From: BESS mailto:bess-boun...@ietf.org>> On Behalf Of 
UTTARO, JAMES
Sent: Wednesday, March 4, 2020 7:17 AM
To: Gyan Mishra mailto:hayabusa...@gmail.com>>; BESS 
mailto:bess@ietf.org>>
Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

Gyan,

  One of the big advantages of EVPN is the MLAG capability without 
the need for proprietary MLAG solutions. We have been actively testing EV-LAG 
to accomplish this in the WAN for L2 services. That being said, we use
EVPN/MPLS where MH ( EV-LAG ) is conveyed via labels. My understanding is that
when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work.
There is no SH label.

Thanks,
  Jim Uttaro

From: BESS mailto:bess-boun...@ietf.org>> On Behalf Of 
Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS mailto:bess@ietf.org>>
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center arena to extend 
BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part of the  
vxlan fabric.  This could eliminate use of MLAG on the leaf switches and 
eliminate L2 completely from the vxlan fabric thereby maximizing  stability.

Kind regards,

Gyan
--
Gyan  Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: gyan.s.mis...@verizon.com<mailto:gyan.s.mis...@verizon.com>


___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread UTTARO, JAMES
Gyan,

  The following draft may also be of interest. AT&T ( A.Lingala ) 
has co-authored a draft that addresses unequal load balancing within a data 
center. This draft intends to optimize the use of links of different size
within the data center to fully utilize the capacity of the links from the
leafs to the servers.

https://tools.ietf.org/html/draft-ietf-bess-evpn-unequal-lb-03
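
As a generic illustration of the unequal-load-balancing idea (not the draft's
actual signaling, which defines how the weights are carried in EVPN), each
next-hop can simply be given hash buckets in proportion to its link bandwidth;
the Python below is only a sketch with made-up link speeds.

    import hashlib

    def weighted_buckets(links):
        """Expand next-hops into buckets proportional to bandwidth in Gbps."""
        unit = min(links.values())
        buckets = []
        for next_hop, gbps in links.items():
            buckets.extend([next_hop] * (gbps // unit))
        return buckets

    def pick(flow_key, buckets):
        digest = hashlib.sha256(flow_key.encode()).digest()
        return buckets[int.from_bytes(digest[:4], "big") % len(buckets)]

    links = {"leaf1": 100, "leaf2": 10}   # a 100G and a 10G uplink
    buckets = weighted_buckets(links)     # 10 buckets for leaf1, 1 for leaf2
    print(pick("192.0.2.10->198.51.100.7:443", buckets))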

Thanks,
  Jim Uttaro

From: Jeff Tantsura 
Sent: Wednesday, March 04, 2020 10:47 AM
To: Gyan Mishra ; BESS ; UTTARO, JAMES 

Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

James,

ESI multihoming (load-sharing) works just fine with VxLAN encapsulation (when 
supported), there’s no need for additional (proprietary) mechanisms (at least 
with basic synchronization).

Gyan - the devil is in the details (as always) - I’m looking at multivendor 
EVPN VxLAN ESI designs as we speak, I’m yet to figure out how ESI type 3 (only 
ESI type supported in NX-OS) is going to work with ESI types 0/1 supported in 
Junos and Arista. I’d assume upcoming open source implementations will support 
type 0 (manual) only.

To second James - replacing MLAG with ESI multihoming could be a really big 
deal in terms of simplification and normalization of the fabric (and you could 
finally remove peer-links!).
L2 vs L3 discussion is somewhat orthogonal to that, if your services require 
stretched L2, whether your VTEPs are on a server or switch - you’d still be 
doing L2overL3.

I still wouldn’t dare to deploy multivendor leafs though, but one step at a 
time ;-)

Cheers,
Jeff
On Mar 4, 2020, 10:17 AM -0500, UTTARO, JAMES 
mailto:ju1...@att.com>>, wrote:

Gyan,

  One of the big advantages of EVPN is the MLAG capability without 
the need for proprietary MLAG solutions. We have been actively testing EV-LAG 
to accomplish this in the WAN for L2 services. That being said, we use
EVPN/MPLS where MH ( EV-LAG ) is conveyed via labels. My understanding is that
when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work.
There is no SH label.

Thanks,
  Jim Uttaro

From: BESS mailto:bess-boun...@ietf.org>> On Behalf Of 
Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS mailto:bess@ietf.org>>
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center arena to extend 
BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part of the  
vxlan fabric.  This could eliminate use of MLAG on the leaf switches and 
eliminate L2 completely from the vxlan fabric thereby maximizing  stability.

Kind regards,

Gyan
--
Gyan  Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: gyan.s.mis...@verizon.com<mailto:gyan.s.mis...@verizon.com>


___
BESS mailing list
BESS@ietf.org<mailto:BESS@ietf.org>
https://www.ietf.org/mailman/listinfo/bess
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Jeffrey (Zhaohui) Zhang
EVPN/VXLAN uses the Local Bias method for MH split horizon, which is specified
in RFC 8365.

Extending EVPN to servers don’t require new IETF standards – the servers just 
need to support existing relevant standards. Having said that, with many 
servers in the underlay, routing in the underlay needs to be able to scale 
well. For that you can run BGP in the underlay (RFC 7938), or RIFT 
(draft-ietf-rift-rift), or LSVR (draft-ietf-lsvr-bgp-spf).
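
To make the local-bias point concrete, here is a minimal Python sketch of the
split-horizon check an egress NVE applies to BUM traffic under that scheme:
per RFC 8365 the ingress NVE already floods to its own local segment ports,
so the egress suppresses any segment it shares with the source VTEP. Table
contents and names are invented.

# Ethernet segments attached to this NVE, and the remote VTEPs that share
# each segment (learned from Ethernet Segment / Ethernet A-D routes).
LOCAL_ES = {
    "ES-1": {"local_ports": ["eth1"], "sharing_vteps": {"192.0.2.11"}},
    "ES-2": {"local_ports": ["eth2"], "sharing_vteps": set()},  # single-homed
}

def bum_flood_ports(source_vtep):
    """Local ports to which a BUM frame received from source_vtep is flooded.

    Ports on a segment shared with the source VTEP are skipped: the ingress
    NVE has already delivered the frame to that segment locally ("local
    bias"), so no split-horizon label is carried in the VXLAN packet."""
    ports = []
    for es in LOCAL_ES.values():
        if source_vtep in es["sharing_vteps"]:
            continue  # shared segment: suppress to avoid duplicates and loops
        ports.extend(es["local_ports"])
    return ports

if __name__ == "__main__":
    print(bum_flood_ports("192.0.2.11"))  # ['eth2'] only
    print(bum_flood_ports("192.0.2.12"))  # ['eth1', 'eth2']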

Jeffrey

From: BESS  On Behalf Of UTTARO, JAMES
Sent: Wednesday, March 4, 2020 7:17 AM
To: Gyan Mishra ; BESS 
Subject: Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

Gyan,

  One of the big advantages of EVPN is the MLAG capability without
the need for proprietary MLAG solutions. We have been actively testing EV-LAG
to accomplish this in the WAN for L2 services. That being said, we use
EVPN/MPLS, where MH (EV-LAG) is conveyed via labels. My understanding is that
when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work.
There is no SH label.

Thanks,
  Jim Uttaro

From: BESS <bess-boun...@ietf.org> On Behalf Of
Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS <bess@ietf.org>
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center arena to extend 
BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part of the  
vxlan fabric.  This could eliminate use of MLAG on the leaf switches and 
eliminate L2 completely from the vxlan fabric thereby maximizing  stability.

Kind regards,

Gyan
--
Gyan  Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: gyan.s.mis...@verizon.com


___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread Jeff Tantsura
James,

ESI multihoming (load-sharing) works just fine with VxLAN encapsulation (when 
supported), there’s no need for additional (proprietary) mechanisms (at least 
with basic synchronization).

Gyan - the devil is in the details (as always) - I’m looking at multivendor 
EVPN VxLAN ESI designs as we speak, I’m yet to figure out how ESI type 3 (only 
ESI type supported in NX-OS) is going to work with ESI types 0/1 supported in 
Junos and Arista. I’d assume upcoming open source implementations will support 
type 0 (manual) only.

To second James - replacing MLAG with ESI multihoming could be a really big 
deal in terms of simplification and normalization of the fabric (and you could 
finally remove peer-links!).
L2 vs L3 discussion is somewhat orthogonal to that, if your services require 
stretched L2, whether your VTEPs are on a server or switch - you’d still be 
doing L2overL3.

I still wouldn’t dare to deploy multivendor leafs though, but one step at a 
time ;-)

Cheers,
Jeff
On Mar 4, 2020, 10:17 AM -0500, UTTARO, JAMES , wrote:
> Gyan,
>
>   One of the big advantages of EVPN is the MLAG capability
> without the need for proprietary MLAG solutions. We have been actively
> testing EV-LAG to accomplish this in the WAN for L2 services. That being
> said, we use EVPN/MPLS, where MH (EV-LAG) is conveyed via labels. My
> understanding is that when using EVPN/VXLAN, proprietary mechanisms are
> needed to make EV-LAG work. There is no SH label.
>
> Thanks,
>   Jim Uttaro
>
> From: BESS  On Behalf Of Gyan Mishra
> Sent: Monday, March 02, 2020 6:26 PM
> To: BESS 
> Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM
>
>
> Dear BESS WG
>
> Is anyone aware of any IETF BGP development in the Data Center arena to 
> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part 
> of the  vxlan fabric.  This could eliminate use of MLAG on the leaf switches 
> and eliminate L2 completely from the vxlan fabric thereby maximizing  
> stability.
>
> Kind regards,
>
> Gyan
> --
> Gyan  Mishra
> Network Engineering & Technology
> Verizon
> Silver Spring, MD 20904
> Phone: 301 502-1347
> Email: gyan.s.mis...@verizon.com
>
>
> ___
> BESS mailing list
> BESS@ietf.org
> https://www.ietf.org/mailman/listinfo/bess
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-04 Thread UTTARO, JAMES
Gyan,

  One of the big advantages of EVPN is the MLAG capability without
the need for proprietary MLAG solutions. We have been actively testing EV-LAG
to accomplish this in the WAN for L2 services. That being said, we use
EVPN/MPLS, where MH (EV-LAG) is conveyed via labels. My understanding is that
when using EVPN/VXLAN, proprietary mechanisms are needed to make EV-LAG work.
There is no SH label.

Thanks,
  Jim Uttaro

From: BESS  On Behalf Of Gyan Mishra
Sent: Monday, March 02, 2020 6:26 PM
To: BESS 
Subject: [bess] VXLAN EVPN fabric extension to Hypervisor VM


Dear BESS WG

Is anyone aware of any IETF BGP development in the Data Center arena to extend 
BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor part of the  
vxlan fabric.  This could eliminate use of MLAG on the leaf switches and 
eliminate L2 completely from the vxlan fabric thereby maximizing  stability.

Kind regards,

Gyan
--
Gyan  Mishra
Network Engineering & Technology
Verizon
Silver Spring, MD 20904
Phone: 301 502-1347
Email: gyan.s.mis...@verizon.com


___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Gyan Mishra
Appreciate you sharing thoughts.

On Mon, Mar 2, 2020 at 7:08 PM Robert Raszuk  wrote:

> Hi Gyan,
>
> You are touching subject close to me so let me share my perspective on
> your doubts below ;)
>
> >  maybe some advantages of elimination of L2 to the host
>
> Not some but huge !
>

  Please name a few benefits of L3 compared to L2 MLAG & no STP. One
issue is host LACP misconfiguration; as a standard, we suspend individual
links to force the server folks to fix LACP.

>
> >  BGP multipath provides flow based uneven load balancing
>
> First Contrail/Tungsten does not use BGP to the hypervisor but XMPP. But
> this is opaque to your concern.
>

   Do you know of any vendor or project with a BGP-based L3-to-host
solution, with or without extending the VXLAN fabric?

>
> Load balancing and hash construction are your choice; BGP or XMPP only
> deliver you the next hops. How you spread traffic across them is 100% up to
> your choice. That is the same on a hypervisor or on any decent router. LAGs
> also build the hash in the way you configure them to do so.
>

Understood. ECMP L3 flow-based load balancing has inherently always had
that downside compared to a per-packet Ether bundle hash, at any layer of the
network (DC, access, core, etc.).
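
To illustrate the point being made here: the per-flow hash is the same
computation whether the next hops come from BGP ECMP or from LAG members, and
a single elephant flow will skew it either way. A minimal Python sketch, with
invented flow sizes:

import hashlib

NEXT_HOPS = ["leaf1", "leaf2"]  # two ECMP paths (or two LAG member links)

def path_for_flow(five_tuple):
    """Per-flow hashing: every packet of a flow is pinned to one path."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return NEXT_HOPS[int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)]

if __name__ == "__main__":
    # 99 mice flows of 1 MB each plus one 500 MB elephant flow (made up).
    flows = [(("10.0.0.%d" % i, "10.0.1.1", 6, 1024 + i, 443), 1) for i in range(99)]
    flows.append((("10.0.0.200", "10.0.1.1", 6, 5000, 443), 500))
    load_mb = {nh: 0 for nh in NEXT_HOPS}
    for tup, size_mb in flows:
        load_mb[path_for_flow(tup)] += size_mb
    print(load_mb)  # the elephant flow skews one path, ECMP or LAG alike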

>
> >  hypervisor managed by server admins
>
> In any decent network or for that matter even in my lab this is all 100%
> automated. You run one template and execute it. Ansible works pretty well,
> but there are other choices too.
>
> Many thx,
> R.
>
>
   Good point as most networks these days have orchestration built into the
solution.   Agreed.

>
>
> On Tue, Mar 3, 2020 at 1:00 AM Gyan Mishra  wrote:
>
>>
>> Thanks Robert for the quick response
>>
>> Just thinking out loud -  I can see there maybe some advantages of
>> elimination of L2 to the host but the one major disadvantage is that BGP
>> multipath provides flow based uneven load balancing so not as desirable
>> from that standpoint compare to L3 MLAG bundle XOR Src/Dest/Port hash.
>>
>> Other big down side is most enterprises have the hypervisor managed by
>> server admins but if you run BGP now that ends up shifting to network.
>> More complicated.
>>
>> Kind regards
>>
>> Gyan
>>
>> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  wrote:
>>
>>> Hi Gyan,
>>>
>>> Similar architecture has been invented and shipped by Contrail team. Now
>>> that project after they got acquired by Juniper has been renamed to
>>> Tungsten Fabric https://tungsten.io/ while Juniper continued to keep
>>> the original project's name and commercial flavor of it. No guarantees of
>>> any product quality at this point.
>>>
>>> Btw ,,, no need for VXLAN nor BGP to the host. The proposed above
>>> alternative were well thought out and turned to work ways far more
>>> efficient and practical if you zoom into details.
>>>
>>> Best,
>>> Robert.
>>>
>>>
>>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra 
>>> wrote:
>>>

 Dear BESS WG

 Is anyone aware of any IETF BGP development in the Data Center arena to
 extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
 part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf
 switches and eliminate L2 completely from the vxlan fabric thereby
 maximizing  stability.

 Kind regards,

 Gyan
 --

 Gyan  Mishra

 Network Engineering & Technology

 Verizon

 Silver Spring, MD 20904

 Phone: 301 502-1347

 Email: gyan.s.mis...@verizon.com



 ___
 BESS mailing list
 BESS@ietf.org
 https://www.ietf.org/mailman/listinfo/bess

>>> --
>>
>> Gyan  Mishra
>>
>> Network Engineering & Technology
>>
>> Verizon
>>
>> Silver Spring, MD 20904
>>
>> Phone: 301 502-1347
>>
>> Email: gyan.s.mis...@verizon.com
>>
>>
>>
>> --

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Jeff Tantsura
Gyan,

On the open source side of things, FRR supports EVPN on the host.
Any vendor virtualized NOS would provide you the same (at least Junos/cRPD or
XRv).
EVPN ESI load-sharing eliminates the need for MLAG (basic thought, the devil
is in the details :))
ECMP vs LAG load-balancing - the algorithms supported are quite similar, in
some code bases actually the same, so this statement is not really correct.

Would be glad to better understand your requirements and help you!

Regards,
Jeff

> On Mar 2, 2020, at 16:00, Gyan Mishra  wrote:
> 
> 
> 
> Thanks Robert for the quick response
> 
> Just thinking out loud -  I can see there maybe some advantages of 
> elimination of L2 to the host but the one major disadvantage is that BGP 
> multipath provides flow based uneven load balancing so not as desirable from 
> that standpoint compare to L3 MLAG bundle XOR Src/Dest/Port hash.
> 
> Other big down side is most enterprises have the hypervisor managed by server 
> admins but if you run BGP now that ends up shifting to network.  More 
> complicated.  
> 
> Kind regards
> 
> Gyan
> 
>> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  wrote:
>> Hi Gyan,
>> 
>> Similar architecture has been invented and shipped by Contrail team. Now 
>> that project after they got acquired by Juniper has been renamed to Tungsten 
>> Fabric https://tungsten.io/ while Juniper continued to keep the original 
>> project's name and commercial flavor of it. No guarantees of any product 
>> quality at this point. 
>> 
>> Btw ,,, no need for VXLAN nor BGP to the host. The proposed above 
>> alternative were well thought out and turned to work ways far more efficient 
>> and practical if you zoom into details. 
>> 
>> Best,
>> Robert.
>> 
>> 
>>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra  wrote:
>> 
>>> 
>>> Dear BESS WG
>>> 
>>> Is anyone aware of any IETF BGP development in the Data Center arena to 
>>> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor 
>>> part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf 
>>> switches and eliminate L2 completely from the vxlan fabric thereby 
>>> maximizing  stability.
>>> 
>>> Kind regards,
>>> 
>>> Gyan
>>> -- 
>>> Gyan  Mishra
>>> 
>>> Network Engineering & Technology 
>>> 
>>> Verizon 
>>> 
>>> Silver Spring, MD 20904
>>> 
>>> Phone: 301 502-1347
>>> 
>>> Email: gyan.s..mis...@verizon.com
>>> 
>>> 
>>> 
>>> 
>> 
>>> ___
>>> BESS mailing list
>>> BESS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/bess
> -- 
> Gyan  Mishra
> 
> Network Engineering & Technology 
> 
> Verizon 
> 
> Silver Spring, MD 20904
> 
> Phone: 301 502-1347
> 
> Email: gyan.s.mis...@verizon.com
> 
> 
> 
> 
> ___
> BESS mailing list
> BESS@ietf.org
> https://www.ietf.org/mailman/listinfo/bess
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Robert Raszuk
Hi Gyan,

You are touching on a subject close to me, so let me share my perspective on
your doubts below ;)

>  maybe some advantages of elimination of L2 to the host

Not some but huge !

>  BGP multipath provides flow based uneven load balancing

First Contrail/Tungsten does not use BGP to the hypervisor but XMPP. But
this is opaque to your concern.

Load balancing and hash construction are your choice; BGP or XMPP only
deliver you the next hops. How you spread traffic across them is 100% up to
your choice. That is the same on a hypervisor or on any decent router. LAGs
also build the hash in the way you configure them to do so.

>  hypervisor managed by server admins

In any decent network or for that matter even in my lab this is all 100%
automated. You run one template and execute it. Ansible works pretty well,
but there are other choices too.

Many thx,
R.


On Tue, Mar 3, 2020 at 1:00 AM Gyan Mishra  wrote:

>
> Thanks Robert for the quick response
>
> Just thinking out loud -  I can see there maybe some advantages of
> elimination of L2 to the host but the one major disadvantage is that BGP
> multipath provides flow based uneven load balancing so not as desirable
> from that standpoint compare to L3 MLAG bundle XOR Src/Dest/Port hash.
>
> Other big down side is most enterprises have the hypervisor managed by
> server admins but if you run BGP now that ends up shifting to network.
> More complicated.
>
> Kind regards
>
> Gyan
>
> On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  wrote:
>
>> Hi Gyan,
>>
>> Similar architecture has been invented and shipped by Contrail team. Now
>> that project after they got acquired by Juniper has been renamed to
>> Tungsten Fabric https://tungsten.io/ while Juniper continued to keep the
>> original project's name and commercial flavor of it. No guarantees of any
>> product quality at this point.
>>
>> Btw ,,, no need for VXLAN nor BGP to the host. The proposed above
>> alternative were well thought out and turned to work ways far more
>> efficient and practical if you zoom into details.
>>
>> Best,
>> Robert.
>>
>>
>> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra 
>> wrote:
>>
>>>
>>> Dear BESS WG
>>>
>>> Is anyone aware of any IETF BGP development in the Data Center arena to
>>> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
>>> part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf
>>> switches and eliminate L2 completely from the vxlan fabric thereby
>>> maximizing  stability.
>>>
>>> Kind regards,
>>>
>>> Gyan
>>> --
>>>
>>> Gyan  Mishra
>>>
>>> Network Engineering & Technology
>>>
>>> Verizon
>>>
>>> Silver Spring, MD 20904
>>>
>>> Phone: 301 502-1347
>>>
>>> Email: gyan.s.mis...@verizon.com
>>>
>>>
>>>
>>> ___
>>> BESS mailing list
>>> BESS@ietf.org
>>> https://www.ietf.org/mailman/listinfo/bess
>>>
>> --
>
> Gyan  Mishra
>
> Network Engineering & Technology
>
> Verizon
>
> Silver Spring, MD 20904
>
> Phone: 301 502-1347
>
> Email: gyan.s.mis...@verizon.com
>
>
>
>
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Gyan Mishra
Thanks Robert for the quick response

Just thinking out loud - I can see there may be some advantages to
eliminating L2 to the host, but the one major disadvantage is that BGP
multipath provides flow-based, uneven load balancing, so it is not as
desirable from that standpoint compared to an L3 MLAG bundle XOR
Src/Dest/Port hash.

The other big downside is that most enterprises have the hypervisor managed
by server admins, but if you run BGP there, that now ends up shifting to the
network team. More complicated.

Kind regards

Gyan

On Mon, Mar 2, 2020 at 6:39 PM Robert Raszuk  wrote:

> Hi Gyan,
>
> Similar architecture has been invented and shipped by Contrail team. Now
> that project after they got acquired by Juniper has been renamed to
> Tungsten Fabric https://tungsten.io/ while Juniper continued to keep the
> original project's name and commercial flavor of it. No guarantees of any
> product quality at this point.
>
> Btw ,,, no need for VXLAN nor BGP to the host. The proposed above
> alternative were well thought out and turned to work ways far more
> efficient and practical if you zoom into details.
>
> Best,
> Robert.
>
>
> On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra  wrote:
>
>>
>> Dear BESS WG
>>
>> Is anyone aware of any IETF BGP development in the Data Center arena to
>> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
>> part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf
>> switches and eliminate L2 completely from the vxlan fabric thereby
>> maximizing  stability.
>>
>> Kind regards,
>>
>> Gyan
>> --
>>
>> Gyan  Mishra
>>
>> Network Engineering & Technology
>>
>> Verizon
>>
>> Silver Spring, MD 20904
>>
>> Phone: 301 502-1347
>>
>> Email: gyan.s.mis...@verizon.com
>>
>>
>>
>> ___
>> BESS mailing list
>> BESS@ietf.org
>> https://www.ietf.org/mailman/listinfo/bess
>>
> --

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


Re: [bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Robert Raszuk
Hi Gyan,

A similar architecture has been invented and shipped by the Contrail team.
After the project was acquired by Juniper it was renamed to Tungsten Fabric
(https://tungsten.io/), while Juniper kept the original project's name for
its commercial flavor. No guarantees of any product quality at this point.

Btw, there is no need for VXLAN nor BGP to the host. The alternatives
proposed above were well thought out and turned out to work in ways far more
efficient and practical if you zoom into the details.

Best,
Robert.


On Tue, Mar 3, 2020 at 12:26 AM Gyan Mishra  wrote:

>
> Dear BESS WG
>
> Is anyone aware of any IETF BGP development in the Data Center arena to
> extend BGP VXLAN EVPN to a blade server Hypervisor making the Hypervisor
> part of the  vxlan fabric.  This could eliminate use of MLAG on the leaf
> switches and eliminate L2 completely from the vxlan fabric thereby
> maximizing  stability.
>
> Kind regards,
>
> Gyan
> --
>
> Gyan  Mishra
>
> Network Engineering & Technology
>
> Verizon
>
> Silver Spring, MD 20904
>
> Phone: 301 502-1347
>
> Email: gyan.s.mis...@verizon.com
>
>
>
> ___
> BESS mailing list
> BESS@ietf.org
> https://www.ietf.org/mailman/listinfo/bess
>
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess


[bess] VXLAN EVPN fabric extension to Hypervisor VM

2020-03-02 Thread Gyan Mishra
Dear BESS WG

Is anyone aware of any IETF BGP development in the data center arena to
extend BGP VXLAN EVPN to a blade server hypervisor, making the hypervisor
part of the VXLAN fabric? This could eliminate the use of MLAG on the leaf
switches and eliminate L2 completely from the VXLAN fabric, thereby
maximizing stability.

Kind regards,

Gyan
-- 

Gyan  Mishra

Network Engineering & Technology

Verizon

Silver Spring, MD 20904

Phone: 301 502-1347

Email: gyan.s.mis...@verizon.com
___
BESS mailing list
BESS@ietf.org
https://www.ietf.org/mailman/listinfo/bess