On 16/01/2024 at 09:46, Hunter Yap wrote:
Hey Wido and everyone in the Community,

Hope you're all doing great. We're setting up CloudStack and stumbled upon your cool videos about VXLAN, BGP, and IPv6 from 2019 and 2021. Watched them all, including the links you shared. Noticed that similar concepts pop up in OpenStack and OpenNebula too, with a few twists.


Great! :-)

So, we've got a few things on our mind about the setup, especially how it fits (or doesn't) with what we're doing. You mentioned in your video to reach out with questions, so here we are, looping you in.

Our setup:
# CloudStack v4.18.1 + LINSTOR (for SDS)
# VXLAN enabled for guest networks (using multicast)
# 1 pair of redundant core switches (with the option to scale out)
# 1 pair of redundant leaf switches
# Our datacenter can handle up to 1,000 hypervisors, which is more than enough for us.

Our setup isn't huge: just right for supporting a max of 1,000 hypervisors per datacenter. And we're not planning to link multiple datacenters into one big availability zone. That said, we're scratching our heads over whether BGP+EVPN is the way to go for us.


1k hypervisors in a datacenter doesn't sound small to me. Or did you mean 1k instances (VMs) per DC?

Here's what we're thinking:

# BGP+EVPN doesn't add any benefit to small/medium-sized clouds (in our case, we only need a max of 1,000 hypervisors per datacenter). Implementing BGP+EVPN only increases the complexity of the setup.


BGP+EVPN is about the de facto industry standard for using VXLAN. Multicast VXLAN has its drawbacks: you either need to make everything one large L2 domain, which comes with its own problems, or you need to perform multicast routing between different L2 domains.
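To make the difference concrete, this is roughly how the VTEP device is created on a hypervisor in both models (interface names, VNI and addresses are made up):

  # Multicast VXLAN: BUM traffic is flooded to a multicast group, so the
  # underlay has to deliver multicast everywhere:
  ip link add vxlan1001 type vxlan id 1001 group 239.0.1.1 dev eth0 dstport 4789

  # EVPN VXLAN: no multicast group; remote VTEPs and MACs are learned via
  # BGP, so the underlay only needs unicast L3 reachability between loopbacks:
  ip link add vxlan1001 type vxlan id 1001 local 10.255.0.11 dstport 4789 nolearning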

The EVPN approach also allows you to use all the filtering and policies BGP has to offer. In addition, it's much easier to debug, as you can see the EVPN database inside the BGP routers.

Multicast seems 'fire and forget', but it's difficult to debug.
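For example, with FRR you can inspect the EVPN control plane directly (a few illustrative commands):

  vtysh -c 'show bgp l2vpn evpn'     # EVPN routes (type-2 MAC/IP, type-3 VTEPs)
  vtysh -c 'show evpn vni'           # VNIs known on this host
  vtysh -c 'show evpn mac vni all'   # MAC addresses learned per VNI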

# We are using IPv4, but it seems that for BGP+EVPN to be beneficial we need to use IPv6. If we stick with IPv4 (with BGP+EVPN), there is no benefit over the default VXLAN+multicast.


For the underlay it's just fine to use IPv4 with RFC 1918 addresses and still use BGP+EVPN. I would, however, recommend using those addresses only on the loopbacks, and then using BGP unnumbered (IPv4 via IPv6 link-local with the extended next-hop capability) for all interconnections. That saves you from setting up all those interconnect subnets between the hosts and routers.
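On a hypervisor that could look something like this in frr.conf (the ASN, router-id and interface names are just examples):

  interface lo
   ip address 10.255.0.11/32
  !
  router bgp 65011
   bgp router-id 10.255.0.11
   ! BGP unnumbered: sessions over IPv6 link-local, carrying IPv4 routes
   ! via the extended next-hop capability (RFC 5549)
   neighbor uplinks peer-group
   neighbor uplinks remote-as external
   neighbor uplinks capability extended-nexthop
   neighbor eth0 interface peer-group uplinks
   neighbor eth1 interface peer-group uplinks
   !
   address-family ipv4 unicast
    network 10.255.0.11/32
   exit-address-family
   !
   address-family l2vpn evpn
    neighbor uplinks activate
    advertise-all-vni
   exit-address-family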

# And about redundancy: it doesn't look like VXLAN+BGP+EVPN offers anything more than what VXLAN+multicast already does.


Well, I disagree. With multicast you need to handle redundancy at the L2 level. With BGP you can do so at the L3 level, and that provides much better redundancy and failover possibilities than L2.

Making L2 very redundant at large(r) scale is a challenge, imho.

Our assumptions to enable BGP+EVPN:

1. We need to install FRRouting on each hypervisor and form a BGP neighborship with the leaf switch (so this is hypervisor <> leaf switch and NOT hypervisor <> hypervisor).


Correct. Each hypervisor will have two BGP sessions: one per uplink, one to each ToR.
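Once FRR is running you can verify that both sessions are established with, e.g.:

  vtysh -c 'show bgp summary'
  vtysh -c 'show bgp l2vpn evpn summary'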

2. After forming neighbourships with the leaf switches, we need to enable L2VPN on all leaf switches.


Correct. You need to exchange L2VPN EVPN routes over those BGP sessions.
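On an FRR-based leaf that boils down to activating the l2vpn evpn address-family for the hypervisor-facing peers (vendor CLIs differ; this is just the FRR flavour with made-up names):

  router bgp 65001
   neighbor downlinks peer-group
   neighbor downlinks remote-as external
   address-family l2vpn evpn
    neighbor downlinks activate
   exit-address-family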

3. When the first 2 steps above are done, each hypervisor will effectively sit in its own network segment. (Usually all hypervisors would be in 1 single network segment.)


Correct. Underneath your network there are no stretched L2 VLANs; everything is L3 routing.

4. We take the modifyvxlan.sh provided on GitHub and use it to replace the default one from CloudStack.


Correct. Modify this script to your needs where necessary.
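The essential change in such a script is that the VXLAN device is created against the loopback instead of a multicast group, roughly like this (variable and bridge names are illustrative, not the exact script):

  # Create the VNI device bound to the loopback; learning is disabled
  # because EVPN populates the FDB:
  ip link add "vxlan${VNI}" type vxlan id "${VNI}" local "${LOOPBACK_IP}" dstport 4789 nolearning
  # Attach it to the per-VNI guest bridge (brvx-<VNI> assumed here) and bring it up:
  ip link set "vxlan${VNI}" master "brvx-${VNI}"
  ip link set "vxlan${VNI}" up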

5. That's it: basically run CloudStack and test to see if it works. (Those are all the steps we're aware of.)

Note: our assumption is also that this BGP setup will only affect internal guest network (VXLAN) communications. Networks to the public internet (VLAN) will not be using this BGP.

Would love to get your thoughts on this. Are we on the right track or missing something? Any advice or heads-up would be great.

Go full EVPN with VXLAN! No VLANs towards your hypervisors. Guest networks in VXLAN, agent<>mgmt server traffic: all in VXLAN in different VNIs.

Wido


Regards,
Hunter Yap
