On 27/01/2025 at 03:42, Chi vediamo wrote:
I need some additional guidance.

I was following, I think, all the emails I found:


https://www.mail-archive.com/users@cloudstack.apache.org/msg38549.html

https://www.mail-archive.com/users@cloudstack.apache.org/msg36477.html

https://www.mail-archive.com/users@cloudstack.apache.org/msg38144.html

Based on those previous e-mails, I get outputs very similar to what Wido shows there; only the VNI and IP addressing differ.

Does anybody have a tip? I am using Ubuntu and netplan with networkd.



I found that using systemd-networkd directly was the easiest.

root@hv:/etc/systemd/network# ls -al
total 28
drwxr-xr-x 2 root root 4096 Aug  9  2023 .
drwxr-xr-x 6 root root 4096 Jul 25  2023 ..
-rw-r--r-- 1 root root  201 Jul 28  2023 00-uplinks.network
-rw-r--r-- 1 root root   54 Jul 28  2023 cloudbr1.netdev
-rw-r--r-- 1 root root  161 Jul 28  2023 cloudbr1.network
-rw-r--r-- 1 root root  125 Jul 28  2023 vxlan300.netdev
-rw-r--r-- 1 root root   87 Jul 28  2023 vxlan300.network
root@hv:/etc/systemd/network#

root@hv:/etc/systemd/network# cat cloudbr1.netdev
[NetDev]
Name=cloudbr1
Kind=bridge
root@hv:/etc/systemd/network# cat cloudbr1.network
[Match]
Name=cloudbr1

[Network]
LinkLocalAddressing=no

[Address]
Address=10.100.64.33/20

[Route]
Gateway=10.100.64.1

[Link]
MTUBytes=1500
root@hv:/etc/systemd/network#

root@hv:/etc/systemd/network# cat vxlan300.net*

[NetDev]
Name=vxlan300
Kind=vxlan

[VXLAN]
Id=300
Local=10.255.192.37
MacLearning=false
DestinationPort=4789


[Match]
Name=vxlan300

[Network]
Bridge=cloudbr1

[Link]
MTUBytes=1500
root@hv:/etc/systemd/network#

root@hv:/etc/systemd/network# cat 00-uplinks.network
[Match]
Name=ens3f0np0
Name=ens3f1np1

[Network]
DHCP=no
IPv6AcceptRA=yes
LLDP=yes
EmitLLDP=yes
VXLAN=vxlan300
DNS=2a00:f10:ff04:153::53
DNS=2a00:f10:ff04:253::53

[Link]
MTUBytes=9216
root@hv:/etc/systemd/network#
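
If you want to stay on netplan rather than writing networkd units by hand, a roughly equivalent sketch (untested; assumes netplan >= 0.105 with the networkd renderer, and reuses the names and addresses from the files above) would be:

network:
  version: 2
  renderer: networkd
  ethernets:
    ens3f0np0:
      mtu: 9216
    ens3f1np1:
      mtu: 9216
  tunnels:
    vxlan300:
      mode: vxlan        # requires netplan with VXLAN support (>= 0.105)
      id: 300
      local: 10.255.192.37
      port: 4789
      mac-learning: false
      mtu: 1500
  bridges:
    cloudbr1:
      interfaces: [vxlan300]
      addresses: [10.100.64.33/20]
      routes:
        - to: default
          via: 10.100.64.1
      mtu: 1500

Note that netplan generates its own networkd units, so mixing this with hand-written files in /etc/systemd/network is best avoided.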

On Jan 17, 2025, at 6:06 AM, Wido den Hollander <w...@widodh.nl> wrote:



On 16/01/2025 at 13:04, Chi vediamo wrote:
Hello Wido and community,
Just to have the basics in simple terms:
*To use VXLAN in CloudStack -*
We need Management and Storage reachable by plain L3, correct?

Correct. Although you can also use a VNI for reaching management. You need at least one VNI for cloudbr1 where all the hypervisors can reach each other for migration traffic and such.

What is the VNI on cloudbr1 used for? Guest traffic?

No, traffic between hypervisors (migrations), secondary storage (SS), and the connection with the management server.



*For the KVM host initially:*
*Option 1)* At the KVM host, do I need plain L3 only initially, with *no initial cloudbr1/VXLAN*? Yes/no?
*Option 2)* At the KVM host, do I need cloudbr1 with the initial VXLAN reachable somehow by the management server?

Yes, you need that. As said above, you need cloudbr1 for traffic within the cluster. We use a dedicated VNI which we configure in systemd-networkd on our machines.

Do I need VRFs?

Not needed in our case.


For test purposes I created additional VXLANs using the script. A VPC on VXLAN 200 can reach a VPC on VXLAN 250, i.e. two different customers; those VPCs cannot have the same private IP addressing.

In that case I will need to create a bunch of VRFs on the switches and modify the script from https://gist.github.com/wido/51cb9880d86f08f73766634d7f6df3f4, unless there is something new.
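
For illustration only, the usual symmetric-routing pattern on an FRR-based leaf pairs each tenant VRF with an L3VNI; all names, ASNs, and VNI numbers below are invented, not taken from this thread:

vrf TENANT-A
 vni 4001
exit-vrf
!
router bgp 65010
 address-family l2vpn evpn
  advertise-all-vni
 exit-address-family
!
! per-tenant instance: export the VRF's routes into EVPN (type-5)
router bgp 65010 vrf TENANT-A
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
 address-family l2vpn evpn
  advertise ipv4 unicast
 exit-address-family

On a Linux-based leaf the VRF device itself would still have to exist (e.g. ip link add TENANT-A type vrf table 1001).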


Tata Y.



Wido

Thank you very much.
Tata Y.
On Jan 14, 2025, at 3:30 AM, Wido den Hollander <w...@widodh.nl> wrote:



On 13/01/2025 at 17:06, Chi vediamo wrote:
Hello Wido, thank you so much for taking the time to respond.
I bonded the interfaces for capacity.

I would just create additional BGP sessions and use ECMP balancing for more capacity.
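
As an illustration only (the interface names and ASN here are invented), two unnumbered uplink sessions with ECMP in FRR on the hypervisor could look like:

router bgp 65001
 ! one BGP unnumbered session per uplink
 neighbor ens3f0np0 interface remote-as external
 neighbor ens3f1np1 interface remote-as external
 ! needed for ECMP if the two ToRs use different ASNs
 bgp bestpath as-path multipath-relax
 address-family ipv4 unicast
  maximum-paths 2
 exit-address-family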

I was following your YouTube videos and the users-list mails.
Yes, VXLAN seems to be working now. The most difficult/radical part was the tuning of the parameters on Ubuntu 22. Or should I use a different Linux system?
Let me lay out the logic so it becomes clearer; then, with VXLAN:
Wido: regarding your YouTube video, to me it seems there is a VLAN on the ToR, but that makes no sense based on what you mention here. What is VLAN 624 being used for in your YouTube video at 24:10? https://www.youtube.com/watch?v=X02bxtIC0u4

VLAN is Cumulus naming; these are all VXLAN VNIs. In this case I showed an example of how Cumulus would be the gateway for VNI 624.

Then the management servers will be on the same VXLAN as the KVM cloudbr1, so this should be the management VXLAN. Correct? For storage, I put Ceph on vxlan2 with BGP; should I remove it from the VXLAN or keep it on the management VXLAN? Which one is best practice?

Storage doesn't need to be in a VNI/VXLAN. It just needs to be reachable by the hypervisor. Plain L3 routing, nothing special. Just make sure both can ping each other.

This is really just basic routing.
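
For example, a quick sanity check from the hypervisor (the storage address below is a placeholder):

# which route/interface would be used to reach the storage?
ip route get 192.0.2.50
# and is it actually reachable?
ping -c 3 192.0.2.50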

What about network planning with CloudStack for NFVs and low latency?
Final question: does CloudStack support SR-IOV? Is there any documentation besides an old document that indicates we should list the PCI interfaces?


No idea :-)

I was thinking SR-IOV would align better with VXLAN/EVPN.
Or do you recommend using DPDK?
If DPDK, which one would you recommend?
 - Open vSwitch: if this is selected then I will have to create the bridges with Open vSwitch, and it should work with VXLAN/EVPN, right? (see the sketch below)
 - Or Tungsten (OpenSDN): I know Contrail pretty well, but I am not sure there is any documentation about OpenSDN (formerly Tungsten) with CloudStack and VXLAN/EVPN. It seems VXLAN/EVPN and Tungsten are mutually exclusive, plus not supported on Ubuntu 22 as of yet; are both of these statements correct?
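
For reference, creating an OVS bridge with a VXLAN port is just two commands (the remote VTEP address and VNI here are invented):

# create the bridge with Open vSwitch instead of the Linux bridge
ovs-vsctl add-br cloudbr1
# add a VXLAN tunnel port; key= is the VNI
ovs-vsctl add-port cloudbr1 vxlan300 -- \
  set interface vxlan300 type=vxlan options:remote_ip=192.0.2.2 options:key=300

Note this is a static point-to-point tunnel; whether it fits an EVPN-learned flood list is a separate question.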
Thank you so much for taking the time to respond.
Tata Y.
On Jan 13, 2025, at 8:45 AM, Wido den Hollander <w...@widodh.nl> wrote:



On 11/01/2025 at 00:36, Chi vediamo wrote:
Forgot to add, Wido:
The isolation should start, and should happen, at the host running KVM; no VLANs needed at the host, nor at the leafs. But it seems isolation is not happening! I am using CloudStack 4.20, so maybe a detail is missing on my side. What does the CloudStack documentation mean with "On the Spine router(s) the VNIs will terminate and they will act as IPv4/IPv6 gateways"? Do VLANs "need" to be configured on the leafs? I would understand needing VLANs if the VM isolation were based on VLANs, but if the VM isolation is going to be VXLAN,
what would I need the VLANs at the leafs for?

No VLANs needed. Forget Guest and Storage traffic. The hypervisor will do everything routed! It will talk BGP with the Top-of-Rack.

You will reach your storage via routing without the need for VXLAN; have it route through your network.

No need for bonding on your hypervisor either; it just has two BGP uplink sessions towards the ToR.

Just connect your storage to your network and make it reachable via L3, same goes for the management server.

But before starting to use CloudStack with VXLAN, make sure you get VXLAN working without CloudStack so you really understand how VXLAN works.
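
For example, a minimal two-host test with plain iproute2 and no CloudStack involved (the VTEP addresses here are placeholders):

# Host A (VTEP 192.0.2.1); flood unknown traffic to host B's VTEP:
ip link add vxlan100 type vxlan id 100 local 192.0.2.1 dstport 4789 nolearning
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.0.2.2
ip link add testbr0 type bridge
ip link set vxlan100 master testbr0
ip addr add 192.168.100.1/24 dev testbr0
ip link set vxlan100 up
ip link set testbr0 up

# Host B: same commands with local/dst swapped and 192.168.100.2/24,
# then: ping 192.168.100.1

If that ping works, the underlay routing and MTU are fine and you can move on to EVPN-learned flood lists.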

Wido

Based on this scenario from Wido's presentation:
          [spine1]            [spine2]    (no iBGP between spines)
             |  \              /  |
             |   \            /   |       EVPN / BGP unnumbered
             |    \          /    |
          [leaf1]-----iBGP-----[leaf2]
               \                /
          +-----------------------+
          |   bond1       bond2   |
          |       [cloudbr1]      |
          |         vxlan1        |
          |         vxlan2        |
          |      Host 1 - KVM     |
          |         SR-IOV        |
          +-----------------------+
vxlan1 will handle the guest traffic,
vxlan2 will handle main storage.
I have management over a regular VLAN for now but will create a vxlan3 for it,
and vxlan2000 for public traffic.
Are vxlan1, vxlan2, vxlan3, and vxlan2000 going to be created manually, or should/will CloudStack create them when I choose VXLAN for each of the traffic types? Or should I use VXLAN for the guest traffic only and use regular VLAN isolation for the remaining management, storage, and public traffic? I have 4 physical hosts; I am using the same IP addressing on vxlan1 and vxlan2, connected to different leafs, and I am still able to ping between them.
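
For reference, a few read-only checks that can show where traffic between two VNIs actually crosses (device names as in the diagram above; output will differ per setup):

# which MACs does each VNI know, and from which remote VTEP were they learned?
bridge fdb show dev vxlan1
bridge fdb show dev vxlan2
# what does FRR think it has learned via EVPN?
vtysh -c 'show evpn vni detail'
vtysh -c 'show bgp l2vpn evpn route'

If the same IP prefix is reachable in both VNIs, a router (an SVI or a spine gateway) is almost certainly routing between them.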
Thank you
Tata Y.
On Jan 6, 2025, at 9:08 PM, Chi vediamo <tatay...@gmail.com> wrote:


Hello Community,
First: thank you, Wido; your answers in previous emails to the community help a lot. I read the vincent.bernat document, but his example uses VLAN mapping at the switch level.

I was thinking of using the leaf-spine as transport only, with the settings on the host taking care of the isolation.

But it is not working that way. Should I create the traditional VXLAN setup, with VLAN/VNI/VRF mapping on the leaf switches, to properly isolate it?
We are using the SONiC NOS community version, nothing fancy.

The BGP unnumbered EVPN etc. works fine.

The output:

vtysh -c 'show interface vxlan2'
VNI: 2
 Type: L2
 Tenant VRF: default
 VxLAN interface: vxlan2
 VxLAN ifIndex: 14
 SVI interface: storage0
 SVI ifIndex: 13
 Local VTEP IP: 172.2.0.60
 Mcast group: 0.0.0.0
 Remote VTEPs for this VNI:
  172.2.0.59 flood: HER
  172.2.0.32 flood: HER
  172.2.0.30 flood: HER
  172.2.0.28 flood: HER
  172.2.0.26 flood: HER

bridge fdb show dev vxlan2
8a:be:71:4c:e0:20 vlan 1 extern_learn master storage0
8a:be:71:4c:e0:20 extern_learn master storage0
b2:33:bb:84:cc:38 vlan 1 extern_learn master storage0
b2:33:bb:84:cc:38 extern_learn master storage0
86:07:90:2b:db:db vlan 1 extern_learn master storage0
86:07:90:2b:db:db extern_learn master storage0
4a:28:60:90:76:42 vlan 1 extern_learn master storage0
4a:28:60:90:76:42 extern_learn master storage0
22:d6:49:9f:08:07 vlan 1 extern_learn master storage0
22:d6:49:9f:08:07 extern_learn master storage0
fe:4a:fb:63:9d:3a vlan 1 extern_learn master storage0
fe:4a:fb:63:9d:3a extern_learn master storage0
ee:78:b4:d8:3f:a0 vlan 1 master storage0 permanent
ee:78:b4:d8:3f:a0 master storage0 permanent
00:00:00:00:00:00 dst 172.2.0.24 self permanent
00:00:00:00:00:00 dst 172.2.0.26 self permanent
00:00:00:00:00:00 dst 172.2.0.28 self permanent
00:00:00:00:00:00 dst 172.2.0.30 self permanent
00:00:00:00:00:00 dst 172.2.0.32 self permanent
00:00:00:00:00:00 dst 172.2.0.59 self permanent

fe:4a:fb:63:9d:3a dst 172.2.0.24 self extern_learn


vtysh -c 'show interface vxlan1'

Interface vxlan1 is up, line protocol is up
  Link ups:       1    last: 2025/01/06 23:53:01.17
  Link downs:     1    last: 2025/01/06 23:53:01.17
  vrf: default
  index 14 metric 0 mtu 9050 speed 4294967295
  flags: <UP,BROADCAST,RUNNING,MULTICAST>
  Type: Ethernet
  HWaddr: ea:d3:68:02:7d:f7
  inet6 fe80::e8d3:68ff:fe02:7df7/64
  Interface Type Vxlan
  Interface Slave Type None
  VxLAN Id 100 VTEP IP: 10.23.13.14 Access VLAN Id 1

  protodown: off

vtysh -c 'show evpn vni 1'
VNI: 1
 Type: L2
 Tenant VRF: default
 VxLAN interface: vxlan1
 VxLAN ifIndex: 14
 SVI interface: cloudbr1
 SVI ifIndex: 12
 Local VTEP IP: 10.23.13.14
 Mcast group: 0.0.0.0
 No remote VTEPs known for this VNI
 Number of MACs (local and remote) known for this VNI: 0
 Number of ARPs (IPv4 and IPv6, local and remote) known for this VNI: 0
 Advertise-gw-macip: No
 Advertise-svi-macip: No

And I can ping the IPv6 address that is routed using FRR, from :60 (which is in VXLAN 2) to :14 (which is in VXLAN 1):

ping -I 20XX:5XX:56XX:fff0::2:60 20XX:5XX:56XX:fff0:0:2:13:14
PING 20XX:5XX:56XX:fff0:0:2:13:14(20XX:5XX:56XX:fff0:0:2:13:14) from 20XX:5XX:56XX:fff0::2:60 : 56 data bytes
64 bytes from 20XX:5XX:56XX:fff0:0:2:13:14: icmp_seq=1 ttl=61 time=0.293 ms
64 bytes from 20XX:5XX:56XX:fff0:0:2:13:14: icmp_seq=2 ttl=61 time=0.222 ms


Then, my questions are:
Are you using, at the leaf switches/routers, a regular VLAN-to-VNI VXLAN mapping with VRFs? If not, can you share an FRR config of your switches? (a generic sketch follows below)
Or should I use an enterprise SONiC switch build?
What other possibilities are there with the modifyvxlan.sh script that Wido mentions in some user mails?
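
For context, a minimal FRR EVPN leaf sketch of the kind being asked about; this is purely illustrative (ASN, interface names, and peer-group name are invented), not anyone's production config:

router bgp 65020
 neighbor SPINES peer-group
 neighbor SPINES remote-as external
 ! BGP unnumbered towards both spines
 neighbor swp1 interface peer-group SPINES
 neighbor swp2 interface peer-group SPINES
 !
 address-family l2vpn evpn
  neighbor SPINES activate
  advertise-all-vni
 exit-address-family

The VLAN-to-VNI mapping itself lives outside FRR: on a Linux-based NOS it is the bridge/VXLAN netdev layout (as in the vxlan2/storage0 output above); FRR only advertises whatever VNIs it finds there.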


Thank you

Tata Y.






