Hi,

I'm hoping that someone can offer some suggestions for getting an unusual configuration 
of CloudStack working.  We're using CloudStack to manage VMs that are used for 
training purposes.  We've developed code in MOODLE that controls VM states and snapshots 
and maps VMs to students.  When a student enters a particular course, MOODLE allocates an 
available VM, restores it to the appropriate snapshot, spins up the VM and then provides 
the student with an RDP file to access it.  It also monitors for idle VMs, spins 
them down and returns them to the "training pool".
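
For context, the MOODLE side just drives the ordinary CloudStack API.  Per student 
the flow is roughly the following (a sketch only -- the command names are from the 
CloudStack API, but the host name and UUIDs are made up and the request signing is 
omitted):

    # Restore the VM to its baseline snapshot and start it for the student;
    # an idle VM is later stopped and returned to the pool.
    curl "http://mgmt-server:8080/client/api?command=revertToVMSnapshot&vmsnapshotid=<snapshot-uuid>"
    curl "http://mgmt-server:8080/client/api?command=startVirtualMachine&id=<vm-uuid>"
    curl "http://mgmt-server:8080/client/api?command=stopVirtualMachine&id=<vm-uuid>"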

It's been working well for years and we haven't updated anything in that time.  
We recently decided to upgrade all of the systems to supported versions and 
have run into some trouble with CloudStack networking.

In the original system the networking was configured so that CloudStack was 
hidden behind cloudbr0, effectively using the cloudbr0 host as a router as well 
as the cluster management server, with iptables rules to forward external RDP 
connections to CloudStack VMs.  Since there is no need for this system to be 
on an actual cluster, just using cloudbr0 worked well.  Also, since this system 
is portable the NIC uses DHCP, which means the NIC's IP address can't be used 
as the cluster manager IP address.
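
To give a concrete picture, the forwarding rules looked something like this 
(the interface name, external port and guest address are illustrative -- the 
real rules were generated per VM):

    # Forward an external RDP port on the host NIC to a guest behind cloudbr0
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 33890 \
             -j DNAT --to-destination 192.168.100.50:3389
    iptables -A FORWARD -i eth0 -o cloudbr0 -p tcp -d 192.168.100.50 \
             --dport 3389 -j ACCEPT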

I'm trying to upgrade to Rocky Linux 8 and CloudStack 4.20 and discovered that 
cloudbr0 can't be used for the management server.  I went into the source code 
and found that there are checks to ensure that the management interface is a 
physical device and that its link state is up.  So I tried a few things:

- Configured the active NIC to use DHCP and gave it a secondary static IP 
address in the same subnet as cloudbr0, then used that address as the 
management server IP (the first attempt sketched after this list).  That 
almost worked: CloudStack started up, the CloudStack system VMs came online 
and were handed out appropriate IP addresses, and I could spin up client VMs.  
I could also see that the VM vnet devices were bridged to cloudbr0.  But none 
of the VMs had any network connectivity.  DHCP didn't work, and when I tried 
to set static IP addresses the links in the VMs didn't appear to come up and 
I couldn't ping anything (not cloudbr0 or any other VM).  I adjusted the 
ingress and egress rules to allow all traffic.

- Bridged the active NIC to cloudbr0 (the second attempt sketched below), but 
this appears to have caused it to lose its DHCP address.

- Set up the unused NIC with a static IP address, bridged it to cloudbr0, and 
specified its IP address as the cluster manager address.  But because its link 
is not in an up state, CloudStack won't use it.
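
In case it helps, the first two attempts above were configured roughly like 
this (NetworkManager/nmcli on Rocky 8; the device/connection names and the 
subnet here are illustrative, not the exact values we used):

    # Attempt 1: keep DHCP on the NIC and add a secondary static address in
    # cloudbr0's subnet, then point the management server IP at it
    nmcli con mod eno1 +ipv4.addresses 192.168.100.2/24
    nmcli con up eno1

    # Attempt 2: enslave the NIC to cloudbr0 -- after this the host no longer
    # picked up a DHCP address on that NIC
    nmcli con add type bridge-slave ifname eno1 master cloudbr0
    nmcli con up bridge-slave-eno1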

So I'm kind of stuck now and can't think of how to configure the networking.  
Any suggestions?

Thanks,
Dave
