GitHub user bradh352 closed a discussion: How to join an LXC container host?

I've just spun up a CloudStack 4.20.1 eight-node environment. All hosts are
configured identically (regardless of whether they are intended for KVM or LXC).

I joined the first 5 hosts as KVM hosts and that works fine.

Then I created a new cluster **under the same zone** via the UI for my LXC
hosts, with the intention of joining the remaining 3 nodes as LXC hosts, but
adding them fails (after a long wait).

Looking at the agent logs, all I see is:
```
DEBUG:root:execute:uname -r
DEBUG:root:execute:uname -m
DEBUG:root:execute:hostname -f
DEBUG:root:execute:kvm-ok
DEBUG:root:execute:awk '/MemTotal/ { printf "%.3f \n", $2/1024 }' /proc/meminfo
DEBUG:root:execute:ip a | grep "^\w" | grep -iv "^lo" | wc -l
DEBUG:root:execute:service apparmor status
DEBUG:root:execute:apparmor_status |grep libvirt
DEBUG:root:Failed to execute:
DEBUG:root:cloudbr0 is not a network device, is it down?
DEBUG:root:execute:sudo /usr/sbin/service network-manager status
DEBUG:root:Failed to execute:Unit network-manager.service could not be found.
DEBUG:root:execute:ip route show default | awk '{print $3,$5}'
DEBUG:root:execute:ifconfig public
DEBUG:root:Failed to execute:/bin/sh: 1: ifconfig: not found
DEBUG:root:Failed to get address from ifconfig
DEBUG:root:execute:sudo update-rc.d -f apparmor remove
DEBUG:root:execute:sudo update-rc.d -f apparmor defaults
DEBUG:root:execute:sudo /usr/sbin/service apparmor status
DEBUG:root:execute:sudo /usr/sbin/service apparmor start
DEBUG:root:execute:sudo /usr/sbin/service apparmor status
DEBUG:root:execute:sudo /usr/sbin/service apparmor start
DEBUG:root:execute:sudo update-rc.d -f network-manager remove
DEBUG:root:execute:sudo update-rc.d -f network-manager defaults
DEBUG:root:Failed to execute:update-rc.d: error: unable to read 
/etc/init.d/network-manager
DEBUG:root:execute:sudo /usr/sbin/service network-manager status
DEBUG:root:Failed to execute:Unit network-manager.service could not be found.
DEBUG:root:execute:sudo /usr/sbin/service network-manager start
DEBUG:root:Failed to execute:Failed to start network-manager.service: Unit 
network-manager.service not found.
DEBUG:root:execute:sudo /usr/sbin/service network-manager status
DEBUG:root:Failed to execute:Unit network-manager.service could not be found.
DEBUG:root:execute:sudo /usr/sbin/service network-manager start
DEBUG:root:Failed to execute:Failed to start network-manager.service: Unit 
network-manager.service not found.
DEBUG:root:execute:/etc/init.d/networking stop
DEBUG:root:Failed to execute:/bin/sh: 1: /etc/init.d/networking: not found
DEBUG:root:execute:/etc/init.d/networking start
DEBUG:root:Failed to execute:/bin/sh: 1: /etc/init.d/networking: not found
```
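The "cloudbr0 is not a network device" line matches my setup: no bridge exists under that name on these hosts. A quick sanity check (just `ip link`, nothing CloudStack-specific) confirms it:

```shell
# 'ip link show' exits non-zero when the named device is absent,
# which is the case here since my bridges use other names.
ip link show cloudbr0 || echo "cloudbr0 not present"
```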

Looking at the command history, I see:
```
Jun  9 21:05:52 node7.testenv.bradhouse.dev sudo: cloudstack : 
PWD=/home/cloudstack ; USER=root ; COMMAND=/usr/bin/cloudstack-setup-agent -m 
10.10.100.2,10.10.100.3,10.10.100.4 -z 2 -p 2 -c 5 -g 
78999d33-6584-340b-a4bc-8c19b52aa195 -a -s --pubNic=cloudbr0 --prvNic=cloudbr0 
--guestNic=cloudbr0 --hypervisor=lxc
```

compared to KVM:
```
Jun  9 20:43:31 node4.testenv.bradhouse.dev sudo: cloudstack : 
PWD=/home/cloudstack ; USER=root ; COMMAND=/usr/bin/cloudstack-setup-agent -m 
10.10.100.2,10.10.100.3,10.10.100.4 -z 2 -p 2 -c 2 -g 
044ed14f-03ec-31b4-a02a-c842cdcbba1b -a -s --pubNic=public --prvNic=hypervisor 
--guestNic=public --hypervisor=kvm
```

The thing that stands out to me is the pubNic, prvNic, and guestNic values: the
LXC invocation should match what is passed for the KVM nodes, but instead it
reverts to the default 'cloudbr0', which doesn't exist on my hosts since I gave
my bridges more meaningful names. Any idea why these aren't passed properly?
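For reference, here is what I expected the setup invocation on the LXC nodes to look like, reusing the bridge names from the KVM run above. This is my guess at the intended command, not something the agent actually ran:

```shell
# Expected cloudstack-setup-agent invocation on the LXC hosts:
# same bridge names as the KVM nodes, LXC cluster id and host UUID kept.
/usr/bin/cloudstack-setup-agent -m 10.10.100.2,10.10.100.3,10.10.100.4 \
    -z 2 -p 2 -c 5 -g 78999d33-6584-340b-a4bc-8c19b52aa195 -a -s \
    --pubNic=public --prvNic=hypervisor --guestNic=public --hypervisor=lxc
```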

Perhaps I missed something when creating the new cluster? I thought the network
settings were zone-wide.

GitHub link: https://github.com/apache/cloudstack/discussions/10999
