LXC has been effectively untested for years - the hypervisors that DO work
(i.e. are used in production by people) are KVM, XenServer/XCP-ng, and
VMware. That's all.
LXC, OVM and co. are most probably doomed, to be honest.

Best,

On Wed, 23 Jun 2021 at 09:27, Joshua Schaeffer <[email protected]>
wrote:

>
> >>>> A thing that I briefly touched on somewhere upstairs ^^^ - for each
> >>>> traffic type you have defined, you need to define a traffic label. My
> >>>> deduction capabilities make me believe you are using KVM, so you need
> >>>> to set your KVM traffic label for all your network traffic (traffic
> >>>> label, in your case = exact name of the bridge as visible in Linux).
> >>>> I recall there are some new UI issues when it comes to tags, so go to
> >>>> <MGMT-IP>:8080/client/legacy and check your traffic label there - and
> >>>> set it there; the UI in 4.15.0.0 doesn't allow you to update/set it
> >>>> after the zone is created, but the old UI will allow you to do it.
>
> I changed over all the bonds to the standard naming convention and that
> did the trick. I also added the storage network back as you suggested.
> Thanks again for those pointers. However, I may have discovered a bug. I'm
> actually trying to test an LXC hypervisor instead of KVM and it isn't using
> the network labels. There seem to be two problems:
>
> 1. You can't actually set the LXC network label in the new UI because
> there is no option for it. There is an option in the legacy UI; however, it
> doesn't actually update the database and throws a warning in the management
> logs.
> 2. Even if you set the labels directly in the database, ACS doesn't seem to
> use them. I'm not 100% sure but it looks like it defaults to the settings
> on the compute host. In my case this is causing problems with the storage
> network.
>
> For the first problem, if all the labels are set to NULL:
>
> user@dbserver:~$ sudo mysql -D cloud -e "SELECT id, traffic_type,
> lxc_network_label FROM physical_network_traffic_types;"
> +----+--------------+-------------------+
> | id | traffic_type | lxc_network_label |
> +----+--------------+-------------------+
> | 11 | Management   | NULL              |
> | 12 | Public       | NULL              |
> | 13 | Guest        | NULL              |
> | 14 | Storage      | NULL              |
> +----+--------------+-------------------+
>
> and I attempt to set the LXC network label in the legacy UI it remains
> NULL in the database and I see this warning in the logs:
>
> 2021-06-23 05:42:20,977 WARN  [c.c.a.d.ParamGenericValidationWorker]
> (qtp1644231115-887:ctx-a97e9424 ctx-5d6ce3c6) (logid:3e68476e) Received
> unknown parameters for command updateTrafficType. Unknown parameters :
> lxcnetworklabel
>
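For what it's worth, the legacy UI is just calling the updateTrafficType API
here, so the failure can be reproduced with a raw signed request. Below is a
minimal Python sketch of CloudStack's documented request-signing scheme - the
traffic-type id and the API/secret keys are placeholders, and the point is
simply that the request carries the lxcnetworklabel parameter that 4.15
rejects:

```python
import base64
import hashlib
import hmac
import urllib.parse

def signed_query(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string using the documented
    scheme: URL-encode values, lowercase and sort the key=value pairs,
    HMAC-SHA1 them with the secret key, then base64-encode the digest."""
    params = dict(params, apiKey=api_key, response="json")
    encoded = {
        k: urllib.parse.quote_plus(str(v)).replace("+", "%20")
        for k, v in params.items()
    }
    to_sign = "&".join(sorted(f"{k}={v}".lower() for k, v in encoded.items()))
    digest = hmac.new(secret_key.encode(), to_sign.encode(),
                      hashlib.sha1).digest()
    sig = urllib.parse.quote_plus(base64.b64encode(digest).decode())
    return "&".join(f"{k}={v}" for k, v in encoded.items()) + f"&signature={sig}"

# Placeholder id/keys; the real id would come from listTrafficTypes.
print(signed_query(
    {"command": "updateTrafficType",
     "id": "TRAFFIC-TYPE-UUID",
     "lxcnetworklabel": "cloudbr2"},  # the parameter the logs flag as unknown
    api_key="APIKEY", secret_key="SECRETKEY"))
```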
> To get the right labels, I updated the database manually:
>
> user@dbserver:~$ sudo mysql -D cloud -e "UPDATE
> physical_network_traffic_types SET lxc_network_label = 'cloudbr0' WHERE id
> = 11;"
> user@dbserver:~$ sudo mysql -D cloud -e "UPDATE
> physical_network_traffic_types SET lxc_network_label = 'cloudbr1' WHERE id
> in (12,13);"
> user@dbserver:~$ sudo mysql -D cloud -e "UPDATE
> physical_network_traffic_types SET lxc_network_label = 'cloudbr2' WHERE id
> = 14;"
> user@dbserver:~$ sudo mysql -D cloud -e "SELECT id, traffic_type,
> lxc_network_label FROM physical_network_traffic_types;"
> +----+--------------+-------------------+
> | id | traffic_type | lxc_network_label |
> +----+--------------+-------------------+
> | 11 | Management   | cloudbr0          |
> | 12 | Public       | cloudbr1          |
> | 13 | Guest        | cloudbr1          |
> | 14 | Storage      | cloudbr2          |
> +----+--------------+-------------------+
>
> However, this leads to my second problem: ACS doesn't seem to actually use
> the correct network interface. I think it uses the default that is set on
> the compute host (maybe as a fallback), but I could be wrong about that.
> This is what is set in the agent.properties file on my compute host:
>
> user@cmpserver:~$ sudo cat /etc/cloudstack/agent/agent.properties | egrep
> '(network\.device|hypervisor\.type)'
> private.network.device=cloudbr0
> guest.network.device=cloudbr1
> hypervisor.type=lxc
> public.network.device=cloudbr1
>
> And I can see in virsh that the management and public interfaces use
> cloudbr0 and cloudbr1 respectively, however the storage interface for the
> VM uses cloudbr0 when it should use cloudbr2:
>
> root@s-38-VM:~# ip --brief link show eth3
> eth3             UP             1e:00:ac:00:03:6a
> <BROADCAST,MULTICAST,UP,LOWER_UP>
>
> root@bllcloudcmp01:~# virsh dumpxml s-38-VM | grep -B 1 -A 8
> '1e:00:ac:00:03:6a'
>     <interface type='bridge'>
>       <mac address='1e:00:ac:00:03:6a'/>
>       <source bridge='cloudbr0'/>
>       <target dev='vnet6'/>
>       <model type='virtio'/>
>       <link state='up'/>
>       <alias name='net3'/>
>       <rom bar='off' file=''/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
> function='0x0'/>
>     </interface>
>
> I set up another cluster and host with the exact same configuration,
> except running KVM instead of LXC, and set the KVM labels to the same
> values as the LXC labels as a test. I then started the system VMs on the
> new host. You can see that virsh is using the cloudbr2 bridge for the VM:
>
> root@s-32-VM:~# ip --brief link show eth3
> eth3             UP             1e:00:f8:00:03:df
> <BROADCAST,MULTICAST,UP,LOWER_UP>
>
> root@bllcloudcmp02:~# virsh dumpxml s-32-VM | grep -B 1 -A 8
> '1e:00:f8:00:03:df'
>     <interface type='bridge'>
>       <mac address='1e:00:f8:00:03:df'/>
>       <source bridge='cloudbr2'/>
>       <target dev='vnet3'/>
>       <model type='virtio'/>
>       <link state='up'/>
>       <alias name='net3'/>
>       <rom bar='off' file=''/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
> function='0x0'/>
>     </interface>
>
> Notice the <source bridge> is now set to cloudbr2, with the hypervisor
> being the only difference.
>
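As a side note: when comparing several VMs like this, the MAC-to-bridge
mapping can be pulled out of `virsh dumpxml` output with a few lines of
Python. A sketch, using the interface snippet from this thread wrapped in a
minimal <domain> element so it parses on its own:

```python
import xml.etree.ElementTree as ET

def bridges_by_mac(domain_xml: str) -> dict:
    """Map each bridge-type interface's MAC address to its source bridge."""
    root = ET.fromstring(domain_xml)
    return {
        iface.find("mac").get("address"): iface.find("source").get("bridge")
        for iface in root.findall(".//interface[@type='bridge']")
    }

# Trimmed interface element from `virsh dumpxml s-38-VM` above.
sample = """<domain>
  <devices>
    <interface type='bridge'>
      <mac address='1e:00:ac:00:03:6a'/>
      <source bridge='cloudbr0'/>
    </interface>
  </devices>
</domain>"""
print(bridges_by_mac(sample))  # {'1e:00:ac:00:03:6a': 'cloudbr0'}
```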
> Is LXC still supported (it is still mentioned in the docs)? If not then
> I'll just switch to using KVM.
>
> --
> Thanks,
> Joshua Schaeffer
>
>

-- 

Andrija Panić
