On 12/30/19 1:53 AM, Ross Vandegrift wrote:
> On Sun, Dec 29, 2019 at 11:16:20PM +0100, Thomas Goirand wrote:
>> It isn't uncommon to have a vlan on top of a bridge, on top of a bond
>> (in fact, that's very common for OpenStack), and in this type of setup,
>> not having the ifenslave + vlan + bridge-utils packages would be a real
>> pain.
>
> Is it common that this is exposed to the VM though? Or does the VM just
> see a virtual ethernet device?
>
> Ross
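[For reference, a stack like the one quoted above could be expressed with those three packages in /etc/network/interfaces roughly as follows; the interface names, VLAN ID and addresses here are only illustrative placeholders:]

```
# Sketch only -- eno1/eno2, VLAN 100 and the 192.0.2.0/24 addresses
# are hypothetical examples, not a recommended layout.

# Bond two physical NICs (needs the ifenslave package)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

# Bridge on top of the bond (needs the bridge-utils package)
auto br0
iface br0 inet manual
    bridge_ports bond0
    bridge_stp off

# VLAN interface on top of the bridge (needs the vlan package)
auto br0.100
iface br0.100 inet static
    vlan-raw-device br0
    address 192.0.2.10/24
    gateway 192.0.2.1
```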
Thinking that there are only VMs is a dangerous assumption: there's also bare metal with Ironic. In such an environment, it wouldn't be surprising to need VLANs and bonding, and the user may need bridging on top of that.

For those who want to run untrusted networking for users, there are very nice network cards such as the ConnectX-6 Dx from Mellanox:
https://www.mellanox.com/page/products_dyn?product_family=302&mtag=connectx_6_dx_ic

These cards have a multicore ARM CPU and 16 GB of RAM running a full operating system (CentOS or Ubuntu, I've been told), support nice features like NVMe over fabric, and much more. They would be ideal for running bare-metal systems with Ironic. I haven't been able to test them, though (they cost about 1600 USD apiece...). But with such cards, I'd guess it should be possible to have any combination of VLAN, bonding and bridging in the bare-metal OS, configured by cloud-init on demand for the customer. In the card's OS, it is possible to filter VLANs to make sure the customer can't disrupt the OpenStack management network (for example, by joining the management network VLAN and taking over the IP address of the router gateway).

So yes, it does make sense to provide VLAN, bridging and bonding, at least in our non-minimal generic cloud image. IMO it would be fine to provide them only in that one, if we document it, so the other, smaller image can be kept small.

Cheers,

Thomas Goirand (zigo)
