On Sun, Feb 17, 2013 at 03:57:33PM -0500, Alon Bar-Lev wrote:
> Hello Antoni,
> Great work!
> I am very excited we are going this route; it is the first of many
> steps that will allow us to run on different distributions.
> I apologize I got to this so late.
> Some notes on the model; I am not sure if someone has already noted these.
> I think that the abstraction should be more than entity and properties.
> For example:
> nic is a network interface
> bridge is a network interface and ports network interfaces
> bond is a network interface and slave network interfaces
> vlan is a network interface and vlan id
> network interface can have:
> - name
> - ip config
> - state
> - mtu
> this way it would be easier to share common code that handles pure interfaces.
I agree with you - even though OOD is falling out of fashion in certain
circles.
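To make the shared-base idea concrete, here is a rough sketch of the model
described above. All class and attribute names here are my own invention
for illustration, not the actual Vdsm API:

```python
# Hypothetical sketch of the shared network-interface model.
# Names are illustrative only, not Vdsm's real classes.

class NetworkInterface:
    """Common state shared by nics, bridges, bonds and vlans."""
    def __init__(self, name, ip_config=None, state='down', mtu=1500):
        self.name = name
        self.ip_config = ip_config
        self.state = state
        self.mtu = mtu

class Nic(NetworkInterface):
    """A plain physical network interface."""

class Bridge(NetworkInterface):
    def __init__(self, name, ports=(), **kwargs):
        super().__init__(name, **kwargs)
        self.ports = list(ports)    # network interfaces attached as ports

class Bond(NetworkInterface):
    def __init__(self, name, slaves=(), **kwargs):
        super().__init__(name, **kwargs)
        self.slaves = list(slaves)  # slave network interfaces

class Vlan(NetworkInterface):
    def __init__(self, name, tag, **kwargs):
        super().__init__(name, **kwargs)
        self.tag = tag              # vlan id
```

Code that only cares about name, ip config, state or mtu can then operate
on any of these uniformly, which is exactly the sharing you describe.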
> I don't quite understand the 'Team' configurator, are you suggesting a
> provider for each technology?
Just as we may decide to move away from the standard linux bridge to
ovs-based bridging, we may switch from bonding to teaming. I do not
think that we should do it now, but we should make sure that the design
allows for such a switch.
> So we can get a configuration of:
> bridge:
> - iproute2 provider
> - ovs provider
> - ifcfg provider
> bond:
> - iproute2
> - team
> - ovs
> - ifcfg
> vlan:
> - iproute2
> - ovs
> - ifcfg
I do not think that such complex combinations are of real interest. The
client should not (currently) be allowed to request them. Some say that
the specific combination that is used by Vdsm to implement the network
should be defined in a config file. I think that a python file is good
enough for that, at least for now.
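A python file holding that single chosen combination could be as small as
the following sketch (the module layout and names are hypothetical, not
actual Vdsm code):

```python
# Hypothetical provider selection kept in a plain python module,
# as suggested above, instead of a parsed config file.
# The entity types and provider names below are illustrative.

CONFIGURATORS = {
    'bridge': 'iproute2',   # alternatives: 'ovs', 'ifcfg'
    'bond':   'iproute2',   # alternatives: 'team', 'ovs', 'ifcfg'
    'vlan':   'iproute2',   # alternatives: 'ovs', 'ifcfg'
}

def configurator_for(entity_type):
    """Return the provider name chosen for a given entity type."""
    return CONFIGURATORS[entity_type]
```

Changing the combination is then a one-line edit to the module, with no
need to let clients request arbitrary provider mixes.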
> I also would like us to explore a future alternative of the network
> configuration via crypto vpn directly from qemu to another qemu, the
> idea is to have a kerberos like key per layer3(or layer2) destination,
> while communication is encrypted at user space and sent to a flat
> network. The advantage of this is that we manage logical network and
> not physical network, while relying on hardware to find the best
> route to destination. The question is how and if we can provide this
> via the suggested abstraction. But maybe it is too soon to address
> this kind of future.
This is something completely different, as we say in Python.
The nice thing about your idea is that in the context of host network
configuration we need nothing more than our current bridge-bond-nic.
The sad thing about your idea is that it would scale badly with the
number of virtual networks. If a new VM comes live and sends an ARP
who-has broadcast message - which VMs should be bothered to attempt to
answer it?
> For the open questions:
> 1. Yes, I think that the mode should be non-persistent; persistent
> providers should emulate non-persistent operations by diffing between
> what they have and the goal.
> 2. Once vdsm is installed, the mode it runs in should be fixed. So the
> only question is what profile is selected during host deployment.
> 3. I think that if we can avoid aliases it would be nice.
I wonder if everybody agrees that aliases are not needed.
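Point 1 above - a persistent provider emulating non-persistent operations
by diffing what it has against the goal - could be sketched roughly like
this (function and data shapes are hypothetical):

```python
# Hypothetical sketch of point 1: a persistent provider receives a
# desired state, diffs it against the currently persisted state, and
# applies only the resulting removals and additions.

def diff_config(current, desired):
    """Return (to_remove, to_add) between two {name: properties} dicts.

    Entries that changed appear in both: they must be removed in their
    old form and re-added in their new one.
    """
    to_remove = {name: props for name, props in current.items()
                 if name not in desired or desired[name] != props}
    to_add = {name: props for name, props in desired.items()
              if name not in current or current[name] != props}
    return to_remove, to_add
```

The non-persistent providers simply apply the desired state directly; the
persistent ones run the diff first, so both expose the same interface.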
> 4. Keeping the least persistent information would be flexible. I
> would love to see a zero-persistence mode available, for example when
> the management interface is dhcp or manually configured.
> I am very fond of the iproute2 configuration, and don't mind if the
> administrator configures the management interface manually. I think
> this can supersede ifcfg quite easily in most cases. In the rare
> cases where the administrator uses ovirt to modify the network
> interface, we may consider delegating persistence to a totally
> different model. But as far as I understand, the problem is solely
> related to management connectivity, so we can implement a simple
> bootstrap of the non-persistence module to reconstruct the management
> network setup from vdsm configuration instead of persisting it to the
> distribution-wide configuration.
> Alon Bar-Lev
vdsm-devel mailing list