On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:
----- Original Message -----
From: "Mark Wu" <wu...@linux.vnet.ibm.com>
To: "VDSM Project Development" <firstname.lastname@example.org>
Cc: "Alon Bar-Lev" <alo...@redhat.com>, "Dan Kenigsberg" <dan...@redhat.com>, "Simon Grinberg" <si...@redhat.com>,
"Antoni Segura Puimedon" <asegu...@redhat.com>, "Igor Lvovsky" <ilvov...@redhat.com>,
"Daniel P. Berrange"
Sent: Monday, December 3, 2012 7:39:49 AM
Subject: Re: [vdsm] Back to future of vdsm network configuration
On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" <dan...@redhat.com>
To: "Alon Bar-Lev" <alo...@redhat.com>
Cc: "Simon Grinberg" <si...@redhat.com>, "VDSM Project Development" <firstname.lastname@example.org>
Sent: Wednesday, November 28, 2012 10:20:11 PM
Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar threw a bomb that we should co-exist on a generic host,
something I do not know how to compute. I am still waiting for an
explanation of where this requirement came from and whether it is mandatory.
This bomb has been ticking forever. We have ovirt-node images for
pure hypervisor nodes, but we support plain Linux nodes, where
admins are free to `yum upgrade` at the least convenient moment. The
latter mode can be the stuff that nightmares are made of, but it
allows the flexibility and bleeding-edgeness we all cherish.
There is a difference between having a generic OS and having a generic
setup, running your email server, file server and LDAP on a node
that is running VMs.
I have no problem with having a generic OS (as opposed to ovirt-node), but
we must have full control over that.
Can I say we have got agreement that oVirt should cover two kinds of
hypervisors? A stateless slave is good for a pure and normal
workload, while a generic host can keep the flexibility of customization.
In my opinion, it's good for the oVirt community to provide choices to
users. They could customize it in production, building and even in
code according to their requirements and skills.
I also think it will be good to support both modes! It would also be good if we could
rule the world! :)
Now seriously... :)
If we want to ever have a working solution we need to focus, dropping wishful
requirements in favour of the minimum required that will allow us to reach our goal.
Having a good clean interface for vdsm networking within the stateless mode will
allow a persistent implementation to exist even if the whole implementation of
master and vdsm assumes stateless. This kind of implementation will get a new
state from master, compare it to whatever exists on the host and sync.
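A minimal sketch of that compare-and-sync idea (all names and data shapes here are hypothetical; vdsm's real verbs and config format differ):

```python
# Hypothetical sketch of the "get state from master, diff, sync" loop.
# Network definitions are modeled as plain dicts keyed by network name.

def diff_networks(desired, current):
    """Return the actions needed to move `current` to `desired`."""
    to_add = {name: cfg for name, cfg in desired.items()
              if name not in current}
    to_remove = [name for name in current if name not in desired]
    to_update = {name: cfg for name, cfg in desired.items()
                 if name in current and current[name] != cfg}
    return to_add, to_remove, to_update

def sync(desired, current, apply_fn, remove_fn):
    """Apply the diff; apply_fn/remove_fn wrap the real setup verbs."""
    to_add, to_remove, to_update = diff_networks(desired, current)
    for name in to_remove:
        remove_fn(name)
    for name, cfg in {**to_add, **to_update}.items():
        apply_fn(name, cfg)
```

The point is that persistence becomes an implementation detail of whoever holds `current`; master only ever pushes `desired`.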
I, of course, will be against investing resources in such a network
management plugin approach... but it is doable, and my vote is not
something that you cannot safely ignore.
I cannot say that I do not fail to parse English sentences with double
or triple negations...
I'd like to see an API that lets us define a persistent initial management
interface, and create volatile network devices during runtime. I'd love
to see a define/create distinction, as libvirt has.
How about keeping our current setupNetwork API, with a minor change to
its semantics - it would not persist anything. A new persistNetwork API
would be added, intending to persist the management network after it has
been set up. On boot, only the management definitions would show up, and Engine (or a
small local service on top of vdsm) would push the complete configuration.
How does this benefit over loading the last config, and then having Engine
refresh (always/if needed)?
setSafeNetConfig, and the rollback-on-boot mess would be scrapped.
The only little problem would be to implement setupNetwork without
playing with persisted ifcfg* files.
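The proposed split could look roughly like this - a toy model, not vdsm's actual code; persistence is simulated with an in-memory store standing in for files on disk:

```python
# Toy model of a non-persisting setupNetwork plus an explicit
# persistNetwork verb. The running config is volatile; only networks
# passed to persistNetwork (e.g. the management one) survive "reboot".

class NetConfigurator:
    def __init__(self):
        self.running = {}     # volatile, lost on boot
        self.persisted = {}   # stands in for on-disk storage

    def setupNetwork(self, name, cfg):
        """Configure a network in the running state only."""
        self.running[name] = cfg

    def persistNetwork(self, name):
        """Mark an already-running network as boot-persistent."""
        self.persisted[name] = self.running[name]

    def boot(self):
        """On boot, only persisted definitions show up; Engine is
        expected to push the rest once it sees the host."""
        self.running = dict(self.persisted)
```

Under this model, setSafeNetConfig-style rollback disappears: a bad transient config is simply gone after reboot, and only deliberately persisted networks come back.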
Having said that, let's come back to your original claim:
"""while generic host can keep the flexibility of customization."""
NOBODY, and I repeat my answer to Dan, NOBODY claims we should not support it.
But the term 'generic' seems to confuse everyone... a generic host does
not mean the administrator can do whatever he likes; it is just a host that is
installed using the standard distribution installation procedure.
Using 'generic host' can be done with either stateful or stateless modes.
However, what customization can be done, and how, to a resource that is managed by
VDSM (e.g. storage, network) is a completely different question.
There cannot be two managers of the same resource; it is a rule of thumb. Any
other approach is non-deterministic and may lead to huge resource investment
with almost no benefit, as it will never be stable.
So moving back to
discussing network configuration, I would like to suggest we
adopt both of the two solutions.
Dynamic way (as Alon suggested in his previous mail) -- for oVirt node.
It will take a step towards real stateless. Actually it's also a way
to offload the transaction management from vdsm for the static way.
It will also provide the framework needed in order to provide network on demand,
as Livnat plans.
Define network resources (and I guess storage) when a VM is moved to the host.
In this mode there is no other way to go!
We're going to build the vdsm network setup module on top of a generic host
network manager, like libvirt virInterface. But to persist the
configuration on oVirt node, vdsm has to care about the details of the
lower level. If we only run the static way on the generic host, then the
host network manager could perform the rollback stuff on behalf of
vdsm. I only have two comments on the dynamic way:
1. Do we really need to care about the management interface? How about
just leaving it to installation and disallowing configuring it at runtime?
2. How about putting the retrieval of the network configuration from
Engine into vdsm-reg?
vdsm-reg is going to be killed soon, just like the vdsm-bootstrap.
I was tempted to do this for 3.2, but I was taken...
I, too, see no benefit in vdsm pulling its setup from Engine, over
Engine pushing it once it is aware of the new host (and knows that the
host is needed, and that its network config has changed, etc).
Static way -- for generic host. We didn't follow much on this topic in
the thread. So I would like to talk about my understanding of it.
As Dan said in the first message of this thread, openvswitch couldn't
keep layer-3 configurations, so it's not appropriate to use it by itself to
cover the base network configurations. Then we have two choices: netcf
and NetworkManager. It seems netcf is not used as widely as NM.
Currently, it supports fedora/rhel, debian/ubuntu and suse. For a
new distribution, you need to add a converter to translate the
XML definition into native configuration, because netcf just covers
part of the static configuration, and relies on the system network scripts
to make configurations take effect.
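To illustrate the kind of per-distribution converter involved, here is a hypothetical translation of a minimal interface description into Fedora/RHEL ifcfg syntax (netcf's real input is an XML schema and covers far more keys; the function and its field names are invented for illustration):

```python
# Hypothetical converter from a minimal interface description to
# Fedora/RHEL-style ifcfg text. netcf does the equivalent from XML
# and needs a comparable backend per distribution family.

def to_ifcfg(iface):
    lines = ['DEVICE=%s' % iface['name'],
             'ONBOOT=%s' % ('yes' if iface.get('onboot', True) else 'no')]
    if iface.get('ip'):
        # static addressing
        lines += ['BOOTPROTO=none',
                  'IPADDR=%s' % iface['ip'],
                  'NETMASK=%s' % iface.get('netmask', '255.255.255.0')]
    else:
        lines.append('BOOTPROTO=dhcp')
    return '\n'.join(lines) + '\n'
```

A debian/ubuntu backend would have to emit /etc/network/interfaces stanzas instead, which is exactly the duplication cost of adding a new distribution.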
This may be a good opportunity to show:
$ git grep NETCF_TRANSACTION
src/drv_suse.c:#define NETCF_TRANSACTION "/bin/false"
i.e., a considerable effort has to take place in order to get
distribution-neutrality out of netcf.
Beyond that, netcf is all about cf: configuration. It takes lesser care
about the current state of network devices. So to perform on-line
changes to them, the user is responsible for taking them down, changing
the config, and taking them up again - just like what we do with ifcfg* files.
Compared with netcf, it's easier for NM to
support a new distribution because it has its own daemon to parse the
self-defined key-value files and call the netlink library to perform the
live change. Besides that, NM can also monitor the physical
link status, and has the ability to run callbacks on some events. Daniel
mentioned that libvirt would support NM via the virInterface API. That's
good for vdsm. But I found it didn't fit vdsm's requirements very well:
1. It doesn't allow defining a bridge on top of an existing bonding.
That means the schema requires you to define the bridge port interface
together with the bridge.
The vdsm setupNetwork verb allows creating a bridge given the
name of an existing bonding. To work around it, vdsm has to get the
definition from libvirt or collect information from /sys, and then embed
the bonding definition into the bridge's definition.
2. It also removes the port interface, like the bonding device, together
with the removed bridge. That's not expected by vdsm when the option
'implicitBonding' is unset. To work around it, vdsm has to re-create the
bonding as said in 1.
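The workaround described in points 1 and 2 amounts to something like this (the data shapes and helper names are hypothetical; real vdsm would read the bond definition from libvirt XML or /sys rather than a dict):

```python
# Sketch of the workaround: to define a bridge on top of an existing
# bond under a schema that only accepts inline port definitions,
# fetch the bond's definition and embed it in the bridge's one.

def embed_bond_in_bridge(bridge_name, bond_name, get_bond_def):
    """get_bond_def stands in for querying libvirt or /sys."""
    bond = get_bond_def(bond_name)
    return {'type': 'bridge', 'name': bridge_name, 'ports': [bond]}

def remove_bridge_keep_bond(bridge_def, create_fn, delete_fn):
    """Deleting the bridge also deletes its inline ports, so with
    implicitBonding unset, vdsm must re-create the bond afterwards."""
    delete_fn(bridge_def['name'])
    for port in bridge_def['ports']:
        if port.get('type') == 'bond':
            create_fn(port)
```

The awkward part is that vdsm ends up owning a copy of the bond definition purely to satisfy the schema, which is the duplication being complained about.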
3. MTU settings are propagated to the nic or bond when setting an MTU on a
bridge. It could break MTU support in oVirt when adding a bridge with a
smaller MTU on a vlan whose slave nic is
also used by a bigger-MTU network.
This smells like a plain bug in NM. I believe they'd like to support
managing vlans with different MTUs based on the same strata.
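The constraint behind point 3: a nic carrying several vlans must keep an MTU at least as large as the largest attached network, so blindly propagating a smaller bridge MTU down breaks the bigger network. A one-line sketch of the correct rule (a hypothetical helper, not NM's actual behaviour):

```python
# The underlying device must be able to carry the largest frame any
# of its vlan networks can send, so its MTU is the max over them, not
# the MTU of whichever bridge happened to be configured last.

def required_nic_mtu(mtus_on_nic):
    """mtus_on_nic: iterable of per-network MTUs sharing one nic."""
    return max(mtus_on_nic, default=1500)
```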
4. Some less-used options are not allowed by the schema, like some
bonding modes and options.
Some of them could be changed in the backend NM. But it's better to consider
vdsm's requirements while libvirt is moving to NM. Is it better that
libvirt allows vdsm to manipulate a
sub-element of a network? Probably Igor and Antoni have more insight.
I don't like using libvirt for networking. We should interact directly
with the host network manager or whatever alternative we choose.
I do like using libvirt's abstraction - when it has non-/bin/false
substance behind it. I think that it has the potential to help projects beyond vdsm.
vdsm-devel mailing list