On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Dan Kenigsberg" <[email protected]>
To: "Alon Bar-Lev" <[email protected]>
Cc: "Simon Grinberg" <[email protected]>, "VDSM Project Development"
<[email protected]>
Sent: Wednesday, November 28, 2012 10:20:11 PM
Subject: Re: [vdsm] MTU setting according to ifcfg files.
On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar threw a bomb that we should co-exist on a generic host; this is
something I do not know how to compute. I am still waiting for a response
on where this requirement came from and whether it is mandatory.
This bomb has been ticking forever. We have ovirt-node images for
pure hypervisor nodes, but we also support plain Linux nodes, where local
admins are free to `yum upgrade` at the least convenient moment. The
latter mode can be the stuff that nightmares are made of, but it also
allows the flexibility and bleeding-edge-ness we all cherish.
There is a difference between having a generic OS and having a generic setup,
i.e. running your email server, file server and LDAP on a node that is running VMs.
I have no problem with having a generic OS (as opposed to ovirt-node), provided
we keep full control over it.
Alon.
Can I say we have reached agreement that oVirt should cover two kinds of
hypervisors? A stateless slave is good for pure, ordinary virtualization
workloads, while a generic host keeps the flexibility of customization.
In my opinion, it is good for the oVirt community to provide choices for
users: they can customize the product in production, at build time, or even
at the source-code level, according to their requirements and skills. So,
moving back to the discussion of network configuration, I would like to
suggest we adopt both solutions.
Dynamic way (as Alon suggested in his previous mail) -- for oVirt node.
It takes a step towards being truly stateless. It also helps offload
transaction management from vdsm for the static way: we are going to build
vdsm's network setup module on top of a generic host network manager, such
as libvirt's virInterface, but to persist the network configuration on
oVirt node, vdsm has to care about the lower-level details. If we only run
the static way on a generic host, then the host network manager can perform
the rollback work on behalf of vdsm (see the libvirt sketch after the
comments below). I only have two comments on the dynamic way:
1. Do we really need to care about the management interface? How about
just leaving it to installation and disallowing its configuration at runtime?
2. How about moving the retrieval of the network configuration from the engine
into vdsm-reg?
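
To make the "rollback on behalf of vdsm" idea concrete, here is a minimal
sketch of how a caller could lean on libvirt's interface change-transaction
calls (changeBegin/changeCommit/changeRollback in the Python bindings, as I
understand them, wrapping virInterfaceChangeBegin and friends). The bridge
XML, interface names and connection URI below are placeholders of mine, not
what vdsm would actually send.

import libvirt

# Placeholder bridge definition; vdsm's real payload would differ.
BRIDGE_XML = """
<interface type='bridge' name='testbr0'>
  <start mode='onboot'/>
  <bridge>
    <interface type='ethernet' name='eth0'/>
  </bridge>
</interface>
"""

def define_bridge_transactionally(uri='qemu:///system'):
    """Let the host network manager keep/restore the config snapshot,
    so the caller does not have to implement its own rollback."""
    conn = libvirt.open(uri)
    try:
        conn.changeBegin(0)                # snapshot current host network config
        try:
            iface = conn.interfaceDefineXML(BRIDGE_XML, 0)
            iface.create(0)                # bring the new bridge up
        except libvirt.libvirtError:
            conn.changeRollback(0)         # restore the snapshot on any failure
            raise
        else:
            conn.changeCommit(0)           # keep the new configuration
    finally:
        conn.close()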
Static way -- for generic host. We haven't followed up much on this topic in
the thread, so I would like to lay out my understanding to continue the
discussion.
As Dan said in the first message of this thread, openvswitch cannot keep
layer-3 configuration, so it is not appropriate to use it alone to cover the
base network configuration. That leaves two choices: netcf and
NetworkManager (NM). netcf does not seem to be used as widely as NM.
Currently it supports fedora/rhel, debian/ubuntu and suse; to support a new
distribution you need to add a converter that translates the interface's
XML definition into the native configuration, because netcf only covers the
static part of the configuration and relies on the system network service
to make it take effect. Compared with netcf, NM makes it easier to support
a new distribution, because it has its own daemon that parses its
self-describing key-value files and calls the netlink library to perform
live changes. Besides that, NM can also monitor a physical interface's
link status and can run callbacks on certain events. Daniel
mentioned that libvirt would support NM through the virInterface API. That's
good for vdsm, but I found that it doesn't fit vdsm's requirements very well:
1. It does not allow defining a bridge on top of an existing interface;
the schema requires the bridge's port interface to be defined together
with the bridge.
The vdsm setupNetwork verb allows creating a bridge on top of an existing
bonding, given only the bonding's name. To work around this, vdsm has to
fetch the bonding definition from libvirt or collect the information from
/sys, and then embed the bonding definition in the bridge's definition
(see the sketch after this list).
2. It also removes the bridge's port interface (e.g. the bonding device)
when the bridge is removed. vdsm does not expect this when the option
'implicitBonding' is unset. To work around it, vdsm has to re-create the
bonding as described in 1.
3. The MTU setting is propagated to the nic or bond when an MTU is set on a
bridge. This can break MTU support in oVirt when adding a bridge with a
smaller MTU on a vlan whose slave nic is also used by a network with a
bigger MTU (see the MTU example at the end of this mail).
4. Some less-used options are not allowed by the schema, such as certain
bonding modes and options.
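
To illustrate the workaround for 1 and 2, here is a rough sketch: read the
bonding's slaves from /sys and spell out the whole bonding definition inside
the bridge definition before handing it to libvirt. The names, the hardcoded
bond mode and the exact XML layout are mine and only illustrative.

import libvirt

def bond_slaves(bond):
    # Read the bond's slave nics from sysfs.
    with open('/sys/class/net/%s/bonding/slaves' % bond) as f:
        return f.read().split()

def bridge_over_bond_xml(bridge, bond):
    # The schema does not accept a bare reference to an existing bond,
    # so the bond and its slaves are embedded inside the bridge.
    slaves = '\n'.join(
        "        <interface type='ethernet' name='%s'/>" % s
        for s in bond_slaves(bond))
    return """
<interface type='bridge' name='%s'>
  <start mode='onboot'/>
  <bridge>
    <interface type='bond' name='%s'>
      <bond mode='balance-rr'>
%s
      </bond>
    </interface>
  </bridge>
</interface>
""" % (bridge, bond, slaves)   # mode hardcoded for brevity; real code
                               # would read it from /sys as well

conn = libvirt.open('qemu:///system')
conn.interfaceDefineXML(bridge_over_bond_xml('testbr0', 'bond0'), 0)
conn.close()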
Some of these could change with NM as the backend, but it is better to state
vdsm's requirements while libvirt is moving to NM. Would it be better if
libvirt allowed vdsm to manipulate sub-elements of a network definition?
Probably Igor and Antoni have more comments on this.
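
As a concrete illustration of point 3: if eth0 already carries a vlan
network with MTU 9000, adding a 1500-MTU bridged network whose bridge MTU
propagates down to eth0 would shrink eth0 and break the jumbo-frame network.
The invariant vdsm needs to keep is roughly the one below (the data shape is
just an example of mine, not vdsm's real structures).

def required_nic_mtu(networks, nic):
    # The nic must keep the largest MTU of all networks that run on top
    # of it (directly, via a vlan or via a bond); adding a network with
    # a smaller MTU must not lower it.
    mtus = [net['mtu'] for net in networks.values() if net['nic'] == nic]
    return max(mtus or [1500])

networks = {
    'jumbo':  {'nic': 'eth0', 'mtu': 9000},   # existing vlan network
    'newnet': {'nic': 'eth0', 'mtu': 1500},   # bridge being added
}
assert required_nic_mtu(networks, 'eth0') == 9000   # eth0 must stay at 9000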
_______________________________________________
vdsm-devel mailing list
[email protected]
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel