On 12/03/2012 04:25 PM, Dan Kenigsberg wrote:
On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote:



----- Original Message -----
From: "Mark Wu" <wu...@linux.vnet.ibm.com>
To: "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>
Cc: "Alon Bar-Lev" <alo...@redhat.com>, "Dan Kenigsberg" <dan...@redhat.com>, "Simon 
Grinberg" <si...@redhat.com>,
"Antoni Segura Puimedon" <asegu...@redhat.com>, "Igor Lvovsky" <ilvov...@redhat.com>, 
"Daniel P. Berrange"
<berra...@redhat.com>
Sent: Monday, December 3, 2012 7:39:49 AM
Subject: Re: [vdsm] Back to future of vdsm network configuration

On 11/29/2012 04:24 AM, Alon Bar-Lev wrote:

----- Original Message -----
From: "Dan Kenigsberg" <dan...@redhat.com>
To: "Alon Bar-Lev" <alo...@redhat.com>
Cc: "Simon Grinberg" <si...@redhat.com>, "VDSM Project
Development" <vdsm-devel@lists.fedorahosted.org>
Sent: Wednesday, November 28, 2012 10:20:11 PM
Subject: Re: [vdsm] MTU setting according to ifcfg files.

On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:
Itamar threw a bomb that we should co-exist on a generic host; this is
something I do not know how to compute. I am still waiting for a response
on where this requirement came from and whether it is mandatory.

This bomb has been ticking forever. We have ovirt-node images for
pure hypervisor nodes, but we also support plain Linux nodes, where
local admins are free to `yum upgrade` at the least convenient moment.
The latter mode can be the stuff that nightmares are made of, but it
also allows the flexibility and bleeding-edgeness we all cherish.

There is a difference between having a generic OS and having a generic
setup: running your email server, file server and LDAP on a node
that is also running VMs.

I have no problem with having a generic OS (as opposed to ovirt-node),
as long as we have full control over it.

Alon.
Can I say we have reached agreement that oVirt should cover two kinds of
hypervisors? A stateless slave is good for a pure, ordinary virtualization
workload, while a generic host keeps the flexibility of customization.
In my opinion, it's good for the oVirt community to provide choices for
users. They could customize it in production, in the build, and even in
the source code, according to their requirements and skills.

I also think it would be good to support both modes! It would also be good if we
could rule the world! :)

Now seriously... :)

If we ever want to have a working solution we need to focus, dropping wishful
requirements in favour of the minimum required to reach a stable milestone.

Having a good, clean interface for vdsm networking within the stateless mode will
allow a persistent implementation to exist even if the whole implementation of
master and vdsm assumes statelessness. This kind of implementation would get a new
state from the master, compare it to whatever exists on the host, and sync.
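
To make that concrete, a minimal sketch of such a compare-and-sync step might
look like this (Python; 'desired', 'running' and the 'configurator' object are
hypothetical names used only for illustration, not existing vdsm code):

def sync_networks(desired, running, configurator):
    # Reconcile the host's running network state with the state
    # pushed by the master.  Illustration only.
    for net, attrs in desired.items():
        if net not in running:
            configurator.add_network(net, attrs)   # missing on the host
        elif running[net] != attrs:
            configurator.edit_network(net, attrs)  # drifted from the master
    for net in running:
        if net not in desired:
            configurator.remove_network(net)       # unknown to the master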

I, of course, will be against investing resources in such network
management plugin approach... but it is doable, and my vote is not
something that you cannot safely ignore.

I cannot say that I do not fail to parse English sentences with double
or triple negations...

I'd like to see an API that lets us define a persistent initial management
interface, and create volatile network devices during runtime. I'd love
to see a define/create distinction, as libvirt has.

How about keeping our current setupNetwork API, with a minor change to
its semantics - it would not persist anything. A new persistNetwork API
would be added, intended to persist the management network after it has
been tested.

On boot, only the management definitions would show up, and Engine (or a
small local service on top of vdsm) would push the complete
configuration.


how does this benefit over loading the last config, and then having Engine refresh it (always/if needed)?


setSafeNetConfig, and the rollback-on-boot mess would be scrapped.

The only little problem would be to implement setupNetwork without
playing with persisted ifcfg* files.
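
Roughly, the caller-side flow could look like this (a sketch only; the verb
names follow the proposal above, but the signatures and the client object are
hypothetical, not an existing vdsm API):

def configure_management_network(client, nic):
    # apply the change at runtime only - nothing is persisted yet
    client.setupNetwork(networks={'ovirtmgmt': {'nic': nic,
                                                'bootproto': 'dhcp'}})
    # commit to persistent config only after connectivity is verified
    if client.ping():
        client.persistNetwork('ovirtmgmt')
    # on reboot, only the persisted management definition comes up;
    # Engine pushes the rest of the configuration afterwards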


Having said that, let's come back to your original claim:
"""while generic host can keep the flexibility of customization."""

NOBODY, and I repeat my answer to Dan, NOBODY claims we should not support a
generic host.
But the term 'generic' seems to confuse everyone... a generic host does not
mean the administrator can do whatever he likes; it is just a host that is
installed using the standard distribution installation procedure.

A 'generic host' can be used in either stateful or stateless mode.

However, what customization can be done to a resource that is managed by
VDSM (e.g. storage, network), and how, is a completely different question.

There cannot be two managers of the same resource; that is a rule of thumb. Any
other approach is non-deterministic and may lead to a huge resource investment
with almost no benefit, as it will never be stable.

So moving back to the discussion of network configuration, I would like to
suggest that we adopt both of the two solutions.

Dynamic way (as Alon suggested in his previous mail) -- for oVirt node.
It takes a step towards real statelessness. Actually, it is also helpful
for offloading the transaction management from vdsm in the static way.

It will also provide the framework needed in order to provide networks on demand,
as Livnat plans: define network resources (and I guess storage) when a VM is
moved to the host. In this mode there is no other way to go!

We're going to build the vdsm network setup module on top of a generic host
network manager, like libvirt's virInterface. But to persist the network
configuration on oVirt node, vdsm has to care about the lower-level details.
If we only run the static way on the generic host, then the host network
manager could perform the rollback on behalf of vdsm. I have only two
comments on the dynamic way:
1. Do we really need to care about the management interface? How about
just leaving it to installation and disallowing its configuration at
runtime?
2. How about putting the retrieval of the network configuration from
Engine into vdsm-reg?

vdsm-reg is going to be killed soon, just like the vdsm-bootstrap.
I was tempted to do this for 3.2, but I was taken...

I, too, see no benefit in vdsm pulling its setup from Engine, over
Engine pushing it once it is aware of the new host (and knows that the
host is needed, and that its network config has changed, etc).



Static way -- for the generic host. We didn't follow up much on this topic
in the thread, so I would like to lay out my understanding to continue
this discussion.
As Dan said in the first message of this thread, openvswitch can't keep
layer-3 configurations, so it's not appropriate to use it on its own to
cover the base network configuration. Then we have two choices: netcf
and NetworkManager. It seems netcf is not used as widely as NM.
Currently it supports fedora/rhel, debian/ubuntu and suse. To support a
new distribution, you need to add a converter that translates the
interface's XML definition into the native configuration, because netcf
covers only the static-configuration part and relies on the system
network service to make the configuration take effect.
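
For illustration, driving such an XML definition through libvirt's virInterface
binding looks roughly like this (a simplified sketch; the XML omits most
attributes, and it is netcf's per-distro converter that turns the definition
into native files such as ifcfg-em1 on Fedora/RHEL):

import libvirt

IFACE_XML = """
<interface type='ethernet' name='em1'>
  <start mode='onboot'/>
  <protocol family='ipv4'>
    <ip address='192.0.2.10' prefix='24'/>
  </protocol>
</interface>
"""

conn = libvirt.open('qemu:///system')
iface = conn.interfaceDefineXML(IFACE_XML, 0)  # persist the definition
iface.create()                                 # bring the interface up now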

This may be a good opportunity to show

$ git grep NETCF_TRANSACTION
...
src/drv_suse.c:#define NETCF_TRANSACTION "/bin/false"
...

i.e., a considerable effort has to take place in order to get
distribution-neutrality out of netcf.

Beyond that, netcf is all about cf: configuration. It takes less care
of the current state of network devices. So to perform on-line
changes to them, the user is responsible for taking them down, changing
the config, and taking them up again - just like what we do with ifcfg*
files.
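
That down/edit/up cycle is essentially what any configurator built on top of
it has to script; a bare-bones illustration (paths and commands are the
Fedora/RHEL ones, simplified):

import subprocess

def replace_config(device, new_ifcfg_text):
    # Illustration only: the 'netcf way' of an on-line change -
    # take the device down, rewrite its config, bring it back up.
    subprocess.check_call(['ifdown', device])
    path = '/etc/sysconfig/network-scripts/ifcfg-%s' % device
    with open(path, 'w') as f:
        f.write(new_ifcfg_text)
    subprocess.check_call(['ifup', device])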

Compared with netcf, it's easier for NM to support a new distribution,
because it has its own daemon that parses its self-defined key-value
files and calls the netlink library to perform the live change. Besides
that, NM can also monitor the physical interface's link status and can
run callbacks on some events. Daniel mentioned that libvirt would
support NM through the virInterface API. That's good for vdsm. But I
found it doesn't fit vdsm's requirements very well:
1. It doesn't allow defining a bridge on top of an existing interface.
That means the schema requires you to define the bridge's port interface
together with the bridge.
   The vdsm setupNetwork verb allows creating a bridge on top of an
existing bonding given only its name. To work around this, vdsm has to
get the bonding definition from libvirt, or collect the information from
/sys, and then embed the bonding definition inside the bridge's
definition (see the sketch after this list).
2. It also removes the bridge's port interface, such as a bonding device,
together with the bridge when the bridge is removed. vdsm does not expect
this when the option 'implicitBonding' is unset. To work around it, vdsm
has to re-create the bonding as described in 1.
3. The MTU setting is propagated to the nic or bond when an MTU is set on
a bridge. It could break MTU support in oVirt when adding a bridge with a
smaller MTU to a vlan whose slave nic is also used by a network with a
bigger MTU.

This smells like a plain bug in NM. I believe they'd like to support
managing vlans with different MTUs on top of the same underlying device.

4. Some less-used options are not allowed by the schema, like some
bonding modes and options.
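
To make the workaround from point 1 concrete, a rough sketch of collecting the
bond members from /sys and folding them into the bridge definition could look
like this (hypothetical helper; the XML is a simplified approximation of
libvirt's interface schema and omits details such as the bond mode):

def bridge_xml_with_bond(bridge, bond):
    # Illustration only: embed an existing bond inside a bridge
    # definition, since the schema does not let us reference the
    # bond by name alone.  Slave names are read back from /sys.
    with open('/sys/class/net/%s/bonding/slaves' % bond) as f:
        slaves = f.read().split()
    slave_xml = ''.join(
        "        <interface type='ethernet' name='%s'/>\n" % s
        for s in slaves)
    template = (
        "<interface type='bridge' name='%s'>\n"
        "  <bridge>\n"
        "    <interface type='bond' name='%s'>\n"
        "      <bond>\n"
        "%s"
        "      </bond>\n"
        "    </interface>\n"
        "  </bridge>\n"
        "</interface>\n")
    return template % (bridge, bond, slave_xml)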

Some of them could change in the NM backend. But it's better to state
vdsm's requirements while libvirt is moving to NM. Would it be better if
libvirt allowed vdsm to manipulate sub-elements of a network? Probably
Igor and Antoni have more comments on this.

I don't like using libvirt for networking. We should interact directly
with the host network manager or whatever alternative we choose.

I do like using libvirt's abstraction - when it has non-/bin/false
substance behind it. I think it has the potential to help projects
outside oVirt.

Dan.
_______________________________________________
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


