Re: [vdsm] vdsm networking changes proposal

2013-02-21 Thread Mark Wu

On 02/18/2013 05:23 PM, David Jaša wrote:

Hi,

Alon Bar-Lev píše v Ne 17. 02. 2013 v 15:57 -0500:

Hello Antoni,

Great work!
I am very excited we are going this route; it is the first of many steps that will allow us to run on different distributions.
I apologize for getting to this so late.

Notes on the model; I am unsure if someone has already raised these.

I think that the abstraction should be more than entity and properties.

For example:

a nic is a network interface
a bridge is a network interface plus port network interfaces
a bond is a network interface plus slave network interfaces
a vlan is a network interface plus a VLAN id

network interface can have:
- name
- ip config
- state
- mtu

this way it would be easier to share common code that handles pure interfaces.
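The shared-base idea above could be sketched in Python roughly as follows (class and field names are illustrative, not vdsm's actual model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NetworkInterface:
    # Properties common to every interface type.
    name: str
    ip_config: Optional[str] = None   # e.g. "192.168.1.10/24" or "dhcp"
    state: str = "down"
    mtu: int = 1500

@dataclass
class Bridge(NetworkInterface):
    # A bridge is an interface plus its port interfaces.
    ports: List[NetworkInterface] = field(default_factory=list)

@dataclass
class Bond(NetworkInterface):
    # A bond is an interface plus its slave interfaces.
    slaves: List[NetworkInterface] = field(default_factory=list)

@dataclass
class Vlan(NetworkInterface):
    # A vlan is an interface plus a VLAN id.
    vlan_id: int = 0

nic = NetworkInterface(name="eth0", state="up")
br = Bridge(name="br0", ports=[nic], mtu=9000)
print(br.name, br.mtu, [p.name for p in br.ports])
```

Code that only touches name, ip_config, state, or mtu then works on any of the four types without caring which one it got.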

I don't quite understand the 'Team' configurator. Are you suggesting a provider
for each technology?

Team is a new implementation of bonding in the Linux kernel, IIRC.


bridge
- iproute2 provider
- ovs provider
- ifcfg provider

bond
- iproute2
- team
- ovs
- ifcfg

vlan
- iproute2
- ovs
- ifcfg

So we can get a configuration of:
bridge:iproute2
bond:team
vlan:ovs

?
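One way to model that mix-and-match selection is a registry keyed by (device kind, provider). A minimal sketch, with made-up function names; the returned command lines only illustrate what each provider would invoke:

```python
# Hypothetical registry: each (kind, provider) pair maps to a builder that
# returns the command a configurator would run for that technology.
REGISTRY = {}

def provider(kind, name):
    def deco(fn):
        REGISTRY[(kind, name)] = fn
        return fn
    return deco

@provider("bridge", "iproute2")
def bridge_iproute2(dev):
    return ["ip", "link", "add", dev, "type", "bridge"]

@provider("bond", "team")
def bond_team(dev):
    # teamd -d daemonizes; -t names the team device.
    return ["teamd", "-d", "-t", dev]

@provider("vlan", "ovs")
def vlan_ovs(dev, tag=100):
    # With OVS, a VLAN is a tagged port rather than a separate device.
    return ["ovs-vsctl", "add-port", "br-int", dev, "tag=%d" % tag]

def build_command(kind, provider_name, dev, **kwargs):
    return REGISTRY[(kind, provider_name)](dev, **kwargs)

print(build_command("bridge", "iproute2", "br0"))
print(build_command("bond", "team", "bond0"))
```

A configuration like bridge:iproute2, bond:team, vlan:ovs is then just three lookups into the same table.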

I would also like us to explore a future alternative: network configuration via a
crypto VPN directly from one qemu to another. The idea is to have a
Kerberos-like key per layer-3 (or layer-2) destination, with communication
encrypted in user space and sent over a flat network. The advantage of this is
that we manage the logical network rather than the physical network, while relying on
hardware to find the best route to the destination. The question is how, and if, we
can provide this via the suggested abstraction. But maybe it is too soon to
address this kind of future.

Isn't it better to separate the two goals and persuade qemu developers to 
implement TLS for migration connections?

+1 for implementing it in qemu


David


For the open questions:

1. Yes, I think the mode should be non-persistent; persistence providers
should emulate non-persistent operations by diffing what they have against
the goal.
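The diff a persistent provider would compute can be sketched as follows (a toy model; real vdsm configuration is richer than a flat dict):

```python
def diff_config(current, goal):
    """Compute what a persistent provider must change so that its stored
    configuration matches the non-persistent goal state."""
    to_remove = [k for k in current if k not in goal]
    to_add = [k for k in goal if k not in current]
    to_update = [k for k in goal if k in current and current[k] != goal[k]]
    return to_remove, to_add, to_update

# Example: bond0 disappears, vlan100 is new, br0's MTU changed.
current = {"br0": {"mtu": 1500}, "bond0": {"mode": "active-backup"}}
goal = {"br0": {"mtu": 9000}, "vlan100": {"tag": 100}}
print(diff_config(current, goal))
```

The non-persistent providers simply apply the goal; only the persistent ones need this extra reconciliation step.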

2. Once vdsm is installed, the mode it runs in should be fixed. So the only
question is which profile is selected during host deployment.

3. I think it would be nice if we could avoid aliases.

4. Keeping the least persistent information would be the most flexible. I would love to
see a zero-persistence mode available, for example when the management interface is
configured via DHCP or manually.

I am very fond of the iproute2 configurator, and don't mind if the administrator
configures the management interface manually. I think this can supersede
ifcfg quite easily in most cases. In the rare cases where the administrator uses oVirt
to modify the network interface, we may consider delegating persistence to a
totally different model. But as far as I understand, the problem is solely
related to management connectivity, so we can implement a simple bootstrap
in the non-persistent module to reconstruct the management network setup from vdsm
configuration instead of persisting it to the distribution-wide configuration.

Regards,
Alon Bar-Lev

- Original Message -

From: Antoni Segura Puimedon asegu...@redhat.com
To: a...@ovirt.org, vdsm-de...@fedorahosted.org
Sent: Friday, February 8, 2013 12:54:23 AM
Subject: vdsm networking changes proposal

Hi fellow oVirters!

The network team and a few others have toyed in the past with several
important
changes like using open vSwitch, talking D-BUS to NM, making the
network
non-persistent, etc.

It is with some of these changes in mind that we (special thanks go to
Livnat
Peer, Dan Kenigsberg and Igor Lvovsky) have worked in a proposal for
a new architecture for vdsm's networking part. This proposal is
intended to
make our software more adaptable to new components and use cases,
eliminate
distro dependencies as much as possible and improve the
responsiveness and
scalability of the networking operations.

To do so, it proposes an object oriented representation of the
different
elements that come into play in our networking use cases.

But enough of introduction, please go to the feature page that we
have put
together and help us with your feedback, questions, proposals and
extensions.

http://www.ovirt.org/Feature/NetworkReloaded


Best regards,

Toni
___
Arch mailing list
a...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/arch


___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel




Re: [vdsm] vdsm networking changes proposal

2013-02-21 Thread Mark Wu

On Thu 21 Feb 2013 04:46:16 PM CST, Mark Wu wrote:


Sorry for coming to it so late. I have the following comments and
questions about the proposal.


I suggest adding a 'top interface' field to the network, and applying
IpConfig and mtu only to it.
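This suggestion could look roughly like the following (illustrative names only, not vdsm's actual classes):

```python
class Network(object):
    """A network records which device sits on top of its stack; IP
    configuration and MTU are applied only to that top interface."""

    def __init__(self, name, top_interface, ip_config=None, mtu=1500):
        self.name = name
        self.top_interface = top_interface  # e.g. the bridge, vlan or bond device
        self.ip_config = ip_config
        self.mtu = mtu

    def apply(self):
        # Only the top interface receives the IP configuration and MTU;
        # the devices underneath are left untouched.
        return {"device": self.top_interface,
                "ip": self.ip_config,
                "mtu": self.mtu}

net = Network("ovirtmgmt", top_interface="br0", ip_config="dhcp", mtu=1500)
print(net.apply())
```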


For the openvswitch configurator, it needs 

[vdsm] [yajsonrpc]questions about json rpc

2013-02-21 Thread Sheldon

Hi, Adam
An error arises when I call the json rpc server via AsyncoreReactor, but I
can call the json rpc server successfully with a simple TCPReactor that I
wrote myself.

How can I call json rpc via AsyncoreReactor correctly?

>>> address = ("127.0.0.1", 4044)
>>> clientsReactor = asyncoreReactor.AsyncoreReactor()
>>> reactor = TestClientWrapper(clientsReactor.createClient(address))
>>> jsonAPI = JsonRpcClient(reactor)
>>> jsonAPI.connect()
>>> jsonAPI.callMethod("Host.ping", [], 1, 10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/site-packages/yajsonrpc/client.py", line 39, in callMethod
    resp = self._transport.recv(timeout=timeout)
  File "/usr/share/vdsm/tests/jsonRpcUtils.py", line 100, in recv
    return self._queue.get(timeout=timeout)[1]
  File "/usr/lib64/python2.7/Queue.py", line 176, in get
    raise Empty
Queue.Empty


--
Sheldon Feng(冯少合)shao...@linux.vnet.ibm.com
IBM Linux Technology Center



Re: [vdsm] VDSM Repository Reorganization

2013-02-21 Thread Dave Neary

Hi Vinzenz,

On 02/18/2013 05:43 PM, Vinzenz Feenstra wrote:

It would be nice to come to an agreement any time soon. I would like to
apply all the changes as soon as possible.
I would not like to see this sink into a black hole.


The absence of feedback typically means one of 6 things:

* No-one read it
* No-one understood it
* No-one cared
* No-one is paying attention to you any more
* It's so ridiculous it's unworthy of comment
* Everyone who read it agrees

I suggest you assume the last one, and proceed until someone starts 
shouting at you ;-)


Thanks,
Dave.
--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13