Re: [vdsm] VM xp is down. Exit message: unsupported configuration: spice graphics are not supported with this QEMU.

2014-04-07 Thread David Jaša
On So, 2014-04-05 at 01:44 -0400, 鬼丁 wrote:
 How do I compile qemu so that it supports spice? What do I need to do?
 ./configure output:
 tcg debug enabled   no
 gprof enabled       no
 sparse enabled      no
 strip binaries      yes
 profiler            no
 static build        no
 -Werror enabled     yes
 pixman              system
 SDL support         no
 GTK support         no
 curses support      no
 curl support        no
 mingw32 support     no
 Audio drivers       oss
 Block whitelist (rw)
 Block whitelist (ro)
 Mixer emulation     no
 VirtFS support      no
 VNC support         yes
 VNC TLS support     no
 VNC SASL support    no
 VNC JPEG support    no
 VNC PNG support     no
 VNC WS support      no
 xen support         no
 brlapi support      no
 bluez support       no
 Documentation       no
 GUEST_BASE          yes
 PIE                 yes
 vde support         no
 Linux AIO support   no
 ATTR/XATTR support  yes
 Install blobs       yes
 KVM support         yes
 RDMA support        no
 TCG interpreter     no
 fdt support         yes
 preadv support      yes
 fdatasync           yes
 madvise             yes
 posix_madvise       yes
 sigev_thread_id     yes
 uuid support        no
 libcap-ng support   no
 vhost-net support   yes
 vhost-scsi support  yes
 Trace backend       nop
 Trace output file   trace-<pid>
 spice support       no (/)

^^^
Here: spice support is disabled. Use the configure script's command-line
switches to enable spice support. Why don't you use prebuilt qemu and spice
packages anyway?

David
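A minimal sketch of such a rebuild (the switch names match qemu configure help of this era; the package names for the spice development headers vary by distro and are assumptions here):

```shell
# spice support needs the spice-server and spice-protocol development headers
# installed first (e.g. spice-server-devel / spice-protocol on Fedora --
# the exact package names are distro-specific).
./configure --enable-spice --enable-kvm
make
# the configure summary should then report:  spice support   yes
```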

 rbd support           no
 xfsctl support        no
 nss used              no
 libusb                no
 usb net redir         no
 GLX support           no
 libiscsi support      no
 build guest agent     yes
 seccomp support       no
 coroutine backend     ucontext
 coroutine pool        yes
 GlusterFS support     no
 virtio-blk-data-plane no
 gcov                  gcov
 gcov enabled          no
 TPM support           no
 libssh2 support       no
 TPM passthrough       no
 QOM debugging         yes
 
 
 
 2014-04-04 3:04 GMT-04:00 Michal Skrivanek michal.skriva...@redhat.com:
 
 
  On Apr 4, 2014, at 04:50 , 鬼丁 lxfw...@gmail.com wrote:
 
   ovirt-engine version 3.4.0 RC3
   qemu-kvm version 1.6.1
   vdsm version 4.14.6
   libvirt version 1.1.3

   messages as follows:
   Apr  3 22:30:48 localhost vdsm vm.Vm ERROR vmId=`a4ea786e-2c1c-4159-86e0-
   8744c54b3bbe`::The vm start process failed#012Traceback (most recent
  call last):#012  File /usr/share/vdsm/vm.py, line 2249, in
  _startUnderlyingVm#012self._run()#012  File /usr/share/vdsm/vm.py,
  line 3170, in _run#012self._connection.createXML(domxml, flags),#012
   File /usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py, line
  92, in wrapper#012ret = f(*args, **kwargs)#012  File
  /usr/lib64/python2.7/site-packages/libvirt.py, line 2920, in
  createXML#012if ret is None:raise libvirtError('virDomainCreateXML()
  failed', conn=self)#012libvirtError: unsupported configuration: spice
  graphics are not supported with this QEMU
 
  so check your QEMU is compiled with SPICE support. Where did you get it
  from?
 
   Apr  3 22:30:50 localhost vdsm vm.Vm WARNING
  vmId=`a4ea786e-2c1c-4159-86e0-8744c54b3bbe`::trying to set state to
  Powering down when already Down
   Apr  3 22:30:50 localhost vdsm root WARNING File:
  /var/lib/libvirt/qemu/channels/a4ea786e-2c1c-4159-86e0-8744c54b3bbe.com.redhat.rhevm.vdsm
  already removed
   Apr  3 22:30:50 localhost vdsm root WARNING File:
  /var/lib/libvirt/qemu/channels/a4ea786e-2c1c-4159-86e0-8744c54b3bbe.org.qemu.guest_agent.0
  already removed
  
  
   libvirtd.log as follow:
   2014-04-04 02:30:48.363+0000: 988: debug :
  virStorageFileGetMetadata:1090 : 
  path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/----/xp.iso
  format=1 uid=107 gid=107 probe=0
   2014-04-04 02:30:48.363+0000: 988: debug :
  virStorageFileGetMetadataRecurse:1022 :
  path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/----/xp.iso
  format=1 uid=107 gid=107 probe=0
   2014-04-04 02:30:48.366+0000: 988: debug :
  virStorageFileGetMetadataInternal:770 :
  path=/rhev/data-center/mnt/192.168.1.130:_home_root_iso/64ac0f5a-82b2-4bae-8b74-794a0bcecf27/images/----/xp.iso,
  fd=25, format=1
   2014-04-04 02:30:48.422+0000: 988: debug :
  virStorageFileGetMetadata:1090 : 
  path=/rhev/data-center/mnt/192.168.1.130:_home_root_images/817df10f-ebe3-48ec-9e97-7c34aec1b00c/images/7eb012db-d3f3-4374-a24e-044946528f26/a2ec5f90-cc8a-43b1-bc8c-429536445a4d
  format=1 uid=107 gid=107 probe=0
   2014-04-04 02:30:48.422+0000: 988: debug :
  virStorageFileGetMetadataRecurse:1022 :
  path=/rhev/data-center/mnt/192.168.1.130:_home_root_images/817df10f-ebe3-48ec-9e97-7c34aec1b00c/images/7eb012db-d3f3-4374-a24e-044946528f26/a2ec5f90-cc8a-43b1-bc8c-429536445a4d
  format=1 uid=107 gid=107 probe=0
   2014-04-04 02:30:48.425+0000: 988: debug :
  virStorageFileGetMetadataInternal:770 :
  

[vdsm] RFC: is it possible to configure hosts in cluster to be NTP peers

2013-04-09 Thread David Jaša
Hi,

oVirt still doesn't configure NTP on host installation and relies on the
administrator not forgetting to set it up correctly, mainly because it
is quite hard to configure it correctly automatically.

There is one thing that IMO could be configured automatically and that
could alleviate the situation somewhat: make the hosts in the cluster NTP
peers, so that when clocks go wrong in the cluster for any reason, the
error is the same on all hosts in the cluster.

The files could be stored in /etc/{ntp,chrony}/vdsm.conf for instance
and referenced with includefile or include ... in /etc/ntp.conf
or /etc/chrony.conf respectively.
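For instance, a generated peers file could look like this (a sketch only; the host names are made-up examples, and the exact directives differ between ntpd and chronyd):

```
# /etc/chrony/vdsm.conf -- hypothetical file generated by vdsm/engine,
# one "peer" line per Up host in the cluster (example names)
peer host1.example.com
peer host2.example.com
```

with a matching `include /etc/chrony/vdsm.conf` line in /etc/chrony.conf (or `includefile` in /etc/ntp.conf).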

What seems tricky though is that non-Up hosts should be excluded from
peer list because there are higher chances that their clocks are not
configured properly, so engine (or some host?) should trigger changes to
NTP configuration pretty frequently.

What do you think about these issues? I don't want to report bugs/RFEs
on the topic before I see your reply.

David


-- 

David Jaša, RHCE

SPICE QE based in Brno
GPG Key: 22C33E24 
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24




___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] [Users] Failed to Start VM (Run Once) due to internal spice channel error

2013-03-19 Thread David Jaša
David Jaša wrote on Tue 2013-03-19 at 10:50 +0100:
 Omer Frenkel wrote on Tue 2013-03-19 at 04:58 -0400:
  From: Omer Frenkel ofren...@redhat.com
  To: Tom Rogers trogers1...@gmail.com
  Cc: us...@ovirt.org
  Subject: Re: [Users] Failed to Start VM (Run Once) due to internal spice channel error
  Date: 19.3.2013 09:58:23
  
  
  - Original Message -
  
   From: Tom Rogers trogers1...@gmail.com
   To: us...@ovirt.org
   Sent: Monday, March 18, 2013 10:22:52 PM
   Subject: [Users] Failed to Start VM (Run Once) due to internal spice
   channel error
  
   When trying to run a first time (Run Once) VM install, the vm
  session
   fails to start. When we look into the libvirtd.log on the host, we
   find the following error:
  
   virDomainGraphicsDefParseXML:6566 : internal error unknown spice
   channel name ain
  
   in the lines right above message, contained in the xmlDESC we find
   the following snippet:
  
    <channel type="spicevmc">
      <target name="com.redhat.spice.0" type="virtio"/>
    </channel>
    <graphics autoport="yes" keymap="en-us" listen="0" passwd="*"
    passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
    type="spice">
      <channel mode="secure" name="ain"/>
      <channel mode="secure" name="nputs"/>
      <channel mode="secure" name="ursor"/>
      <channel mode="secure" name="layback"/>
      <channel mode="secure" name="ecord"/>
      <channel mode="secure" name="isplay"/>
      <channel mode="secure" name="sbredir"/>
      <channel mode="secure" name="martcard"/>
    </graphics>
  
   It looks to us that the xml may be malformed due to the seemingly
   first character missing in the channel names.
  
   Does this make any sense to anyone?
  
   On master:
   Ovirt-engine 3.3.0-0.2.master.20130313215708
  
   On node:
   qemu-kvm 1.2.2-7.fc18
   vdsm-xmlrpc 4.10.3-10.fc18
   libvirt-daemon 0.10.2.3-1.fc18
  
   --
   Tom Rogers
  
   ___
   Users mailing list
   us...@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
  indeed looks like first letter is missing, this is what the engine
  sends: 
  
   engine=# select * from vdc_options where option_name = 'SpiceSecureChannels';
    option_id |     option_name     |                         option_value                          | version
   -----------+---------------------+---------------------------------------------------------------+---------
          345 | SpiceSecureChannels | smain,sinputs                                                 | 3.0
          346 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.1
          347 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.2
          348 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.3
  
  can you please attach relevant vdsm log for this run? 
  Thanks 
 
 sounds like vdsm didn't cope with the schannel -> channel name change [1].
 It should either implement magic like spice-xpi does:
 http://cgit.freedesktop.org/spice/spice-xpi/tree/SpiceXPI/src/plugin/plugin.cpp#n320
 or stop any channel name mangling altogether for recent enough clusters.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=803666
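For illustration, a rough Python sketch of such spice-xpi-style "magic" (the function and channel set are made up for this example, not vdsm or spice-xpi API): strip the legacy "s" prefix only when the remainder is a known channel name, instead of blindly dropping the first character (which is what turns "main" into "ain"):

```python
# Known modern spice channel names (from the engine's 3.1+ option_value).
KNOWN_CHANNELS = {"main", "inputs", "cursor", "playback", "record",
                  "display", "usbredir", "smartcard"}

def normalize_channel(name):
    """Map a legacy s-prefixed name (smain, sinputs) to its modern form."""
    if name in KNOWN_CHANNELS:
        return name                # already a modern name: keep as-is
    if name.startswith("s") and name[1:] in KNOWN_CHANNELS:
        return name[1:]            # legacy "smain" -> "main"
    raise ValueError("unknown spice channel name: %s" % name)

legacy = "smain,sinputs"           # the 3.0-era engine value
print([normalize_channel(n) for n in legacy.split(",")])  # -> ['main', 'inputs']
```

Note the ordering: "smartcard" also starts with "s", so the membership check must come before the prefix-stripping branch.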

 
 David
 
  
  



Re: [vdsm] [Users] Failed to Start VM (Run Once) due to internal spice channel error

2013-03-19 Thread David Jaša
Omer Frenkel wrote on Tue 2013-03-19 at 04:58 -0400:
 From: Omer Frenkel ofren...@redhat.com
 To: Tom Rogers trogers1...@gmail.com
 Cc: us...@ovirt.org
 Subject: Re: [Users] Failed to Start VM (Run Once) due to internal spice channel error
 Date: 19.3.2013 09:58:23
 
 
 - Original Message -
 
  From: Tom Rogers trogers1...@gmail.com
  To: us...@ovirt.org
  Sent: Monday, March 18, 2013 10:22:52 PM
  Subject: [Users] Failed to Start VM (Run Once) due to internal spice
  channel error
 
  When trying to run a first time (Run Once) VM install, the vm
 session
  fails to start. When we look into the libvirtd.log on the host, we
  find the following error:
 
  virDomainGraphicsDefParseXML:6566 : internal error unknown spice
  channel name ain
 
  in the lines right above message, contained in the xmlDESC we find
  the following snippet:
 
   <channel type="spicevmc">
     <target name="com.redhat.spice.0" type="virtio"/>
   </channel>
   <graphics autoport="yes" keymap="en-us" listen="0" passwd="*"
   passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
   type="spice">
     <channel mode="secure" name="ain"/>
     <channel mode="secure" name="nputs"/>
     <channel mode="secure" name="ursor"/>
     <channel mode="secure" name="layback"/>
     <channel mode="secure" name="ecord"/>
     <channel mode="secure" name="isplay"/>
     <channel mode="secure" name="sbredir"/>
     <channel mode="secure" name="martcard"/>
   </graphics>
 
  It looks to us that the xml may be malformed due to the seemingly
  first character missing in the channel names.
 
  Does this make any sense to anyone?
 
  On master:
  Ovirt-engine 3.3.0-0.2.master.20130313215708
 
  On node:
  qemu-kvm 1.2.2-7.fc18
  vdsm-xmlrpc 4.10.3-10.fc18
  libvirt-daemon 0.10.2.3-1.fc18
 
  --
  Tom Rogers
 
 
 indeed looks like first letter is missing, this is what the engine
 sends: 
 
  engine=# select * from vdc_options where option_name = 'SpiceSecureChannels';
   option_id |     option_name     |                         option_value                          | version
  -----------+---------------------+---------------------------------------------------------------+---------
         345 | SpiceSecureChannels | smain,sinputs                                                 | 3.0
         346 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.1
         347 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.2
         348 | SpiceSecureChannels | main,inputs,cursor,playback,record,display,usbredir,smartcard | 3.3
 
 can you please attach relevant vdsm log for this run? 
 Thanks 

sounds like vdsm didn't cope with the schannel -> channel name change [1].
It should either implement magic like spice-xpi does:
http://cgit.freedesktop.org/spice/spice-xpi/tree/SpiceXPI/src/plugin/plugin.cpp#n320
or stop any channel name mangling altogether for recent enough clusters.

David

 
 


Re: [vdsm] vdsm networking changes proposal

2013-02-18 Thread David Jaša
Hi,

Alon Bar-Lev píše v Ne 17. 02. 2013 v 15:57 -0500:
 Hello Antoni,
 
 Great work!
 I am very excited we are going this route; it is the first of many steps to
 allow us to run on different distributions.
 I apologize I got to this so late.
 
 Notes for the model, I am unsure if someone already noted.
 
 I think that the abstraction should be more than entity and properties.
 
 For example:
 
 nic is a network interface
 bridge is a network interface and ports network interfaces
 bond is a network interface and slave network interfaces
 vlan is a network interface and vlan id
 
 network interface can have:
 - name
 - ip config
 - state
 - mtu
 
 this way it would be easier to share common code that handles pure interfaces.
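The abstraction above can be sketched in a few lines of Python (names and attributes are illustrative only, not the vdsm model):

```python
# Every element is a network interface with common attributes; composite
# elements (bridge, bond, vlan) additionally reference other interfaces.
class Interface(object):
    def __init__(self, name, ip_config=None, state="down", mtu=1500):
        self.name = name
        self.ip_config = ip_config
        self.state = state
        self.mtu = mtu

class Nic(Interface):
    pass

class Bridge(Interface):
    def __init__(self, name, ports=(), **kwargs):
        super(Bridge, self).__init__(name, **kwargs)
        self.ports = list(ports)      # interfaces plugged into the bridge

class Bond(Interface):
    def __init__(self, name, slaves=(), **kwargs):
        super(Bond, self).__init__(name, **kwargs)
        self.slaves = list(slaves)    # slave interfaces

class Vlan(Interface):
    def __init__(self, name, device, tag, **kwargs):
        super(Vlan, self).__init__(name, **kwargs)
        self.device = device          # underlying interface
        self.tag = tag                # vlan id

# example composition: a bridge on top of a vlan on top of a nic
eth0 = Nic("eth0", mtu=9000)
br0 = Bridge("br0", ports=[Vlan("eth0.100", eth0, 100)])
```

The point of the shared base class is exactly the one made above: name/ip/state/mtu handling is written once and inherited by every interface type.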
 
 I don't quite understand the 'Team' configurator, are you suggesting a 
 provider for each technology?

Team is a new implementation of bonding in Linux kernel IIRC.

 
 bridge
 - iproute2 provider
 - ovs provider
 - ifcfg provider
 
 bond
 - iproute2
 - team
 - ovs
 - ifcfg
 
 vlan
 - iproute2
 - ovs
 - ifcfg
 
 So we can get a configuration of:
 bridge:iproute2
 bond:team
 vlan:ovs
 
 ?
 
 I also would like us to explore a future alternative of the network 
 configuration via crypto vpn directly from qemu to another qemu, the idea is 
 to have a kerberos like key per layer3(or layer2) destination, while 
 communication is encrypted at user space and sent to a flat network. The 
 advantage of this is that we manage logical network and not physical network, 
 while relying on hardware to find the best route to destination. The 
 question is how and if we can provide this via the suggested abstraction. 
 But maybe it is too soon to address this kind of future.

Isn't it better to separate the two goals and persuade qemu developers to 
implement TLS for migration connections?

David

 
 For the open questions:
 
 1. Yes, I think that mode should be non-persistence, persistence providers 
 should emulate non-persistence operations by diff between what they have and 
 the goal.
 
 2. Once vdsm is installed, the mode it runs should be fixed. So the only 
 question is what is the selected profile during host deployment.
 
 3. I think that if we can avoid aliases it would be nice.
 
 4. Keeping the least persistence information would be flexible. I would love 
 to see a zero persistence mode available, for example if management interface 
 is dhcp or manually configured.
 
 I am very fond of the iproute2 configuration, and don't mind if administrator 
 configures the management interface manually. I think this can supersede the 
 ifcfg quite easily in most cases. In these rare cases administrator use ovirt 
 to modify the network interface we may consider delegating persistence to 
 totally different model. But as far as I understand the problem is solely 
 related to the management connectivity, so we can implement a simple 
 bootstrap of non-persistence module to reconstruct the management network 
 setup from vdsm configuration instead of persisting it to the 
 distribution-wide configuration.
 
 Regards,
 Alon Bar-Lev
 
 - Original Message -
  From: Antoni Segura Puimedon asegu...@redhat.com
  To: a...@ovirt.org, vdsm-de...@fedorahosted.org
  Sent: Friday, February 8, 2013 12:54:23 AM
  Subject: vdsm networking changes proposal
  
  Hi fellow oVirters!
  
  The network team and a few others have toyed in the past with several
  important
  changes like using open vSwitch, talking D-BUS to NM, making the
  network
  non-persistent, etc.
  
  It is with some of this changes in mind that we (special thanks go to
  Livnat
  Peer, Dan Kenigsberg and Igor Lvovsky) have worked in a proposal for
  a new architecture for vdsm's networking part. This proposal is
  intended to
  make our software more adaptable to new components and use cases,
  eliminate
  distro dependencies as much as possible and improve the
  responsiveness and
  scalability of the networking operations.
  
  To do so, it proposes an object oriented representation of the
  different
  elements that come into play in our networking use cases.
  
  But enough of introduction, please go to the feature page that we
  have put
  together and help us with your feedback, questions proposals and
  extensions.
  
  http://www.ovirt.org/Feature/NetworkReloaded
  
  
  Best regards,
  
  Toni
  ___
  Arch mailing list
  a...@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/arch
  



Re: [vdsm] vdsm networking changes proposal

2013-02-08 Thread David Jaša
Hi,

 Bridges: via the brctl cmdline tool. 

I was told that the canonical way to configure bridges in recent distros
is also ip(8):
# ip link add br0 type bridge
# ip link set eth0 master br0
# ip link set eth0 nomaster

(RHEL6 still has to rely on brctl, though)

 Alias 
   * Users have shown interest in the likes of eth0:4. We should
 find out if this is really required of oVirt.

If you consider ipfwadm support (Linux 2.0 feature also replaced in 2.2
with something else), go on.

net-tools only pretend to work (add an IP to a device via ip(8) or direct
kernel calls and then try to find the corresponding alias, for instance...),
so we shouldn't contribute to keeping them on life support.

David


Antoni Segura Puimedon wrote on Thu 2013-02-07 at 17:54 -0500:
 Hi fellow oVirters!
 
 The network team and a few others have toyed in the past with several 
 important
 changes like using open vSwitch, talking D-BUS to NM, making the network
 non-persistent, etc.
 
 It is with some of this changes in mind that we (special thanks go to Livnat
 Peer, Dan Kenigsberg and Igor Lvovsky) have worked in a proposal for
 a new architecture for vdsm's networking part. This proposal is intended to
 make our software more adaptable to new components and use cases, eliminate
 distro dependencies as much as possible and improve the responsiveness and
 scalability of the networking operations.
 
 To do so, it proposes an object oriented representation of the different
 elements that come into play in our networking use cases.
 
 But enough of introduction, please go to the feature page that we have put
 together and help us with your feedback, questions proposals and extensions.
 
 http://www.ovirt.org/Feature/NetworkReloaded
 
 
 Best regards,
 
 Toni



Re: [vdsm] Future of Vdsm network configuration

2012-11-12 Thread David Jaša
 well. For
  installations on machines that maybe serve other purposes as well, it
  could be slightly problematic. Not the part of managing the network,
  but the part of disabling network manager and network.service.
  
  Since what you said was bypass NM and network.service, maybe it would
  be better instead to leave whichever is default enabled and let
  the user define which interfaces we should manage, and make those
  unavailable to NM and network.service. There are four cases here:
  
  NM enabled network.service disabled:
  Simply create ifcfg-* for the interfaces that we want to manage
  that include NM_CONTROLLED=no and the MAC address of the
  interface.
  NM disabled and network.service enabled:
  Just make sure that the interfaces we are to manage do not have
  a ifcfg-* file.
  NM disabled and network.service disabled:
  No special requirements to make it work.
  NM enabled and network.service enabled:
  Make sure that there are no ifcfg-* files for the interfaces we
  manage and create a NM keyfile stating the interface as not
  managed.
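For the first case, the generated file could look roughly like this (device name and MAC are example values, not taken from this thread):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- sketch of the first case
DEVICE=eth0
HWADDR=52:54:00:12:34:56
NM_CONTROLLED=no
ONBOOT=yes
```

The HWADDR line is what pins the file to a specific interface, and NM_CONTROLLED=no keeps NetworkManager's hands off it.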
  
  Alon, just correct me if I am wrong in my interpretation of what you
  said, I wanted to expand on it to make sure I understood it well.
  
  Best, Toni
  
   
   I know this derive some more work, but I don't think it is that
   complex to implement and maintain.
   
   Just my 2 cents...
 
 Hello Toni,
 
 To demonstrate what I think, let's take this to the extreme...
 
 Hypervisor should be stable and rock solid, so I would use the minimum 
 required dependencies with tight integration.
 For this purpose I would use kernel + busybox + host-manager.
 host-manager that uses ioctls/netlink to perform the network management and 
 storage management.
 And as we only use qemu/kvm linked against qemu.
 We may add some OPTIONAL infrastructure component like openvswitch for extra 
 functionality.
 
 I, personally, don't see the value in running the hypervisor on generic 
 hosts, meaning running VMs on host that performs other tasks as well, such as 
 database server or application server.
 
 But let's say there is some value in that, so we have to ask:
 1. What is the stability factor we expect from these hosts?
 2. How well do we need to integrate with the distribution specific features?
 
 If the answer to (1) is as same as hypervisor, then we take the same software 
 and compromise with the integration.
 
 Otherwise we perform the minimum we can for such integration, such as 
 removing the network interfaces from the network manager control.
 
 The reasoning behind my opinion is that components such as dbus, systemd and 
 network manager were designed to solve the problems of the END USER, not to 
 be used as MISSION CRITICAL infrastructure components. This was part of the 
 effort to make the Linux desktop more friendly, but then it leaked into the 
 MISSION CRITICAL core.

This is surely not true for systemd, and as far as I know about
NetworkManager, its recent developments are moving it toward
mission-critical-grade software.

 
 The stability of the hypervisor should be the same or higher than the hosts 
 it runs, so it cannot use none mission critical components to achieve that.
 
 The solution can be to write the whole network functionality as plugins, 
 example: bridge plugin, vlan plugin, bond plugin etc...

Putting this together with other facts (inability of current kernel +
scripts to handle full IPv6 functionality), you effectively propose to
write Yet Another Network Daemon, This Time Done Right.

If you can spend one hour of your time to listen to some networking-related 
talks, please have a look at these two:
https://www.youtube.com/watch?v=lzCLkjjrg1Q (by Pavel Šimerda, one of 
NetworkManager developers)
https://www.youtube.com/watch?v=XUgmFyBe_9w (by SUSE guys developing Wicked)

 Then have implementation of these plugins using network manager, openvswitch, 
 ioctl/netlink.
 Using the appropriate plugin based on desired functionality per desired 
 stability.
 
 I really like to see rock solid monolithic host manager / cluster manager.

Systemd is on the best path to become such a monolithic beast that will
do everything, given its efforts to absorb functionalities unrelated to
init into its monolithic design (syslog, anacron).

David

 
 I hope I clarified a little...
 
 Regards,
 Alon



Re: [vdsm] Future of Vdsm network configuration

2012-11-12 Thread David Jaša
Hi Alon,

Alon Bar-Lev wrote on Mon 2012-11-12 at 12:22 -0500:
 
 - Original Message -
  From: David Jaša dj...@redhat.com
  To: Alon Bar-Lev alo...@redhat.com
  Cc: vdsm-de...@fedorahosted.org
  Sent: Monday, November 12, 2012 7:13:19 PM
  Subject: Re: [vdsm] Future of Vdsm network configuration
  
  Hi Alon,
  
   Alon Bar-Lev wrote on Sun 2012-11-11 at 13:28 -0500:
   
   - Original Message -
From: Antoni Segura Puimedon asegu...@redhat.com
To: Alon Bar-Lev alo...@redhat.com
Cc: vdsm-de...@fedorahosted.org, Dan Kenigsberg
dan...@redhat.com
Sent: Sunday, November 11, 2012 5:47:54 PM
Subject: Re: [vdsm] Future of Vdsm network configuration



- Original Message -
 From: Alon Bar-Lev alo...@redhat.com
 To: Dan Kenigsberg dan...@redhat.com
 Cc: vdsm-de...@fedorahosted.org
 Sent: Sunday, November 11, 2012 3:46:43 PM
 Subject: Re: [vdsm] Future of Vdsm network configuration
 
 
 
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: vdsm-de...@fedorahosted.org
  Sent: Sunday, November 11, 2012 4:07:30 PM
  Subject: [vdsm] Future of Vdsm network configuration
  
  Hi,
  
   Nowadays, when vdsm receives the setupNetwork verb, it
  mangles
  /etc/sysconfig/network-scripts/ifcfg-* files and restarts the
  network
  service, so they are read by the responsible SysV service.
  
  This is very much Fedora-oriented, and not up with the new
  themes
  in Linux network configuration. Since we want oVirt and Vdsm
  to
  be
  distribution agnostic, and support new features, we have to
  change.
  
  setupNetwork is responsible for two different things:
  (1) configure the host networking interfaces, and
   (2) create virtual networks for guests and connect them to the
  world
  over (1).
  
  Functionality (2) is provided by building Linux software
  bridges,
  and
  vlan devices. I'd like to explore moving it to Open vSwitch,
  which
  would
  enable a host of functionalities that we currently lack (e.g.
  tunneling). One thing that worries me is the need to
  reimplement
  our
  config snapshot/recovery on ovs's database.
  
  As far as I know, ovs is unable to maintain host level
  parameters
  of
  interfaces (e.g. eth0's IPv4 address), so we need another
  tool for functionality (1): either speak to NetworkManager
  directly,
  or
  to use NetCF, via its libvirt virInterface* wrapper.
  
  I have minor worries about NetCF's breadth of testing and
  usage;
  I
  know
  it is intended to be cross-platform, but unlike ovs, I am not
  aware
  of a
  wide Debian usage thereof. On the other hand, its API is
  ready
  for
  vdsm's
  usage for quite a while.
  
  NetworkManager has become ubiquitous, and we'd better
  integrate
  with
  it
  better than our current setting of NM_CONTROLLED=no. But as
  DPB
  tells
  us,
  https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.html
  we'd better offload integration with NM to libvirt.
  
  We would like to take Network configuration in VDSM to the
  next
  level
  and make it distribution agnostic in addition for setting the
  infrastructure for more advanced features to be used going
  forward.
  The path we think of taking is to integrate with OVS and for
  feature
  completeness use NetCF, via its libvirt virInterface*
  wrapper.
  Any
  comments or feedback on this proposal is welcomed.
  
  Thanks to the oVirt net team members who's input has helped
  writing
  this
  email.
 
 Hi,
 
 As far as I see this, network manager is a monster that is a
 huge
 dependency to have just to create bridges or configure network
 interfaces... It is true that on a host where network manager
 lives
  it would not be polite to define network resources other than via its
  interface; however, I don't like that we force network manager.
  
  NM is a default way of network configuration from F17 on and it's
  available on all platforms. It isn't exactly small but it wouldn't
  pull
  any dependency AFAICT because all its dependencies are on Fedora
  initramfs already...
  
 
 libvirt is long not used as virtualization library but system
 management agent, I am not sure this is the best system agent I
 would have chosen.
 
 I think that all the terms and building blocks got lost in
 time...
 and the result integration became more and more complex.
 
 Stabilizing such multi-layered component environment is much
 harder
 than monolithic environment.
 
 I would really want to see vdsm as monolithic component with
 full

Re: [vdsm] [RFC] ipv6 support of vdsm

2012-11-08 Thread David Jaša
Hi,

huntxu wrote on Thu 2012-11-08 at 18:58 +0800:
 Hi, folks.
 
 Recently I am considering to implement ipv6 support for vdsm. First of all  
 I
 would like to know whether there is already someone working on this  
 feature.
 If so, I might do something to help, however, if not, I would try to  
 implement
 it with suggestions from this discussion.
 
 With ipv6 support vdsm is supposed to work properly in:
  * mixed environment, in which ipv4 and ipv6 addresses coexist
  * ipv6-pure environment
 
 My idea is:
 
 1) Provide a mechanism to setup ipv6 address configuration of a host via
 XMLRPC/RestAPI. This would be done in the current ConfigNetwork module by
 modifying the network-scripts/ifcfg-* of the devices. Thus the host is
 able to access ipv6 network (with correct configuration).
 
 2) For incoming spice connections, qemu is able to listen to ipv6 address,
 so we use a boolean option spice_use_ipv6 which indicates whether qemu
 should listen to incoming ipv6 or ipv4 connections.

beware that this is a can of worms:

1) spice server can not really listen on v6 only, you can only choose
between v4 (default) or v6+v4 (if you set -spice ipv6,... or -spice
addr=::,... and its libvirt domain xml equivalents) because -spice
ipv6 is just an equivalent of -spice addr=::, not of IPV6_V6ONLY

2) spice server does not know or use SO_BINDTODEVICE

3) spice server can bind to a single IP address only:
https://bugzilla.redhat.com/show_bug.cgi?id=787259
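The first point can be demonstrated with a few lines of plain Python (a sketch, unrelated to spice's actual code): a v6 socket bound to "::" with IPV6_V6ONLY=0 also accepts v4-mapped connections, which is all that "-spice ipv6" / "addr=::" gives you, while a true v6-only listener needs IPV6_V6ONLY=1, which spice does not expose.

```python
import socket

s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

# what "-spice addr=::" amounts to: dual-stack, v4-mapped addresses allowed
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
print(s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY))  # -> 0

# what spice cannot request: a genuinely v6-only listener
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
print(s.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY))  # -> 1

s.close()
```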

points 2 + 3 combined have several unfortunate consequences:

1. you can't have separate display network and mixed ipv4 + ipv6
spice-server at once -- IIRC that's most of current RHEV deployments

2. you can't have spice-server listening on multiple ipv6 addresses at
once - that's requirement for essentially any serious ipv6 use

3. even if 1. and 2. would be resolved, the existing VM displays would
break if there is any network configuration change (that is much more
probable event than in ipv4 because of different network configuration
mechanisms as far as I know -- but Pavel could correct me here).
Using SO_BINDTODEVICE and bind to :: (ipv6 wildcard) sounds like the
best option here but there is a catch: this option is supported by linux
kernel for root-processes only but qemu actually runs under regular
qemu user...

All of these make current spice ipv6 support limited to quite small
subset of real-world use cases, a kind of proof-of-concept one. Pavel
(CCd) could surely throw in several more examples of what can go wrong.

Given all of these, I'd leave the spice part as-is for now, until it is at
least decided how these details should look...

David

 
 3) Make the connection with ovirt-engine(management connection) also ipv6-
 compatible. This requires modifying both XMLRPC and RestAPI servers to make
 them able to bind to the ipv6 address of the host. Also we need another
 boolean option use_ipv6 to indicate what is the ip version of the  
 management
 connection.
 
 4) Regarding the register process, everything follows the current workflow,
 except that if we register over ipv6, we should first set use_ipv6 to
 True; the XMLRPC and RestAPI servers will then be listening on the ipv6
 address after vdsm restarts.
 
 5) The management connection is supposed to be able to switch between ipv4
 and ipv6 on the fly (when the host is under maintenance and the host's
 network is configured appropriately). This requires another vdsm API.
 
 Suggestions are always welcome. Thanks.
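The use_ipv6 toggle proposed in point 3 might end up as a vdsm.conf entry along these lines (purely hypothetical; the section and key names below are invented for illustration, not taken from vdsm):

```ini
# /etc/vdsm/vdsm.conf -- hypothetical sketch of the proposed toggle
[addresses]
use_ipv6 = true
management_ip = 2001:db8::10
```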
 

-- 

David Jaša, RHCE

SPICE QE based in Brno
GPG Key: 22C33E24 
Fingerprint: 513A 060B D1B4 2A72 7F0D 0278 B125 CD00 22C3 3E24



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [vdsm] spicec + vncviewer query

2012-06-05 Thread David Jaša
Anil Vettathu wrote on Tue, 05. 06. 2012 at 15:36 +0530:
 
 Hi,
 
 I was able to get the display details of both spice and vnc
 using vdsClient. Now, how can I connect to the console using spicec or
 virt-viewer?
 
 spicec is failing with the following log.
 
 1338808977 INFO [32318:32318] Application::main: command line: spicec
 --host 192.165.210.136 --port 5900 --secure-port 5901 --ca-file
 ca-cert.pem
 1338808977 INFO [32318:32318] init_key_map: using evdev mapping
 1338808979 INFO [32318:32318] MultyMonScreen::MultyMonScreen:
 platform_win: 77594625
 1338808979 INFO [32318:32318] GUI::GUI:
 1338808979 INFO [32318:32318] ForeignMenu::ForeignMenu: Creating a
 foreign menu connection /tmp/SpiceForeignMenu-32318.uds
 1338808979 INFO [32318:32319] RedPeer::connect_unsecure: Connected to
 192.165.210.136 5900
 1338808979 INFO [32318:32319] RedPeer::connect_secure: Connected to
 192.165.210.136 5901
 1338808979 WARN [32318:32319] RedChannel::run: connect failed 7

This indicates authentication failure. Have you set the ticket via
vdsClient for spice, too?
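For reference, setting the ticket and reconnecting might look roughly like this (the vmId placeholder, the password, and the 120-second validity are illustrative; check vdsClient --help and spicec --help on your version for the exact syntax):

```shell
# Set a one-time SPICE password ("ticket") for the VM via vdsm,
# then pass the same password to the client when connecting.
vdsClient -s 0 setVmTicket <vmId> secret 120     # ticket valid 120 s
spicec --host 192.165.210.136 --port 5900 --secure-port 5901 \
       --ca-file ca-cert.pem --password secret
```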

David

 
 
 virt-viewer is failing due to authentication even though I use a
 password set by vmticket.
 
 Please note that the VMs are managed by ovirt.
 Is it mandatory to use ovirt to connect to VM consoles?
 Can someone guide me?
 
 
 Thanks,
 Anil



Re: [vdsm] VDSM API/clientIF instance design issue

2012-05-30 Thread David Jaša
Do I get it right that the host side of MOM communicates with the guest
side over a network interface? If so, why isn't it following the best
practices of the qemu/kvm world, and why doesn't it communicate over its
own virtio-serial port?
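For reference, the virtio-serial transport mentioned here is defined in the libvirt domain XML as a channel device, along these lines (the socket path and channel name are illustrative; this mirrors how qemu-guest-agent channels are configured):

```xml
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/mom-vm1.sock'/>
  <target type='virtio' name='org.example.mom.0'/>
</channel>
```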

David


Mark Wu wrote on Wed, 30. 05. 2012 at 22:49 +0800:
 Hi Guys,
 
 Recently, I have been working on integrating MOM into VDSM. MOM needs
 to use the VDSM API to interact with it. But currently, using the vdsm
 API requires an instance of clientIF. Passing clientIF to MOM is not a
 good choice since it's a vdsm-internal object. So I try to remove the
 parameter 'cif' from the interface definition and instead access the
 globally unique clientIF instance in API.py.
 
 To get the instance of clientIF, I add a decorator to clientIF to
 turn it into a singleton. Actually, clientIF has been working as a
 global single instance already. We just don't have an interface to get
 it, and so pass it as a parameter instead. I think using a singleton to
 get the instance of clientIF is cleaner.
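The decorator approach described above can be sketched as follows (a minimal illustration; the ClientIF class and its attribute are stand-ins, not vdsm's actual code):

```python
def singleton(cls):
    """Class decorator: constructing cls always yields the same instance."""
    instances = {}

    def get_instance(*args, **kwargs):
        # First call constructs the instance; later calls reuse it.
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance


@singleton
class ClientIF:                   # stand-in for vdsm's real clientIF
    def __init__(self):
        self.vm_container = {}    # illustrative internal state


first = ClientIF()
second = ClientIF()               # same object as `first`
```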
 
 Dan and Saggi already gave some comments in 
 http://gerrit.ovirt.org/#change,4839  Thanks for the review! But I 
 think we need more discussion on it, so I post it here because gerrit 
 is not the appropriate place to discuss a design issue.
 
 Thanks !
 Mark.
 



Re: [vdsm] about migration

2012-02-01 Thread David Jaša
wangxiaofan wrote on Wed, 01. 02. 2012 at 07:48 +0800:
 Hi,
 To do migration, why does vdsm/libvirt require a DNS server or
 hostnames in the hosts file? Is there any way to do migration directly
 with an IP address?

If you address your hosts via FQDN, these requirements apply. When you
address them by their IP addresses, the DNS requirements should go away.
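With libvirt directly, for instance, a live migration addressed purely by IP can look like this (the addresses and domain name are placeholders; under ovirt/vdsm the migration URI is built for you):

```shell
# Live-migrate domain "myvm" to host 192.0.2.20, addressing both the
# control connection and the migration stream by IP only:
virsh migrate --live myvm qemu+tcp://192.0.2.20/system tcp://192.0.2.20
```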

David

