[vdsm] oVirt February 2014 Updates
1. Releases
- oVirt 3.3.3 was released early in the month: http://www.ovirt.org/OVirt_3.3.3_release_notes
- oVirt 3.3.4 is about to be released: http://www.ovirt.org/OVirt_3.3.4_release_notes
- oVirt 3.4.0 is about to be released!

2. Events
- Leonardo Vaz is organizing oVirt attendance at FISL15, the largest FOSS conference in Latin America, which will happen from the 7th to the 10th of May in Porto Alegre, Brazil: http://softwarelivre.org/fisl15
- Allon Mureinik gave a presentation on DR with oVirt at devconf.cz: http://www.slideshare.net/AllonMureinik/dev-conf-ovirt-dr
- oVirt workshop in Korea, slides (Korean): http://www.slideshare.net/rogan/20140208-ovirtkorea-01 https://www.facebook.com/groups/ovirt.korea
- Rogan also presented on oVirt integration with OpenStack at OpenStack Day in Korea: http://alturl.com/m3jnx
- Pat Pierson posted on basic network setup: http://izen.ghostpeppersrus.com/setting-up-networks/
- Fosdem 2014 sessions (slides and videos) are at http://www.ovirt.org/FOSDEM_2014 - and some from Infrastructure.Next Ghent the week after Fosdem.

3. oVirt Activity (software)
- oVirt Jenkins plugin by Dustin Kut Moy Cheung to control VM slaves managed by oVirt/RHEV: https://github.com/thescouser89/ovirt-slaves-plugin
- Opaque oVirt/RHEV/Proxmox client and source code released: https://play.google.com/store/apps/details?id=com.undatech.opaque
- Great to see the NUMA push from HP: http://www.ovirt.org/Features/NUMA_and_Virtual_NUMA http://www.ovirt.org/Features/Detailed_NUMA_and_Virtual_NUMA

4. oVirt Activity (blogs, presentations)
- oVirt has been accepted as a mentoring project for Google Summer of Code 2014.
- Oved Ourfali posted on importing Glance images as oVirt templates: http://alturl.com/h7xid
- v2v has seen many active discussions. Here's a post by Jon Archer on how to import a regular KVM image into oVirt or RHEV: http://jonarcher.info/2014/02/import-regular-kvm-image-ovirt-rhev/
- Great reviews on amazon.com for "Getting Started with oVirt 3.3": http://alturl.com/5rk2p
- oVirt 3.3 Deep Dive slides (Chinese): http://www.slideshare.net/mobile/johnwoolee/ovirt-deep-dive#
- oVirt intro video (Russian): http://alturl.com/it546
- How to install oVirt 3.3 on CentOS 6.5: http://www.youtube.com/watch?v=5i5ilSKsmbo

5. Related
- NetApp GA'd their Virtual Storage Console for RHEV, which is implemented as an oVirt UI plugin (and then some): http://captainkvm.com/2014/02/vsc-for-rhev-is-ga-today/#more-660

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [Users] NAT configuration
On 02/05/2014 09:27 AM, Sven Kieske wrote:

Well, I just tested this, and I can't connect to virsh with this information. I guess the mentioned user vdsm@rhevh might not be the actual one used in 3.3.2 anymore? (The mail is from 2012 and mentions RHEV, so...)

well, that's a question for vdsm-devel

Or can't libvirt manage multiple authenticated users? Because I registered my own using:

    saslpasswd2 -a libvirt USERNAME

which still works.

On 05.02.2014 09:15, Sven Kieske wrote:

It's actually step 5 :)

On 05.02.2014 07:50, Itamar Heim wrote:

(step 6 explains where the user/password for libvirt/virsh are.)

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
[vdsm] oVirt January 2014 Updates
1. VERSIONS
- oVirt 3.3.3 is in its last phases: http://www.ovirt.org/OVirt_3.3.3_release_notes
- oVirt 3.4, with another slew of features, is getting into test day, beta, etc.: http://red.ht/1eo9TiS

2. WORKSHOPS
- Fosdem - the oVirt stand was packed, as was the virt & IaaS devroom, with many oVirt talks. More details next time.
- More oVirt talks at cfgmgmtcamp and Infrastructure.Next this week, including:
  -- Hervé Leclerc, "How we build a cloud with CentOS, oVirt, and Cisco-UCS", Wednesday February 5th at Infrastructure.Next Ghent: http://bit.ly/1fjTJVC
  -- oVirt Node being used as a Discovery Node in The Foreman project, talk at cfgmgmtcamp, February 3rd: http://bit.ly/1gAnneI
- oVirt Korea group meeting this Saturday in Seoul. Register here: http://onoffmix.com/event/23134
- Open Virtualization Workshop in Tehran (26-27 February 2014) and Isfahan (5-6 March 2014): http://t.co/9PR3BxOnpd

3. USING oVirt
- More details on Wind River using oVirt: http://bit.ly/1i2LtLI
- New case study: Nieuwland Geo-Informati http://www.ovirt.org/Nieuwland_case_study
- oVirt Node being used as a Discovery Node in The Foreman project, talk at cfgmgmtcamp, February 3rd: http://bit.ly/1gAnneI

4. USERS
- Double the number of emails on the users mailing list - up from 800 to 1600 this month!
- Karli updated how to use SPICE with oVirt from OS X: http://www.ovirt.org/SPICE_Remote-Viewer_on_OS_X
- Opaque (SPICE Android oVirt client) v1.1.8 beta released: https://plus.google.com/communities/116099119712127782216
- How to deploy the Windows guest agent: http://bit.ly/1kr5tJo
- Andrew Lau posted on working through Hosted Engine with the 3.4.0 beta: http://bit.ly/1eobzZw
- Deep Dive into oVirt 3.3.3 by Li Jiansheng (Chinese): http://slidesha.re/1eFWQ8G
- Matthew Bingham posted a series of videos on setting up oVirt:
  Install oVirt 3.3.2: http://www.youtube.com/watch?v=GWT-m-oWSjQ
  Optional export NFS mount for oVirt 3.3.2: http://www.youtube.com/watch?v=MLbPln5-2jE
  Initial web GUI steps for storage in oVirt 3.3.2: http://www.youtube.com/watch?v=dL0_03ZICw4
  Download and upload client ISO to ISO_DOMAIN for oVirt 3.3.2: http://www.youtube.com/watch?v=pDzTHFSmvGE

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [Engine-devel] Gerrit NEW Change Screen
On 01/26/2014 12:38 PM, Eli Mesika wrote:

----- Original Message -----
From: Roy Golan rgo...@redhat.com
To: Itamar Heim ih...@redhat.com, vdsm-devel@lists.fedorahosted.org, engine-devel engine-de...@ovirt.org
Sent: Sunday, January 26, 2014 9:12:38 AM
Subject: Re: [Engine-devel] Gerrit NEW Change Screen

On 01/16/2014 12:08 AM, Itamar Heim wrote:

with gerrit 2.8, there is a new change screen. it's not enabled by default (yet); please use it and see what you think.
to enable, go to settings (click the top-right arrow next to your name and choose settings), select preferences, and set "Change View:" to "New Screen".

I found that the new window truncates the code. See, for example, line 115 in
http://gerrit.ovirt.org/#/c/23428/2/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/RestartVdsCommand.java,cm
Switching to the old view shows the whole line, while in the new one it is truncated.

open a bug on gerrit upstream?

Thanks, Itamar

___
Engine-devel mailing list
engine-de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel

+1. It needs some time to get used to the new layout, but overall it's way better to have the patch dependencies (Related), and better without the cluttered patch-sets view. Reminds me of gitk somewhat...

___
Engine-devel mailing list
engine-de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [Engine-devel] Gerrit NEW Change Screen
On 01/26/2014 01:02 PM, Eli Mesika wrote:

----- Original Message -----
From: Itamar Heim ih...@redhat.com
To: Eli Mesika emes...@redhat.com
Cc: vdsm-devel@lists.fedorahosted.org, engine-devel engine-de...@ovirt.org
Sent: Sunday, January 26, 2014 12:39:19 PM
Subject: Re: [Engine-devel] Gerrit NEW Change Screen

On 01/26/2014 12:38 PM, Eli Mesika wrote:

----- Original Message -----
From: Roy Golan rgo...@redhat.com
To: Itamar Heim ih...@redhat.com, vdsm-devel@lists.fedorahosted.org, engine-devel engine-de...@ovirt.org
Sent: Sunday, January 26, 2014 9:12:38 AM
Subject: Re: [Engine-devel] Gerrit NEW Change Screen

On 01/16/2014 12:08 AM, Itamar Heim wrote:

with gerrit 2.8, there is a new change screen. it's not enabled by default (yet); please use it and see what you think.
to enable, go to settings (click the top-right arrow next to your name and choose settings), select preferences, and set "Change View:" to "New Screen".

I found that the new window truncates the code. See, for example, line 115 in
http://gerrit.ovirt.org/#/c/23428/2/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/RestartVdsCommand.java,cm
Switching to the old view shows the whole line, while in the new one it is truncated.

open a bug on gerrit upstream?

done

btw, I really like the new feature on this screen: the notification in the lower-right corner that someone else updated this change since I opened it on my screen.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] ovirt storage creation problems
freed to object EngineLock [exclusiveLocks= key: localhost.localdomain:/192.168.0.108 value: STORAGE_CONNECTION , sharedLocks= ]

the vdsm log was:

Thread-17::DEBUG::2014-01-26 17:17:41,579::BindingXMLRPC::974::vds::(wrapper) client [192.168.0.108]::call getHardwareInfo with () {}
Thread-17::DEBUG::2014-01-26 17:17:41,614::BindingXMLRPC::981::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'HP Pavilion g4 Notebook PC', 'systemSerialNumber': '5CD2418N50', 'systemFamily': '103C_5335KV G=N L=CON B=HP S=PAV X=Null', 'systemVersion': '0796100056160', 'systemUUID': '32444335-3134-4E38-3530-843497791B18', 'systemManufacturer': 'Hewlett-Packard'}}
Thread-19::DEBUG::2014-01-26 17:18:27,079::task::579::TaskManager.Task::(_updateState) Task=`0649dc45-6e00-468c-8a60-94680cabb980`::moving from state init -> state preparing
Thread-19::INFO::2014-01-26 17:18:27,079::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-19::INFO::2014-01-26 17:18:27,079::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-19::DEBUG::2014-01-26 17:18:27,080::task::1168::TaskManager.Task::(prepare) Task=`0649dc45-6e00-468c-8a60-94680cabb980`::finished: {}
Thread-19::DEBUG::2014-01-26 17:18:27,080::task::579::TaskManager.Task::(_updateState) Task=`0649dc45-6e00-468c-8a60-94680cabb980`::moving from state preparing -> state finished
Thread-19::DEBUG::2014-01-26 17:18:27,080::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-19::DEBUG::2014-01-26 17:18:27,080::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-19::DEBUG::2014-01-26 17:18:27,080::task::974::TaskManager.Task::(_decref) Task=`0649dc45-6e00-468c-8a60-94680cabb980`::ref 0 aborting False

On Sun, Jan 26, 2014 at 3:08 PM, Itamar Heim ih...@redhat.com wrote:

On 01/25/2014 12:17 PM, aditya mamidwar wrote:

Hey, I am facing the following problem while creating a new storage domain in oVirt. PFA screenshot and log file details.
--
-Aditya Mamidwar

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel

please attach the vdsm log.
storage folks - can we get something more tangible to show to the user in the UI? hopefully based on the engine log, which has a 477 error:
"The connection with details 192.168.0.110:/var/lib/exports/data failed because of error code 477 and error message is: problem while trying to mount target"
--
-Aditya Mamidwar

please try to locate the relevant part in the vdsm log; it isn't the one you attached.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
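When asked to "locate the relevant part in the vdsm log", it helps to filter a large log by thread or task ID. The log lines quoted above follow a fixed `::`-delimited layout (thread, level, timestamp, module, line number, logger, message), so a few lines of Python can slice a log by task. This is an illustrative sketch, not a vdsm tool; the field layout is inferred from the excerpt above.

```python
def parse_vdsm_line(line):
    """Split one vdsm log line into its ::-delimited fields.

    Layout (inferred from the log excerpt above):
    thread::LEVEL::timestamp::module::lineno::logger::message
    The message itself may contain '::', so split at most 6 times.
    """
    parts = line.split("::", 6)
    if len(parts) != 7:
        return None
    thread, level, timestamp, module, lineno, logger, message = parts
    if not lineno.isdigit():
        return None
    return {"thread": thread, "level": level, "timestamp": timestamp,
            "module": module, "lineno": int(lineno),
            "logger": logger, "message": message}


def grep_task(lines, task_id):
    """Collect the parsed log lines that mention a given Task id."""
    return [p for p in map(parse_vdsm_line, lines)
            if p is not None and task_id in p["message"]]
```

Feeding the whole log file through `grep_task(open("vdsm.log"), "0649dc45-...")` would pull out just the state transitions of one task, such as the repoStats task shown above.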
Re: [vdsm] [Users] The machine type of one cluster
On 01/25/2014 04:23 AM, Kewei Yu wrote:

Hi all:
There is a machine type per cluster; it decides which QEMU machine type will be used. When we add the first host into a cluster, a default machine type is shown. We can edit the engine DB value to set the machine type. I just want to know how the cluster chooses the default machine type. Is it decided by VDSM? By QEMU? Or is it only a fixed value in the engine's DB?
Regards,
Kewei

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel

basically, for Fedora it's 'pc', which may have some live migration issues between different versions in the same cluster. For .el6 it's 'rhel63/rhel64/rhel65/etc.', which is a stable definition of the emulation mode for the cluster (i.e., even a .el7 host should live migrate to .el6 if we specify its emulation mode as rhel65, etc.).
The engine defines the expected emulation mode per cluster. VDSM reports it from libvirt, which gets it from QEMU, so the engine can check that the host is a match. If the first host in the cluster is Fedora, it will be set to 'pc'; if it's .el6, it will be set to the 'rhelxx' option.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
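The selection described above - the first host's reported machine types decide the cluster default, 'pc' on Fedora and a 'rhelXY' type on .el6 - can be modeled roughly as follows. This is a toy sketch of the behaviour described in the answer, not the engine's actual code, and the lexical version sort is a simplification.

```python
def choose_cluster_machine_type(machine_types):
    """Pick a cluster emulation mode from the machine types the first
    host reports (via vdsm <- libvirt <- qemu).

    Toy model of the selection described above: prefer the newest
    rhelX.Y.0 type when present (an .el6 host), otherwise fall back to
    the generic 'pc' type (a Fedora host). Lexical sorting is good
    enough for single-digit rhel6.x minor versions.
    """
    rhel = sorted(t for t in machine_types if t.startswith("rhel"))
    if rhel:
        return rhel[-1]
    return "pc"
```

For example, a host reporting `["pc", "rhel6.3.0", "rhel6.4.0", "rhel6.5.0"]` would give the cluster the stable `rhel6.5.0` emulation mode, which is what lets a newer host live migrate VMs to older hosts in the same cluster.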
Re: [vdsm] [Engine-devel] Copy reviewer scores on trivial rebase/commit msg changes
On 01/19/2014 02:48 AM, Dan Kenigsberg wrote:

On Sat, Jan 18, 2014 at 01:48:52AM +0200, Itamar Heim wrote:

I'd like to enable these - comments welcome:

1. label.Label-Name.copyAllScoresOnTrivialRebase
If true, all scores for the label are copied forward when a new patch set is uploaded that is a trivial rebase. A new patch set is considered a trivial rebase if the commit message is the same as in the previous patch set and if it has the same code delta as the previous patch set. This is the case if the change was rebased onto a different parent. This can be used to enable sticky approvals, reducing turn-around for trivial rebases prior to submitting a change. Defaults to false.

2. label.Label-Name.copyAllScoresIfNoCodeChange
If true, all scores for the label are copied forward when a new patch set is uploaded that has the same parent commit as the previous patch set and the same code delta as the previous patch set. This means only the commit message is different. This can be used to enable sticky approvals on labels that only depend on the code, reducing turn-around if only the commit message is changed prior to submitting a change. Defaults to false.

https://gerrit-review.googlesource.com/Documentation/config-labels.html

I think that the time saved by this copying is worth the dangers. But is there a way to tell a human ack from an ack auto-copied by these options? It's not so fair to blame X with "X approved this patch" when he only approved a very similar version thereof.

we'll find out when we enable it.

Assuming that a clean rebase can do no wrong is sometimes wrong (a recent example is detailed by Nir's http://gerrit.ovirt.org/21649/ )

of course it can do wrong, but that's usually the exception.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [Engine-devel] Copy reviewer scores on trivial rebase/commit msg changes
On 01/19/2014 11:49 PM, Yair Zaslavsky wrote:

----- Original Message -----
From: Itamar Heim ih...@redhat.com
To: Dan Kenigsberg dan...@redhat.com
Cc: engine-devel engine-de...@ovirt.org, vdsm-devel@lists.fedorahosted.org
Sent: Sunday, January 19, 2014 10:20:33 AM
Subject: Re: [Engine-devel] Copy reviewer scores on trivial rebase/commit msg changes

On 01/19/2014 02:48 AM, Dan Kenigsberg wrote:

On Sat, Jan 18, 2014 at 01:48:52AM +0200, Itamar Heim wrote:

I'd like to enable these - comments welcome:

1. label.Label-Name.copyAllScoresOnTrivialRebase
If true, all scores for the label are copied forward when a new patch set is uploaded that is a trivial rebase. A new patch set is considered a trivial rebase if the commit message is the same as in the previous patch set and if it has the same code delta as the previous patch set. This is the case if the change was rebased onto a different parent. This can be used to enable sticky approvals, reducing turn-around for trivial rebases prior to submitting a change. Defaults to false.

2. label.Label-Name.copyAllScoresIfNoCodeChange
If true, all scores for the label are copied forward when a new patch set is uploaded that has the same parent commit as the previous patch set and the same code delta as the previous patch set. This means only the commit message is different. This can be used to enable sticky approvals on labels that only depend on the code, reducing turn-around if only the commit message is changed prior to submitting a change. Defaults to false.

https://gerrit-review.googlesource.com/Documentation/config-labels.html

I think that the time saved by this copying is worth the dangers. But is there a way to tell a human ack from an ack auto-copied by these options? It's not so fair to blame X with "X approved this patch" when he only approved a very similar version thereof.

I think the ideas are good. Regarding a way to mark whether this is a human ack or not - can the copying process post a comment that copying occurred?

we'll see when it's enabled. I plan to do that Tuesday if no strong objections arise.

we'll find out when we enable it.

Assuming that a clean rebase can do no wrong is sometimes wrong (a recent example is detailed by Nir's http://gerrit.ovirt.org/21649/ )

of course it can do wrong, but that's usually the exception.

___
Engine-devel mailing list
engine-de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
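The two options quoted above both reduce to comparing three properties of successive patch sets: commit message, code delta, and parent commit. As a plain-Python restatement of the quoted rules (the dict fields are illustrative, not Gerrit's internals):

```python
def classify_new_patchset(old, new):
    """Return which score-copy rule (if any) applies to a new patch set.

    old/new are dicts with 'message' (commit message), 'delta' (code
    diff), and 'parent' (parent commit) - illustrative field names only.
    """
    same_msg = old["message"] == new["message"]
    same_delta = old["delta"] == new["delta"]
    same_parent = old["parent"] == new["parent"]
    if same_msg and same_delta and not same_parent:
        return "trivial-rebase"   # copyAllScoresOnTrivialRebase applies
    if same_delta and same_parent:
        return "no-code-change"   # copyAllScoresIfNoCodeChange applies
    return "real-change"          # scores are not copied forward
```

This also makes Dan's concern concrete: in both copied cases the reviewer never saw the exact commit that ends up merged, only one with the same code delta.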
Re: [vdsm] [Block Storage] What about the shrinking disk in the vm?
On 01/13/2014 04:19 AM, Qixiaozhen wrote:

Hi all,
VDSM has implemented the thin provisioning feature in its released version.

Volumes have 2 major properties:
1. type - how the bits are written to the underlying volume.
   - raw - means simple raw access; a write to offset X will be written at offset X
   - qcow2 - means the storage will be accessed as a qcow2 image, with all that this entails
2. allocation - how VDSM should allocate the storage.
   - preallocated - VDSM will try its best to guarantee that all the storage that was requested is allocated right away. Some storage configurations may render preallocation pointless.
   - sparse/thin provisioned - space will be allocated for the volume as needed

As is known to all, thin provisioning means allocating the disk space once the instance writes data to an area of the volume for the first time. The size of the disk keeps increasing even after we have deleted files in the instance. With the latest feature of qemu, the disk can be shrunk. Ref link:
http://dustymabe.com/2013/06/11/recover-space-from-vm-disk-images-by-using-discardfstrim/

Does VDSM have any plan for this?

we have a long list of features folks are interested in. Please open a Bugzilla RFE if you are interested in this feature. Of course, we would be more than happy if you would like to try and collaborate on implementing it.
Please note that while this may work for NFS, it may require extra work for iSCSI, which uses an LV that may have to be shrunk as well.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
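The growth behaviour described above can be made concrete with a toy model (illustrative only, not vdsm code): blocks get backing storage on first write, a delete inside the guest releases nothing on the host, and only a discard/fstrim pass lets the image shrink.

```python
class ThinVolume:
    """Toy model of a thin-provisioned ('sparse') volume."""

    def __init__(self):
        self.allocated = set()  # block numbers backed by real storage

    def write(self, block):
        # the first write to a block allocates backing storage for it
        self.allocated.add(block)

    def guest_delete(self, block):
        # deleting a file inside the guest frees nothing on the host:
        # the hypervisor cannot tell the block is no longer needed
        pass

    def discard(self, blocks):
        # a guest-issued discard (fstrim) tells the host which blocks
        # can be released, so the image can finally shrink
        self.allocated.difference_update(blocks)

    @property
    def allocated_blocks(self):
        return len(self.allocated)
```

With discard support wired through (as in the qemu feature linked above), the last step is what would allow the image to shrink; as the reply notes, on iSCSI the underlying LV would additionally need to be reduced.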
[vdsm] Gerrit NEW Change Screen
with gerrit 2.8, there is a new change screen. it's not enabled by default (yet); please use it and see what you think.
to enable, go to settings (click the top-right arrow next to your name and choose settings), select preferences, and set "Change View:" to "New Screen".

Thanks, Itamar

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] gerrit upgrade
gerrit upgrade will happen in 10-15 minutes. only a short service interruption is expected (but the look and feel will change)

On 01/05/2014 10:29 AM, Itamar Heim wrote:

with gerrit 2.8 released a month ago, having our fix for "Configurable external robots.txt file" (thanks Juan), I'm planning to upgrade to 2.8 next Monday (January 13th).
other noteworthy changes - worth reading those with an asterisk (*).

gerrit 2.7[1]
- New /a/tools URL. This allows users to download the commit-msg hook via the command line if the Gerrit server requires authentication globally.
- Gerrit Trigger Plugin in Jenkins: WARNING: Upgrading to 2.7 may cause the Gerrit Trigger Plugin in Jenkins to stop working. Please see the "New Stream Events global capability" section below.

gerrit 2.8[2]
* New change screen with a completely redesigned UI, fully using the REST API. Users can choose which one to use in their personal preferences: the site default, or explicitly the old or new one.
- Secondary indexing with Lucene and Solr.
- Lots of new REST API endpoints.
- New UI extension and JavaScript API for plugins.
- New build system using Facebook's Buck.
- New core plugin: Download Commands.
- Configurable external robots.txt file.
* Labels can be configured to copy scores forward to new patch sets if there is no code change.
* Labels can be configured to copy scores forward to new patch sets for trivial rebases.
* New button to cherry-pick the change to another branch.
* When issuing a rebase via the Web UI, the committer is now the logged-in user, rather than "Gerrit Code Review".

Thanks, Itamar

[1] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.7.html
[2] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.8.html

___
Infra mailing list
in...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] gerrit upgrade
On 01/13/2014 11:37 AM, Itamar Heim wrote:

gerrit upgrade will happen in 10-15 minutes. only a short service interruption is expected (but the look and feel will change)

upgrade done now (the backup of the instance took longer than expected).

On 01/05/2014 10:29 AM, Itamar Heim wrote:

with gerrit 2.8 released a month ago, having our fix for "Configurable external robots.txt file" (thanks Juan), I'm planning to upgrade to 2.8 next Monday (January 13th).
other noteworthy changes - worth reading those with an asterisk (*).

gerrit 2.7[1]
- New /a/tools URL. This allows users to download the commit-msg hook via the command line if the Gerrit server requires authentication globally.
- Gerrit Trigger Plugin in Jenkins: WARNING: Upgrading to 2.7 may cause the Gerrit Trigger Plugin in Jenkins to stop working. Please see the "New Stream Events global capability" section below.

gerrit 2.8[2]
* New change screen with a completely redesigned UI, fully using the REST API. Users can choose which one to use in their personal preferences: the site default, or explicitly the old or new one.
- Secondary indexing with Lucene and Solr.
- Lots of new REST API endpoints.
- New UI extension and JavaScript API for plugins.
- New build system using Facebook's Buck.
- New core plugin: Download Commands.
- Configurable external robots.txt file.
* Labels can be configured to copy scores forward to new patch sets if there is no code change.
* Labels can be configured to copy scores forward to new patch sets for trivial rebases.
* New button to cherry-pick the change to another branch.
* When issuing a rebase via the Web UI, the committer is now the logged-in user, rather than "Gerrit Code Review".

Thanks, Itamar

[1] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.7.html
[2] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.8.html

___
Infra mailing list
in...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [QE] 3.4.0 Release tracker
On 01/08/2014 10:46 AM, Sandro Bonazzola wrote:

Hi, as you may know, we're planning to build the oVirt 3.4.0 beta really soon and release 3.4.0 by the end of January.
A tracker bug (https://bugzilla.redhat.com/show_bug.cgi?id=1024889) has been created for this release.

The following is a list of the current blocker bugs with target 3.4.0:

Whiteboard  Bug ID   Summary
storage     1032686  [RFE] API to save OVF on any location
storage     1032679  [RFE] Single Disk Snapshots
network     987813   [RFE] report BOOTPROTO and BONDING_OPTS independent of netdevice.cfg

I don't think RFEs should be blocking the release by default. (we may want to discuss if we allow any RFE not making the feature freeze and stable branch a few extra days)

The following is a list of the bugs with target 3.4.0 not yet fixed:

Whiteboard   Bug ID   Summary
gluster      1008980  [oVirt] Option 'Select as SPM' available for a host in gluster-only mode of oVirt
gluster      1038988  Gluster brick sync does not work when host has multiple interfaces
i18n         1033730  [es-ES] need to revise the create snapshot translation
infra        870330   Cache records in memory
infra        904029   [engine-manage-domains] should use POSIX parameter form and aliases as values
infra        979231   oVirt Node Upgrade: Support N configuration
infra        986882   tar which is used by host-deploy is missing from fedora minimal installation
infra        995362   [RFE] Support firewalld
infra        1016634  Performance hit as a result of duplicate updates to VdsDynamic in VdsUpdateRuntimeInfo
infra        1023751  [RFE] Create Bin Overrider for application context files changes we do in JRS
infra        1023754  [RFE] add trigger to stop etl connection via engine db value.
infra        1023759  [RFE] re-implement SSO solution based on JRS new SSO interface
infra        1023761  [RFE] Build nightly JRS builds based on latest JRS version
infra        1028793  systemctl start vdsmd blocks if dns server unreachable
infra        1032682  Refactor authentication framework in engine
infra        1035844  [oVirt][infra] Add host/Reinstall radio button text not actionable
infra        1045350  REST error during VM creation via API
infra        1046611  [oVirt][infra] Device custom properties syntax check is wrong
integration  789040   [RFE] Log Collector should be able to run without asking questions
integration  967350   [RFE] port dwh installer to otopi
integration  967351   [RFE] port reports installer to otopi
integration  1023752  [RFE] add upstream support for Centos el6 arch.
integration  1024028  [RFE] add trigger to stop etl connection via engine db value.
integration  1028489  [RFE] pre-populate ISO DOMAIN with rhev-tools-setup.iso (or equiv)
integration  1028913  'service network start' sometimes fails during setup
integration  1037663  F20 - ovirt-log-collector: conflicts with file from package sos-3.0-3.fc20.noarch
integration  1039616  Setting shmmax on F19 is not enough for starting postgres
network      987832   failed to add ovirtmgmt bridge when the host has static ip
network      1001186  With AIO installer and NetworkManager enabled, the ovirtmgmt bridge is not properly configured
network      1010663  override mtu field allows only values up to 9000
network      1018947  Yum update to oVirt 3.3 from 3.1.0 fails on CentOS 6.4 with EPEL dependency on python-inotify
network      1037612  [oVirt][network][RFE] Add sync column to hosts sub tab under networks main tab
network      1040580  [RFE] Apply networks changes to multiple hosts
network      1040586  [RFE] Ability to configure network on multiple hosts at once
network      1043220  [oVirt][network][RFE] Add Security-Group support for Neutron based networks
network      1043230  Allow configuring Network QoS on host interfaces
network      1044479  Make an iproute2 network configurator for vdsm
network      1048738  [oVirt][network][RFE] Add subnet support for neutron based networks
network      1048740  [oVirt][network][RFE] Allow deleting Neutron based network (in Neutron)
network      1048880  [vdsm][openstacknet] Migration fails for vNIC using OVS + security groups
sla          994712   Remove underscores for pre-defined policy names
sla          1038616  [RFE] Support for hosted engine
storage      888711   PosixFS issues
storage      961532   [RFE] Handle iSCSI lun resize
storage      1009610  [RFE] Provide clear warning when SPM become inaccessible and needs fencing
storage      1034081  Misleading error message when adding an existing Storage Domain
storage      1038053  [RFE] Allow domain of multiple types in a single Data Center
storage      1045842  After deleting image failed ui display message: Disk gluster-test was successfully removed from...
ux           784779   [webadmin][RFE] Login page. Add a link to welcome page
ux           1014859  [RFE]
[vdsm] oVirt December 2013 Updates
Happy Holidays and Happy New Year! It has been quite a year for oVirt, and I'd like to take the opportunity to thank everyone in our community for making it a great year for the project.

- oVirt 3.3.2 was released: http://www.ovirt.org/OVirt_3.3.2_release_notes
- oVirt 3.4 is in the works.
- Many oVirt sessions planned for Fosdem, plus an oVirt booth - see you there! https://fosdem.org/2014/
- oVirt contributor shirts: Doron and Dave shipped some T-shirts to prominently active community members to recognize their contribution (and thanks to Eldan for the graphics): https://twitter.com/hleclerc/status/414357959188430848 https://twitter.com/ste_vincent/status/415502831702265856
- A new book, "Getting Started with oVirt 3.3", written by community member Alexey Lesovsky: http://www.packtpub.com/getting-started-with-ovirt-3-3/book or via safaribooksonline: http://alturl.com/fm7ug
- An Austrian company called IT Novum migrated an 1100-machine data center to 100% open source, including oVirt for management (from VMware). They chronicled the migration with the Twitter tag #OpenSourceDataCenter: https://twitter.com/search?q=%23OpenSourceDataCentersrc=typd
- James is trying to build a Docker image for oVirt engine: http://allthingsopen.com/2013/12/19/building-docker-images-on-fedora/
- Alon announced that experimental Gentoo support for ovirt-engine-3.3.1 is available: http://lists.ovirt.org/pipermail/users/2013-November/018156.html http://wiki.gentoo.org/wiki/OVirt
- iiordanov announced an Android SPICE client available for beta testing: http://lists.ovirt.org/pipermail/users/2013-December/018434.html
- Some nice how-tos added to the oVirt wiki by Nkesick (cybertimber2000):
  http://www.ovirt.org/index.php?title=How_to_create_a_Windows_8_Virtual_Machine
  http://www.ovirt.org/index.php?title=How_to_create_a_Windows_XP_Virtual_Machine
  http://www.ovirt.org/index.php?title=How_to_create_a_Windows_7_Virtual_Machine
  http://www.ovirt.org/index.php?title=How_to_create_a_Ubuntu_Virtual_Machine
  http://www.ovirt.org/index.php?title=How_to_create_a_Fedora_Virtual_Machine
- Just one of those nice emails: "Just wanted to express my deepest admiration for the progress of this project. ... my quest for upgrading from 3.1 to 3.2 ... This time around, ... the whole process _just worked_! All I can say is THANK YOU! Both to you developers of oVirt and a special thank you to dreyou for posting such a well-written manual. Thank you. Thank you. Thank you :)" http://lists.ovirt.org/pipermail/users/2013-December/018341.html

Blogs:
- Up and Running with oVirt 3.3: http://chenglimin.blog.51cto.com/8236854/1344083
- oVirt 3.3.2 hackery on Fedora 19, blog post by bderzhavets: http://alturl.com/68muq
- A new (dedicated?) oVirt blog: http://izen.ghostpeppersrus.com/
- Detailed steps for deploying/using oVirt (Japanese): http://alturl.com/ww25m
- Oved blogged about various ways to integrate with oVirt/RHEV (API, SDK, CLI, UI plugins, Scheduling API, VDSM hooks): part 1 - http://alturl.com/4sytx part 2 - http://alturl.com/qye5f
- Use oVirt LiveCD in a dual-LAN network environment (Chinese): http://www.cnblogs.com/sztsian/p/3439873.html

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
[vdsm] gerrit upgrade
with gerrit 2.8 released a month ago, having our fix for "Configurable external robots.txt file" (thanks Juan), I'm planning to upgrade to 2.8 next Monday (January 13th).
other noteworthy changes - worth reading those with an asterisk (*).

gerrit 2.7[1]
- New /a/tools URL. This allows users to download the commit-msg hook via the command line if the Gerrit server requires authentication globally.
- Gerrit Trigger Plugin in Jenkins: WARNING: Upgrading to 2.7 may cause the Gerrit Trigger Plugin in Jenkins to stop working. Please see the "New Stream Events global capability" section below.

gerrit 2.8[2]
* New change screen with a completely redesigned UI, fully using the REST API. Users can choose which one to use in their personal preferences: the site default, or explicitly the old or new one.
- Secondary indexing with Lucene and Solr.
- Lots of new REST API endpoints.
- New UI extension and JavaScript API for plugins.
- New build system using Facebook's Buck.
- New core plugin: Download Commands.
- Configurable external robots.txt file.
* Labels can be configured to copy scores forward to new patch sets if there is no code change.
* Labels can be configured to copy scores forward to new patch sets for trivial rebases.
* New button to cherry-pick the change to another branch.
* When issuing a rebase via the Web UI, the committer is now the logged-in user, rather than "Gerrit Code Review".

Thanks, Itamar

[1] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.7.html
[2] https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.8.html

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
[vdsm] oVirt Updates - November 2013
Case Studies
- Dave Neary just published his interview with Martin Goldstone and Gary Lloyd, system administrators at Keele University: http://www.ovirt.org/Keele_University_case_study (we'd be happy to have more case studies - please contact Dave if you are willing to do one; it's mostly just an interview with him)

oVirt 3.3
- 3.3.1, with a slew of updates, is finally out. http://www.ovirt.org/OVirt_3.3.1_release_notes
- 3.3.2 beta planned once we GA 3.3.1.
- sample scheduler plugins are available here: http://www.ovirt.org/External_Scheduler_Samples

oVirt 3.4
- planning is ongoing; please join and help make oVirt better http://lists.ovirt.org/pipermail/users/2013-October/017451.html (clearly, only items with devel owners will make it)

Conferences
- Einav presented at an oVirt booth at LISA 2013 (Washington, DC)
- The Fosdem virt & IaaS devroom CFP [1] is open till December 1st: http://bit.ly/1gMXdUH We also requested a stand/table/booth to demo oVirt (we'd love it if oVirt users visiting Fosdem would volunteer to help man the stand/demo)
- KVM Forum/CloudOpen sessions are available at http://www.ovirt.org/KVM_Forum_2013

Cross Distro
- guest agent packages are now available for openSUSE 12.3, 13.1 and Factory, and SLES 11 SP3 (thanks Vinzenz for pushing and René for testing)
- Ubuntu packages pushed by Zhou Zheng Sheng.
- How To: Get oVirt 3.2 All-In-One working on Scientific Linux 6.4 (with fixes!)
http://bit.ly/17BstPj

Related
- How to manage multiple nodes with oVirt (Japanese) http://lab.space-i.com/?p=1252
- oVirt Node has a new plugin, allowing it to be used for Foreman host discovery http://www.youtube.com/watch?v=V8TugleqF64
- how to use virt-sysprep with oVirt/RHEV http://bit.ly/1hy54Xx
- script to easily move VMs by name from an export domain https://github.com/ppiersonbt/ovfmaker
- some discussions of oVirt and Dell EqualLogic issues http://dell.to/1gN3TlX http://bit.ly/1cXrIH2
- Intro to oVirt on Fedora (Spanish) http://www.itrestauracion.com.ar/?p=1505
- Christophe Fergeau published libgovirt 0.3.0 (storage domains, cd-rom and certificate handling improvements) http://red.ht/HVl4U2
- helper script by Humble to print a list of VMs and their nic/mac/ip http://bit.ly/1ePpMjF
- Want to play with oVirt, Neutron and GRE tunneling? http://www.ovirt.org/OVirt_Neutron_GRE_Integration_-_How_To
- How To: Install rhev-agents on Scientific Linux 6x http://bit.ly/1cZRydo
- How To: Convert a server running on VMware ESXi 5x to oVirt 3.2 http://bit.ly/17x2X1O
Re: [vdsm] suggesting Yaniv Bronheim as ovirt-3.3 branch maintainer
On 10/13/2013 04:41 PM, Maor Lipchuk wrote:

+1

On 10/09/2013 02:55 PM, Dan Kenigsberg wrote:

Recently, the Vdsm branch ovirt-3.3 was rebased on a relatively stabilized vdsm-4.13.0. As discussed elsewhere, oVirt 3.3.1 is expected to be a relatively large micro version, and plenty of bug fixes are expected to be suggested to this branch.

Currently, Federico is responsible for merging patches for oVirt 3.3.1, with my assistance. I would like to nominate Yaniv to join us in this delicate task: on one hand we should fix as many problems as possible, but on the other hand we should avoid regressions and needless surprises - it is a micro version, after all.

Yaniv has a wide understanding of Vdsm internals, and I am certain he would know how to solve problems introduced by patches approved by him. Please grant him +2 and merge rights on the ovirt-3.3 branch of vdsm.

Dan.

___
Arch mailing list
a...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/arch

Yaniv was added as vdsm stable branch maintainer (together with danken and federico).
Re: [vdsm] Recent changes in vdsmd startup
On 10/10/2013 04:32 PM, Yaniv Bronheim wrote:

Hey everybody,

FYI, we recently merged a fix that changes the way vdsmd starts. Previously, the "service vdsmd start" command performed, in addition to its main logic, manipulation of related services - for example, configuring the libvirt service and restarting it. Now we try to avoid that, and only alert the user if a restart or other operation on related services is required.

So when you start vdsmd after a clean installation you will see:

~$ sudo service vdsmd restart
Shutting down vdsm daemon:
vdsm watchdog stop                        [  OK  ]
vdsm: not running                         [FAILED]
vdsm: Running run_final_hooks
vdsm stop                                 [  OK  ]
supervdsm start                           [  OK  ]
vdsm: Running run_init_hooks
vdsm: Running gencerts
hostname: Unknown host
vdsm: Running check_libvirt_configure
libvirt is not configured for vdsm yet
Perform 'vdsm-tool libvirt-configure' before starting vdsmd
vdsm: failed to execute check_libvirt_configure, error code 1
vdsm start                                [FAILED]

This asks you to run "vdsm-tool libvirt-configure". After running it you should see:

~$ sudo vdsm-tool libvirt-configure
Stopping libvirtd daemon:                 [  OK  ]
libvirt is not configured for vdsm yet
Reconfiguration of libvirt is done.
To start working with the new configuration, execute:
'vdsm-tool libvirt-configure-services-restart'

This will manage restarting of the following services: libvirtd, supervdsmd. After performing "vdsm-tool libvirt-configure-services-restart" you are ready to start vdsmd again as usual.

All these vdsm-tool commands require root privileges; otherwise the tool will alert and stop the operation. Under systemd, the errors/output can be watched in /var/log/messages.

Thanks,
Yaniv Bronhaim.

How will this affect the following use cases?
1. I added a new host to the system via the engine. At the end of the installation I expect the host to work without the admin having to do any operation on the host.
2. I updated a host to the latest vdsm version via the engine. At the end of the update I expect the host to be updated without the admin having to do any operation on the host.

Thanks,
   Itamar
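The behavioral change Yaniv describes - service start no longer configures implicitly, it only checks and tells the admin what to run - can be sketched roughly as follows. This is illustrative Python, not vdsm's actual init code; the class and method names are made up for the sketch.

```python
class NotConfigured(Exception):
    """Raised when start is attempted before libvirt is configured."""


class Service:
    """Toy model contrasting the old and new vdsmd start semantics."""

    def __init__(self):
        self.libvirt_configured = False
        self.running = False

    def configure(self):
        # corresponds to running "vdsm-tool libvirt-configure"
        self.libvirt_configured = True

    def start_old(self):
        # pre-change behavior: starting silently configured libvirt for you
        if not self.libvirt_configured:
            self.configure()
        self.running = True

    def start_new(self):
        # post-change behavior: refuse to start, and tell the admin what to run
        if not self.libvirt_configured:
            raise NotConfigured(
                "Perform 'vdsm-tool libvirt-configure' before starting vdsmd")
        self.running = True
```

The point of the split is that configuration becomes an explicit, auditable admin (or host-deploy) action instead of a side effect of service start.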
Re: [vdsm] Recent changes in vdsmd startup
On 10/10/2013 05:38 PM, Yaniv Bronheim wrote:

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Yaniv Bronheim ybron...@redhat.com
Cc: VDSM Project Development vdsm-devel@lists.fedorahosted.org, arch a...@ovirt.org
Sent: Thursday, October 10, 2013 5:24:40 PM
Subject: Re: Recent changes in vdsmd startup

[...]

How will this affect the following use cases?
1. I added a new host to the system via the engine. At the end of the installation I expect the host to work without the admin having to do any operation on the host.
2. I updated a host to the latest vdsm version via the engine. At the end of the update I expect the host to be updated without the admin having to do any operation on the host.

Of course it shouldn't affect any part of the deploy process. If using host-deploy, the host-deploy process should take care of stopping, starting and any other management that may be required before starting vdsmd, and it does take care of that. The output I copied above is relevant only if a user tries to install and start vdsmd manually.

Great. So how does backward compatibility work? I have a 3.2 engine, and I deploy the latest vdsm due to some bug fixes (i.e., I didn't get the new host-deploy).
Re: [vdsm] Recent changes in vdsmd startup
On 10/10/2013 07:38 PM, Alon Bar-Lev wrote:

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Yaniv Bronheim ybron...@redhat.com
Cc: arch a...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Thursday, October 10, 2013 7:37:14 PM
Subject: Re: [vdsm] Recent changes in vdsmd startup

[...]

Great. So how does backward compatibility work? I have a 3.2 engine, and I deploy the latest vdsm due to some bug fixes (i.e., I didn't get the new host-deploy).

This was already supported in the last iteration. The init.d and systemd scripts support a "reconfigure" verb that has been executed ever since vdsm-bootstrap; these are kept for backward compatibility.

So what happens to a 3.2 engine with this new vdsm, without this patch? http://gerrit.ovirt.org/20102
Re: [vdsm] Recent changes in vdsmd startup
On 10/10/2013 07:43 PM, Alon Bar-Lev wrote:

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Alon Bar-Lev alo...@redhat.com
Cc: Yaniv Bronheim ybron...@redhat.com, arch a...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Thursday, October 10, 2013 7:39:35 PM
Subject: Re: [vdsm] Recent changes in vdsmd startup

[...]

So what happens to a 3.2 engine with this new vdsm, without this patch? http://gerrit.ovirt.org/20102

This patch is just an adjustment to whatever Yaniv plans now. Up until now, host-deploy tried to execute "vdsm-tool libvirt-configure", and if that fails it tries the old way, as I described above. Now host-deploy will be adjusted to call a single verb to configure whatever needs to be configured by vdsm.

So what happens if the vdsm on the host is an older vdsm?

Waiting for the interface of vdsm-tool to stabilize before attempting a fix. 3.2, 3.1 and 3.0 use the old reconfigure method; they do not use vdsm-tool.
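The try-new-then-fall-back behavior described here could look roughly like this. It is a sketch with hypothetical function names; the command names come from the thread, and the `run` parameter is injectable purely so the flow can be exercised without a real host.

```python
import subprocess


def configure_libvirt_for_vdsm(run=subprocess.check_call):
    """Configure libvirt for vdsm, preferring the new vdsm-tool verb.

    On hosts whose vdsm predates vdsm-tool (3.0-3.2), fall back to the
    legacy init-script "reconfigure" verb kept for backward compatibility.
    Treating any lookup/exit failure as "host too old" is a simplification
    for illustration.
    """
    try:
        run(["vdsm-tool", "libvirt-configure"])
        return "vdsm-tool"
    except (OSError, subprocess.CalledProcessError):
        # older vdsm: no vdsm-tool, use the reconfigure service verb
        run(["service", "vdsmd", "reconfigure"])
        return "reconfigure"
```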
Re: [vdsm] Recent changes in vdsmd startup
On 10/10/2013 07:47 PM, Alon Bar-Lev wrote:

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Alon Bar-Lev alo...@redhat.com
Cc: Yaniv Bronheim ybron...@redhat.com, arch a...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Thursday, October 10, 2013 7:45:36 PM
Subject: Re: [vdsm] Recent changes in vdsmd startup

[...]

So what happens if the vdsm on the host is an older vdsm?

I don't follow... up until now host-deploy tried
Re: [vdsm] Keeping VDSM Compatible with Ubuntu
On 09/27/2013 04:16 PM, Ewoud Kohl van Wijngaarden wrote:

On Fri, Sep 27, 2013 at 03:53:08PM +0800, Zhou Zheng Sheng wrote:

Recently we merged some patches to make VDSM run on Ubuntu. We also have some packaging scripts in the debian/ sub-dir. You can either build .deb packages manually or find binary packages in the VDSM PPA [1] on launchpad.net. Once you add that PPA, you can use apt-get to install VDSM and its dependencies.

I'll set up a Jenkins instance on my laptop to test the master branch automatically, building and installing VDSM on Ubuntu and then running the unit and functional tests. I suggest that when adding a new change, it's best to make sure it is covered by unit or functional tests. If you change the packaging code - for example adding new options to configure.ac, editing vdsm.spec.in, or changing the VDSM daemon startup behavior - I am happy to be invited to review your patch. I'd also like to hear your suggestions on this topic, thanks!

I'd recommend getting in touch with infra to set up an Ubuntu slave to ensure compatibility. We have our CI infra almost in a shape where we can easily add new slaves.

+1 - if you can get some sanity tests to run per vdsm patch, you can prevent bad things from happening rather than chase them after they broke.
[vdsm] stale gerrit patches
We have some very old gerrit patches. I'm for abandoning patches which have not been touched for over 60 days (to begin with - I think the number should actually be lower). They can always be re-opened by any interested party after their closure. I.e., looking at gerrit, the patch list should actually get attention, and not be a few patches worth looking at mixed with a lot of old ones.

Thoughts?

Thanks,
   Itamar
Re: [vdsm] stale gerrit patches
On 09/23/2013 01:49 PM, Alon Bar-Lev wrote:

- Original Message -
From: Itamar Heim ih...@redhat.com
To: David Caro dcaro...@redhat.com
Cc: engine-devel engine-de...@ovirt.org, vdsm-devel@lists.fedorahosted.org
Sent: Monday, September 23, 2013 1:47:47 PM
Subject: Re: [vdsm] stale gerrit patches

On 09/23/2013 01:46 PM, David Caro wrote:

On Mon 23 Sep 2013 12:36:58 PM CEST, Itamar Heim wrote:

[...]

It might be helpful to have a cron-like script that checks the age of the patches and first notifies the sender, the reviewers and the maintainer, and if the patch is not updated within a certain period just abandons it.

Yep - warn after X days via email to just the owner (or all subscribed to the patch), and close if there is no activity for X+14 days, or something like that.

This will be annoying. And there are patches that are pending with good reason.

Pending for 60 days with zero activity on them (no comment, no rebase, nothing)?

Maintainers can close patches that have no interest nor progress.

Alon
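The warn-then-abandon policy being discussed can be sketched as a pure classification function. This is only a sketch: the thresholds are the ones proposed in the thread (warn at 60 idle days, abandon 14 days later), and a real cron job would feed it from gerrit query output and act on "abandon" via gerrit's review command.

```python
from datetime import date, timedelta

WARN_AFTER = timedelta(days=60)                   # warn the owner after 60 idle days
ABANDON_AFTER = WARN_AFTER + timedelta(days=14)   # abandon 14 days after the warning


def triage(last_activity: date, today: date) -> str:
    """Classify an open patch by how long it has been idle.

    Returns "keep", "warn" (notify owner/subscribers), or "abandon".
    """
    idle = today - last_activity
    if idle >= ABANDON_AFTER:
        return "abandon"
    if idle >= WARN_AFTER:
        return "warn"
    return "keep"
```

Since abandoned changes can be restored by any interested party, the worst case of a false positive is one click to reopen.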
Re: [vdsm] start vm which in pool failed
On 09/18/2013 08:03 AM, bigclouds wrote:

Starting a VM from a pool fails. In the hook flow I modify the guest VM hostname and the path of the backing file of the chain - nothing else. I can manually define an XML (after hooks) and start it without error.

env: libvirt-0.10.2-18.el6_4.5.x86_64, kernel 2.6.32-358.6.2.el6.x86_64, CentOS 6.4

1. Where does this error message come from?

Storage.StorageDomain WARNING Could not find mapping for lv d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/cae1bb2d-0529-4287-95a3-13dcb14f082f

2.
Thread-6291::ERROR::2013-09-18 12:44:21,205::vm::683::vm.Vm::(_startUnderlyingVm) vmId=`24f7e975-9aa5-4a14-b0f0-590add14c8b5`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 645, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1529, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/4
qemu-kvm: -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f: Operation not permitted

Can you share your hook? (If it works without your hook, I suspect a bug there.)
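For context, a hook of the kind described rewrites a disk's source path in the libvirt domain XML. Below is a minimal standalone sketch of that rewrite. In a real VDSM hook the DOM comes from the `hooking` module (hooking.read_domxml() / hooking.write_domxml()); `rewrite_disk_source` is a made-up name so the XML handling can be shown without a VDSM install.

```python
import xml.dom.minidom


def rewrite_disk_source(domxml_str, old_path, new_path):
    """Point a disk's <source file=...> at a new path, hook-style.

    Standalone sketch: a VDSM before_vm_start hook would obtain and
    save the DOM via the hooking module instead of strings.
    """
    dom = xml.dom.minidom.parseString(domxml_str)
    for disk in dom.getElementsByTagName('disk'):
        for src in disk.getElementsByTagName('source'):
            if src.getAttribute('file') == old_path:
                src.setAttribute('file', new_path)
    return dom.toxml()
```

Note that editing the XML changes only libvirt's view of the disk: the qcow2 backing-file metadata inside the image, and the permissions/activation of the device on the host, are untouched and must already be valid.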
Re: [vdsm] question about injectfile hook
On 09/10/2013 05:32 PM, Shahar Havivi wrote:

On 10.09.13 22:02, bigclouds wrote:

Hi Shahar,
Recently I have been working on injecting files into the guest VM, and I found a hook called injectfile in vdsm. I am confused about why the injectfile hook treats the qcow2 format as a special one which cannot be handled - could you tell me the reason? As far as I know, libguestfs certainly can be aware of qcow2 and all its features, like backing_file etc. It is a little hard to inject files into an image which has backing files (maybe a backing chain), especially for images of block type with thin provisioning - like a VM in a pool. I am now writing code to inject files into a VM in a pool. Please tell me your ideas, and answer my question.

It should work with qcow2; when this hook was written we had some issues, so we limited the format to raw. Did you try to modify the hook script (by removing the qcow2-only condition)?

The issues still stand for block storage, IIRC, since there is no lvextend flow during the hook flow. For a file-based domain, I think it will be OK.
Re: [vdsm] how vdsm implement qcow2 on lvm
On 08/20/2013 06:11 AM, bigclouds wrote:

Hi all,
Could you tell me how vdsm implements qcow2 on LVM, and how an LV in qcow2 format supports a backing file?

qcow2 is the format, stored in an LV. See http://www.ovirt.org/images/3/3d/OVirt_VDSM_Storage_2002.pdf

If it is implemented in the normal way - I find that libguestfs does not recognize LVs (in qcow2 format) with a backing file.

How are you trying to use it via libguestfs? You need vdsm to monitor qemu/qcow for LV extend requests to the SPM (or another similar solution).

thanks
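The monitoring referred to above is, roughly, a high-watermark scheme: for a thin-provisioned qcow2 on block storage, when the free space remaining inside the LV drops below a threshold, vdsm asks the SPM to extend the LV by a chunk before the guest's writes hit the end of the device. An illustrative sketch of just that decision - the numbers and the function name are made up, not vdsm's actual constants or code:

```python
def needed_extension_mb(allocated_mb, lv_size_mb,
                        watermark_mb=512, chunk_mb=1024):
    """Return how many MB to grow the LV by, or 0 if no extend is needed.

    Extend when free space inside the LV drops below the watermark,
    rounding the request up to whole chunks.
    """
    free = lv_size_mb - allocated_mb
    if free >= watermark_mb:
        return 0
    shortfall = watermark_mb - free
    chunks = -(-shortfall // chunk_mb)   # ceiling division
    return chunks * chunk_mb
```

This also explains the injectfile discussion above: a tool writing into the image outside the running-VM flow has nobody watching the watermark, so on block storage the qcow2 can run out of LV space mid-write.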
Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain
On 08/12/2013 04:55 PM, Deepak C Shetty wrote:

What do you mean by "out of luck"? I thought virt-preview had F17/F18 repos, no? Another question to answer would be: do we support F17 as a valid vdsm host for 3.3?

IIRC, F17 isn't supported by Fedora once F19 is out, so no more updates to it. Using Fedora you are moving fast, but with a shorter support/update cycle, I guess. I don't think anyone tested 3.3 on F17.
Re: [vdsm] How to handle qemu 1.3 dep for Gluster Storage Domain
On 08/07/2013 08:21 AM, Sahina Bose wrote: [Adding engine-devel] On 08/06/2013 10:48 AM, Deepak C Shetty wrote: Hi All, There were 2 learnings from BZ https://bugzilla.redhat.com/show_bug.cgi?id=988299 1) Gluster RPM deps were not proper in VDSM when using Gluster Storage Domain. This has been partly addressed by the gluster-devel thread @ http://lists.gnu.org/archive/html/gluster-devel/2013-08/msg8.html and will be fully addressed once Gluster folks ensure their packaging is friendly enuf for VDSM to consume just the needed bits. Once that happens, i will be sending a patch to vdsm.spec.in to update the gluster deps correctly. So this issue gets addressed in near term. 2) Gluster storage domain needs minimum libvirt 1.0.1 and qemu 1.3. libvirt 1.0.1 has the support for representing gluster as a network block device and qemu 1.3 has the native support for gluster block backend which supports gluster://... URI way of representing a gluster based file (aka volume/vmdisk in VDSM case). Many distros (incl. centos 6.4 in the BZ) won't have qemu 1.3 in their distro repos! How do we handle this dep in VDSM ? Do we disable gluster storage domain in oVirt engine if VDSM reports qemu 1.3 as part of getCapabilities ? or Do we ensure qemu 1.3 is present in ovirt.repo assuming ovirt.repo is always present on VDSM hosts in which case when VDSM gets installed, qemu 1.3 dep in vdsm.spec.in will install qemu 1.3 from the ovirt.repo instead of the distro repo. This means vdsm.spec.in will have qemu = 1.3 under Requires. Is this possible to make this a conditional install? That is, only if Storage Domain = GlusterFS in the Data center, the bootstrapping of host will install the qemu 1.3 and dependencies. (The question still remains as to where the qemu 1.3 rpms will be available) hosts are installed prior to storage domain definition usually. we need to find a solution to having a qemu 1.3 for .el6 (or another version of qemu with this feature set). 
What will be a good way to handle this? Appreciate your response. thanx, deepak ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
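The version gate Deepak asks about can be sketched engine-side. A minimal sketch, assuming the host's getCapabilities reply carries qemu's version under a packages2-style dict; the key names and helper names here are illustrative, not a committed VDSM/engine API:

```python
# Hypothetical sketch: gate the GlusterFS storage domain on the qemu
# version a host reports via getCapabilities. parse_version and
# supports_gluster_domain are illustrative names, not real VDSM code.

MIN_QEMU_FOR_GLUSTER = (1, 3, 0)

def parse_version(version_str):
    """Turn a version string like '1.3.0' into a comparable tuple."""
    return tuple(int(p) for p in version_str.split('.'))

def supports_gluster_domain(caps):
    """caps is the dict a host returns from getCapabilities."""
    qemu = caps.get('packages2', {}).get('qemu-kvm', {}).get('version')
    if qemu is None:
        return False
    return parse_version(qemu) >= MIN_QEMU_FOR_GLUSTER
```

With a check like this, the engine could simply refuse to attach a host to a GlusterFS data center instead of failing later at run time.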
Re: [vdsm] Exploiting domain specific offload features
On 07/24/2013 03:38 PM, Federico Simoncelli wrote: - Original Message - From: Deepak C Shetty deepa...@linux.vnet.ibm.com To: vdsm-devel@lists.fedorahosted.org Sent: Wednesday, July 17, 2013 12:08:03 PM Subject: Re: [vdsm] Exploiting domain specific offload features On 07/17/2013 12:48 PM, M. Mohan Kumar wrote: Hello, We are adding features such as server-offloaded cloning, snapshots of the files (i.e. VM disks) and zeroed VM disk allocation in GlusterFS. As of now only the BD xlator supports offloaded cloning and snapshot. Server-offloaded zeroing of VM disks is supported by both the posix and BD xlators. The initial approach is to use the xattr interface to trigger these offload features, such as # setfattr -n clone -v path-to-new-clone-file path-to-source-file will create a clone of path-to-source-file in path-to-new-clone-file. Cloning is done on the GlusterFS server side, so it's a kind of server-offloaded copy. Similarly a snapshot can also be taken using the setfattr approach. The GlusterFS storage domain is already part of VDSM and we want to exploit the offload features provided by GlusterFS through VDSM. Is there any way to exploit these features from VDSM as of now? Mohan, IIUC, zeroing of files in GlusterFS is supported for both posix and block backends of GlusterFS. Today VDSM does the zeroing (as part of the preallocated vmdisk flow) itself using 'dd'. If GlusterFS supports zeroing, this feature can be exploited in VDSM (by overriding the create volume flow as needed) so that we can save compute resources on the VDSM host when a Gluster domain is being used. Regarding exploiting clone and snapshot, IIUC these are very native to VDSM today... it expects that snapshots are qcow2 based and they form the image chain etc... With snapshot and clone handled in Gluster transparently, these notions of VDSM will be broken, so it probably needs a lot of changes in lots of places in VDSM to exploit these. Federico/Ayal, wanted to know your comments/opinion on this?
Is there a way to exploit these features in VDSM Gluster storage domain in an elegant way ? I think we can already start exploiting cloning whenever we need to copy a volume maintaining the same format (raw=raw, cow=cow). you still need to tailor the flow from engine's perspective, right? or just override the entity created by the engine with the native cloned one for simplicity? ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
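The setfattr-based offload described above could be driven from Python roughly like this; the xattr name 'clone' is taken verbatim from the mail, and whether it needs a namespace prefix on a given mount is an open assumption to verify against the actual Gluster release:

```python
# Hypothetical sketch of triggering GlusterFS's server-side clone via
# the xattr interface described above ('setfattr -n clone -v <new> <src>').
import subprocess

def build_clone_cmd(src_path, clone_path):
    """Build the setfattr invocation that asks the server to clone src."""
    return ['setfattr', '-n', 'clone', '-v', clone_path, src_path]

def offload_clone(src_path, clone_path):
    # Runs against a path on a GlusterFS mount; the server performs the
    # actual copy, so no data flows through this host.
    subprocess.check_call(build_clone_cmd(src_path, clone_path))
```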
Re: [vdsm] migration progress feature
On 07/21/2013 12:14 PM, Dan Kenigsberg wrote: Introducing a new verb, getMigrationStatuses(), would allow Engine to collect this piece of statistics in the exact frequency that it needs it. Coupling this data with the frequently-polled state of VM begs for future complaints that getVMList(full=False) is too heavy, and that we must introduce getVMList(full=False, veryveryskinny=True). so the most likely outcome is engine would be calling this every time it calls getAllVmStats? ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] migration progress feature
On 07/18/2013 01:24 AM, Doron Fediuck wrote: - Original Message - | From: Michal Skrivanek mskri...@redhat.com | To: Peter V. Saveliev p...@redhat.com, vdsm-devel@lists.fedorahosted.org Development | vdsm-devel@lists.fedorahosted.org | Sent: Monday, July 8, 2013 5:13:42 PM | Subject: Re: [vdsm] migration progress feature | | | On Jul 4, 2013, at 16:04 , Peter V. Saveliev p...@redhat.com wrote: | | … | | Goal | | | We have to implement a feature, migration progress bar in the UI. This | migration bar should reflect not only the progress, but if the migration | is stalled and so on. | | Tasks | = | | * Get the information from libvirt: it provides job progress in the same | way for all migration-like jobs: migration, suspend, snapshot | * Feed this information to the engine | * Reflect it in the UI | | API status | == | | Libvirt info is OK — it is available for any migration-like job, be it | migration, suspend or snapshot. | | In VDSM, we have an API call, a separate verb to report the migration | progress: migrateStatus() | | But also we have getVmList() call, polled by the engine every several | seconds. | | Proposal | | | We would propose to provide an optional field, `migrateStatus`, in the | report sent by getVmList(). This approach should save a good amount of | traffic and ease the engine side logic. | | Having the separate verb, this can sound weird, but I'm sure that the | optimisation worth it. | | Please, read the patchset [1] and discuss it here. | | From my point of view this makes sense as it doesn't require additional | bandwidth and calls. | Also means no additional complexity in engine. I don't feel comfortable to | add another poll request during migration which makes the logic in engine | more complicated and means even higher traffic between hosts and engine. Not | that it's not doable, I just don't think it's the right way to go. 
I agree | it's not ideal from a purely API point of view...however to me it seems | logical to have it as part of statistics as it is a sort of statistic, and | it doesn't require an extra call. The extra call already exists, it's just | that I think we already have too many. We should start looking into | effective communication, not necessarily clean | | Thanks, | michal | +1 such a feature will be very handy when migrating large guests (8GB+) over a sub-optimal network. In such a case the last thing you want to do is additional calls. Re-using the existing calls by extending the statistics makes a lot of sense. I'm also in favor - vm status is treated as an async update based on the stats. vdsm can send this item only for vms which are migrating, for example. ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
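The "extend the existing stats" idea could look roughly like this on the vdsm side; the field name migrationProgress and the dict shapes are illustrative, not the merged patchset:

```python
# Sketch of the proposal above: instead of a separate poll, attach a
# migrationProgress field to the per-VM stats that getAllVmStats
# already returns, and only for VMs that are actually migrating.

def decorate_vm_stats(vm_stats, migration_jobs):
    """vm_stats: {vmId: statsDict}; migration_jobs: {vmId: progress 0-100}.

    Mutates and returns vm_stats; VMs without a migration job are
    untouched, so the common case adds no traffic at all.
    """
    for vm_id, stats in vm_stats.items():
        if vm_id in migration_jobs:
            stats['migrationProgress'] = migration_jobs[vm_id]
    return vm_stats
```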
Re: [vdsm] vdsm sync meeting Monday July 15th
On 07/15/2013 05:05 PM, Dan Kenigsberg wrote: - ovirt-3.3 is going beta oh-so-very-soon. This means that I am about to tag v4.12.0-rc1 after basic tests. worth fixing in parallel at engine side then: packaging/dbscripts/upgrade/pre_upgrade/_config.sql: select fn_db_update_config_value('SupportedVDSMVersions','4.9,4.10,4.11','general'); ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
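Since SupportedVDSMVersions is stored as a comma-separated string, the engine-side fixup amounts to appending to that list; a trivial sketch:

```python
# Sketch of the config fixup above: the SupportedVDSMVersions value is
# a comma-separated string ('4.9,4.10,4.11'), so supporting the new
# v4.12 tag is just appending to that list before writing it back.

def add_supported_version(csv_value, new_version):
    versions = csv_value.split(',')
    if new_version not in versions:
        versions.append(new_version)
    return ','.join(versions)
```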
Re: [vdsm] vdsm sync meeting June 17th 2013
On 06/18/2013 09:29 AM, Arik Hadas wrote: - Original Message - VDSM sync meeting Monday June 17th 2013 === - Dan requests reviews for pending patches from Mei Liu and himself ( http://gerrit.ovirt.org/#/c/15356/8 ) for mom. - Toni to fix the problem with Mark's patch that Dan reviewed so that it can be merged together with the smaller following patches. - Federico has three high priority patches for features and two lower priority ones that need reviews in order to make it for the release deadline. They are related to disk extension and image upload/download (glance): + http://gerrit.ovirt.org/#/c/15442 - constants: unify the BLANK_UUID def... + http://gerrit.ovirt.org/#/c/14589 - volume: add the BLOCK_SIZE constant +1 + http://gerrit.ovirt.org/#/c/14590 - volume: add the extendVolumeSize method + http://gerrit.ovirt.org/#/c/15614 - vm: add the live diskSizeExtend method +1 + http://gerrit.ovirt.org/#/c/14955 - image: add support to upload/download images - Adam and others to review Gluster patches: + http://gerrit.ovirt.org/#/c/13785/ + http://gerrit.ovirt.org/#/c/11094/ - Is there a filing for Qemu image handling reported by virt team? Background about the problem: As part of the ram snapshots feature (http://wiki.ovirt.org/Features/RAM_Snapshots) we pass a path to file to libvirt, where the memory of the VM should be saved as part of the snapshot creation. the output file is identical to the one that is created to store the memory on hibernate command. When VM with snapshot(s) that has memory is being exported, we want the snapshot's memory file to be exported to the export domain as well. when trying to export such memory file, we saw that the file is being truncated to the closest size that is a multiple of block size (512). we found that the file is being truncated by qemu-img that is being used to copy the file to the export domain. 
We previously thought that it was a problem in qemu - not handling files with a size that is not a multiple of the block size - and a bug was opened against qemu (bz 970559). We now understand that the issue described above might be a problem in qemu that needs to be solved, but it is not the problem in our case: treating the memory file as a qemu image is wrong - the created file is not a qemu image. The file has a special format: it contains an XML part at the beginning and the memory dump as binary data right after that. So basically it seems that we need the following things in vdsm: 1. The creation of the file that is expected to store the memory as a qemu image seems to be redundant. Instead, we need a way to create a file similarly to the 'touch' command - just to verify that the file in the path we pass to libvirt is created with the right permissions (it's probably true for the hibernation volume creation as well, btw). how would this look for block devices? 2. The more important issue is that we need a way to export files which are not qemu images to the export domain, which means copying the file without using the qemu-img process. Ayal - anything close to this today? I remember various qemu-img vs. dd patches in the past for copy operations. - Reminder to follow the wiki feature page process closely to maximize chances to have features added to a release. - Multiple gateways are being worked on to fix dhcp races and comments by reviewers. - NetReloaded after multiple gateways. Try to get up to the iproute2 configurator. - Cross-distro support: + NetReloaded with iproute2 plus persistence would help. + init scripts separation (proposed by Yaniv to make ready for 3.3). - Cloud-init integration by Greg Padgett.
There is a vdsm patch that should be taken care of in terms of reviewing http://gerrit.ovirt.org/#/c/14347/ ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
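Points 1 and 2 above can be sketched in a few lines; touch_for_libvirt and export_memory_file are hypothetical names for illustration, not vdsm verbs:

```python
# Point 1: create the memory file 'touch'-style with the right
# permissions instead of formatting it as a qemu image.
# Point 2: export it with a plain byte copy, so a size that is not a
# multiple of 512 survives (unlike the qemu-img copy that truncated it).
import os
import shutil

def touch_for_libvirt(path, mode=0o660):
    """Ensure the file exists with the given permissions; write nothing."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, mode)
    os.close(fd)

def export_memory_file(src, dst):
    # shutil.copyfile copies byte-for-byte; no block-size rounding.
    shutil.copyfile(src, dst)
```

A block-device variant of touch_for_libvirt is the open question raised above, since there is nothing to 'create' on an existing LV beyond its permissions.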
Re: [vdsm] what is 'qcow2 on LV' and how node communicate
On 06/18/2013 05:55 PM, bigclouds wrote: hi all, 1. It seems that vdsm has the capability to thin-provision an LV (logical volume); could you please explain the theory behind it? 2. I did not notice any code through which nodes can communicate with each other, called the mail-box. Does it vary depending on volume type (block, fs), and what is the mail-box and how do nodes communicate via it? thanks so much. have you reviewed: http://www.ovirt.org/images/3/3d/OVirt_VDSM_Storage_2002.pdf http://www.ovirt.org/Vdsm_Block_Storage_Domains ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
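On question 1, the usual shape of LV thin provisioning is watermark-driven extension: start with a small LV, watch the guest's highest written offset, and extend the LV before the guest hits the end. A rough sketch of that decision logic with made-up thresholds (not vdsm's actual constants; see the linked slides for the real design):

```python
# Illustrative watermark logic for thin-provisioned LVs: extend by a
# chunk whenever free space in the LV drops below the watermark, but
# never grow past the disk's virtual size. Numbers are made up.

CHUNK_MB = 1024
WATERMARK_MB = 512

def needed_extension_mb(allocated_mb, lv_size_mb, virtual_size_mb):
    """Return how many MB to extend the LV by, or 0 if none needed.

    allocated_mb is the highest written offset reported for the disk
    (e.g. from the hypervisor's block-allocation stats).
    """
    if lv_size_mb - allocated_mb >= WATERMARK_MB:
        return 0
    return min(CHUNK_MB, virtual_size_mb - lv_size_mb)
```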
[vdsm] oVirt updates - April 28th, 2013
1. From the Web
- interview with Theron on oVirt (Chinese) http://www.infoq.com/cn/news/2013/05/conrey-on-ovirt
- interview with Dave Neary about his work (and oVirt) http://www.techradar.com/news/software/what-went-wrong-with-meego-nokia-lost-faith-in-the-project--1147770
- Nagios monitoring plugin check_rhev3 1.2 released http://lists.ovirt.org/pipermail/users/2013-May/014389.html
- a blog on how to do HA for engine (written for RHEV, should be relevant to oVirt as well) http://captainkvm.com/2013/05/providing-high-availability-for-rhev-m/
2. Video
- YouTube video available for IBM's session on Connected Communities, Innovative Technologies: OpenStack, oVirt, and KVM http://www.youtube.com/watch?v=Pg7ShV-HvCE
- fog/foreman by Ohad Levy (fog supports oVirt) http://www.youtube.com/watch?v=JgaQ_ekR2JA
3. Conferences
- FOSDEM presentations page uploaded http://www.ovirt.org/FOSDEM_2013
- some of the Shanghai presentations uploaded http://www.ovirt.org/Intel_Workshop_May_2013
- upcoming - oVirt session in LinuxCon Japan (this week)
- upcoming - oVirt Developer days (with KVM Forum), Edinburgh, UK - October 21-23, 2013
4. Other
- help test the new oVirt installer and developer setup environments http://www.ovirt.org/OVirtEngineDevelopmentEnvironment
- RC for oVirt Node 3.0.0 is now available (but not compatible with ovirt-engine yet)
Thanks, Itamar ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [Users] questions about vdsm supervdsmserver
you'll probably get more answers on vdsm-devel (cc'd). bcc'd users to avoid dupes. On 04/26/2013 09:59 AM, bigclouds wrote: hi all, I have 2 questions about supervdsmserver: 1. Why is it created separately as a daemon? Many of its functions have never been used, and its function could be done by calling libvirt directly, so why is supervdsmserver needed? 2. It seems that only one supervdsmserver should exist, old ones being killed to assure that. But look at this, and please explain it to me:

unix 2 [ ACC ] STREAM LISTENING 1339963 4968/python  /var/run/vdsm/svdsm.sock
unix 2 [ ACC ] STREAM LISTENING 384165  39716/python /var/run/vdsm/svdsm.sock
unix 2 [ ACC ] STREAM LISTENING 1100509 49487/python /var/run/vdsm/svdsm.sock
unix 2 [ ACC ] STREAM LISTENING 1326232 3813/python  /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 1448193 4968/python  /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 1447937 4968/python  /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 1424959 4968/python  /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 1414835 4968/python  /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 1327873 3813/python  /var/run/mcvda/svdsm.sock
unix 2 [ ] STREAM CONNECTED 1309612 49487/python /var/run/mcvda/svdsm.sock
unix 3 [ ] STREAM CONNECTED 384221  39716/python /var/run/mcvda/svdsm.sock

thanks ___ Users mailing list us...@ovirt.org http://lists.ovirt.org/mailman/listinfo/users ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] Enabling clusterLevels for 3.3 in dsaversion.py
On 03/22/2013 06:42 AM, Deepak C Shetty wrote: On 03/21/2013 04:07 PM, Vinzenz Feenstra wrote: On 03/21/2013 10:32 AM, Deepak C Shetty wrote: On 03/21/2013 01:11 PM, Dan Kenigsberg wrote: On Thu, Mar 21, 2013 at 10:42:27AM +0530, Deepak C Shetty wrote: Hi, I am trying to validate GlusterFS domain engine patches against VDSM. The GlusterFS domain is enabled for 3.3 only, so when I try to add my VDSM as a new host to engine, it doesn't allow me to do so since clusterLevels (returned by VDSM as part of engine calling getCap) doesn't have 3.3. I hacked VDSM's dsaversion.py to return 3.3 as well as part of getCap, and now I am able to add my VDSM host as a new host from engine for a DC of type GLUSTERFS_DOMAIN. Is this the right way to test a 3.3 feature? If yes, should I send a vdsm patch to add 3.3 in dsaversion.py? You are right - it's time to expose this clusterLevel. Shouldn't the supportedENGINEs value also be updated to 3.2 and 3.3? I am a bit confused that this one stays at 3.0 and 3.1. I am really not sure what the use of supportedENGINEs is. I changed clusterLevels because doing that allowed me to add my VDSM host to a 3.3 cluster. Can someone throw more light on what supportedENGINEs is used for? engine also has a list of supported vdsm versions. supportedENGINEs is the other direction - vdsm declares to engine which engine versions that vdsm can work with. It was meant to make sure only tested/supported versions of vdsm work with tested versions of engine, etc. ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
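For reference, the shape being discussed looks roughly like this; the exact version lists are illustrative for this thread, not a real dsaversion.py:

```python
# Illustrative sketch of the dsaversion.py data under discussion: the
# host advertises which cluster levels it can serve (clusterLevels) and
# which engine versions it supports (supportedENGINEs); engine rejects
# a host whose clusterLevels list lacks the cluster's level.

version_info = {
    'software_version': '4.12',
    'clusterLevels': ['3.0', '3.1', '3.2', '3.3'],
    'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3'],
}

def host_usable_at(cluster_level):
    """Engine-side check: can this host join a cluster at this level?"""
    return cluster_level in version_info['clusterLevels']
```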
Re: [vdsm] Proposal VDSM = Engine Data Statistics Retrieval Optimization
On 03/07/2013 01:25 PM, Vinzenz Feenstra wrote: Often changed but unused: This data does not seem to be used in the engine at all. It is *not* even used in the data warehouse. *memoryStats* = {'swap_out': '0', 'majflt': '0', 'mem_free': '1466884', 'swap_in': '0', 'pageflt': '0', 'mem_total': '2096736', 'mem_unused': '1466884'} *balloonInfo* = {'balloon_max': 2097152, 'balloon_cur': 2097152} these were added for better balloon control IIRC, so the API needs them for MoM probably. not sure if engine needs them or not. *disks* = {'vda': {'readLatency': '0', 'apparentsize': '64424509440', 'writeLatency': '1754496', 'imageID': '28abb923-7b89-4638-84f8-1700f0b76482', 'flushLatency': '156549', 'readRate': '0.00', 'truesize': '18855059456', 'writeRate': '952.05'}, 'hdc': {'readLatency': '0', 'apparentsize': '0', 'writeLatency': '0', 'flushLatency': '0', 'readRate': '0.00', 'truesize': '0', 'writeRate': '0.00'}} these I remember we added per request to the system - are you sure they are not in the dwh? ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] VDSM - top 10 with patches with no activity for more than 30 days
On 03/06/2013 06:54 PM, Dave Neary wrote: Also - as someone not that familiar with Gerrit, what's involved in picking up someone's patch and revising it for them? A pattern I see over and over again is:
Jane submits patch
Alex reviews: -1 with suggestions for improvements
Jane submits patch rev #2
Barry reviews: -1 (whitespace issues)
Jane submits patch #3
Hoda reviews: -1 with suggestions for another approach
Jane loses interest and the patch remains *almost* ready forever until it no longer applies or gets dropped.
Is there a way we can promote "adopt a patch", where you take someone else's patch and push it through review? shouldn't be an issue for someone else to fetch the patch and submit an update to someone else's patch, based on the same change-id ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] [JENKINS][ANN] jenkins.ovirt.org new look and infra
On 03/05/2013 10:01 AM, Eyal Edri wrote: fyi, Starting from yesterday (4/3/13) jenkins.ovirt.org [1] has migrated to a new hosting server provided by Alterway [2]. The new server has a new UI look that is similar to ovirt.org and is running on stronger infra than the previous one. All jobs and configuration have been migrated from the old instance, but if you're still missing a certain job or permissions please contact the infra team at in...@ovirt.org. I want to thank David Caro from the infra team for helping with the migration and Einav Cohen from the oVirt frontend developer community for helping with the new CSS for Jenkins. [1] http://jenkins.ovirt.org/ [2] http://www.ovirt.org/Sponsors_and_supporters Eyal Edri oVirt infra team. ___ Infra mailing list in...@ovirt.org http://lists.ovirt.org/mailman/listinfo/infra finally... (and looks very nice) can we shut down the ec2 instance for now? do we have more horsepower to start running, say, engine findbugs on gerrit patches? Thanks, Itamar ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] VDSM - top 10 with patches with no activity for more than 30 days
On 01/03/2013 15:58, Adam Litke wrote: On Thu, 2013-02-28 at 12:51 -0500, Doron Fediuck wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: vdsm-devel@lists.fedorahosted.org Sent: Wednesday, February 20, 2013 5:39:21 PM Subject: [vdsm] VDSM - top 10 with patches with no activity for more than 30 days thoughts on how to trim these? (in openstack gerrit they auto-abandon patches with no activity for a couple of weeks - the author can revive them when they are relevant again)

preferred_email             | count
fsimo...@redhat.com         | 34
smizr...@redhat.com         | 23
lvro...@linux.vnet.ibm.com  | 13
ewars...@redhat.com         | 12
wu...@linux.vnet.ibm.com    | 12
x...@linux.vnet.ibm.com     | 11
shao...@linux.vnet.ibm.com  | 6
li...@linux.vnet.ibm.com    | 6
zhshz...@linux.vnet.ibm.com | 6
shum...@linux.vnet.ibm.com  | 5

Review day? Anyone think a monthly review day will help? We've discussed this in the past, and part of the reason for the backlog is that folks like Saggi and Federico like to use gerrit to store work-in-progress patches that don't need review. They may not be working on those patches at the moment but want them in gerrit to come back to. If we want to allow this use of gerrit then we will always have some stale patches lying around. I think a private github is a better place for something which gets no attention for so long. Personally I'd like to make sure patches do get attention, and having old ones is cluttering. I like the openstack auto-abandon approach. I think a patch with no activity for more than 30 days should be auto-abandoned; it's trivial to bring it back to life after that (openstack are actually much harsher than this) ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] VDSM - top 10 with patches with no activity for more than 30 days
On 01/03/2013 16:32, Ewoud Kohl van Wijngaarden wrote: On Fri, Mar 01, 2013 at 04:27:33PM +0200, Itamar Heim wrote: On 01/03/2013 15:58, Adam Litke wrote: On Thu, 2013-02-28 at 12:51 -0500, Doron Fediuck wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: vdsm-devel@lists.fedorahosted.org Sent: Wednesday, February 20, 2013 5:39:21 PM Subject: [vdsm] VDSM - top 10 with patches with no activity for more than 30 days thoughts on how to trim these? (in openstack gerrit they auto-abandon patches with no activity for a couple of weeks - author can revive them back when they are relevant) preferred_email | count +-- fsimo...@redhat.com | 34 smizr...@redhat.com | 23 lvro...@linux.vnet.ibm.com | 13 ewars...@redhat.com | 12 wu...@linux.vnet.ibm.com| 12 x...@linux.vnet.ibm.com | 11 shao...@linux.vnet.ibm.com | 6 li...@linux.vnet.ibm.com| 6 zhshz...@linux.vnet.ibm.com | 6 shum...@linux.vnet.ibm.com | 5 ___ Review day? Anyone thinks a monthly review day will help? We've discussed this in the past and part of the reason for the backlog is that folks like Saggi and Federico like to use gerrit to store work-in-progress patches that don't need review. They may not be working on those patches at the moment but want them in gerrit for them to come back to. If we want to allow this use of gerrit then we will always have some stale patches lying around. I think a private github is a better place for soemthing which gets no attention for so long. personally i'd like to make sure patches do get attention, and having old ones is cluttering. Don't gerrit drafts suffice for personal WIP branches? I like openstack auto-abandon approach. i think a patch with no activity for more than 30 days should be auto abandoned. its trivial to get it back to life after that (openstack are much harsher than this actually) As long as the user is notified I agree. A 30 day old patch is likely to no longer cleanly apply anyway so a rebase may be needed anyway. 
The user gets an email on any change to their patch, including abandon. We can send a heads-up email a few days earlier as well. ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
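The policy sketched in this thread (warn a few days before the 30-day mark, then auto-abandon) is simple to state in code; the thresholds are the ones proposed above, and the function shape is purely illustrative:

```python
# Sketch of the auto-abandon policy discussed in this thread: a patch
# idle for 30+ days is abandoned; a warning goes out a few days before.
from datetime import timedelta

ABANDON_AFTER = timedelta(days=30)
WARN_BEFORE = timedelta(days=3)

def patch_action(last_updated, now):
    """Return 'abandon', 'warn', or 'keep' for a patch's last activity."""
    idle = now - last_updated
    if idle >= ABANDON_AFTER:
        return 'abandon'
    if idle >= ABANDON_AFTER - WARN_BEFORE:
        return 'warn'
    return 'keep'
```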
[vdsm] vdsm: patches last updated more than 30 days
below folks have the largest number of patches last updated more than 30 days ago, please refresh/abandon/ask for reviews/etc.

preferred_email             | count
fsimo...@redhat.com         | 34
smizr...@redhat.com         | 23
ewars...@redhat.com         | 12
wu...@linux.vnet.ibm.com    | 12
wu...@linux.vnet.ibm.com    | 11
lvro...@linux.vnet.ibm.com  | 8
shao...@linux.vnet.ibm.com  | 6
li...@linux.vnet.ibm.com    | 6
ybron...@redhat.com         | 6
zhshz...@linux.vnet.ibm.com | 6

Thanks, Itamar ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
[vdsm] which vdsm version for 3.2?
I tried running nightly engine (well, since the 3.2 beta repo still contains an old version). From the beta repo I got this vdsm version: vdsm-4.10.3-4.fc18.x86_64. It doesn't contain the latest getHardwareInfo which engine expects [1]. I enabled the nightly repo, expecting a newer vdsm, but nightly has vdsm versions of vdsm-4.10.3-0.87. I assume -4 in beta isn't getting replaced by -0.87? So which vdsm version are we using for 3.2 beta? [1] 2013-01-17 15:15:53,222 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-65) XML RPC error in command GetHardwareInfoVDS ( HostName = local_host ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, type 'exceptions.Exception':method getVdsHardwareInfo is not supported ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] which vdsm version for 3.2?
On 01/17/2013 04:15 PM, Dan Kenigsberg wrote: On Thu, Jan 17, 2013 at 10:20:12PM -0500, Itamar Heim wrote: i tried running nightly engine (well, since 3.2 beta repo still contains an old version). from beta repo i got this vdsm version: vdsm-4.10.3-4.fc18.x86_64 it doesn't contain latest getHardwareInfo which engine expects[1] i enabled the nightly repo, expecting a newer vdsm, but nightly has vdsm versions of vdsm-4.10.3-0.87 I assume -4 in beta isn't getting replaced by -0.87? Indeed. Nightlies are built from the master branch. ovirt-3.2 would be built from the ovirt-3.2 branch. then shouldn't master move to a newer version (3.3 for engine soon, 4.11 for vdsm?) so which vdsm version are we using for 3.2 beta? [1]2013-01-17 15:15:53,222 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] (QuartzScheduler_Worker-65) XML RPC error in command GetHardwareInfoVDS ( HostName = local_host ), the error was: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException, type 'exceptions.Exception':method getVdsHardwareInfo is not supported getVdsHardwareInfo (http://gerrit.ovirt.org/#/c/9258/) has not been backported to the ovirt-3.2 branch; I do not see that it has even been proposed. I suppose Yaniv (or someone) should do it. We still do not have a sane build for ovirt-3.2. On top of getVdsHardwareInfo we have bz 886087 (Rest query add storage domain fails on fedora18: missing /sbin/scsi_id) and http://gerrit.ovirt.org/#/c/10758/ (udev: Race fix - load and trigger dev rule) that blocks block storage. I told Douglas today that before these are fixed, there is no hurry for another build. agree, but in general we need this build as we were supposed to have had the beta build by now... ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] RFC: New Storage API
On 12/04/2012 11:52 PM, Saggi Mizrahi wrote: I've been throwing a lot of bits out about the new storage API and I think it's time to talk a bit. I will purposefully try and keep implementation details away and concentrate on how the API looks and how you use it. The first major change is in terminology: there is no longer a storage domain but a storage repository. This change is done because so many things are already called domain in the system, and this will make things less confusing for newcomers with a libvirt background. One other change is that repositories no longer have a UUID. The UUID was only used in the pool members manifest and is no longer needed. connectStorageRepository(repoId, repoFormat, connectionParameters={}): repoId - a transient name that will be used to refer to the connected domain; it is not persisted and doesn't have to be the same across the cluster. repoFormat - similar to what used to be type (e.g. localfs-1.0, nfs-3.4, clvm-1.2). connectionParameters - this is format specific and will be used to tell VDSM how to connect to the repo. disconnectStorageRepository(self, repoId): In the new API there are only images; some images are mutable and some are not. Mutable images are also called VirtualDisks; immutable images are also called Snapshots. There are no explicit templates; you can create as many images as you want from any snapshot. There are 4 major image operations: createVirtualDisk(targetRepoId, size, baseSnapshotId=None, userData={}, options={}): targetRepoId - ID of a connected repo where the disk will be created size - the size of the image you wish to create baseSnapshotId - the ID of the snapshot you want to base the new virtual disk on userData - optional data that will be attached to the new VD, could be anything that the user desires.
options - options to modify VDSM's default behavior; returns the ID of the new VD createSnapshot(targetRepoId, baseVirtualDiskId, userData={}, options={}): targetRepoId - the ID of a connected repo where the new snapshot will be created and where the original image exists as well baseVirtualDiskId - the ID of a mutable image (Virtual Disk) you want to snapshot userData - optional data that will be attached to the new Snapshot, could be anything that the user desires. options - options to modify VDSM's default behavior; returns the ID of the new Snapshot copyImage(targetRepoId, imageId, baseImageId=None, userData={}, options={}) targetRepoId - the ID of a connected repo where the new image will be created imageId - the image you wish to copy baseImageId - if specified, the new image will contain only the diff between imageId and baseImageId. If None, the new image will contain all the bits of imageId. This can be used to copy partial parts of images for export. userData - optional data that will be attached to the new image, could be anything that the user desires. options - options to modify VDSM's default behavior; returns the ID of the new image. In the case of copying an immutable image the ID will be identical to the original image, as they contain the same data. However, the user should not assume that and should always use the value returned from the method. removeImage(repositoryId, imageId, options={}): repositoryId - the ID of a connected repo where the image to delete resides imageId - the ID of the image you wish to delete. getImageStatus(repositoryId, imageId) repositoryId - the ID of a connected repo where the image to check resides imageId - the ID of the image you wish to check. All operations return once the operation has been committed to disk, NOT when the operation actually completes. This is done so that: - operations come to a stable state as quickly as possible.
- In cases where there is an SDM, only a small portion of the operation actually needs to be performed on the SDM host. - No matter how many times the operation fails, and on how many hosts, you can always resume the operation and choose when to do it. - You can stop an operation at any time and remove the resulting object, making a distinction between "stop because the host is overloaded" and "I don't want that image". This means that after calling any operation that creates a new image, the user must then call getImageStatus() to check what the status of the image is. The status of the image can be either optimized, degraded, or broken. Optimized means that the image is available and you can run VMs off it. Degraded means that the image is available and will run VMs, but there might be a better way VDSM can represent the underlying data. Broken means that the image can't be used at the moment, probably because not all the data has been set up on the volume. Apart from that, VDSM will also return the last persisted status information, which will contain: hostID - the last host to try and optimize or fix the image stage - X/Y (e.g. 1/10), the last persisted stage
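Given that every create returns once committed, a caller would poll getImageStatus() until the image leaves 'broken'; a usage sketch against a stand-in client object (only the verb names and the optimized/degraded/broken states come from the proposal, the client itself is hypothetical):

```python
# Usage sketch for the proposed API: create, then poll getImageStatus()
# until the image is usable. 'client' is a stand-in for whatever
# transport wraps the verbs described above.

def wait_for_image(client, repo_id, image_id):
    """Poll until the image is usable; return its final status."""
    while True:
        status = client.getImageStatus(repo_id, image_id)
        if status in ('optimized', 'degraded'):
            return status
        if status != 'broken':
            raise RuntimeError('unexpected status: %s' % status)
        # real code would sleep/back off here before polling again
```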
Re: [vdsm] RFC: New Storage API
On 12/07/2012 09:53 PM, Saggi Mizrahi wrote: ...

connectStorageRepository(repoId, repoFormat, connectionParameters={}):
repoId - a transient name that will be used to refer to the connected domain; it is not persisted and doesn't have to be the same across the cluster.
repoFormat - similar to what used to be type (eg. localfs-1.0, nfs-3.4, clvm-1.2).
connectionParameters - this is format specific and will be used to tell VDSM how to connect to the repo.

Where does repoId come from? I think repoId doesn't exist before connectStorageRepository() returns. Isn't repoId a return value of connectStorageRepository()?

No, repoIds are no longer part of the domain; they are just a transient handle. The user can put whatever it wants there as long as it isn't already taken by another currently connected domain.

So what happens when a user mistakenly gives a repoId that is already in use? There should be something in the return value that specifies the error and/or the reason for it, so that the user can retry with a new/different repoId.

As I said, connect fails if the repoId is in use ATM.

So how do you add connections to the repo without first disconnecting it (extend storage domain flow)?
___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
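A minimal in-memory sketch of the connect semantics debated here: repoId is a caller-chosen transient handle, and connecting with a handle already in use fails. The class, exception name, and storage of parameters are all illustrative assumptions; only the semantics come from the thread.

```python
# Illustrative only: models "connect fails if the repoId is in use"
# with an in-memory dict instead of real storage plumbing.
class RepoIdInUse(Exception):
    """Raised when the transient repoId handle is already taken."""

class RepoManager:
    def __init__(self):
        self._connected = {}

    def connectStorageRepository(self, repoId, repoFormat,
                                 connectionParameters={}):
        if repoId in self._connected:
            raise RepoIdInUse(repoId)
        self._connected[repoId] = (repoFormat, connectionParameters)

    def disconnectStorageRepository(self, repoId):
        del self._connected[repoId]
```

Note this sketch also makes the open question above concrete: with these semantics, adding connections to an already-connected repo (the extend-storage-domain flow) would need a separate call, since reconnecting under the same handle is rejected.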
Re: [vdsm] [Engine-devel] FW: Querying for and registering unknown disk images on a storage domain
On 12/23/2012 04:00 PM, Shu Ming wrote: 2012-12-20 23:18, Morrissey, Christopher:

Hi All, I’ve been working on a bit of functionality for the engine that will allow a user to query a domain for new disk images (GetUnregisteredImagesQuery) of which the engine was previously unaware, and a separate command to register those images (ImportImageCommand). These commands will be exposed through the REST API. This functionality is needed as we are developing an extension/plugin to oVirt that will allow a NetApp storage controller to handle cloning the actual disks outside of oVirt, and we need to import them once they are cloned. We’ll be using other existing APIs to attach the disk to the necessary VM once the disk is cloned. On the NetApp side, we’ll ensure the disk is coalesced before cloning so as to avoid the issues of registering snapshots.

I am just curious how a third-party tool like NetApp makes sure the disk of a running VM is coalesced before cloning. By an agent in the VM that flushes the file-system cache out to the disk?

I'd expect either a live snapshot before, or doing this on a VM which is down.

GetUnregisteredImagesQuery will be accessible through the disks resource collection on a storage domain. A “disks” resource collection does not yet exist and will need to be added. To access the unregistered images, a parameter (maybe “unregistered=true”) would be passed. So the path to “GET” the unregistered disk images on a domain would be something like /api/storagedomains/f0dbcb33-69d3-4899-9352-8e8a02f01bbd/disks?unregistered=true. This will return a list of disk images that can each be used as input to the ImportImageCommand to get them added to oVirt. ImportImageCommand will be accessible through “POST”ing a disk to /api/disks?import=true. The disk will be added to the oVirt DB based on the information supplied and afterward will be available to attach to a VM.
When querying for unregistered disk images, the GetUnregisteredImagesQuery command will use the getImagesList() VDSM command. Currently this only reports the GUIDs of all disk images in a domain. I had been using the getVolumesList() and getVolumeInfo() VDSM commands to fill in the information so that valid disk image objects could be registered in oVirt. It seems these two functions are set to be removed since they are too invasive into the internal VDSM workings. The VDSM team will need to either return more information about each disk as part of the getImagesList() function or add a new function getImageInfo() that will give the same information for a given image GUID.

Here is the project proposal for floating disks in oVirt. I think unregistered images are also floating disks. http://www.ovirt.org/Features/DetailedFloatingDisk

Floating disks are disks the engine is aware of, but not associated with any VM. The scan-domain feature is to add orphan disks - disks on the storage the engine isn't aware of to begin with.
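The two proposed endpoints can be sketched as simple path builders. Note the message itself flags the query-parameter names ("maybe unregistered=true") as tentative, so this is a sketch of the proposal as written, not a settled API.

```python
# Sketch of the REST paths proposed in the thread. No real HTTP is
# performed; this only composes the request paths. Both query-parameter
# names are the thread's own tentative suggestions.
def unregistered_disks_path(domain_id):
    # GET here to list disk images on the domain the engine doesn't know about
    return "/api/storagedomains/%s/disks?unregistered=true" % domain_id

def import_disk_path():
    # POST a disk representation here (ImportImageCommand) to register it
    return "/api/disks?import=true"
```

A client would GET the first path, then POST each returned disk representation to the second to add it to the oVirt DB.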
Re: [vdsm] moving the collection of statistics to external process
On 12/07/2012 12:39 PM, Mark Wu wrote: On 12/06/2012 11:29 PM, Adam Litke wrote: On Thu, Dec 06, 2012 at 11:19:34PM +0800, Shu Ming wrote: On 2012-12-6 4:51, Itamar Heim wrote: On 12/05/2012 10:33 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 10:21:39PM +0200, Itamar Heim wrote: On 12/05/2012 10:16 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 09:01:24PM +0200, Itamar Heim wrote: On 12/05/2012 08:57 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 08:30:10PM +0200, Itamar Heim wrote: On 12/05/2012 04:42 PM, Adam Litke wrote:

I wanted to know what you think about it and if you have a better solution to avoid initiating so many threads. And is splitting vdsm a good idea here? At first look, my opinion is that it can help, and it would be nice to have a vmStatisticService that runs and writes the VMs' status to a separate log.

Vdsm recently started requiring the MOM package. MOM also performs some host and guest statistics collection as part of the policy framework. I think it would be a really good idea to consolidate all stats collection into MOM. Then, all stats become usable within the policy and by vdsm for its own internal purposes. Today, MOM has one stats collection thread per VM and one thread for the host stats. It has an API for gathering the most recently collected stats which vdsm can use.

Isn't this what collectd (and its libvirt plugin) or pcp are already doing?

Lots of things collect statistics, but as of right now, we're using MOM and we're not yet using collectd on the host, right?

I think we should have a single stats collection service and clients for it. I think MOM and vdsm should get their stats from that service, rather than have either beholden to any new stats something needs to collect.

How would this work for collecting guest statistics? Would we require collectd to be installed in all guests running under oVirt?

My understanding is collectd is installed on the host, and uses collectd's libvirt plugin to collect guest statistics?
Yes, but some statistics can only be collected by making a call to the oVirt guest agent (eg. guest memory statistics). The logical next step would be to write a collectd plugin for ovirt-guest-agent, but vdsm owns the connections to the guest agents and probably does not want to multiplex those connections, for many reasons (security being the main one).

And some will come from qemu-ga, which libvirt will support? Maybe a collectd vdsm plugin for the guest agent stats?

I am thinking of having collectd as a stand-alone service to collect the statistics from both ovirt-guest-agent and qemu-ga. Then collectd can export the information to the host proc file system in a layered architecture. Then MOM or other vdsm services can get the information from the proc file system like other OS statistics exported on the host.

You wouldn't use the host /proc filesystem for this purpose. /proc is an interface between userspace and the kernel. It is not for direct application use. The problem I see with hooking collectd up to ovirt-ga is that vdsm still needs a connection to ovirt-ga for things like shutdown and desktopLogin. Today vdsm owns the connection to the guest agent and there is not a nice way to multiplex that connection for use by multiple clients simultaneously.

Actually, I don't like collecting statistics from the guest agent. Now libvirt can provide the statistics of vcpu, block, and network interfaces. So I think we should reconsider enabling guest memory reporting in the virtio balloon driver. I am not sure if async events are supported in QMP now. What do you think of it? In vdsm and MOM, we don't just simply collect statistics, but also need to perform appropriate actions on them. So probably we still need an output plugin for collectd to make the data available to vdsm and MOM, and to generate an event to vdsm or MOM when the data reaches a given threshold. Just an idea. I am not sure how easy it is to implement.
Should be easy for such stats; the question is what other items are reported by the current guest agent (say, the list of installed applications).
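The "single stats collection service" idea argued for above can be sketched as one loop polling registered collector callables and caching the latest sample per source, which clients (vdsm, MOM) read without triggering new collection. This replaces the one-thread-per-VM model; the class and its API are purely illustrative, not any existing MOM or collectd interface.

```python
import threading

# Illustrative sketch: one service owns collection; clients only read
# the cached latest samples. Collector callables stand in for per-VM
# and per-host stat sources.
class StatsService:
    def __init__(self):
        self._collectors = {}   # name -> callable returning a dict of stats
        self._latest = {}
        self._lock = threading.Lock()

    def register(self, name, collector):
        self._collectors[name] = collector

    def collect_once(self):
        # In a real service this loop would run periodically in a single
        # thread, instead of spawning one thread per VM.
        for name, collector in self._collectors.items():
            sample = collector()
            with self._lock:
                self._latest[name] = sample

    def get_stats(self, name):
        # Clients (vdsm, MOM) read the most recent sample; returns None
        # if the source has never been collected.
        with self._lock:
            return self._latest.get(name)
```

The design point is that adding a new stats consumer costs nothing: it just reads the cache, rather than forcing another collection thread into every VM's lifecycle.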
Re: [vdsm] moving the collection of statistics to external process
On 12/05/2012 04:42 PM, Adam Litke wrote:

I wanted to know what you think about it and if you have a better solution to avoid initiating so many threads. And is splitting vdsm a good idea here? At first look, my opinion is that it can help, and it would be nice to have a vmStatisticService that runs and writes the VMs' status to a separate log.

Vdsm recently started requiring the MOM package. MOM also performs some host and guest statistics collection as part of the policy framework. I think it would be a really good idea to consolidate all stats collection into MOM. Then, all stats become usable within the policy and by vdsm for its own internal purposes. Today, MOM has one stats collection thread per VM and one thread for the host stats. It has an API for gathering the most recently collected stats which vdsm can use.

Isn't this what collectd (and its libvirt plugin) or pcp are already doing?
Re: [vdsm] moving the collection of statistics to external process
On 12/05/2012 08:57 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 08:30:10PM +0200, Itamar Heim wrote: On 12/05/2012 04:42 PM, Adam Litke wrote:

I wanted to know what you think about it and if you have a better solution to avoid initiating so many threads. And is splitting vdsm a good idea here? At first look, my opinion is that it can help, and it would be nice to have a vmStatisticService that runs and writes the VMs' status to a separate log.

Vdsm recently started requiring the MOM package. MOM also performs some host and guest statistics collection as part of the policy framework. I think it would be a really good idea to consolidate all stats collection into MOM. Then, all stats become usable within the policy and by vdsm for its own internal purposes. Today, MOM has one stats collection thread per VM and one thread for the host stats. It has an API for gathering the most recently collected stats which vdsm can use.

Isn't this what collectd (and its libvirt plugin) or pcp are already doing?

Lots of things collect statistics, but as of right now, we're using MOM and we're not yet using collectd on the host, right?

I think we should have a single stats collection service and clients for it. I think MOM and vdsm should get their stats from that service, rather than have either beholden to any new stats something needs to collect.
Re: [vdsm] moving the collection of statistics to external process
On 12/05/2012 10:16 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 09:01:24PM +0200, Itamar Heim wrote: On 12/05/2012 08:57 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 08:30:10PM +0200, Itamar Heim wrote: On 12/05/2012 04:42 PM, Adam Litke wrote:

I wanted to know what you think about it and if you have a better solution to avoid initiating so many threads. And is splitting vdsm a good idea here? At first look, my opinion is that it can help, and it would be nice to have a vmStatisticService that runs and writes the VMs' status to a separate log.

Vdsm recently started requiring the MOM package. MOM also performs some host and guest statistics collection as part of the policy framework. I think it would be a really good idea to consolidate all stats collection into MOM. Then, all stats become usable within the policy and by vdsm for its own internal purposes. Today, MOM has one stats collection thread per VM and one thread for the host stats. It has an API for gathering the most recently collected stats which vdsm can use.

Isn't this what collectd (and its libvirt plugin) or pcp are already doing?

Lots of things collect statistics, but as of right now, we're using MOM and we're not yet using collectd on the host, right?

I think we should have a single stats collection service and clients for it. I think MOM and vdsm should get their stats from that service, rather than have either beholden to any new stats something needs to collect.

How would this work for collecting guest statistics? Would we require collectd to be installed in all guests running under oVirt?

My understanding is collectd is installed on the host, and uses collectd's libvirt plugin to collect guest statistics?
Re: [vdsm] moving the collection of statistics to external process
On 12/05/2012 10:33 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 10:21:39PM +0200, Itamar Heim wrote: On 12/05/2012 10:16 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 09:01:24PM +0200, Itamar Heim wrote: On 12/05/2012 08:57 PM, Adam Litke wrote: On Wed, Dec 05, 2012 at 08:30:10PM +0200, Itamar Heim wrote: On 12/05/2012 04:42 PM, Adam Litke wrote:

I wanted to know what you think about it and if you have a better solution to avoid initiating so many threads. And is splitting vdsm a good idea here? At first look, my opinion is that it can help, and it would be nice to have a vmStatisticService that runs and writes the VMs' status to a separate log.

Vdsm recently started requiring the MOM package. MOM also performs some host and guest statistics collection as part of the policy framework. I think it would be a really good idea to consolidate all stats collection into MOM. Then, all stats become usable within the policy and by vdsm for its own internal purposes. Today, MOM has one stats collection thread per VM and one thread for the host stats. It has an API for gathering the most recently collected stats which vdsm can use.

Isn't this what collectd (and its libvirt plugin) or pcp are already doing?

Lots of things collect statistics, but as of right now, we're using MOM and we're not yet using collectd on the host, right?

I think we should have a single stats collection service and clients for it. I think MOM and vdsm should get their stats from that service, rather than have either beholden to any new stats something needs to collect.

How would this work for collecting guest statistics? Would we require collectd to be installed in all guests running under oVirt?

My understanding is collectd is installed on the host, and uses collectd's libvirt plugin to collect guest statistics?

Yes, but some statistics can only be collected by making a call to the oVirt guest agent (eg. guest memory statistics).
The logical next step would be to write a collectd plugin for ovirt-guest-agent, but vdsm owns the connections to the guest agents and probably does not want to multiplex those connections, for many reasons (security being the main one).

And some will come from qemu-ga, which libvirt will support? Maybe a collectd vdsm plugin for the guest agent stats?
Re: [vdsm] Back to future of vdsm network configuration
On 12/04/2012 07:49 PM, Simon Grinberg wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Dan Kenigsberg dan...@redhat.com Cc: Alon Bar-Lev alo...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org, Simon Grinberg si...@redhat.com, Andrew Cathrow acath...@redhat.com Sent: Monday, December 3, 2012 10:56:53 PM Subject: Re: [vdsm] Back to future of vdsm network configuration

On 12/03/2012 06:54 PM, Dan Kenigsberg wrote: On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote: On 12/03/2012 04:25 PM, Dan Kenigsberg wrote: On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote: - Original Message - From: Mark Wu wu...@linux.vnet.ibm.com To: VDSM Project Development vdsm-devel@lists.fedorahosted.org Cc: Alon Bar-Lev alo...@redhat.com, Dan Kenigsberg dan...@redhat.com, Simon Grinberg si...@redhat.com, Antoni Segura Puimedon asegu...@redhat.com, Igor Lvovsky ilvov...@redhat.com, Daniel P. Berrange berra...@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration

On 11/29/2012 04:24 AM, Alon Bar-Lev wrote: - Original Message - From: Dan Kenigsberg dan...@redhat.com To: Alon Bar-Lev alo...@redhat.com Cc: Simon Grinberg si...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.

On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:

Itamar threw a bomb, that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.

This bomb has been ticking forever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` at the least convenient moment.
The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-edge-ness we all cherish.

There is a difference between having a generic OS and having a generic setup: running your email server, file server, and LDAP on a node that is running VMs. I have no problem having a generic OS (as opposed to ovirt-node), but I want full control over it. Alon.

Can I say we have agreement that oVirt should cover two kinds of hypervisors? A stateless slave is good for pure and normal virtualization workloads, while a generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, in the build, and even in source code, according to their requirements and skills. I also think it will be good to support both modes!

It would also be good if we could rule the world! :) Now seriously... :) If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach a stable milestone. Having a good clean interface for vdsm networking within the stateless mode will allow a persistent implementation to exist even if the whole implementation of master and vdsm assumes statelessness. This kind of implementation will get a new state from the master, compare it to whatever exists on the host, and sync. I, of course, will be against investing resources in such a network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.

I cannot say that I do not fail to parse English sentences with double or triple negations...

I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distinction, as libvirt has. How about keeping our current setupNetwork API, with a minor change to its semantics - it would not persist anything.
A new persistNetwork API would be added, intended to persist the management network after it has been tested. On boot, only the management definitions would show up, and Engine (or a small local service on top of vdsm) would push the complete configuration.

How does this benefit over loading the last config, and then having the engine refresh (always/if needed)?

It's clearer for the local admin: if it's on the file system, it would be there after boot; he can do his worst to it, and we'd try to manage. Also, it is easier to recover from utterly-horrible remote commands which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.

I think we mentioned this before, but this will kill any way to have hosts come back to life, and also to have a policy on connecting to storage, even if the engine is still down. (One of these use cases is for the engine itself to be hosted on the hosts as well.) For this use case you'll need much
Re: [vdsm] Back to future of vdsm network configuration
On 12/03/2012 04:25 PM, Dan Kenigsberg wrote: On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote: - Original Message - From: Mark Wu wu...@linux.vnet.ibm.com To: VDSM Project Development vdsm-devel@lists.fedorahosted.org Cc: Alon Bar-Lev alo...@redhat.com, Dan Kenigsberg dan...@redhat.com, Simon Grinberg si...@redhat.com, Antoni Segura Puimedon asegu...@redhat.com, Igor Lvovsky ilvov...@redhat.com, Daniel P. Berrange berra...@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration

On 11/29/2012 04:24 AM, Alon Bar-Lev wrote: - Original Message - From: Dan Kenigsberg dan...@redhat.com To: Alon Bar-Lev alo...@redhat.com Cc: Simon Grinberg si...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.

On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:

Itamar threw a bomb, that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.

This bomb has been ticking forever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` at the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-edge-ness we all cherish.

There is a difference between having a generic OS and having a generic setup: running your email server, file server, and LDAP on a node that is running VMs. I have no problem having a generic OS (as opposed to ovirt-node), but I want full control over it. Alon.

Can I say we have agreement that oVirt should cover two kinds of hypervisors? A stateless slave is good for pure and normal virtualization workloads, while a generic host can keep the flexibility of customization.
In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, in the build, and even in source code, according to their requirements and skills. I also think it will be good to support both modes!

It would also be good if we could rule the world! :) Now seriously... :) If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach a stable milestone. Having a good clean interface for vdsm networking within the stateless mode will allow a persistent implementation to exist even if the whole implementation of master and vdsm assumes statelessness. This kind of implementation will get a new state from the master, compare it to whatever exists on the host, and sync. I, of course, will be against investing resources in such a network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.

I cannot say that I do not fail to parse English sentences with double or triple negations...

I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distinction, as libvirt has. How about keeping our current setupNetwork API, with a minor change to its semantics - it would not persist anything. A new persistNetwork API would be added, intended to persist the management network after it has been tested. On boot, only the management definitions would show up, and Engine (or a small local service on top of vdsm) would push the complete configuration.

How does this benefit over loading the last config, and then having the engine refresh (always/if needed)?

setSafeNetConfig and the rollback-on-boot mess would be scrapped. The only little problem would be to implement setupNetwork without playing with persisted ifcfg* files.
Having said that, let's come back to your original claim: "while generic host can keep the flexibility of customization". NOBODY, and I repeat my answer to Dan, NOBODY claims we should not support generic hosts. But the term 'generic' seems to confuse everyone... A generic host does not mean the administrator can do whatever he likes; it is just a host that is installed using the standard distribution installation procedure. Using a 'generic host' can be done with either stateful or stateless modes. However, what customization can be done, and how, to a resource that is managed by VDSM (eg: storage, network) is a completely different question. There cannot be two managers of the same resource; it is a rule of thumb. Any other approach is non-deterministic and may lead to huge resource investment with almost no benefit, as it will never be stable.

So moving back to the discussion of network configuration, I would like to suggest we could adopt both of the two solutions: the dynamic way (as Alon suggested in his previous mail) -- for oVirt node. It will take a step
Re: [vdsm] Back to future of vdsm network configuration
On 12/03/2012 06:54 PM, Dan Kenigsberg wrote: On Mon, Dec 03, 2012 at 04:28:16PM +0200, Itamar Heim wrote: On 12/03/2012 04:25 PM, Dan Kenigsberg wrote: On Mon, Dec 03, 2012 at 04:35:34AM -0500, Alon Bar-Lev wrote: - Original Message - From: Mark Wu wu...@linux.vnet.ibm.com To: VDSM Project Development vdsm-devel@lists.fedorahosted.org Cc: Alon Bar-Lev alo...@redhat.com, Dan Kenigsberg dan...@redhat.com, Simon Grinberg si...@redhat.com, Antoni Segura Puimedon asegu...@redhat.com, Igor Lvovsky ilvov...@redhat.com, Daniel P. Berrange berra...@redhat.com Sent: Monday, December 3, 2012 7:39:49 AM Subject: Re: [vdsm] Back to future of vdsm network configuration

On 11/29/2012 04:24 AM, Alon Bar-Lev wrote: - Original Message - From: Dan Kenigsberg dan...@redhat.com To: Alon Bar-Lev alo...@redhat.com Cc: Simon Grinberg si...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Wednesday, November 28, 2012 10:20:11 PM Subject: Re: [vdsm] MTU setting according to ifcfg files.

On Wed, Nov 28, 2012 at 12:49:10PM -0500, Alon Bar-Lev wrote:

Itamar threw a bomb, that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory.

This bomb has been ticking forever. We have ovirt-node images for pure hypervisor nodes, but we support plain Linux nodes, where local admins are free to `yum upgrade` at the least convenient moment. The latter mode can be the stuff that nightmares are made of, but it also allows the flexibility and bleeding-edge-ness we all cherish.

There is a difference between having a generic OS and having a generic setup: running your email server, file server, and LDAP on a node that is running VMs. I have no problem having a generic OS (as opposed to ovirt-node), but I want full control over it. Alon.

Can I say we have agreement that oVirt should cover two kinds of hypervisors?
A stateless slave is good for pure and normal virtualization workloads, while a generic host can keep the flexibility of customization. In my opinion, it's good for the oVirt community to provide choices for users. They could customize it in production, in the build, and even in source code, according to their requirements and skills. I also think it will be good to support both modes!

It would also be good if we could rule the world! :) Now seriously... :) If we want to ever have a working solution we need to focus, dropping wishful requirements in favour of the minimum required that will allow us to reach a stable milestone. Having a good clean interface for vdsm networking within the stateless mode will allow a persistent implementation to exist even if the whole implementation of master and vdsm assumes statelessness. This kind of implementation will get a new state from the master, compare it to whatever exists on the host, and sync. I, of course, will be against investing resources in such a network management plugin approach... but it is doable, and my vote is not something that you cannot safely ignore.

I cannot say that I do not fail to parse English sentences with double or triple negations...

I'd like to see an API that lets us define a persistent initial management interface, and create volatile network devices during runtime. I'd love to see a define/create distinction, as libvirt has. How about keeping our current setupNetwork API, with a minor change to its semantics - it would not persist anything. A new persistNetwork API would be added, intended to persist the management network after it has been tested. On boot, only the management definitions would show up, and Engine (or a small local service on top of vdsm) would push the complete configuration.

How does this benefit over loading the last config, and then having the engine refresh (always/if needed)?
It's clearer for the local admin: if it's on the file system, it would be there after boot; he can do his worst to it, and we'd try to manage. Also, it is easier to recover from utterly-horrible remote commands which had rendered our host incommunicado: the management interface used to send these commands -- and only it -- would show up after boot. This increases the probability that after fencing, we'd see the host again.

I think we mentioned this before, but this will kill any way to have hosts come back to life, and also to have a policy on connecting to storage, even if the engine is still down. (One of these use cases is for the engine itself to be hosted on the hosts as well.)
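The define/create split proposed in this thread can be sketched as a volatile running config that setupNetwork writes, with a separate persistNetwork call copying a tested definition into the config that survives reboot. Everything here is in-memory and illustrative; only the two call names and their semantics come from the discussion.

```python
# Illustrative model of the proposal: setupNetwork never persists;
# persistNetwork promotes a tested network (typically the management
# one) into the boot-surviving config.
class NetConfig:
    def __init__(self):
        self.running = {}     # volatile: lost on "reboot"
        self.persisted = {}   # only what persistNetwork copied over

    def setupNetwork(self, name, attrs):
        # Minor semantic change to the existing API: no persistence here.
        self.running[name] = attrs

    def persistNetwork(self, name):
        # Called after the network has been tested and found working.
        self.persisted[name] = self.running[name]

    def reboot(self):
        # On boot, only the persisted definitions show up; the Engine (or
        # a small local service on top of vdsm) pushes the rest afterwards.
        self.running = dict(self.persisted)
```

This makes the recovery argument concrete: after a bad remote command and a fence, only the persisted management interface comes back, which is the one needed to reach the host again.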
Re: [vdsm] MTU setting according to ifcfg files.
On 11/28/2012 01:20 PM, Saggi Mizrahi wrote: ...

VDSM should not bother with the issue at all, certainly not play a guessing game. Livnat, your 0.02$? This is exactly the reason why we should define a completely stateless slave host, and apply configuration including what you call 'defaults'.

Completely stateless is problematic because if the engine is down or unavailable and VDSM happens to restart, you can't use any of your resources.

That's actually a very good point. Going forward we would like hosts to be able to continue working when the engine is down, even post reboot. The engine passing the policy to the hosts, and the hosts assuming that policy is relevant post boot, would allow that. (Though relying on central network services like Quantum will also cause an issue for this architecture.)

The way forward is currently to get rid of most of the configuration in vdsm.conf. Only keep things that are necessary for communication with the engine (eg. core dump on/off, management interface/port, SSL on/off). Other VDSM configuration should have an API introduced to set it; that will be persisted but only configurable by management (eg. reserved host mem, guest ram overhead, migration timeouts). There should be a place where VDSM saves the configuration of owned resources (eg. managed storage connections, managed interfaces). This will be used by VDSM to make sure that the resources are configured properly after restarts/downtimes without the need of the engine.

To reiterate, the general logic for system resources should be that resources are either owned or used by VDSM; you never share ownership. Never assume ownership unless expressly given. VDSM has complete control over owned resources. VDSM has NO control over unowned resources; it can use them but never configure them. Every other hybrid scheme is just asking for trouble.

Or, store configuration before we perform any change so we can revert.
Assuming manual changes and distro-specific persistence makes the problem NP-complete in complexity, as we do not know what was changed, when, and how to revert it. Itamar threw a bomb in that we should co-exist on a generic host; this is something I do not know how to compute. I am still waiting for a response on where this requirement came from and whether it is mandatory. It's all about resource provisioning and ownership delegation. Hybrid mode is something brought up several times as a use case we should consider. So far our main concern was that SLA in the host would be needed (cgroups for example) between the native and guest workloads, as well as making sure hybrid nodes will not contend for critical resources, to reduce the risk of a need to fence them.
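The owned-resource persistence described above (VDSM saves the configuration of resources it owns and restores them after restarts without the engine) can be sketched roughly as follows. This is a hypothetical illustration, not actual VDSM code; the class name and file layout are made up:

```python
import json
import os
import tempfile

class OwnedResourceStore:
    """Persist the configuration of resources VDSM owns (e.g. managed
    storage connections, managed interfaces), so they can be restored
    after a restart or downtime without contacting the engine."""

    def __init__(self, path):
        self.path = path

    def save(self, resources):
        # Write atomically: a crash mid-write must never leave a torn
        # file, or the host would come up in an unknown state.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(resources, f)
        os.replace(tmp, self.path)

    def load(self):
        # No file means VDSM owns nothing yet; never guess ownership.
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)
```

On startup such a store would be loaded and only the owned resources reapplied, leaving unowned resources strictly alone, matching the "never share ownership" rule above.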
Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 03:53 AM, Dan Kenigsberg wrote: On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote: Management interface configuration is a separate issue. But it is an important issue that has to be discussed. If we perform changes of this interface when the host is in maintenance, we reduce the complexity of the problem. For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration. How would you know which bond mode to use? Which MTU? I don't understand the question. I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets. There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not allow Linux's default of 1500. I was thinking the manager may be using jumbo frames to talk to the host, and the host will have an issue with them since it is set to 1500 instead of 8k. Jumbo frames aren't a rare case. As for bonds, are you sure you can use a nic in a non-bonded mode for all bond modes? Next, what if we're using openvswitch, and you need some flow definitions for the management interface?
Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/28/2012 05:34 AM, Roni Luxenberg wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Dan Kenigsberg dan...@redhat.com Cc: Alon Bar-Lev alo...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org, Yaniv Kaul yk...@redhat.com Sent: Wednesday, November 28, 2012 11:01:35 AM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary On 11/28/2012 03:53 AM, Dan Kenigsberg wrote: On Tue, Nov 27, 2012 at 03:24:58PM -0500, Alon Bar-Lev wrote: Management interface configuration is a separate issue. But it is an important issue that has to be discussed. If we perform changes of this interface when the host is in maintenance, we reduce the complexity of the problem. For your specific issue, if there are two interfaces, one which is up during boot and one which is down during boot, there is no problem to bond them after boot without persisting configuration. How would you know which bond mode to use? Which MTU? I don't understand the question. I think I do: Alon suggests that on boot, the management interface would not have bonding at all, and use a single nic. The switch would have to assume that other nics in the bond are dead, and will use the only one which is alive to transfer packets. There is no doubt that we have to persist the vlan tag of the management interface, and its MTU, in the extremely rare case where the network would not allow Linux's default of 1500. I was thinking the manager may be using jumbo frames to talk to the host, and the host will have an issue with them since it is set to 1500 instead of 8k. Jumbo frames aren't a rare case. As for bonds, are you sure you can use a nic in a non-bonded mode for all bond modes? All bond modes have to cope with a situation where only a single nic is active and the rest are down, so one can boot with a single active nic and only activate the rest and promote to the desired bond mode upon getting the full network configuration from the manager.
Of course they need to handle a single active nic, but IIRC, the host must be configured for a bond matching the switch; i.e., you can't configure the switch to be in a bond, then boot the host with a single active nic in a non-bonded config. Changing the master interface MTU for either vlan or bond is required for management and non-management interfaces alike. So the logic would probably be to set max(mtu(slaves)), regardless of whether it is the management interface or not. I discussed this with Livnat; if there are applications that access the master interface directly we may break them, as the destination may not support a non-standard MTU. This is true in the current implementation and any future implementation. It is bad practice to use the master interface directly (mixed tagged/untagged); better to define in the switch that untagged communication belongs to vlanX, then use this explicit vlanX at the host. Next, what if we're using openvswitch, and you need some flow definitions for the management interface? I cannot answer that as I don't know openvswitch very well and don't know what flow definitions are; however, I do guess that it has a non-persistent mode that can affect any interface under its control. If you like I can research this one. You mainly need OVS for provisioning VM networks, so here too you can completely bypass OVS during boot and only configure it in a transactional manner upon getting the full network configuration from the manager. A general question: why would you need to configure VM networks on the host (assuming a persistent cached configuration) upon boot if it cannot talk to the manager? After all, in this case no resources would be scheduled to run on this host until the connection to the manager is restored and up-to-date network configuration is applied. thanks, Roni
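The max(mtu(slaves)) rule discussed above is simple to state in code. A hypothetical helper, not vdsm's actual logic:

```python
def master_mtu(slave_mtus, default=1500):
    """A bond/bridge master must be able to carry the largest frame any
    of its slaves (or the VLANs on top of it) may send, so its MTU is
    the maximum of the slaves' MTUs, falling back to the Linux default
    of 1500 when there are no slaves."""
    return max(slave_mtus, default=default)
```

This is exactly why an application talking to the master interface directly can break: the master may report a jumbo MTU (e.g. 9000) inherited from one slave network while the peer only supports 1500.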
Re: [vdsm] [Engine-devel] [ATTENTION] vdsm-bootstrap/host deployment (pre-3.2)
On 11/28/2012 04:59 PM, Ryan Harper wrote: * Alon Bar-Lev alo...@redhat.com [2012-11-28 15:33]: Leaving only vdsm-devel. - Original Message - From: Ryan Harper ry...@us.ibm.com To: Alon Bar-Lev alo...@redhat.com Cc: Dan Kenigsberg dan...@redhat.com, engine-devel engine-de...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org, users us...@ovirt.org Sent: Wednesday, November 28, 2012 11:22:59 PM Subject: Re: [Engine-devel] [vdsm] [ATTENTION] vdsm-bootstrap/host deployment (pre-3.2) snip If the sysadmin manually enables dumps, he may do this at a location of his own choice. Note that we've just swapped hats: you're arguing for letting a local admin log in and mess with system configuration, and I'm for keeping a centralized feature for storing and collecting core dumps. As problems like crashes are investigated per case and reproduction scenario. But again, I may be wrong and we should have a VDSM API command to start/stop storing dumps and manage this via its master... I very much like this idea. There was a thread a while back discussing[1] this very idea; I was looking for a way to enable a 'debugging' mode as well as a way to programmatically collect debugging info (which could include host stats, guest stats, logs and any core files). Certainly in such a scenario, being able to enable/disable various features of a debugging mode could include whether to enable core dumps as well as where to save them on the host. 1. http://comments.gmane.org/gmane.comp.emulators.ovirt.vdsm.devel/1387 Yes, I read this; however, I am unsure whether debug and low-level collection should be implemented as an in-band interface rather than side-band.
After many years of handling crappy bug reports that don't include the data needed for debugging, such as log files and other settings, and spending weeks going back and forth with the submitter on collecting what and where and how to collect it, I'm confident that having something that can programmatically enable debugging on-demand and provide a way to collect the relevant data (logs, cores, etc.) would make a dramatic improvement when handling these support issues. I'm also a firm believer in lowering the barriers for consumption. IIUC, a side-band tool is going to require additional configuration/authentication, and will ultimately need to access the API anyhow to determine various information about the specific VM. With that in mind, why wouldn't we want to look at a first-class debug API mode which can be used remotely and programmatically? A better question is, what are the drawbacks to having it in-band? Isn't this what the ovirt-log-collector is for?
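The in-band debug API Ryan argues for above could look something like the following sketch. Everything here is made up for illustration (the verb names, the artifact list, the paths); it is not an existing VDSM interface:

```python
import glob

class DebugMode:
    """Hypothetical in-band debug API: one verb toggles debugging
    features (core dumps, verbose logs), another gathers the relevant
    artifacts in a single call, so the submitter never has to hunt
    for log files and core locations manually."""

    def __init__(self, log_glob="/var/log/vdsm/*.log", core_dir="/var/core"):
        self.log_glob = log_glob
        self.core_dir = core_dir
        self.enabled = False
        self.features = {}

    def enable(self, core_dumps=True, verbose_logs=True):
        # A real implementation would flip ulimits / log levels here.
        self.enabled = True
        self.features = {"core_dumps": core_dumps,
                         "verbose_logs": verbose_logs}
        return self.features

    def collect(self):
        # One call returns everything a support engineer would ask for.
        if not self.enabled:
            return {"error": "debug mode is off"}
        return {
            "features": self.features,
            "logs": sorted(glob.glob(self.log_glob)),
            "core_dir": self.core_dir,
        }
```

The design point is the one made in the email: because this rides the existing API channel, it needs no extra configuration or authentication, unlike a side-band tool.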
Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary
On 11/26/2012 03:18 PM, Alon Bar-Lev wrote: - Original Message - From: Livnat Peer lp...@redhat.com To: Shu Ming shum...@linux.vnet.ibm.com Cc: Alon Bar-Lev abar...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Monday, November 26, 2012 2:57:19 PM Subject: Re: [vdsm] Future of Vdsm network configuration - Thread mid-summary On 26/11/12 03:15, Shu Ming wrote: Livnat, Thanks for your summary. I have comments below. 2012-11-25 18:53, Livnat Peer: Hi All, We have been discussing $subject for a while and I'd like to summarize what we agreed and disagreed on thus far. The way I see it there are two related discussions: 1. Getting the VDSM networking stack to be distribution agnostic. - We are all in agreement that the VDSM API should be generic enough to incorporate multiple implementations (discussed on this thread: Alon's suggestion, Mark's patch for adding support for netcf, etc.). - We would like to maintain at least one implementation as the working/up-to-date implementation for our users; this implementation should be distribution agnostic (as we all acknowledge this is an important goal for VDSM). I also think that with the agreement of this community we can choose to change our focus, from time to time, from one implementation to another as we see fit (today it can be OVS+netcf and in a few months we'll use the Quantum-based implementation if we agree it is better). 2. The second discussion is about persisting the network configuration on the host vs. dynamically retrieving it from a centralized location like the engine. Danken raised a concern that even when going with the dynamic approach, the host should persist the management network configuration. About dynamic retrieval from a centralized location, when will the retrieval start? Just in the very early stage of host booting, before the network functions? Or after host startup, in the normal running state of the host?
Before retrieving the configuration, how does the host network connect to the engine? I think we need a basic well-known network between the hosts and the engine first. Then, after the retrieval, hosts should reconfigure the network for later management. However, the timing of the retrieval and reconfiguration is challenging. We did not discuss the dynamic approach in detail on the list so far, and I think this is a good opportunity to start this discussion... From what was discussed previously I can say that the need for a well-known network was raised by danken; it was referred to as the management network. This network would be used for pulling the full host network configuration from the centralized location, at this point the engine. About the timing for retrieving the configuration, there are several approaches. One of them was described by Alon, and I think he'll join this discussion and maybe put it in his own words, but the idea was to 'keep' the network synchronized at all times. When the host has a communication channel to the engine and the engine detects there is a mismatch in the host configuration, the engine initiates an 'apply network configuration' action on the host. Using this approach we'll have a single code path to maintain, and that would reduce code complexity and bugs - that's quoting Alon Bar-Lev (Alon, I hope I did not twist your words/idea). On the other hand, the above approach makes local tweaks on the host (done manually by the administrator) much harder. Any other approaches? I'd like to add a more general question to the discussion: what are the advantages of taking the dynamic approach? So far I collected two reasons: - It is a 'cleaner' design; it removes complexity from the VDSM code, is easier to maintain going forward, and is less bug-prone (I agree with that one, as long as we keep the configuration-retrieval mechanism/algorithm simple).
- It adheres to the idea of having a stateless hypervisor - some more input on this point would be appreciated. Any other advantages? discussing the benefits of having the persisted Livnat Sorry for the delay. Some more expansion. ASSUMPTION After boot, a host running vdsm is able to receive communication from the engine. This means that the host has a legitimate layer 2 configuration and a layer 3 configuration for the interface used to communicate with the engine. MISSION Reduce the complexity of the implementation, so that only one algorithm is used in order to reach an operative state as far as networking is concerned. (Storage is extremely similar; I can s/network/storage/ and still be relevant.) DESIGN FOCAL POINT A host running vdsm is a complete slave of its master, be it ovirt-engine or another engine. Having a complete slave eases implementation: 1. The master always applies the settings as-is. 2. No need to consider slave state. 3. No need to implement AI to reach from unknown state X to known state Y + delta. 4. After reboot (or fence) the host is always in a known state. ALGORITHM A. Given communication to vdsm, construct
Re: [vdsm] Proposal to add Adam Litke as maintainer to vdsm
On 11/26/2012 03:39 AM, Federico Simoncelli wrote: +1 - Original Message - From: Itamar Heim ih...@redhat.com To: Ayal Baron aba...@redhat.com, Dan Kenigsberg dan...@redhat.com, Saggi Mizrahi smizr...@redhat.com, Federico Simoncelli fsimo...@redhat.com, Eduardo Warszawski ewars...@redhat.com, Igor Lvovsky ilvov...@redhat.com Cc: vdsm-devel@lists.fedorahosted.org Sent: Friday, November 16, 2012 5:50:27 PM Subject: Proposal to add Adam Litke as maintainer to vdsm Adam has been submitting numerous patches for VDSM for more than a year, most noticeably on improving VDSM API into, well, an API... I'd like to propose Adam as a maintainer for vdsm. Thanks, Itamar added adam as a vdsm maintainer.
Re: [vdsm] Future of Vdsm network configuration
On 11/17/2012 11:56 AM, Gary Kotton wrote: On 11/17/2012 11:00 AM, Alon Bar-Lev wrote: Hello, After the discussion calmed down, I want to ask a question once again. Why isn't this discussion focusing on the interface vdsm will use to access the network provider? Why should the vdsm core care which network technology it actually uses? Quantum? 1. That's still a specific implementation. 2. Last I checked, it is far from covering the API needed by vdsm for provisioning network configurations, rather than just consuming them (i.e., I don't remember Quantum ever intending to provide an API to bond physical interfaces, etc.). With a proper design of such an interface, and the ability to select the interface implementation using configuration, vdsm will be able to work with a variety of technologies without a change. Technologies can be network manager, ovs, libvirt or basic. What is popular now can be unpopular in the future; what is considered stable enough now may not be stable enough for future uses; what is maintained now may be unmaintained in the future. Developing tightly coupled software is something I would avoid if not absolutely required. People may vote on which interface they would like to have now and we can implement one, while in time we may see other implementations as contributions. This will also allow us to move from one technology to another with decent effort/costs if required for any reason. Best Regards, Alon Bar-Lev.
[vdsm] Proposal to add Adam Litke as maintainer to vdsm
Adam has been submitting numerous patches for VDSM for more than a year, most noticeably on improving VDSM API into, well, an API... I'd like to propose Adam as a maintainer for vdsm. Thanks, Itamar
Re: [vdsm] Review Request: Add an option to create a watchdog device.
On 10/31/2012 11:17 AM, Sheldon wrote: On 10/30/2012 03:45 PM, Itamar Heim wrote: On 10/30/2012 07:00 AM, Sheldon wrote: Looking for some review on this patch. http://gerrit.ovirt.org/#/c/7535/ Hi Sheldon, just wondering - are you planning to close this feature end-to-end (with ovirt engine)? A feature page would be nice (I think your commit message is almost good enough as an explanation; it just needs to cover how it fits with the engine side). IMO, it is nice for the engine to add UI to let the user decide to add this watchdog device. Like virt-manager, which has a UI option to add a watchdog device. The engine just needs to add the watchdog device configuration to the parameters when calling the VM create API, if a watchdog device is chosen. I'd say it's not only nice, but mandatory, if the feature is to be supported in ovirt and not only vdsm. My question was just whether there is a plan to close the loop on this (feature page, patches to engine, etc.) thanks, Itamar
Re: [vdsm] [Engine-devel] unmanaged devices thrown into 'custom' feature
On 10/23/2012 10:52 PM, Ayal Baron wrote: - Original Message - On Tue, Oct 23, 2012 at 05:53:20AM -0400, Doron Fediuck wrote: - Original Message - From: Livnat Peer lp...@redhat.com To: Dan Kenigsberg dan...@redhat.com Cc: engine-de...@ovirt.org, Genadi Chereshnya gcher...@redhat.com, vdsm-de...@fedorahosted.org Sent: Monday, October 22, 2012 8:29:20 AM Subject: Re: [Engine-devel] [vdsm] unmanaged devices thrown into 'custom' feature On 21/10/12 23:49, Dan Kenigsberg wrote: On Sun, Oct 21, 2012 at 11:57:10AM -0400, Eli Mesika wrote: - Original Message - From: Yair Zaslavsky yzasl...@redhat.com To: Livnat Peer lp...@redhat.com Cc: Genadi Chereshnya gcher...@redhat.com, engine-de...@ovirt.org, vdsm-de...@fedorahosted.org Sent: Sunday, October 21, 2012 5:38:54 PM Subject: Re: [Engine-devel] unmanaged devices thrown into 'custom' feature - Original Message - From: Livnat Peer lp...@redhat.com To: Dan Kenigsberg dan...@redhat.com Cc: Genadi Chereshnya gcher...@redhat.com, engine-de...@ovirt.org, vdsm-de...@fedorahosted.org Sent: Sunday, October 21, 2012 5:18:31 PM Subject: Re: [Engine-devel] unmanaged devices thrown into 'custom' feature On 21/10/12 16:42, Dan Kenigsberg wrote: I have just noticed that when a VM is started for the second time, Engine issues the create vdsm verb with some information regarding unmanaged devices (an example is shown below[1]) in the 'custom' property bag. I'm surprised about this, as I was not aware of this usage of the 'custom' dictionary, and Vdsm is not doing anything with the data. If I recall correctly, the idea of passing the unmanaged devices data in the custom property was to enable managing stable device addresses in the hooks (for devices that were added to the VM via hooks in the first place), so this info is there not for VDSM's use. For example, if you add a device in a hook it will be kept in the engine as a non-managed device.
Later, when starting the VM again, you would like to assign the same device address to your device, and you can do so because you have access to the original address in the custom properties of the VM. This is exactly what Eli explained to Genadi and Dan today. (I was asking here because I did not understand the verbal explanation.) This is taken from the Stable Device Address design in http://wiki.ovirt.org/wiki/Features/Design/StableDeviceAddresses Unmanaged Device - Unmanaged Device will be supported in the new format and will include all unhandled devices such as sound/controller/etc and future devices. Those devices will be persistent and will have a Type, a SubType (device specific) and an Address. For 3.1 an unmanaged Device is not exposed to any GUI/REST API. Unmanaged devices are passed to vdsm inside a Custom property. VDSM in its turn passes this as-is for possible hook processing. Thanks for the elaboration. Too bad that I've missed this issue before. Are you aware of any hook making use of this? I hope that hook writers are not using APIs that are not documented in vdsmd(8). It seems like a classic case where a generic bag interface is coerced into an awkward, partially-documented interface. I think that a better approach would have been to pass all devices (managed and unmanaged alike) in the 'devices' property, and let vdsm expose whatever is needed to the before_vm_start hook. Maybe we can still do this. That was the original idea, but Ayal objected and I think Igor did not like it as well... +2. The original design had an 'unmanaged' (or generic) device type, and all devices should have been normalized. But as explained, this was strongly rejected on the VDSM side, causing Eli to write some special handling for this anomaly. Can someone (Ayal?) explain the rejection on the Vdsm side? Hiding part of the API in the custom property bag requires strong reasoning indeed. It's not hiding anything. Today vdsm passes the libvirt xml to the hooks + the custom properties.
If you pass 'non-managed devices' through the devices API then basically you're saying to vdsm 'here is a bunch of things I want you to ignore, but be a sport and pass them on to the hooks'. Last I checked, that mechanism is custom properties. I don't see any reason to add another one. 1. This means hooks don't actually get the unmanaged devices today? 2. Custom properties are for passing parameters to the hook that the user can reasonably control; device management should not be part of that if not strictly related to the hook (from the user's side). 3. I'm guessing (I didn't get this the first time I read your answer) you are hinting that the unmanaged devices should be passed as a custom property to vdsm, auto-generated by the engine?
Re: [vdsm] [Spice-devel] [RFC]about the implement of text-based console
On 10/18/2012 12:13 PM, Alon Levy wrote: On 10/16/2012 12:18 AM, David Jaša wrote: Ewoud Kohl van Wijngaarden wrote on Mon, 15 Oct 2012 at 22:46 +0200: On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote: [SNIP] Hi Adam, could you explain in more detail how the streaming API can survive a VM migration? If we want to support migration, I think we should implement the console server outside of vdsm. Actually, it will work like a proxy, so we call it consoleProxy now. The consoleProxy can be deployed on the same machine as the engine, or standalone, or in a virtual machine. I think its working flow is as below: 1. The user requests to open a console from the engine. 2. The engine sends setTicket(uuid, ticket, hostofvm) to the consoleProxy; the consoleProxy needs to provide an API to the engine. 3. The engine returns the ticket to the user. 4. The user runs 'ssh UUID@consoleProxy' with the ticket. 5. The consoleProxy connects with 'virsh -c qemu+tls://hostofvm/system console'; the host running the consoleProxy should have certificates for all vdsm hosts. 6. The consoleProxy redirects the output of 'virsh -c qemu+tls://hostofvm/system console' over the ssh protocol, the same as the current implementation; we can use the system sshd or paramiko. If we use paramiko, it almost entirely reuses the code of the consoleServer that I have already written. After the vm is migrated: 1. The engine tells the consoleProxy that the vm was migrated. I guess the engine can know when a vm has finished migration? And how does the engine push the migration-finished event to the consoleProxy? The engine only has a REST API; doesn't it support event push? Can the streaming API resolve this problem? 2. The consoleProxy kills 'virsh console'. 3. It reconnects to the vm's new host with 'virsh console' again. Some characters will be missing if the reconnection isn't fast enough; this is hard to resolve short of implementing ssh in qemu. I guess the streaming API has some problems too. 4. It continues redirecting 'virsh console'. Actually, if we implement the consoleProxy outside of vdsm, we don't need to decide now whether it will run on a physical machine or a virtual machine. A lot of details need thought; I haven't covered all the problems.
And I don't have code to prove this works yet; it's just based on thinking. Does this make sense? How is this handled with current displays like VNC and Spice? Extending spice to provide just serial console remoting actually seems the easiest way to provide a remote text-only console, because most of the code path is already mature (used for client-to-guest-agent communication), and e.g. spicy could just provide a device where e.g. screen could connect, or just provide the console itself. CCing spice-devel. Would it allow users to script with/over it like they can with ssh? If I understand correctly, the idea is to add another channel for spice that would connect to a char device in qemu that in turn connects to a serial port. The result is a spice client that can display and interact, but not a scripting extension. We could also create a unix domain socket to expose this connection on the client, and the client could then use that for scripting (but this would be instead of displaying, since you can't multiplex the console in a meaningful way, unless you run screen/tmux over it maybe): remote-viewer --spice-console-unix-domain-socket /tmp/spice.uds (This option assumes we want a single console channel; if we have multiple we will need to name them too.) Anyone will be able to script it using, for instance: socat UNIX-CONNECT:/tmp/spice.uds SYSTEM:'echo hello world' We could also turn it into a pty (socat can do that). I think using spice this way may be a very good solution to proxy a serial console. The only caveat is that it requires the client to install spice, vs. just using ssh.
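The scripting idea above (expose the console as a unix domain socket on the client, then drive it with socat or any program) can be demonstrated with a toy stand-in. The server below merely fakes a guest console prompt on a unix socket; it is not Spice or remote-viewer code, just an illustration of why a socket endpoint is scriptable:

```python
import os
import socket
import tempfile
import threading
import time

def fake_console(path, captured):
    """Stand-in for the unix domain socket a spice console channel could
    be exposed on: emit a prompt, record what the scripting side sends."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"login: ")          # pretend this came from the guest
    captured.append(conn.recv(64))    # what the script answered
    conn.close()
    srv.close()

path = os.path.join(tempfile.mkdtemp(), "spice.uds")
captured = []
t = threading.Thread(target=fake_console, args=(path, captured))
t.start()

# The "script" side: what `socat UNIX-CONNECT:... SYSTEM:...` would do.
cli = None
for _ in range(100):                  # wait until the server has bound
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        cli.connect(path)
        break
    except (FileNotFoundError, ConnectionRefusedError):
        cli.close()
        time.sleep(0.01)
prompt = cli.recv(64)                 # read the console prompt
cli.sendall(b"root\n")                # answer it programmatically
cli.close()
t.join()
```

As the email notes, socat can do the client half in one line, including turning the socket into a pty.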
Re: [vdsm] new API verb: getVersionInfo()
On 10/18/2012 06:03 PM, Saggi Mizrahi wrote: Currently getVdsCaps() does a lot of unrelated things, most of them with no relation to capabilities. This was done because of HTTP overhead: instead of calling multiple commands we call one that does everything. I agree with the suggestion that getVdsCaps() should actually return the capabilities. Capabilities being: - storage core version and supported domain formats - VM core version and supported host capabilities - network core and capabilities - etc... These should all be mostly static and set at boot. As to the query API, I personally dislike the idea of a bag API. Now that we are moving away from HTTP, call overhead is no longer an issue, so we can have multiple verbs and call them sequentially. In actuality we already do: internally, getVdsCaps() just aggregates other APIs. This makes the return values of the method easier to handle, and makes changing the results of an API call not affect users that don't care about that change. This also has better performance, as storage APIs tend to slow the response, and sending multiple commands would mean that you can get the network stats even when the storage server is down. I thought getVdsCaps returns the storage results from a cache, which is refreshed by another thread, to make sure getVdsCaps has no latency.
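The cache-refreshed-by-another-thread pattern Itamar describes above is a common one. A minimal sketch (hypothetical class, not vdsm's actual code): an expensive sampler runs in a background thread, and readers always get the last cached value instantly, even while storage is slow or down.

```python
import threading

class CachedSampler:
    """Run an expensive sampler (e.g. storage stats) in a background
    thread and serve the last value from a cache, so a caller such as
    getVdsCaps never blocks on slow or unreachable storage."""

    def __init__(self, sample_fn, interval=2.0):
        self._sample_fn = sample_fn
        self._interval = interval
        self._lock = threading.Lock()
        self._value = None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            value = self._sample_fn()     # may take seconds on bad storage
            with self._lock:
                self._value = value
            self._stop.wait(self._interval)

    def get(self):
        # Instant: returns the last sample, possibly slightly stale.
        with self._lock:
            return self._value

    def stop(self):
        self._stop.set()
        self._thread.join()
```

The trade-off is exactly the one in the thread: readers see low latency but possibly stale data, and a stuck sampler stalls only its own refresh loop, not the API callers.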
Re: [vdsm] [RFC]about the implement of text-based console
On 10/18/2012 09:13 PM, Dave Allan wrote: On Thu, Oct 18, 2012 at 02:21:44AM +0200, Itamar Heim wrote: On 10/15/2012 07:41 PM, Dave Allan wrote: On Fri, Oct 12, 2012 at 11:25:47AM -0500, Ryan Harper wrote: * Adam Litke a...@us.ibm.com [2012-10-12 08:13]: On Fri, Oct 12, 2012 at 04:55:20PM +0800, Zhou Zheng Sheng wrote: on 09/04/2012 22:19, Ryan Harper wrote: * Dan Kenigsberg dan...@redhat.com [2012-09-04 05:53]: On Tue, Sep 04, 2012 at 03:05:37PM +0800, Xu He Jie wrote: On 09/03/2012 10:33 PM, Dan Kenigsberg wrote: On Thu, Aug 30, 2012 at 04:26:31PM -0500, Adam Litke wrote: On Thu, Aug 30, 2012 at 11:32:02AM +0800, Xu He Jie wrote: Hi, I submitted a patch for a text-based console http://gerrit.ovirt.org/#/c/7165/ The issue I want to discuss is as below: 1. fixed port VS dynamic port. Use a fixed port for all VMs' consoles; connect to the console with 'ssh vmUUID@ip -p port', distinguishing VMs by vmUUID. The current implementation is that vdsm allocates a port for the console dynamically and spawns a sub-process when the VM is created. In the sub-process, the main thread is responsible for accepting new connections and dispatching the console's output to each connection. When a new connection comes in, the main process creates a new thread for it. Dynamic allocation assigns a port to each VM from a port range, which isn't good for firewall rules, so I got a suggestion to use a fixed port and connect to the console with 'ssh vmuuid@hostip -p fixport'; this is simpler for the user. We need one process to accept new connections on the fixed port and, when a new connection comes in, spawn a sub-process for each vm. But because the console can only be opened by one process, the main process needs to be responsible for dispatching the console output of all vms and all connections. So the code will be a little more complex than with dynamic ports. So this is dynamic port VS fixed port, and simple code VS complex code. From a usability point of view, I think the fixed port suggestion is nicer.
This means that a system administrator needs only to open one port to enable remote console access. If your initial implementation limits console access to one connection per VM, would that simplify the code? Yes, using a fixed port for all consoles of all VMs seems like a cooler idea. Besides the firewall issue, there's user experience: instead of calling getVmStats to learn the vm port and then using ssh, only one ssh call is needed. (Taking this one step further - it would make sense to add another layer on top, directing console clients to the specific host currently running the Vm.) I did not take a close look at your implementation, and did not research this myself, but have you considered using sshd for this? I suppose you can configure sshd to collect the list of known users from `getAllVmStats`, and force it to run a command that redirects the VM's console to the ssh client. It has the potential of being a more robust implementation. I have considered using sshd and an ssh tunnel. They can't implement a fixed port and a shared console. Would you elaborate on that? Usually sshd listens on the fixed port 22, and allows multiple users to have independent shells. What do you mean by a shared console? With the current implementation we can do anything we want. Yes, it is completely under our control, but there are downsides, too: we have to maintain another process, and another entry point, instead of configuring a universally-used, well maintained and debugged application. Think of the security implications of having another remote shell access point to a host. I'd much rather trust sshd if we can make it work. Dan. At first glance, the standard sshd on the host is stronger and more robust than a custom ssh server, but the risk of using the host sshd is high. If we implement this feature via the host sshd, when a hacker attacks the sshd successfully, he will get access to the host shell.
After all, the custom ssh server is not for accessing the host shell, but just for forwarding the data from the guest console (a host /dev/pts/X device). If we use a custom ssh server, the code in it only does 1. auth, 2. data forwarding; when a hacker attacks it, he just gets access to that virtual machine. Notice that there is no code in the custom ssh server for logging in to the host, and the custom ssh server can be confined under selinux, allowed to access only /dev/pts/X. In fact, using a custom VNC server in qemu is as risky as a custom ssh server in vdsm; if we accept the former, then I can accept the latter. The considerations are how robust the custom ssh server is, and the difficulty of maintaining it. In He Jie's current patch, the ssh auth and transport library is an open-source third-party project; unless that project is well maintained and well proven, using it can be risky. So my opinion is to use neither the host sshd nor a custom ssh server. Maybe we can apply the suggestion from Dan Yasny, running a standard sshd in a very small VM on every host, and forward data from
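The fixed-port design debated above (one listener, each VM's console output shared across all attached connections) boils down to a dispatch table keyed by vmUUID. A minimal sketch, not taken from the gerrit patch; all names here are invented for illustration:

```python
class ConsoleDispatcher:
    """Fan out one VM console's output to every client attached to it."""

    def __init__(self):
        # vm_uuid -> list of per-client write callables
        self._clients = {}

    def attach(self, vm_uuid, writer):
        # Register a client connection's write callback for one VM.
        self._clients.setdefault(vm_uuid, []).append(writer)

    def detach(self, vm_uuid, writer):
        writers = self._clients.get(vm_uuid, [])
        if writer in writers:
            writers.remove(writer)

    def broadcast(self, vm_uuid, data):
        # Called by the main process when output is read from the
        # VM's console pty; every attached connection sees the same bytes.
        for writer in self._clients.get(vm_uuid, []):
            writer(data)
```

This is the piece that makes the fixed-port code more complex than the dynamic-port version: a single process owns every pty and must route output to the right set of connections.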
Re: [vdsm] [Engine-devel] Network related hooks in vdsm
On 10/16/2012 09:31 AM, Livnat Peer wrote: On 16/10/12 08:52, Mike Kolesnik wrote: - Original Message - On 10/10/12 16:47, Igor Lvovsky wrote: Hi everyone, As you know vdsm has a hooks mechanism and we already support dozens of hooks for different needs. Now it's the network's turn. We would like to get your comments on our proposal for network-related hooks. In general we are planning to prepare a framework for future support of a bunch of network-related hooks. Some of them were already proposed by Itzik Brown [1] and Dan Yasny [2]. Below you can find the additional hooks we propose: Many of the API calls below are deprecated. Why do we want to add before/after hooks to deprecated APIs? They are actually still very much in use through the REST API. Deprecated does not mean not in use; it means not to be used going forward. Today, if a user is using a 3.1 cluster/DC in the UI, or the setupNetworks API (which is the recommended way to configure your network in 3.1 and in future versions), the hooks for add/edit-Network won't get activated, and that is confusing to the users (and the developers). Perhaps we should address just the logical entry points instead of specific commands. A command such as setupNetworks can trigger multiple logical events in which hooks can be planted (the same goes for edit network, on a smaller scale). What you are suggesting above is to deviate from the current hook mechanism we have in VDSM and add some logic to where/when we activate hooks. That's an interesting suggestion; I suggest writing a wiki page and starting to think through the implementation implications. Since I like the idea I'll work with you on the wiki and we'll see if we can get something more useful to the users and send a formal proposal. The question is whether there is any high demand/priority for network hooks other than hotplug NIC, and whether we have a clear vision of a stable API for them.
one thing to consider is allowing users to define custom properties at the logical network and virtual NIC level, though. ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
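To make the hook discussion concrete: vdsm hooks are plain executables dropped under a per-hook-point directory, and VM-lifecycle hooks receive the domain XML through a temporary file whose path is in the _hook_domxml environment variable. Whether the proposed network hooks would reuse that same convention is an assumption here; this sketch shows the general shape of such a hook script using only the stdlib (real hooks typically use vdsm's `hooking` helper module):

```python
#!/usr/bin/python
# Sketch of a vdsm-style hook script: read the domain XML from the file
# named by _hook_domxml, modify it, and write it back for vdsm to use.
import os
import xml.etree.ElementTree as ET


def run_hook(environ):
    path = environ["_hook_domxml"]
    tree = ET.parse(path)
    root = tree.getroot()
    # Illustrative change only: tag the domain description so we can
    # observe that the hook ran. A real network hook would edit
    # interface or network elements instead.
    desc = root.find("description")
    if desc is None:
        desc = ET.SubElement(root, "description")
    desc.text = (desc.text or "") + " [touched by example hook]"
    tree.write(path)


if __name__ == "__main__":
    run_hook(os.environ)
```

The before/after distinction debated in the thread then just maps to which directory the script is installed in.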
Re: [vdsm] [RFC]about the implement of text-based console
On 10/16/2012 12:18 AM, David Jaša wrote: Ewoud Kohl van Wijngaarden wrote on Mon, 15 Oct 2012 at 22:46 +0200: On Tue, Oct 16, 2012 at 12:51:23AM +0800, Xu He Jie wrote: [SNIP] Hi, Adam, could you explain in more detail how the streaming API can survive a VM migration? If we want to support migration, I think we should implement the console server outside of vdsm. It will actually work like a proxy, so let's call it consoleProxy for now. The consoleProxy can be deployed on the same machine as the engine, standalone, or in a virtual machine. I think its working flow is as follows: 1. The user requests a console from the engine. 2. The engine calls setTicket(uuid, ticket, hostofvm) on the consoleProxy; the consoleProxy needs to provide an API to the engine. 3. The engine returns the ticket to the user. 4. The user runs 'ssh UUID@consoleProxy' with the ticket. 5. The consoleProxy connects with 'virsh -c qemu+tls://hostofvm/system console'; the host running the consoleProxy must have the certificates of all vdsm hosts. 6. The consoleProxy redirects the output of 'virsh -c qemu+tls://hostofvm/system console' over the ssh protocol, the same as the current implementation. We can use the system sshd or paramiko; if we use paramiko, it reuses almost all of the consoleServer code I have already written. After the VM is migrated: 1. The engine tells the consoleProxy that the VM was migrated. I guess the engine knows when a VM finishes migration, but how does the engine push that event to the consoleProxy? The engine only has a REST API and doesn't support event push; can the streaming API solve this problem? 2. The consoleProxy kills 'virsh console'. 3. It reconnects to the VM's new host with 'virsh console' again. Some characters will be lost if the reconnection isn't fast enough; this is hard to resolve short of implementing ssh in qemu, and I guess the streaming API has the same problem. 4. It continues redirecting 'virsh console'. Actually, if we implement the consoleProxy outside of vdsm, we don't need to decide now whether it will run on a physical or virtual machine. There are a lot of details to think through; I haven't covered every problem, and I have no code yet to prove this works.
This is just from thinking it through; does it make sense? How is this handled with current displays like VNC and Spice? Extending spice to provide serial console remoting actually seems the easiest way to provide a remote text-only console, because most of the code path is already mature (it is used for client-to-guest-agent communication), and e.g. spicy could just provide a device that e.g. screen could connect to, or provide the console itself. CCing spice-devel. Would it allow users to script with/over it like they can with ssh?
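The ticket bookkeeping in steps 1-4 above, plus the migration notification, can be sketched as a small state object. All names are invented for illustration; the real proxy would then drive `virsh -c qemu+tls://<host>/system console` against the host returned by authenticate():

```python
import time


class ConsoleProxyState:
    """Sketch of consoleProxy bookkeeping: tickets set by the engine,
    checked on user connect, and host updated on migration."""

    def __init__(self, ticket_ttl=120):
        self._ttl = ticket_ttl
        self._tickets = {}  # vm_uuid -> (ticket, host, expiry)

    def set_ticket(self, vm_uuid, ticket, host):
        # Step 2: engine registers a ticket before handing it to the user.
        self._tickets[vm_uuid] = (ticket, host, time.time() + self._ttl)

    def authenticate(self, vm_uuid, ticket):
        # Step 4/5: validate the user's ticket; return the host to
        # open virsh console against, or None on failure.
        entry = self._tickets.get(vm_uuid)
        if entry is None:
            return None
        expected, host, expiry = entry
        if ticket != expected or time.time() > expiry:
            return None
        return host

    def migrated(self, vm_uuid, new_host):
        # Post-migration step 1: engine notifies the proxy, which would
        # then kill and re-run virsh console against new_host.
        if vm_uuid in self._tickets:
            ticket, _old_host, expiry = self._tickets[vm_uuid]
            self._tickets[vm_uuid] = (ticket, new_host, expiry)
```

The open question from the thread (how the engine pushes the migration event to the proxy) sits outside this object; migrated() is just the handler such a push would invoke.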
Re: [vdsm] how engine get files from node???
On 09/24/2012 02:46 PM, Ryan Harper wrote: * Sheldon shao...@linux.vnet.ibm.com [2012-09-24 06:50]: On 09/21/2012 08:14 PM, Laszlo Hornyak wrote: - Original Message - From: Sheldon shao...@linux.vnet.ibm.com To: vdsm-devel@lists.fedorahosted.org Sent: Friday, September 21, 2012 11:42:05 AM Subject: [vdsm] how engine get files from node??? Hi all, I have submitted a patch about the watchdog device http://gerrit.ovirt.org/#/c/7535/ If we set the 'dump' action for the watchdog, qemu will generate a dump file, and I have gotten some feedback asking how the engine gets these files. When adding the host (if installation is enabled) the engine does some ssh/scp to the new host, but this can be turned off. Also, the guideline is to have a single interface with the host, and after installation the engine does not try to communicate with vdsm in any way other than the xmlrpc interface. Do you mean ssh/scp may not be available on the host, so an xmlrpc interface should be added to get the files on the host? ssh/scp is available, but requires user authentication. We already have one entry point into the node, the VDSM API, which currently runs over XMLRPC; so, instead of attempting to coordinate additional users and authentication, provide a way to programmatically get at the data via XMLRPC. Is this needed by the engine interactively, or just for log collection? Logs/cores/dumps are collected today by the log collector utility, based on ssh/scp leveraging sos; but that isn't the engine getting logs, rather a standalone utility for collecting them. Is the file needed by the engine for some flow, or just log collection for the user's benefit? scp? Or will vdsm support a new API to list and get these files? IMO, the new API should list and get not only dump files, but other kinds of files as well.
-- Sheldon Feng shao...@linux.vnet.ibm.com IBM Linux Technology Center
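If such a "get file" verb were added, the simplest shape is probably chunked base64 transfer over the existing XMLRPC channel, since XMLRPC payloads are XML and cannot carry raw bytes directly. A hypothetical sketch; these function names are invented and are not part of the real VDSM API:

```python
import base64
import os

CHUNK = 64 * 1024  # example chunk size


def read_chunk(path, offset, size=CHUNK):
    """Server side: return one base64-encoded chunk plus an EOF flag.
    This is the shape a vdsm verb could return over XMLRPC."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(size)
    return {"data": base64.b64encode(data).decode("ascii"),
            "eof": offset + len(data) >= os.path.getsize(path)}


def fetch_file(path, call=read_chunk):
    """Client side: loop over chunks until EOF, decoding each one.
    `call` stands in for the remote XMLRPC invocation."""
    out, offset = b"", 0
    while True:
        resp = call(path, offset)
        chunk = base64.b64decode(resp["data"])
        out += chunk
        offset += len(chunk)
        if resp["eof"]:
            return out
```

A companion "list files" verb would then just return paths and sizes under an allowed directory, addressing Sheldon's point that the API should cover more than dump files.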
Re: [vdsm] [Engine-devel] is gerrit.ovirt.org down? eom
On 09/12/2012 05:50 PM, Shu Ming wrote: It seems gerrit has gone down several times recently. Is there any special reason? I assume the jenkins/gerrit integration is overloading it a bit. On 2012-9-12 22:45, Alon Bar-Lev wrote: yes. - Original Message - From: Shireesh Anjal san...@redhat.com To: engine-de...@ovirt.org Sent: Wednesday, September 12, 2012 5:43:35 PM Subject: [Engine-devel] is gerrit.ovirt.org down? eom ___ Engine-devel mailing list engine-de...@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel
Re: [vdsm] [Engine-devel] is gerrit.ovirt.org down? eom
On 09/12/2012 08:15 PM, Adam Litke wrote: So the fix is to just regularly restart gerrit? Do we have any idea about the real, underlying problem? the error is in jetty; i didn't look at fixing the core issue in jetty. moving to another container is possible, but makes managing it for upgrades much less trivial. so far most threads about this on google mention a cron job to detect the hang and restart. I actually expect moving to a hosted physical server will make this less of an issue, based on the previous experience of extending the EC2 instance. if it doesn't help, we may need to tackle the core jetty issue ourselves. On Wed, Sep 12, 2012 at 11:56:44AM -0400, Eyal Edri wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Asaf Shakarchi ashak...@redhat.com Cc: Alon Bar-Lev alo...@redhat.com, Shireesh Anjal san...@redhat.com, engine-de...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org, Shu Ming shum...@linux.vnet.ibm.com, Eyal Edri ee...@redhat.com Sent: Wednesday, September 12, 2012 6:34:56 PM Subject: Re: [Engine-devel] is gerrit.ovirt.org down? eom On 09/12/2012 06:23 PM, Asaf Shakarchi wrote: It happens from time to time; a restart is required, Itamar only. restarted. eyal - can we make progress on the jenkins job, with permission for more people to restart gerrit? the job is ready http://jenkins.ovirt.org/view/system-monitoring/job/restart_gerrit_service but i need the jenkins user to have access to the gerrit server + sudo access to run a 'service' restart... it has access to www.ovirt.org but not to gerrit.ovirt.org. others - please email infra about gerrit issues (well, i personally always help as well) - Original Message - Yes, I am experiencing this too... Itamar?
- Original Message - From: Shu Ming shum...@linux.vnet.ibm.com To: Alon Bar-Lev alo...@redhat.com Cc: Shireesh Anjal san...@redhat.com, engine-de...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Wednesday, September 12, 2012 5:50:14 PM Subject: Re: [Engine-devel] is gerrit.ovirt.org down? eom It seems gerrit has gone down several times recently. Is there any special reason? On 2012-9-12 22:45, Alon Bar-Lev wrote: yes. - Original Message - From: Shireesh Anjal san...@redhat.com To: engine-de...@ovirt.org Sent: Wednesday, September 12, 2012 5:43:35 PM Subject: [Engine-devel] is gerrit.ovirt.org down? eom -- --- 舒明 Shu Ming Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shum...@cn.ibm.com or shum...@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC
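The cron-based workaround mentioned above (detect the jetty hang, restart gerrit) could look roughly like the sketch below. The URL, timeout and restart command are placeholders, not the actual oVirt infra setup:

```python
import subprocess
import urllib.request


def gerrit_alive(url="http://gerrit.ovirt.org/", timeout=10):
    """Probe gerrit's HTTP front end; a hung jetty stops answering."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False


def watchdog(alive=gerrit_alive,
             restart=lambda: subprocess.call(
                 ["sudo", "service", "gerrit", "restart"])):
    """Run from cron every few minutes: restart only when the probe fails."""
    if not alive():
        restart()
        return "restarted"
    return "ok"
```

The dependency injection (`alive`, `restart` as parameters) is just to keep the decision logic testable without touching a real service.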
Re: [vdsm] Change in vdsm[master]: Add uploadIso API call for pushing ISOs into an active iso_d...
On 09/10/2012 05:45 PM, Ryan Harper wrote: * ih...@redhat.com ih...@redhat.com [2012-09-09 15:39]: Itamar Heim has posted comments on this change. Change subject: Add uploadIso API call for pushing ISOs into an active iso_domain via pool .. Patch Set 1: how would this work for anyone using the vdsm api remotely? The wget method supports remote acquisition. so how would a remote action look exactly? say, ovirt engine uploading the image over the api? or just a remote user - would the wget happen on vdsm rather than on the node? usually when uploading, the client has access to the image being uploaded and it is passed over the api to the target?
Re: [vdsm] Patch review process
On 09/10/2012 08:33 PM, Ryan Harper wrote: What's the point of going to the list if not to be able to respond to email? To be able to see what's going on in bulk, offline, via a mail client. But go on gerrit to reply/discuss, or some of your comments will get lost from the patch activity; if you comment via email, other reviewers may not see your comments when they review the patch in gerrit.
Re: [vdsm] [RFC] GlusterFS domain specific changes
On 09/07/2012 08:21 AM, M. Mohan Kumar wrote: On Thu, 6 Sep 2012 18:59:19 -0400 (EDT), Ayal Baron aba...@redhat.com wrote: - Original Message - - Original Message - From: M. Mohan Kumar mo...@in.ibm.com To: vdsm-devel@lists.fedorahosted.org Sent: Wednesday, July 25, 2012 1:26:15 PM Subject: [vdsm] [RFC] GlusterFS domain specific changes We are developing a GlusterFS server translator to export block devices as regular files to the client. Using block devices to serve VM images gives performance improvements, since it avoids some file system bottlenecks in the host kernel. The goal is to use one block device (i.e. a file on the client side) per VM image and feed this file to QEMU to get the performance improvements. QEMU will talk to the glusterfs server directly using libgfapi. Currently we support exporting only Volume Groups and Logical Volumes. Logical volumes are exported as regular files to the client. Are you actually using LVM behind the scenes? If so, why bother with exposing the LVs as files and not raw block devices? Ayal, the idea is to provide an FS interface for managing block devices. One can mount the Block Device Gluster Volume, then create an LV and size it just by:

$ touch lv1
$ truncate -s5G lv1

Other file commands can be used to clone and snapshot LVs:

$ ln lv1 lv2        # clones
$ ln -s lv1 lv1.sn  # creates a snapshot

By enabling this feature GlusterFS can directly export storage on a SAN. We are planning to add a feature to export LUNs as regular files as well in the future. In GlusterFS terminology, a volume capable of exporting block devices is created by specifying the Volume Group (i.e. VG in Logical Volume management). The Block Device translator (BD xlator) exports this volume group as a directory and the LVs under it as regular files. At the gluster mount point, creating a file results in creating a logical volume, removing a file results in removing a logical volume, etc.
When a GlusterFS volume enabled with the BD xlator is used, directory creation in that gluster mount path is not supported, because a directory maps to a Volume Group in the BD xlator. But that could be an issue in a VDSM environment: when a new VDSM volume is created for a GlusterFS domain, VDSM mounts the storage domain, creates directories under it, and creates files for the VM image and other uses (like metadata). Is it possible to modify this behavior in VDSM to use a flat structure instead of creating directories with VM images and other files underneath? I.e., for a GlusterFS domain with the BD xlator, VDSM would not create any directory and would create all required files directly under the mount point. From your description I think that GlusterFS for block devices is actually more similar to what happens with the regular block domains. You would probably need to mount the share somewhere in the system and then use symlinks to point to the volumes. Create a regular block domain and look inside /rhev/data-center/mnt/blockSD; you'll probably get the idea of what I mean. That said, we'd need to come up with a way of extending the LVs on the gluster server when required (for thin provisioning). Why? If it's exposed as a file, that probably means it supports sparseness; i.e. if this becomes a new type of block domain, it should only support 'preallocated' images. Before using the LVs we will always truncate to the required size, which resizes the LV. I didn't get what you mean about thin provisioning, but I have proof-of-concept code using dm-thin targets showing that BD xlators can be extended to use dm-thin targets for thin provisioning. So even though this is block storage, it will be extended as needed? How does that work exactly? Say I have a VM with a 100GB disk.
Thin provisioning means we initially allocate only, say, 1GB to it; then, as the guest uses that storage, we allocate more as needed (lvextend, pause guest, lvrefresh, resume guest).
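The extend-as-needed flow just described can be reduced to a watermark policy: extend once the guest's highest written offset gets too close to the allocated size. A sketch with example numbers; the 1GB chunk and 0.5GB watermark are illustrative defaults, not vdsm's actual values, and the lvextend/pause/lvrefresh/resume sequence itself is left out:

```python
GiB = 1024 ** 3


def next_allocation(allocated, highest_write, chunk=1 * GiB,
                    watermark=GiB // 2, virtual_size=100 * GiB):
    """Return the new allocation size in bytes, or None if no extend
    is needed yet.

    allocated     -- bytes currently backed by the LV
    highest_write -- highest offset the guest has written so far
    """
    if allocated >= virtual_size:
        return None  # fully allocated, nothing left to extend
    if allocated - highest_write > watermark:
        return None  # still enough headroom below the watermark
    # Grow by one chunk, never past the disk's virtual size; the caller
    # would then run lvextend, pause the guest, lvrefresh, and resume.
    return min(allocated + chunk, virtual_size)
```

For the 100GB-disk example in the thread: start with 1GB allocated, and each time the guest writes within 0.5GB of the end, the LV grows by another 1GB until it reaches 100GB.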
Re: [vdsm] [RFC] Implied UUIDs in API
On 08/31/2012 12:33 AM, Alon Bar-Lev wrote: - Original Message - From: Saggi Mizrahi smizr...@redhat.com To: arch a...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Friday, August 31, 2012 12:19:46 AM Subject: [vdsm] [RFC] Implied UUIDs in API Hi, in the API a lot of the IDs that get passed around are UUIDs. The point is that as long as you are not the entity generating the UUIDs, the fact that these are UUIDs has no real significance to you. I suggest removing the validation of UUIDs from the receiving end. There is no real reason to make sure these are real UUIDs; it's another restriction we can remove from the interface, simplifying the code and the interface. Just to be clear, I'm not saying that we should stop using UUIDs. For example, vdsm will keep generating task IDs as UUIDs, but the documentation will state that it could be *any* string value. If for some reason we choose to change the format of task IDs, there will be no need to change the interface. The same goes for VM IDs: currently the engine uses UUIDs, but there is no reason for VDSM to enforce this and limit the engine from ever changing it in the future and using other string values. I agree that a UUID is just a method of generating unique strings; there is no reason to validate the value or the format. I'm with daniel on this one - knowing a field is a uuid makes it easier to deal with in the db and the code around it compared to a string (stored as binary instead of a string representation, etc.)
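The two positions in this thread amount to two different checks on the receiving end; a sketch for contrast (function names invented):

```python
import uuid


def check_id_strict(value):
    """Current style: only a well-formed UUID string is accepted."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False


def check_id_opaque(value):
    """Proposed style: the ID is an opaque, non-empty string; the
    receiver does not care how the sender generated it."""
    return isinstance(value, str) and len(value) > 0
```

Under the proposal, vdsm would still *generate* its task IDs with uuid.uuid4(), but would apply only the opaque check to IDs it *receives*.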
Re: [vdsm] [RFC] Implied UUIDs in API
On 08/31/2012 03:36 PM, Alon Bar-Lev wrote: - Original Message - From: Juan Hernandez jhern...@redhat.com To: Alon Bar-Lev alo...@redhat.com Cc: Itamar Heim ih...@redhat.com, arch a...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Friday, August 31, 2012 12:36:10 PM Subject: Re: [vdsm] [RFC] Implied UUIDs in API On 08/31/2012 11:27 AM, Alon Bar-Lev wrote: [SNIP]
I'm with daniel on this one - knowing a field is a uuid makes it easier to deal with in the db and the code around it compared to a string (stored as binary instead of a string representation, etc.) Why store binary? A UUID stored in its binary format takes 16 bytes. Stored as a string it takes 36 bytes, and more than 72 bytes in memory in the engine. The amount of CPU needed to create/compare/etc. is proportional. The engine takes advantage of this and uses a specialized UUID class and a specialized database type to manage them. If we change to arbitrary strings then a *lot* of changes have to be made to the engine. We are trying to reduce the types in vdsm to simplify the VDSM-next API; I guess it will entail a lot of changes in the engine anyway... 32-72 bytes in the memory of the JVM is not something that I care about; the JVM is not optimized for memory use in any sense. That doesn't mean you should abuse it; it's not a single item, it's all the items. I am not sure that comparing binary or string in the database has a significant impact, if there is proper indexing and a foreign key is used to cascade. Regards, Alon.
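The 16-byte vs. 36-byte figures quoted above can be confirmed with Python's stdlib:

```python
import uuid

# A UUID is 128 bits = 16 raw bytes; its canonical text form is
# 32 hex digits plus 4 hyphens = 36 characters.
u = uuid.UUID("12345678-1234-5678-1234-567812345678")
binary_size = len(u.bytes)  # 16
text_size = len(str(u))     # 36
```

(The "more than 72 bytes in memory" figure is a JVM detail: Java strings are UTF-16 plus object overhead, so it is not visible from Python.)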
Re: [vdsm] Jenkins build failure for change that adds build dependencies
On 08/31/2012 12:31 AM, Adam Litke wrote: Hi, my change http://gerrit.ovirt.org/#/c/7516/ adds the following build dependencies. Since they are not installed on the system running patch verification tests, I am getting build failures. Can we get these packages installed on the testing host(s) please? +BuildRequires: gobject-introspection-devel +BuildRequires: glib2-devel +BuildRequires: json-glib-devel +BuildRequires: vala +BuildRequires: libgee-devel you should email this to the infra mailing list.
Re: [vdsm] minutes: today's call
On 08/15/2012 04:13 PM, Ryan Harper wrote: * Ryan Harper ry...@us.ibm.com [2012-08-08 09:38]: * Itamar Heim ih...@redhat.com [2012-08-01 08:27]: On 08/01/2012 04:20 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-07-25 08:50]: On 07/25/2012 04:36 PM, Ryan Harper wrote: * Eyal Edri ee...@redhat.com [2012-07-18 11:59]: I'm in favor on doing it next monday. (not too healthy to do upgrades before the weekend...). We'll need to send email to rhev-devel + qe for an estimated downtime of 1 hour for gerrit.eng.lab.tlv.redhat.com. Any update? I've sent an update earlier today to infra that i'm planning to upgrade gerrit to 2.4.2 coming sunday. after a few days of a clean upgrade, I'll do the patched version of it. How'd the upgrade go? any ETA for when the patched version will show up? too early to say - gerrit already required a restart today after hanging. Gal already created a patched version for me to deploy, but i want to see the gerrit behavior for a few more days before i apply it. (btw, i think gerrit 2.5 has some improvements over the original patches (like max size, etc.), but we'll wait with them for 2.5 i guess. How's gerrit behaving? Ready for the patched version? Any update? done? http://lists.ovirt.org/pipermail/infra/2012-August/000828.html ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] minutes: today's call
On 08/15/2012 04:21 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-08-15 08:15]: On 08/15/2012 04:13 PM, Ryan Harper wrote: * Ryan Harper ry...@us.ibm.com [2012-08-08 09:38]: * Itamar Heim ih...@redhat.com [2012-08-01 08:27]: On 08/01/2012 04:20 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-07-25 08:50]: On 07/25/2012 04:36 PM, Ryan Harper wrote: * Eyal Edri ee...@redhat.com [2012-07-18 11:59]: I'm in favor on doing it next monday. (not too healthy to do upgrades before the weekend...). We'll need to send email to rhev-devel + qe for an estimated downtime of 1 hour for gerrit.eng.lab.tlv.redhat.com. Any update? I've sent an update earlier today to infra that i'm planning to upgrade gerrit to 2.4.2 coming sunday. after a few days of a clean upgrade, I'll do the patched version of it. How'd the upgrade go? any ETA for when the patched version will show up? too early to say - gerrit already required a restart today after hanging. Gal already created a patched version for me to deploy, but i want to see the gerrit behavior for a few more days before i apply it. (btw, i think gerrit 2.5 has some improvements over the original patches (like max size, etc.), but we'll wait with them for 2.5 i guess. How's gerrit behaving? Ready for the patched version? Any update? done? http://lists.ovirt.org/pipermail/infra/2012-August/000828.html The one morning I don't look at my gerrit email and indeed it is. Thanks for getting this working! Did we look at getting replies to the emails showing up as comments? not yet. much more complex and no cycles for this right now. ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] Avoid testStressTest Failing in Jenkins VDSM Unit Tests Job
On 08/09/2012 05:46 AM, Robert Middleswarth wrote: On 08/08/2012 10:37 PM, Zhou Zheng Sheng wrote: Hi all, recently the oVirt Jenkins runs the unit tests for every new patch set in Gerrit. There are a lot of new patch sets every day, so Jenkins may run the unit tests in parallel. I notice that most of the unit tests can be run in parallel except testStressTest. testStressTest creates a number of threads close to the system limit, so if we run two or more instances of it, they will fail, giving false positives. We found this when we started running 5 of them at once. I have since limited the job to one per host to stop the failures. I thought I had resubmitted all the jobs that failed because of that. So I suggest the oVirt Jenkins run most of the tests in parallel, but run testStressTest exclusively. With the help of the Exclusion plugin, it is possible to configure Jenkins to run some steps in parallel and some steps exclusively within one job. Firstly, add a resource with the help of the Exclusion plugin; give it a meaningful name and assign the resource to this job. Secondly, add an Execute Shell build step that runs all the tests other than testStressTest; the shell script can be as follows:

cd tests
NOSE_EXCLUDE=testStressTest \
./run_tests_local.sh \
--with-xunit --xunit-file=nosetests0.xml \
*.py

Then add a Critical Block Start build step, followed by another Execute Shell step that runs only testStressTest:

cd tests
./run_tests_local.sh -m testStressTest \
--with-xunit --xunit-file=nosetests1.xml \
resourceManagerTests.py

Then add a Critical Block End build step. Finally, in the post-build actions, in the Publish Test Result Report section, set the test report XMLs to tests/nosetests*.xml; Jenkins will merge the test reports. Some patch sets are marked as verification failures by the oVirt Jenkins, and most of them fail in testStressTest. I hope this helps the oVirt Jenkins a little bit. This information is useful to know.
I wonder if we could just remove the stress test from the patch test and leave it only on the master branch tests. Or you could create a separate job running only this test, apart from all the other tests, and limit that job to one per host.
Re: [vdsm] minutes: today's call
On 08/01/2012 04:20 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-07-25 08:50]: On 07/25/2012 04:36 PM, Ryan Harper wrote: * Eyal Edri ee...@redhat.com [2012-07-18 11:59]: I'm in favor on doing it next monday. (not too healthy to do upgrades before the weekend...). We'll need to send email to rhev-devel + qe for an estimated downtime of 1 hour for gerrit.eng.lab.tlv.redhat.com. Any update? I've sent an update earlier today to infra that i'm planning to upgrade gerrit to 2.4.2 coming sunday. after a few days of a clean upgrade, I'll do the patched version of it. How'd the upgrade go? any ETA for when the patched version will show up? too early to say - gerrit already required a restart today after hanging. Gal already created a patched version for me to deploy, but i want to see the gerrit behavior for a few more days before i apply it. (btw, i think gerrit 2.5 has some improvements over the original patches (like max size, etc.), but we'll wait with them for 2.5 i guess. Ryan Eyal. - Original Message - From: Itamar Heim ih...@redhat.com To: Ryan Harper ry...@us.ibm.com Cc: Dan Kenigsberg dan...@redhat.com, Adam Litke a...@us.ibm.com, Anthony Liguori aligu...@us.ibm.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org, Eyal Edri ee...@redhat.com, Attila Darazs adar...@redhat.com, Moran Goldboim mgold...@redhat.com Sent: Wednesday, July 18, 2012 7:48:01 PM Subject: Re: [vdsm] minutes: today's call On 07/18/2012 06:24 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-05-16 10:26]: On 05/16/2012 06:11 PM, Dan Kenigsberg wrote: On Wed, May 16, 2012 at 09:43:57AM -0500, Ryan Harper wrote: * Dan Kenigsbergdan...@redhat.com [2012-05-07 05:42]: On Mon, Apr 23, 2012 at 05:52:13PM +0300, Dan Kenigsberg wrote: On Mon, Apr 23, 2012 at 07:34:14AM -0500, Adam Litke wrote: On Mon, Apr 23, 2012 at 04:17:18AM -0400, Ayal Baron wrote: Hi all, I would like to discuss the following on today's call: 1. Gerrit vs. 
mailing list Gerrit is an inhibiter for some contributors. One approach to solve this improve gerrit: - Gerrit should send the patch when it notified of a change. This may attract more reviewers. I'm happy to inform that Gal has sent a patch for this to upstream gerrit: https://gerrit-review.googlesource.com/#/c/34861/ Add unified diff to newchange mail template. Any eta on getting the gerrit notifications of changes to include the full patch in the email? You can +1 them in googlesource, maybe it helps ;-) indeed, please help push them upstream... https://gerrit-review.googlesource.com/#/c/34861/ https://gerrit-review.googlesource.com/#/c/34862/ I'm definitely happy to see notification on new posts and changes; helps me see what new activity is happening, but I'd really enjoy seeing the the patch series attached (via threading) as well. Itamar, I know Red Hat hates to do it, but can we take these patches to ovirt's gerrit before they are accepted upstream? yes we can. after i'll upgrade it to 2.3, which i prefer to do after i return from PTO in case there are some issues with the upgrade. so ETA end of the month for clean upgrade to 2.3, then after we see all is ok, let's say another week to upgrade to the version with these patches. It's about 6 weeks later. Dan's been asking for more review in vdsm; I know that I'd be able to review quite a bit more if I can get patches via email instead of the webui. Can we get this enabled? eyal/attila - any ETA on testing gerrit 2.4 (.2 by now) so we can upgrade to it? ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] Using vdsm hook to exploit gluster backend of qemu
On 07/16/2012 04:07 PM, Deepak C Shetty wrote: I am sure a VDSM hook is not the ideal way to add this functionality in VDSM; would request inputs from experts on this list on what would be a better way in VDSM to exploit QEMU-GlusterFS native integration? Ideally, based on the Storage Domain type and options used, there should be a way in VDSM to modify the libvirt XML formed. from your discussion with saggi, the recommended approach was a gluster storage domain. do i understand correctly that there are two ways to consume the images via qemu: block based or file based? would there also be a difference in how these images are provisioned (i.e., would this imply gluster_fs and gluster_block storage domains, which sounds somewhat of an overkill, unless there are very good different use cases for this)?
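For reference, the XML rewrite Deepak describes would, for the native (non-FUSE) path, come down to a hook emitting a network-disk element along these lines (a sketch of libvirt's gluster disk syntax, which landed in libvirt after this thread; the volume, image path, and host name are hypothetical):

```xml
<!-- Sketch: file-backed <disk> rewritten to QEMU's native gluster driver.
     Volume "volname", image path, and host are made-up examples. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='volname/path/to/image.qcow2'>
    <host name='gluster.example.com' port='24007' transport='tcp'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```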
Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm
On 07/17/2012 01:19 AM, Itamar Heim wrote: On 07/09/2012 09:52 PM, Saggi Mizrahi wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Saggi Mizrahi smizr...@redhat.com Cc: Adam Litke a...@us.ibm.com, vdsm-devel@lists.fedorahosted.org Sent: Monday, July 9, 2012 11:03:43 AM Subject: Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm On 07/09/2012 05:56 PM, Saggi Mizrahi wrote: I don't think AMQP is a good low-level supported protocol as it's a very complex protocol to set up and support. Also brokers are known to have their differences in standard implementation, which means supporting them all is a mess. It looks like the most accepted route is the libvirt route of having a C library abstracting away client-server communication and having more advanced consumers build protocol-specific bridges that may have different support standards. On a more personal note, I think brokerless messaging is the way to go in ovirt because, unlike traditional clustering, worker nodes are not interchangeable, so direct communication is the way to go, rendering brokers pretty much useless. but doesn't brokerless preclude the multiple consumers which a bus provides? All consumers can connect to the host and *some* events can be broadcast to all connected clients. The real question is whether you want to depend on AMQP's routing / message storing. Also, whether you find it preferable to have a centralized host (single point of failure) to get all events from all hosts, for the price of some clients (I assume read-only clients) not needing to know the locations of all worker nodes. But IMHO we already have something like that, it's called the ovirt-engine, and it could send aggregated events about the cluster (maybe with some extra enginy data). The question is what does mandating a broker give us that an AMQP bridge wouldn't. The only thing I can think of is vdsm can assume unmoderated vdsm-to-vdsm communication bypassing the engine.
This means that VDSM can have some clustered behavior that requires no engine intervention. Furthermore, the engine can send a request and let the nodes decide among themselves who is performing the operation. Essentially:

    [ engine ]         [ engine ]
      |    |      VS       |
    [vdsm][vdsm]       [ broker ]
                         |    |
                      [vdsm][vdsm]

    *All links are two-way links

This has dire consequences on API usability and supportability. So we need to converge on that. There needs to be a good reason why the aforementioned logic code can't sit on another ovirt-specific entity (let's call it ovirt-dynamo) that uses VDSM's supported API but whose own APIs (or, more likely, messaging algorithms) are unsupported.

          [ engine ]
              |
          [ broker ]
          |        |
    [vdsm]-[dynamo] : [dynamo]-[vdsm]
        Host A      :     Host B

    *All links are two-way links

1. we have engine today 'in the path' to the history db. but it makes no sense for engine to be aware of each statistic we want to keep in the history db. same would be for an event/stats correlation service. they don't need to depend on each other for availability/redundancy.
2. we are already looking at quantum integration, which is doing engine-to-nodes communication via amqp.
3. looking somewhat forward - moving some scheduling logic down to vdsm will probably mean we'll want one of the nodes to listen to statistics and state from the other nodes.

to all of these, setting up a bus which allows multiple peer listeners seems more robust. I'm still against developing a C-level binding for amqp and rest support over a codebase which is in python. rest and amqp allow for both local and remote bindings in any language. C bindings should/could be a parallel implementation, but they seem like an unneeded overhead and complexity in the middle of the codebase.
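Saggi's brokerless model — "all consumers can connect to the host and *some* events can be broadcast to all connected clients" — can be sketched as a minimal in-process fan-out (illustrative only, not VDSM code; the class and event names are invented):

```python
class EventBroadcaster:
    """Minimal brokerless fan-out: every connected client receives every
    broadcast event; no central broker stores or routes messages."""

    def __init__(self):
        self._clients = []

    def connect(self, callback):
        # A client "connects" by registering a callable that receives events.
        self._clients.append(callback)

    def broadcast(self, name, payload):
        # Deliver the event directly to each connected client, no broker hop.
        for deliver in self._clients:
            deliver(name, payload)


# Two consumers (think: engine and history db) subscribe directly to the host.
seen = []
bus = EventBroadcaster()
bus.connect(lambda n, p: seen.append(("engine", n, p)))
bus.connect(lambda n, p: seen.append(("history-db", n, p)))
bus.broadcast("vmStatusChanged", {"vmId": "vm-1", "status": "Up"})
```

With a broker, the host would instead publish once to the broker, and subscription management would move out of the host — the trade-off the thread is debating.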
Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm
On 07/26/2012 04:33 PM, Adam Litke wrote: On Thu, Jul 26, 2012 at 11:47:51AM +0300, Itamar Heim wrote: On 07/17/2012 01:19 AM, Itamar Heim wrote: On 07/09/2012 09:52 PM, Saggi Mizrahi wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Saggi Mizrahi smizr...@redhat.com Cc: Adam Litke a...@us.ibm.com, vdsm-devel@lists.fedorahosted.org Sent: Monday, July 9, 2012 11:03:43 AM Subject: Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm On 07/09/2012 05:56 PM, Saggi Mizrahi wrote: I don't think AMQP is a good low level supported protocol as it's a very complex protocol to set up and support. Also brokers are known to have their differences in standard implementation which means supporting them all is a mess. It looks like the most accepted route is the libvirt route of having a c library abstracting away client server communication and having more advanced consumers build protocol specific bridges that may have different support standards. On a more personal note, I think brokerless messaging is the way to go in ovirt because, unlike traditional clustering, worker nodes are not interchangeable so direct communication is the way to go, rendering brokers pretty much useless. but brokerless doesn't let multiple consumers which a bus provides? All consumers can connect to the host and *some* events can be broadcasted to all connected clients. The real question is weather you want to depend on AMQP's routing \ message storing Also, if you find it preferable to have a centralized host (single point of failure) to get all events from all hosts for the price of some clients (I assume read only clients) not needing to know the locations of all worker nodes. But IMHO we already have something like that, it's called the ovirt-engine, and it could send aggregated events about the cluster (maybe with some extra enginy data). The question is what does mandating a broker gives us something that an AMQP bridge wouldn't. 
The only thing I can think of is vdsm can assume unmoderated vdsm to vdsm communication bypassing the engine. This means that VDSM can have some clustered behavior that requires no engine intervention. Further more, the engine can send a request and let the nodes decide who is performing the operation among themselves. Essentially: [ engine ] [ engine ] | | VS | [vdsm][vdsm] [ broker ] | | [vdsm][vdsm] *All links are two way links This has dire consequences on API usability and supportability. So we need to converge on that. There needs to be a good reason why the aforementioned logic code can't sit on a another ovirt specific entity (lets call it ovirt-dynamo) that uses VDSM's supported API but it's own APIs (or more likely messaging algorithms) are unsupported. [engine ] ||| | [ broker ] | ||| | [vdsm]-[dynamo] : [dynamo]-[vdsm] Host A : Host B *All links are two way links 1. we have engine today 'in the path' to the history db. but it makes no sense for engine to be aware of each statistic we want to keep in the history db. same would be for an event/stats correlation service. they don't need to depend on each other for availability/redundancy. 2. we are already looking at quantum integration, which is doing engine to nodes communication via amqp. 3. with somewhat of a forward looking - moving some scheduling logic down to vdsm will probably mean we'll want one of the nodes to listen to statistics and state from the other nodes. to all of these, setting up a bus which allows multiple peer listeners seems more robust I'm still against developing a C level binding for amqp and rest support over a codebase which is in python. rest and amqp allow for both local and remote bindings in any language. C bindings should/could be a parallel implementation, but they seem like an unneeded overhead and complexity in the middle of the codebase. Sure, it's probably possible to bind a REST or AMQP API in other languages but I don't think there is an automatic way of doing it. 
That means having to keep up with maintenance of each and every binding every time the API changes. If we look at libvirt, they will say this is a large source of pain that they have recommended we avoid. we'd need this on top of the C api as well - but it would probably be simpler doing it over the python api, rather than the C one. For the C/gobject approach, we write a single API schema file. From that, we automatically generate the C API and bindings. Sure, the generation could be a bit complex but much of it will be someone else's codebase (and one that is used by lots of Gnome projects). that's a critical part of the question for the C api - what is needed for adding a verb / parameter. I'm not against having c bindings. what i don't
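Adam's schema-first approach — write one API schema file and generate the C API and bindings from it — can be illustrated with a toy generator. The verb, parameter names, and C type mapping below are invented for illustration; the real schema format was still to be proposed at this point in the thread:

```python
# Toy sketch of schema-driven codegen: each verb is described once as data,
# and a C prototype (and, in principle, any binding) is emitted from it.
SCHEMA = {
    "VM.create": {
        "params": [("vmId", "str"), ("memSize", "int")],
        "returns": "int",
    },
}

# Hypothetical schema-type -> C-type mapping.
C_TYPES = {"str": "const char*", "int": "int"}

def c_prototype(verb):
    spec = SCHEMA[verb]
    func = "vdsm_" + verb.replace(".", "_").lower()
    args = ", ".join("%s %s" % (C_TYPES[t], n) for n, t in spec["params"])
    return "%s %s(%s);" % (C_TYPES[spec["returns"]], func, args)

print(c_prototype("VM.create"))
# -> int vdsm_vm_create(const char* vmId, int memSize);
```

The point being argued: adding a verb or parameter then means editing one schema entry, not every binding by hand.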
Re: [vdsm] minutes: today's call
On 07/25/2012 04:36 PM, Ryan Harper wrote: * Eyal Edri ee...@redhat.com [2012-07-18 11:59]: I'm in favor on doing it next monday. (not too healthy to do upgrades before the weekend...). We'll need to send email to rhev-devel + qe for an estimated downtime of 1 hour for gerrit.eng.lab.tlv.redhat.com. Any update? I've sent an update earlier today to infra that i'm planning to upgrade gerrit to 2.4.2 coming sunday. after a few days of a clean upgrade, I'll do the patched version of it. Eyal. - Original Message - From: Itamar Heim ih...@redhat.com To: Ryan Harper ry...@us.ibm.com Cc: Dan Kenigsberg dan...@redhat.com, Adam Litke a...@us.ibm.com, Anthony Liguori aligu...@us.ibm.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org, Eyal Edri ee...@redhat.com, Attila Darazs adar...@redhat.com, Moran Goldboim mgold...@redhat.com Sent: Wednesday, July 18, 2012 7:48:01 PM Subject: Re: [vdsm] minutes: today's call On 07/18/2012 06:24 PM, Ryan Harper wrote: * Itamar Heim ih...@redhat.com [2012-05-16 10:26]: On 05/16/2012 06:11 PM, Dan Kenigsberg wrote: On Wed, May 16, 2012 at 09:43:57AM -0500, Ryan Harper wrote: * Dan Kenigsbergdan...@redhat.com [2012-05-07 05:42]: On Mon, Apr 23, 2012 at 05:52:13PM +0300, Dan Kenigsberg wrote: On Mon, Apr 23, 2012 at 07:34:14AM -0500, Adam Litke wrote: On Mon, Apr 23, 2012 at 04:17:18AM -0400, Ayal Baron wrote: Hi all, I would like to discuss the following on today's call: 1. Gerrit vs. mailing list Gerrit is an inhibiter for some contributors. One approach to solve this improve gerrit: - Gerrit should send the patch when it notified of a change. This may attract more reviewers. I'm happy to inform that Gal has sent a patch for this to upstream gerrit: https://gerrit-review.googlesource.com/#/c/34861/ Add unified diff to newchange mail template. Any eta on getting the gerrit notifications of changes to include the full patch in the email? You can +1 them in googlesource, maybe it helps ;-) indeed, please help push them upstream... 
https://gerrit-review.googlesource.com/#/c/34861/ https://gerrit-review.googlesource.com/#/c/34862/ I'm definitely happy to see notification on new posts and changes; helps me see what new activity is happening, but I'd really enjoy seeing the the patch series attached (via threading) as well. Itamar, I know Red Hat hates to do it, but can we take these patches to ovirt's gerrit before they are accepted upstream? yes we can. after i'll upgrade it to 2.3, which i prefer to do after i return from PTO in case there are some issues with the upgrade. so ETA end of the month for clean upgrade to 2.3, then after we see all is ok, let's say another week to upgrade to the version with these patches. It's about 6 weeks later. Dan's been asking for more review in vdsm; I know that I'd be able to review quite a bit more if I can get patches via email instead of the webui. Can we get this enabled? eyal/attila - any ETA on testing gerrit 2.4 (.2 by now) so we can upgrade to it? ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] Getting rid of a...@ovirt.org?
On 07/16/2012 01:46 AM, Robert Middleswarth wrote: On 07/15/2012 03:59 PM, Ayal Baron wrote: - Original Message - On 07/15/2012 01:53 AM, Ayal Baron wrote: Hi all, Sorry for cross-posting, but in this case I think it's relevant. The original idea was that every time we wish to discuss a new cross-component feature we should do it over the arch list. However, it would appear that de-facto usually engine-devel and vdsm-devel are being used (cross posted). Currently engine-devel has 211 subscribers, arch has 160 and vdsm-devel has 128, so from this perspective again, arch seems less relevant. I propose we ditch arch and keep the other 2 mailing lists. I'm not sure whether new cross-component features should be discussed solely on engine-devel or cross-posted (there are probably people who wouldn't care about the engine side but would still like to know about such changes). Thoughts? -1 I don't normally read engine-devel and vdsm-devel, so I hadn't noticed that discussions I would expect to be on arch@ are not happening here. I'm probably not the only person in that situation. If this project were 100% about Engine and VDSM, then I could understand your reasoning. But we've already added a few new incubating projects, we have subsystem teams such as documentation and infrastructure, and we all need a single location where we know we can reach *all* contributors to this project. If we try to force all that discussion on to engine-devel, not everyone would be interested. There is enough on engine-devel that is not general interest that it would become noise (as it has for me, so I filter it) or people would drop it altogether. Perhaps what we need to do is have the discipline to cross-post *all* general interest discussions from the project mailing list back to arch@? Enforce the rule that decisions that affect the whole project have to be ratified on arch@ instead of whatever project list the discussions started on?
Strongly suggest that all contributors be on arch@ and announce@ as a minimum? I find that anything that should go on arch would interest anyone on the devel lists (as it is about new features, design, etc), so I believe that arch should have at least everyone on engine-devel and vdsm-devel. However, right now this is not the case, as is evident from the number of subscribers to each list (e.g. I haven't compared to see if everyone on arch is on engine). So imo something needs to be done. I'm fine with keeping arch, but as you said, that means we need to enforce it to be *the* list for feature discussions and I'm not exactly sure how you'd go about doing that. Maybe arch needs to be renamed to make it clear what it is for? Maybe something simple like ovirt-devel to make it clear it is for general ovirt development? we can simply make arch include the other mailing lists, so sending to arch would be sending to all other mailing lists. wouldn't resolve the dupes, but will resolve the need for everyone to subscribe to it as well. (for dupes i also use a mail filter to delete emails arriving from engine-devel and cc'd to other mailing lists, etc.)
Re: [vdsm] Getting rid of a...@ovirt.org?
On 07/16/2012 10:55 AM, Livnat Peer wrote: On 16/07/12 10:01, Itamar Heim wrote: On 07/16/2012 09:56 AM, Livnat Peer wrote: On 16/07/12 09:41, Itamar Heim wrote: On 07/16/2012 01:46 AM, Robert Middleswarth wrote: On 07/15/2012 03:59 PM, Ayal Baron wrote: - Original Message - -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 On 07/15/2012 01:53 AM, Ayal Baron wrote: Hi all, Sorry for cross-posting, but in this case I think it's relevant. The original idea was that every time we wish to discuss a new cross-component feature we should do it over arch list. However, it would appear that de-facto usually engine-devel and vdsm-devel are being used (cross posted). Currently engine-devel has 211 subscribers, arch has 160 and vdsm-devel has 128 so from this perspective again, arch seems less relevant. I propose we ditch arch and keep the other 2 mailing lists. I'm not sure whether new cross-component features should be discussed solely on engine-devel or cross-posted (there are probably people who wouldn't care about engine side but would still like to know about such changes). Thoughts? - -1 I don't normally read engine-devel and vdsm-devel, so I hadn't noticed that discussions I would expect to be on arch@ are not happening here. I'm probably not the only person in that situation. If this project were 100% about Engine and VDSM, then I could understand your reasoning. But we've already added a few new incubating projects, we have subsystem teams such as documentation and infrastructure, and we all need a single location where we know we can reach *all* contributors to this project. If we try to force all that discussion on to engine-devel, not everyone would be interested. There is enough on engine-devel that is not general interest that it would become noise (as it has for me, so I filter it) or people would drop it all together. Perhaps what we need to do is have the discipline to cross-post *all* general interest discussions from the project mailing list back to arch@? 
Enforce the rule that decisions that affect the whole project have to be ratified on arch@ instead of whatever project list the discussions started on? Strongly suggest that all contributors be on arch@ and announce@ as a minimum? I find that anything that should go on arch would interest anyone on the devel lists (as it is about new features, design, etc), so I believe that arch should have at least everyone on engine-devel and vdsm-devel. However, right now this is not the case, as is evident from the number of subscribers to each list (e.g. I haven't compared to see if everyone on arch is on engine). So imo something needs to be done. I'm fine with keeping arch, but as you said, that means we need to enforce it to be *the* list for feature discussions and I'm not exactly sure how you'd go about doing that. Maybe arch needs to be renamed to make it clear what it is for? Maybe something simple like ovirt-devel to make it clear it is for general ovirt development? we can simply make arch include the other mailing lists, so sending to arch would be sending to all other mailing lists. What would happen if someone replies on the engine-list to a mail originally sent to arch? wouldn't we end up starting a thread on arch and then losing it to one of the other lists? reply-to is not set to reply-to-list, rather to the original sender/cc list, so shouldn't be an issue. ok, so if I reply to such a mail, de facto I'll send a mail to the arch list - shouldn't I be registered to the arch list (or need someone to approve the mail)? you would be moderated the first time you reply to it, yes. same as for all other mailing lists - not an issue usually. wouldn't resolve the dupes, but will resolve the need for everyone to subscribe to it as well. (for dupes i also use a mail filter to delete emails arriving from engine-devel and cc'd to other mailing lists, etc.)
Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm
On 07/09/2012 09:52 PM, Saggi Mizrahi wrote: - Original Message - From: Itamar Heim ih...@redhat.com To: Saggi Mizrahi smizr...@redhat.com Cc: Adam Litke a...@us.ibm.com, vdsm-devel@lists.fedorahosted.org Sent: Monday, July 9, 2012 11:03:43 AM Subject: Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm On 07/09/2012 05:56 PM, Saggi Mizrahi wrote: I don't think AMQP is a good low level supported protocol as it's a very complex protocol to set up and support. Also brokers are known to have their differences in standard implementation which means supporting them all is a mess. It looks like the most accepted route is the libvirt route of having a c library abstracting away client server communication and having more advanced consumers build protocol specific bridges that may have different support standards. On a more personal note, I think brokerless messaging is the way to go in ovirt because, unlike traditional clustering, worker nodes are not interchangeable so direct communication is the way to go, rendering brokers pretty much useless. but brokerless doesn't let multiple consumers which a bus provides? All consumers can connect to the host and *some* events can be broadcasted to all connected clients. The real question is weather you want to depend on AMQP's routing \ message storing Also, if you find it preferable to have a centralized host (single point of failure) to get all events from all hosts for the price of some clients (I assume read only clients) not needing to know the locations of all worker nodes. But IMHO we already have something like that, it's called the ovirt-engine, and it could send aggregated events about the cluster (maybe with some extra enginy data). The question is what does mandating a broker gives us something that an AMQP bridge wouldn't. The only thing I can think of is vdsm can assume unmoderated vdsm to vdsm communication bypassing the engine. 
This means that VDSM can have some clustered behavior that requires no engine intervention. Further more, the engine can send a request and let the nodes decide who is performing the operation among themselves. Essentially: [ engine ] [ engine ] | | VS | [vdsm][vdsm] [ broker ] | | [vdsm][vdsm] *All links are two way links This has dire consequences on API usability and supportability. So we need to converge on that. There needs to be a good reason why the aforementioned logic code can't sit on a another ovirt specific entity (lets call it ovirt-dynamo) that uses VDSM's supported API but it's own APIs (or more likely messaging algorithms) are unsupported. [engine ] ||| | [ broker ] | ||| | [vdsm]-[dynamo] : [dynamo]-[vdsm] Host A : Host B *All links are two way links 1. we have engine today 'in the path' to the history db. but it makes no sense for engine to be aware of each statistic we want to keep in the history db. same would be for an event/stats correlation service. they don't need to depend on each other for availability/redundancy. 2. we are already looking at quantum integration, which is doing engine to nodes communication via amqp. 3. with somewhat of a forward looking - moving some scheduling logic down to vdsm will probably mean we'll want one of the nodes to listen to statistics and state from the other nodes. to all of these, setting up a bus which allows multiple peer listeners seems more robust ___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://fedorahosted.org/mailman/listinfo/vdsm-devel
Re: [vdsm] Creating a new VM with a template
On 07/11/2012 05:06 PM, Robert Middleswarth wrote: On 07/11/2012 09:08 AM, Shu Ming wrote: Hi, I created a VM from an existing template and found that the image of the new VM actually copied the image from the template instead of sharing a base from the template. I am wondering why the new VM doesn't use a new cow image as its image, pointing to the base image in the template. That way, all the new VMs would share a common parent image in the template and save space in the storage domain. Is there any other concern not to do this? Under Resource Allocation did you use Thin or Clone when creating the VM from the template? Thin will share the base and just use a cow image. Clone will copy the VM. note: the default resource allocation for a server is clone and for a desktop is thin.
Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm
On 07/09/2012 05:56 PM, Saggi Mizrahi wrote: I don't think AMQP is a good low-level supported protocol as it's a very complex protocol to set up and support. Also brokers are known to have their differences in standard implementation, which means supporting them all is a mess. It looks like the most accepted route is the libvirt route of having a C library abstracting away client-server communication and having more advanced consumers build protocol-specific bridges that may have different support standards. On a more personal note, I think brokerless messaging is the way to go in ovirt because, unlike traditional clustering, worker nodes are not interchangeable, so direct communication is the way to go, rendering brokers pretty much useless. but doesn't brokerless preclude the multiple consumers which a bus provides? - Original Message - From: Adam Litke a...@us.ibm.com To: Itamar Heim ih...@redhat.com Cc: vdsm-devel@lists.fedorahosted.org Sent: Monday, July 9, 2012 9:56:17 AM Subject: Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm On Fri, Jul 06, 2012 at 03:53:08PM +0300, Itamar Heim wrote: On 07/06/2012 01:15 AM, Robert Middleswarth wrote: On 07/05/2012 04:45 PM, Adam Litke wrote: On Thu, Jul 05, 2012 at 03:47:42PM -0400, Saggi Mizrahi wrote: - Original Message - From: Adam Litke a...@us.ibm.com To: Saggi Mizrahi smizr...@redhat.com Cc: Anthony Liguori anth...@codemonkey.ws, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Thursday, July 5, 2012 2:34:50 PM Subject: Re: [RFC] An alternative way to provide a supported interface -- libvdsm On Wed, Jun 27, 2012 at 02:50:02PM -0400, Saggi Mizrahi wrote: The idea of having a supported C API was something I was thinking about doing (But I'd rather use gobject introspection and not schema generation). But the problem is not having a C API, it is using the current XML RPC API as its base. I want to dissect this a bit to find out exactly where there might be agreement and disagreement.
C API is a good thing to implement - Agreed. I also want to use gobject introspection but I don't agree that using glib precludes the use of a formalized schema. My proposal is that we write a schema definition and generate the glib C code from that schema. I agree that the _current_ xmlrpc API makes a pretty bad base from which to start a supportable API. XMLRPC is a perfectly reasonable remote/wire protocol and I think we should continue using it as a base for the next generation API. Using a schema will ensure that the new API is well-structured. The major problems with XML-RPC (and to some extent with REST as well) are high call overhead and no two-way communication (push events). Basing on XML-RPC means that we will never be able to solve these issues. I am not sure I am ready to concede that XML-RPC is too slow for our needs. Can you provide some more detail around this point and possibly suggest an alternative that has even lower overhead without sacrificing the ubiquity and usability of XML-RPC? As far as the two-way communication point, what are the options besides AMQP/ZeroMQ? Aren't these even worse from an overhead perspective than XML-RPC? Regarding two-way communication: you can write AMQP brokers based on the C API and run one on each vdsm host. Assuming the C API supports events, what else would you need? I personally think that using something like AMQP for inter-node communication and engine - node would be optimal. With a rest interface that just sends messages through something like AMQP. I would also not dismiss AMQP so soon - we want a bus with more than a single listener at the engine side (engine, history db, maybe event correlation service). collectd as a means for statistics already supports it as well. I'm for having REST as well, but not sure as the main one for a consumer like ovirt engine. I agree that a message bus could be a very useful model of communication between ovirt-engine components and multiple vdsm instances.
But the complexities and dependencies of AMQP do not make it suitable for use as a low-level API. AMQP will repel new adopters. Why not establish a libvdsm that is more minimalist and can be easily used by everyone? Then AMQP brokers can be built on top of the stable API with ease. All AMQP should require of the low-level API are standard function calls and an events mechanism. Thanks Robert The current XML-RPC API contains a lot of deficiencies and inefficiencies and we would like to retire it as soon as we possibly can. Engine would like us to move to a message-based API and 3rd parties want something simple like REST, so it looks like no one actually wants to use XML-RPC. Not even us. I am proposing that AMQP brokers and REST APIs could be written against the public API. In fact, they need not even live in the vdsm tree anymore if that is what we choose. Core vdsm would only be responsible for providing libvdsm and whatever language bindings
Re: [vdsm] [RFC] An alternative way to provide a supported interface -- libvdsm
On 07/06/2012 01:15 AM, Robert Middleswarth wrote: On 07/05/2012 04:45 PM, Adam Litke wrote: On Thu, Jul 05, 2012 at 03:47:42PM -0400, Saggi Mizrahi wrote: - Original Message - From: Adam Litke a...@us.ibm.com To: Saggi Mizrahi smizr...@redhat.com Cc: Anthony Liguori anth...@codemonkey.ws, VDSM Project Development vdsm-devel@lists.fedorahosted.org Sent: Thursday, July 5, 2012 2:34:50 PM Subject: Re: [RFC] An alternative way to provide a supported interface -- libvdsm On Wed, Jun 27, 2012 at 02:50:02PM -0400, Saggi Mizrahi wrote: The idea of having a supported C API was something I was thinking about doing (But I'd rather use gobject introspection and not schema generation) But the problem is not having a C API is using the current XML RPC API as it's base I want to disect this a bit to find out exactly where there might be agreement and disagreement. C API is a good thing to implement - Agreed. I also want to use gobject introspection but I don't agree that using glib precludes the use of a formalized schema. My proposal is that we write a schema definition and generate the glib C code from that schema. I agree that the _current_ xmlrpc API makes a pretty bad base from which to start a supportable API. XMLRPC is a perfectly reasonable remote/wire protocol and I think we should continue using it as a base for the next generation API. Using a schema will ensure that the new API is well-structured. There major problems with XML-RPC (and to some extent with REST as well) are high call overhead and no two way communication (push events). Basing on XML-RPC means that we will never be able to solve these issues. I am not sure I am ready to conceed that XML-RPC is too slow for our needs. Can you provide some more detail around this point and possibly suggest an alternative that has even lower overhead without sacrificing the ubiquity and usability of XML-RPC? As far as the two-way communication point, what are the options besides AMQP/ZeroMQ? 
Aren't these even worse from an overhead perspective than XML-RPC? Regarding two-way communication: you can write AMQP brokers based on the C API and run one on each vdsm host. Assuming the C API supports events, what else would you need? I personally think that using something like AMQP for inter-node communication and engine - node would be optimal. With a rest interface that just send messages though something like AMQP. I would also not dismiss AMQP so soon we want a bug with more than a single listener at engine side (engine, history db, maybe event correlation service). collectd as a means for statistics already supports it as well. I'm for having REST as well, but not sure as main one for a consumer like ovirt engine. Thanks Robert The current XML-RPC API contains a lot of decencies and inefficiencies and we would like to retire it as soon as we possibly can. Engine would like us to move to a message based API and 3rd parties want something simple like REST so it looks like no one actually wants to use XML-RPC. Not even us. I am proposing that AMQP brokers and REST APIs could be written against the public API. In fact, they need not even live in the vdsm tree anymore if that is what we choose. Core vdsm would only be responsible for providing libvdsm and whatever language bindings we want to support. If we take the libvdsm route, the only reason to even have a REST bridge is only to support OSes other then Linux which is something I'm not sure we care about at the moment. That might be true regarding the current in-tree implementation. However, I can almost guarantee that someone wanting to write a web GUI on top of standalone vdsm would want a REST API to talk to. But libvdsm makes this use case of no concern to the core vdsm developers. I do think that having C supportability in our API is a good idea, but the current API should not be used as the base. Let's _start_ with a schema document that describes today's API and then clean it up. 
I think that will work better than starting from scratch. Once my schema is written I will post it and we can 'patch' it as a community until we arrive at a 1.0 version we are all happy with.

+1

Ok. Redoubling my efforts to get this done. Describing the output of list(True) takes a while :)

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel
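The core proposal in this thread -- write a machine-readable schema and generate the C API from it -- can be illustrated with a toy sketch. Everything here is hypothetical (the command name, the schema format, and the generator are illustrations, not the actual vdsm schema that was later posted):

```python
# Toy illustration of schema-driven code generation: describe an API call as
# plain data, then emit a C prototype from it. All names are hypothetical.
# The trailing space in "int " keeps the type/name separator uniform with
# pointer types like "const char *".
C_TYPES = {"str": "const char *", "int": "int "}

SCHEMA = {
    "command": "VM.getInfo",
    "params": [("vmID", "str"), ("full", "int")],
    "returns": "str",
}

def emit_c_prototype(spec):
    """Render one schema entry as a C function prototype."""
    name = "vdsm_" + spec["command"].replace(".", "_").lower()
    args = ", ".join(f"{C_TYPES[t]}{n}" for n, t in spec["params"])
    return f"{C_TYPES[spec['returns']]}{name}({args});"

print(emit_c_prototype(SCHEMA))
# -> const char *vdsm_vm_getinfo(const char *vmID, int full);
```

A real generator would of course emit full glib/GObject stubs with introspection annotations rather than bare prototypes, but the point stands: once the API is data, the C bindings, documentation, and wire validation can all be derived from one source of truth.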
Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager
On 06/26/2012 11:26 AM, Adam Litke wrote:
On Tue, Jun 26, 2012 at 11:11:51PM +0800, Shu Ming wrote:
On 2012-6-26 20:45, Adam Litke wrote:
On Tue, Jun 26, 2012 at 09:53:10AM +0800, Xu He Jie wrote:
On 06/26/2012 05:19 AM, Adam Litke wrote:
On Mon, Jun 25, 2012 at 05:53:31PM +0300, Dan Kenigsberg wrote:
On Mon, Jun 25, 2012 at 08:28:29AM -0500, Adam Litke wrote:
On Fri, Jun 22, 2012 at 06:45:43PM -0400, Andrew Cathrow wrote:

- Original Message -
From: Ryan Harper ry...@us.ibm.com
To: Adam Litke a...@us.ibm.com
Cc: Anthony Liguori aligu...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Friday, June 22, 2012 12:45:42 PM
Subject: Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager

* Adam Litke a...@us.ibm.com [2012-06-22 11:35]:
On Thu, Jun 21, 2012 at 12:17:19PM +0300, Dor Laor wrote:
On 06/19/2012 08:12 PM, Saggi Mizrahi wrote:

- Original Message -
From: Deepak C Shetty deepa...@linux.vnet.ibm.com
To: Ryan Harper ry...@us.ibm.com
Cc: Saggi Mizrahi smizr...@redhat.com, Anthony Liguori aligu...@redhat.com, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Tuesday, June 19, 2012 10:58:47 AM
Subject: Re: [vdsm] [virt-node] VDSM as a general purpose virt host manager

On 06/19/2012 01:13 AM, Ryan Harper wrote:
* Saggi Mizrahi smizr...@redhat.com [2012-06-18 10:05]:

I would like to put on the table for discussion the growing need for a way to more easily reuse the functionality of VDSM in order to service projects other than ovirt-engine. Originally VDSM was created as a proprietary agent for the sole purpose of serving the then-proprietary version of what is known as ovirt-engine. Red Hat, after acquiring the technology, pressed on with its commitment to open source ideals and released the code. But just releasing code into the wild doesn't build a community or make a project successful. Furthermore, when building open source software you should aspire to build reusable components instead of monolithic stacks.
Saggi, thanks for sending this out. I've been trying to pull together some thoughts on what else is needed for vdsm as a community. I know that for some time downstream has been the driving force for all of the work, and now with a community there are challenges in finding our own way. While we certainly don't want to make downstream efforts harder, I think we need to develop and support our own vision for what vdsm can become, somewhat independent of downstream and other exploiters. Revisiting the API is definitely a much-needed endeavor, and I think adding some use-cases or sample applications would be useful in demonstrating whether or not we're evolving the API into something easier to use for applications beyond engine.

We would like to expose a stable, documented, well-supported API. This gives us a chance to rethink the VDSM API from the ground up. There is already work in progress on making the internal logic of VDSM separate enough from the API layer so we could continue feature development and bug fixing while designing the API of the future. In order to achieve this, though, we need to do several things:
1. Declare API supportability guidelines
2. Decide on an API transport (e.g. REST, ZMQ, AMQP)
3. Make the API easily consumable (e.g. proper docs, example code, extending the API, etc.)
4. Implement the API itself

Earlier we'd discussed working to have similarities in the modeling between the oVirt API and VDSM, but that seems to have dropped off the radar.

Yes, the current REST API has attempted to be compatible with the current ovirt-engine API. Going forward, I am not sure how easy this will be to maintain given that engine is built on Java and vdsm is built on Python.

Could you elaborate why the language difference is an issue? Isn't this what APIs are supposed to solve?

The main language issue is that ovirt-engine has built their API using a set of Java-specific frameworks (JAXB and its dependents).
It's true, if you google for 'python jaxb' you will find some SourceForge projects that attempt to bring the JAXB interface to Python, but I don't think that's the right approach. If you're writing a Java project, do things the Java way. If you're writing a Python project, do them the Python way. Right now I am focused on defining the current API (API.py/xmlrpc) mechanically (creating a schema and API documentation). XSD is not the correct language for that task (which is why I foresee a divergence, at least at first). I want to take a stab at defining the API in a beneficial, long-term manner.

Adam, can you explain why you think XSD is not the correct language? Is it because of the lack of a full Python code generator? Is it possible to modify the existing code generator to address that issue? What is the benefit of introducing a new schema/code generator to oVirt instead of changing the existing code generator?

XSD is
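The "do it the Python way" argument in this thread can be made concrete with a sketch: instead of an XSD document consumed by a JAXB-style toolchain, the API is described as plain data that Python tooling can load and process directly. The command, its doc string, and the `document` helper below are hypothetical illustrations, not the schema vdsm actually adopted:

```python
# Hypothetical: one API call described as plain data. Docs, validation, and
# bindings can all be derived from this without an XSD/JAXB-style codegen step.
API_SCHEMA = {
    "VM.getStats": {
        "doc": "Return runtime statistics for a virtual machine.",
        "params": {"vmID": "str"},
        "returns": "VmStats",
    },
}

def document(schema):
    """Emit plain-text API documentation straight from the schema."""
    lines = []
    for name, spec in sorted(schema.items()):
        params = ", ".join(f"{p}: {t}" for p, t in sorted(spec["params"].items()))
        lines.append(f"{name}({params}) -> {spec['returns']}")
        lines.append("    " + spec["doc"])
    return "\n".join(lines)

print(document(API_SCHEMA))
```

The same dictionary could just as easily drive request validation or C binding generation, which is the practical sense in which a data-first schema is more at home in a Python project than XSD.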
Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
On 06/25/2012 10:14 AM, Deepak C Shetty wrote:
On 06/25/2012 07:47 AM, Shu Ming wrote:
On 2012-6-25 10:10, Andrew Cathrow wrote:

- Original Message -
From: Andy Grover agro...@redhat.com
To: Shu Ming shum...@linux.vnet.ibm.com
Cc: libstoragemgmt-de...@lists.sourceforge.net, engine-de...@ovirt.org, VDSM Project Development vdsm-devel@lists.fedorahosted.org
Sent: Sunday, June 24, 2012 10:05:45 PM
Subject: Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration

On 06/24/2012 07:28 AM, Shu Ming wrote:
On 2012-6-23 20:40, Itamar Heim wrote:
On 06/23/2012 03:09 AM, Andy Grover wrote:
On 06/22/2012 04:46 PM, Itamar Heim wrote:
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention of credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that, and how? How does the user set this up?

It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the VM. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun). Is this usage model made difficult or impossible by the current software architecture?

What about live snapshots? I'm not a virt guy, so extreme handwaving:

vm X uses luns 1, 2
engine -> vdsm: pause vm X

That's pausing the VM; live snapshot isn't supposed to do so.

Though we don't expect to pause the VM while a live snapshot is in progress, the VM should be blocked from accessing the specific LUNs for a while. The blocking time should be very short, to avoid storage I/O timeouts in the VM.

OK, my mistake: we don't pause the VM during live snapshot, we block access to the LUNs while snapshotting.
Does this keep live snapshots working, and mean ovirt-engine can use libsm to config the storage array instead of vdsm? Because that was really my main question: should we be talking about engine-libstoragemgmt integration rather than vdsm-libstoragemgmt integration?

For snapshotting, wouldn't we want VDSM to handle the coordination of the various atomic functions?

I think VDSM-libstoragemgmt will let the storage array itself make the snapshot and handle the coordination of the various atomic functions. VDSM should be blocked on further access to the specific LUNs which are under snapshotting.

I kind of agree. If the snapshot is being done at the array level, then the array takes care of quiescing the I/O, taking the snapshot and allowing the I/O again, so why does VDSM have to worry about anything here? It should all happen transparently for VDSM, shouldn't it?

I may be missing something, but afaiu you need to ask the guest to perform the quiesce, and I'm sure the storage array can't do that.
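The closing point of this exchange -- the array can snapshot LUNs, but only the guest can quiesce its own filesystems, so some host-side component must sequence the two -- can be sketched as follows. All objects and method names here are hypothetical stand-ins (a real stack would use something like a guest agent's freeze/thaw calls):

```python
def array_level_snapshot(guest_agent, array, luns):
    """Sketch of the coordination question in the thread: freeze the guest,
    let the array snapshot, then thaw -- the sequencing the array alone
    cannot provide. Hypothetical API, not vdsm's actual implementation."""
    guest_agent.freeze()               # guest flushes and quiesces its I/O
    try:
        snaps = array.snapshot(luns)   # array-side, fast, no data copied here
    finally:
        guest_agent.thaw()             # never leave the guest frozen on error
    return snaps

# Minimal fakes so the sketch is runnable.
class FakeAgent:
    def __init__(self): self.frozen = False
    def freeze(self): self.frozen = True
    def thaw(self): self.frozen = False

class FakeArray:
    def snapshot(self, luns): return [f"{lun}-snap" for lun in luns]

agent = FakeAgent()
print(array_level_snapshot(agent, FakeArray(), ["lun1"]), agent.frozen)
```

Whichever component owns this try/finally (vdsm, in the proposal) is the one that needs guest access, which is why the coordination cannot be pushed entirely into engine-libstoragemgmt.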
Re: [vdsm] [Engine-devel] RFC: Writeup on VDSM-libstoragemgmt integration
On 06/23/2012 03:09 AM, Andy Grover wrote:
On 06/22/2012 04:46 PM, Itamar Heim wrote:
On 06/23/2012 02:31 AM, Andy Grover wrote:
On 06/18/2012 01:15 PM, Saggi Mizrahi wrote:

Also, there is no mention of credentials in any part of the process. How does VDSM or the host get access to actually modify the storage array? Who holds the creds for that, and how? How does the user set this up?

It seems to me more natural to have the oVirt-engine use libstoragemgmt directly to allocate and export a volume on the storage array, and then pass this info to the vdsm on the node creating the VM. This answers Saggi's question about creds -- vdsm never needs array modification creds, it only gets handed the params needed to connect and use the new block device (ip, iqn, chap, lun). Is this usage model made difficult or impossible by the current software architecture?

What about live snapshots? I'm not a virt guy, so extreme handwaving:

vm X uses luns 1, 2
engine -> vdsm: pause vm X

That's pausing the VM; live snapshot isn't supposed to do so.

engine -> libstoragemgmt: snapshot luns 1, 2 to luns 3, 4
engine -> vdsm: snapshot running state of X to Y
engine -> vdsm: unpause vm X

If engine had any failure before this step, the VM will remain paused, i.e. we compromised the VM to take a live snapshot.

engine -> vdsm: change Y to use luns 3, 4?

-- Andy
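Andy's handwaved flow, together with the failure window Itamar points out (any error after the pause leaves the VM paused), can be sketched as engine-side logic. Every class and method name below is a hypothetical stand-in, not a real engine or vdsm API:

```python
def live_snapshot(vdsm, storage, vm, luns):
    """Sketch of the proposed engine-side flow. The try/finally guarantees the
    VM is resumed even if the array or state snapshot fails -- addressing the
    'VM will remain paused' risk raised in the thread."""
    vdsm.pause(vm)                             # engine -> vdsm: pause vm X
    try:
        new_luns = storage.snapshot(luns)      # engine -> libstoragemgmt
        snap = vdsm.snapshot_state(vm)         # engine -> vdsm: state of X to Y
    finally:
        vdsm.unpause(vm)                       # always resume, even on failure
    vdsm.retarget(snap, new_luns)              # engine -> vdsm: Y uses new luns
    return snap

# Minimal fakes so the sketch is runnable.
class FakeVdsm:
    def __init__(self): self.paused = False
    def pause(self, vm): self.paused = True
    def unpause(self, vm): self.paused = False
    def snapshot_state(self, vm): return f"{vm}-snap"
    def retarget(self, snap, luns): self.target = (snap, tuple(luns))

class FakeStorage:
    def snapshot(self, luns): return [lun + 2 for lun in luns]  # luns 1,2 -> 3,4

vdsm = FakeVdsm()
print(live_snapshot(vdsm, FakeStorage(), "X", [1, 2]), vdsm.paused)
```

The sketch also makes the architectural split visible: storage credentials live only behind the `storage` object (engine-side libstoragemgmt), while vdsm is handed only the resulting LUNs.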