Re: [Users] RFE/Bug? Cloud-Init in oVirt 3.3.1 / IPv6 Support
I opened a BZ for this one: https://bugzilla.redhat.com/show_bug.cgi?id=1036013

On 28.11.2013 10:41, Sven Kieske wrote:
> can not set a static network via Cloud-Init

--
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing Director: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
General Partner: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
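For context, the report is about oVirt's Cloud-Init integration not exposing static (and in particular IPv6) network settings. Outside of oVirt, cloud-init of that era could consume a Debian-style interfaces stanza via its network configuration metadata; a minimal sketch of what a static IPv6 setup looks like in that format (all addresses and the interface name are illustrative, not from this thread):

```
# Debian /etc/network/interfaces-style stanza, as consumed by
# cloud-init 0.7.x network configuration (example values only)
iface eth0 inet6 static
    address 2001:db8::10
    netmask 64
    gateway 2001:db8::1
```

Whether oVirt can pass such a payload through is exactly what the BZ above tracks.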
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-11-27
On Fri, Nov 29, 2013 at 8:57 AM, Sandro Bonazzola wrote:
> Meeting summary
> ---
> * Agenda and roll Call (doron, 15:02:42)
> * 3.3 update releases (doron, 15:04:23)
> * 3.4 planning (doron, 15:04:24)
> * conferences and workshops (doron, 15:04:26)
> * infra update (doron, 15:04:27)
> * other topics (doron, 15:04:29)
> * LINK: http://gerrit.ovirt.org/#/admin/projects/ovirt-release (danken, 15:12:58)
> * LINK: http://gerrit.ovirt.org/21794 (mburns, 15:15:04)
> * LINK: http://jenkins.ovirt.org/job/ovirt-release/16800/ (mburns, 15:15:48)
> * mburns to add sbonazzo as a maintainer to support ovirt-release project (doron, 15:18:17)

ovirt-release-9 was released yesterday. BTW: I see that this package contains /etc/yum.repos.d/fedora-virt-preview.repo (and ovirt-release-fedora-8-1.noarch already did so). By default all lines in it are disabled. When and how should this repo be enabled? Only when using nightly, or only at the direction of developers/maintainers?

Thanks,
Gianluca
[Users] Combine Fedora and EL6
Greetings. Just have to ask... While having issues installing a node based on EL6, I instead tried one based on Fedora (same version). This seems to work just fine. I now see some other issues that I believe are unrelated, but just to double-check: is it supported to mix nodes based on FC19 with an engine running EL6, and vice versa, as long as the versions are the same?

Rgds
Jonas
[Users] 3 IDE disks : Duplicate ID error
Hi,

Starting from a VM with two IDE disks, I'm trying to add a third IDE disk. I get this Duplicate ID error:

VM serv-bd-dev1 is down. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/1429ffe2-4137-416c-bb38-63fd73f4bcc1/images/f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf/ed91505a-a219-42af-9915-e5ffc79918f9,if=none,id=drive-ide0-1-1,format=qcow2,serial=f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf,cache=none,werror=stop,rerror=stop,aio=native: Duplicate ID 'drive-ide0-1-1' for drive

When disabling the second disk and starting with only disk 0 and the new one (so starting with 2 disks), it boots fine. Is this by design?

(oVirt 3.3, no shared disk, thin-provisioned)

--
Nicolas Ecarnot
[Users] Bug cron 3.3.1-2 ?
Hi,

It seems the ovirt-cron job does something which does not work out of the box:

From: Anacron r...@management.test
To: r...@management.test
Content-Type: text/plain; charset=ANSI_X3.4-1968
Subject: Anacron job 'cron.daily' on management.test
Message-Id: 20131025010702.1a487400...@management.test
Date: Fri, 25 Oct 2013 03:07:02 +0200 (CEST)

/etc/cron.daily/ovirt-cron:
ls: cannot access /var/log/ovirt-engine/server.log.*: No such file or directory
ls: cannot access /var/log/ovirt-engine/jasperserver.log.*: No such file or directory
ls: cannot access /var/log/ovirt-engine/server.log.*.gz: No such file or directory
ls: cannot access /var/log/ovirt-engine/jasperserver.log.*.gz: No such file or directory

Is this intended behaviour? ovirt-engine 3.3.1-2 on CentOS 6.4.

--
Regards
Sven Kieske
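Those errors are the classic symptom of a cron script passing unexpanded globs to ls: when no file matches, the shell hands ls the literal pattern, and ls fails. A minimal sketch of the behaviour (the temporary directory stands in for /var/log/ovirt-engine; this is an illustration, not the actual cron script):

```shell
# Reproduce the failure mode in an empty temporary directory.
logdir=$(mktemp -d)

# No server.log.* files exist, so the glob stays literal and ls errors out;
# this is exactly the "cannot access" noise anacron mailed out.
ls "$logdir"/server.log.* 2>/dev/null || echo "glob did not match"

# find handles the no-match case quietly, which is why cleanup scripts
# usually prefer it over ls:
find "$logdir" -name 'server.log.*' -print

rmdir "$logdir"
```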
Re: [Users] Bug cron 3.3.1-2 ?
----- Original Message -----
From: Sven Kieske s.kie...@mittwald.de
To: users@ovirt.org
Sent: Friday, November 29, 2013 12:12:31 PM
Subject: [Users] Bug cron 3.3.1-2 ?

> It seems the ovirt-cron job does something which does not work out of the box:
>
> /etc/cron.daily/ovirt-cron:
> ls: cannot access /var/log/ovirt-engine/server.log.*: No such file or directory
> ls: cannot access /var/log/ovirt-engine/jasperserver.log.*: No such file or directory
> ls: cannot access /var/log/ovirt-engine/server.log.*.gz: No such file or directory
> ls: cannot access /var/log/ovirt-engine/jasperserver.log.*.gz: No such file or directory
>
> Is this intended behaviour?

No. In 3.3 we removed the proprietary log rotation in favor of standard logrotate.

> ovirt-engine 3.3.1-2 on CentOS 6.4.
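Since the reply points to standard logrotate as the replacement, here is a minimal sketch of what a logrotate rule for these engine logs could look like (paths taken from the error messages above; the schedule, rotation count, and directives are illustrative, not the configuration oVirt actually ships):

```
# /etc/logrotate.d/ovirt-engine (illustrative example only)
/var/log/ovirt-engine/server.log /var/log/ovirt-engine/jasperserver.log {
    daily
    rotate 14
    compress
    missingok      # stay silent when a log file is absent
    notifempty     # skip rotation of empty logs
    copytruncate   # rotate in place without restarting the engine
}
```

The missingok directive in particular is what avoids the "cannot access" noise the old cron job produced.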
Re: [Users] Gluster storage
On 11/29/2013 03:35 AM, tristan...@libero.it wrote:
> Hello everybody, I'm successfully using oVirt with 16 physical nodes, in an FC cluster with a very BIG (and so expensive) Dell Compellent enterprise storage ;) I'm researching a new architecture for a new cluster, and I want to better understand the GlusterFS integration in oVirt.

GlusterFS is integrated with oVirt in two ways:
1. Use oVirt to manage gluster storage configuration options.
2. Use GlusterFS as a storage domain.

> As I understand it, you have to install a normal physical node, with the glusterFS packages as well... right? After that you have to create a new cluster in oVirt, a new datacenter, and put this new node in it. In that datacenter you can create a new data domain (glusterFS) that resides on that host. Right?

A cluster in oVirt can be configured to behave in 3 ways:
(i) virtualization only
(ii) Gluster storage only
(iii) virtualization + gluster storage

You need (ii) or (iii) to provide oVirt with the ability to perform gluster storage configuration and management. You can also configure gluster using its CLI and use (i) or (iii) for a glusterfs storage domain. You can also have two clusters, one each for (i) and (ii), and manage both using oVirt.

There are two ways in which GlusterFS can be used as a storage domain:
a) Use gluster native/fuse access with POSIXFS.
b) Use the gluster native storage domain to bypass fuse (with libgfapi).

We are currently addressing an issue in libvirt (https://bugzilla.redhat.com/show_bug.cgi?id=1017289) to enable snapshot support with libgfapi. Once this is addressed, we will have libgfapi support in the native storage domain. Till then, fuse is used with the native storage domain. You can find more details about the native storage domain here: http://www.ovirt.org/Features/GlusterFS_Storage_Domain

> And after that? OK, I have 1 node that is also my storage; what if I want to add more compute nodes? Is every new compute node a new brick for glusterFS, so I can expand the first one or add redundancy to it?

If you are using separate compute/virtualization and storage clusters, you are not required to create bricks on your compute nodes. However, even if you are using a single cluster for both compute and storage (as in iii above), you need not necessarily have bricks on all compute nodes.

HTH,
Vijay

> I don't have the architecture very clear in my mind, and the documentation doesn't clarify the final architecture for this type of usage.
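As a concrete illustration of the CLI route mentioned above (option (i) plus a manually managed volume), creating a replicated volume and exposing it to oVirt might look like the following. Host names, brick paths, and the volume name are all made up, and the "virt" option group is an assumption; check current oVirt/Gluster documentation for the recommended volume options for VM workloads:

```
# On one of the gluster nodes (illustrative names and paths):
gluster peer probe gluster2.example.com
gluster volume create vmstore replica 2 \
    gluster1.example.com:/bricks/vmstore \
    gluster2.example.com:/bricks/vmstore
gluster volume set vmstore group virt     # assumed option group for VM images
gluster volume start vmstore

# In the oVirt webadmin, a GlusterFS storage domain would then point at:
#   Path: gluster1.example.com:/vmstore
```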
Re: [Users] Minor HTML5 Spice Bug
Ok, so I took a look at the cursor issue, but unfortunately it worked for me :)

I have:
- python-websockify-0.4.1-1
- spice-html5-0.1.4-1
- firefox client
- rhel guest with the spice-qxl driver installed

and it works quite well, although I've got some warnings in the debug output.

F.

----- Original Message -----
From: Thomas Suckow thomas.suc...@pnnl.gov
To: Frantisek Kobzik fkob...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, November 20, 2013 6:46:13 PM
Subject: Re: [Users] Minor HTML5 Spice Bug

> Regarding the 1st one, the feature is still in preview, so it still has quite a lot of issues. But could you do me a favor and take a look at whether the debug output prints some text when you click the screen (after resizing)? The debug output can be enabled via a browser console (using inspect element, for instance). I'm aware of the fact that the debug output contains quite a lot of error messages, but I would be interested in whether it contains anything related to this resizing problem.

(Each warning below is reported at spiceconn.js:379, i.e. https://we13196.pnl.gov//ovirt-engine-spicehtml5/spiceconn.js)

WARNING: FIXME: DrawCopy did not find image id -1897675979 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id -1684602068 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id 1040472031 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id 1134140689 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id 833634344 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id -25796394 in cache.
WARNING: 2: Unknown message type 304!
WARNING: FIXME: DrawCopy did not find image id -1897675979 in cache.
WARNING: 2: Unknown message type 304!
WARNING: 4: Unknown message type 104!
[the last warning repeats many more times]
Re: [Users] 3 IDE disks : Duplicate ID error
On Fri, Nov 29, 2013 at 11:13:03AM +0100, Nicolas Ecarnot wrote:
> Starting from a VM with two IDE disks, I'm trying to add a third IDE disk. I get this Duplicate ID error:
> VM serv-bd-dev1 is down. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/1429ffe2-4137-416c-bb38-63fd73f4bcc1/images/f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf/ed91505a-a219-42af-9915-e5ffc79918f9,if=none,id=drive-ide0-1-1,format=qcow2,serial=f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf,cache=none,werror=stop,rerror=stop,aio=native: Duplicate ID 'drive-ide0-1-1' for drive
> When disabling the second disk and starting with only disk 0 and the new one (so starting with 2 disks), it boots fine. Is this by design?

No. It sounds like a collision between the address of your third IDE disk and the IDE CD-ROM. Would you post vdsm.log from the vmCreate command until the reported error message? It would let us understand whether the bug is in Vdsm or Engine.

Dan.
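For background on the error itself: qemu derives the drive ID from the IDE address, roughly "drive-ide<controller>-<bus>-<unit>", so two devices resolving to the same (bus, unit) pair on one controller produce exactly this "Duplicate ID" failure. A small shell sketch of that naming scheme (the address tuples here are illustrative, not taken from this VM):

```shell
# Build qemu-style IDE drive IDs from (bus, unit) on controller 0 and
# report duplicates, mimicking the clash qemu complains about.
drive_id() { printf 'drive-ide0-%s-%s\n' "$1" "$2"; }

# Four legal addresses plus a hypothetical new disk whose address collides:
{ drive_id 0 0; drive_id 0 1; drive_id 1 0; drive_id 1 1; drive_id 1 1; } \
    | sort | uniq -d
```

Running this prints the colliding ID, drive-ide0-1-1, matching the error message in the report.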
[Users] Is it possible to limit migration speed and number of concurrent migrations?
I want to put some of my hypervisors into maintenance, but that causes a migration storm, which causes a temporary unavailability of the hypervisor, and oVirt fences it while migrations are still running. So I have to migrate one-by-one manually and then put the hypervisor into maintenance.

Is it possible to limit migration speed and the number of concurrent migrations?

--
Ernest Beinrohr, AXON PRO
DevOps, Ing http://www.beinrohr.sk/ing.php, RHCE http://www.beinrohr.sk/rhce.php, RHCVA http://www.beinrohr.sk/rhce.php, LPIC http://www.beinrohr.sk/lpic.php, VCA http://www.beinrohr.sk/vca.php
+421-2--6241-0360, +421-903--482-603
icq:28153343, skype:oernii-work, jabber:oer...@jabber.org
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." Richard Feynman
Re: [Users] VMX validation failing
Thank you, I feel dumb, as I looked and didn't find the option in the BIOS, so I asked. Also thanks to Itamar.

Regards,

On 28/11/13 17:44, Liviu Elama wrote:
> Hi Juan
> Please make sure that you have virtualization enabled in the BIOS. It's not enabled by default on the 2950, as far as I remember.
> Regards,
> Liviu
>
> On Fri, Nov 29, 2013 at 8:16 AM, Juan Pablo Lorier jplor...@gmail.com wrote:
>> Hi,
>> We got two PowerEdge 2950 out of production and I'm trying to add them to the DC just to get extra CPU and RAM, but though the X5450 Xeons have VMX, oVirt fails to install on the hosts with the error:
>> Failed to execute stage 'Setup validation': Hardware does not support virtualization.
>> I've searched the web and found a tip in vdsm-developer about updating vdsm-bootstrap, but it didn't work for me. Package versions on the engine are:
>>
>> bea-stax-api.noarch 1.2.0-4.el6.centos.alt @ovirt_test
>> glusterfs.x86_64 3.4.0-8.el6 @ovirt-stable
>> glusterfs-libs.x86_64 3.4.0-8.el6 @ovirt-stable
>> jboss-as.x86_64 7.1.1-11.el6 @ovirt-stable
>> jpackage-utils.noarch 5.0.0-7.el6.alt @ovirt_test
>> maven.x86_64 3.0.4-1.el6.alt @ovirt_test
>> otopi.noarch 1.1.2-1.el6 @ovirt-stable
>> otopi-java.noarch 1.1.2-1.el6 @ovirt-stable
>> ovirt-engine.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-backend.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-cli.noarch 3.3.0.4-1.el6 @epel
>> ovirt-engine-dbscripts.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-lib.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-restapi.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-sdk-python.noarch 3.3.0.6-1.el6 @epel
>> ovirt-engine-setup.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-tools.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-userportal.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-webadmin-portal.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-engine-websocket-proxy.noarch 3.3.1-2.el6 @ovirt-stable
>> ovirt-host-deploy.noarch 1.1.1-1.el6 @ovirt-stable
>> ovirt-host-deploy-java.noarch 1.1.1-1.el6 @ovirt-stable
>> ovirt-image-uploader.noarch 3.3.1-1.el6 @ovirt-stable
>> ovirt-iso-uploader.noarch 3.3.1-1.el6 @ovirt-stable
>> ovirt-log-collector.noarch 3.3.1-1.el6 @ovirt-stable
>> ovirt-release-el6.noarch 8-1 @ovirt-stable
>> postgresql-jdbc.x86_64 8.4.701-8.1.el6.centos.alt @ovirt_test
>> python-kitchen.noarch 1.1.1-1.el6.centos.alt @ovirt_test
>> vdsm-bootstrap.noarch 4.13.0-11.el6 @ovirt-stable
>>
>> Regards,
Re: [Users] Gluster storage
On Fri, Nov 29, 2013 at 04:04:03PM +0530, Vijay Bellur wrote:
> There are two ways in which GlusterFS can be used as a storage domain:
> a) Use gluster native/fuse access with POSIXFS.
> b) Use the gluster native storage domain to bypass fuse (with libgfapi).
> We are currently addressing an issue in libvirt (https://bugzilla.redhat.com/show_bug.cgi?id=1017289) to enable snapshot support with libgfapi. Once this is addressed, we will have libgfapi support in the native storage domain.

It won't be that immediate, since there's a required fix on Vdsm's side as well (Bug 1022961 - Running a VM from a gluster domain uses mount instead of gluster URI).

> Till then, fuse would be used with the native storage domain. You can find more details about the native storage domain here: http://www.ovirt.org/Features/GlusterFS_Storage_Domain
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-11-27
On 29/11/2013 09:43, Gianluca Cecchi wrote:
> ovirt-release-9 was released yesterday. BTW: I see that this package contains /etc/yum.repos.d/fedora-virt-preview.repo (and ovirt-release-fedora-8-1.noarch already did so). By default all lines in it are disabled. When and how should this repo be enabled? Only when using nightly, or only at the direction of developers/maintainers?

I think that fedora-virt-preview should be used with nightly; stable shouldn't need it. However, since fedora-virt-preview contains vdsm-related packages not needed if you run only ovirt-engine (without using the same host as hypervisor), I think it's better to wait for the VDSM guys' answer.

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [Users] Is it possible to limit migration speed and number of concurrent migrations?
On Fri, Nov 29, 2013 at 11:49:05AM +0100, Ernest Beinrohr wrote:
> I want to put some of my hypervisors into maintenance, but that causes a migration storm, which causes a temporary unavailability of the hypervisor, and oVirt fences it while migrations are still running. So I have to migrate one-by-one manually and then put the hypervisor into maintenance. Is it possible to limit migration speed and the number of concurrent migrations?

You could set max_outgoing_migrations to 1 (in each /etc/vdsm/vdsm.conf), but even a single VM migrating may choke your connection (it depends which wins: the CPU running qemu, or your bandwidth).

Currently, your only other option is to define a migration network for your cluster (say, over a different NIC, or over a VLAN), and use tools external to oVirt to throttle the bandwidth on it (virsh net-edit <network name> and http://libvirt.org/formatnetwork.html#elementQoS can come in handy). ovirt-3.4 should expose the ability to set QoS limits on migration networks.

Dan.
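Putting the two suggestions together: the concurrency knob goes into vdsm.conf on each host, and the libvirt-side throttle is a <bandwidth> element on the network definition. Both snippets are illustrative sketches (the section name in vdsm.conf is assumed, and the QoS numbers are made up; per the libvirt formatnetwork documentation, average/peak are in KiB/s and burst in KiB):

```
# /etc/vdsm/vdsm.conf (on each host; section name assumed)
[vars]
max_outgoing_migrations = 1
```

```
<!-- Edited via "virsh net-edit <network name>"; example values only -->
<network>
  <name>migration-net</name>
  <bandwidth>
    <outbound average='65536' peak='131072' burst='65536'/>
  </bandwidth>
</network>
```

With such a definition, libvirt applies the traffic shaping on the bridge backing the migration network, so the throttle holds regardless of how many VMs migrate at once.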
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-11-27
On Fri, Nov 29, 2013 at 11:49:59AM +0100, Sandro Bonazzola wrote:
> I think that fedora-virt-preview should be used with nightly; stable shouldn't need it. However, since fedora-virt-preview contains vdsm-related packages not needed if you run only ovirt-engine (without using the same host as hypervisor), I think it's better to wait for the VDSM guys' answer.

Vdsm is not in http://fedorapeople.org/groups/virt/virt-preview/fedora-20/x86_64/

virt-preview is not needed for ovirt-3.3, and frankly, I think it should be dropped from ovirt-release. It used to be needed on the nodes when vdsm required a version of libvirt that was not yet in Fedora. Now that we have el6 as a fully-supported platform, and given that el6 is missing from virt-preview, virt-preview is much less helpful to us.

Dan.
Re: [Users] 3 IDE disks : Duplicate ID error
On 29/11/2013 11:44, Dan Kenigsberg wrote:
> No. It sounds like a collision between the address of your third IDE disk and the IDE CD-ROM. Would you post vdsm.log from the vmCreate command until the reported error message? It would let us understand whether the bug is in Vdsm or Engine.
> Dan.

Do you mean the XML that one can see in vdsm.log?
<domain type="kvm">
  <name>serv-bd-dev1</name>
  <uuid>0e3ffc7c-4e54-405c-9af1-55bc4f25fe13</uuid>
  <memory>4194304</memory>
  <currentMemory>4194304</currentMemory>
  <vcpu>4</vcpu>
  <memtune>
    <min_guarantee>349184</min_guarantee>
  </memtune>
  <devices>
    <channel type="unix">
      <target name="com.redhat.rhevm.vdsm" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/0e3ffc7c-4e54-405c-9af1-55bc4f25fe13.com.redhat.rhevm.vdsm"/>
    </channel>
    <channel type="unix">
      <target name="org.qemu.guest_agent.0" type="virtio"/>
      <source mode="bind" path="/var/lib/libvirt/qemu/channels/0e3ffc7c-4e54-405c-9af1-55bc4f25fe13.org.qemu.guest_agent.0"/>
    </channel>
    <input bus="usb" type="tablet"/>
    <graphics autoport="yes" keymap="fr" passwd="*" passwdValidTo="1970-01-01T00:00:01" port="-1" type="vnc">
      <listen network="vdsm-ovirtmgmt" type="network"/>
    </graphics>
    <controller model="virtio-scsi" type="scsi"/>
    <video>
      <address bus="0x00" domain="0x" function="0x0" slot="0x02" type="pci"/>
      <model heads="1" type="qxl" vram="65536"/>
    </video>
    <interface type="bridge">
      <address bus="0x00" domain="0x" function="0x0" slot="0x03" type="pci"/>
      <mac address="00:1a:4a:a8:27:0c"/>
      <model type="e1000"/>
      <source bridge="ovirtmgmt"/>
      <filterref filter="vdsm-no-mac-spoofing"/>
      <link state="up"/>
    </interface>
    <disk device="cdrom" snapshot="no" type="file">
      <address bus="1" controller="0" target="0" type="drive" unit="0"/>
      <source file="" startupPolicy="optional"/>
      <target bus="ide" dev="hdc"/>
      <readonly/>
      <serial/>
    </disk>
    <disk device="disk" snapshot="no" type="block">
      <address bus="0" controller="0" target="0" type="drive" unit="0"/>
      <source dev="/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/1429ffe2-4137-416c-bb38-63fd73f4bcc1/images/c38a73f0-da9e-4e52-ad63-78e64fce957c/7a74ba80-9207-4f21-90a0-f28fc9925fee"/>
      <target bus="ide" dev="hda"/>
      <serial>c38a73f0-da9e-4e52-ad63-78e64fce957c</serial>
      <boot order="1"/>
      <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
    </disk>
    <disk device="disk" snapshot="no" type="block">
      <address bus="1" controller="0" target="0" type="drive" unit="1"/>
      <source dev="/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/11a077c7-658b-49bb-8596-a785109c24c9/images/36553e2f-64f2-4d4e-b8a1-d9cd7d1cdf49/e610a133-1f96-4c6a-8be5-efd1fdc413b6"/>
      <target bus="ide" dev="hdb"/>
      <serial>36553e2f-64f2-4d4e-b8a1-d9cd7d1cdf49</serial>
      <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
    </disk>
    <disk device="disk" snapshot="no" type="block">
      <address bus="0" controller="0" target="0" type="drive" unit="1"/>
      <source dev="/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/1429ffe2-4137-416c-bb38-63fd73f4bcc1/images/f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf/ed91505a-a219-42af-9915-e5ffc79918f9"/>
      <target bus="ide" dev="hdd"/>
      <serial>f9ca88d8-e29f-4b11-9aae-b9330b5f8cdf</serial>
      <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
    </disk>
    <memballoon model="virtio"/>
  </devices>
</domain>
Re: [Users] oVirt Node 3.0.3-1 for oVirt 3.3 release
Fabian,

I've downloaded the ovirt-node-iso-3.0.3-1.1.vdsm.fc19.iso image and have tried installing it on two separate machines. Whether I do a regular install (default boot screen option) or choose 'reinstall' from the troubleshooting menu, I get tripped up at the same spot. When it comes time to select a keyboard layout (I think this is correct), US English is the default. I press enter at that point, and after a short delay a very uninformative error message appears. I can't remember what it is off the top of my head and the systems are at work (I'm at home now). After this point I can continue, but the install fails immediately after I get past the install target selection screen.

Has anyone successfully installed a node using this iso image? I even tried removing partitions from the drive beforehand, to the point of dd'ing the first 512 bytes to make sure there was no partition table. This didn't help - I still ran into the install error described above.

I particularly wanted to try an FC19-based node to try out live storage migration on NFS, as I was having problems with the EL6.4 node with this. Is there a chance that that issue might be resolved for EL6-based nodes by an upcoming EL6.5-based node spin?

Sorry I cannot provide the exact error details for the install issue now, but I can get them in the next couple of days if required.

Thanks,
Paul
Re: [Users] 3 IDE disks : Duplicate ID error
On Fri, Nov 29, 2013 at 01:16:17PM +0100, Nicolas Ecarnot wrote:
> On 29/11/2013 11:44, Dan Kenigsberg wrote:
>> No. It sounds like a collision between the address of your third IDE disk and the IDE CD-ROM. Would you post vdsm.log from the vmCreate command until the reported error message? It would let us understand whether the bug is in Vdsm or Engine.
>
> Do you mean the XML that one can see in vdsm.log?

Actually, I meant everything: from Engine's parameters to the vmCreate command, down to the error from libvirt. The domxml below suggests that the error might be even further below, since I do not see an address collision here.
> <disk device="cdrom" snapshot="no" type="file">
>   <address bus="1" controller="0" target="0" type="drive" unit="0"/>
>   <target bus="ide" dev="hdc"/>
> </disk>
> <disk device="disk" snapshot="no" type="block">
>   <address bus="0" controller="0" target="0" type="drive" unit="0"/>
>   <target bus="ide" dev="hda"/>
> </disk>
> <disk device="disk" snapshot="no" type="block">
>   <address bus="1" controller="0" target="0" type="drive" unit="1"/>
>   <target bus="ide" dev="hdb"/>
> </disk>
> <disk device="disk" snapshot="no" type="block">
>   <address bus="0" controller="0" target="0" type="drive" unit="1"/>
>   <target bus="ide" dev="hdd"/>
> </disk>

Could you provide the vmCreate line from vdsm.log, and also the part of libvirtd.log from when that domxml gets in until the error is spewed out?
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-11-27
On Fri, Nov 29, 2013 at 01:18:53PM +0100, Sandro Bonazzola wrote: Il 29/11/2013 13:13, Dan Kenigsberg ha scritto: On Fri, Nov 29, 2013 at 11:49:59AM +0100, Sandro Bonazzola wrote: Il 29/11/2013 09:43, Gianluca Cecchi ha scritto: On Fri, Nov 29, 2013 at 8:57 AM, Sandro Bonazzola wrote: Meeting summary --- * Agenda and roll Call (doron, 15:02:42) * 3.3 update releases (doron, 15:04:23) * 3.4 planning (doron, 15:04:24) * conferences and workshops (doron, 15:04:26) * infra update (doron, 15:04:27) * other topics (doron, 15:04:29) * LINK: http://gerrit.ovirt.org/#/admin/projects/ovirt-release ~ (danken, 15:12:58) * LINK: http://gerrit.ovirt.org/21794 (mburns, 15:15:04) * LINK: http://jenkins.ovirt.org/job/ovirt-release/16800/ (mburns, 15:15:48) * mburns to add sbonazzo as a maintainer to support ovirt-release project (doron, 15:18:17) ovirt-release-9 released yesterday BTW: I see that this package contains /etc/yum.repos.d/fedora-virt-preview.repo (and ovirt-release-fedora-8-1.noarch already did so) By default all lines are disabled in it. When and how this repo should be enabled? Only when using nightly or only under developers/maintainers indications? I think that fedora-virt-preview should be used with nightly, stable shouldn't need it. However, since fedora-virt-preview contains vdsm - related packages not needed if you run only ovirt-engine (without using the same host as hypervisor) I think it's better to wait for VDSM guys answer. Vdsm is not in http://fedorapeople.org/groups/virt/virt-preview/fedora-20/x86_64/ virt-preview is not needed for ovirt-3.3, and frankly, I think it should be dropped from ovirt-release. It used to be needed on the nodes when vdsm required a version of libvirt that was not yet in Fedora. Now that we have el6 as a fully-supported platform, and given that el6 is missing from virt-preview, virt-preview is much less helpful to us. Dan. So, any objection in removing virt-preview from ovirt-release? What about nightly? Will it be needed there? 
Should be removed from both, since it is currently unused. We could reintroduce it if the need arises. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
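For readers wondering how the "disabled by default" state mentioned above looks in practice, a yum repo section is toggled via its `enabled` flag. This is an illustrative sketch only; the actual section name and baseurl in the shipped fedora-virt-preview.repo may differ, so check the file itself before editing:

```ini
; /etc/yum.repos.d/fedora-virt-preview.repo -- illustrative sketch;
; section id and baseurl are assumptions, verify against the shipped file.
[fedora-virt-preview]
name=Fedora Virtualization Preview
baseurl=http://fedorapeople.org/groups/virt/virt-preview/fedora-$releasever/$basearch/
; ships as enabled=0; per the thread, only flip to 1 for nightly testing,
; and note the conclusion above is to drop this repo entirely.
enabled=0
gpgcheck=0
```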
Re: [Users] Documentation: Storage Domain conversion from Data Domain to Export Domain
On 11/28/2013 03:53 AM, Haim Ateya wrote:

- Original Message -
From: Bob Doolittle b...@doolittle.us.com
To: d...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, November 27, 2013 12:16:30 AM
Subject: Re: [Users] Documentation: Storage Domain conversion from Data Domain to Export Domain

On 11/26/2013 03:27 PM, Dafna Ron wrote:

sql -U postgres engine -c 'select connection from storage_server_connections;'

[Bob] So now that we've determined that I have stale connection state in my DB, any suggestions as to how I might clear it out safely? I tried rebooting my Engine, but the connection is still in the DB and it still doesn't show in the Admin Portal.

[Haim] Hi Bob, you can try the following command:

engine=# delete FROM storage_server_connections where connection = '172.16.0.58:/export/VM_EXPORTDOMAIN';

This should allow you to re-create your storage domain.

[Bob] Thanks Haim!

[Haim] I would appreciate it if you could file a bug against it.

[Bob] I got into this situation with some unsupported hacking, and am not sure whether I would have hit this same issue without it. Do you believe this stale data was simply a result of trying to add a Host while it had a default iptables configuration active? If it's that simple, I'm happy to open the bug. But if it was a result of trying to manually convert an old Data NFS Domain into an Export NFS Domain, it's not worth opening a bug, since an official feature to do this is in development. Please advise.

Thanks, Bob
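Collecting the inspect-then-delete sequence from the thread in one place (the connection string is the one from this thread; substitute your own, and double-check the SELECT output before deleting anything from the engine database):

```sql
-- Run against the engine database (e.g. via psql as the postgres user).
-- First list the stored connections and identify the stale entry:
SELECT id, connection FROM storage_server_connections;

-- Then remove only the stale entry:
DELETE FROM storage_server_connections
 WHERE connection = '172.16.0.58:/export/VM_EXPORTDOMAIN';
```

After the delete, re-creating the storage domain from the Admin Portal should no longer collide with the stale record.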
Re: [Users] oVirt installation breaks on CentOS 6.4 64bit
- Original Message -
From: Peter Lerche pe...@easyspeedy.com
To: users@ovirt.org
Sent: Friday, November 29, 2013 6:36:45 PM
Subject: [Users] oVirt installation breaks on CentOS 6.4 64bit

Hi, I have tried to install oVirt all-in-one on Fedora 19 for the last couple of days. I have had some very good help from this list, but ended up with an issue with NFS not starting correctly. I must admit that I gave up and am now trying to do the same install on CentOS 6.4. However, after a fresh install of CentOS and oVirt all-in-one, I get the following error when running engine-setup:

[ INFO ] Starting engine service
[ INFO ] Restarting httpd
detail: Cannot add Host. SSH authentication failed, verify authentication parameters are correct (Username/Password, public-key etc.) You may refer to the engine.log file for further details.
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131129163946.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

[Reply] I suspect a SELinux issue; will fix. I think a temporary workaround will be:

# restorecon -r ~/.ssh

and try again.

[Peter] I have enclosed the setup and engine logfiles. I am a bit bewildered that it is so hard to get an all-in-one demo server going.

-- Best regards, Peter Lerche
Re: [Users] Combine Fedora and EL6
On 11/29/2013 11:06 AM, Jonas Israelsson wrote:

Greetings. Just have to ask... While having issues installing a node based on el6, I instead tried one based on Fedora (same version). This seems to work just fine. I now see some other issues that I believe are unrelated, but just to double-check: is it supported to mix nodes based on FC19 with an engine running EL6, and vice versa, as long as the versions are the same?

The engine can manage both Fedora and el6 hosts at the same time. Each can be either the full-blown version or the ovirt-node version. *But* do not mix Fedora and el6 nodes in the same cluster.
Re: [Users] Is it possible to limit migration speed and number of concurrent migrations?
On 11/29/2013 02:08 PM, Dan Kenigsberg wrote:

On Fri, Nov 29, 2013 at 11:49:05AM +0100, Ernest Beinrohr wrote:

I want to put some of my hypervisors into maintenance, but that causes a migration storm, which causes a temporary unavailability of the hypervisor, and oVirt fences it while migrations are still running. So I have to migrate one-by-one manually and then put the hypervisor into maintenance. Is it possible to limit the migration speed and the number of concurrent migrations?

[Dan] You could set max_outgoing_migrations to 1 (in each /etc/vdsm/vdsm.conf), but even a single migrating VM may choke your connection (it depends which wins: the CPU running qemu, or your bandwidth). Currently, your only other option is to define a migration network for your cluster (say, over a different NIC, or over a VLAN) and use tools external to oVirt to throttle the bandwidth on it (virsh net-edit <network name> and http://libvirt.org/formatnetwork.html#elementQoS can come in handy). ovirt-3.4 should expose the ability to set QoS limits on migration networks. Dan.

[Follow-up] I thought we could control both the number of concurrent migrations and the bandwidth per migration, which defaults to 30MB/s?
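A minimal vdsm.conf sketch for the settings discussed above. Option names are as used in vdsm 3.x; verify them against your installed version's configuration sample before applying, and restart vdsmd on each host for the change to take effect:

```ini
; /etc/vdsm/vdsm.conf -- sketch under the assumption these option names
; match your vdsm version; check the shipped sample config first.
[vars]
; Maximum number of outgoing live migrations running at once on this host.
max_outgoing_migrations = 1
; Per-migration bandwidth cap in MiB/s (the ~30MB/s default mentioned
; in the thread).
migration_max_bandwidth = 30
```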
Re: [Users] ovirtmgmt not installed
- Original Message -
From: Pascal Jakobi pascal.jak...@gmail.com
To: users@ovirt.org
Sent: Saturday, November 30, 2013 1:24:23 AM
Subject: [Users] ovirtmgmt not installed

Hi there, I installed a console on F19, then an F19 host (at 11:09 today). Everything works fine, apart from the installation of the mgmt network at the end. Can someone tell me what's going wrong? Thanks in advance, Pascal

Hi,

This is not the log you sent me offline. It is not a 3.3 cluster; the host-deploy in this log tries to create the management bridge and fails. For this issue I need the host-deploy log [1]; I think we will find the same issue we had with ipv6. Please attempt to use cluster level 3.3.

Regards, Alon Bar-Lev.

[1] /var/log/ovirt-engine/host-deploy/ovirt-20131128141519-lab2.home-2d134028.log
Re: [Users] ovirtmgmt not installed
- Original Message -
From: Alon Bar-Lev alo...@redhat.com
To: Pascal Jakobi pascal.jak...@gmail.com
Cc: users@ovirt.org
Sent: Saturday, November 30, 2013 2:40:33 AM
Subject: Re: [Users] ovirtmgmt not installed

> Hi there, I installed a console on F19, then an F19 host (at 11:09 today). Everything works fine, apart from the installation of the mgmt network at the end. Can someone tell me what's going wrong? Thanks in advance, Pascal

> Hi, this is not the log you sent me offline. It is not a 3.3 cluster; the host-deploy in this log tries to create the management bridge and fails. For this issue I need the host-deploy log [1]; I think we will find the same issue we had with ipv6.

I was wrong: it resolves correctly to ipv4, and fails connecting to 192.168.1.41:80 after setup of the management bridge, so I need the log to see what is happening.

> Please attempt to use cluster level 3.3.

Regards, Alon Bar-Lev.

[1] /var/log/ovirt-engine/host-deploy/ovirt-20131128141519-lab2.home-2d134028.log