[ovirt-users] Can HA Agent control NFS Mount?
Hi,

I was just wondering, within the whole complexity of hosted-engine: would it be possible for the hosted-engine ha-agent to control the mount point? I'm basing this off a few people I've been talking to who have their NFS server running on the same host that the hosted-engine servers are running on, most commonly on top of gluster.

The main motive for this: currently, if the NFS server is running on localhost and the server goes for a clean shutdown, it will hang, because the NFS mount is hard-mounted and the NFS server has gone away, so we're stuck in an infinite wait for it to cleanly unmount (which it never will). If one of the HA components could instead unmount this NFS mount when it shuts down, that could potentially prevent this.

There are other alternatives, and I know this is not the supported scenario, but I'm just hoping to bounce a few ideas around.

Thanks,
Andrew

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
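For example, a shutdown hook could issue a lazy unmount of the storage domain before the local NFS server stops. This is only an illustrative sketch, not a supported oVirt mechanism; the mount path is an assumption and would need to match your actual hosted-engine storage domain:

```python
import subprocess

# Assumed mount point of the hosted-engine storage domain (illustrative;
# check what your host actually mounts under /rhev/data-center/mnt/).
ENGINE_MOUNT = "/rhev/data-center/mnt/localhost:_exports_engine"

def lazy_umount_cmd(mountpoint):
    """Build a lazy-unmount command: 'umount -l' detaches the mount point
    immediately and cleans up references later, so shutdown does not block
    on an NFS server that has already gone away."""
    return ["umount", "-l", mountpoint]

def unmount_engine_domain(runner=subprocess.call):
    """Run the lazy unmount; meant to be invoked from a shutdown script
    ordered before the NFS server is stopped. The runner indirection
    exists so the logic can be exercised without root privileges."""
    return runner(lazy_umount_cmd(ENGINE_MOUNT))
```

Ordering is the important part: the hook has to fire while the local NFS server is still answering, otherwise the hard mount is already wedged and only a lazy/forced unmount has any chance.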
Re: [ovirt-users] glusterfs questions/tips
Hi, did you miss the email's body :-) ?

Thanks,
Gilad.

- Original Message -
From: Gabi C gab...@gmail.com
To: users@ovirt.org
Sent: Wednesday, May 21, 2014 11:28:26 AM
Subject: [ovirt-users] glusterfs questions/tips

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] glusterfs questions/tips
ignore :-) saw the later thread...

Thanks,
Gilad.

- Original Message -
From: Gilad Chaplik gchap...@redhat.com
To: Gabi C gab...@gmail.com
Cc: users@ovirt.org
Sent: Saturday, May 24, 2014 11:32:14 AM
Subject: Re: [ovirt-users] glusterfs questions/tips

Hi, did you miss the email's body :-) ?

Thanks,
Gilad.

- Original Message -
From: Gabi C gab...@gmail.com
To: users@ovirt.org
Sent: Wednesday, May 21, 2014 11:28:26 AM
Subject: [ovirt-users] glusterfs questions/tips

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] SLA : RAM scheduling
Hi Nathanaël,

You have 2 ways to get what you're after (quick/slow):

1) install 'oVirt's external scheduling proxy', and write an extremely simple weight function that orders hosts by used memory, then add that to your cluster policy.
2) open an RFE for oVirt 3.4 to have that built in (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).

Let me know if you consider (1), and I'll assist. Anyway, I suggest you open an RFE for 3.5.

Thanks,
Gilad.

- Original Message -
From: Nathanaël Blanchet blanc...@abes.fr
To: Karli Sjöberg karli.sjob...@slu.se
Cc: users users@ovirt.org
Sent: Friday, May 23, 2014 7:38:40 PM
Subject: Re: [ovirt-users] SLA : RAM scheduling

Even distribution is for CPU only.

On 23/05/2014 17:48, Karli Sjöberg wrote:
> On 23 May 2014 17:13, Nathanaël Blanchet blanc...@abes.fr wrote:
>> On 23/05/2014 17:11, Nathanaël Blanchet wrote:
>>> Hello,
>>> On oVirt 3.4, is it possible to schedule VM distribution depending on host RAM availability? Concretely, I had to manually move all the VMs to the second host of the cluster, which brought memory occupation on the destination host to 90%. When my first host came back after a reboot, none of the VMs on the second host automatically migrated to the first one, which had all of its RAM free. How can I make this happen, so that RAM ends up evenly distributed across both hosts? Hope that's clear enough...
> Sounds like you just want to apply the cluster policy for even distribution. Have you assigned any policy for that cluster?
> /K

--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
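For option (1), the weight function itself can be tiny. A sketch of the idea follows; the class layout, method name, and input shape are illustrative assumptions about how such a plugin might look, not the exact external scheduling proxy API:

```python
class MemoryEvenDistributionWeight:
    """Hypothetical weight module: hosts with less used memory get a lower
    (better) score, so the scheduler prefers the emptiest host and RAM
    usage evens out across the cluster over time."""

    def do_score(self, hosts):
        # hosts: iterable of (host_id, used_memory_mb) tuples.
        # Rank hosts by used memory; rank 0 (best) = least used memory.
        ranked = sorted(hosts, key=lambda h: h[1])
        return [(host_id, rank) for rank, (host_id, _used) in enumerate(ranked)]
```

Once a module like this is registered with the proxy and added to the cluster policy as a weight, the engine would favour the host with the most free RAM for each placement or migration decision.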
Re: [ovirt-users] SLA : RAM scheduling
- Original Message -
From: Gilad Chaplik gchap...@redhat.com
To: Nathanaël Blanchet blanc...@abes.fr
Cc: Karli Sjöberg karli.sjob...@slu.se, users users@ovirt.org
Sent: Saturday, May 24, 2014 11:49:48 AM
Subject: Re: [ovirt-users] SLA : RAM scheduling

> 2) open an RFE for oVirt 3.4 to have that built in
> (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).

By 3.4, I mean 3.4.x (= anyway for (2) you'll need to upgrade), but not sure it will make it.

> Let me know if you consider (1), and I'll assist. Anyway, I suggest you
> open an RFE for 3.5.

Thanks,
Gilad.

[rest of the quoted thread trimmed; it repeats the preceding message verbatim]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...
Hi,

Are these patches merged into 3.4.1? I seem to be hitting this issue now, twice in a row. The second BZ is also marked as private.

On Fri, May 2, 2014 at 1:21 AM, Artyom Lukianov aluki...@redhat.com wrote:
> It has a number of the same bugs:
> https://bugzilla.redhat.com/show_bug.cgi?id=1080513
> https://bugzilla.redhat.com/show_bug.cgi?id=1088572 - the fix for this is already merged, so the latest oVirt should include it.
> The one thing you can do until then is to restart the host and start the deployment process from the beginning.
> Thanks
>
> - Original Message -
> From: Tobias Honacker tob...@honacker.info
> To: users@ovirt.org
> Sent: Thursday, May 1, 2014 6:06:47 PM
> Subject: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...
>
> Hi all,
>
> i hit this bug yesterday.
>
> Packages:
> ovirt-host-deploy-1.2.0-1.el6.noarch
> ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
> ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
> ovirt-release-11.2.0-1.noarch
> ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
>
> After setting up the hosted engine (running great), the setup canceled with this message:
>
> [ INFO ] The VDSM Host is now operational
> [ ERROR ] Waiting for cluster 'Default' to become operational...
> [ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has no attribute '__dict__'
> [ INFO ] Stage: Clean up
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
>
> What is the next step I have to do so that the HA features of the hosted-engine will take care of keeping the VM alive?
>
> best regards
> tobias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [ovirt-users] Hosted Engine - Waiting for cluster 'Default' to become operational...
Simply starting the ha-agents manually seems to bring up the VM, however it doesn't come up in the chkconfig list. The next host that gets configured works fine. What steps get configured in that final stage that perhaps I could run manually, rather than rerolling for a third time?

On Sat, May 24, 2014 at 9:42 PM, Andrew Lau and...@andrewklau.com wrote:
> Hi,
>
> Are these patches merged into 3.4.1? I seem to be hitting this issue now, twice in a row. The second BZ is also marked as private.
>
> [rest of the quoted thread trimmed; it repeats the preceding message verbatim]

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
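If the deployment only failed at the closing-up stage, after the HA packages were already installed, registering and starting the two HA services by hand may be worth a try. A sketch only: the SysV service names are the ones shipped by ovirt-hosted-engine-ha on el6, and whether this covers everything the skipped stage would have done is an assumption:

```python
import subprocess

# Broker first: the agent talks to the broker, so the broker must be up.
HA_SERVICES = ["ovirt-ha-broker", "ovirt-ha-agent"]

def enable_and_start_cmds(services):
    """Build the chkconfig/service command lines that register each
    service for boot (el6 SysV style) and start it immediately."""
    cmds = []
    for svc in services:
        cmds.append(["chkconfig", svc, "on"])   # add to boot runlevels
        cmds.append(["service", svc, "start"])  # start right now
    return cmds

def enable_and_start(services=HA_SERVICES, runner=subprocess.call):
    """Execute the commands; needs root. The runner indirection lets the
    command construction be checked without touching the system."""
    for cmd in enable_and_start_cmds(services):
        runner(cmd)
```

This would at least explain the chkconfig symptom: if the closing-up stage is what normally runs the `chkconfig ... on` step, a setup that dies before it leaves the services installed but not registered for boot.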
[ovirt-users] experience with AMD Kabini (Jaguar) CPUs
Hey guys,

I need a system to test oVirt and Opaque on, but I want it to be super-low-power and near-silent. Therefore I am thinking of grabbing one of the new 25W AMD Kabini CPUs. Will oVirt work well on that architecture? If not, what would you recommend?

Many thanks!
iordan

--
The conscious mind has only one thread of execution.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users