Re: [Users] errors in engine log and tones of sync messages
On 12/16/2013 03:26 PM, Vered Volansky wrote:
I think Sahina is qualified to answer this question. Sahina?

----- Original Message -----
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: Vered Volansky ve...@redhat.com
Cc: Juan Pablo Lorier jplor...@gmail.com, users users@ovirt.org
Sent: Sunday, December 15, 2013 6:52:07 PM
Subject: Re: [Users] errors in engine log and tones of sync messages

On Sun, Dec 15, 2013 at 8:33 AM, Vered Volansky wrote:
Hi Juan,
The following patch should handle the messages: http://gerrit.ovirt.org/#/c/22287

Can we safely run the SQL statements referred to in that patch against a running environment, possibly stopping the engine service first? That is, against the engine DB only, run:

select fn_db_add_config_value('GlusterAsyncTasksSupport', 'false', '3.0');
select fn_db_add_config_value('GlusterAsyncTasksSupport', 'false', '3.1');
select fn_db_add_config_value('GlusterAsyncTasksSupport', 'false', '3.2');
select fn_db_add_config_value('GlusterAsyncTasksSupport', 'false', '3.3');

Yes, it is safe to run these.

Thanks,
Gianluca

___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
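For reference, the four statements can be generated and applied in one go. This is a minimal sketch, assuming the default engine database name and user ("engine") and that the commands run on the engine host; adjust both to your setup:

```shell
# Generate the workaround SQL for each pre-3.4 cluster level.
# Per the thread, running these statements is safe.
for ver in 3.0 3.1 3.2 3.3; do
  echo "select fn_db_add_config_value('GlusterAsyncTasksSupport', 'false', '$ver');"
done > /tmp/disable_gluster_tasks.sql

# On the engine host (DB name/user 'engine' are assumptions):
#   service ovirt-engine stop
#   psql -U engine -d engine -f /tmp/disable_gluster_tasks.sql
#   service ovirt-engine start
cat /tmp/disable_gluster_tasks.sql
```

Stopping the engine first, as discussed above, avoids the config change racing with the running scheduler.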
Re: [Users] oVirt 3.3.2 beta: glusterTaskList not supported message
On 12/10/2013 10:16 PM, Gianluca Cecchi wrote:
Hello,
I successfully upgraded the engine and two hosts from 3.3.1 stable f19 to 3.3.2 beta1 (ovirt-engine-3.3.2-0.1.beta1.fc19.noarch). I have a gluster datacenter. I see these repeating messages in engine.log, both on 3.3.1 and 3.3.2 beta.

3.3.1:
2013-12-10 03:41:16,853 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-50) Command GlusterTasksListVDS execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterTasksList is not supported
2013-12-10 03:41:16,853 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-50) Failed to invoke scheduled method gluster_async_task_poll_event: java.lang.reflect.InvocationTargetException
Caused by: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterTasksList is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)

3.3.2 beta:
2013-12-10 17:35:37,633 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-66) Command GlusterTasksListVDS execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterTasksList is not supported
2013-12-10 17:35:37,634 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-66) Failed to invoke scheduled method gluster_async_task_poll_event: java.lang.reflect.InvocationTargetException
Caused by: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterTasksList is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)

Do I have to worry about them? At the moment the storage domain is OK for both hosts, and VMs are powered on...

No, you can ignore these errors - they are part of gluster tasks monitoring. I have sent a patch to disable this monitoring for 3.3 clusters and below: http://gerrit.ovirt.org/22287

Gianluca
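Since these entries are benign, a quick grep is enough to confirm they are the gluster-task polling errors and nothing else. A minimal sketch; the sample lines are quoted verbatim from the excerpt above, and on a real engine host you would point grep at the usual engine log path instead:

```shell
# Sample lines quoted verbatim from the log excerpt above.
cat > /tmp/engine_excerpt.log <<'EOF'
2013-12-10 17:35:37,633 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-66) Command GlusterTasksListVDS execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterTasksList is not supported
2013-12-10 17:35:37,634 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-66) Failed to invoke scheduled method gluster_async_task_poll_event: java.lang.reflect.InvocationTargetException
EOF
# Count the benign polling errors; on an engine host use
# /var/log/ovirt-engine/engine.log instead of the excerpt file.
grep -c "glusterTasksList is not supported" /tmp/engine_excerpt.log   # prints 1
```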
Re: [Users] Gluster Volume Info won't update
On 12/10/2013 06:16 PM, Andrew Lau wrote:
Here you go: http://www.fpaste.org/60464/38667947/

Andrew, thanks for the patience in providing the logs. When a node is identified by multiple host names, the engine is not able to resolve it correctly. The fix is for the engine to use gluster host UUIDs while resolving brick info as well (as per the bug logged). In the meantime, one way for you to proceed would be to use the IP address (172.16.1.1 instead of gsx.melb.example.net) to peer probe the gluster host.

On Tue, Dec 10, 2013 at 4:14 PM, Sahina Bose sab...@redhat.com wrote:
On 12/10/2013 04:24 AM, Andrew Lau wrote:
engine.log: http://www.fpaste.org/60336/86629496/
vdsm.log: http://www.fpaste.org/60337/29624138/

Andrew, I was interested in the getCapabilities output in vdsm.log. Could you click on Refresh Caps from the Host tab and attach the output of getCapabilities from vdsm.log? Thanks!

On Mon, Dec 9, 2013 at 5:46 PM, Sahina Bose sab...@redhat.com wrote:
On 12/08/2013 01:51 PM, Andrew Lau wrote:
The defined cluster keeps showing me the option to import the gsx.melb.example.net hosts into the cluster as hosts. Under the Host's Network Interfaces sub-tab there is another issue I've got: it's picking up the keepalived floating IP address rather than the one assigned. But yes, the network is listed, as I managed it through oVirt.

Could you attach the engine.log as well as the vdsm.log from the node?

On Fri, Dec 6, 2013 at 9:22 PM, Sahina Bose sab...@redhat.com wrote:
On 12/06/2013 03:29 PM, Sahina Bose wrote:
On 12/06/2013 10:00 AM, Kanagaraj wrote:
On 12/06/2013 09:46 AM, Andrew Lau wrote:
Yup - it's oVirt cluster version 3.3 with gluster 3.4.1.

What do "gluster volume info <vol>" and "gluster volume status <vol>" say?

There's an issue where the engine cannot sync the bricks, as bricks return a different IP address than the one the engine is aware of (in this case gsx.melb.example.net, while the engine knows the host as hvx.melb.example.net). We need to fix this path to use the gluster host UUID as well. Have logged a bug to track this: https://bugzilla.redhat.com/show_bug.cgi?id=1038988

Could you also let us know if the following network is listed under your Host's Network Interfaces sub-tab?
172.16.1.1 (gluster) gsx.melb.example.net
If it is, then even without any fix it should have worked for you, and we need to dig further on why it did not.

On Fri, Dec 6, 2013 at 3:11 PM, Kanagaraj kmayi...@redhat.com wrote:
On 12/06/2013 07:55 AM, Andrew Lau wrote:
Because of a few issues I had with keepalived, I moved my storage network to its own VLAN, but it seems to have broken part of the oVirt gluster management.

Same scenario: 2 hosts
1x Engine, VDSM, Gluster
1x VDSM, Gluster

So to properly split the gluster data and ovirtmgmt, I simply assigned them two host names and two IPs:
172.16.0.1 (ovirtmgmt) hvx.melb.example.net
172.16.1.1 (gluster) gsx.melb.example.net

However, the oVirt engine does not seem to like this; it would not pick up the gluster volume as running until I did a restart through the UI. The issue (possible bug) I'm seeing is that the logs are being filled with http://www.fpaste.org/59440/13862963/ Volume information isn't being pulled, as it thinks gs01.melb.example.net is not within the cluster, when in fact it is, but registered under hv01.melb.example.net.

What's the compatibility version of the clusters? From 3.3 onwards, gluster-host-uuid is used to identify a host instead of the hostname.

Thanks,
Kanagaraj

Thanks,
Andrew
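The mismatch described above (the engine knows the host as hvx.melb.example.net while the bricks report gsx.melb.example.net) can be spotted by comparing what each name resolves to on the engine host. A minimal sketch, demonstrated against "localhost" so it is runnable anywhere; substitute the two hostnames from your own setup:

```shell
# Resolve a host name and print its first address.
# 'localhost' stands in for the real names (hvx/gsx.melb.example.net).
# If the engine-registered name and the brick-reported name resolve to
# different addresses, the engine sees what looks like two different hosts.
getent hosts localhost | awk '{print $1; exit}'
```

Until the engine resolves bricks by gluster host UUID, keeping a single canonical name (or probing by IP, as suggested above) sidesteps the problem.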
Re: [Users] Ovirt Engine Memory usage going crazy since 3.3 Upgrade
On 12/12/2013 11:43 AM, squadra wrote:
Hi,
I upgraded my oVirt 3.2 a few days ago. With 3.2, memory usage was within a level I rated as normal (~3 GB used, for controlling 6 nodes with about 30 VMs on them). After the 3.3.1 upgrade, the engine goes crazy and uses about 3x the memory after some hours. Right after starting ovirt-engine everything is fine; after about 20 hrs the host starts swapping, etc.

Here's the process in the crazy state:
ovirt 8749 107 59.2 10592340 4776040 ? Sl Dec10 2189:35 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterva

And here, right now, after a restart:
ovirt 28350 43.7 8.7 3895304 705964 ? Sl 07:09 1:09 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dsun.rmi.dgc.client.gcInterval

Also, the web interface is getting really laggy after a few hours of runtime (already before the host starts swapping). Anyone got an idea what is causing this? Which logs should I provide to dig further into this?

Which OS are you running on? And which version of Java? Could you attach the output of "pmap <ovirt process id>"?
Thanks

Cheers,
Juergen
-- Sent from the Delta quadrant using Borg technology!
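A quick way to track the growth described above is to sample the engine JVM's resident and swapped memory from /proc. A minimal sketch; the pgrep pattern is an assumption, and the commands are demonstrated against the current shell ($$) so they are runnable anywhere:

```shell
# Sample resident (VmRSS) and swapped (VmSwap) memory for a process.
# On the engine host you would use: pid=$(pgrep -f 'ovirt-engine -server')
pid=$$
grep -E '^(VmRSS|VmSwap):' /proc/$pid/status
```

Logged every few minutes (e.g. from cron), this shows whether RSS keeps climbing well past the -Xmx1g Java heap, which would point at native/off-heap growth rather than the heap itself.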
Re: [Users] Ovirt Engine Memory usage going crazy since 3.3 Upgrade
[Adding ovirt users]
On 12/12/2013 12:09 PM, squadra wrote:
Hello Sahina,
CentOS 6.5

[root@ovirt:~]$ rpm -qa | grep jdk
java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
[root@ovirt:~]$

2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

It's running inside VMware ESX, if that matters? Also, it was previously a 3.2 installation based on dreyou's RPMs; dunno if that could be the root cause? Currently the VM is able to use up to 8 GB with 4 cores, running alone on the host, but the engine is able to eat 8 GB plus 12 GB of swap. pmap output is here (not going crazy right now, since I restarted it a few minutes ago): http://pastebin.com/fuNEZqMA

This looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1028966. Please use the workaround in comment 27.

Cheers,
Jürgen
-- Sent from the Delta quadrant using Borg technology!
Re: [Users] Using storage network for gluster
On 12/05/2013 10:34 PM, Juan Pablo Lorier wrote:
Hi,
I'm experimenting with gluster, and as my DC is an iSCSI DC, I've just created a new cluster with two hosts and now I'm trying to create a volume. I've followed all kinds of guides on the web without success. Now oVirt is complaining about a network error in the volume creation, and I'd like to ask two things:
- Why must I use the ovirtmgmt network for gluster (I can't choose another from the add-brick drop-down list) instead of the network I have set up for iSCSI?
- What may be the problem with the network?

The issue is not with the network, but some dependency package on the host. It looks like you're missing the vdsm-gluster package. Could you share details of your setup? ovirt engine version? Result of "rpm -qa | grep vdsm" on the hosts?

I get success on both hosts when I probe each other:
# gluster peer probe ovirt3.montecarlotv.com.uy
peer probe: success: host ovirt3.montecarlotv.com.uy port 24007 already in peer list
# gluster peer probe ovirt4.montecarlotv.com.uy
peer probe: success: host ovirt4.montecarlotv.com.uy port 24007 already in peer list

I get a lot of these in the engine log:
2013-12-05 03:40:01,882 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler_Worker-50) Error while refreshing Gluster lightweight data of cluster Penryn!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: org.apache.xmlrpc.XmlRpcException: type 'exceptions.Exception':method glusterVolumesList is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:122) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterJob.runVdsCommand(GlusterJob.java:64) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.fetchVolumes(GlusterSyncJob.java:389) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.fetchVolumes(GlusterSyncJob.java:375) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshVolumeData(GlusterSyncJob.java:346) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshClusterData(GlusterSyncJob.java:104) [bll.jar:]
at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData(GlusterSyncJob.java:83) [bll.jar:]
at sun.reflect.GeneratedMethodAccessor99.invoke(Unknown Source) [:1.7.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_45]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]

So I guess there's something with the gluster version, maybe:
glusterfs.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-api.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-cli.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-fuse.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-libs.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-rdma.x86_64 3.4.0-8.el6 @glusterfs-epel
glusterfs-server.x86_64 3.4.0-8.el6 @glusterfs-epel
vdsm-gluster.noarch 4.13.0-11.el6 @ovirt-stable

These are the main links I used to configure:
http://www.andrewklau.com/returning-to-glusterized-ovirt-3-3/
http://www.gluster.org/2013/09/ovirt-3-3-glusterized/

Regards
Re: [Users] Using storage network for gluster
On 12/06/2013 11:49 AM, Sahina Bose wrote:
The issue is not with the network, but some dependency package on the host. It looks like you're missing the vdsm-gluster package. Could you share details of your setup? ovirt engine version? Result of "rpm -qa | grep vdsm" on the hosts?

Sorry, I jumped the gun. Didn't see you had already sorted this out.
Re: [Users] oVirt 3.4 planning
On 11/13/2013 08:20 PM, Itamar Heim wrote: On 11/13/2013 09:27 AM, René Koch (ovido) wrote: Hi, There are 2 features I can maybe add to existing projects of mine (check_rhev3 and Monitoring UI-Plugin), but it would be good to know what users require to decide if this can be done via an external (ui)plugin or if this needs to be integrated into oVirt itself. * gluster: Monitoring (UI plugin) What's expected here - monitoring glusterfs volumes (including performance data) and displaying the results in your favored monitoring solution and oVirt? vijay/dpati/sahina - thoughts on this one? At a high level, Monitoring of volume Storage metrics - capacity, network, CPU, disk utilization Detecting split-brain/self-heal activity Vijay/Dusmant - please add. * other: Zabbix monitoring Monitoring the oVirt environment should work with my check_rhev3 plugin by adding it as an external check to Zabbix. I'll test this and if it's working I'll provide a short guide on how to do it. Displaying data/triggers in oVirt isn't possible yet with my Monitoring UI-plugin, but on the feature list (help is welcome)... Before I add myself to the list - can you give me more information on the role/tasks of a testing owner? Are there more steps required then testing a feature, getting in contact with the devel owner to fix issues and update the oVirt BZ (and join the IRC weekly meetings)? communicate with the other two owner on scope of what to test, then when feature is ready - test it, open bugs, and communicate if too broken to be considered in the version, etc. Regards, René On Tue, 2013-10-29 at 19:46 +0200, Itamar Heim wrote: To try and improve 3.4 planning over the wiki approach in 3.3, I've placed the items i collected on users list last time into a google doc[1] now, the main thing each item needs is a requirements owner, devel owner and a testing owner (well, devel owner is really needed to make it happen, but all are important). 
then we need an oVirt BZ for each, and for some a feature page. I also added columns indicating if the item will require an API design review and a GUI design review. this list is just the start, of course, for items from it to get ownership. I expect more items will be added as well, as long as they have owners, etc. the doc is public read-only, please request read-write access to be able to edit it. feel free to ask questions, etc. Thanks, Itamar [1] http://bit.ly/17qBn6F
Re: [Users] oVirt 3.4 planning
On 11/13/2013 08:53 PM, Itamar Heim wrote: On 11/13/2013 10:20 AM, Sahina Bose wrote: On 11/13/2013 08:20 PM, Itamar Heim wrote: On 11/13/2013 09:27 AM, René Koch (ovido) wrote: Hi, There are 2 features I can maybe add to existing projects of mine (check_rhev3 and Monitoring UI-Plugin), but it would be good to know what users require to decide if this can be done via an external (ui)plugin or if this needs to be integrated into oVirt itself. * gluster: Monitoring (UI plugin) What's expected here - monitoring glusterfs volumes (including performance data) and displaying the results in your favored monitoring solution and oVirt? vijay/dpati/sahina - thoughts on this one? At a high level, Monitoring of volume Storage metrics - capacity, network, CPU, disk utilization Detecting split-brain/self-heal activity It would help specifying which REST API calls are involved. Hmm.. would monitoring work on top of the engine via REST API calls? I was thinking more along the lines of monitoring the nodes directly - maybe a push mechanism from the nodes. Vijay/Dusmant - please add. * other: Zabbix monitoring Monitoring the oVirt environment should work with my check_rhev3 plugin by adding it as an external check to Zabbix. I'll test this and if it's working I'll provide a short guide on how to do it. Displaying data/triggers in oVirt isn't possible yet with my Monitoring UI-plugin, but it's on the feature list (help is welcome)... Before I add myself to the list - can you give me more information on the role/tasks of a testing owner? Are there more steps required than testing a feature, getting in contact with the devel owner to fix issues and update the oVirt BZ (and join the IRC weekly meetings)? communicate with the other two owners on scope of what to test, then when feature is ready - test it, open bugs, and communicate if too broken to be considered in the version, etc.
Regards, René On Tue, 2013-10-29 at 19:46 +0200, Itamar Heim wrote: To try and improve 3.4 planning over the wiki approach in 3.3, I've placed the items I collected on users list last time into a google doc[1] now, the main thing each item needs is a requirements owner, devel owner and a testing owner (well, devel owner is really needed to make it happen, but all are important). then we need an oVirt BZ for each, and for some a feature page. I also added columns indicating if the item will require an API design review and a GUI design review. this list is just the start, of course, for items from it to get ownership. I expect more items will be added as well, as long as they have owners, etc. the doc is public read-only, please request read-write access to be able to edit it. feel free to ask questions, etc. Thanks, Itamar [1] http://bit.ly/17qBn6F
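Whether monitoring should sit on top of the engine's REST API or poll the nodes directly is left open in the thread above. For the REST route, the shape of a poller can be sketched as below; the XML element names are assumptions for illustration, not the exact oVirt 3.x schema, and a real client would authenticate against the engine and fetch the document over HTTPS.

```python
import xml.etree.ElementTree as ET

# Sample payload standing in for a GET on the engine's gluster volumes
# resource; element names here are illustrative, not the real schema.
SAMPLE = """
<gluster_volumes>
  <gluster_volume><name>vol1</name><status>up</status></gluster_volume>
  <gluster_volume><name>vol2</name><status>down</status></gluster_volume>
</gluster_volumes>
"""

def down_volumes(xml_text):
    """Return names of volumes whose status is not 'up'."""
    root = ET.fromstring(xml_text)
    return [v.findtext("name") for v in root.findall("gluster_volume")
            if v.findtext("status") != "up"]

print(down_volumes(SAMPLE))  # -> ['vol2']
```

A check like this could feed Zabbix or Nagios as an external check, which matches the check_rhev3 approach discussed above.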
Re: [Users] Gluster VM stuck in waiting for launch state
On 10/31/2013 09:22 AM, Zhou Zheng Sheng wrote: on 2013/10/31 06:24, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 08:41:43PM +0100, Alessandro Bianchi wrote: On 30/10/2013 18:04, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 02:40:02PM +0100, Alessandro Bianchi wrote: On 30/10/2013 13:58, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 11:34:21AM +0100, Alessandro Bianchi wrote: Hi everyone I've set up a gluster storage with two replicated bricks DC is up and I created a VM to test gluster storage If I start the VM WITHOUT any disk attached (only one virtual DVD) it starts fine. If I attach a thin-provisioned 30 GB gluster domain disk, the VM gets stuck in the waiting-for-launch state I see no special activity on the gluster servers (they serve several other shares with no troubles at all and even the ISO domain is a NFS on locally mounted gluster and works fine) I've double checked all the prerequisites and they look fine (F 19 - gluster set up insecure in both glusterd.vol and volume options - uid/gid/insecure ) Am I doing something wrong? I'm even unable to stop the VM from the engine GUI Any advice? Which version of ovirt are you using? Hopefully ovirt-3.3.0.1. For how long is the VM stuck in its wait for launch state? What does `virsh -r list` have to say while startup stalls? Would you provide more content of your vdsm.log and possibly libvirtd.log so we can understand what blocks the VM start-up? Please use an attachment or pastebin, as your mail agent wreaks havoc on the log lines. Thank you for your answer.
Here are the facts In the GUI I see waiting for launch 3 h virsh -r list Id Nome Stato 3 CentOS_30 terminato vdsClient -s 0 list table 200dfb05-461e-49d9-95a2-c0a7c7ced669 0 CentOS_30 WaitForLaunch Packages: ovirt-engine-userportal-3.3.0.1-1.fc19.noarch ovirt-log-collector-3.3.1-1.fc19.noarch ovirt-engine-restapi-3.3.0.1-1.fc19.noarch ovirt-engine-setup-3.3.0.1-1.fc19.noarch ovirt-engine-backend-3.3.0.1-1.fc19.noarch ovirt-host-deploy-java-1.1.1-1.fc19.noarch ovirt-release-fedora-8-1.noarch ovirt-engine-setup-plugin-allinone-3.3.0.1-1.fc19.noarch ovirt-engine-webadmin-portal-3.3.0.1-1.fc19.noarch ovirt-engine-sdk-python-3.3.0.7-1.fc19.noarch ovirt-iso-uploader-3.3.1-1.fc19.noarch ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch ovirt-engine-dbscripts-3.3.0.1-1.fc19.noarch ovirt-host-deploy-offline-1.1.1-1.fc19.noarch ovirt-engine-cli-3.3.0.5-1.fc19.noarch ovirt-engine-tools-3.3.0.1-1.fc19.noarch ovirt-engine-lib-3.3.0.1-1.fc19.noarch ovirt-image-uploader-3.3.1-1.fc19.noarch ovirt-engine-3.3.0.1-1.fc19.noarch ovirt-host-deploy-1.1.1-1.fc19.noarch I attach the full vdsm log Look around 30-10 10:30 to see everything that happens Despite the terminated label in the output from virsh I still see the VM waiting for launch in the GUI, so I suspect the answer to how long may be forever Since this is a test VM I can do whatever test you may need to track the problem, including destroying and rebuilding it It would be great to have gluster support stable in ovirt! Thank you for your efforts The log has an ominous failed attempt to start the VM, followed by an immediate vdsm crash. Is it reproducible? We have plenty of issues lurking here: 1. Why has libvirt failed to create the VM? For this, please find clues in the complete non-line-broken CentOS_30.log and libvirtd.log. attached to this message 2. Why was vdsm killed? Does /var/log/messages have a clue from systemd? result of cat /var/log/messages | grep vdsm attached I do not see an explicit attempt to take vdsmd down.
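Note how the two commands above disagree: virsh reports the domain as terminated while vdsClient still shows WaitForLaunch. When scripting checks like this, the vdsClient table output can be split column-wise. A minimal sketch, using the exact row from this report:

```python
# One row of `vdsClient -s 0 list table`: <uuid> <pid> <name> <status>
ROW = "200dfb05-461e-49d9-95a2-c0a7c7ced669 0 CentOS_30 WaitForLaunch"

def parse_vm_row(row):
    """Split a vdsClient table row into its four whitespace-separated fields."""
    uuid, pid, name, status = row.split()
    return {"uuid": uuid, "pid": int(pid), "name": name, "status": status}

vm = parse_vm_row(ROW)
print(vm["name"], vm["status"])  # -> CentOS_30 WaitForLaunch
```

Comparing the status field against virsh's view is exactly the mismatch that pointed Dan at a vdsm crash here.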
Do you see any other incriminating message correlated with Oct 30 08:51:15 hypervisor respawn: slave '/usr/share/vdsm/vdsm' died, respawning slave 3. We may have a nasty race: if Vdsm crashes just before it has registered that the VM is down. Actually, this is not the issue: vdsm tries (and fails, due to a qemu/libvirt bug) to destroy the VM. 4. We used to force Vdsm to run with LC_ALL=C. It seems that the grand service rewrite by Zhou (http://gerrit.ovirt.org/15578) has changed that. This may have adverse effects, since AFAIR we sometimes parse application output, and assume that it's in C. Having a non-English log file is problematic on its own for support personnel used to grepping for keywords. ybronhei, was it intentional? Can it be reverted or at least scrutinized? currently it still says waiting for launch 9h I don't abort it so if you need any other info I can provide it libvirtd fails to connect to qemu's monitor. This smells like a qemu bug that is beyond my over-the-mailing-list debugging.
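The LC_ALL point is easy to demonstrate: a daemon that parses child-process output should pin the child's locale so messages stay in C/English regardless of the host locale (note the Italian virsh output earlier in this thread). A minimal sketch of the pattern — not vdsm's actual code:

```python
import os
import subprocess

def run_in_c_locale(argv):
    """Run a command with LC_ALL=C so its output is stable for parsing."""
    env = dict(os.environ, LC_ALL="C", LANG="C")
    return subprocess.run(argv, env=env, capture_output=True,
                          text=True).stdout.strip()

# The child sees the forced locale even if the host runs e.g. it_IT.UTF-8:
print(run_in_c_locale(["sh", "-c", "echo $LC_ALL"]))  # -> C
```

Without this, grep-based support tooling and any output parsing silently depend on whatever locale the service manager happened to export.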
Re: [Users] Gluster VM stuck in waiting for launch state
On 10/31/2013 11:51 AM, Sahina Bose wrote: On 10/31/2013 09:22 AM, Zhou Zheng Sheng wrote: on 2013/10/31 06:24, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 08:41:43PM +0100, Alessandro Bianchi wrote: On 30/10/2013 18:04, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 02:40:02PM +0100, Alessandro Bianchi wrote: On 30/10/2013 13:58, Dan Kenigsberg wrote: On Wed, Oct 30, 2013 at 11:34:21AM +0100, Alessandro Bianchi wrote: Hi everyone I've set up a gluster storage with two replicated bricks DC is up and I created a VM to test gluster storage If I start the VM WITHOUT any disk attached (only one virtual DVD) it starts fine. If I attach a thin-provisioned 30 GB gluster domain disk, the VM gets stuck in the waiting-for-launch state I see no special activity on the gluster servers (they serve several other shares with no troubles at all and even the ISO domain is a NFS on locally mounted gluster and works fine) I've double checked all the prerequisites and they look fine (F 19 - gluster set up insecure in both glusterd.vol and volume options - uid/gid/insecure ) Am I doing something wrong? I'm even unable to stop the VM from the engine GUI Any advice? Which version of ovirt are you using? Hopefully ovirt-3.3.0.1. For how long is the VM stuck in its wait for launch state? What does `virsh -r list` have to say while startup stalls? Would you provide more content of your vdsm.log and possibly libvirtd.log so we can understand what blocks the VM start-up? Please use an attachment or pastebin, as your mail agent wreaks havoc on the log lines. Thank you for your answer.
Here are the facts In the GUI I see waiting for launch 3 h virsh -r list Id Nome Stato 3 CentOS_30 terminato vdsClient -s 0 list table 200dfb05-461e-49d9-95a2-c0a7c7ced669 0 CentOS_30 WaitForLaunch Packages: ovirt-engine-userportal-3.3.0.1-1.fc19.noarch ovirt-log-collector-3.3.1-1.fc19.noarch ovirt-engine-restapi-3.3.0.1-1.fc19.noarch ovirt-engine-setup-3.3.0.1-1.fc19.noarch ovirt-engine-backend-3.3.0.1-1.fc19.noarch ovirt-host-deploy-java-1.1.1-1.fc19.noarch ovirt-release-fedora-8-1.noarch ovirt-engine-setup-plugin-allinone-3.3.0.1-1.fc19.noarch ovirt-engine-webadmin-portal-3.3.0.1-1.fc19.noarch ovirt-engine-sdk-python-3.3.0.7-1.fc19.noarch ovirt-iso-uploader-3.3.1-1.fc19.noarch ovirt-engine-websocket-proxy-3.3.0.1-1.fc19.noarch ovirt-engine-dbscripts-3.3.0.1-1.fc19.noarch ovirt-host-deploy-offline-1.1.1-1.fc19.noarch ovirt-engine-cli-3.3.0.5-1.fc19.noarch ovirt-engine-tools-3.3.0.1-1.fc19.noarch ovirt-engine-lib-3.3.0.1-1.fc19.noarch ovirt-image-uploader-3.3.1-1.fc19.noarch ovirt-engine-3.3.0.1-1.fc19.noarch ovirt-host-deploy-1.1.1-1.fc19.noarch I attach the full vdsm log Look around 30-10 10:30 to see everything that happens Despite the terminated label in the output from virsh I still see the VM waiting for launch in the GUI, so I suspect the answer to how long may be forever Since this is a test VM I can do whatever test you may need to track the problem, including destroying and rebuilding it It would be great to have gluster support stable in ovirt! Thank you for your efforts The log has an ominous failed attempt to start the VM, followed by an immediate vdsm crash. Is it reproducible? We have plenty of issues lurking here: 1. Why has libvirt failed to create the VM? For this, please find clues in the complete non-line-broken CentOS_30.log and libvirtd.log. attached to this message 2. Why was vdsm killed? Does /var/log/messages have a clue from systemd? result of cat /var/log/messages | grep vdsm attached I do not see an explicit attempt to take vdsmd down.
Do you see any other incriminating message correlated with Oct 30 08:51:15 hypervisor respawn: slave '/usr/share/vdsm/vdsm' died, respawning slave 3. We may have a nasty race: if Vdsm crashes just before it has registered that the VM is down. Actually, this is not the issue: vdsm tries (and fails, due to a qemu/libvirt bug) to destroy the VM. 4. We used to force Vdsm to run with LC_ALL=C. It seems that the grand service rewrite by Zhou (http://gerrit.ovirt.org/15578) has changed that. This may have adverse effects, since AFAIR we sometimes parse application output, and assume that it's in C. Having a non-English log file is problematic on its own for support personnel used to grepping for keywords. ybronhei, was it intentional? Can it be reverted or at least scrutinized? currently it still says waiting for launch 9h I don't abort it so if you need any other info I can provide it libvirtd fails to connect to qemu's monitor. This smells like a qemu bug that is beyond my over-the-mailing-list debugging.
Re: [Users] Gluster network info
On 10/03/2013 09:27 PM, Livnat Peer wrote: On 10/03/2013 03:45 PM, Gianluca Cecchi wrote: On Thu, Oct 3, 2013 at 2:08 PM, Dan Kenigsberg wrote: So the question is: what about the ovirtmgmt network for gluster replication when the gluster domain is provided by ovirt nodes? I suppose it could be a problem too, couldn't it? Yes, that is correct, using ovirtmgmt for non-management traffic is not a good idea. In case I have a dedicated network for gluster for the nodes, how can I configure it? Generally there are two ways to go about this: 1. add a new network role 'gluster replication' and then adjust the engine code to pass the network with this role as a parameter to the replication verb in VDSM. 2. in the replication verb in the UI let the user choose which network to use for the replication. I am not familiar with the replication verb so I might be missing something, but otherwise I think that solution 2 is simpler and less invasive than requiring a new network role. Bala, is there a means to select, for each node, which IP address is to be used for replication? AFAICT we use the host fqdn, which most probably resolves to the ovirtmgmt device. What is the replication verb in the UI? Inside the UI, when I click the create volume and then add Bricks buttons, I can only select Server, which in my case is a drop-down menu containing only the IP addresses of my 2 hypervisors Or simply do you mean development steps to integrate this missing functionality in oVirt with steps 1. and 2.? I am probably missing some background around Gluster, could you share what triggers the gluster replication? [Adding Amar from gluster team] There's no user interface way to trigger replication. Gianluca, I assume you do not mean geo-replication but replication of files when the volume is of replicated type. Currently, we do not have a way to specify the network to be used when gluster communicates with peers.
Like Joop mentioned in a separate mail, if you use FQDN to add host to the engine, and FQDN resolves differently for internal and external communication, gluster would then use the internal network for its communication thanks sahina Thanks, Gianluca
Re: [Users] Gluster network info
On 10/04/2013 03:40 PM, Gianluca Cecchi wrote: On Fri, Oct 4, 2013 at 12:01 PM, Sahina Bose wrote: There's no user interface way to trigger replication. Gianluca, I assume you do not mean geo-replication but replication of files when volume is of replicated type. Yes Sahina, I mean replication of files when the volume is of replicated type (in my case distributed replicated) Currently, we do not have a way to specify the network to be used when gluster communicates with peers. Like Joop mentioned in a separate mail, if you use FQDN to add host to the engine, and FQDN resolves differently for internal and external communication, gluster would then use the internal network for its communication OK, eventually I'm going to try this way. Do you think it would work if, without DNS at all for a test environment, I set it up this way: mgmt lan 192.x gluster network 10.x server hostnames involved engine node01 node02 In /etc/hosts of engine 192.xxx engine 192.yyy node01 192.zzz node02 In /etc/hosts of nodes 192.xxx engine 10.yyy node01 10.zzz node02 and accordingly in sysconfig files of nodes for their hostnames/ip binding And when I preconfigure gluster on nodes I use on node01 the command gluster peer probe node02 I believe it should. When you add the host to the engine you would use node01. Please let us know how it goes, thanks again sahina Gianluca
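The split-horizon /etc/hosts scheme above can be checked on paper before touching the machines: the same hostname must resolve to the management network on the engine and to the gluster network on the nodes. A small sketch of that invariant, with placeholder addresses (192.0.2.x / 10.0.0.x stand in for the real 192.x and 10.x ranges in the mail):

```python
# Placeholder addresses; substitute your real management and gluster subnets.
ENGINE_VIEW = {"engine": "192.0.2.10", "node01": "192.0.2.11", "node02": "192.0.2.12"}
NODE_VIEW = {"engine": "192.0.2.10", "node01": "10.0.0.11", "node02": "10.0.0.12"}

def resolves_for_gluster(name):
    """On the nodes, peer names must land on the 10.x storage network."""
    return NODE_VIEW[name].startswith("10.")

# The engine reaches node01 over management; gluster peers replicate over 10.x:
print(ENGINE_VIEW["node01"], resolves_for_gluster("node01"))  # -> 192.0.2.11 True
```

This is exactly why `gluster peer probe node02` run on node01 picks the 10.x address: node01 resolves node02 through its own hosts file, not the engine's.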
Re: [Users] Importing an existing gluster setup
On 08/28/2013 05:44 PM, Nux! wrote: Hi, I have an existing gluster cluster, it's in production and fully functional. I'd like to be able to manage it from within oVirt. Can anyone tell me if I can import this gluster setup as it is? I don't want oVirt to change anything on those hosts. Yes, you can, using the Import cluster option when you create a cluster with gluster services enabled (do not enable virt services on the cluster). This will install vdsm on the hosts, but should not affect the gluster setup in any other way. Thanks, Lucian
Re: [Users] Importing an existing gluster setup
On 08/28/2013 06:36 PM, Kanagaraj wrote: On 08/28/2013 06:16 PM, Nux! wrote: On 28.08.2013 13:19, Sahina Bose wrote: On 08/28/2013 05:44 PM, Nux! wrote: Hi, I have an existing gluster cluster, it's in production and fully functional. I'd like to be able to manage it from within Ovirt. Can anyone tell me if I can import this gluster setup as it is? I don't want Ovirt to change anything on those hosts. Yes, you can, using the Import cluster option when you create a cluster with gluster services enabled (Do not enable virt services on cluster) This will install vdsm on the hosts, but should not affect the gluster setup in any other way. Hello Sahina, I've added my existing gluster hosts as well as the VM running the ovirt-engine. It installed vdsm on this VM, set up a bridge called ovirtmgmt and rebooted it. Can you assure me this will not happen with the hosts I have in production. For the moment I cut access for ovirt to the gluster hosts.. my heart just froze when I saw that reboot. :) It is not supposed to reboot if you have created the cluster with 'Enable Gluster service' checked and 'Enable Virt Service' un-checked. This is true for Ovirt version 3.3. Which version are you using?
Re: [Users] Importing an existing gluster setup
On 08/28/2013 07:41 PM, Nux! wrote: On 28.08.2013 14:54, Sahina Bose wrote: On 08/28/2013 06:36 PM, Kanagaraj wrote: On 08/28/2013 06:16 PM, Nux! wrote: On 28.08.2013 13:19, Sahina Bose wrote: On 08/28/2013 05:44 PM, Nux! wrote: Hi, I have an existing gluster cluster, it's in production and fully functional. I'd like to be able to manage it from within Ovirt. Can anyone tell me if I can import this gluster setup as it is? I don't want Ovirt to change anything on those hosts. Yes, you can, using the Import cluster option when you create a cluster with gluster services enabled (Do not enable virt services on cluster) This will install vdsm on the hosts, but should not affect the gluster setup in any other way. Hello Sahina, I've added my existing gluster hosts as well as the VM running the ovirt-engine. It installed vdsm on this VM, set up a bridge called ovirtmgmt and rebooted it. Can you assure me this will not happen with the hosts I have in production. For the moment I cut access for ovirt to the gluster hosts.. my heart just froze when I saw that reboot. :) It is not supposed to reboot if you have created the cluster with 'Enable Gluster service' checked and 'Enable Virt Service' un-checked. This is true for Ovirt version 3.3. Which version are you using? I have installed 3.2, and only the gluster bit, so I did not see any Enable Virt Service. Should I upgrade to 3.3? Yes, since in 3.2 the hosts reboot even for gluster clusters. You might want to consider waiting till the 3.3 GA build is out (next week).
Re: [Users] Can't Attach Glusterfs Storage to Data Center
On 08/27/2013 11:10 AM, higkoohk wrote: the vdsm.log: Thread-3817::DEBUG::2013-08-27 13:38:30,980::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=GlusterDomain', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=172.28.26.102:vm-storage', 'ROLE=Regular', 'SDUUID=9a8554b5-c514-4e46-91cc-b0490a977d55', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=7329f71f34ccd201c574c9e06dd9d5c07084d17f'] Thread-3817::WARNING::2013-08-27 13:38:30,981::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/e8f568cf-e9db-4cf1-b812-1e4b8ae530cc already exists Thread-3817::DEBUG::2013-08-27 13:38:30,981::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction Thread-3817::DEBUG::2013-08-27 13:38:30,981::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction Thread-3817::INFO::2013-08-27 13:38:30,981::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 9a8554b5-c514-4e46-91cc-b0490a977d55 (id: 250) Thread-3817::ERROR::2013-08-27 13:38:31,982::task::850::TaskManager.Task::(_setError) Task=`e2ec1d1f-c94d-46bd-8db3-ef83fed0fd79`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool masterVersion, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 617, in create self._acquireTemporaryClusterLock(msdUUID, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock msd.acquireHostId(self.id) File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId self._clusterLock.acquireHostId(hostId, async) File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('9a8554b5-c514-4e46-91cc-b0490a977d55', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) Thread-3817::DEBUG::2013-08-27 13:38:31,982::task::869::TaskManager.Task::(_run) Task=`e2ec1d1f-c94d-46bd-8db3-ef83fed0fd79`::Task._run: e2ec1d1f-c94d-46bd-8db3-ef83fed0fd79 (None, 'e8f568cf-e9db-4cf1-b812-1e4b8ae530cc', 'GlusterDataCenter', '9a8554b5-c514-4e46-91cc-b0490a977d55', ['9a8554b5-c514-4e46-91cc-b0490a977d55'], 7, None, 5, 60, 10, 3) {} failed - stopping task I'm not very aware of the VDSM error, adding Deepak to look at it. If you're trying to use GlusterFS as a storage domain, have you followed the Important Pre-requisites in http://www.ovirt.org/Features/GlusterFS_Storage_Domain? thanks! sahina 2013/8/27 higkoohk higko...@gmail.com Hello everyone! I'm setting up a glusterfs VM data center, and creating a storage domain fails. Still getting 'Cannot acquire host id'.
Env: OS Version:RHEL - 6 - 4.el6.centos.10 Kernel Version:2.6.32 - 358.14.2.el6.x86_64 KVM Version:0.12.1.2 - 2.355.el6 LIBVIRT Version: libvirt-1.1.1-1.el6 VDSM Version: vdsm-4.12.0-0.1.rc3.el6 SPICE Version:0.12.0 - 12.el6 iSCSI Initiator Name:iqn.1994-05.com.redhat:9ea82ec11b83 SPM Priority: Active VMs: 0 CPU Name:Intel SandyBridge Family CPU Type:Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz CPU Sockets: 1 CPU Cores per Socket: 6 CPU Threads per Core:2 (SMT Enabled) Physical Memory: 24135 MB total, 965 MB used, 23170 MB free Swap Size: 23999 MB total, 0 MB used, 23999 MB free Shared Memory: 0% Max free Memory for scheduling new VMs: 23749 MB Memory Page Sharing: Inactive Automatic Large Pages: Always The error message is : 2013-08-27 12:13:25,814 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (pool-6-thread-50) Failed in CreateStoragePoolVDS method 2013-08-27 12:13:25,815 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (pool-6-thread-50) Error code AcquireHostIdFailure and error message VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('9a8554b5-c514-4e46-91cc-b0490a977d55', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) 2013-08-27 12:13:25,817 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (pool-6-thread-50) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot
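For reference, the "Important Pre-requisites" mentioned earlier in this thread for a GlusterFS storage domain usually come down to allowing insecure ports and giving vdsm (uid/gid 36) ownership of the volume. Shown here against the vm-storage volume that appears in the log above; treat this as a sketch and confirm the exact settings against the linked wiki page for your version:

```shell
# On each gluster server, in /etc/glusterfs/glusterd.vol, add:
#   option rpc-auth-allow-insecure on
# then restart glusterd and set the per-volume options:
gluster volume set vm-storage server.allow-insecure on
gluster volume set vm-storage storage.owner-uid 36
gluster volume set vm-storage storage.owner-gid 36
```

If sanlock still reports "Cannot acquire host id" afterwards, the sanlock and vdsm logs on the host are the next place to look, since the lockspace add is what fails here.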
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-08-21
On 08/21/2013 08:32 PM, Mike Burns wrote: Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-08-21-14.01.html Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-08-21-14.01.txt Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-08-21-14.01.log.html #ovirt: oVirt Weekly Meeting Meeting started by oschreib at 14:01:24 UTC. The full logs are available at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-08-21-14.01.log.html . Meeting summary --- * agenda and roll call (oschreib, 14:01:34) * 3.3 status update (oschreib, 14:01:42) * infra team update (oschreib, 14:02:04) * Conference and Workshop (oschreib, 14:02:11) * Other Topics (oschreib, 14:02:19) * 3.3 Release update (oschreib, 14:11:44) * ovirt 3.3 release page and tracker: (oschreib, 14:11:57) * LINK: http://www.ovirt.org/OVirt_3.3_release-management (oschreib, 14:12:04) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=918494 (oschreib, 14:12:12) * GA is one week from now (oschreib, 14:12:35) * RC build has been uploaded to ovirt.org, thanks mburns (oschreib, 14:13:00) * Blocker review (oschreib, 14:13:43) * Bug 988299 - Impossible to start VM from Gluster Storage Domain (engine, MODIFIED) (oschreib, 14:14:02) * probably fixed (oschreib, 14:16:19) * ACTION: sahina to verify 988299 is pushed (oschreib, 14:16:37) * Bug 994523 - [vdsm][multiple gateways] Keep ovirt management as default in the main routing table always (engine, MODIFIED) (oschreib, 14:17:08) * patch for 988299 is not merged yet ( http://gerrit.ovirt.org/#/c/17994/ ) (mburns, 14:23:34) * need new vdsm build with this patch (mburns, 14:23:43) * Bug 994604 - Users cannot log into UserPortal (engine, MODIFIED) (oschreib, 14:24:52) * ACTION: ecohen to push into 3.3 branch (oschreib, 14:26:18) * ACTION: oschreib to rebuild engine afterwards (oschreib, 14:26:31) * Bug 997362 - [engine-config] /var/log/ovirt-engine/engine-config.log doesn't exist (engine, MODIFIED) (oschreib, 14:27:46) * ACTION: emesika to check with 997362 (oschreib,
14:32:10) * Blocker review done (oschreib, 14:32:18) * ACTION: oschreib to send mail on release go/no-go meeting (oschreib, 14:40:29) * workshops and conferences (mburns, 14:42:03) * ovirt developer meetings and planning to happen at KVM Forum (21-23 October in Edinburgh) (mburns, 14:42:36) * there will be additional oVirt content during KVM Forum as well (mburns, 14:43:27) * dneary to make sure everyone that should be there is in attendance for ovirt dev meetings (mburns, 14:45:45) * jbrooks has been working on release notes, announcement, etc (mburns, 14:48:17) * ACTION: oschreib to check about 3.3 live distro (oschreib, 14:51:10) * infra update (mburns, 14:52:40) * extra storage in rackspace servers, so backups will be migrated there (mburns, 14:55:39) * new centos 6.4 slaves are online (mburns, 14:55:47) * new puppet classes added to manage jenkins slaves (mburns, 14:55:57) * more room for additional jenkins slaves (mburns, 14:56:04) * jenkins updates to latest LTS (long term stable) (mburns, 14:56:40) * hosted-engine-setup and ha builds are now built nightly and pushed to nightly repos (mburns, 14:58:03) * in process of adding network functional testing to jenkins (mburns, 14:58:56) * Other Topics (mburns, 14:59:49) * no other topics (mburns, 15:02:04) Meeting ended at 15:02:09 UTC. Action Items * sahina to verify 988299 is pushed Could someone from VDSM team help to get this patch merged - http://gerrit.ovirt.org/#/c/17994?
thanks sahina * ecohen to push into 3.3 branch * oschreib to rebuild engine afterwards * emesika to check with 997362 * oschreib to send mail on release go/no-go meeting * oschreib to check about 3.3 live distro Action Items, by person --- * ecohen * ecohen to push into 3.3 branch * emesika * emesika to check with 997362 * oschreib * oschreib to rebuild engine afterwards * oschreib to send mail on release go/no-go meeting * oschreib to check about 3.3 live distro * sahina * sahina to verify 988299 is pushed * **UNASSIGNED** * (none) People Present (lines said) --- * oschreib (75) * mburns (52) * dneary (17) * ecohen (10) * eedri (9) * jbrooks (8) * sahina (7) * awels (6) * emesika (6) * amuller (4) * sbonazzo (4) * ovirtbot (4) * sgotliv (3) * knesenko (1) * lvernia (1) * mskrivanek (1) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot
Re: [Users] oVirt Weekly Meeting Minutes -- 2013-06-19
Mike, Gluster feature status updated inline. thanks sahina On 06/19/2013 09:48 PM, Mike Burns wrote: Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-19-14.03.html Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-19-14.03.txt Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-19-14.03.log.html #ovirt: oVirt Weekly Meeting Meeting started by mburns at 14:03:38 UTC. The full logs are available at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-06-19-14.03.log.html . Meeting summary --- * Agenda and Roll Call (mburns, 14:03:51) * 3.2 updates (mburns, 14:04:44) * 3.3 status review (mburns, 14:04:48) * infra report (mburns, 14:05:05) * conferences and workshops (mburns, 14:05:14) * Other Topics (mburns, 14:05:23) * 3.2 updates (mburns, 14:06:18) * there is an updated vdsm in updates-testing: 4.10.3-17 (mburns, 14:06:29) * ovirt-node update will be built this week and posted ASAP (mburns, 14:07:39) * ACTION: mburns to push vdsm from updates-testing to stable (mburns, 14:10:38) * ACTION: mburns to follow-up on getting ovirt-node respin built with vdsm. 
(mburns, 14:11:03)
* some complaints about lack of upgrade from 3.1-3.2 on fedora (mburns, 14:13:46)
* alourie is working on documenting the process (mburns, 14:13:54)
* likely no code fix (mburns, 14:14:00)
* 3.3 status (mburns, 14:15:29)
* features without target dates and release priority that aren't complete are getting dropped from tracking (mburns, 14:16:31)
* LINK: http://www.ovirt.org/OVirt_3.3_release-management (mburns, 14:19:03)
* Virt Team (mburns, 14:19:12)
* Feature RAM Snapshots (mburns, 14:19:23)
* status: orange (mburns, 14:19:34)
* target date: 17-Jun (mburns, 14:19:50)
* priority: unset (mburns, 14:20:11)
* skipping virt since no rep available (mburns, 14:23:20)
* Infra Features (mburns, 14:23:22)
* Feature: Device Custom Properties (mburns, 14:23:33)
* status: green (mburns, 14:23:38)
* Feature: Async task manager (mburns, 14:23:54)
* status: Orange (mburns, 14:24:05)
* priority: should (mburns, 14:24:33)
* target date: unset (mburns, 14:24:39)
* Task manager under final review, no issues (mburns, 14:31:17)
* Feature: External Tasks (mburns, 14:31:29)
* Status Orange -- still in development (mburns, 14:31:38)
* target date still pending (mburns, 14:31:43)
* ACTION: yzaslavs to followup with new target date (mburns, 14:31:57)
* Networking Features (mburns, 14:32:26)
* Normalized ovirtmgmt Initialization (mburns, 14:32:45)
* Complete (mburns, 14:32:51)
* Migration Network (mburns, 14:33:02)
* Complete (mburns, 14:33:05)
* Quantum Integration (mburns, 14:33:17)
* status: Orange (mburns, 14:33:54)
* Quantum Integration should get merged in coming days (mburns, 14:34:52)
* Multiple Gateways (mburns, 14:35:15)
* in final review, status Orange (mburns, 14:35:35)
* Network Reloaded (mburns, 14:36:17)
* status: progress, not a blocker for release (mburns, 14:36:28)
* Storage features (mburns, 14:37:20)
* Storage features (mburns, 14:42:44)
* manage storage connections -- RED, target mid-July (mburns, 14:42:57)
* must-have feature (mburns, 14:43:06)
* SLA features (mburns, 14:44:18)
* Watchdog -- mostly merged, final patch in review (mburns, 14:44:32)
* Scheduler and schedule API -- still in development, target mid-July (mburns, 14:46:41)
* QoS target mid-July (mburns, 14:46:58)
* SLA features should at least be feature complete by mid-July (mburns, 14:49:12)
* Gluster Swift -- dropped (mburns, 14:49:51)

Vdsm patch submitted, but not merged. Have updated target. Can we include this in 3.3?

* Gluster Hooks -- status orange (mburns, 14:50:01)
* no gluster contact online (mburns, 14:52:30)

Gluster hooks feature (engine and vdsm patches) are now merged. Have changed to green on the Feature page.

* Node Features (mburns, 14:52:38)
* Universal Node: Status: Complete (mburns, 14:52:47)
* Node VDSM plugin: mostly complete, can be finalized quickly once we need beta/test images (mburns, 14:53:32)
* Integration features (mburns, 14:54:48)
* Feature Self Hosted Engine -- Status: Red (mburns, 14:55:20)
* self-hosted should be done mid-July (mburns, 14:59:59)
* otopi migration should be end of June (mburns, 15:00:10)
* User portal IE8 performance -- Complete (mburns, 15:00:34)
* no UX person available (mburns, 15:05:05)
* IDEA: delay feature freeze until 17-July and GA until 14-Aug (mburns, 15:06:20)
* IDEA: beta shipped 17-July or ASAP after (mburns, 15:06:35)
* IDEA: RC around 1-Aug (mburns, 15:06:56)
* AGREED: on new dates (mburns, 15:07:52)
* branding support is complete (mburns, 15:11:00)
* frontend refactor and upgrade GWT version are in progress (mburns, 15:11:27)
Re: [Users] Package installation error
You are right - Gluster 3.4 is only required to manage gluster clusters.

Currently the question asked at setup is this:

    The engine can be configured to present the UI in three different
    application modes: virt [Manage virtualization only], gluster
    [Manage gluster storage only], and both [Manage virtualization as
    well as gluster storage]

where both is the default. If this is confusing to the user, we can change this message. Suggestions?

On 03/19/2013 01:41 PM, Dave Neary wrote:
> Hi,
>
> On 03/19/2013 08:16 AM, Alon Bar-Lev wrote:
>> Now I am confused. Do you or don't you need vdsm-gluster on your system?
>
> Allow me to clarify. There have been several messages from users since
> the oVirt 3.2 release asking why they need Gluster 3.4 pre-releases to
> run oVirt. My understanding is that you don't need Gluster 3.4 unless
> you want to manage a Gluster cluster with oVirt.
>
> So my question is: are we sure that we are not leading users wrong and
> confusing them during the installation set-up process?
>
> Thanks,
> Dave.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
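[The prompt above boils down to accepting one of three mode strings with "both" as the default. A minimal Python sketch of that kind of validation, purely illustrative - the function name and structure are hypothetical, not the actual engine-setup code:]

```python
# Hypothetical sketch of validating the three application modes from the
# setup prompt. VALID_MODES and prompt text come from the message above;
# everything else (names, defaulting behavior) is assumed for illustration.
VALID_MODES = ("virt", "gluster", "both")

def choose_application_mode(answer, default="both"):
    """Return the chosen application mode; an empty answer means the default."""
    mode = answer.strip().lower() or default
    if mode not in VALID_MODES:
        raise ValueError("application mode must be one of %s, got %r"
                         % (", ".join(VALID_MODES), answer))
    return mode

print(choose_application_mode(""))         # both (the default)
print(choose_application_mode("gluster"))  # gluster
```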
Re: [Users] DB error on fresh ovirt install
We ran into the same issue yesterday... The call to back up the database seems to be causing this. engine_setup.py still uses the old method call and not the changed one as per http://gerrit.ovirt.org/10548.

On 02/20/2013 04:25 PM, Eli Mesika wrote:
> ----- Original Message -----
> From: Dead Horse deadhorseconsult...@gmail.com
> To: users@ovirt.org
> Sent: Tuesday, February 19, 2013 11:58:01 PM
> Subject: [Users] DB error on fresh ovirt install
>
> Normally I upgrade from prior test installs. However, for the past two
> weeks or so, attempting a fresh install yields:
>
> oVirt Engine will be installed using the following configuration:
> =================================================================
> override-httpd-config: yes
> http-port: 80
> https-port: 443
> host-fqdn: ovirtfoo.test.domain
> auth-pass:
> org-name: DHC
> application-mode: both
> default-dc-type: NFS
> db-remote-install: local
> db-local-pass:
> config-nfs: no
> override-firewall: None
> Proceed with the configuration listed above? (yes|no): yes
> Installing:
> Configuring oVirt Engine...                  [ DONE ]
> Configuring JVM...                           [ DONE ]
> Creating CA...                               [ DONE ]
> Updating ovirt-engine service...             [ DONE ]
> Setting Database Configuration...            [ DONE ]
> Setting Database Security...                 [ DONE ]
> Upgrading Database Schema...                 [ ERROR ]
> dictionary update sequence element #0 has length 1; 2 is required
> Please check log file
> /var/log/ovirt-engine/engine-setup_2013_02_19_15_31_02.log for more
> information
>
> log attached

Hi

Seems like a BUG in the setup Python script, see
http://www.gossamer-threads.com/lists/python/python/917709

Alex, Moran, can you check please?

> - DHC
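[For anyone hitting this: the message is a stock Python ValueError, raised when dict() or dict.update() is given an iterable whose elements are not (key, value) pairs. A minimal reproduction, independent of the engine-setup code itself:]

```python
# Reproducing the setup failure's error message: a 1-character string is a
# length-1 element, not the length-2 (key, value) pair that dict() expects.
try:
    dict(["a"])
except ValueError as e:
    print(e)  # dictionary update sequence element #0 has length 1; 2 is required

# The working form: every element of the iterable unpacks into exactly two items.
print(dict([("a", 1), ("b", 2)]))  # {'a': 1, 'b': 2}
```

In the setup script the fix is to pass dict() (or .update()) a proper list of pairs rather than a flat sequence, which is what the gerrit change referenced above addresses.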