[ovirt-users] is there a complete feature list for the respective OS types in the OS list when making a new vm
Hi,

When I create a new Windows 2012 VM and choose OS type 2012, I am unable to connect to the VM with SPICE, but if I change this to type 2008 R2, I can. Still, I am then not able to install spice-guest-tools in the VM, as the installer told me the OS is not supported.

So what is the difference between these OS types in the selector? Is there a complete list (CPU features etc.) where I can see the differences between the OS types?

Regards,
-- Ricky Schneberger
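The per-OS differences (supported display protocols, minimum RAM, and so on) are driven by the engine's osinfo configuration. A hedged pointer, assuming a 3.3/3.4-era engine -- the paths follow the oVirt "Os info" feature page, and the property names shown are illustrative rather than exact, so verify against your own installation:

# Built-in per-OS defaults shipped with the engine:
less /usr/share/ovirt-engine/conf/osinfo-defaults.properties

# Entries look roughly like this (illustrative values, not exact):
#   os.windows_2012x64.derivedFrom.value = windows_2008R2x64
#   os.windows_2012x64.devices.display.protocols.value = vnc/cirrus,vnc/vga

# Site-specific overrides go in files under /etc/ovirt-engine/osinfo.conf.d/
# (an engine restart is needed to pick them up).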
Re: [ovirt-users] Ovirt Python SDK adding a directlun
On 05/08/2014 11:37 PM, Gary Lloyd wrote:

When I add direct LUNs this way, the size shows as 1 in the GUI and 0 when called from the REST API. All the other items mentioned are not present. Thanks

Ah, I understand. This is probably related to the fact that you aren't creating a storage domain, only the storage connection. This should work correctly, but I guess that either the GUI or the backend isn't completely prepared for this. I'm checking.

On 8 May 2014, at 18:05, Juan Hernandez jhern...@redhat.com wrote:

On 05/08/2014 05:04 PM, Gary Lloyd wrote:

We are working on a script so that we can create an iSCSI LUN on our SAN and then directly assign it to a VM. We have been able to get it to work, but with one small annoyance: I can't figure out how to populate size, serial, vendor_id and product_id via the API. Would anyone be able to point me in the right direction?

code (see def add_disk):

def get_clusterid(cluster_name):
    cluster = ovirt_api.clusters.get(cluster_name)
    try:
        return cluster.id
    except:
        logging.error('the cluster: %s does not appear to exist' % cluster_name)
        sys.exit(1)

def nominate_host(cluster_id):
    # return the first host in the cluster that is up
    for host in ovirt_api.hosts.list():
        if host.cluster.id == cluster_id and host.status.state == 'up':
            return host
    logging.error('could not find a suitable host to nominate in cluster:')
    sys.exit(1)

def iscsi_discover_and_login(cluster, target, portal, chap_user, chap_pass):
    clusterid = get_clusterid(cluster)
    host = nominate_host(clusterid)
    iscsidet = params.IscsiDetails()
    iscsidet.address = portal
    iscsidet.username = chap_user
    iscsidet.password = chap_pass
    iscsidet.target = target
    host.iscsidiscover(params.Action(iscsi=iscsidet))
    result = host.iscsilogin(params.Action(iscsi=iscsidet))
    if result.status.state == 'complete':
        # record the connection on the engine side as well
        storecon = params.StorageConnection()
        storecon.address = portal
        storecon.type_ = 'iscsi'
        storecon.port = 3260
        storecon.target = target
        storecon.username = chap_user
        storecon.password = chap_pass
        ovirt_api.storageconnections.add(storecon)
    return result

# error checking code needs to be added to this function
def add_disk(vm_name, wwid, target, size, portal):
    logunit = params.LogicalUnit()
    logunit.id = wwid
    logunit.vendor_id = 'EQLOGIC'
    logunit.product_id = '100E-00'
    logunit.port = 3260
    logunit.lun_mapping = 0
    logunit.address = portal
    logunit.target = target
    logunit.size = size * 1073741824  # GiB to bytes
    stor = params.Storage(logical_unit=[logunit])
    stor.type_ = 'iscsi'
    disk = params.Disk()
    disk.alias = 'vm-' + vm_name
    disk.name = disk.alias
    disk.interface = 'virtio'
    disk.bootable = True
    disk.type_ = 'iscsi'
    disk.format = 'raw'
    disk.set_size(size * 1073741824)
    #disk.size = size * 1073741824
    #disk.active = True
    disk.lun_storage = stor
    try:
        result = ovirt_api.disks.add(disk)
    except:
        logging.error('Could not add disk')
        sys.exit(1)
    attachdisk = ovirt_api.disks.get(disk.alias)
    attachdisk.active = True
    try:
        ovirt_api.vms.get(vm_name).disks.add(attachdisk)
    except:
        logging.error('Could not attach disk to vm')
        sys.exit(1)
    return result

If we could just get the size to show correctly that would be enough; the others don't really matter to me.

Thanks
/Gary Lloyd/

For a direct LUN disk all these values are read-only. Why do you need to change them?

--
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
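The snippets above assume an existing ovirt_api connection object. A minimal sketch of creating one with the oVirt Python SDK 3.x (engine URL and credentials are placeholders):

import logging
import sys

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Placeholder engine details -- adjust for your setup.
ovirt_api = API(url='https://engine.example.com/api',
                username='admin@internal',
                password='secret',
                insecure=True)  # skips CA validation; pass ca_file=... instead in production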
Re: [ovirt-users] change disk size using a thin provision based template
I would assume that this is to avoid data corruption. However, it sounds like a good feature request (allow disk resize when creating a new VM from a template). Can you please open it?

https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

Thanks!
Dafna

Hello, I am making some tests, and this time I want to reduce the disk size before creating a new VM (in the New VM window).

Test:
1) click on New VM
2) select template (CentOS 6.5, 6GB, 500GB HD, 2 cores)
3) resource allocation: select preallocated disk
4) change disk size -- it is not possible!

Why can I not change the disk size? I know it is based on the template where it was already defined (500GB), but this template in fact has only 4GB of actual size under the thin provisioning allocation policy.

thanks
tamer

--
Dafna Ron
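Until such an RFE lands, one possible workaround is to create the VM from the template and extend the disk afterwards. A hedged sketch with the Python SDK 3.x -- this assumes your engine version supports extending a disk through update() (online resize landed around the 3.4 timeframe; verify against your API), and note that shrinking is rejected, which fits the data-corruption rationale above:

def extend_vm_disk(ovirt_api, vm_name, disk_name, new_size_gb):
    # Look up the disk attached to the VM and ask the engine for a larger size.
    vm = ovirt_api.vms.get(vm_name)
    disk = vm.disks.get(name=disk_name)
    disk.set_size(new_size_gb * 1073741824)  # GiB to bytes
    disk.update()  # extension only; the engine rejects shrinking
    return disk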
Re: [ovirt-users] oVirt in 3D
Nice :) Why blue?

Best, Latcho

-----Original Message-----
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Michal Skrivanek
Sent: Friday, May 09, 2014 1:26 PM
To: Users@ovirt.org Users
Subject: [ovirt-users] oVirt in 3D

Bored of the old oVirt stickers? :-)
[ovirt-users] oVirt 3.5.0 release schedule updated
Here is the updated schedule for oVirt 3.5.0. These are tentative planning dates and may change:

General availability: 2014-08-04
RC Build: 2014-07-15
oVirt 3.5 Second Test Day: 2014-07-01
Branching - Beta release: 2014-06-16
oVirt 3.5 First Test Day: 2014-06-05
Feature freeze - 2nd Alpha: 2014-05-30
Alpha release: 2014-05-16

The release management page has been updated accordingly:
http://www.ovirt.org/OVirt_3.5_release-management

The oVirt Google Calendar has been updated accordingly:
ICAL: https://www.google.com/calendar/ical/ppqtk46u9cglj7l987ruo2l0f8%40group.calendar.google.com/public/basic.ics
XML: https://www.google.com/calendar/feeds/ppqtk46u9cglj7l987ruo2l0f8%40group.calendar.google.com/public/basic
HTML: https://www.google.com/calendar/embed?src=ppqtk46u9cglj7l987ruo2l0f8%40group.calendar.google.com&ctz=UTC

-- Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[ovirt-users] Network issues with my node
Hello,

I'm having network issues with my node. I can run virtual machines fine and my ovirt-engine can work with my node, but the node is unable to connect to the internet to get updates. On my node I have two physical NICs: one I use for the VMs and the other for ovirtmgmt. On the node I installed CentOS 6.5 minimal and started with just one NIC for both the VMs and ovirtmgmt; at that time I could still do updates.

This is the output of ifconfig:

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8334 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8334 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:719611 (702.7 KiB)  TX bytes:719611 (702.7 KiB)

ovirtmgmt Link encap:Ethernet  HWaddr 90:B1:1C:41:8F:E6
          inet addr:192.168.203.150  Bcast:192.168.203.255  Mask:255.255.255.0
          inet6 addr: fe80::92b1:1cff:fe41:8fe6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:311808663 errors:0 dropped:0 overruns:0 frame:0
          TX packets:372893154 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1422822276840 (1.2 TiB)  TX bytes:690166822846 (642.7 GiB)

vnet0     Link encap:Ethernet  HWaddr FE:1A:4A:C0:99:05
          inet6 addr: fe80::fc1a:4aff:fec0:9905/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5267776 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11700414 errors:0 dropped:0 overruns:1 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:1756515922 (1.6 GiB)  TX bytes:4651600153 (4.3 GiB)

vnet1     Link encap:Ethernet  HWaddr FE:1A:4A:C0:99:02
          inet6 addr: fe80::fc1a:4aff:fec0:9902/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2557550 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9841131 errors:0 dropped:0 overruns:10 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:508294202 (484.7 MiB)  TX bytes:3764681737 (3.5 GiB)

vnet2     Link encap:Ethernet  HWaddr FE:1A:4A:C0:99:04
          inet6 addr: fe80::fc1a:4aff:fec0:9904/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3057202 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7276617 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:338763185 (323.0 MiB)  TX bytes:749836458 (715.0 MiB)

vnet3     Link encap:Ethernet  HWaddr FE:1A:4A:C0:99:07
          inet6 addr: fe80::fc1a:4aff:fec0:9907/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1021171 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2019222 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:918673872 (876.1 MiB)  TX bytes:624962227 (596.0 MiB)

This is my first NIC:

[root@romulus network-scripts]# cat ifcfg-em1
# Generated by VDSM version 4.13.3-4.el6
DEVICE=em1
ONBOOT=yes
HWADDR=90:b1:1c:41:8f:e6
BRIDGE=ovirtmgmt
NM_CONTROLLED=no
STP=no

And my second one:

[root@romulus network-scripts]# cat ifcfg-em2
# Generated by VDSM version 4.13.3-4.el6
DEVICE=em2
ONBOOT=yes
HWADDR=90:b1:1c:41:8f:e7
BRIDGE=VM
NM_CONTROLLED=no
STP=no

My ovirtmgmt setup is:

[root@romulus network-scripts]# cat ifcfg-ovirtmgmt
# Generated by VDSM version 4.13.3-4.el6
DEVICE=ovirtmgmt
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=192.168.203.150
NETMASK=255.255.255.0
GATEWAY=192.168.203.98
DNS1=192.168.204.2
DNS2=192.168.204.218
BOOTPROTO=none
DEFROUTE=yes
NM_CONTROLLED=no
STP=no

For my VM network:

[root@romulus network-scripts]# cat ifcfg-VM
# Generated by VDSM version 4.13.3-4.el6
DEVICE=VM
ONBOOT=yes
TYPE=Bridge
DELAY=0
DEFROUTE=no
NM_CONTROLLED=no
STP=no

Where do I need to configure my DNS servers?

Kind regards.
Re: [ovirt-users] Network issues with my node
/etc/resolv.conf, as usual

On 09.05.2014 14:36, Andy Michielsen wrote:

Where do I need to configure my DNS servers?

-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
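With initscripts (NM_CONTROLLED=no), the DNS1/DNS2 entries in ifcfg-ovirtmgmt should normally be written into /etc/resolv.conf when the bridge comes up; if that is not happening, you can set it by hand. Using the addresses from the configuration posted above:

# /etc/resolv.conf
nameserver 192.168.204.2
nameserver 192.168.204.218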
Re: [ovirt-users] Ovirt Python SDK adding a directlun
On 05/09/2014 10:31 AM, Juan Hernandez wrote:

On 05/08/2014 11:37 PM, Gary Lloyd wrote:

When I add direct LUNs this way, the size shows as 1 in the GUI and 0 when called from the REST API. All the other items mentioned are not present. Thanks

Ah, I understand. This is probably related to the fact that you aren't creating a storage domain, only the storage connection. This should work correctly, but I guess that either the GUI or the backend isn't completely prepared for this. I'm checking.

I think this is a bug, and I didn't find any way to work around it other than creating the LUN using the GUI instead of the REST API. I opened the following BZ to track it:

https://bugzilla.redhat.com/1096217

[...]

--
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
[ovirt-users] Auto-SOLVED, but read anyway : Invalid status on Data Center. Setting status to Non Responsive.
Hi,

On our second oVirt setup, on 3.4.0-1.el6 (which was running fine), I did a yum upgrade on the engine (...sigh...), then rebooted the engine. This machine is hosting the NFS export domain. Though the VMs are still running, the storage domain is in invalid status. You'll find the engine.log below.

At first sight, I thought it was the same issue as:
http://lists.ovirt.org/pipermail/users/2014-March/022161.html
because it looked very similar. But the NFS export domain connection seemed OK (tested). I tried every trick I could think of, restarting, checking everything... Our cluster stayed in a broken state.

On second sight, I saw that when rebooting the engine, the NFS export domain was not mounted correctly (I had written a static /dev/sd-something in fstab, and the iSCSI manager changed the letter; next time I'll use LVM or a label). So the NFS server was exporting an empty directory -- a black hole. I just realized all the above, and spent my afternoon in cold sweat. Correcting the NFS mount and restarting the engine did the trick.

What still disturbs me is that the unavailability of the NFS export domain should NOT be a reason for the MASTER storage domain to break! Following the URL above and the BZ opened by the user (https://bugzilla.redhat.com/show_bug.cgi?id=1072900), I see this has been corrected in 3.4.1. But what about an export domain that is perfectly connected, yet empty?

PS: I see no 3.4.1 update in the CentOS repo.

Regards,

--
The engine log:

2014-05-09 14:40:37,767 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] spmStart polling started: taskId = 6d612398-fdad-49f2-9874-5f32a9bf87e2
2014-05-09 14:40:40,848 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] Failed in HSMGetTaskStatusVDS method
2014-05-09 14:40:40,850 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] spmStart polling ended: taskId = 6d612398-fdad-49f2-9874-5f32a9bf87e2 task status = finished
2014-05-09 14:40:40,850 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358
2014-05-09 14:40:40,913 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] spmStart polling ended, spm status: Free
2014-05-09 14:40:40,932 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] START, HSMClearTaskVDSCommand(HostName = serv-vm-adm17, HostId = 049943eb-2bcc-4167-a780-7ef76a1f95e9, taskId=6d612398-fdad-49f2-9874-5f32a9bf87e2), log id: 5cfdc8ce
2014-05-09 14:40:40,982 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] FINISH, HSMClearTaskVDSCommand, log id: 5cfdc8ce
2014-05-09 14:40:40,983 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [f685ea4] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@39471ba9, log id: 58ec77ee
2014-05-09 14:40:40,985 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [6b69119f] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
2014-05-09 14:40:41,009 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-28) [6b69119f] Correlation ID: 6b69119f, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Etat-Major3. Setting status to Non Responsive.
2014-05-09 14:40:41,017 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [6b69119f] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed
2014-05-09 14:40:41,112 INFO
Re: [ovirt-users] Auto-SOLVED, but read anyway : Invalid status on Data Center. Setting status to Non Responsive.
Same here! A reboot of the NFS machine hosting the export domain caused the same thing. Upgrading to 3.4.1 on F19, and then some tweaking of my Gluster-based storage domain (which in the meantime had become split-brain; some bricks removed, then added, and so on) finally solved it.

On Fri, May 9, 2014 at 4:55 PM, Nicolas Ecarnot nico...@ecarnot.net wrote:

[...]
[ovirt-users] Ovirt-3.4.1: How to: Upgrading Hosted Engine Cluster
Hello,

Failing to find a procedure for how to actually upgrade a HA cluster, I did the following, which turned out to work pretty well. I am somewhat new to oVirt and was amazed how well, actually; I did not need to shut down a single VM (well, one because of memory usage; many of my running VMs have fancy stuff like iSCSI and FC LUNs via a Quantum StorNext HA cluster):

1. Set the cluster to global maintenance (commands sketched below).
2. Log in to the oVirt engine and do the upgrade according to the release notes.
3. After the upgrade is finished and the engine is running, set the first node to local maintenance.
4. Log in to the first node and yum update (with the removal of ovirt-release as mentioned in the release notes).* I rebooted the node because of the kernel update.
5. Return to oVirt and reinstall the node from the GUI; it will be set to operational automatically.**
6. Repeat steps 3-5 for the rest of the nodes.
7. Remove global maintenance.
8. Update the last node.***

* I first tried to do this with re-install from the GUI. This failed, so I used the yum update method to update all relevant services.
** I do not know if this was necessary. I did this because hosted-engine --deploy does the same thing when adding a host.
*** I found this to be necessary because I had all my nodes in local maintenance and could not migrate the hosted engine away from the last node any more. The host activation in oVirt did not remove the local maintenance set prior to the update (which it should, IMHO). It might be desirable to have a hosted-engine command option to remove local maintenance for that reason.

--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de  www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
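For reference, the hosted-engine HA maintenance modes used in steps 1, 3 and 7 can be toggled from the command line on any hosted-engine host; a minimal sketch, assuming oVirt 3.4's hosted-engine tool:

# Step 1: global maintenance -- the HA agents stop monitoring/restarting the engine VM
hosted-engine --set-maintenance --mode=global

# Step 3: local maintenance on the host about to be updated
hosted-engine --set-maintenance --mode=local

# Steps 7/8: drop maintenance again when done
hosted-engine --set-maintenance --mode=none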
Re: [ovirt-users] user portal and stateless vm pool behavior
On 05/09/2014 05:25 AM, Jeff Clay wrote:

If a user takes a VM from the pool, uses it, and then disconnects, can that VM then be assigned to another user immediately or quickly? The VMs in my pools run as stateless. Is there a way to automatically reboot a VM when a user disconnects, so that it's fresh for the next user? I'm using Windows 7 VMs and Windows clients with virt-viewer to connect to the VMs. I've written a quick script that runs on the engine and tails the engine log looking for disconnects, then reboots the VM when a disconnect is seen from a non-admin user, but I'm not familiar enough with how things are intended to work in the backend to know if my script is needed or if I'm just unaware of a certain feature or function. Thanks.

I think a screen saver inside the guest, shutting it down after being idle for X minutes, may be a good approach. Otherwise, an RFE for "Return VM to pool if no users connected for X minutes" would be needed.
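For anyone wanting to reproduce the log-tailing approach, here is a minimal sketch, assuming the oVirt Python SDK 3.x and a 3.4-era engine.log; the disconnect pattern and the VM-name capture are assumptions -- check the exact wording of the disconnect lines in your own engine.log before relying on it:

import re
import subprocess

from ovirtsdk.api import API

# Placeholder engine details -- adjust for your setup.
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

# Hypothetical pattern -- verify against the disconnect lines in your engine.log.
DISCONNECT_RE = re.compile(r'console disconnected.*?vm (?P<vm>\S+)', re.IGNORECASE)

# Follow the engine log; -F survives log rotation.
log = subprocess.Popen(['tail', '-F', '/var/log/ovirt-engine/engine.log'],
                       stdout=subprocess.PIPE)
for line in iter(log.stdout.readline, b''):
    m = DISCONNECT_RE.search(line.decode('utf-8', 'replace'))
    if not m:
        continue
    vm = api.vms.get(m.group('vm'))
    if vm is not None and vm.status.state == 'up':
        # Stopping a stateless VM discards its changes, so the pool
        # can hand it out fresh to the next user.
        vm.stop()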