Re: [ovirt-users] R: R: PXE boot of a VM on vdsm don't read DHCP offer
On Fri, May 08, 2015 at 03:11:25PM +0200, NUNIN Roberto wrote:
> Hi Dan, thanks for answering.
>
> > Which kernel does the el7 host run? I think that Ido has seen a case where
> > `brctl showmacs` was not populated with the VM mac, despite a packet coming
> > out of it.
>
> Kernel is 3.10.0-123.20.1.el7.x86_64; the package is vdsm only. brctl isn't
> available within the vdsm-only package.

Could you try upgrading to a more up-to-date
http://mirror.centos.org/centos-7/7.1.1503/updates/x86_64/Packages/kernel-3.10.0-229.4.2.el7.x86_64.rpm ?

bridge-utils is a vdsm dependency, so it must exist on your host. Please check
whether the MAC of the vNIC shows up in `brctl showmacs` as it should.

> > Can you tcpdump and check whether the bridge propagated the DHCP offer to
> > the tap device of the said VM? Does the packet generated by
> > `ether-wake MAC-of-VM` reach the tap device?
>
> Yes, the host sees the broadcast:
>   0.000000  0.0.0.0  255.255.255.255  DHCP 346  DHCP Discover - Transaction ID 0x69267b67
> It came from the right MAC:
>   Source: Qumranet_15:81:03 (00:1a:4a:15:81:03)
> and it is tagged correctly:
>   802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500
> This is the offer, on the bond interface:
>   1.012355  10.155.124.2  10.155.124.246  DHCP 346  DHCP Offer - Transaction ID 0x69267b67
> Layer 2 info:
>   Ethernet II, Src: Cisco_56:83:c3 (84:78:ac:56:83:c3), Dst: Qumranet_15:81:03 (00:1a:4a:15:81:03)
> Tagging on the bond:
>   802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 3500
> The tag is correctly removed when the DHCP offer is forwarded over bond.3500.
> Here's the offer content; everything seems right:
>   Client IP address: 0.0.0.0 (0.0.0.0)
>   Your (client) IP address: 10.155.124.246 (10.155.124.246)
>   Next server IP address: 10.155.124.223 (10.155.124.223)
>   Relay agent IP address: 10.155.124.2 (10.155.124.2)
>   Client MAC address: Qumranet_15:81:03 (00:1a:4a:15:81:03)
>   Client hardware address padding:
>   Server host name: 10.155.124.223
>   Boot file name: pxelinux.0
>   Magic cookie: DHCP
> Nothing of this offer appears on the VM side.

But does it show on the host's bridge? On the tap device?
ether-wake -i bond0.3500 00:1a:4a:15:81:03 (started from the host) reaches the VM's eth0 interface:
  2.002028  HewlettP_4a:47:b0  Qumranet_15:81:03  WOL 116  MagicPacket for Qumranet_15:81:03 (00:1a:4a:15:81:03)
Really strange behavior.

Roberto
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
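The checks Dan asks for — is the VM MAC in the bridge's forwarding table, and does the offer reach the tap device — can be scripted from the host. The bridge and tap names below (the VM-network bridge and vnet0) are assumptions; substitute the ones `virsh domiflist <vm>` reports for your VM:

```shell
# Sketch: walk the frame's path on the host.  Bridge and tap names are
# assumptions -- use the real ones from `virsh domiflist <vm>`.
check_bridge() {
    br="$1"
    if [ -d "/sys/class/net/$br/brif" ]; then
        echo "ports enslaved to $br:"
        ls "/sys/class/net/$br/brif"
        # The VM MAC (here 00:1a:4a:15:81:03) should appear in the table:
        brctl showmacs "$br"
    else
        echo "bridge $br not present on this host"
    fi
}
check_bridge onvm3500
# If the MAC is listed, sniff the tap device itself for the offer:
#   tcpdump -nnei vnet0 udp port 67 or udp port 68
```

If the offer shows up on the bridge but never on the tap, the bridge is not forwarding it; if it never reaches the bridge, the problem is on the bond/VLAN side.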
Re: [ovirt-users] vdsm storage problem - maybe cache problem?
Hi Mario,

Can you try to mount this directly from the host? Can you please attach the VDSM and engine logs?

Thanks, Maor

- Original Message - From: m...@ohnewald.net To: Maor Lipchuk mlipc...@redhat.com Cc: users@ovirt.org Sent: Monday, May 18, 2015 2:36:38 PM Subject: Re: [ovirt-users] vdsm storage problem - maybe cache problem?

Hi Maor, thanks for the quick reply.

On 18.05.15 at 13:25, Maor Lipchuk wrote:
> > Now my question: why does the vdsm node not know that I deleted the
> > storage? Has vdsm cached this mount information? Why does it still try to
> > access 036b5575-51fa-4f14-8b05-890d7807894c?
>
> Yes, vdsm uses a cache for storage domains; you can try to restart the
> vdsmd service instead of rebooting the host.

I am still getting the same error.

[root@ovirt-node01 ~]# /etc/init.d/vdsmd stop
Shutting down vdsm daemon:
vdsm watchdog stop [ OK ]
vdsm: Running run_final_hooks [ OK ]
vdsm stop [ OK ]
[root@ovirt-node01 ~]# ps aux | grep vdsmd
root 3198 0.0 0.0 11304 740 ? S May07 0:00 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/supervdsm_respawn.pid /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
root 3205 0.0 0.0 922368 26724 ? Sl May07 12:10 /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
root 15842 0.0 0.0 103248 900 pts/0 S+ 13:35 0:00 grep vdsmd
[root@ovirt-node01 ~]# /etc/init.d/vdsmd start
initctl: Job is already running: libvirtd
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running gencerts
vdsm: Running check_is_configured
libvirt is already configured for vdsm
sanlock service is already configured
vdsm: Running validate_configuration
SUCCESS: ssl configured to true.
No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running restore_nets
vdsm: Running unified_network_persistence_upgrade
vdsm: Running upgrade_300_nets
Starting up vdsm daemon: vdsm start [ OK ]
[root@ovirt-node01 ~]#
[root@ovirt-node01 ~]# grep ERROR /var/log/vdsm/vdsm.log | tail -n 20
Thread-13::ERROR::2015-05-18 13:35:03,631::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-13::ERROR::2015-05-18 13:35:03,632::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-36::ERROR::2015-05-18 13:35:11,607::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:11,621::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:11,960::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-36::ERROR::2015-05-18 13:35:11,960::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information
Thread-36::ERROR::2015-05-18 13:35:21,962::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:21,965::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:22,068::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-36::ERROR::2015-05-18
13:35:22,072::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information
Thread-15::ERROR::2015-05-18 13:35:33,821::task::866::TaskManager.Task::(_setError) Task=`54bdfc77-f63a-493b-b24e-e5a3bc4977bb`::Unexpected error
Thread-15::ERROR::2015-05-18 13:35:33,864::dispatcher::65::Storage.Dispatcher.Protect::(run) {'status': {'message': Unknown pool id, pool not connected: ('b384b3da-02a6-44f3-a3f6-56751ce8c26d',), 'code': 309}}
Thread-13::ERROR::2015-05-18 13:35:33,930::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-15::ERROR::2015-05-18
[ovirt-users] Updated invitation: oVirt 3.6.0 Alpha Release Test Day - Wed 2015-05-27 (oVirt schedule)
[invite.ics attachment — Google Calendar request: "oVirt 3.6.0 Alpha Release Test Day", Wed 2015-05-27, organized by the oVirt schedule calendar, sent to users@ovirt.org, de...@ovirt.org and in...@ovirt.org]
Re: [ovirt-users] missing disk after storage domain expansion
Hi Maor,
1) disk creation: OK.
2) no free space at the moment of creation: this is very strange! Are you sure? I assure you the disk was working, exported to a VM which formatted it and wrote data on it.
3) no mention of disk removal: this is the very issue I'm facing. In fact the disk disappeared with no notification, no logs, nothing. I'm forwarding you the full set of engine.log files separately. If you need vdsm.log files, just tell me which server you need them from, as I have several clusters.
4) I have not yet tried to register using the procedure you provided earlier. I'll try tomorrow.
Thanks AG

-Original Message- From: Maor Lipchuk [mailto:mlipc...@redhat.com] Sent: Monday, May 18, 2015 1:15 PM To: Andrea Ghelardi Cc: users@ovirt.org Subject: Re: [ovirt-users] missing disk after storage domain expansion

Hi Andrea,
I guess I missed those logs; I found them now, thanks. I do see the disk creation in the log from 20150501, and I can also see that the storage domain hertz-dstore2 had 0 GB of free space at the moment of creation:

2015-05-01 03:30:05,801 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-92) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Critical, Low disk space. hertz-dstore2 domain has 0 GB of free space

I don't, however, see any indication of the removed disks in the engine log. I have also noticed a gap between the first log, which finishes at 2015-05-01 03:30:05, and the other log, which starts at 2015-05-04 03:22:19,879. Did you try to register the disk as described in the wiki?
Regards, Maor

- Original Message - From: Andrea Ghelardi a.ghela...@iontrading.com To: Maor Lipchuk mlipc...@redhat.com Cc: users@ovirt.org Sent: Monday, May 18, 2015 12:10:55 PM Subject: RE: [ovirt-users] missing disk after storage domain expansion

Hi Maor, I already added logs in my previous email, did you receive them?
I'm sending them again, but only privately to you so as not to burden the mailing list (or let me know if you prefer otherwise). Cheers AG

-Original Message- From: Maor Lipchuk [mailto:mlipc...@redhat.com] Sent: Saturday, May 16, 2015 10:03 AM To: Andrea Ghelardi Cc: users@ovirt.org Subject: Re: [ovirt-users] missing disk after storage domain expansion

Hi Andrea,
The OVF_STORE disks are disks which are mainly used internally by the engine to store VMs' and Templates' OVFs; you can disregard them for now. The behavior you mentioned, that the disk suddenly disappeared from the setup, sounds very weird and I would like to investigate it a bit more. If you can please add the engine and vdsm logs, I can find out what made the disk get removed from the setup in the first place. Could it be that someone accessed the database, by any chance? Regarding the disk registration: basically it should not create any problems unless your disk was deleted manually from the DB and there are still leftovers related to the disk in some of the tables. Once the disk is registered to the setup, you can try to attach it to the VM and run it.
Regards, Maor

- Original Message - From: Andrea Ghelardi a.ghela...@iontrading.com To: Maor Lipchuk mlipc...@redhat.com Cc: users@ovirt.org Sent: Friday, May 15, 2015 4:54:25 PM Subject: RE: [ovirt-users] missing disk after storage domain expansion

Querying the hosted engine with the suggested command

https://pisa-ion-ovirtm-01/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks;unregistered

indeed shows an (unregistered?) disk:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disks>
  <disk href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41" id="16736ce0-a9df-410f-9f29-3a28364cdd41">
    <actions>
      <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/export" rel="export"/>
    </actions>
    <name>hertz_disk5</name>
    <description>disk for SYBASE installation, on SAN shared storage</description>
    <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/statistics" rel="statistics"/>
    <alias>hertz_disk5</alias>
    <image_id>4ab070c0-fb16-452d-8521-4ff0b004aef3</image_id>
    <storage_domain href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d" id="7a48fe46-2112-40a4-814f-24d74c760b2d"/>
    <storage_domains>
      <storage_domain id="7a48fe46-2112-40a4-814f-24d74c760b2d"/>
    </storage_domains>
    <size>225485783040</size>
    <provisioned_size>225485783040</provisioned_size>
    <actual_size>225485783040</actual_size>
    <status><state>ok</state></status>
    <interface>ide</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>false</bootable>
Re: [ovirt-users] Ovirt node or Linux standard
Hi Kevin,

On 05/13/2015 02:53 AM, Kevin C wrote:
> Hi all, is there any reason to choose to install CentOS 6/7 and add it to
> oVirt Manager instead of an oVirt Node? Is it just for manageability, or
> are there other technical reasons?

oVirt Node is a 'ready to go' option: as soon as you complete the installation, it has all the packages needed for the hypervisor in a read-only OS (focused on oVirt), using as little space as possible. It also contains the Text User Interface (TUI) that helps administrators do management tasks, like initial network setup, hosted-engine, etc.

On the other hand, adding CentOS/Fedora and other distros as hypervisors via Engine will automatically download the packages needed. In this case, configuring hosted-engine and other management options will obviously be done without the TUI.

To upgrade ovirt-node as a read-only distro, you can:
- boot the new ISO on the system, or
- download and install the new ovirt-node RPM on the engine and trigger the upgrade via the engine.

We provide nightly builds for testing: http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-el7_merged/

Feel free to reach us with your experience using oVirt Node.
-- Cheers Douglas
[ovirt-users] Team a NIC and add to ovirtmgmt vNIC
Dear all,

In an existing setup with one NIC carrying the ovirtmgmt vNIC, how would I go about adding a second NIC, teaming the two up (bonding), and then assigning the bond to ovirtmgmt without too much downtime? Does anybody have experience with that?

Thank you for your help,
— Christophe

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
7, avenue des Hauts-Fourneaux
L-4362 Esch-sur-Alzette
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb

This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies.
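For reference, a minimal sketch of what the host-side configuration ends up looking like. Device names (em1/em2, bond0) and the bonding mode are assumptions; in practice you would let the engine's Setup Host Networks dialog build the bond and move ovirtmgmt onto it, and do it from a host with console access, since a mistake here cuts the management connection:

```
# /etc/sysconfig/network-scripts/ifcfg-em1  (and similarly ifcfg-em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no
```

mode=active-backup (mode 1) needs no switch-side configuration, which keeps the downtime window small; LACP (mode 4) requires the switch ports to be configured first.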
Re: [ovirt-users] oVirt newbie
Hi Didi,

Thanks for the welcome! I had some issues, but @dougsland hit the spot just by providing a checklist. I was creating a Data/Storage/FCP domain, and my naive expectation that the double Hitachi/NetApp would provide me with a clean LUN went down the toilet! In the process of fighting the documentation, @dougsland provided me with a checklist, including the item "is there something in the LUN?". That led to the recommended dd if=/dev/zero of=/dev/mapper/my_lun ... and everything started moving again!

Note for improvement in the docs: straight checklists!

Cheers,

- Original Message - From: Yedidyah Bar David d...@redhat.com To: Fábio Coelho fabio.coe...@jfsc.jus.br Cc: users users@ovirt.org Sent: Sunday, May 17, 2015 2:49:08 AM Subject: Re: [ovirt-users] oVirt newbie

- Original Message - From: Fábio Coelho fabio.coe...@jfsc.jus.br To: users users@ovirt.org Sent: Friday, May 15, 2015 11:27:14 PM Subject: [ovirt-users] oVirt newbie

> Hi everyone, I'm a sysadmin and old VMware user, willing to advance in
> open source alternatives. I hope I can help and be helped here :D.

Welcome!

> Currently, I'm testing a setup with ovirt-node-el7 and a centos6
> engine server, and at the moment all is going fine!

Glad to hear that! Please do not hesitate to report any issues you stumble upon. Best, -- Didi

Legal notice: The information contained in this e-mail and its attachments may be restricted; the sender is responsible for its content and addressing. If you are not the person authorized to receive this message and have received it by mistake, please delete it immediately. JFSC considers opinions, conclusions and other unofficial information to be the responsibility of the user of this service.
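The "is there something in the LUN?" check and the wipe boil down to inspecting, then zeroing, the leading blocks where old partition tables and LVM signatures live. A sketch, using a scratch file in place of /dev/mapper/my_lun so it can be tried without root (on a real LUN, point LUN at the device and run as root — the wipe is destructive):

```shell
# Stand-in for /dev/mapper/my_lun.  4 MiB covers typical partition/LVM
# metadata at the start of the device.
LUN=/tmp/fake_lun
dd if=/dev/urandom of="$LUN" bs=1M count=4 2>/dev/null   # simulate stale data
od -An -tx1 "$LUN" | head -n 1                           # leading bytes: non-zero
# Wipe the leading metadata so oVirt sees a clean LUN:
dd if=/dev/zero of="$LUN" bs=1M count=4 conv=notrunc 2>/dev/null
od -An -tx1 "$LUN" | head -n 1                           # leading bytes: all zero
```

Zeroing only the first few MiB is usually enough for oVirt to stop seeing the LUN as "in use"; zeroing the whole device works too but takes much longer.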
Re: [ovirt-users] vdsm storage problem - maybe cache problem?
Manual mounting works:

[root@ovirt-node01 ~]# mount ovirt.hq.example.net:/export2 /tmp/mnt/
[root@ovirt-node01 ~]# umount /tmp/mnt/

(but I removed the export2 domain from the engine web GUI; I still wonder why the node wants to access it)

vdsm log:

StorageDomainDoesNotExist: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::ERROR::2015-05-18 14:35:35,099::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::ERROR::2015-05-18 14:35:35,099::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,100::lvm::372::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,100::lvm::295::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config devices { preferred_names = [\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 036b5575-51fa-4f14-8b05-890d7807894c' (cwd None)
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,121::lvm::295::Storage.Misc.excCmd::(cmd) FAILED: err = ' Volume group 036b5575-51fa-4f14-8b05-890d7807894c not found\n Skipping volume group 036b5575-51fa-4f14-8b05-890d7807894c\n'; rc = 5
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::WARNING::2015-05-18 14:35:35,122::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group 036b5575-51fa-4f14-8b05-890d7807894c not found', '
Skipping volume group 036b5575-51fa-4f14-8b05-890d7807894c']
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,122::lvm::414::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::ERROR::2015-05-18 14:35:35,133::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
    dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::ERROR::2015-05-18 14:35:35,133::sp::329::Storage.StoragePool::(startSpm) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
    self._updateDomainsRole()
  File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
    return method(self, *args, **kwargs)
  File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
    domain = sdCache.produce(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 98, in produce
    domain.getRealDomain()
  File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
    dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::ERROR::2015-05-18 14:35:35,134::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,134::sp::336::Storage.StoragePool::(_shutDownUpgrade) Shutting down upgrade process
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,134::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d`ReqID=`24d881e8-eba3-4b51-85e7-32301218b2e9`::Request was made in '/usr/share/vdsm/storage/sp.py' line '338' at '_shutDownUpgrade'
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,135::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' for lock type 'exclusive'
47d37321-3f0f-4a1b-a8d5-629065fc1c4c::DEBUG::2015-05-18 14:35:35,135::resourceManager::601::ResourceManager::(registerResource) Resource
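When a domain was removed on the engine side but a host still chases its UUID, it helps to separate "still mounted on the host" from "stale in vdsm's cache or the engine DB". A minimal check, using the UUID from the log above:

```shell
# Report whether a storage-domain UUID is still mounted on this host.
check_sd() {
    sd="$1"
    if grep -q "$sd" /proc/mounts; then
        echo "$sd still mounted:"
        grep "$sd" /proc/mounts
    else
        echo "$sd not mounted here -- the reference is stale (vdsm cache or engine DB)"
    fi
}
check_sd 036b5575-51fa-4f14-8b05-890d7807894c
# vdsm also keeps links under /rhev/data-center; leftovers show up with:
#   find /rhev/data-center -name '*036b5575*' 2>/dev/null
```

If nothing is mounted and no links remain, the UUID can only be coming from vdsm's in-memory cache (cleared by restarting vdsmd) or from the storage-pool metadata kept by the engine.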
[ovirt-users] oVirt Test Day next week on May 27th
Hi, just a reminder that next week we're going to have a Test Day on May 27 (postponed one day due to a collision with the 3.5.3 RC2 release). Maintainers: please update the test day page [1]. Testers: be sure to get your systems ready! [1] http://www.ovirt.org/OVirt_3.6_Test_Day -- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
Re: [ovirt-users] oVirt newbie
- Original Message - From: Fábio Coelho fabio.coe...@jfsc.jus.br To: users users@ovirt.org Sent: Monday, May 18, 2015 4:16:46 PM Subject: Re: [ovirt-users] oVirt newbie

> Hi Didi, Thanks for the welcome! I had some issues, but @dougsland hit the
> spot just by providing a checklist. I was creating a Data/Storage/FCP
> domain, and my naive expectation that the double Hitachi/NetApp would
> provide me with a clean LUN went down the toilet!

Well, I agree that's weird.

> In the process of fighting the documentation, @dougsland provided me with
> a checklist, including the item "is there something in the LUN?". That led
> to the recommended dd if=/dev/zero of=/dev/mapper/my_lun ... and
> everything started moving again!

Which page(s) did you follow?

> Note for improvement in the docs: straight checklists!

Well, you are more than welcome to help improve the wiki :-)

Best,
-- Didi
Re: [ovirt-users] Presentation
Hello Roberto,

On 05/07/2015 05:39 AM, NUNIN Roberto wrote:
> Hello, I'm Roberto Nunin, responsible for infrastructure in an Italian
> company, part of a multi-national group.

Welcome to the oVirt world!

> I'm really interested in oVirt technology and its future developments.
> Currently we are running a PoC of three clusters, 6 hosts, with an issue
> I've already submitted to the community.

What kind of deployment are you running? Does it include hosted-engine, all-in-one or another schema? Could you please share the issue you are facing? Is it an oVirt issue?

> Hope to find inside the community answers, solutions and suggestions
> related to the product.

Sure, Roberto. You are welcome to share your knowledge and experiences too, to enhance the community.
-- Cheers Douglas
[ovirt-users] oVirt user permissions for fence_rhevm
Hi,

I've created a user in AD that should only be able to power a specific VM in oVirt off and on, and granted this user UserRole permission on that VM. If I log into the user portal with these credentials, I can see the VM and power it off/on.

When I use the fence_rhevm agent, it fails to find the correct plug. I fixed this by adding the "Filter: true" header to the fence_rhevm script. When running it manually, fence_rhevm can show me the status of the plug and can power it on/off. But when I try to integrate this into a pacemaker cluster (on Debian 7) using the fence_rhevm resource agent, it reboots the VM on every monitor action.

Has anyone succeeded in using fence_rhevm with oVirt on pacemaker 1.1? Are there any additional oVirt permissions the user needs to make this work? I don't want to make this fence user an admin for my entire oVirt datacenter.

The stonith primitive is configured:

primitive p_fence_vm1 stonith:fence_rhevm \
    params port=vm1 login=fence-...@mydomain.ad ipaddr=ovirt-engine.mydomain ipport=443 ssl=1 passwd=secret verbose=1 pcmk_host_list=vm1 pcmk_host_check=static-list \
    op monitor interval=15m

Regards, Rik
-- Rik Theys System Engineer KU Leuven - Dept. Elektrotechniek (ESAT) Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee +32(0)16/32.11.07 Any errors in spelling, tact or fact are transmission errors
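To separate a permissions problem from a pacemaker-configuration problem, it helps to run the agent by hand with the same parameters as the primitive and exercise each action (status, off, on) separately. A sketch — the host, login, password and plug values are the hypothetical ones from the primitive above, and the check for the binary is there because the fence-agents package name differs between distros:

```shell
# Map the primitive's params onto fence_rhevm's CLI options and query status.
# -a = ipaddr, -u = ipport, -z = ssl, -l = login, -p = passwd, -n = plug/port.
if command -v fence_rhevm >/dev/null 2>&1; then
    fence_rhevm -a ovirt-engine.mydomain -u 443 -z \
        -l 'fence-...@mydomain.ad' -p secret \
        -n vm1 -o status
else
    echo "fence_rhevm not installed on this machine"
fi
```

If `-o status` works but pacemaker still reboots the VM on monitor, the mismatch is likely in how the agent lists plugs for the monitor action (the "Filter: true" behavior), not in the oVirt permissions.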
Re: [ovirt-users] Possible SELinux problems with ovirt syncronizing networks
On Fri, May 15, 2015 at 03:03:48PM -0500, Jeremy Utley wrote:
> Hello all! Running ovirt 3.5 on CentOS 7 currently, and running into a
> little problem. All my nodes are currently showing the ovirtmgmt network
> as unsynchronized, and when I try to force them to sync, it fails. Looking
> at the /var/log/vdsm/supervdsm.log file on one of the nodes, it looks like
> it has to do with SELinux. See http://pastebin.com/NX7yetVW, which
> contains a dump of the supervdsm.log file from when I tried to force
> synchronization. Judging from what I'm seeing, after VDSM writes the new
> network configuration files to /etc/sysconfig/network-scripts/ifcfg-*, it
> attempts to run a selinux.restorecon function against those files. Since
> we disable SELinux by default on all our servers, this action fails with
> errno 61 (see lines 66-71 and 86-91 in the above-mentioned pastebin). Is
> this normal? Is ovirt expected to run with SELinux enabled? Or am I
> misinterpreting this log output? Thanks for any help or advice you can
> give me!

The log has "...ignoring restorecon error in case SElinux is disabled...", meaning that Vdsm decided to allow working with SELinux disabled — but it is recommended, whole-heartedly, that you enable SELinux on your hosts. For example, the recent qemu flaw https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/ becomes much more limited with SELinux enabled. http://stopdisablingselinux.com/

And now to your networking question: your log excerpt ends with a successful setSafeNetworkConfig, which means that setupNetwork succeeded and that Engine knows it. We'd need to dig deeper to understand why the nets keep being out of sync. Does engine.log have clues?
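For reference, errno 61 on Linux is ENODATA, consistent with restorecon having no SELinux attribute data to work with on these hosts. A small check of the host's SELinux state that does not depend on the getenforce tool being installed (it reads selinuxfs directly, which is mounted at /sys/fs/selinux on systemd-era kernels):

```shell
# Determine SELinux mode without the policycoreutils tools.
if [ -r /sys/fs/selinux/enforce ]; then
    mode=$(cat /sys/fs/selinux/enforce)   # 1 = enforcing, 0 = permissive
else
    mode=disabled                         # selinuxfs absent: disabled at boot
fi
echo "SELinux mode: $mode"
```

In the "disabled" case the filesystem carries no SELinux xattrs at all, which is why vdsm's restorecon call fails and is then deliberately ignored.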
[ovirt-users] ReferenceError: WebUtil is not defined = novnc console broken after yum update (on centos 6.6?)
Hello, I think I ran into this bug: https://bugzilla.redhat.com/show_bug.cgi?format=multipleid=1202356 I cannot use my noVNC console anymore. Does anyone have a fix for this (on CentOS 6.6)? Thanks, Mario
Re: [ovirt-users] Why so long to add virtual HDD?
Hi Alan,

Since your disks are configured as preallocated, vdsm uses the dd command when creating them. Perhaps you could monitor the dd command to analyze the progress. By the way, is there any reason not to use thin provisioning?

Regards, Maor

- Original Message - From: Alan Murrell li...@murrell.ca To: users users@ovirt.org Sent: Monday, May 18, 2015 9:31:31 AM Subject: [ovirt-users] Why so long to add virtual HDD?

> Hello, I am wondering why it takes so long to provision a HDD for a VM? I
> typically do full provisioning (as opposed to thin provisioning), and
> while I have never sat there and timed it, it takes at least ten minutes
> to provision a 32GB HDD. I provisioned a 100GB HDD earlier today and even
> after 45 minutes it was not complete. I am from the VMware world, where it
> takes less than a minute (usually more like 30 seconds or so) to provision
> a VM, regardless of the HDD assigned. My host's physical HDDs are 2x2TB
> SATA drives in a hardware RAID1. oVirt storage is an NFS pool connecting
> to my host machine (for all intents and purposes, local storage). I wonder
> if I have things misconfigured? Is it normal for provisioning to take that
> long? Thanks! :-) -Alan
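Two ways to watch a long-running dd like the one vdsm spawns for a preallocated disk, sketched here on a small scratch file (on a host you would target the real image path under /rhev/data-center; the exact invocation vdsm uses is not shown here):

```shell
# Create an 8 MiB file the way a preallocation does: writing real zeros.
dd if=/dev/zero of=/tmp/disk.img bs=1M count=8 2>/dev/null
ls -l /tmp/disk.img
# To watch an already-running dd, send it SIGUSR1 and it prints byte counts
# and throughput to stderr without stopping:
#   kill -USR1 "$(pgrep -x dd)"
# With GNU coreutils 8.24 or newer, dd also accepts status=progress.
```

Watching the byte counter grow also tells you the effective write throughput, which for a 100GB preallocation over NFS onto two SATA disks in RAID1 explains the 45+ minutes on its own.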
Re: [ovirt-users] Bad performance with Windows 2012 guests
Hi Vadim, could you reproduce my issue with your system? Do you have any advice on getting more performance with Win2012? Thank you, Sven -Original Message- From: Vadim Rozenfeld [mailto:vroze...@redhat.com] Sent: Tuesday, May 5, 2015 02:57 To: Sven Achtelik Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org Subject: Re: AW: AW: AW: Bad performance with Windows 2012 guests On Mon, 2015-05-04 at 03:32 -0500, Sven Achtelik wrote: Hi Vadim, the command line:
/usr/libexec/qemu-kvm -name wc_db01 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Westmere -m 12288 -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid fbbdc0a0-23a4-4d32-a526-a35c59eb790d -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=4C4C4544-0035-4E10-8034-B4C04F4B4E31,uuid=fbbdc0a0-23a4-4d32-a526-a35c59eb790d -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/wc_db01.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-05-04T03:26:39,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt-engine.mgmt.asl.local:_var_lib_exports_iso/d1559536-71da-4b7a-ad71-171b0b528d7f/images/----/SVR2012EVAL.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/23672c7f-ec3c-4686-bc29-89a0f95eae1c/9741917b-9134-4e14-892d-d16abf13e406,if=none,id=drive-virtio-disk0,format=raw,serial=23672c7f-ec3c-4686-bc29-89a0f95eae1c,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/0002-0002-0002-0002-03e2/a7d4ddb9-4486-4e37-b524-29625d6a7e61/images/238e79c3-378b-4117-9b6d-18f73832f286/a8730e05-ed95-4d41-a10d-e249b601ebd3,if=none,id=drive-virtio-disk1,format=qcow2,serial=238e79c3-378b-4117-9b6d-18f73832f286,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:1a:4a:ae:02,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/fbbdc0a0-23a4-4d32-a526-a35c59eb790d.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 172.16.1.14:2,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
Sven Thanks a lot. I will try to trace this issue on my local setup. Best regards, Vadim. -Original Message- From: Vadim Rozenfeld [mailto:vroze...@redhat.com] Sent: Monday, May 4, 2015 05:00 To: Sven Achtelik Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org Subject: Re: AW: AW: Bad performance with Windows 2012 guests On Sun, 2015-05-03 at 07:46 -0500, Sven Achtelik wrote: Hi Vadim, I've tested the performance with CrystalDiskMark from inside the Windows guest.
Using Win2k8 R2 I got the expected values for my system: about 88 MB/s on 4k random with 32 queues, and 500+ MB/s sequential writes with 32 queues. Using a Windows 2012 VM on the same system it's only 33 MB/s on 4k random with 32 queues and 300 MB/s sequential writes. Similar tests with a Linux VM show slightly better values than Win2k8 R2 and respond very fast. My hosts are connected via iSCSI using a 10 GbE link and a ZFS appliance as the storage system. All tests have been run several times with the same results. Sven, Can I ask you to post the Windows 2012 VM qemu command line? Thanks, Vadim. Sven -Original Message- From: Vadim Rozenfeld [mailto:vroze...@redhat.com] Sent: Sunday, May 3, 2015 14:35 To: Sven Achtelik Cc: Doron Fediuck; Martijn Grendelman; Karen Noel; users@ovirt.org Subject: Re: AW: Bad performance with Windows 2012 guests On Sun, 2015-05-03 at 06:48 -0500, Sven Achtelik wrote: Hi Doron, I've also
Re: [ovirt-users] Unable to Delete a VM snapshot (Juan Hernández)
Hi Juan Hernández, RE: Unable to Delete a VM snapshot. Do you mean that there is no unique identifier for a VM snapshot? The only way out, then, is that my Java client has to take care of creating each snapshot with a unique description, so that I can delete/restore a snapshot based on that unique description. Thanks, Prashanth R -Original Message- From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of users-requ...@ovirt.org Sent: Wednesday, May 13, 2015 2:06 PM To: users@ovirt.org Subject: Users Digest, Vol 44, Issue 61 Send Users mailing list submissions to users@ovirt.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.ovirt.org/mailman/listinfo/users or, via email, send a message with subject or body 'help' to users-requ...@ovirt.org You can reach the person managing the list at users-ow...@ovirt.org When replying, please edit your Subject line so it is more specific than Re: Contents of Users digest... Today's Topics: 1. Re: HE setup failing: Cannot set temporary password for console connection (Daniel Helgenberger) 2. Re: messages from journal in centos7 hosts (Nicolas Ecarnot) 3. Re: engine-reports portal and cpu cores (Shirly Radco) 4. Re: engine-reports portal and cpu cores (Alexandr Krivulya) 5. Re: messages from journal in centos7 hosts (Francesco Romani) 6. Re: Unable to Delete a VM snapshot
(Juan Hernández) -- Message: 1 Date: Wed, 13 May 2015 07:29:24 +0000 From: Daniel Helgenberger daniel.helgenber...@m-box.de To: Yedidyah Bar David d...@redhat.com Cc: users@ovirt.org Subject: Re: [ovirt-users] HE setup failing: Cannot set temporary password for console connection Message-ID: 8569c17008504c42.643f5abb-0f2d-445e-8665-0d736f998...@mail.outlook.com Content-Type: text/plain; charset=iso-8859-1 Hello Didi, this is a clean install of the first host. (I am trying a disaster scenario and will later restore the backup.) -- Daniel Helgenberger m box bewegtbild GmbH P: +49/30/2408781-22 F: +49/30/2408781-10 ACKERSTR. 19 D-10115 BERLIN www.m-box.de www.monkeymen.tv Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handelsregister: Amtsgericht Charlottenburg / HRB 112767 On Wed, May 13, 2015 at 12:22 AM -0700, Yedidyah Bar David d...@redhat.com wrote: - Original Message - From: Daniel Helgenberger daniel.helgenber...@m-box.de To: users@ovirt.org Sent: Tuesday, May 12, 2015 8:48:42 PM Subject: [ovirt-users] HE setup failing: Cannot set temporary password for console connection Hello, a week ago I set up a HE (3.5.2) with no problems; now this fails. You mean you re-deploy the same host (clean setup), or are adding another host? I suspect storage trouble but cannot pinpoint it. Did something change there with the recent async release? The storage domain is NFS3 via a 1G link to a ZFS storage appliance.
The export is working quite fast; tested with iozone running for over 24h without problems. It starts with the "Verifying sanlock lockspace initialization" phase, which lasts about 7 min (!). In the end it seems sanlock is locking the domain down; I can only get it back by stopping sanlock and vdsm. Attached is the full vdsm log. Thanks! vdsm:
Thread-27::ERROR::2015-05-12 19:17:51,470::domainMonitor::256::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 2f83ea82-e110-465d-a2a6-ac30534a9f6a monitoring information
Traceback (most recent call last):
  File /usr/share/vdsm/storage/domainMonitor.py, line 232, in _monitorDomain self.domain.selftest()
  File /usr/share/vdsm/storage/sdc.py, line 49, in __getattr__ return getattr(self.getRealDomain(), attrName)
  File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain return self._cache._realProduce(self._sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce domain = self._findDomain(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/nfsSD.py, line 122, in
Re: [ovirt-users] Why so long to add virtual HDD?
On 05/18/2015 01:39 AM, Maor Lipchuk wrote: Since your disks are configured as preallocated, vdsm uses the dd command to create them; perhaps you could monitor that dd process to track the progress. I will see if I can do that, though if it is using dd, that would explain the time it takes. BTW, is there any reason not to use thin provisioning? I would like to avoid over-allocating the disk space. If I thin-provision and over-allocate my physical space, there may come a day when I run out of physical space even though I still have space available inside my VMs. -Alan
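Maor's suggestion to monitor the dd can be done without extra tools: GNU coreutils dd prints its transfer statistics to stderr when it receives SIGUSR1. A self-contained sketch follows; the file name and byte counts are arbitrary stand-ins, and on a real host you would instead find vdsm's dd with something like `pgrep -x dd` and signal that pid.

```shell
# Run a short dd (stand-in for vdsm's preallocation dd) with stderr captured,
# then send SIGUSR1 so it reports progress; dd also prints a final summary.
dd if=/dev/zero of=/dev/null bs=1M count=2048 2>dd-progress.log &
DD_PID=$!
sleep 0.2                                  # let dd install its signal handler
kill -USR1 "$DD_PID" 2>/dev/null || true   # no-op if dd already finished
wait "$DD_PID"
grep 'copied' dd-progress.log              # one line per USR1 plus the final summary
```

Repeating the `kill -USR1` in a loop gives a rough progress ticker for a long-running preallocation.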
[ovirt-users] liburcu dependency with glusterfs-epel on CentOS 7.1
Hello, I'm just trying to do a yum update on one oVirt host machine (CentOS 7.1), and I hit this problem:
Error: Package: glusterfs-server-3.7.0-1.el7.x86_64 (ovirt-3.5-glusterfs-epel) Requires: liburcu-bp.so.1()(64bit)
Error: Package: glusterfs-server-3.7.0-1.el7.x86_64 (ovirt-3.5-glusterfs-epel) Requires: liburcu-cds.so.1()(64bit)
(error message translated from the original Italian)
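For what it's worth, liburcu-bp.so.1 and liburcu-cds.so.1 are normally provided by the userspace-rcu package, which on CentOS 7 comes from EPEL, so enabling EPEL before the update is the usual fix (an assumption based on the package name, not verified against this host's repo setup). A quick check of whether the dynamic linker already sees the library:

```shell
# Likely remedy (commented out; needs root and network access):
#   yum install -y epel-release
#   yum install -y userspace-rcu    # provides liburcu-bp.so.1 / liburcu-cds.so.1
#   yum update glusterfs-server

# Check whether the runtime linker already knows liburcu:
if ldconfig -p 2>/dev/null | grep -q 'liburcu-bp'; then
    status="present"
else
    status="missing"
fi
echo "liburcu: $status"
```

If the check reports "missing" after installing userspace-rcu, running `ldconfig` once to refresh the linker cache may help.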
Re: [ovirt-users] ReferenceError: WebUtil is not defined = novnc console broken after yum update (on centos 6.6?)
yum downgrade novnc on your oVirt engine works to get around it. It doesn't seem to have any dependency problems on my CentOS 6.6 host; I just have to keep remembering to redo it after engine updates. -Darrell On May 18, 2015, at 7:07 AM, m...@ohnewald.net wrote: Hello, I think I ran into this bug: https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1202356 I cannot use my novnc console anymore. Does anyone have a fix for this (on CentOS 6.6)? Thanks, Mario
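If the downgrade keeps getting undone by `yum update`, one way to pin the package is an `exclude=` line in yum.conf. The sketch below demonstrates the edit on a scratch copy so it is safe to run anywhere; on the real engine you would edit /etc/yum.conf itself, or use the yum-plugin-versionlock package instead.

```shell
# Demonstrate the exclude on a scratch copy rather than the real /etc/yum.conf.
conf=./yum.conf.sketch
printf '[main]\ngpgcheck=1\n' > "$conf"

# Append the exclude exactly once (idempotent):
grep -q '^exclude=.*novnc' "$conf" || echo 'exclude=novnc' >> "$conf"
grep '^exclude=' "$conf"
# After `yum downgrade novnc`, the same line in the real yum.conf keeps
# `yum update` from pulling the broken version back in.
```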
Re: [ovirt-users] Team a NIC and add to ovirtmgmt vNIC
Hi, If I understood correctly, this operation does not incur a long downtime. To set up the bond, first you'll need to select the relevant host, select *Network Interfaces* and click *Setup Host Networks*. At this step, you should be presented with *NIC0* (eth0?) connected to *ovirtmgmt*, and a standalone (and possibly down) *NIC1* (eth1?). Just drag and drop *NIC1* onto *NIC0*, choose the bond type and press *OK*. After the bond is displayed, make sure you've selected both the *Verify connectivity between Host and Engine* and *Save network configuration* options, and again press *OK*. If all goes well, after the configuration you'll have a working bond with ovirtmgmt on top of it. You'll have to repeat this for every host on which you'll be configuring the new bond. The total network downtime will be the time of bringing down eth0, creating bond0 and bringing up bond0 - no more than a fraction of a minute normally. And this will only affect the guests on that particular host, not the whole cluster. A little side note on migrating from a single NIC to a bond: if you plan to add more VLAN-tagged virtual networks to your new bond, keep in mind that ovirtmgmt should also be VLAN tagged - you cannot mix tagged and untagged networks on a single bond. Regards, On Mon, May 18, 2015 at 5:58 PM, Christophe TREFOIS christophe.tref...@uni.lu wrote: Dear all, How would I go about, in an existing setup with 1 NIC and 1 vNIC (ovirtmgmt), adding a second NIC, teaming them up (bonding) and then assigning the bond to ovirtmgmt without too much downtime? Does anybody have some experience with that? Thank you for your help, — Christophe Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc UNIVERSITÉ DU LUXEMBOURG LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE Campus Belval | House of Biomedicine 7, avenue des Hauts-Fourneaux L-4362 Esch-sur-Alzette T: +352 46 66 44 6124 F: +352 46 66 44 6949 http://www.uni.lu/lcsb This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies. -- *Ekin Meroğlu** Red Hat Certified Architect* linuxera Özgür Yazılım Çözüm ve Hizmetleri *T* +90 (850) 22 LINUX | *GSM* +90 (532) 137 77 04 www.linuxera.com | bi...@linuxera.com
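For reference, the bond that *Setup Host Networks* builds corresponds to initscripts ifcfg files roughly like the following. This is a hand-written sketch with placeholder device names and bond options; vdsm generates and persists the real files for you, so it is shown only to illustrate what the end result looks like on disk.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0   (and similarly ifcfg-eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"   # bond mode picked in the dialog
BRIDGE=ovirtmgmt                               # the management bridge rides on the bond
ONBOOT=yes
```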
Re: [ovirt-users] distributed striped replicated volumes can not create
I tested, but it requires the replica count to be between 4 and 16, or it does not allow creation. Can I update or apply a patch to solve this? Thanks! At 2015-05-18 12:21:28, Kanagaraj kmayi...@redhat.com wrote: Only the stripe count should be between 4 and 16; the replica count can be 2 or 3. For example, if you try to create a Distributed Striped Replicated volume with stripe 4 and replica 3, then you need 24 bricks: 4 (stripe) x 3 (replica) = 12, then 12 x 2 (distribute) = 24. If you reduce the replica to 2, then you need 16 bricks: 4 (stripe) x 2 (replica) = 8, then 8 x 2 (distribute) = 16. As mentioned in your message, if the error thrown is "Replica Count mast between in 4 and 16", it's a bug, and it should be changed to "Stripe Count must between in 4 and 16". Regards, Kanagaraj On 05/18/2015 08:44 AM, 肖力 wrote: Hi, I tried to create distributed striped replicated volumes but did not succeed. First it prompted "Replica Count mast between in 4 and 16", so I changed the replica count to 4. Then it prompted "Number of bricks should be a mutiple of Stripe Count and Replica count". Then I tried different replica and stripe counts, adding 8, 12, and 16 bricks; it did not work. I use oVirt Engine Version 3.5.2.1-1.el6. Can someone help me? Thanks!
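Kanagaraj's arithmetic generalizes to: bricks = stripe count x replica count x distribute count, so the brick count must be a multiple of stripe x replica. A throwaway check of the 24-brick example (the gluster command in the comment is a sketch with placeholder host names, not something to run as-is):

```shell
# Brick-count arithmetic for a distributed striped replicated volume:
#   bricks = stripe * replica * distribute-count
stripe=4
replica=3
distribute=2
bricks=$(( stripe * replica * distribute ))
echo "stripe=$stripe replica=$replica distribute=$distribute -> $bricks bricks"

# The corresponding volume-create call would look roughly like
# (placeholder hostnames/paths):
#   gluster volume create myvol stripe 4 replica 3 host{1..24}:/bricks/b1
```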
[ovirt-users] 3.5 all in one
hi guys, is it possible with 3.5 to run everything on a single server the way earlier versions did? cannot seem to find much on it other than using a live cd which is not what i am wanting, trying to install on top of centos 6.6 minimal install. -- thanks and regards, grant pasley.
Re: [ovirt-users] My feedback on network setting after installing OVirt
Hi All, thanks for the support. I did a couple of installations after this to test; for all of the later installations I chose the minimal install, and with the minimal install I did not face any interface problem: the machine boots up and the interface is up. The issue was with the installation where I chose the Desktop install. Thanks, Joseph John On Monday, 11 May 2015 9:41 PM, Dan Kenigsberg dan...@redhat.com wrote: On Mon, May 11, 2015 at 10:25:36AM +0000, John Joseph wrote: On Monday, 11 May 2015 2:14 PM, Yedidyah Bar David d...@redhat.com wrote: - Original Message - From: John Joseph jjk_s...@yahoo.com To: users@ovirt.org Sent: Sunday, May 10, 2015 2:37:39 PM Subject: [ovirt-users] My feedback on network setting after installing OVirt Sounds a bit like [1]. Adding Petr, this bug's owner. Can you please check/post relevant conf/log files (ifcfg*, vdsm/system logs etc)? [1] https://bugzilla.redhat.com/show_bug.cgi?id=1154399 I don't think that the bug is related, as Joseph's case lacks any bond. Network should have been restored by Vdsm on its startup. However, in ovirt-3.5.0 there were cases where this failed. Furthermore, we still have bug 1203422 ("vdsm should restore networks much earlier, to let net-dependent services start"), which is still on our TODO list. To understand precisely what happens on your host, we'd need your vdsm version and, as Didi asked, your {super,}vdsm.log and /var/log/messages. However, to avoid re-solving a solved bug, verify that you use vdsm-4.16.14 or later.
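Dan's advice to "verify that you use vdsm-4.16.14 or later" can be checked mechanically with a `sort -V` comparison. In the sketch below the installed version is a hard-coded stand-in for what `rpm -q --qf '%{VERSION}' vdsm` would report on a real host:

```shell
need="4.16.14"
have="4.16.10"   # stand-in; on a host: have=$(rpm -q --qf '%{VERSION}' vdsm)
# sort -V orders version strings numerically; the smaller of the two comes first.
lowest=$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)
if [ "$lowest" = "$need" ]; then
    echo "vdsm $have is new enough"
else
    echo "vdsm $have is older than $need - upgrade"
fi
```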
Re: [ovirt-users] missing disk after storage domain expansion
Hi Andrea, I guess I missed those logs; I found them now, thanks. I do see the disk creation in the log from 20150501, and I can also see that the storage domain hertz-dstore2 had 0 GB of free space at the moment of creation:
2015-05-01 03:30:05,801 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-92) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Critical, Low disk space. hertz-dstore2 domain has 0 GB of free space
Though, I don't see any indication of the removed disks in the engine log. I have also noticed a gap between the first log, which finishes at 2015-05-01 03:30:05, and the other log, which starts at 2015-05-04 03:22:19,879. Did you try to register the disk as described in the wiki? Regards, Maor - Original Message - From: Andrea Ghelardi a.ghela...@iontrading.com To: Maor Lipchuk mlipc...@redhat.com Cc: users@ovirt.org Sent: Monday, May 18, 2015 12:10:55 PM Subject: RE: [ovirt-users] missing disk after storage domain expansion Hi Maor, I already added the logs in my previous email; did you receive them? I'm sending them again, but only privately to you, so as not to burden the mailing list (or let me know if you prefer otherwise). Cheers AG -Original Message- From: Maor Lipchuk [mailto:mlipc...@redhat.com] Sent: Saturday, May 16, 2015 10:03 AM To: Andrea Ghelardi Cc: users@ovirt.org Subject: Re: [ovirt-users] missing disk after storage domain expansion Hi Andrea, The OVF_STORE disks are disks which are mainly used internally by the engine to store VMs' and Templates' OVFs; you can disregard them for now. The behavior you mentioned, that the disk suddenly disappeared from the setup, sounds very weird, and I would like to investigate this a bit more. If you can please add the engine and vdsm logs, I can see what made the disk get removed from the setup in the first place. Could it be that someone accessed the database by any chance?
Regarding the disk registration: basically it should not create any problems unless your disk was deleted manually from the DB and there are still leftovers related to the disk in some of the tables. Once the disk is registered to the setup, you can try to attach it to the VM and try to run it. Regards, Maor - Original Message - From: Andrea Ghelardi a.ghela...@iontrading.com To: Maor Lipchuk mlipc...@redhat.com Cc: users@ovirt.org Sent: Friday, May 15, 2015 4:54:25 PM Subject: RE: [ovirt-users] missing disk after storage domain expansion Querying the hosted engine with the suggested command https://pisa-ion-ovirtm-01/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks;unregistered indeed shows an (unregistered?) disk:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disks>
  <disk href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41" id="16736ce0-a9df-410f-9f29-3a28364cdd41">
    <actions>
      <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/export" rel="export"/>
    </actions>
    <name>hertz_disk5</name>
    <description>disk for SYBASE installation, on SAN shared storage</description>
    <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41/statistics" rel="statistics"/>
    <alias>hertz_disk5</alias>
    <image_id>4ab070c0-fb16-452d-8521-4ff0b004aef3</image_id>
    <storage_domain href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d" id="7a48fe46-2112-40a4-814f-24d74c760b2d"/>
    <storage_domains>
      <storage_domain id="7a48fe46-2112-40a4-814f-24d74c760b2d"/>
    </storage_domains>
    <size>225485783040</size>
    <provisioned_size>225485783040</provisioned_size>
    <actual_size>225485783040</actual_size>
    <status><state>ok</state></status>
    <interface>ide</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>false</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
  </disk>
</disks>
I'm waiting for any comments on the two OVF disks, or any advice, before proceeding with the register command (a test in a production environment, hm). Cheers AG -Original Message- From: Andrea Ghelardi [mailto:a.ghela...@iontrading.com] Sent: Friday, May 15, 2015 12:30 PM To: Maor Lipchuk Cc: users@ovirt.org Subject: RE: [ovirt-users] missing disk after storage domain expansion Thank you for your reply! I'm unsure if the disk contained any snapshots; I do not think so. Is the register action a safe one to do on a production system? I wouldn't want to mess up any of my existing running servers. Here the relevant logs from the date
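Before attempting the register action, it can help to pull the identifiers out of the `;unregistered` response mechanically. The sketch below works on a trimmed, cleaned-up excerpt of the response Andrea posted; the file name is made up, and the sed parsing is a crude one-off, not a general XML parser.

```shell
# Save a trimmed copy of the API response:
cat > unregistered-disks.xml <<'EOF'
<disks>
  <disk href="/ovirt-engine/api/storagedomains/7a48fe46-2112-40a4-814f-24d74c760b2d/disks/16736ce0-a9df-410f-9f29-3a28364cdd41"
        id="16736ce0-a9df-410f-9f29-3a28364cdd41">
    <alias>hertz_disk5</alias>
    <image_id>4ab070c0-fb16-452d-8521-4ff0b004aef3</image_id>
  </disk>
</disks>
EOF
# Pull out the disk id and the alias:
sed -n 's/.* id="\([^"]*\)".*/\1/p' unregistered-disks.xml
sed -n 's/.*<alias>\([^<]*\)<\/alias>.*/\1/p' unregistered-disks.xml
```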
[ovirt-users] vdsm storage problem - maybe cache problem?
Hello List, I did a simple update on my CentOS 6.6 engine (3.5) and then I rebooted it. After that I ran into trouble: the NFS mounts from my clients to the engine got stuck, so I unmounted them manually with: umount -f -l <path to my mount point> which worked. I hoped vdsm would remount them again, with no luck, so I deleted the NFS export and ISO domain from the cluster. Now I am getting this error in my vdsm log:
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,682::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,683::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sp::329::Storage.StoragePool::(startSpm) Unexpected error
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,928::task::866::TaskManager.Task::(_setError) Task=`2c6b4422-7faa-4847-ab30-fc713d7012af`::Unexpected error
Thread-37::ERROR::2015-05-18 13:13:35,683::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-37::ERROR::2015-05-18 13:13:35,683::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-37::ERROR::2015-05-18 13:13:35,720::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-37::ERROR::2015-05-18 13:13:35,720::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information
Thread-482::ERROR::2015-05-18 13:13:40,062::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-482::ERROR::2015-05-18 13:13:40,063::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-482::ERROR::2015-05-18 13:13:40,147::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,149::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,152::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,191::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,191::sp::288::Storage.StoragePool::(startSpm) Backup domain validation failed
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,193::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,193::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,228::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,228::sp::329::Storage.StoragePool::(startSpm) Unexpected error
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,228::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',)
3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,932::task::866::TaskManager.Task::(_setError) Task=`3140b81f-a434-4877-9a34-3923505e4a1f`::Unexpected error
Thread-37::ERROR::2015-05-18 13:13:45,721::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-37::ERROR::2015-05-18 13:13:45,721::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-37::ERROR::2015-05-18 13:13:45,757::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-37::ERROR::2015-05-18 13:13:45,758::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting
Re: [ovirt-users] vdsm storage problem - maybe cache problem?
Hi ml, See my comments inline Regards, Maor - Original Message - From: m...@ohnewald.net To: users@ovirt.org Sent: Monday, May 18, 2015 2:16:23 PM Subject: [ovirt-users] vdsm storage problem - maybe cache problem? Hello List, i did a simple update in my CentOS 6.6 Engine (3.5) and then i rebootet it. After that i ran into trouble. My NFS mounts from my clients to my engine got stuck. So i unmounted them manually with: umount -f -l path to my mount point Which worked. I hoped vdsm would remount it again. With no luck. So i deleted the nfs export and iso domain from the cluster. Now i am getting this error in my vdsm log: 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,682::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,683::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sp::329::Storage.StoragePool::(startSpm) Unexpected error 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,717::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',) 2c6b4422-7faa-4847-ab30-fc713d7012af::ERROR::2015-05-18 13:13:25,928::task::866::TaskManager.Task::(_setError) Task=`2c6b4422-7faa-4847-ab30-fc713d7012af`::Unexpected error Thread-37::ERROR::2015-05-18 13:13:35,683::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c Thread-37::ERROR::2015-05-18 13:13:35,683::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c 
Thread-37::ERROR::2015-05-18 13:13:35,720::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found Thread-37::ERROR::2015-05-18 13:13:35,720::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information Thread-482::ERROR::2015-05-18 13:13:40,062::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c Thread-482::ERROR::2015-05-18 13:13:40,063::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c Thread-482::ERROR::2015-05-18 13:13:40,147::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,149::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,152::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,191::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,191::sp::288::Storage.StoragePool::(startSpm) Backup domain validation failed 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,193::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,193::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 
13:13:40,228::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,228::sp::329::Storage.StoragePool::(startSpm) Unexpected error 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,228::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('036b5575-51fa-4f14-8b05-890d7807894c',) 3140b81f-a434-4877-9a34-3923505e4a1f::ERROR::2015-05-18 13:13:40,932::task::866::TaskManager.Task::(_setError) Task=`3140b81f-a434-4877-9a34-3923505e4a1f`::Unexpected error Thread-37::ERROR::2015-05-18 13:13:45,721::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c Thread-37::ERROR::2015-05-18 13:13:45,721::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Re: [ovirt-users] vdsm storage problem - maybe cache problem?
Hi Maor, thanks for the quick reply.

On 18.05.15 at 13:25, Maor Lipchuk wrote:

Now my question: Why does the vdsm node not know that I deleted the storage? Has vdsm cached this mount information? Why does it still try to access 036b5575-51fa-4f14-8b05-890d7807894c?

Yes, vdsm uses a cache for storage domains; you can try to restart the vdsmd service instead of rebooting the host.

I am still getting the same error.

[root@ovirt-node01 ~]# /etc/init.d/vdsmd stop
Shutting down vdsm daemon:
vdsm watchdog stop                                         [  OK  ]
vdsm: Running run_final_hooks                              [  OK  ]
vdsm stop                                                  [  OK  ]
[root@ovirt-node01 ~]# ps aux | grep vdsmd
root      3198  0.0  0.0  11304   740 ?        S    May07   0:00 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 10 --daemon --masterpid /var/run/vdsm/supervdsm_respawn.pid /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
root      3205  0.0  0.0 922368 26724 ?        Sl   May07  12:10 /usr/bin/python /usr/share/vdsm/supervdsmServer --sockfile /var/run/vdsm/svdsm.sock --pidfile /var/run/vdsm/supervdsmd.pid
root     15842  0.0  0.0 103248   900 pts/0    S+   13:35   0:00 grep vdsmd
[root@ovirt-node01 ~]# /etc/init.d/vdsmd start
initctl: Job is already running: libvirtd
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running gencerts
vdsm: Running check_is_configured
libvirt is already configured for vdsm
sanlock service is already configured
vdsm: Running validate_configuration
SUCCESS: ssl configured to true.
No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running restore_nets
vdsm: Running unified_network_persistence_upgrade
vdsm: Running upgrade_300_nets
Starting up vdsm daemon:
vdsm start                                                 [  OK  ]
[root@ovirt-node01 ~]# grep ERROR /var/log/vdsm/vdsm.log | tail -n 20
Thread-13::ERROR::2015-05-18 13:35:03,631::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-13::ERROR::2015-05-18 13:35:03,632::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-36::ERROR::2015-05-18 13:35:11,607::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:11,621::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:11,960::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-36::ERROR::2015-05-18 13:35:11,960::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information
Thread-36::ERROR::2015-05-18 13:35:21,962::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:21,965::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 036b5575-51fa-4f14-8b05-890d7807894c
Thread-36::ERROR::2015-05-18 13:35:22,068::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 036b5575-51fa-4f14-8b05-890d7807894c not found
Thread-36::ERROR::2015-05-18 13:35:22,072::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 036b5575-51fa-4f14-8b05-890d7807894c monitoring information
Thread-15::ERROR::2015-05-18 13:35:33,821::task::866::TaskManager.Task::(_setError) Task=`54bdfc77-f63a-493b-b24e-e5a3bc4977bb`::Unexpected error
Thread-15::ERROR::2015-05-18 13:35:33,864::dispatcher::65::Storage.Dispatcher.Protect::(run) {'status': {'message': Unknown pool id, pool not connected: ('b384b3da-02a6-44f3-a3f6-56751ce8c26d',), 'code': 309}}
Thread-13::ERROR::2015-05-18 13:35:33,930::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-15::ERROR::2015-05-18 13:35:33,928::task::866::TaskManager.Task::(_setError) Task=`fe9bb0fa-cf1e-4b21-af00-0698c6d1718f`::Unexpected error
Thread-13::ERROR::2015-05-18 13:35:33,932::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b
Thread-15::ERROR::2015-05-18 13:35:33,978::dispatcher::65::Storage.Dispatcher.Protect::(run) {'status': {'message': 'Not SPM: ()', 'code': 654}}
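The log pattern above (the same "looking for unfetched domain ... / domain ... not found" pair repeating for one UUID, even after restarting vdsmd) suggests a stale reference rather than a cache that failed to clear: something on the host side still lists the deleted domain, so every monitor cycle misses the cache, re-queries storage, and fails again. Here is a toy Python sketch of that loop. All names are hypothetical; this is not vdsm's actual StorageDomainCache code, just an illustration of the behaviour under that assumption.

```python
class DomainCache:
    """Toy model of a storage-domain cache (illustrative, not vdsm code)."""

    def __init__(self, backend):
        # backend: callable uuid -> domain object, or None if it no longer exists
        self.backend = backend
        self._cache = {}

    def lookup(self, uuid):
        if uuid in self._cache:
            return self._cache[uuid]
        # cache miss -> analogous to "looking for unfetched domain <uuid>"
        domain = self.backend(uuid)
        if domain is None:
            # analogous to "domain <uuid> not found"
            raise LookupError("domain %s not found" % uuid)
        self._cache[uuid] = domain
        return domain

    def clear(self):
        # roughly what a vdsmd restart does to the cache; if the pool
        # metadata still lists the deleted domain, the next monitoring
        # cycle simply misses again and re-fails
        self._cache.clear()


storage = {"abc51e26": "live-domain"}        # the domain that still exists
cache = DomainCache(lambda uuid: storage.get(uuid))

cache.lookup("abc51e26")                     # fetched from storage and cached
try:
    cache.lookup("036b5575")                 # deleted on the server side
except LookupError as e:
    print(e)                                 # prints: domain 036b5575 not found
```

This is why restarting vdsmd alone may not help: the cache is empty afterwards, but the monitor thread is re-created from pool metadata that still names the deleted domain, reproducing the same error sequence.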