Re: [ovirt-users] VM disk update failure

2017-05-08 Thread Stefano Bovina
Hi,
Ok, ignore what I said regarding step 2: the volume exists.

2017-05-07 20:27 GMT+02:00 Stefano Bovina :


Re: [ovirt-users] VM disk update failure

2017-05-08 Thread Stefano Bovina
Hi,
1)
What exactly do you need? I shut down the VM, edited the disk, and used
"extend size by" ---> error

2)
The storage domain is up and running, but according to the lvm commands the
VG exists while the LV does not.
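A quick way to double-check this from the host is to parse the pipe-separated output that a vdsm-style `lvm lvs --noheadings --separator '|'` invocation produces. This is an illustrative sketch only: the VG name is the one from the log below, but the LV names and sizes in the sample output are made up.

```python
# Illustrative sketch: parse `lvs --noheadings --separator '|' -o vg_name,lv_name,lv_size`
# output and check whether a given LV is present in a VG. The sample output
# below is made up for demonstration; run the real command on the host.

def parse_lvs(output):
    """Return a set of (vg_name, lv_name) pairs from lvs output."""
    pairs = set()
    for line in output.splitlines():
        fields = [f.strip() for f in line.split('|')]
        if len(fields) >= 2 and fields[0]:
            pairs.add((fields[0], fields[1]))
    return pairs

sample = """\
  2c501858-bf8d-49a5-a42b-bca341b47827|metadata|536870912B
  2c501858-bf8d-49a5-a42b-bca341b47827|ids|134217728B
"""

vg = '2c501858-bf8d-49a5-a42b-bca341b47827'
print((vg, 'metadata') in parse_lvs(sample))  # True
print((vg, 'c5fb9190-d059-4d9b-af23-07618ff660ce') in parse_lvs(sample))  # False
```

If the image LV really is absent from this listing while the VG shows up, that matches the behaviour described above.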

While powering up a machine, I also saw these lines in the vdsm log:


Thread-22::DEBUG::2017-05-07 18:59:54,115::blockSD::596::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000411491 s, 10.0 MB/s\n'; <rc> = 0
Thread-300::DEBUG::2017-05-07
18:59:55,539::libvirtconnection::151::root::(wrapper)
Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
found: Requested metadata element is not present
Thread-1101::DEBUG::2017-05-07
18:59:55,539::libvirtconnection::151::root::(wrapper)
Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
found: Requested metadata element is not present
VM Channels Listener::INFO::2017-05-07
18:59:55,560::guestagent::180::vm.Vm::(_handleAPIVersion)
vmId=`79cd29dc-bb6c-4f1f-ae38-54d6cac05489`::Guest API version changed from
3 to 1
VM Channels Listener::INFO::2017-05-07
18:59:55,865::guestagent::180::vm.Vm::(_handleAPIVersion)
vmId=`d37b37e9-1dd7-4a90-8d8c-755743c714ad`::Guest API version changed from
2 to 1
VM Channels Listener::INFO::2017-05-07
18:59:57,414::guestagent::180::vm.Vm::(_handleAPIVersion)
vmId=`8a034ac2-2646-4fe7-8fda-7c33affa8009`::Guest API version changed from
2 to 1
VM Channels Listener::INFO::2017-05-07
18:59:58,178::guestagent::180::vm.Vm::(_handleAPIVersion)
vmId=`4d1dad66-4ada-445a-b5f6-c695220d6b19`::Guest API version changed from
3 to 1
Thread-272::DEBUG::2017-05-07
18:59:58,187::libvirtconnection::151::root::(wrapper)
Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
found: Requested metadata element is not present
Thread-166::DEBUG::2017-05-07
18:59:58,539::libvirtconnection::151::root::(wrapper)
Unknown libvirterror: ecode: 80 edom: 20 level: 2 message: metadata not
found: Requested metadata element is not present
Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::
234::Storage.PersistentDict::(refresh) read lines (LvMetadataRW)=[]
Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::
252::Storage.PersistentDict::(refresh) Empty metadata
Thread-18::DEBUG::2017-05-07 19:00:33,805::persistentDict::
192::Storage.PersistentDict::(__init__) Created a persistent dict with
VGTagMetadataRW backend
Thread-18::DEBUG::2017-05-07 19:00:33,805::lvm::504::
Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate
operation' got the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::506::
Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate
operation' released the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::514::
Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate
operation' got the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::526::
Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate
operation' released the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::371::
Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got
the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,806::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names =
["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/360060160a62134002818778f949de411|/dev/mapper/360060160a621340040652b7582f5e511|/dev/mapper/360060160a621340064b1034cbbfce511|/dev/mapper/360060160a6213400c4b39e80949de411|/dev/mapper/360060160a6213400cce46e40949de411|/dev/mapper/360060160a6213400e622de69949de411|/dev/mapper/360060160a6213400fa2d31acbbfce511|'\'', '\''r|.*|'\'' ] }
 global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1
 use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } '
--noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
2c501858-bf8d-49a5-a42b-bca341b47827 (cwd None)
Thread-18::DEBUG::2017-05-07 19:00:33,962::lvm::291::Storage.Misc.excCmd::(cmd)
SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-18::DEBUG::2017-05-07 19:00:33,962::lvm::416::
Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation'
released the operation mutex
Thread-18::DEBUG::2017-05-07 19:00:33,963::persistentDict::
234::Storage.PersistentDict::(refresh) read lines
(VGTagMetadataRW)=['CLASS=Data',
'DESCRIPTION=BKP01', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3',
'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5',
'LOGBLKSIZE=512', 'PHYBLKSIZE=512',
'POOL_UUID=715d1ba2-eabe-48db-9aea-c28c30359808',
u'PV0=pv:360060160a6213400c4b39e80949de411,uuid:Epqopf-zKvd-Um2q-lHO9-6jmE-7IyB-qvib1I,pestart:0,pecount:3997,mapoffset:0', u'PV1=pv:360
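For reference, those VGTagMetadataRW entries are plain KEY=VALUE strings, and each PV entry packs its subfields into a comma-separated list. A small illustrative parser (this is not vdsm code; the field names are taken from the excerpt above):

```python
# Illustrative parser for the VG tag metadata entries shown in the log.
# Not vdsm code; it just splits KEY=VALUE entries and the comma-separated
# subfields of a PV entry (pv, uuid, pestart, pecount, mapoffset).

def parse_md(entries):
    """Map metadata entries like 'CLASS=Data' to a dict."""
    md = {}
    for entry in entries:
        key, _, value = entry.partition('=')
        md[key] = value
    return md

def parse_pv(value):
    """Split 'pv:<wwid>,uuid:<uuid>,pestart:0,...' into a dict."""
    return dict(part.split(':', 1) for part in value.split(','))

md = parse_md(['CLASS=Data', 'DESCRIPTION=BKP01', 'LEASERETRIES=3'])
print(md['DESCRIPTION'])   # BKP01

pv = parse_pv('pv:360060160a6213400c4b39e80949de411,'
              'uuid:Epqopf-zKvd-Um2q-lHO9-6jmE-7IyB-qvib1I,'
              'pestart:0,pecount:3997,mapoffset:0')
print(pv['pecount'])       # 3997
```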

Re: [ovirt-users] VM disk update failure

2017-05-07 Thread Kevin Goldblatt
Hi Stefano,

From the log I see that the image cannot be found. This would indicate that
you have an issue on the host connected to the storage provider:

OneImageInfoReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=254,
mMessage=Image path does not exist or cannot be accessed/created:
(u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-
21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)]]

Please provide the following:

1. The exact scenario you performed when updating the disk.

2. Check that the storage domain on which the disk resides is up and
running.

3. If it is not, check whether the storage is visible from the host.

4. If all of the above is working, please submit a bug through Bugzilla
and provide the engine, server, and vdsm logs.
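For steps 2 and 3, a minimal sketch of the path check implied by the mCode=254 error quoted above. The helper name is mine, not part of oVirt; pass it the real /rhev/data-center/... image path on the host.

```python
# Minimal sketch of the accessibility check implied by the error
# "Image path does not exist or cannot be accessed/created". The helper
# name is hypothetical; pass the real /rhev/data-center/... image path.
import os

def check_image_path(path):
    """Classify why an image path might trip the mCode=254 error."""
    if not os.path.exists(path):
        return 'missing'
    if not os.access(path, os.R_OK | os.W_OK):
        return 'not accessible'
    return 'ok'

print(check_image_path('/tmp'))             # ok
print(check_image_path('/no/such/image'))   # missing
```

A 'missing' result here usually points at the storage connection rather than the engine.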


Regards,


Kevin

On Tue, May 2, 2017 at 11:10 PM, Stefano Bovina  wrote:


[ovirt-users] VM disk update failure

2017-05-06 Thread Stefano Bovina
Hi, while trying to update a VM disk, the operation failed (forcing me to
add a new disk instead).

Any advice on how to resolve this error?

Thanks


Installation info:

ovirt-release35-006-1.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
vdsm-4.16.30-0.el7.centos.x86_64
vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
vdsm-python-4.16.30-0.el7.centos.noarch
vdsm-cli-4.16.30-0.el7.centos.noarch
qemu-kvm-ev-2.3.0-29.1.el7.x86_64
qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
libvirt-client-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.3.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.3.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.3.x86_64
libvirt-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.3.x86_64


 engine.log

2017-05-02 09:48:26,505 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand]
(ajp--127.0.0.1-8702-6) [c3d7125] Lock Acquired to object EngineLock
[exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value:
VM_DISK_BOOT
key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
, sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
]
2017-05-02 09:48:26,515 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand]
(ajp--127.0.0.1-8702-6) [c3d7125] Running command: UpdateVmDiskCommand
internal: false. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce
Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-05-02 09:48:26,562 INFO
[org.ovirt.engine.core.bll.ExtendImageSizeCommand]
(ajp--127.0.0.1-8702-6) [ae718d8] Running command: ExtendImageSizeCommand
internal: true. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce
Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-05-02 09:48:26,565 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand]
(ajp--127.0.0.1-8702-6) [ae718d8] START, ExtendImageSizeVDSCommand(
storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit =
false), log id: 52aac743
2017-05-02 09:48:26,604 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand]
(ajp--127.0.0.1-8702-6) [ae718d8] FINISH, ExtendImageSizeVDSCommand, log
id: 52aac743
2017-05-02 09:48:26,650 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(ajp--127.0.0.1-8702-6) [ae718d8] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command cb7958d9-6eae-44a9-891a-7fe088a79df8
2017-05-02 09:48:26,651 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(ajp--127.0.0.1-8702-6) [ae718d8] CommandMultiAsyncTasks::AttachTask:
Attaching task 769a4b18-182b-4048-bb34-a276a55ccbff to command
cb7958d9-6eae-44a9-891a-7fe088a79df8.
2017-05-02 09:48:26,661 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(ajp--127.0.0.1-8702-6) [ae718d8] Adding task
769a4b18-182b-4048-bb34-a276a55ccbff
(Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2017-05-02 09:48:26,673 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6)
[ae718d8] Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1,
Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin@internal.
2017-05-02 09:48:26,674 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(ajp--127.0.0.1-8702-6) [ae718d8] BaseAsyncTask::startPollingTask: Starting
to poll task 769a4b18-182b-4048-bb34-a276a55ccbff.
2017-05-02 09:48:28,430 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-48) [36cd2f7] Polling and updating Async
Tasks: 1 tasks, 1 tasks to poll now
2017-05-02 09:48:28,435 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48)
[36cd2f7] Failed in HSMGetAllTasksStatusesVDS method
2017-05-02 09:48:28,436 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-48) [36cd2f7] SPMAsyncTask::PollTask:
Polling task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command
UpdateVmDisk, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters)
returned status finished, result 'cleanSuccess'.
2017-05-02 09:48:28,446 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-48) [36cd2f7] BaseAsyncTask::logEndTaskFailure:
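The pattern to watch for in these engine.log excerpts is an async task that polls back as finished with result 'cleanSuccess' (to my understanding: the SPM cleaned up after the task, but the operation itself failed), immediately followed by logEndTaskFailure. A hedged sketch of extracting that from a log line; the regex is mine, written against the excerpt above:

```python
# Sketch: pull the SPMAsyncTask poll result out of an engine.log line.
# A finished task whose result is 'cleanSuccess' was cleaned up on the SPM
# but the operation itself did not succeed. The regex is mine, written
# against the log excerpt above.
import re

POLL_RE = re.compile(
    r"Polling task (?P<task>[0-9a-f-]+).*?"
    r"returned status (?P<status>\w+), result '(?P<result>\w+)'",
    re.DOTALL,
)

def poll_result(log_text):
    """Return the task id, status and result from a PollTask log line."""
    m = POLL_RE.search(log_text)
    return m.groupdict() if m else None

line = ("SPMAsyncTask::PollTask: Polling task "
        "769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk) "
        "returned status finished, result 'cleanSuccess'.")
info = poll_result(line)
print(info['result'])                 # cleanSuccess
print(info['result'] != 'success')    # True -> the extend did not succeed
```

Grepping the full engine.log for such lines quickly shows which command the failing task belonged to.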