Nir,

  thank you! I successfully recovered the corrupt metadata blocks on all 
affected storage domains!

The affected storage domains were created between January and March this year 
and were accessed by 40 hosts in parallel with changing SPMs… unfortunately, 
the matching vdsm logs are no longer available. I can provide a vdsm.log backup 
from the SPM showing the first occurrence (plus the 15 minutes prior to the 
error).

All the best,
Oliver

From: Nir Soffer <nsof...@redhat.com>
Sent: Wednesday, June 20, 2018 10:01
To: Bruckner, Simone <simone.bruck...@fabasoft.com>
Cc: users@ovirt.org
Subject: [ovirt-users] Re: Failed to update VMs/Templates OVF data, cannot 
change SPM

On Wed, Jun 20, 2018 at 10:33 AM Bruckner, Simone 
<simone.bruck...@fabasoft.com> wrote:
Hi Nir,

  I identified the reason for the failing OVF updates on the initial VG – 
metadata was affected by blkdiscard tests in scope of 
https://bugzilla.redhat.com/show_bug.cgi?id=1562369

However, the OVF updates are failing on other installations as well (on 2 out 
of 40 storage domains). Here is the output of your commands:

# lvs -o vg_name,lv_name,tags | grep 3ad1987a-8b7d-426d-9d51-4a78cb0a888f
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f 212d644c-0155-4999-9df9-72bacfc7f141 
IU_0ebefe5e-9053-4bf1-bdfd-fdb26579c179,MD_4,PU_00000000-0000-0000-0000-000000000000
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f 94f519de-bc19-4557-82c4-86bbcfc5dd2f 
IU_60d9eec7-951f-4594-ae99-7d31318ee3a9,MD_5,PU_00000000-0000-0000-0000-000000000000
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f ids
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f inbox
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f leases
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f master
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f metadata
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f outbox
  3ad1987a-8b7d-426d-9d51-4a78cb0a888f xleases

Looks good


# for i in 4 5; do
  dd if=/dev/3ad1987a-8b7d-426d-9d51-4a78cb0a888f/metadata bs=512 count=1 skip=$i of=metadata.$i
done
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00121297 s, 422 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000735026 s, 697 kB/s

# file metadata.*
metadata.4: data
metadata.5: ASCII text

# cat metadata.5
DOMAIN=3ad1987a-8b7d-426d-9d51-4a78cb0a888f
CTIME=1520597691
FORMAT=RAW
DISKTYPE=OVFS
LEGALITY=LEGAL
SIZE=262144
VOLTYPE=LEAF
DESCRIPTION={"Updated":true,"Size":4669440,"Last Updated":"Fri Jun 08 09:51:18 
CEST 2018","Storage 
Domains":[{"uuid":"3ad1987a-8b7d-426d-9d51-4a78cb0a888f"}],"Disk 
Description":"OVF_STORE"}
IMAGE=60d9eec7-951f-4594-ae99-7d31318ee3a9
PUUID=00000000-0000-0000-0000-000000000000
MTIME=0
POOL_UUID=
TYPE=PREALLOCATED
GEN=0
EOF

Looks good

# od -c metadata.4
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0001000

There is no metadata in this volume. This will fail every time we try to store 
something in this volume.
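
To check whether other metadata slots are affected in the same way, you can scan the metadata LV slot by slot. The sketch below is illustrative only: it runs against a scratch file it creates itself (with slot 4 zeroed and slot 5 populated, mirroring the situation above); for real use, point it at /dev/<vgname>/metadata and drop the setup lines.

```shell
# Build a scratch "metadata volume": 8 zeroed 512-byte slots, then put a
# metadata fragment into slot 5 so slot 4 stays empty, as in this thread.
meta=$(mktemp)
dd if=/dev/zero of="$meta" bs=512 count=8 2>/dev/null
printf 'DOMAIN=3ad1987a-8b7d-426d-9d51-4a78cb0a888f\n' |
    dd of="$meta" bs=512 seek=5 count=1 conv=notrunc 2>/dev/null

# Report which slots contain any non-zero bytes.
for i in 4 5; do
    if dd if="$meta" bs=512 skip=$i count=1 2>/dev/null | tr -d '\0' | grep -q .; then
        echo "slot $i: has data"
    else
        echo "slot $i: all zeros"
    fi
done
```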

I wonder how the metadata was deleted; maybe there was an error when the 
volume was created?

Can you file a bug for this, and attach vdsm.log from the point this storage
domain was created until the error started?

An important date seems to be Fri Jun 08 09:51:18 CEST 2018 - this is when the 
volume metadata was last updated.

For recovering this volume, you can write valid metadata to the same offset
using dd.

Basically it is the same as the metadata of the other OVF store (metadata.5); 
the only changes needed are:

DESCRIPTION={"Updated":false,"Size":0,"Last Updated":"Fri Jun 08 09:51:18 CEST 
2018","Storage Domains":[{"uuid":"3ad1987a-8b7d-426d-9d51-4a78cb0a888f"}],"Disk 
Description":"OVF_STORE"}

IMAGE=0ebefe5e-9053-4bf1-bdfd-fdb26579c179

I took the image uuid from the IU_ tag in lvs output.

Make sure the size of the metadata file is less than 512 bytes, otherwise
you will overwrite and corrupt the metadata of the next volume.

To write the metadata you should use:
# makes your metadata file aligned to sector size
truncate -s 512 fixed-metadata.txt

# write exactly one sector
dd if=fixed-metadata.txt of=/dev/vgname/metadata  seek=4 count=1 bs=512 
oflag=direct conv=fsync
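
A combined, hedged sketch of this procedure with a size guard (writing more than one sector would spill into the next volume's slot) could look like the following. It is demonstrated on scratch files so it can be run safely anywhere; for real use, replace $dev with /dev/<vgname>/metadata, $src with your fixed-metadata.txt, and restore oflag=direct conv=fsync on the dd call:

```shell
dev=$(mktemp)   # stand-in for /dev/<vgname>/metadata
src=$(mktemp)   # stand-in for fixed-metadata.txt
slot=4          # the N from the volume's MD_N tag

printf 'DOMAIN=3ad1987a-8b7d-426d-9d51-4a78cb0a888f\nVOLTYPE=LEAF\n' > "$src"

# Guard: a file larger than one sector would corrupt slot N+1.
size=$(stat -c %s "$src")
if [ "$size" -gt 512 ]; then
    echo "refusing to write: $src is $size bytes (> 512)" >&2
    exit 1
fi

truncate -s 512 "$src"    # pad to exactly one 512-byte sector
dd if="$src" of="$dev" bs=512 seek="$slot" count=1 conv=notrunc 2>/dev/null
echo "wrote slot $slot"
```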

After this change, the engine should be able to use this volume again.

vdsm.log from manual OVF update test:

2018-06-20 09:28:27,840+0200 INFO  (jsonrpc/7) [vdsm.api] START 
setVolumeDescription(sdUUID=u'3ad1987a-8b7d-426d-9d51-4a78cb0a888f', 
spUUID=u'5849b030-626e-47cb-ad90-3ce782d831b3', 
imgUUID=u'0ebefe5e-9053-4bf1-bdfd-fdb26579c179', 
volUUID=u'212d644c-0155-4999-9df9-72bacfc7f141', 
description=u'{"Updated":false,"Last Updated":"Fri Jun 08 09:51:18 CEST 
2018","Storage Domains":[{"uuid":"3ad1987a-8b7d-426d-9d51-4a78cb0a888f"}],"Disk 
Description":"OVF_STORE"}', options=None) from=::ffff:<IP>,51790, 
flow_id=7e4edb74, task_id=5f1fda67-a073-419a-bba5-9bf680c0e5d5 (api:46)
2018-06-20 09:28:28,072+0200 WARN  (jsonrpc/7) [storage.ResourceManager] 
Resource factory failed to create resource 
'01_img_3ad1987a-8b7d-426d-9d51-4a78cb0a888f.0ebefe5e-9053-4bf1-bdfd-fdb26579c179'.
 Canceling request. (resourceManager:543)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
539, in registerResource
    obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 193, in createResource
    lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 122, in __getResourceCandidatesList
    imgUUID=resourceName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 206, in 
getChain
    if len(uuidlist) == 1 and srcVol.isShared():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1434, in 
isShared
    return self._manifest.isShared()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 141, in 
isShared
    return self.getVolType() == sc.type2name(sc.SHARED_VOL)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 134, in 
getVolType
    self.voltype = self.getMetaParam(sc.VOLTYPE)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 118, in 
getMetaParam
    meta = self.getMetadata()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 
112, in getMetadata
    md = VolumeMetadata.from_lines(lines)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 
103, in from_lines
    "Missing metadata key: %s: found: %s" % (e, md))
MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata 
key: 'DOMAIN': found: {}",)
2018-06-20 09:28:28,072+0200 WARN  (jsonrpc/7) 
[storage.ResourceManager.Request] 
(ResName='01_img_3ad1987a-8b7d-426d-9d51-4a78cb0a888f.0ebefe5e-9053-4bf1-bdfd-fdb26579c179',
 ReqID='10c95223-f349-4ac3-ab2f-7a5f3d1c7749') Tried to cancel a processed 
request (resourceManager:187)
2018-06-20 09:28:28,073+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
setVolumeDescription error=Could not acquire resource. Probably resource 
factory threw an exception.: () from=::ffff:<IP>,51790, flow_id=7e4edb74, 
task_id=5f1fda67-a073-419a-bba5-9bf680c0e5d5 (api:50)
2018-06-20 09:28:28,073+0200 ERROR (jsonrpc/7) [storage.TaskManager.Task] 
(Task='5f1fda67-a073-419a-bba5-9bf680c0e5d5') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "<string>", line 2, in setVolumeDescription
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1294, in 
setVolumeDescription
    pool.setVolumeDescription(sdUUID, imgUUID, volUUID, description)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, 
in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 2011, in 
setVolumeDescription
    with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
1025, in acquireResource
    return _manager.acquireResource(namespace, name, lockType, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
475, in acquireResource
    raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory 
threw an exception.: ()
2018-06-20 09:28:28,073+0200 INFO  (jsonrpc/7) [storage.TaskManager.Task] 
(Task='5f1fda67-a073-419a-bba5-9bf680c0e5d5') aborting: Task is aborted: 
u'Could not acquire resource. Probably resource factory threw an exception.: 
()' - code 100 (task:1181)
2018-06-20 09:28:28,073+0200 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH 
setVolumeDescription error=Could not acquire resource. Probably resource 
factory threw an exception.: () (dispatcher:82)

I worked around the SPM problem by running engine-setup (neither the vdsClient 
Host/StoragePool *Task* commands, async_task_entries, nor taskcleaner.sh showed 
any tasks).

All the best,
Oliver

From: Nir Soffer <nsof...@redhat.com>
Sent: Tuesday, June 19, 2018 23:39
To: Elad Ben Aharon <ebena...@redhat.com>
Cc: users@ovirt.org; Albl, Oliver <oliver.a...@fabasoft.com>
Subject: [ovirt-users] Re: Failed to update VMs/Templates OVF data, cannot 
change SPM

On Wed, Jun 20, 2018 at 12:11 AM Elad Ben Aharon 
<ebena...@redhat.com> wrote:
Can you please attach engine.log and vdsm.log?

On Tue, Jun 19, 2018 at 6:58 PM, Albl, Oliver 
<oliver.a...@fabasoft.com> wrote:
Hi all,

  does anybody have an idea how to address this? There is also a side effect 
that I cannot change SPM (“Error while executing action: Cannot force select 
SPM. The Storage Pool has running tasks.“).

All the best,
Oliver

From: Albl, Oliver
Sent: Wednesday, June 13, 2018 12:32
To: users@ovirt.org
Subject: Failed to update VMs/Templates OVF data

Hi,

  I have a FC storage domain reporting the following messages every hour:

VDSM command SetVolumeDescriptionVDS failed: Could not acquire resource. 
Probably resource factory threw an exception.: ()
Failed to update OVF disks cb04b55c-10fb-46fe-b9de-3c133a94e6a5, OVF data isn't 
updated on those OVF stores (Data Center VMTEST, Storage Domain VMHOST_LUN_62).
Failed to update VMs/Templates OVF data for Storage Domain VMHOST_LUN_62 in 
Data Center VMTEST.

Trying to manually update OVF results in “Error while executing action 
UpdateOvfStoreForStorageDomain: Internal Engine Error”

I am running oVirt 4.2.3.5-1.el7.centos on CentOS 7.5 
(3.10.0-862.3.2.el7.x86_64) with vdsm-4.20.27.1-1.el7.centos.x86_64.

Engine log:

2018-06-13 12:15:35,649+02 WARN  
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-12) 
[092d8f27-c0a4-4d78-a8cb-f4738aff71e6] The message key 
'UpdateOvfStoreForStorageDomain' is missing from 'bundles/ExecutionMessages'
2018-06-13 12:15:35,655+02 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-12) [092d8f27-c0a4-4d78-a8cb-f4738aff71e6] Lock Acquired to 
object 
'EngineLock:{exclusiveLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE, 
373efd46-8aea-4d0e-96cc-1da0debf72d0=STORAGE]', sharedLocks=''}'
2018-06-13 12:15:35,660+02 INFO  
[org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand]
 (default task-12) [092d8f27-c0a4-4d78-a8cb-f4738aff71e6] Running command: 
UpdateOvfStoreForStorageDomainCommand internal: false. Entities affected :  ID: 
373efd46-8aea-4d0e-96cc-1da0debf72d0 Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2018-06-13 12:15:35,670+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Running command: 
ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :  ID: 
3edb5295-3797-4cd0-9b43-f46ec1ee7b14 Type: StoragePool
2018-06-13 12:15:35,674+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Attempting to update VM OVFs in Data Center 
'VMTEST'
2018-06-13 12:15:35,678+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Successfully updated VM OVFs in Data Center 
'VMTEST'
2018-06-13 12:15:35,678+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Attempting to update template OVFs in Data Center 
'VMTEST'
2018-06-13 12:15:35,678+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Successfully updated templates OVFs in Data 
Center 'VMTEST'
2018-06-13 12:15:35,678+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Attempting to remove unneeded template/vm OVFs in 
Data Center 'VMTEST'
2018-06-13 12:15:35,680+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Successfully removed unneeded template/vm OVFs in 
Data Center 'VMTEST'
2018-06-13 12:15:35,684+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand]
 (default task-12) [4fd5b59a] Lock freed to object 
'EngineLock:{exclusiveLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE, 
373efd46-8aea-4d0e-96cc-1da0debf72d0=STORAGE]', sharedLocks=''}'
2018-06-13 12:15:35,704+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
 (default task-12) [24485c23] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[]', 
sharedLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE]'}'
2018-06-13 12:15:35,714+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
 (default task-12) [24485c23] Running command: 
ProcessOvfUpdateForStorageDomainCommand internal: true. Entities affected :  
ID: 373efd46-8aea-4d0e-96cc-1da0debf72d0 Type: StorageAction group 
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] START, SetVolumeDescriptionVDSCommand( 
SetVolumeDescriptionVDSCommandParameters:{storagePoolId='3edb5295-3797-4cd0-9b43-f46ec1ee7b14',
 ignoreFailoverLimit='false', 
storageDomainId='373efd46-8aea-4d0e-96cc-1da0debf72d0', 
imageGroupId='cb04b55c-10fb-46fe-b9de-3c133a94e6a5', 
imageId='a1e7554d-530c-4c07-a4b5-459a1c509e39'}), log id: 747d674f
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] -- executeIrsBrokerCommand: calling 
'setVolumeDescription', parameters:
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] ++ spUUID=3edb5295-3797-4cd0-9b43-f46ec1ee7b14
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] ++ sdUUID=373efd46-8aea-4d0e-96cc-1da0debf72d0
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] ++ 
imageGroupGUID=cb04b55c-10fb-46fe-b9de-3c133a94e6a5
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] ++ volUUID=a1e7554d-530c-4c07-a4b5-459a1c509e39
2018-06-13 12:15:35,724+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] ++ description={"Updated":false,"Last 
Updated":"Thu May 24 12:02:22 CEST 2018","Storage 
Domains":[{"uuid":"373efd46-8aea-4d0e-96cc-1da0debf72d0"}],"Disk 
Description":"OVF_STORE"}
2018-06-13 12:15:35,827+02 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] Failed in 'SetVolumeDescriptionVDS' method
2018-06-13 12:15:35,831+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-12) [24485c23] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command 
SetVolumeDescriptionVDS failed: Could not acquire resource. Probably resource 
factory threw an exception.: ()
2018-06-13 12:15:35,831+02 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] Command 'SetVolumeDescriptionVDSCommand( 
SetVolumeDescriptionVDSCommandParameters:{storagePoolId='3edb5295-3797-4cd0-9b43-f46ec1ee7b14',
 ignoreFailoverLimit='false', 
storageDomainId='373efd46-8aea-4d0e-96cc-1da0debf72d0', 
imageGroupId='cb04b55c-10fb-46fe-b9de-3c133a94e6a5', 
imageId='a1e7554d-530c-4c07-a4b5-459a1c509e39'})' execution failed: 
IRSGenericException: IRSErrorException: Failed to SetVolumeDescriptionVDS, 
error = Could not acquire resource. Probably resource factory threw an 
exception.: (), code = 855
2018-06-13 12:15:35,831+02 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] 
(default task-12) [24485c23] FINISH, SetVolumeDescriptionVDSCommand, log id: 
747d674f
2018-06-13 12:15:35,831+02 WARN  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
 (default task-12) [24485c23] failed to update domain 
'373efd46-8aea-4d0e-96cc-1da0debf72d0' ovf store disk 
'cb04b55c-10fb-46fe-b9de-3c133a94e6a5'
2018-06-13 12:15:35,834+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-12) [24485c23] EVENT_ID: UPDATE_FOR_OVF_STORES_FAILED(1,016), Failed to 
update OVF disks cb04b55c-10fb-46fe-b9de-3c133a94e6a5, OVF data isn't updated 
on those OVF stores (Data Center VMTEST, Storage Domain HOST_LUN_62).
2018-06-13 12:15:35,843+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-12) [24485c23] EVENT_ID: UPDATE_OVF_FOR_STORAGE_DOMAIN_FAILED(190), Failed 
to update VMs/Templates OVF data for Storage Domain VMHOST_LUN_62 in Data 
Center VMTEST.
2018-06-13 12:15:35,846+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
 (default task-12) [24485c23] Lock freed to object 
'EngineLock:{exclusiveLocks='[]', 
sharedLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE]'}'
2018-06-13 12:15:36,031+02 INFO  
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-60) [24485c23] Command 
'ProcessOvfUpdateForStorageDomain' id: 'a887910e-39a1-4120-a29b-76741ade8bf6' 
child commands '[]' executions were completed, status 'SUCCEEDED'
2018-06-13 12:15:37,052+02 INFO  
[org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand]
 (EE-ManagedThreadFactory-engineScheduled-Thread-31) [24485c23] Ending command 
'org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand'
 successfully.
2018-06-13 12:15:37,059+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-31) [24485c23] EVENT_ID: 
USER_UPDATE_OVF_STORE(199), OVF_STORE for domain VMHOST_LUN_62 was updated by 
<user>@<domain>@<DOMAIN>-authz.

vdsm.log

2018-06-13 12:15:35,727+0200 INFO  (jsonrpc/7) [vdsm.api] START 
setVolumeDescription(sdUUID=u'373efd46-8aea-4d0e-96cc-1da0debf72d0', 
spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', 
imgUUID=u'cb04b55c-10fb-46fe-b9de-3c133a94e6a5', 
volUUID=u'a1e7554d-530c-4c07-a4b5-459a1c509e39', 
description=u'{"Updated":false,"Last Updated":"Thu May 24 12:02:22 CEST 
2018","Storage Domains":[{"uuid":"373efd46-8aea-4d0e-96cc-1da0debf72d0"}],"Disk 
Description":"OVF_STORE"}', options=None) from=::ffff:<IP>,54686, 
flow_id=24485c23, task_id=70941873-0296-4ed0-94c8-b51290cd6963 (api:46)
2018-06-13 12:15:35,825+0200 WARN  (jsonrpc/7) [storage.ResourceManager] 
Resource factory failed to create resource 
'01_img_373efd46-8aea-4d0e-96cc-1da0debf72d0.cb04b55c-10fb-46fe-b9de-3c133a94e6a5'.
 Canceling request. (resourceManager:543)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
539, in registerResource
    obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 193, in createResource
    lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", 
line 122, in __getResourceCandidatesList
    imgUUID=resourceName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 206, in 
getChain
    if len(uuidlist) == 1 and srcVol.isShared():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1434, in 
isShared
    return self._manifest.isShared()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 141, in 
isShared
    return self.getVolType() == sc.type2name(sc.SHARED_VOL)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 134, in 
getVolType
    self.voltype = self.getMetaParam(sc.VOLTYPE)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 118, in 
getMetaParam
    meta = self.getMetadata()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 
112, in getMetadata
    md = VolumeMetadata.from_lines(lines)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 
103, in from_lines
    "Missing metadata key: %s: found: %s" % (e, md))
MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata 
key: 'DOMAIN': found: {}",)

This means the volume metadata is missing a required key: "DOMAIN".

Can you share the output of:

lvs -o vg_name,lv_name,tags | grep a1e7554d-530c-4c07-a4b5-459a1c509e39

There should be a tag named MD_N, where N is an integer. Once you find it,
we would like to see the metadata at offset N * 512 in the metadata volume.

You can extract it like this:

dd if=/dev/vgname/metadata bs=512 skip=N count=1
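
For reference, finding N and the corresponding byte offset can be scripted. This is only a sketch: the tag string below is hard-coded from the lvs output earlier in the thread instead of being queried live (on a real host, `lvs --noheadings -o tags <vg>/<lv>` would supply it):

```shell
# Tags of one OVF_STORE volume, as printed by lvs -o tags.
tags="IU_0ebefe5e-9053-4bf1-bdfd-fdb26579c179,MD_4,PU_00000000-0000-0000-0000-000000000000"

# Pull N out of the MD_N tag and compute the slot's byte offset.
slot=$(printf '%s\n' "$tags" | tr ',' '\n' | sed -n 's/^MD_//p')
offset=$((slot * 512))
echo "metadata slot $slot at byte offset $offset"

# The slot can then be dumped with:
#   dd if=/dev/<vgname>/metadata bs=512 skip="$slot" count=1 of=metadata."$slot"
```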

Nir

2018-06-13 12:15:35,825+0200 WARN  (jsonrpc/7) 
[storage.ResourceManager.Request] 
(ResName='01_img_373efd46-8aea-4d0e-96cc-1da0debf72d0.cb04b55c-10fb-46fe-b9de-3c133a94e6a5',
 ReqID='dc9ebbc2-5cfa-447d-b2be-40ed2cf81992') Tried to cancel a processed 
request (resourceManager:187)
2018-06-13 12:15:35,825+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH 
setVolumeDescription error=Could not acquire resource. Probably resource 
factory threw an exception.: () from=::ffff:<IP>,54686, flow_id=24485c23, 
task_id=70941873-0296-4ed0-94c8-b51290cd6963 (api:50)
2018-06-13 12:15:35,825+0200 ERROR (jsonrpc/7) [storage.TaskManager.Task] 
(Task='70941873-0296-4ed0-94c8-b51290cd6963') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "<string>", line 2, in setVolumeDescription
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1294, in 
setVolumeDescription
    pool.setVolumeDescription(sdUUID, imgUUID, volUUID, description)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, 
in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 2011, in 
setVolumeDescription
    with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
1025, in acquireResource
    return _manager.acquireResource(namespace, name, lockType, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 
475, in acquireResource
    raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory 
threw an exception.: ()
2018-06-13 12:15:35,826+0200 INFO  (jsonrpc/7) [storage.TaskManager.Task] 
(Task='70941873-0296-4ed0-94c8-b51290cd6963') aborting: Task is aborted: 
u'Could not acquire resource. Probably resource factory threw an exception.: 
()' - code 100 (task:1181)
2018-06-13 12:15:35,826+0200 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH 
setVolumeDescription error=Could not acquire resource. Probably resource 
factory threw an exception.: () (dispatcher:82)
2018-06-13 12:15:35,826+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Volume.setDescription failed (error 855) in 0.10 seconds (__init__:573)
2018-06-13 12:15:38,953+0200 INFO  (jsonrpc/5) [api.host] START getAllVmStats() 
from=::ffff:<IP>,54666 (api:46)
2018-06-13 12:15:38,956+0200 INFO  (jsonrpc/5) [api.host] FINISH getAllVmStats 
return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} 
from=::ffff:<IP>,54666 (api:52)
2018-06-13 12:15:38,957+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:573)
2018-06-13 12:15:39,406+0200 INFO  (jsonrpc/4) [vdsm.api] START 
getSpmStatus(spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', options=None) 
from=::ffff:<IP>,54666, task_id=eabfe183-dfb0-4982-b7ea-beacca74aeef (api:46)
2018-06-13 12:15:39,410+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH getSpmStatus 
return={'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 20L}} 
from=::ffff:<IP>,54666, task_id=eabfe183-dfb0-4982-b7ea-beacca74aeef (api:52)
2018-06-13 12:15:39,410+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573)
2018-06-13 12:15:39,416+0200 INFO  (jsonrpc/1) [vdsm.api] START 
getStoragePoolInfo(spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', 
options=None) from=::ffff:<IP>,54686, 
task_id=b2003a6f-dd74-47ab-b4f0-95ffb54dc51d (api:46)
2018-06-13 12:15:39,420+0200 INFO  (jsonrpc/1) [vdsm.api] FINISH 
getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': '', 
'pool_status': 'connected', 'lver': 20L, 'domains': 
u'373efd46-8aea-4d0e-96cc-1da0debf72d0:Active,9ed4b0d2-c555-4b27-8f88-75c47a99ad98:Attached,efd78d36-c395-4e9a-a46e-6059fa53756d:Active,3675435e-851e-4236-81da-fce1cc027238:Active',
 'master_uuid': 'efd78d36-c395-4e9a-a46e-6059fa53756d', 'version': '4', 
'spm_id': 2, 'type': 'FCP', 'master_ver': 12}, 'dominfo': 
{u'373efd46-8aea-4d0e-96cc-1da0debf72d0': {'status': u'Active', 'diskfree': 
'8722541707264', 'isoprefix': '', 'alerts': [], 'disktotal': '8795690369024', 
'version': 4}, u'9ed4b0d2-c555-4b27-8f88-75c47a99ad98': {'status': 
u'Attached', 'isoprefix': '', 'alerts': []}, 
u'efd78d36-c395-4e9a-a46e-6059fa53756d': {'status': u'Active', 'diskfree': 
'8718783610880', 'isoprefix': '', 'alerts': [], 'disktotal': '8795690369024', 
'version': 4}, u'3675435e-851e-4236-81da-fce1cc027238': {'status': u'Active', 
'diskfree': '8713280684032', 'isoprefix': '', 'alerts': [], 'disktotal': 
'8795690369024', 'version': 4}}} from=::ffff:<IP>,54686, 
task_id=b2003a6f-dd74-47ab-b4f0-95ffb54dc51d (api:52)
2018-06-13 12:15:39,421+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getInfo succeeded in 0.01 seconds (__init__:573)

All the best,
Oliver


_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRBMUKOZ7VTP52I2SRPMBMCW375CXS5A/
