[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
 And we're up!

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


www.maxistechnology.com



[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Never mind, I see that I have to repeat the process for other drives.
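
Since this is an NFS-backed domain (the mount path in the logs below is 192.168.2.220:_mnt_ovirt-freenas), each volume's on-storage metadata should be a plain-text .meta file sitting beside the image, so the remaining drives that still need the fix can be found with a quick scan. A minimal sketch, assuming the domain path taken from the log excerpts further down; adjust for your own mount:

# List every volume on the domain whose on-storage metadata is still ILLEGAL.
DOMAIN='/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f'
grep -l 'LEGALITY=ILLEGAL' "$DOMAIN"/images/*/*.meta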

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


www.maxistechnology.com



[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Thank you again.

After updating the legality to LEGAL:

[root@mx-ovirt-host2 ~]# vdsm-client Volume getInfo
storagepoolID=25cd9bfc-bab6-11e8-90f3-78acc0b47b4d
storagedomainID=6e627364-5e0c-4250-ac95-7cd914d0175f
imageID=4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
volumeID=f8066c56-6db1-4605-8d7c-0739335d30b8
{
    "status": "OK",
    "lease": {
        "path": "/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease",
        "owners": [],
        "version": null,
        "offset": 0
    },
    "domain": "6e627364-5e0c-4250-ac95-7cd914d0175f",
    "capacity": "1503238553600",
    "voltype": "LEAF",
    "description": "",
    "parent": "a912e388-d80d-4f56-805b-ea5e2f35d741",
    "format": "COW",
    "generation": 0,
    "image": "4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6",
    "uuid": "f8066c56-6db1-4605-8d7c-0739335d30b8",
    "disktype": "DATA",
    "legality": "LEGAL",
    "mtime": "0",
    "apparentsize": "36440899584",
    "truesize": "16916186624",
    "type": "SPARSE",
    "children": [],
    "pool": "",
    "ctime": "1571669201"
}

The result of attempting to start the VM:

Log excerpt:

2020-01-09 06:47:46,575-0600 INFO  (vm/c5d0a42f) [storage.StorageDomain]
Creating symlink from
/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
to
/var/run/vdsm/storage/6e627364-5e0c-4250-ac95-7cd914d0175f/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6
(fileSD:580)
2020-01-09 06:47:46,581-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
prepareImage return={'info': {'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'type': 'file'}, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'imgVolumesInfo': [{'domainID': '6e627364-5e0c-4250-ac95-7cd914d0175f',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741',
'volumeID': u'a912e388-d80d-4f56-805b-ea5e2f35d741', 'leasePath':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/a912e388-d80d-4f56-805b-ea5e2f35d741.lease',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}, {'domainID':
'6e627364-5e0c-4250-ac95-7cd914d0175f', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
'volumeID': u'f8066c56-6db1-4605-8d7c-0739335d30b8', 'leasePath':
u'/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8.lease',
'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'}]} from=internal,
task_id=865d2ff4-4e63-44dc-b8f8-9d93cad9892f (api:52)
2020-01-09 06:47:46,582-0600 INFO  (vm/c5d0a42f) [vds] prepared volume
path: 
/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8
(clientIF:497)
2020-01-09 06:47:46,583-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
prepareImage(sdUUID='ec6ccb14-03c2-49cc-9cc0-b1a87d582ed7',
spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
imgUUID='60077050-6f99-41db-b280-446f018b104b',
leafUUID='a67eb40c-e0a1-49cc-9179-bebb263d6e9c', allowIllegal=False)
from=internal, task_id=08830292-0f75-4c5b-a411-695894c66475 (api:46)
2020-01-09 06:47:46,632-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
prepareImage error=Cannot prepare illegal volume:
(u'a67eb40c-e0a1-49cc-9179-bebb263d6e9c',) from=internal,
task_id=08830292-0f75-4c5b-a411-695894c66475 (api:50)
2020-01-09 06:47:46,632-0600 ERROR (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='08830292-0f75-4c5b-a411-695894c66475') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
in prepareImage
raise se.prepareIllegalVolumeError(volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume:
(u'a67eb40c-e0a1-49cc-9179-bebb263d6e9c',)
2020-01-09 06:47:46,633-0600 INFO  (vm/c5d0a42f) [storage.TaskManager.Task]
(Task='08830292-0f75-4c5b-a411-695894c66475') aborting: Task is aborted:
"Cannot prepare illegal volume: 

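Note that the prepareImage that fails in this excerpt is for a different leaf volume (a67eb40c-e0a1-49cc-9179-bebb263d6e9c), under a different image and storage domain than the volume fixed above, which is why the process has to be repeated for the other drives (see the follow-up reply earlier on this page). Its legality can be checked with the same getInfo call, substituting the UUIDs from the START line above; a sketch:

vdsm-client Volume getInfo \
    storagepoolID=25cd9bfc-bab6-11e8-90f3-78acc0b47b4d \
    storagedomainID=ec6ccb14-03c2-49cc-9cc0-b1a87d582ed7 \
    imageID=60077050-6f99-41db-b280-446f018b104b \
    volumeID=a67eb40c-e0a1-49cc-9179-bebb263d6e9c
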
[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Additional info:

The failure appears to be from a simple legality check:

    def isLegal(self):
        try:
            legality = self.getMetaParam(sc.LEGALITY)
            return legality != sc.ILLEGAL_VOL
        except se.MetaDataKeyNotFoundError:
            return True

Looking at the metadata above, the legality is 'LEGAL'.
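
getMetaParam(sc.LEGALITY) reads the volume's metadata on the storage domain itself, which is the on-storage metadata Benny asks about further down in this thread. On an NFS domain like this one that should be the plain-text .meta file next to the volume, so it can be inspected directly (and, with the VM down, corrected there). A minimal sketch, assuming the image path from the getInfo output above:

# Check what the storage-side metadata says for the leaf volume.
IMG='/rhev/data-center/mnt/192.168.2.220:_mnt_ovirt-freenas/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6'
grep LEGALITY "$IMG"/f8066c56-6db1-4605-8d7c-0739335d30b8.meta
# If a volume still reads LEGALITY=ILLEGAL, flip it after backing the file up:
# sed -i.bak 's/^LEGALITY=ILLEGAL/LEGALITY=LEGAL/' "$IMG"/<volUUID>.meta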

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


www.maxistechnology.com



[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread David Johnson
Thank you for the quick response.

Where do I find that?

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


www.maxistechnology.com



[ovirt-users] Re: After NAS crash, one VM will not start up, Cannot prepare illegal volume

2020-01-09 Thread Benny Zlotnik
Did you change the volume metadata to LEGAL on the storage as well?
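
This is the key distinction: the "update images set imagestatus=1 where imagestatus=4;" statement quoted below only changes the engine database, while vdsm validates its own volume metadata stored on the storage domain, so both have to agree before the VM will start. To see what the engine-DB side currently reports, a minimal sketch, assuming the default "engine" database on the engine host:

# imagestatus 4 = illegal, 1 = OK (the values used by the UPDATE quoted below)
sudo -u postgres psql engine -c "select image_guid, imagestatus from images where imagestatus = 4;"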


On Thu, Jan 9, 2020 at 2:19 PM David Johnson 
wrote:

> We had a drive in our NAS fail, but afterwards one of our VMs will not
> start.
>
> The boot drive on the VM is (as near as I can tell) the only drive
> affected.
>
> I confirmed that the disk images (active and snapshot) are both valid with
> qemu.
>
> I followed the instructions at
> https://www.canarytek.com/2017/07/02/Recover_oVirt_Illegal_Snapshots.html to
> identify the snapshot images that were marked "invalid" and marked them as
> valid.
>
> update images set imagestatus=1 where imagestatus=4;
>
>
>
> Log excerpt from attempt to start VM:
> 2020-01-09 02:18:44,908-0600 INFO  (vm/c5d0a42f) [vdsm.api] START
> prepareImage(sdUUID='6e627364-5e0c-4250-ac95-7cd914d0175f',
> spUUID='25cd9bfc-bab6-11e8-90f3-78acc0b47b4d',
> imgUUID='4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6',
> leafUUID='f8066c56-6db1-4605-8d7c-0739335d30b8', allowIllegal=False)
> from=internal, task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:46)
> 2020-01-09 02:18:44,931-0600 INFO  (vm/c5d0a42f) [vdsm.api] FINISH
> prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) from=internal,
> task_id=26053225-6569-4b73-abdd-7d6c7e15d1e9 (api:50)
> 2020-01-09 02:18:44,932-0600 ERROR (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3187,
> in prepareImage
> raise se.prepareIllegalVolumeError(volUUID)
> prepareIllegalVolumeError: Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)
> 2020-01-09 02:18:44,932-0600 INFO  (vm/c5d0a42f)
> [storage.TaskManager.Task] (Task='26053225-6569-4b73-abdd-7d6c7e15d1e9')
> aborting: Task is aborted: "Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',)" - code 227 (task:1181)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [storage.Dispatcher]
> FINISH prepareImage error=Cannot prepare illegal volume:
> (u'f8066c56-6db1-4605-8d7c-0739335d30b8',) (dispatcher:82)
> 2020-01-09 02:18:44,933-0600 ERROR (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') The vm start process failed
> (vm:949)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 878, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2798, in
> _run
> self._devices = self._make_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2639, in
> _make_devices
> disk_objs = self._perform_host_local_adjustment()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2712, in
> _perform_host_local_adjustment
> self._preparePathsForDrives(disk_params)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1023, in
> _preparePathsForDrives
> drive['path'] = self.cif.prepareVolumePath(drive, self.id)
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 417, in
> prepareVolumePath
> raise vm.VolumeError(drive)
> VolumeError: Bad volume specification {'address': {'bus': '0',
> 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',
> 'poolID': '25cd9bfc-bab6-11e8-90f3-78acc0b47b4d', 'device': 'disk', 'path':
> '/rhev/data-center/25cd9bfc-bab6-11e8-90f3-78acc0b47b4d/6e627364-5e0c-4250-ac95-7cd914d0175f/images/4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6/f8066c56-6db1-4605-8d7c-0739335d30b8',
> 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID':
> 'f8066c56-6db1-4605-8d7c-0739335d30b8', 'diskType': 'file', 'alias':
> 'ua-4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'discard': False}
> 2020-01-09 02:18:44,934-0600 INFO  (vm/c5d0a42f) [virt.vm]
> (vmId='c5d0a42f-3b1e-43ee-a567-7844654011f5') Changed state to Down: Bad
> volume specification {'address': {'bus': '0', 'controller': '0', 'type':
> 'drive', 'target': '0', 'unit': '0'}, 'serial':
> '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'index': 0, 'iface': 'scsi',
> 'apparentsize': '36440899584', 'specParams': {}, 'cache': 'writeback',
> 'imageID': '4081ce8f-1ce1-4ee1-aa43-69af2dfc5ab6', 'truesize':
> '16916186624', 'type': 'disk', 'domainID':
> '6e627364-5e0c-4250-ac95-7cd914d0175f', 'reqsize': '0', 'format': 'cow',