[ovirt-users] Bad volume specification

2020-09-15 Thread Facundo Garat
Hi all,
 I'm having some issues with one VM. The VM won't start and it's showing
problems with the virtual disks, so I started the VM without any disks and
tried hot-adding the disk, but that fails too.

 The servers are connected through FC; all the other VMs are working fine.

  Any ideas?

Thanks!!

PS: The engine.log is showing this:
2020-09-15 20:10:37,926-03 INFO
 [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
2020-09-15 20:10:38,082-03 INFO
 [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
HotPlugDiskToVmCommand internal: false. Entities affected :  ID:
71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
CONFIGURE_VM_STORAGE with role type USER
2020-09-15 20:10:38,117-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
f57ee9e
2020-09-15 20:10:38,125-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: 
  [XML device payload; the element tags were stripped by the archive. The
  recoverable fields are the ovirt-vm metadata namespace
  (http://ovirt.org/vm/1.0) and the identifiers: storage pool
  0001-0001-0001-0001-0311, volume bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f,
  image f5bd2e15-a1ab-4724-883a-988b4dc7985b, storage domain
  55327311-e47c-46b5-b168-258c5924757b.]
2020-09-15 20:10:38,289-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
failed: General Exception: ("Bad volume specification {'device': 'disk',
'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'0001-0001-0001-0001-0311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
2020-09-15 20:10:38,295-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
value 'StatusOnlyReturn [status=Status [code=100, message=General
Exception: ("Bad volume specification {'device': 'disk', 'type': 'disk',
'diskType': 'block', 'specParams': {}, 'alias':
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
'55327311-e47c-46b5-b168-258c5924757b', 'imageID':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
'0001-0001-0001-0001-0311', 'volumeID':
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'virtio', 'name': 'vda', 'serial':
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)]]'
2020-09-15 20:10:38,295-03 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] HostName = nodo2
2020-09-15 20:10:38,295-03 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-36528)
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
'HotPlugDiskVDSCommand(HostName = nodo2,
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'})'
execution failed: VDSGenericException: VDSErrorException: Failed to
HotPlugDiskVDS, error = General Exception: ("Bad volume specification
{'device': 'disk', 'type': 'disk
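
(For anyone hitting the same wall: VDSM raises "Bad volume specification" when
it cannot prepare the volume it was handed, so a useful first check is whether
the block device behind the disk actually resolves on the host. A minimal
sketch, run as root on nodo2, with the VG/LV names taken from the error above:)

# an FC storage domain is an LVM VG; each volume is an LV inside it
lvs 55327311-e47c-46b5-b168-258c5924757b | grep bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
# activate the LV and inspect the qcow2 header ('format': 'cow' above)
lvchange -ay 55327311-e47c-46b5-b168-258c5924757b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
qemu-img info /dev/55327311-e47c-46b5-b168-258c5924757b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
qemu-img check /dev/55327311-e47c-46b5-b168-258c5924757b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
# hand control back to VDSM afterwards
lvchange -an 55327311-e47c-46b5-b168-258c5924757b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f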

[ovirt-users] Bad Volume Specification

2019-03-27 Thread Bryan Sockel
Having an issue with starting one of my VMs.  It looks like there was a 
problem with an auto-generated snapshot on the VM.  The snapshot file is 
missing:


VDSM-Tool Output:


 image:52f31c3a-25ac-4930-8e67-105945dd42b5
 - 461e1747-3a59-4abc-ac6a-2013ec7858d3
   status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, 
type: PREALLOCATED, capacity: 53687091200, truesize: 53687091200
 - fe5fb4a4-8b3a-4bad-8f03-9dd179ac8397
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, 
type: SPARSE, capacity: 53687091200, truesize: 6576668672
 - a01d662f-cf33-4c86-b0ba-8bddc6d25dfd
   status: ILLEGAL, voltype: LEAF, format: COW, legality: 
ILLEGAL, type: SPARSE, capacity: 53687091200, truesize: 3221225472
   


lvs -o+lv_tags |grep 52f31c3a-25ac-4930-8e67-105945dd42b5
Couldn't find device with uuid 5UNXpR-SmSR-KqA2-swpe-kcQi-X7tt-7B5wRz.
  461e1747-3a59-4abc-ac6a-2013ec7858d3 8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d 
-wi-a----- 50.00g 
IU_52f31c3a-25ac-4930-8e67-105945dd42b5,MD_43,PU_----
  a01d662f-cf33-4c86-b0ba-8bddc6d25dfd 8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d 
-wi------- 3.00g 
IU_52f31c3a-25ac-4930-8e67-105945dd42b5,MD_46,PU_fe5fb4a4-8b3a-4bad-8f03-9dd179ac8397
  fe5fb4a4-8b3a-4bad-8f03-9dd179ac8397 8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d 
-wi-a----- 6.12g 
IU_52f31c3a-25ac-4930-8e67-105945dd42b5,MD_45,PU_461e1747-3a59-4abc-ac6a-2013ec7858d3





qemu-img info 
/dev/8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd
qemu-img: Could not open 
'/dev/8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd':
 
Could not open 
'/dev/8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd':
 
No such file or directory
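
(Note that in the lvs output above the a01d662f LV has no 'a' in its attribute
string, i.e. it is not activated, which by itself explains qemu-img failing on
the /dev path. A hedged sketch of activating it just for inspection, then
putting it back the way VDSM left it:)

lvchange -ay 8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd
qemu-img info /dev/8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd
# deactivate again when done
lvchange -an 8dfaac42-32fc-4e23-b5cb-e5c2c3c56c9d/a01d662f-cf33-4c86-b0ba-8bddc6d25dfd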



What is the best way to remove the snapshot from the VM (a PSQL statement to 
delete the entry from the database)?
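
(A hedged sketch of the inspection side of that, assuming the standard engine
schema, where images.imagestatus = 4 marks an ILLEGAL volume; exact columns
and tool options may differ per version, so take a full engine DB backup first
and prefer the shipped dbutils over hand-written DELETEs:)

# on the engine host, as the postgres/engine user -- read-only
psql engine -c "SELECT image_guid, parentid, imagestatus FROM images
                WHERE image_group_id = '52f31c3a-25ac-4930-8e67-105945dd42b5';"
# supported alternative for stuck entities (option names vary by release):
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q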
Thank You,


Bryan Sockel


[ovirt-users] Bad volume specification

2018-05-27 Thread Bryan Sockel
Hi,

I am having to rebuild my oVirt instance for a couple of reasons.  I have 
set up a temporary oVirt portal to migrate my setup to while I rebuild my 
production portal.  

I have run into a couple of VMs that will no longer start.  

Everything that I am seeing is identical to this article - 
https://access.redhat.com/solutions/2423391

VM devel-build is down with error. Exit message: Bad volume specification 
{'serial': 'ec7a3258-7a99-4813-aa7d-dceb727a1975', 'index': 0, 'iface': 
'virtio', 'apparentsize': '1835008', 'specParams': {}, 'cache': 'none', 
'imageID': 'ec7a3258-7a99-4813-aa7d-dceb727a1975', 'truesize': '1777664', 
'type': 'disk', 'domainID': '2b79768f-a329-4eab-81e0-120a81ac8906', 
'reqsize': '0', 'format': 'cow', 'poolID': 
'9e7d643c-592d-11e8-82eb-005056b41d15', 'device': 'disk', 'path': 
'/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218',
 
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': 
'8f4ddee4-68b3-48e9-be27-0231557f5218', 'diskType': 'file', 'alias': 
'ua-ec7a3258-7a99-4813-aa7d-dceb727a1975', 'discard': False}.


Currently running version 4.2.3.5-1

Is there any way to recover the image, or even just to mount the 
drive on another VM to extract the data?
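
(If the underlying file is intact, libguestfs can usually get the data out
without touching the original. A minimal read-only sketch — the /rhev path is
the one from the error above, shortened here for readability, and the guest
paths are hypothetical:)

# inspect the image and its backing chain first
qemu-img info --backing-chain \
    '/rhev/data-center/.../images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218'
# read-only interactive shell into the guest filesystem
guestfish --ro -i -a '/rhev/data-center/.../8f4ddee4-68b3-48e9-be27-0231557f5218'
# or copy a directory out non-interactively
virt-copy-out -a '/rhev/data-center/.../8f4ddee4-68b3-48e9-be27-0231557f5218' /home /tmp/recovered/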

Thank You




Re: [ovirt-users] Bad volume specification

2018-03-23 Thread Michal Skrivanek


> On 23 Mar 2018, at 21:02, nico...@devels.es wrote:
> 
> El 2018-03-23 15:38, Yaniv Kaul escribió:
>> On Fri, Mar 23, 2018 at 3:20 PM,  wrote:
>>> El 2018-03-23 12:16, Sandro Bonazzola escribió:
>>> 2018-03-21 13:37 GMT+01:00 :
>>> Hi,
>>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw
>>> one of the VMs was taking too long to migrate so I shut it down. It
>>> seems that just in that moment the machine ended migrating, but the
>>> shutdown did happen as well.
>>> I would suggest to update to 4.2 as soon as possible since 4.1 is
>>> not
>>> supported anymore now that 4.2 is available
>> We have 2 oVirt infrastructures. One is migrated to 4.2, we can't
>> migrate the other one since most of the user portal features in 4.1
>> are not present in 4.2 and our users do a massive usage of this portal
>> to create/tune VMs. I know several issues were created on Github to
>> implement missing features, but we cannot upgrade until they are
>> implemented.
>> Have you checked the latest oVirt 4.2.2 RC? We have brought back
>> several features to the user portal.
>> Y.
>>  
> 
> Yes, I'm aware. I'm about to find some time to test it, still I think there 
> will be some features missing (I think I've read that it won't be possible to 
> deploy a VM without a template),

“Blank” is also a template :) There’s basic disk and network creation.

> but I need to test it for a while. Still I guess we can upgrade and let some 
> teachers test if they can get used to the new user portal.

it’s not out yet; we had some issues building dependencies today. It should be 
ready early next week.

> 
> Thank you!
> 
>>> Thanks.
>>>  
>>> Now, when I try to start the VM I'm getting the following error:
>>> 2018-03-21 12:31:02,309Z ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
>>> Correlation ID: null, Call Stack: null, Custom ID: null, Custom
>>> Event ID: -1, Message: VM openmaint.iaas.domain.com 
>>>  [1] [1] is down
>>> with
>>> error. Exit message: Bad volume specification {'index': '0',
>>> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize':
>>> '0', u'format': u'cow', u'optional': u'false', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
>>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
>>> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize':
>>> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
>>> u'discard': False, u'specParams': {}, u'readonly': u'false',
>>> u'iface': u'virtio', u'deviceId':
>>> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472',
>>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
>>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
>>> u'disk'}.
>>> It looks quite bad... I'm attaching the engine.log since the moment
>>> I start the VM.
>>> Is there anything I can do to recover the VM? oVirt says the disk
>>> is OK in the 'Disks' tab.
>>> Adding some people who may be able to help. Once solved, please
>>> consider upgrade.
>>>  
>>> Thanks.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/users 
>>>  [2] [2]
>>> --
>>> SANDRO BONAZZOLA
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION
>>> R&D
>>> Red Hat EMEA [3]
>>> sbona...@redhat.com    


Re: [ovirt-users] Bad volume specification

2018-03-23 Thread nicolas

El 2018-03-23 15:38, Yaniv Kaul escribió:

On Fri, Mar 23, 2018 at 3:20 PM,  wrote:


El 2018-03-23 12:16, Sandro Bonazzola escribió:
2018-03-21 13:37 GMT+01:00 :

Hi,

We're running oVirt 4.1.9. Today I put a host into maintenance and saw
one of the VMs was taking too long to migrate, so I shut it down. It
seems the machine finished migrating just at that moment, but the
shutdown happened as well.

I would suggest updating to 4.2 as soon as possible, since 4.1 is
no longer supported now that 4.2 is available


 We have 2 oVirt infrastructures. One is migrated to 4.2; we can't
migrate the other one, since most of the user portal features in 4.1
are not present in 4.2 and our users make heavy use of this portal
to create/tune VMs. I know several issues were created on GitHub to
implement the missing features, but we cannot upgrade until they are
implemented.

Have you checked the latest oVirt 4.2.2 RC? We have brought back
several features to the user portal.
Y.
 


Yes, I'm aware. I'm about to find some time to test it; still, I think 
there will be some features missing (I think I've read that it won't be 
possible to deploy a VM without a template), but I need to test it for a 
while. Still, I guess we can upgrade and let some teachers test whether they 
can get used to the new user portal.


Thank you!




Thanks.

 

Now, when I try to start the VM I'm getting the following error:

2018-03-21 12:31:02,309Z ERROR



[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

(DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
Correlation ID: null, Call Stack: null, Custom ID: null, Custom
Event ID: -1, Message: VM openmaint.iaas.domain.com [1] [1] is down
with
error. Exit message: Bad volume specification {'index': '0',
u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize':
'0', u'format': u'cow', u'optional': u'false', u'address':
{u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize':
'3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
u'discard': False, u'specParams': {}, u'readonly': u'false',
u'iface': u'virtio', u'deviceId':
u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472',
u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
u'disk'}.

It looks quite bad... I'm attaching the engine.log since the moment
I start the VM.

Is there anything I can do to recover the VM? oVirt says the disk
is OK in the 'Disks' tab.

Adding some people who may be able to help. Once solved, please
consider upgrade.

 

Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [2] [2]

--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION
R&D

Red Hat EMEA [3]

sbona...@redhat.com   






Re: [ovirt-users] Bad volume specification

2018-03-23 Thread Yaniv Kaul
On Fri, Mar 23, 2018 at 3:20 PM,  wrote:

> El 2018-03-23 12:16, Sandro Bonazzola escribió:
>
>> 2018-03-21 13:37 GMT+01:00 :
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw
>>> one of the VMs was taking too long to migrate so I shut it down. It
>>> seems that just in that moment the machine ended migrating, but the
>>> shutdown did happen as well.
>>>
>>
>> I would suggest to update to 4.2 as soon as possible since 4.1 is not
>> supported anymore now that 4.2 is available
>>
>>
> We have 2 oVirt infrastructures. One is migrated to 4.2, we can't migrate
> the other one since most of the user portal features in 4.1 are not present
> in 4.2 and our users do a massive usage of this portal to create/tune VMs.
> I know several issues were created on Github to implement missing features,
> but we cannot upgrade until they are implemented.
>

Have you checked the latest oVirt 4.2.2 RC? We have brought back several
features to the user portal.
Y.


>
> Thanks.
>
>
>>
>> Now, when I try to start the VM I'm getting the following error:
>>>
>>> 2018-03-21 12:31:02,309Z ERROR
>>>
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>
>>> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
>>> Correlation ID: null, Call Stack: null, Custom ID: null, Custom
>>> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with
>>> error. Exit message: Bad volume specification {'index': '0',
>>> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize':
>>> '0', u'format': u'cow', u'optional': u'false', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
>>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
>>> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize':
>>> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
>>> u'discard': False, u'specParams': {}, u'readonly': u'false',
>>> u'iface': u'virtio', u'deviceId':
>>> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472',
>>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
>>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
>>> u'disk'}.
>>>
>>> It looks quite bad... I'm attaching the engine.log since the moment
>>> I start the VM.
>>>
>>> Is there anything I can do to recover the VM? oVirt says the disk
>>> is OK in the 'Disks' tab.
>>>
>>
>> Adding some people who may be able to help. Once solved, please
>> consider upgrade.
>>
>>
>>
>> Thanks.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users [2]
>>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA [3]
>>
>> sbona...@redhat.com
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Bad volume specification

2018-03-23 Thread Sandro Bonazzola
2018-03-23 13:20 GMT+01:00 :

> El 2018-03-23 12:16, Sandro Bonazzola escribió:
>
>> 2018-03-21 13:37 GMT+01:00 :
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw
>>> one of the VMs was taking too long to migrate so I shut it down. It
>>> seems that just in that moment the machine ended migrating, but the
>>> shutdown did happen as well.
>>>
>>
>> I would suggest to update to 4.2 as soon as possible since 4.1 is not
>> supported anymore now that 4.2 is available
>>
>>
> We have 2 oVirt infrastructures. One is migrated to 4.2, we can't migrate
> the other one since most of the user portal features in 4.1 are not present
> in 4.2 and our users do a massive usage of this portal to create/tune VMs.
> I know several issues were created on Github to implement missing features,
> but we cannot upgrade until they are implemented.
>

Understood, thanks for the feedback!


>
> Thanks.
>
>
>>
>> Now, when I try to start the VM I'm getting the following error:
>>>
>>> 2018-03-21 12:31:02,309Z ERROR
>>>
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>
>>> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
>>> Correlation ID: null, Call Stack: null, Custom ID: null, Custom
>>> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with
>>> error. Exit message: Bad volume specification {'index': '0',
>>> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize':
>>> '0', u'format': u'cow', u'optional': u'false', u'address':
>>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
>>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
>>> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize':
>>> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
>>> u'discard': False, u'specParams': {}, u'readonly': u'false',
>>> u'iface': u'virtio', u'deviceId':
>>> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472',
>>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
>>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
>>> u'disk'}.
>>>
>>> It looks quite bad... I'm attaching the engine.log since the moment
>>> I start the VM.
>>>
>>> Is there anything I can do to recover the VM? oVirt says the disk
>>> is OK in the 'Disks' tab.
>>>
>>
>> Adding some people who may be able to help. Once solved, please
>> consider upgrade.
>>
>>
>>
>> Thanks.
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users [2]
>>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA [3]
>>
>> sbona...@redhat.com
>>
>>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

sbona...@redhat.com




Re: [ovirt-users] Bad volume specification

2018-03-23 Thread nicolas

El 2018-03-23 12:16, Sandro Bonazzola escribió:

2018-03-21 13:37 GMT+01:00 :


Hi,

We're running oVirt 4.1.9. Today I put a host into maintenance and saw
one of the VMs was taking too long to migrate, so I shut it down. It
seems the machine finished migrating just at that moment, but the
shutdown happened as well.


I would suggest updating to 4.2 as soon as possible, since 4.1 is no
longer supported now that 4.2 is available



We have 2 oVirt infrastructures. One is migrated to 4.2; we can't 
migrate the other one, since most of the user portal features in 4.1 are 
not present in 4.2 and our users make heavy use of this portal to 
create/tune VMs. I know several issues were created on GitHub to 
implement the missing features, but we cannot upgrade until they are 
implemented.


Thanks.


 


Now, when I try to start the VM I'm getting the following error:

2018-03-21 12:31:02,309Z ERROR


[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]

(DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
Correlation ID: null, Call Stack: null, Custom ID: null, Custom
Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with
error. Exit message: Bad volume specification {'index': '0',
u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize':
'0', u'format': u'cow', u'optional': u'false', u'address':
{u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x',
u'type': u'pci', u'slot': u'0x06'}, u'volumeID':
u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize':
'3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
u'discard': False, u'specParams': {}, u'readonly': u'false',
u'iface': u'virtio', u'deviceId':
u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472',
u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device':
u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type':
u'disk'}.

It looks quite bad... I'm attaching the engine.log since the moment
I start the VM.

Is there anything I can do to recover the VM? oVirt says the disk
is OK in the 'Disks' tab.


Adding some people who may be able to help. Once solved, please
consider upgrade.

 


Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [2]


--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA [3]

sbona...@redhat.com   




Re: [ovirt-users] Bad volume specification

2018-03-23 Thread Sandro Bonazzola
2018-03-21 13:37 GMT+01:00 :

> Hi,
>
> We're running oVirt 4.1.9, today I put a host on maintenance, I saw one of
> the VMs was taking too long to migrate so I shut it down. It seems that
> just in that moment the machine ended migrating, but the shutdown did
> happen as well.
>

I would suggest updating to 4.2 as soon as possible, since 4.1 is no
longer supported now that 4.2 is available



>
> Now, when I try to start the VM I'm getting the following error:
>
> 2018-03-21 12:31:02,309Z ERROR [org.ovirt.engine.core.dal.dbb
> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3)
> [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), Correlation ID: null, Call Stack:
> null, Custom ID: null, Custom Event ID: -1, Message: VM
> openmaint.iaas.domain.com is down with error. Exit message: Bad volume
> specification {'index': '0', u'domainID': 
> u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9',
> 'reqsize': '0', u'format': u'cow', u'optional': u'false', u'address':
> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x', u'type':
> u'pci', u'slot': u'0x06'}, u'volumeID': 
> u'68ee7a04-ceff-49f0-bf91-256870543921',
> 'apparentsize': '3221225472', u'imageID': 
> u'9d087e6b-0832-46db-acb0-16d5131afa0c',
> u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface':
> u'virtio', u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c',
> 'truesize': '3221225472', u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63',
> u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off',
> u'type': u'disk'}.
>
> It looks quite bad... I'm attaching the engine.log since the moment I
> start the VM.
>
> Is there anything I can do to recover the VM? oVirt says the disk is OK in
> the 'Disks' tab.
>

Adding some people who may be able to help. Once solved, please consider
upgrading.




>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

sbona...@redhat.com




Re: [ovirt-users] Bad volume specification

2018-03-23 Thread nicolas

Guys, any hints on this?

El 2018-03-21 12:37, nico...@devels.es escribió:

Hi,

We're running oVirt 4.1.9. Today I put a host into maintenance and saw
one of the VMs was taking too long to migrate, so I shut it down. It
seems the machine finished migrating just at that moment, but the
shutdown happened as well.

Now, when I try to start the VM I'm getting the following error:

2018-03-21 12:31:02,309Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event
ID: -1, Message: VM openmaint.iaas.domain.com is down with error. Exit
message: Bad volume specification {'index': '0', u'domainID':
u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': '0', u'format':
u'cow', u'optional': u'false', u'address': {u'function': u'0x0',
u'bus': u'0x00', u'domain': u'0x', u'type': u'pci', u'slot':
u'0x06'}, u'volumeID': u'68ee7a04-ceff-49f0-bf91-256870543921',
'apparentsize': '3221225472', u'imageID':
u'9d087e6b-0832-46db-acb0-16d5131afa0c', u'discard': False,
u'specParams': {}, u'readonly': u'false', u'iface': u'virtio',
u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize':
'3221225472', u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63',
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off',
u'type': u'disk'}.

It looks quite bad... I'm attaching the engine.log since the moment I
start the VM.

Is there anything I can do to recover the VM? oVirt says the disk is
OK in the 'Disks' tab.

Thanks.



[ovirt-users] Bad volume specification

2018-03-21 Thread nicolas

Hi,

We're running oVirt 4.1.9. Today I put a host into maintenance and saw one 
of the VMs was taking too long to migrate, so I shut it down. It seems 
the machine finished migrating just at that moment, but the shutdown 
happened as well.


Now, when I try to start the VM I'm getting the following error:

2018-03-21 12:31:02,309Z ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), 
Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event 
ID: -1, Message: VM openmaint.iaas.domain.com is down with error. Exit 
message: Bad volume specification {'index': '0', u'domainID': 
u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': '0', u'format': 
u'cow', u'optional': u'false', u'address': {u'function': u'0x0', u'bus': 
u'0x00', u'domain': u'0x', u'type': u'pci', u'slot': u'0x06'}, 
u'volumeID': u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': 
'3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', 
u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface': 
u'virtio', u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c', 
'truesize': '3221225472', u'poolID': 
u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': u'disk', u'shared': 
u'false', u'propagateErrors': u'off', u'type': u'disk'}.


It looks quite bad... I'm attaching the engine.log since the moment I 
start the VM.


Is there anything I can do to recover the VM? oVirt says the disk is OK 
in the 'Disks' tab.


Thanks.

2018-03-21 12:30:39,539Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(default task-151) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] Lock Acquired to 
object 'EngineLock:{exclusiveLocks='[f503b710-7165-415b-a567-16251da7212d=VM]', 
sharedLocks=''}'
2018-03-21 12:30:39,645Z INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-151) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] START, 
IsVmDuringInitiatingVDSCommand( 
IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', 
vmId='f503b710-7165-415b-a567-16251da7212d'}), log id: 15ef3c8a
2018-03-21 12:30:39,645Z INFO  
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
task-151) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] FINISH, 
IsVmDuringInitiatingVDSCommand, return: false, log id: 15ef3c8a
2018-03-21 12:30:39,792Z INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-151) 
[429d4c9e-f77d-4fad-951c-08a6b7751bd6] Candidate host 'kvmr05.domain.com' 
('f7ab19fe-5192-4a22-88cf-c8d019fa0372') was filtered out by 
'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
2018-03-21 12:30:39,863Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
Running command: RunVmCommand internal: false. Entities affected :  ID: 
f503b710-7165-415b-a567-16251da7212d Type: VMAction group RUN_VM with role type 
USER
2018-03-21 12:30:39,870Z INFO  [org.ovirt.engine.core.bll.RunVmCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
Emulated machine 'pc-i440fx-rhel7.3.0' selected since Custom Compatibility 
Version is set for 'VM [openmaint.iaas.domain.com]'
2018-03-21 12:30:39,979Z INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
Candidate host 'kvmr05.domain.com' ('f7ab19fe-5192-4a22-88cf-c8d019fa0372') was 
filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: 
429d4c9e-f77d-4fad-951c-08a6b7751bd6)
2018-03-21 12:30:40,032Z INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
START, UpdateVmDynamicDataVDSCommand( 
UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', 
vmId='f503b710-7165-415b-a567-16251da7212d', 
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@db93365b'}), 
log id: 141aa350
2018-03-21 12:30:40,037Z INFO  
[org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
FINISH, UpdateVmDynamicDataVDSCommand, log id: 141aa350
2018-03-21 12:30:40,040Z INFO  
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', 
hostId='b2dfb945-d767-44aa-a547-2d1a4381f8e3', 
vmId='f503b710-7165-415b-a567-16251da7212d', vm='VM 
[openmaint.iaas.domain.com]'}), log id: 3435053c
2018-03-21 12:30:40,046Z INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] 
(org.ovirt.thread.pool-6-thread-46) [429d4c9e-f77d-4fad-951c-08a6b7751bd6] 
START, CreateVDSCommand(HostName = kvmr04.domain.com, 
CreateVmVDSCommandParameters:{runAsync='true', 
hostId='b2dfb945-d767-44aa-a547-2d1a4381f8e3', 
vmId='f503b710-7165-415b-a567-16251da72
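
(One thing worth checking before touching anything: whether VDSM still
considers the volume legal and whether its parent chain is intact. A minimal
sketch, assuming a file-based storage domain; the volume and image IDs are the
ones from the error above, and the path components in angle brackets are
placeholders:)

# on a host with the domain mounted; file domains keep a .meta next to each volume
find /rhev/data-center/mnt -name '68ee7a04-ceff-49f0-bf91-256870543921.meta' 2>/dev/null
# look at the LEGALITY=, VOLTYPE= and PUUID= lines in that file
cat /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/9d087e6b-0832-46db-acb0-16d5131afa0c/68ee7a04-ceff-49f0-bf91-256870543921.meta
# and verify the qcow2 chain qemu sees
qemu-img info --backing-chain /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/9d087e6b-0832-46db-acb0-16d5131afa0c/68ee7a04-ceff-49f0-bf91-256870543921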

Re: [ovirt-users] Bad volume specification after hung migration

2017-10-26 Thread Michal Skrivanek

> On 26 Oct 2017, at 12:32, Roberto Nunin  wrote:
> 
> Hi Michael
> 
> By frozen I mean the action to put host in maintenance while some VM were 
> running on it.
> This action wasn't completed after more than one hour.

ok, and was the problem in this last VM not finishing the migration? Was it 
migrating at all? If yes, what was the progress in UI, any failures? There are 
various timeouts which should have been triggered, so if they were not 
triggered it would indeed point to some internal issue. Would be great to 
attach source and destination vdsm.log

> Thinking that shutting down the VM could help, I've done it. Looking at 
> results, not.

What was the result? Did it fail to shut down? Did you use Power Off to force 
immediate shutdown? If it was migrating, did you try to cancel the migration 
first?

> 
> Yes, I've restarted the ovirt-engine service, I've still not restarted the 
> hosted-engine VM.

well, it’s not a universal “fix” for various things; sometimes it does more 
harm than good. Logs would be helpful.

> Hosts still not restarted. Do you think can help ?

hard to say. Either way please salvage logs first
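
(A minimal log-salvage sketch, assuming the usual locations on the engine and
hosts; ovirt-log-collector, if installed on the engine machine, automates all
of this:)

# on the engine:
tar czf /tmp/engine-logs.tar.gz /var/log/ovirt-engine/engine.log*
# on the migration source and destination hosts:
tar czf /tmp/vdsm-logs.tar.gz /var/log/vdsm/vdsm.log* /var/log/libvirt/qemu/*.log
# or, from the engine machine:
ovirt-log-collector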

> 
> Obviously we will migrate, this activities are enabling us to have redundancy 
> at the storage level, then we will migrate to 4.1.x

great:)

Thanks,
michal

> 
> Thanks
> 
> 2017-10-26 12:26 GMT+02:00 Michal Skrivanek  >:
> 
>> On 26 Oct 2017, at 10:20, Roberto Nunin > > wrote:
>> 
>> We are running 4.0.1.1-1.el7.centos
> 
> Hi,
> any reason not to upgrade to 4.1?
> 
>> 
>> After a frozen migration attempt, we have two VM that after shutdown, are 
>> not anymore able to be started up again.
> 
> what do you mean by frozen? Are you talking about "VM live migration" or 
> “live storage migration”?
> How exactly did you resolve that situation, you only shut down those VMs? No 
> other troubleshooting steps, e.g. restarting engine, hosts, things like that?
> 
> Thanks,
> michal
>> 
>> Message returned is :
>> 
>> Bad volume specification {'index': '0', u'domainID': 
>> u'731d95a9-61a7-4c7a-813b-fb1c3dde47ea', 'reqsize': '0', u'format': u'cow', 
>> u'optional': u'false', u'address': {u'function': u'0x0', u'bus': u'0x00', 
>> u'domain': u'0x', u'type': u'pci', u'slot': u'0x05'}, u'volumeID': 
>> u'cffc70ff-ed72-46ef-a369-4be95de72260', 'apparentsize': '3221225472', 
>> u'imageID': u'3fe5a849-bcc2-42d3-93c5aca4c504515b', u'specParams': {}, 
>> u'readonly': u'false', u'iface': u'virtio', u'deviceId': 
>> u'3fe5a849bcc2-42d3-93c5-aca4c504515b', 'truesize': '3221225472', u'poolID': 
>> u'0001-0001-0001-0001-01ec', u'device': u'disk', u'shared': 
>> u'false', u'propagateErrors': u'off',u'type':u'disk'}
>> 
>> Probably this is caused by a wrong pointer into the database that still 
>> refer to the migration image-id.
>> 
>> If we search within all_disks view, we can find that parentid field isn't 
>> ---- like all other running vm, but it has a 
>> value:
>> 
>>vm_names   |   parentid
>> --+--
>>  working01.company.xx | ----
>>  working02.company.xx | ----
>>  working03.company.xx | ----
>>  working04.company.xx | ----
>>  broken001.company.xx | 30533842-2c83-4d0e-95d2-48162dbe23bd <
>>  working05.company.xx | ----
>> 
>> 
>> How we can recover from this ?
>> 
>> Thanks in advance
>> Regards,
>> 
>> -- 
>> Robert​o​
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users 
>> 
> 
> 
> 
> 
> -- 
> Roberto
> 
> 
> 



Re: [ovirt-users] Bad volume specification after hung migration

2017-10-26 Thread Michal Skrivanek

> On 26 Oct 2017, at 10:20, Roberto Nunin  wrote:
> 
> We are running 4.0.1.1-1.el7.centos

Hi,
any reason not to upgrade to 4.1?

> 
> After a frozen migration attempt, we have two VM that after shutdown, are not 
> anymore able to be started up again.

what do you mean by frozen? Are you talking about "VM live migration" or “live 
storage migration”?
How exactly did you resolve that situation, you only shut down those VMs? No 
other troubleshooting steps, e.g. restarting engine, hosts, things like that?

Thanks,
michal
> 
> Message returned is :
> 
> Bad volume specification {'index': '0', u'domainID': 
> u'731d95a9-61a7-4c7a-813b-fb1c3dde47ea', 'reqsize': '0', u'format': u'cow', 
> u'optional': u'false', u'address': {u'function': u'0x0', u'bus': u'0x00', 
> u'domain': u'0x', u'type': u'pci', u'slot': u'0x05'}, u'volumeID': 
> u'cffc70ff-ed72-46ef-a369-4be95de72260', 'apparentsize': '3221225472', 
> u'imageID': u'3fe5a849-bcc2-42d3-93c5aca4c504515b', u'specParams': {}, 
> u'readonly': u'false', u'iface': u'virtio', u'deviceId': 
> u'3fe5a849bcc2-42d3-93c5-aca4c504515b', 'truesize': '3221225472', u'poolID': 
> u'0001-0001-0001-0001-01ec', u'device': u'disk', u'shared': 
> u'false', u'propagateErrors': u'off',u'type':u'disk'}
> 
> Probably this is caused by a wrong pointer into the database that still refer 
> to the migration image-id.
> 
> If we search within all_disks view, we can find that parentid field isn't 
> ---- like all other running vm, but it has a 
> value:
> 
>vm_names   |   parentid
> --+--
>  working01.company.xx | ----
>  working02.company.xx | ----
>  working03.company.xx | ----
>  working04.company.xx | ----
>  broken001.company.xx | 30533842-2c83-4d0e-95d2-48162dbe23bd <
>  working05.company.xx | ----
> 
> 
> How we can recover from this ?
> 
> Thanks in advance
> Regards,
> 
> -- 
> Robert​o​
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



[ovirt-users] Bad volume specification after hung migration

2017-10-26 Thread Roberto Nunin
We are running 4.0.1.1-1.el7.centos

After a frozen migration attempt, we have two VMs that, after shutdown, are
no longer able to be started up again.

Message returned is :

Bad volume specification {'index': '0', u'domainID':
u'731d95a9-61a7-4c7a-813b-fb1c3dde47ea', 'reqsize': '0', u'format': u'cow',
u'optional': u'false', u'address': {u'function': u'0x0', u'bus': u'0x00',
u'domain': u'0x', u'type': u'pci', u'slot': u'0x05'}, u'volumeID':
u'cffc70ff-ed72-46ef-a369-4be95de72260', 'apparentsize': '3221225472',
u'imageID': u'3fe5a849-bcc2-42d3-93c5-aca4c504515b', u'specParams': {},
u'readonly': u'false', u'iface': u'virtio', u'deviceId':
u'3fe5a849-bcc2-42d3-93c5-aca4c504515b', 'truesize': '3221225472',
u'poolID': u'0001-0001-0001-0001-01ec', u'device': u'disk',
u'shared': u'false', u'propagateErrors': u'off',u'type':u'disk'}

This is probably caused by a stale pointer in the database that still
refers to the migration image ID.

If we search within all_disks view, we can find that parentid field
isn't ----
like all other running vm, but it has a value:

   vm_names   |   parentid
--+--
 working01.company.xx | ----
 working02.company.xx | ----
 working03.company.xx | ----
 working04.company.xx | ----
 broken001.company.xx | 30533842-2c83-4d0e-95d2-48162dbe23bd <
 working05.company.xx | ----
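
(A hedged sketch of the query behind a listing like the one above, assuming
the standard engine schema in which all_disks exposes the images table's
parentid column; back up the engine DB before changing anything:)

# on the engine host, as the postgres/engine user -- read-only
psql engine -c "SELECT vm_names, parentid FROM all_disks
                WHERE parentid <> '00000000-0000-0000-0000-000000000000';"
# rewriting parentid by hand is risky: cross-check against the real qcow2
# backing chain (qemu-img info --backing-chain) before any UPDATE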


How can we recover from this?

Thanks in advance
Regards,

-- 
Roberto


Re: [ovirt-users] Bad volume specification

2017-01-05 Thread Sahina Bose
Can you provide the gluster mount logs?
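
(For reference, on an oVirt host the FUSE mount log of a gluster storage
domain normally sits under /var/log/glusterfs/, named after the mount point —
a hedged sketch of locating it:)

# on the host that was running the paused VMs
ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log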
On Fri, 6 Jan 2017 at 12:01 PM, Rodrick Brown 
wrote:

> I'm using gluster/zfs for the backing store on my Ovirt VM's its seems our
> gluster volume may have ran low on space and a few VM's we're paused due to
> long i/o wait times.
>
> I'm no longer able to bring these VM's back online because I get the
> following error:
>
> OSError: [Errno 22] Invalid argument
> Thread-991494::ERROR::2017-01-06
> 01:17:32,990::vm::759::virt.vm::(_startUnderlyingVm)
> vmId=`add4f65a-6389-4fc8-bf9d-bf92964cecf0`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/sha
>
> ​StorageUnavailableError: Unable to get volume size for domain domain 
> volume 
>
> I have snapshots is their anyway I can recover or fix this issue? ​
>
>
>
>
> ___
>
> Users mailing list
>
> Users@ovirt.org
>
> http://lists.ovirt.org/mailman/listinfo/users
>
>


[ovirt-users] Bad volume specification

2017-01-05 Thread Rodrick Brown
I'm using gluster/ZFS for the backing store on my oVirt VMs. It seems our
gluster volume may have run low on space, and a few VMs were paused due to
long I/O wait times.

I'm no longer able to bring these VM's back online because I get the
following error:

OSError: [Errno 22] Invalid argument
Thread-991494::ERROR::2017-01-06
01:17:32,990::vm::759::virt.vm::(_startUnderlyingVm)
vmId=`add4f65a-6389-4fc8-bf9d-bf92964cecf0`::The vm start process failed
Traceback (most recent call last):
  File "/usr/sha

​StorageUnavailableError: Unable to get volume size for domain domain 
volume 

I have snapshots is their anyway I can recover or fix this issue? ​
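
(The first thing worth confirming is whether the volume really ran out of
space — a minimal sketch; "myvol" is a hypothetical volume name:)

# free space as seen from the oVirt mount on the host
df -h /rhev/data-center/mnt/glusterSD/
# per-brick free space, and the zpool backing the bricks
gluster volume status myvol detail
zfs list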


Re: [Users] Bad volume specification

2014-01-08 Thread Blaster

On Jan 8, 2014, at 10:10 AM, Dan Kenigsberg  wrote:

> 
> No quick answer pops to mind. Could you share your vdsm.log from the
> vmCreate line up until the error you have quoted?
> 
I figured it out.  The disk image permissions didn’t copy over, so vdsm 
couldn’t read the disk image.
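
(For the archive: images under a file-based oVirt domain are owned by
vdsm:kvm, uid:gid 36:36, so after a manual copy something like this is
typically needed — a hedged sketch with placeholder path components:)

# restore the ownership and mode vdsm expects
chown 36:36 /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
chmod 660  /rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<img-uuid>/<vol-uuid>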

Thanks for the response.


Re: [Users] Bad volume specification

2014-01-08 Thread Dan Kenigsberg
On Wed, Jan 08, 2014 at 10:01:13AM -0600, Blaster wrote:
> I have a couple ESXi Win 7 images on VMDKs that I converted to raw
> using qemu-img convert.
> 
> Under ovirt 3.3.1 I then used a procedure posted here previously
> where you create a VM, add a disk, then copy over the converted
> image onto the oVirt created image and away you go.
> 
> I did this twice under oVirt 3.3.1 and it worked great.
> 
> Now I have built a new oVirt 3.3.2 system and tried the same thing,
> and I get the error:
> 
> VM win7-01 is down. Exit message: Bad volume specification {'index':
> 0, 'iface': 'virtio', 'reqsize': '0', 'format': 'raw', 'bootOrder':
> '1', 'volumeID': 'd750e9e0-a906-4369-8bbb-a3b676121321',
> 'apparentsize': '107374182400', 'imageID':
> 'f674cb27-c28b-4373-ad75-9ed8a765ca31', 'specParams': {},
> 'readonly': 'false', 'domainID':
> 'f14f471e-0cce-414d-af57-779eeb88c97a', 'optional': 'false',
> 'deviceId': 'f674cb27-c28b-4373-ad75-9ed8a765ca31', 'truesize':
> '107374194688', 'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9',
> 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off',
> 'type': 'disk'}.
> 
> The original oVirt 3.3.1 system that has now been upgraded to 3.3.2
> still boots this same disk image just fine.
> 
> I'm guessing it's upset because apparentsize and truesize are different?
> 
> Why did 3.3.1 seem not to care but 3.3.2 does now?

No quick answer pops to mind. Could you share your vdsm.log from the
vmCreate line up until the error you have quoted?

> 
> Any way I can true these up?  I've tried a few things with qemu-img
> but haven't gotten the magic right yet.


[Users] Bad volume specification

2014-01-08 Thread Blaster
I have a couple ESXi Win 7 images on VMDKs that I converted to raw using 
qemu-img convert.


Under oVirt 3.3.1 I then used a procedure posted here previously where 
you create a VM, add a disk, then copy the converted image over the 
oVirt-created image, and away you go.


I did this twice under oVirt 3.3.1 and it worked great.

Now I have built a new oVirt 3.3.2 system and tried the same thing, and 
I get the error:


VM win7-01 is down. Exit message: Bad volume specification {'index': 0, 
'iface': 'virtio', 'reqsize': '0', 'format': 'raw', 'bootOrder': '1', 
'volumeID': 'd750e9e0-a906-4369-8bbb-a3b676121321', 'apparentsize': 
'107374182400', 'imageID': 'f674cb27-c28b-4373-ad75-9ed8a765ca31', 
'specParams': {}, 'readonly': 'false', 'domainID': 
'f14f471e-0cce-414d-af57-779eeb88c97a', 'optional': 'false', 'deviceId': 
'f674cb27-c28b-4373-ad75-9ed8a765ca31', 'truesize': '107374194688', 
'poolID': '18f6234c-a9de-4fdf-bd9a-2bd90b9f33f9', 'device': 'disk', 
'shared': 'false', 'propagateErrors': 'off', 'type': 'disk'}.


The original oVirt 3.3.1 system that has now been upgraded to 3.3.2 
still boots this same disk image just fine.


I'm guessing it's upset because apparentsize and truesize are different?

Why did 3.3.1 seem not to care but 3.3.2 does now?

Any way I can true these up?  I've tried a few things with qemu-img but 
haven't gotten the magic right yet.
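
(For context: for a raw volume, 'apparentsize' is the virtual size and
'truesize' the bytes actually allocated, so a mismatch is normal and is
unlikely to be the problem by itself — the resolution reported earlier in this
thread was file permissions. A hedged sketch of comparing the two; the
filename is hypothetical:)

qemu-img info win7-01.img      # "virtual size" vs. "disk size"
stat -c '%s bytes apparent, %b blocks allocated' win7-01.img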




