Re: [ovirt-users] critical production issue for a vm

2017-12-07 Thread Nathanaël Blanchet



On 07/12/2017 at 14:32, Donny Davis wrote:
This is just a shot in the dark, but have you tried to use the disk 
copy feature? You can copy the disks back to where they were and try 
starting the VM.
No, it doesn't work; the reason is always the same: some disk references 
on the LUN were broken...


On Thu, Dec 7, 2017 at 7:48 AM, Nathanaël Blanchet wrote:




On 06/12/2017 at 15:56, Maor Lipchuk wrote:



On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot <nico...@ecarnot.net> wrote:

On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:

Hi all,

I'm about to lose one very important VM. I shut down this
VM for maintenance and then I moved its four disks to a
newly created LUN. This VM has 2 snapshots.

After a successful move, the VM refuses to start with this
message:

Bad volume specification {u'index': 0, u'domainID':
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0',
u'format': u'cow', u'bootOrder': u'1', u'discard': False,
u'volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b',
'apparentsize': '2147483648',
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
u'specParams': {}, u'readonly': u'false', u'iface':
u'virtio', u'optional': u'false', u'deviceId':
u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize':
'2147483648', u'poolID':
u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device':
u'disk', u'shared': u'false', u'propagateErrors': u'off',
u'type': u'disk'}.

I tried to merge the snapshots, export, clone from
snapshot, copy disks, or deactivate disks, and every
action fails as soon as it involves a disk.

I began to dd the LV group to get a new VM intended for a
standalone libvirt/kvm host; the VM more or less boots up, but
it is an outdated version from before the first snapshot. There
are a lot of LVs when doing an "lvs | grep 961ea94a", supposedly
the disk snapshots. Which of them must I choose to get the VM as
it was just before shutdown? I'm not used to dealing with
snapshots via virsh/libvirt, so some help would be much
appreciated.


The disks which you want to copy should contain the entire volume
chain.
Based on the log you mentioned, it looks like this image is
problematic:

  storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177'
  imageID:    '4a95614e-bf1d-407c-aa72-2df414abcb7a'
  volumeID:   'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'

What if you try to deactivate this image and then run the VM,
will it run?

I already tried what you suggest, but the result is the same;
moreover, this disk is part of a volume group, so I can't boot
the VM without it.




Is there some command I'm not aware of to recover this VM within oVirt?

Thank you in advance.









Besides specific oVirt answers, did you try to get
information about the snapshot tree with qemu-img info
--backing-chain on the relevant /dev/... logical volume?
As you know how to dd from LVs, you could extract every
needed snapshot file and rebuild your VM outside of oVirt.
Then take the time to re-import it later, safely.

-- 
Nicolas ECARNOT









-- 
Nathanaël Blanchet


Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr   








--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr


Re: [ovirt-users] critical production issue for a vm

2017-12-07 Thread Nathanaël Blanchet

Many thanks to Nicolas who saved my life!

When the export of the disks (base + snapshots) had finished, I managed to 
boot up the VM in libvirt/kvm with the top disk snapshot as the main disk.
Then I believed reimporting the VM was the last thing to do, but the 
integrated virt-v2v doesn't support importing a VM with external 
snapshots, so when the import process had finished, I couldn't boot 
up the VM.

I had to merge the snapshots with qemu tools:

qemu-img rebase -b base.raw snap2.qcow2
qemu-img commit snap2.qcow2
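
For readers hitting the same situation, here is what that merge amounts to, 
as a minimal sketch assuming a chain base.raw <- snap1.qcow2 <- snap2.qcow2 
(the file names are only illustrative):

# Check the chain first (paths are illustrative)
qemu-img info --backing-chain snap2.qcow2

# Safe rebase: point the top snapshot directly at the base image; qemu-img
# copies the data of the intermediate snapshot into snap2.qcow2 so the
# guest-visible content stays identical
qemu-img rebase -b base.raw snap2.qcow2

# Commit the now-consolidated top snapshot back into the base image;
# afterwards base.raw holds the merged, up-to-date disk
qemu-img commit snap2.qcow2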

And then I attached the base image of each disk to the libvirt VM before 
reimporting it, choosing "preallocated" for raw disks.


This is a manual method, but it was first necessary to find the disk IDs 
in LVM thanks to ovirt-shell: list disks --query "name=hortensia*" 
--show-all (a sketch follows).
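
For reference, a one-shot version of that lookup might look like the sketch 
below, assuming the usual ovirt-engine-cli options and a placeholder engine 
URL and user (the shell prompts for the password):

# Ask the engine for the VM's disks and their IDs (URL and user are placeholders)
ovirt-shell -c -l "https://engine.example.org/ovirt-engine/api" -u admin@internal \
  -E 'list disks --query "name=hortensia*" --show-all'

# Note each disk's id in the output; the LVs of that disk on the block
# storage domain belong to the matching image group.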


Once I had found the volume group corresponding to the VM, I had to 
activate all the logical volumes with lvchange -ay /dev/... and then 
find the qcow2 information with qemu-img info --backing-chain (see the 
sketch below).
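
A rough sketch of those two steps; the device paths are only illustrative, 
built from the storage domain and volume UUIDs that appear in the error 
message (on block domains the VG is normally named after the storage domain 
UUID and each LV after a volume UUID):

# Activate the logical volume of one of the VM's volumes
lvchange -ay /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b

# Read the qcow2 metadata and backing chain of the activated volume; if the
# relative backing paths recorded in the qcow2 header do not resolve from
# /dev/..., plain "qemu-img info" still shows each volume's backing file
qemu-img info --backing-chain /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b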


*In this specific disaster, is there something that could be done with oVirt 
itself instead of exporting/reimporting, knowing that the VM disks on the LUN 
are intact and that the main problem is that the references to some disks are 
broken in the database?*



On 06/12/2017 at 11:30, Nicolas Ecarnot wrote:

On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:

Hi all,

I'm about to lose one very important VM. I shut down this VM for 
maintenance and then I moved its four disks to a newly created LUN. 
This VM has 2 snapshots.


After a successful move, the VM refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': 
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': 
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': 
'2147483648', u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 
u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', 
u'optional': u'false', u'deviceId': 
u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': '2147483648', 
u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': 
u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': 
u'disk'}.


I tried to merge the snapshots, export, clone from snapshot, copy 
disks, or deactivate disks, and every action fails as soon as it involves a disk.


I began to dd the LV group to get a new VM intended for a standalone 
libvirt/kvm host; the VM more or less boots up, but it is an outdated 
version from before the first snapshot. There are a lot of LVs when doing 
an "lvs | grep 961ea94a", supposedly the disk snapshots. Which of them 
must I choose to get the VM as it was just before shutdown? I'm not used 
to dealing with snapshots via virsh/libvirt, so some help would be much appreciated.


Is there some command I'm not aware of to recover this VM within oVirt?

Thank you in advance.






Besides specific oVirt answers, did you try to get information about 
the snapshot tree with qemu-img info --backing-chain on the relevant 
/dev/... logical volume?
As you know how to dd from LVs, you could extract every needed 
snapshot file and rebuild your VM outside of oVirt.

Then take the time to re-import it later, safely.



--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



Re: [ovirt-users] critical production issue for a vm

2017-12-07 Thread Donny Davis
This is just a shot in the dark, but have you tried to use the disk copy
feature? You can copy the disks back to where they were and try starting
the VM.

On Thu, Dec 7, 2017 at 7:48 AM, Nathanaël Blanchet wrote:

>
>
> On 06/12/2017 at 15:56, Maor Lipchuk wrote:
>
>
>
> On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot wrote:
>
>> On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:
>>
>>> Hi all,
>>>
>>> I'm about to lose one very important VM. I shut down this VM for
>>> maintenance and then I moved its four disks to a newly created LUN. This VM
>>> has 2 snapshots.
>>>
>>> After a successful move, the VM refuses to start with this message:
>>>
>>> Bad volume specification {u'index': 0, u'domainID':
>>> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
>>> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
>>> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
>>> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>>> u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional':
>>> u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>>> 'truesize': '2147483648', u'poolID':
>>> u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared':
>>> u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>>>
>>> I tried to merge the snapshots, export, clone from snapshot, copy disks,
>>> or deactivate disks, and every action fails as soon as it involves a disk.
>>>
>>> I began to dd the LV group to get a new VM intended for a standalone
>>> libvirt/kvm host; the VM more or less boots up, but it is an outdated version
>>> from before the first snapshot. There are a lot of LVs when doing an "lvs | grep 961ea94a",
>>> supposedly the disk snapshots. Which of them must I choose to get the VM as it
>>> was just before shutdown? I'm not used to dealing with snapshots via virsh/libvirt,
>>> so some help would be much appreciated.
>>>
>>
> The disks which you want to copy should contain the entire volume chain.
> Based on the log you mentioned, it looks like this image is problematic:
>
>   storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177'
>   imageID:    '4a95614e-bf1d-407c-aa72-2df414abcb7a'
>   volumeID:   'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'
>
> What if you try to deactivate this image and then run the VM, will it
> run?
>
> I already tried what you suggest, but the result is the same; moreover, this
> disk is part of a volume group, so I can't boot the VM without it.
>
>
>
>
>
>>
>>> Is there some command I'm not aware of to recover this VM within oVirt?
>>>
>>> Thank you in advance.
>>>
>>>
>>>
>
>
>
>
>>
>>>
>>>
>> Besides specific oVirt answers, did you try to get information about the
>> snapshot tree with qemu-img info --backing-chain on the relevant /dev/...
>> logical volume?
>> As you know how to dd from LVs, you could extract every needed snapshot
>> file and rebuild your VM outside of oVirt.
>> Then take the time to re-import it later, safely.
>>
>> --
>> Nicolas ECARNOT
>>
>
>
>
>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5 
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
>
>
>


Re: [ovirt-users] critical production issue for a vm

2017-12-07 Thread Nathanaël Blanchet



On 06/12/2017 at 15:56, Maor Lipchuk wrote:



On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot wrote:


On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:

Hi all,

I'm about to lose one very important VM. I shut down this VM
for maintenance and then I moved its four disks to a newly
created LUN. This VM has 2 snapshots.

After a successful move, the VM refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID':
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0',
u'format': u'cow', u'bootOrder': u'1', u'discard': False,
u'volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b',
'apparentsize': '2147483648',
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
u'specParams': {}, u'readonly': u'false', u'iface': u'virtio',
u'optional': u'false', u'deviceId':
u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize':
'2147483648', u'poolID':
u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk',
u'shared': u'false', u'propagateErrors': u'off', u'type':
u'disk'}.

I tried to merge the snapshots, export, clone from snapshot,
copy disks, or deactivate disks, and every action fails as soon
as it involves a disk.

I began to dd the LV group to get a new VM intended for a
standalone libvirt/kvm host; the VM more or less boots up, but it
is an outdated version from before the first snapshot. There are a
lot of LVs when doing an "lvs | grep 961ea94a", supposedly the
disk snapshots. Which of them must I choose to get the VM as it
was just before shutdown? I'm not used to dealing with snapshots
via virsh/libvirt, so some help would be much appreciated.


The disks which you want to copy should contain the entire volume chain.
Based on the log you mentioned, it looks like this image is problematic:

  storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177'
  imageID:    '4a95614e-bf1d-407c-aa72-2df414abcb7a'
  volumeID:   'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'

What if you try to deactivate this image and then run the VM, will 
it run?
I already tried what you suggest, but the result is the same; moreover, 
this disk is part of a volume group, so I can't boot the VM without it.




Is there some command I'm not aware of to recover this VM within oVirt?

Thank you in advance.









Besides specific oVirt answers, did you try to get information
about the snapshot tree with qemu-img info --backing-chain on the
relevant /dev/... logical volume?
As you know how to dd from LVs, you could extract every needed
snapshot file and rebuild your VM outside of oVirt.
Then take the time to re-import it later, safely.

-- 
Nicolas ECARNOT








--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr



Re: [ovirt-users] critical production issue for a vm

2017-12-06 Thread Maor Lipchuk
On Wed, Dec 6, 2017 at 12:30 PM, Nicolas Ecarnot wrote:

> On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:
>
>> Hi all,
>>
>> I'm about to lose one very important VM. I shut down this VM for
>> maintenance and then I moved its four disks to a newly created LUN. This VM
>> has 2 snapshots.
>>
>> After a successful move, the VM refuses to start with this message:
>>
>> Bad volume specification {u'index': 0, u'domainID':
>> u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
>> u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
>> u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
>> u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional':
>> u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a',
>> 'truesize': '2147483648', u'poolID':
>> u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared':
>> u'false', u'propagateErrors': u'off', u'type': u'disk'}.
>>
>> I tried to merge the snapshots, export, clone from snapshot, copy disks,
>> or deactivate disks, and every action fails as soon as it involves a disk.
>>
>> I began to dd the LV group to get a new VM intended for a standalone
>> libvirt/kvm host; the VM more or less boots up, but it is an outdated version
>> from before the first snapshot. There are a lot of LVs when doing an "lvs | grep 961ea94a",
>> supposedly the disk snapshots. Which of them must I choose to get the VM as it was
>> just before shutdown? I'm not used to dealing with snapshots via virsh/libvirt,
>> so some help would be much appreciated.
>>
>
The disks which you want to copy should contain the entire volume chain.
Based on the log you mentioned, it looks like this image is problematic:

  storage id: '961ea94a-aced-4dd0-a9f0-266ce1810177'
  imageID:    '4a95614e-bf1d-407c-aa72-2df414abcb7a'
  volumeID:   'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b'

What if you try to deactivate this image and then run the VM, will it run?




>
>> Is there some command I'm not aware of to recover this VM within oVirt?
>>
>> Thank you in advance.
>>
>>
>>




>
>>
>>
> Besides specific oVirt answers, did you try to get information about the
> snapshot tree with qemu-img info --backing-chain on the relevant /dev/...
> logical volume?
> As you know how to dd from LVs, you could extract every needed snapshot
> file and rebuild your VM outside of oVirt.
> Then take the time to re-import it later, safely.
>
> --
> Nicolas ECARNOT
>


Re: [ovirt-users] critical production issue for a vm

2017-12-06 Thread Nicolas Ecarnot

On 06/12/2017 at 11:21, Nathanaël Blanchet wrote:

Hi all,

I'm about to lose one very important VM. I shut down this VM for 
maintenance and then I moved its four disks to a newly created LUN. This 
VM has 2 snapshots.


After a successful move, the VM refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': 
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': 
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', 
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, 
u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', 
u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': 
'2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', 
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', 
u'type': u'disk'}.


I tried to merge the snapshots, export, clone from snapshot, copy disks, 
or deactivate disks, and every action fails as soon as it involves a disk.


I began to dd the LV group to get a new VM intended for a standalone 
libvirt/kvm host; the VM more or less boots up, but it is an outdated 
version from before the first snapshot. There are a lot of LVs when doing 
an "lvs | grep 961ea94a", supposedly the disk snapshots. Which of them must 
I choose to get the VM as it was just before shutdown? I'm not used to 
dealing with snapshots via virsh/libvirt, so some help would be much appreciated.


Is there some command I'm not aware of to recover this VM within oVirt?

Thank you in advance.






Besides specific oVirt answers, did you try to get information about the 
snapshot tree with qemu-img info --backing-chain on the relevant 
/dev/... logical volume?
As you know how to dd from LVs, you could extract every needed snapshot 
file and rebuild your VM outside of oVirt.

Then take the time to re-import it later, safely.
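
To make that suggestion concrete, a minimal sketch, assuming an oVirt block 
storage domain (where, as far as I understand, the VG is named after the 
storage domain UUID and each volume LV carries IU_/PU_ tags for its image 
group and parent volume); all names and paths are placeholders:

# List the LVs of the storage domain VG with their tags; IU_<uuid> groups
# the LVs of one disk image and PU_<uuid> names a volume's parent, so the
# volume that no other LV lists as its parent is the active top of the chain
lvs -o lv_name,lv_size,lv_tags <storage-domain-vg>

# Activate the volumes of the chain, then copy each one out of LVM into a
# plain file usable by a standalone libvirt/kvm host
lvchange -ay /dev/<storage-domain-vg>/<volume-uuid>
dd if=/dev/<storage-domain-vg>/<volume-uuid> of=/var/tmp/<volume-uuid>.img bs=1M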

--
Nicolas ECARNOT


[ovirt-users] critical production issue for a vm

2017-12-06 Thread Nathanaël Blanchet

Hi all,

I'm about to lose one very important VM. I shut down this VM for 
maintenance and then I moved its four disks to a newly created LUN. This 
VM has 2 snapshots.


After a successful move, the VM refuses to start with this message:

Bad volume specification {u'index': 0, u'domainID': 
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': 
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', 
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, 
u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', 
u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': 
'2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', 
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', 
u'type': u'disk'}.


I tried to merge the snapshots, export, clone from snapshot, copy disks, 
or deactivate disks, and every action fails as soon as it involves a disk.


I began to dd the LV group to get a new VM intended for a standalone 
libvirt/kvm host; the VM more or less boots up, but it is an outdated 
version from before the first snapshot. There are a lot of LVs when doing 
an "lvs | grep 961ea94a", supposedly the disk snapshots. Which of them must 
I choose to get the VM as it was just before shutdown? I'm not used to 
dealing with snapshots via virsh/libvirt, so some help would be much appreciated.


Is there some command I'm not aware of to recover this VM within oVirt?

Thank you in advance.

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

 Event ID: -1, Message: VM hortensia was started by sblanc...@levant.abes.fr@abes.fr-authz (Host: aquilon).
2017-12-06 11:01:16,292+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec' was reported as Down on VDS 'b692c250-4f71-4569-801f-6bfd3b8f50b9'(aquilon)
2017-12-06 11:01:16,294+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] START, DestroyVDSCommand(HostName = aquilon, DestroyVmVDSCommandParameters:{runAsync='true', hostId='b692c250-4f71-4569-801f-6bfd3b8f50b9', vmId='f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 6ddce93f
2017-12-06 11:01:17,301+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] FINISH, DestroyVDSCommand, log id: 6ddce93f
2017-12-06 11:01:17,301+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'(hortensia) moved from 'WaitForLaunch' --> 'Down'
2017-12-06 11:01:17,399+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] EVENT_ID: VM_DOWN_ERROR(119), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM hortensia is down with error. Exit message: Bad volume specification {u'index': 0, u'domainID': u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format': u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID': u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648', u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'optional': u'false', u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize': '2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350', u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.
2017-12-06 11:01:17,400+01 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'(hortensia) to rerun treatment
2017-12-06 11:01:17,404+01 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM 'f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec'. Called from VDS 'aquilon'
2017-12-06 11:01:17,466+01 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-39) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Correlation ID: d6fc6f3b-b3b2-466d-8fcd-c145d3cf645a, Job ID: 5674a186-14c2-46f3-9008-99fd9d3fd979, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to run VM hortensia on Host aquilon.
2017-12-06 11:01:17,474+01 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-7-thread-39) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f337aa89-6e4e-4cc9-b78e-a5bd9ee946ec=VM]', sharedLocks=''}'
2017-12-06 11:01:17,525+01 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-7-thread-39) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsyn