Re: [ovirt-users] Disks Snapshot

2016-03-24 Thread Marcelo Leandro
The bug has been created:
https://bugzilla.redhat.com/show_bug.cgi?id=1321018

Sorry for the delay, I had trouble opening the bug in Bugzilla.

Thanks.

2016-03-14 13:40 GMT-03:00 Nir Soffer :

> On Mon, Mar 14, 2016 at 6:11 PM, Marcelo Leandro 
> wrote:
> > All the disks in the
> >
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> > are deleted snapshots that were not removed. The disk no longer
> > contains any snapshot.
> > In
> >
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> > there should be just one disk left after the merge.
> >
> > Right?
>
> Yes, this seems to be a bug when doing a merge on a host which is not
> the SPM.
>
> According to the log you attached (vdsm.log.5):
> - we do not deactivate the lv after the merge
> - therefore the link /dev/vgname/lvname is not deleted
> - we don't delete the link at /rhev/data-center
> - we don't delete the links at /run/vdsm/storage
>
> The links under /run/vdsm/storage and /rhev/data-center should
> be deleted when hot-unplugging this disk, or when stopping the vm.
>
> Please file an oVirt/vdsm bug for this and include the information
> from this thread.
>
> Nir
>
> >
> > On 14/03/2016 12:41, "Marcelo Leandro"  wrote:
> >>
> >> Are you talking about the /dev/vgname/lvname link or the links under
> >> /run/vdsm/storage/domain/image/volume,
> >> or /rhev/data-center/pool/domain/image/volume?
> >>
> >> in /rhev/data-center/pool/domain/image/volume
> >>
> >>
> >> /dev/vgname/lvname is created or removed by udev rules when the lv is
> >> activated or deactivated.
> >> To understand if this is the issue, can you show the output of:
> >>
> >> pvscan --cache
> >> return:
> >> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
> >> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#
> >>
> >>
> >> lvs vgname
> >> return:
> >>   06d35bed-445f-453b-a1b5-cf1a26e21d57
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  19.00g
> >>   0bad7a90-e6d5-4f80-9e77-276092989ec3
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   1.00g
> >>   12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 502.00g
> >>   191eb95f-2604-406b-ad90-1387cd4df7aa
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  10.00g
> >>   235da77a-8713-4bdf-bb3b-4c6478b0ffe2
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   1.68t
> >>   289b1789-e65a-4725-95fe-7b1a59208b45
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  15.00g
> >>   2d1cd019-f547-47c9-b360-0247f5283563
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  14.00g
> >>   2e59f7f2-9e30-460e-836a-5e0d3d625059
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  27.50g
> >>   2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
> >>   3d01ae03-ee4e-4fc2-aedd-6fc757f84f22
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 202.00g
> >>   4626025f-53ab-487a-9f95-35ae65393f03
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   6.00g
> >>   5dbb5762-6828-4c95-9cd1-d05896758af7
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 100.00g
> >>   5e1461fc-c609-479d-9627-e88936fb15ed
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  11.00g
> >>   64800fa4-85c2-4567-9605-6dc8ed5fec52
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  39.00g
> >>   661293e4-26ef-4c2c-903b-442a2b7fb5c6
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  13.00g
> >>   79e4e84b-370a-4d6d-9683-197dabb591c2
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   5.12g
> >>   7a3a6929-973e-4eec-bef0-1b99101e850d
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  20.00g
> >>   7a79ae4f-4a47-4ce2-8570-95efc7774f7b
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  80.00g
> >>   828d4c13-62c5-4d23-b0cc-e4ec88928c1f
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
> >>   871874e8-0d89-4f13-962a-3d8175194130
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  54.00g
> >>   a0a9aac2-d387-4148-a8a0-a906cfc1b513
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 240.00g
> >>   aa397814-43d4-42f7-9151-fd6d9f6d0b7f
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  22.00g
> >>   b3433da9-e6b5-4ab4-9aed-47a698079a62
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  55.00g
> >>   b47f58e0-d576-49be-b8aa-f30581a0373a
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 124.00g
> >>   b5174aaa-b4ed-48e2-ab60-4bd51edde175
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   4.00g
> >>   b8027a73-2d37-4df6-a2ac-4782859b749f
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
> >>   b86ed4a4-c922-4567-98b4-bace49d258f6
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
> >>   ba8a3a28-1dd5-4072-bcd1-f8155fade47a
> >> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
> >>   

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 6:11 PM, Marcelo Leandro  wrote:
> All the disks in the
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> are deleted snapshots that were not removed. The disk no longer contains any snapshot.
> In
> /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
> there should be just one disk left after the merge.
>
> Right?

Yes, this seems to be a bug when doing a merge on a host which is not
the SPM.

According to the log you attached (vdsm.log.5):
- we do not deactivate the lv after the merge
- therefore the link /dev/vgname/lvname is not deleted
- we don't delete the link at /rhev/data-center
- we don't delete the links at /run/vdsm/storage

The links under /run/vdsm/storage and /rhev/data-center should
be deleted when hot-unplugging this disk, or when stopping the vm.
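
As a rough manual cleanup sketch only (not an official vdsm procedure, and
only after confirming with lvs and qemu-img that the leftover volume really
is an already-merged snapshot no VM still uses), the stale lv and links could
be removed by hand, e.g.:

lvchange -an c2dc0101-748e-4a7b-9913-47993eaa52bd/LEFTOVER_VOLUME_UUID
rm /rhev/data-center/POOL_UUID/SD_UUID/images/IMAGE_UUID/LEFTOVER_VOLUME_UUID
rm /run/vdsm/storage/SD_UUID/IMAGE_UUID/LEFTOVER_VOLUME_UUID

The upper-case names are illustrative placeholders; deactivating the lv
should also make udev drop the /dev/vgname/lvname link.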

Please file an oVirt/vdsm bug for this and include the information
from this thread.

Nir

>
> On 14/03/2016 12:41, "Marcelo Leandro"  wrote:
>>
>> Are you talking about the /dev/vgname/lvname link or the links under
>> /run/vdsm/storage/domain/image/volume,
>> or /rhev/data-center/pool/domain/image/volume?
>>
>> in /rhev/data-center/pool/domain/image/volume
>>
>>
>> /dev/vgname/lvname is created or removed by udev rules when the lv is
>> activated or deactivated.
>> To understand if this is the issue, can you show the output of:
>>
>> pvscan --cache
>> return:
>> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
>> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#
>>
>>
>> lvs vgname
>> return:
>>   06d35bed-445f-453b-a1b5-cf1a26e21d57
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  19.00g
>>   0bad7a90-e6d5-4f80-9e77-276092989ec3
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   1.00g
>>   12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 502.00g
>>   191eb95f-2604-406b-ad90-1387cd4df7aa
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  10.00g
>>   235da77a-8713-4bdf-bb3b-4c6478b0ffe2
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   1.68t
>>   289b1789-e65a-4725-95fe-7b1a59208b45
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  15.00g
>>   2d1cd019-f547-47c9-b360-0247f5283563
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  14.00g
>>   2e59f7f2-9e30-460e-836a-5e0d3d625059
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  27.50g
>>   2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>>   3d01ae03-ee4e-4fc2-aedd-6fc757f84f22
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 202.00g
>>   4626025f-53ab-487a-9f95-35ae65393f03
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   6.00g
>>   5dbb5762-6828-4c95-9cd1-d05896758af7
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 100.00g
>>   5e1461fc-c609-479d-9627-e88936fb15ed
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  11.00g
>>   64800fa4-85c2-4567-9605-6dc8ed5fec52
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  39.00g
>>   661293e4-26ef-4c2c-903b-442a2b7fb5c6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  13.00g
>>   79e4e84b-370a-4d6d-9683-197dabb591c2
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   5.12g
>>   7a3a6929-973e-4eec-bef0-1b99101e850d
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  20.00g
>>   7a79ae4f-4a47-4ce2-8570-95efc7774f7b
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  80.00g
>>   828d4c13-62c5-4d23-b0cc-e4ec88928c1f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>>   871874e8-0d89-4f13-962a-3d8175194130
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  54.00g
>>   a0a9aac2-d387-4148-a8a0-a906cfc1b513
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 240.00g
>>   aa397814-43d4-42f7-9151-fd6d9f6d0b7f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  22.00g
>>   b3433da9-e6b5-4ab4-9aed-47a698079a62
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  55.00g
>>   b47f58e0-d576-49be-b8aa-f30581a0373a
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 124.00g
>>   b5174aaa-b4ed-48e2-ab60-4bd51edde175
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   4.00g
>>   b8027a73-2d37-4df6-a2ac-4782859b749f
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>>   b86ed4a4-c922-4567-98b4-bace49d258f6
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>>   ba8a3a28-1dd5-4072-bcd1-f8155fade47a
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>>   bb1bb92b-a8a7-486a-b171-18317e5d8095
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 534.00g
>>   c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>>   e88dfa8a-a9dc-4843-8c46-cc57ad700a04
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   4.00g
>>   f2ca34b7-c2b5-4072-b539-d1ee91282652
>> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 137.00g
>>   ids
>> 

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Marcelo Leandro
All the disks in the
/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
are deleted snapshots that were not removed. The disk no longer contains any snapshot.
In
/rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
there should be just one disk left after the merge.

Right?
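
One way to cross-check which volumes still belong to this image (assuming the
usual vdsm LV tagging, where each lv carries an IU_<image-uuid> tag) is to
compare the image directory with the tagged lvs:

ls -l /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d/c2dc0101-748e-4a7b-9913-47993eaa52bd/images/2f2c9196-831e-45bd-8824-ebd3325c4b1c/
lvs -o lv_name,lv_attr,lv_tags c2dc0101-748e-4a7b-9913-47993eaa52bd | grep IU_2f2c9196-831e-45bd-8824-ebd3325c4b1c

After a successful merge, both should show only the volume(s) that remain in
the chain.
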
On 14/03/2016 12:41, "Marcelo Leandro"  wrote:

>
>
> *Are you talking about the /dev/vgname/lvname link or the links under
> /run/vdsm/storage/domain/image/volume, or
> /rhev/data-center/pool/domain/image/volume?*
>
> in /rhev/data-center/pool/domain/image/volume
>
>
>
> */dev/vgname/lvname is created or removed by udev rules when the lv is
> activated or deactivated. To understand if this is the issue, can you show
> the output of:*
>
> *pvscan --cache*
> *return:*
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#
>
>
> *lvs vgname*
> *return:*
>   06d35bed-445f-453b-a1b5-cf1a26e21d57
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  19.00g
>   0bad7a90-e6d5-4f80-9e77-276092989ec3
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   1.00g
>   12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 502.00g
>   191eb95f-2604-406b-ad90-1387cd4df7aa
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  10.00g
>   235da77a-8713-4bdf-bb3b-4c6478b0ffe2
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   1.68t
>   289b1789-e65a-4725-95fe-7b1a59208b45
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  15.00g
>   2d1cd019-f547-47c9-b360-0247f5283563
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  14.00g
>   2e59f7f2-9e30-460e-836a-5e0d3d625059
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  27.50g
>   2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>   3d01ae03-ee4e-4fc2-aedd-6fc757f84f22
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 202.00g
>   4626025f-53ab-487a-9f95-35ae65393f03
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   6.00g
>   5dbb5762-6828-4c95-9cd1-d05896758af7
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 100.00g
>   5e1461fc-c609-479d-9627-e88936fb15ed
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  11.00g
>   64800fa4-85c2-4567-9605-6dc8ed5fec52
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  39.00g
>   661293e4-26ef-4c2c-903b-442a2b7fb5c6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  13.00g
>   79e4e84b-370a-4d6d-9683-197dabb591c2
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   5.12g
>   7a3a6929-973e-4eec-bef0-1b99101e850d
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  20.00g
>   7a79ae4f-4a47-4ce2-8570-95efc7774f7b
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  80.00g
>   828d4c13-62c5-4d23-b0cc-e4ec88928c1f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>   871874e8-0d89-4f13-962a-3d8175194130
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  54.00g
>   a0a9aac2-d387-4148-a8a0-a906cfc1b513
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 240.00g
>   aa397814-43d4-42f7-9151-fd6d9f6d0b7f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  22.00g
>   b3433da9-e6b5-4ab4-9aed-47a698079a62
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  55.00g
>   b47f58e0-d576-49be-b8aa-f30581a0373a
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 124.00g
>   b5174aaa-b4ed-48e2-ab60-4bd51edde175
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---   4.00g
>   b8027a73-2d37-4df6-a2ac-4782859b749f
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi--- 128.00m
>   b86ed4a4-c922-4567-98b4-bace49d258f6
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>   ba8a3a28-1dd5-4072-bcd1-f8155fade47a
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi---  21.00g
>   bb1bb92b-a8a7-486a-b171-18317e5d8095
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 534.00g
>   c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao  13.00g
>   e88dfa8a-a9dc-4843-8c46-cc57ad700a04
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao   4.00g
>   f2ca34b7-c2b5-4072-b539-d1ee91282652
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 137.00g
>   ids
>  c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-ao 128.00m
>   inbox
>  c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 128.00m
>   leases
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a-   2.00g
>   master
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a-   1.00g
>   metadata
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 512.00m
>   outbox
> c2dc0101-748e-4a7b-9913-47993eaa52bd -wi-a- 128.00m
>
>
>
> *ls -l /dev/vgname*
> *return:*
> [root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# ls -l
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/
> total 0
> lrwxrwxrwx. 1 root root 8 Mar 14 13:18
> 0569a2e0-275b-4702-8500-dff732fea13c -> ../dm-68
> lrwxrwxrwx. 1 root root 8 Mar 13 23:00
> 06d35bed-445f-453b-a1b5-cf1a26e21d57 -> ../dm-39
> 

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Marcelo Leandro
*Are you talking about the /dev/vgname/lvname link or the links under
/run/vdsm/storage/domain/image/volume, or
/rhev/data-center/pool/domain/image/volume?*

in /rhev/data-center/pool/domain/image/volume



*/dev/vgname/lvname is created or removed by udev rules when the lv is
activated or deactivated. To understand if this is the issue, can you show
the output of:*

*pvscan --cache*
*return:*
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# pvscan --cache
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]#


*lvs vgname*
*return:*
  06d35bed-445f-453b-a1b5-cf1a26e21d57 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  19.00g
  0bad7a90-e6d5-4f80-9e77-276092989ec3 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   1.00g
  12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 502.00g
  191eb95f-2604-406b-ad90-1387cd4df7aa c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  10.00g
  235da77a-8713-4bdf-bb3b-4c6478b0ffe2 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---   1.68t
  289b1789-e65a-4725-95fe-7b1a59208b45 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  15.00g
  2d1cd019-f547-47c9-b360-0247f5283563 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  14.00g
  2e59f7f2-9e30-460e-836a-5e0d3d625059 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  27.50g
  2ff7d36e-2ff9-466a-ad26-c1c67ba34dc6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  21.00g
  3d01ae03-ee4e-4fc2-aedd-6fc757f84f22 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 202.00g
  4626025f-53ab-487a-9f95-35ae65393f03 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   6.00g
  5dbb5762-6828-4c95-9cd1-d05896758af7 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 100.00g
  5e1461fc-c609-479d-9627-e88936fb15ed c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  11.00g
  64800fa4-85c2-4567-9605-6dc8ed5fec52 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  39.00g
  661293e4-26ef-4c2c-903b-442a2b7fb5c6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  13.00g
  79e4e84b-370a-4d6d-9683-197dabb591c2 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   5.12g
  7a3a6929-973e-4eec-bef0-1b99101e850d c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  20.00g
  7a79ae4f-4a47-4ce2-8570-95efc7774f7b c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  80.00g
  828d4c13-62c5-4d23-b0cc-e4ec88928c1f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 128.00m
  871874e8-0d89-4f13-962a-3d8175194130 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  54.00g
  a0a9aac2-d387-4148-a8a0-a906cfc1b513 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 240.00g
  aa397814-43d4-42f7-9151-fd6d9f6d0b7f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  22.00g
  b3433da9-e6b5-4ab4-9aed-47a698079a62 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  55.00g
  b47f58e0-d576-49be-b8aa-f30581a0373a c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 124.00g
  b5174aaa-b4ed-48e2-ab60-4bd51edde175 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---   4.00g
  b8027a73-2d37-4df6-a2ac-4782859b749f c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi--- 128.00m
  b86ed4a4-c922-4567-98b4-bace49d258f6 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  13.00g
  ba8a3a28-1dd5-4072-bcd1-f8155fade47a c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi---  21.00g
  bb1bb92b-a8a7-486a-b171-18317e5d8095 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 534.00g
  c7b5ca51-7ec5-467c-95c6-64bda2cb1fa7 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao  13.00g
  e88dfa8a-a9dc-4843-8c46-cc57ad700a04 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao   4.00g
  f2ca34b7-c2b5-4072-b539-d1ee91282652 c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 137.00g
  ids  c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-ao 128.00m
  inboxc2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 128.00m
  leases   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a-   2.00g
  master   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a-   1.00g
  metadata c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 512.00m
  outbox   c2dc0101-748e-4a7b-9913-47993eaa52bd
-wi-a- 128.00m



*ls -l /dev/vgname*
*return:*
[root@srv-qemu02 2f2c9196-831e-45bd-8824-ebd3325c4b1c]# ls -l
/dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/
total 0
lrwxrwxrwx. 1 root root 8 Mar 14 13:18 0569a2e0-275b-4702-8500-dff732fea13c
-> ../dm-68
lrwxrwxrwx. 1 root root 8 Mar 13 23:00 06d35bed-445f-453b-a1b5-cf1a26e21d57
-> ../dm-39
lrwxrwxrwx. 1 root root 8 Mar 13 23:00 0ab62c79-0dc1-43ef-9043-1f209e988bd9
-> ../dm-66
lrwxrwxrwx. 1 root root 8 Mar 14 15:22 0bad7a90-e6d5-4f80-9e77-276092989ec3
-> ../dm-86
lrwxrwxrwx. 1 root root 8 Mar 14 15:16 1196d06c-d3ea-40ee-841a-a3de379b09f9
-> ../dm-85
lrwxrwxrwx. 1 root root 8 Mar 13 22:33 12e1c2eb-2e4e-4714-8358-0a8f1bf44b2f
-> ../dm-32
lrwxrwxrwx. 1 root root 8 Mar 13 22:33 18b1b7e1-0f76-4e1b-aea1-c4b737dad26d
-> ../dm-64
lrwxrwxrwx. 1 root root 8 Mar  2 01:20 191eb95f-2604-406b-ad90-1387cd4df7aa
-> ../dm-40

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro  wrote:
>
>
> Is it cold (the VM is down) or live (the VM is up) merge (snapshot
> deletion)?
>
> VM is up
>
> What version are you running?
>
> oVirt Engine Version: 3.6.3.4-1.el7.centos
>
>
> Can you please share engine and vdsm logs?
>
> yes.

Looking in your vdsm log, I see this error (454 times in 6 hours),
which looks like a bug:

periodic/5::ERROR::2016-03-12
09:28:02,847::executor::188::Executor::(_execute_task) Unhandled
exception in 
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 186,
in _execute_task
callable()
  File "/usr/share/vdsm/virt/periodic.py", line 279, in __call__
self._execute()
  File "/usr/share/vdsm/virt/periodic.py", line 324, in _execute
self._vm.updateNumaInfo()
  File "/usr/share/vdsm/virt/vm.py", line 5071, in updateNumaInfo
self._numaInfo = numaUtils.getVmNumaNodeRuntimeInfo(self)
  File "/usr/share/vdsm/numaUtils.py", line 116, in getVmNumaNodeRuntimeInfo
vnode_index = str(vcpu_to_vnode[vcpu_id])
KeyError: 1

Adding Francesco and Martin to look at this.
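
For reference, that count can be roughly reproduced with a plain grep over
the attached log, e.g.:

grep -c 'Unhandled exception' /var/log/vdsm/vdsm.log.5

(adjust the pattern and path to your setup).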

>
> Please note that at some point we try to verify that the image was removed
> by running getVolumeInfo; hence, the "volume not found" error is expected.
> The thing is, you say that the volume does exist.
> Can you run the following command on the host:
>
> vdsClient -s 0 getVolumeInfo
>
> the command returns:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0
> getVolumeInfo  c2dc0101-748e-4a7b-9913-47993eaa52bd
> 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43
> 948d0453-1992-4a3c-81db-21248853a88a
> Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
>
> after restarting the host the vm was running on, the disk links in the
> image_group_id directory were broken but were not removed.
>
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
> Your question is not clear. Can you explain what the unexpected behavior is?
>
> shouldn't the link to the lv be deleted after deleting the snapshot?
>
>
> Thanks
>
> 2016-03-14 10:14 GMT-03:00 Nir Soffer :
>>
>> On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro 
>> wrote:
>> > Good morning
>> >
>> > I have a question: when I take a snapshot a new lv is created; however,
>> > when I delete this snapshot the lv is not removed. Is that right?
>>
>> Your question is not clear. Can you explain what the unexpected
>> behavior is?
>>
>> To check whether an lv was created or removed by oVirt, you can do:
>>
>> pvscan --cache
>> lvs vg-uuid
>>
>> Nir
>>
>> >
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be
>> > c598bb22-a386-4908-bfa1-7c44bd764c96
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
>> > total 0
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
>> > 3fba372c-4c39-4843-be9e-b358b196331d ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
>> >
>> > 

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Mon, Mar 14, 2016 at 5:05 PM, Marcelo Leandro  wrote:
>
>
> Is it cold (the VM is down) or live (the VM is up) merge (snapshot
> deletion)?
>
> VM is up
>
> What version are you running?
>
> oVirt Engine Version: 3.6.3.4-1.el7.centos
>
>
> Can you please share engine and vdsm logs?
>
> yes.
>
> Please note that at some point we try to verify that the image was removed
> by running getVolumeInfo; hence, the "volume not found" error is expected.
> The thing is, you say that the volume does exist.
> Can you run the following command on the host:
>
> vdsClient -s 0 getVolumeInfo
>
> the command returns:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# vdsClient -s 0
> getVolumeInfo  c2dc0101-748e-4a7b-9913-47993eaa52bd
> 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43
> 948d0453-1992-4a3c-81db-21248853a88a
> Volume does not exist: ('948d0453-1992-4a3c-81db-21248853a88a',)
>
> after restarting the host the vm was running on, the disk links in the
> image_group_id directory were broken but were not removed.
>
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:04 215a902a-1b99-403b-a648-21977dd0fa78
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/215a902a-1b99-403b-a648-21977dd0fa78
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31 3fba372c-4c39-4843-be9e-b358b196331d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44 5097df27-c676-4ee7-af89-ecdaed2c77be
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 10:13 948d0453-1992-4a3c-81db-21248853a88a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/948d0453-1992-4a3c-81db-21248853a88a
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30 b47f58e0-d576-49be-b8aa-f30581a0373a
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01 c598bb22-a386-4908-bfa1-7c44bd764c96
> ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
> Your question is not clear. Can you explain what the unexpected behavior is?
>
> shouldn't the link to the lv be deleted after deleting the snapshot?

Are you talking about the /dev/vgname/lvname link or the links under
/run/vdsm/storage/domain/image/volume,
or /rhev/data-center/pool/domain/image/volume?

/dev/vgname/lvname is created or removed by udev rules when the lv is activated or deactivated.
To understand if this is the issue, can you show the output of:

pvscan --cache
lvs vgname
ls -l /dev/vgname

Both before the merge and after the merge has completed.

The lv should not exist, and the links should be deleted.
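
For the opposite case you saw after restarting the host (links left behind
that point at an lv which is gone), dangling symlinks can be spotted with,
for example:

find /rhev/data-center/77e24b20-9d21-4952-a089-3c5c592b4e6d -xtype l
find /run/vdsm/storage -xtype l

-xtype l matches only symlinks whose target no longer exists.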

Links under /run/vdsm/storage or /rhev/data-center/ should be created
when starting a vm, and torn down when stopping a vm, hot-unplugging
a disk, or removing a snapshot.

To understand if there is an issue, we need the output of:

tree /run/vdsm/storage/domain/image
tree /rhev/data-center/pool/domain/images/image

Before and after the merge.

The links should be deleted.
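
A simple way to capture that before/after state, as a sketch (SD_UUID and
IMG_UUID stand for the storage domain and image uuids, e.g.
c2dc0101-748e-4a7b-9913-47993eaa52bd and
2f2c9196-831e-45bd-8824-ebd3325c4b1c in this thread):

tree /run/vdsm/storage/SD_UUID/IMG_UUID > /tmp/links-before.txt
# remove the snapshot, wait for the merge to complete, then:
tree /run/vdsm/storage/SD_UUID/IMG_UUID > /tmp/links-after.txt
diff -u /tmp/links-before.txt /tmp/links-after.txt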

Nir

>
>
> Thanks
>
> 2016-03-14 10:14 GMT-03:00 Nir Soffer :
>>
>> On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro 
>> wrote:
>> > Good morning
>> >
>> > I have a question: when I take a snapshot a new lv is created; however,
>> > when I delete this snapshot the lv is not removed. Is that right?
>>
>> Your question is not clear. Can you explain what the unexpected
>> behavior is?
>>
>> To check whether an lv was created or removed by oVirt, you can do:
>>
>> pvscan --cache
>> lvs vg-uuid
>>
>> Nir
>>
>> >
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
>> > 3fba372c-4c39-4843-be9e-b358b196331d
>> > b47f58e0-d576-49be-b8aa-f30581a0373a
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be
>> > c598bb22-a386-4908-bfa1-7c44bd764c96
>> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
>> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
>> > total 0
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
>> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
>> > 3fba372c-4c39-4843-be9e-b358b196331d ->
>> >
>> > /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
>> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
>> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
>> >
>> > 

Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Nir Soffer
On Sat, Mar 12, 2016 at 3:10 PM, Marcelo Leandro  wrote:
> Good morning
>
> I have a question: when I take a snapshot a new lv is created; however,
> when I delete this snapshot the lv is not removed. Is that right?

Your question is not clear. Can you explain what the unexpected behavior is?

To check whether an lv was created or removed by oVirt, you can do:

pvscan --cache
lvs vg-uuid
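
With the storage domain vg from this thread that would be, roughly:

pvscan --cache
lvs c2dc0101-748e-4a7b-9913-47993eaa52bd

and, if the installed qemu-img supports it, the whole backing chain of a
snapshot volume can be inspected in one shot from the image directory, e.g.:

qemu-img info --backing-chain 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366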

Nir

>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366  7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> 3fba372c-4c39-4843-be9e-b358b196331d  b47f58e0-d576-49be-b8aa-f30581a0373a
> 5097df27-c676-4ee7-af89-ecdaed2c77be  c598bb22-a386-4908-bfa1-7c44bd764c96
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
> total 0
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
> 3fba372c-4c39-4843-be9e-b358b196331d ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
> 5097df27-c676-4ee7-af89-ecdaed2c77be ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
> 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
> b47f58e0-d576-49be-b8aa-f30581a0373a ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
> c598bb22-a386-4908-bfa1-7c44bd764c96 ->
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
>
>
>
> disks snapshot:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 3fba372c-4c39-4843-be9e-b358b196331d
> image: 3fba372c-4c39-4843-be9e-b358b196331d
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> file format: qcow2
> virtual size: 112G (120259084288 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: 
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> backing file format: raw
> Format specific information:
> compat: 0.10
> refcount bits: 16
>
>
> disk base:
> [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> b47f58e0-d576-49be-b8aa-f30581a0373a
> image: b47f58e0-d576-49be-b8aa-f30581a0373a
> file format: raw
> virtual size: 112G (120259084288 bytes)
> disk size: 0
>
>
> Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disks Snapshot

2016-03-14 Thread Ala Hino
Hi Marcelo,

Is it cold (the VM is down) or live (the VM is up) merge (snapshot
deletion)?
What version are you running?
Can you please share engine and vdsm logs?

Please note that at some point we try to verify that the image was removed
by running getVolumeInfo; hence, the "volume not found" error is expected.
The thing is, you say that the volume does exist.
Can you run the following command on the host:

vdsClient -s 0 getVolumeInfo
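
For reference, the verb takes the storage domain, pool, image and volume
uuids as arguments, so with the ids seen elsewhere in this thread the full
invocation looks like:

vdsClient -s 0 getVolumeInfo c2dc0101-748e-4a7b-9913-47993eaa52bd 77e24b20-9d21-4952-a089-3c5c592b4e6d 93633835-d709-4ebb-9317-903e62064c43 948d0453-1992-4a3c-81db-21248853a88a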

Thank you,
Ala


On Sat, Mar 12, 2016 at 3:35 PM, Marcelo Leandro 
wrote:

> I see this error in the log:
> Mar 12, 2016 10:33:40 AM
> VDSM Host04 command failed: Volume does not exist:
> (u'948d0453-1992-4a3c-81db-21248853a88a',)
>
> but the volume exists:
> 948d0453-1992-4a3c-81db-21248853a88a
>
> 2016-03-12 10:10 GMT-03:00 Marcelo Leandro :
> > Good morning
> >
> > I have a question: when I take a snapshot a new lv is created; however,
> > when I delete this snapshot the lv is not removed. Is that right?
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > 3fba372c-4c39-4843-be9e-b358b196331d
> b47f58e0-d576-49be-b8aa-f30581a0373a
> > 5097df27-c676-4ee7-af89-ecdaed2c77be
> c598bb22-a386-4908-bfa1-7c44bd764c96
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# ls -l
> > total 0
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:28
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:31
> > 3fba372c-4c39-4843-be9e-b358b196331d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/3fba372c-4c39-4843-be9e-b358b196331d
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 08:44
> > 5097df27-c676-4ee7-af89-ecdaed2c77be ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5097df27-c676-4ee7-af89-ecdaed2c77be
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:23
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 12 09:12
> > 7d9b6ed0-1125-4215-ab76-37bcda3f6c2d ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/7d9b6ed0-1125-4215-ab76-37bcda3f6c2d
> > lrwxrwxrwx. 1 vdsm kvm 78 Nov 27 22:30
> > b47f58e0-d576-49be-b8aa-f30581a0373a ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/b47f58e0-d576-49be-b8aa-f30581a0373a
> > lrwxrwxrwx. 1 vdsm kvm 78 Mar 11 22:01
> > c598bb22-a386-4908-bfa1-7c44bd764c96 ->
> >
> /dev/c2dc0101-748e-4a7b-9913-47993eaa52bd/c598bb22-a386-4908-bfa1-7c44bd764c96
> >
> >
> >
> > disks snapshot:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > image: 27a8bca3-f984-4f67-9dd2-9e2fc5a5f366
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 3fba372c-4c39-4843-be9e-b358b196331d
> > image: 3fba372c-4c39-4843-be9e-b358b196331d
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > image: 5aaf9ce9-d7ad-4607-aab9-2e239ebaed51
> > file format: qcow2
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> > cluster_size: 65536
> > backing file:
> ../93633835-d709-4ebb-9317-903e62064c43/b47f58e0-d576-49be-b8aa-f30581a0373a
> > backing file format: raw
> > Format specific information:
> > compat: 0.10
> > refcount bits: 16
> >
> >
> > disk base:
> > [root@srv-qemu03 93633835-d709-4ebb-9317-903e62064c43]# qemu-img info
> > b47f58e0-d576-49be-b8aa-f30581a0373a
> > image: b47f58e0-d576-49be-b8aa-f30581a0373a
> > file format: raw
> > virtual size: 112G (120259084288 bytes)
> > disk size: 0
> >
> >
> > Thanks.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users