Re: [ovirt-users] leftover of disk moving operation

2018-02-13 Thread Gianluca Cecchi
On Wed, Jan 31, 2018 at 5:01 PM, Elad Ben Aharon 
wrote:

> Just delete the image directory 
> (remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449)
> located under  /rhev/data-center/%spuuid%/%sduuid%/images/
>
> As for the LV, please try the following:
>
> dmsetup remove /dev/mapper/%device_name% --> device name could be fetched
> by 'dmsetup table'
>

Hello,
for that oVirt environment I finished moving the disks from source to
target, so I could power off the whole test infra, and after the node reboot I
didn't have the problem again (also because I had force-removed the source
storage domain), so I could not investigate further.

But I have "sort of" reproduced the problem in another environment based on FC
SAN storage.
The problem happened with a VM having 4 disks: one 50GB boot disk and three
other disks of 100GB, 200GB and 200GB.
The VM was powered off, and the deletion of the 3 "big" disks (tried both with
and without deactivating the disk before removal) produced for all of them the
same error seen during the move in my oVirt environment above:

command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume: (['
Cannot remove Logical Volume:

So I think the problem is related to the SAN itself, perhaps when working with
relatively "big" disks.
Another suspect is a problem with LVM filtering at the hypervisor level, because
all 3 disks had a PV/VG/LV structure inside, created on the whole virtual disk
at VM level.
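
A quick way to check whether the hypervisor is really scanning the guests' own
volumes could be something like this (just a sketch):

# lvs -o vg_name,lv_name,devices

If guest VGs show up whose devices point at storage domain LVs, an LVM filter on
the host is probably what is missing; purely as an illustration (the accepted
device is a placeholder, the real list depends on the host's own PVs), something
along these lines in the devices section of /etc/lvm/lvm.conf:

filter = [ "a|^/dev/sda2$|", "r|.*|" ]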

As this new environment is RHEV with RHV-H hosts (layer
rhvh-4.1-0.20171002.0+1), I opened case #02034032, if anyone is interested.

The big problem is that the disk has been removed at the VM side, but at the
storage domain side the space has not been released, so if you have to
create other "big" disks you could run out of space because of this.
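
Since a block storage domain is an LVM VG named after the domain UUID, a rough
way to check from a host whether the space was actually given back could be
(only a sketch, the UUID being whatever the domain's is):

# vgs -o vg_name,vg_size,vg_free <storage_domain_uuid>

If vg_free does not grow after the disk removal, the old LVs are most likely
still allocated inside the domain.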

Gianluca


Re: [ovirt-users] leftover of disk moving operation

2018-01-31 Thread Elad Ben Aharon
Just delete the image directory
(remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449)
located under  /rhev/data-center/%spuuid%/%sduuid%/images/

As for the LV, please try the following:

dmsetup remove /dev/mapper/%device_name% --> device name could be fetched
by 'dmsetup table'
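
Roughly, the two steps could look like this (%spuuid%/%sduuid% as above; the
grep pattern is a placeholder for the volume UUID of the leftover LV, and the
name to remove is whatever 'dmsetup table' actually prints, typically
vguuid-lvuuid with the dashes doubled):

# rm -rf /rhev/data-center/%spuuid%/%sduuid%/images/remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449
# dmsetup table | grep %lvuuid%
# dmsetup remove /dev/mapper/%name_printed_by_the_previous_command%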

On Wed, Jan 31, 2018 at 2:00 PM, Gianluca Cecchi 
wrote:

> On Wed, Jan 31, 2018 at 12:33 PM, Elad Ben Aharon 
> wrote:
>
>> You can correlate according to /rhev/data-center/%spuuid%/%sduuid%/images/
>> The image id you can take from:
>>
>> # lvs -o name,tags |less -S
>>
>> IU, under LV Tags, is the LV's image id, for example:
>>
>>   LV   LV Tags
>>
>>
>>  13f5b7c1-ad93-41f3-96eb-147709640d1a IU_3646e381-d237-4940-a02d-90bb90b1d45a,MD_5,PU_----
>>
>> On Tue, Jan 30, 2018 at 5:58 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Tue, Jan 30, 2018 at 4:51 PM, Elad Ben Aharon 
>>> wrote:
>>>
>>>> Please try:
>>>>
>>>> vdsClient -s 0 teardownImage   
>>>>
>>>
>>>
>>> How do I map spUUID, sdUUID and imgUUID ?
>>>
>>>
>>
> OK, thanks.
> In my case, even after detaching and removing the original SD, putting the
> target SD (now the only one attached to the DC) into maintenance, putting the
> host into maintenance and rebooting the host, I still get these old LVs
> with the "ao" flags (SAN visibility to the old source SD is still there... I'm
> going to remove it once cleaned up).
>
> Could it depend on the fact that there was another host in the DC that I
> put into maintenance and then powered off (because at the moment only one
> host was able to see the target SD where I moved all the disks)?
> So a sort of retained lock?
>
> Anyway, after restarting this host and activating it, with 2 VMs also
> running, I have this kind of LV situation related to the disks.
> Note that I also included vg_name, so you can see the 3 VGs:
>
> be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 and f7b481c8-b744-43d4-9faa-0e494a308490
> are the two VGs on the old source storage domains from which I moved the disks.
>
> c0097b1a-a387-4ffa-a62b-f9e6972197ef is the VG on the new target storage domain
> to which I moved the disks.
>
> #  lvs -o vg_name,name,tags
>
>   be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 9094cc5a-a05c-47e0-8ad9-0fef274ea97b
> IU__remove_me_eb553b69-fafb-40de-9892-66b9f2826fd3,MD_6,PU_----
>   be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 c1250065-e137-46da-9c11-38b095ff93a7
> IU__remove_me_de47a038-28f9-4131-9650-042f73088c46,MD_7,PU_----
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 014ce954-d645-43f6-823a-73112b80f5ae
> IU_72fb2b45-e1b3-4ea3-a95d-c19aa4b5aad4,MD_20,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 09bdad3f-eae1-4449-bb6e-3c3f85e690e9
> IU_7e97309c-5aad-468c-81fc-e2179e4dea3a,MD_6,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 0afa6f2e-e4eb-481a-955c-4569e8eb737a
> IU_4721b70f-7146-490f-b351-c4fbff05f62e,MD_7,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 45e20ac8-f3c6-462b-a2c3-30f0c8262f84
> IU_a50a9024-e951-4755-9a71-c47b93f99d58,MD_5,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 4d27c0e1-546b-48ba-80fb-95dcd87b5583
> IU_3f949bfa-749b-4201-aa94-13bbad45132a,MD_17,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 6d23ec3e-79fb-47f0-85ff-08480476ea68
> IU_8eb435f3-e8c1-4042-8180-e9f342b2e449,MD_9,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 75aacb1d-5ce5-4b72-b481-646abfa7a652
> IU_190135c5-710d-4ca4-9e0c-cf19e7ca0f8b,MD_11,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 7f695bba-eba1-4ec2-ad24-3a0a5f03ea2d
> IU_2aef413b-b2d1-4a7c-8fbb-df81c93050ea,MD_19,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 8b48d5bb-a643-4c2b-9979-1a1be532ae71
> IU_d258a707-0bb6-47d4-88b7-e0bf01b59f78,MD_16,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 9094cc5a-a05c-47e0-8ad9-0fef274ea97b
> IU_eb553b69-fafb-40de-9892-66b9f2826fd3,MD_14,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef a1ebb202-458a-4c3a-8020-adbd432cdc75
> IU_bc0de77d-796b-4c8d-94b3-0829110a542f,MD_12,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef a20bb16e-7c7c-4ed4-85c0-cbf297048a8e
> IU_6d8d41d5-2502-4392-8d4a-cca07581ec1e,MD_8,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef a2d5f043-55c2-4701-ac8c-ccb796a799d1
> IU_c4cb42d2-c2ae-4336-8905-f743576869bc,MD_10,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef acb04c1e-488e-469f-9905-21540dfe4361
> IU_48ca28fb-9c08-44ff-8599-7ba767a96b32,MD_13,PU_----
>
>   c0097b1a-a387-4ffa-a62b-f9e6972197ef 

Re: [ovirt-users] leftover of disk moving operation

2018-01-31 Thread Gianluca Cecchi
On Wed, Jan 31, 2018 at 12:33 PM, Elad Ben Aharon 
wrote:

> You can correlate according to /rhev/data-center/%spuuid%/%sduuid%/images/
> The image id you can take from:
>
> # lvs -o name,tags |less -S
>
> IU, under LV Tags, is the LV's image id, for example:
>
>   LV   LV Tags
>
>
>  13f5b7c1-ad93-41f3-96eb-147709640d1a IU_3646e381-d237-4940-a02d-90bb90b1d45a,MD_5,PU_----
>
> On Tue, Jan 30, 2018 at 5:58 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Tue, Jan 30, 2018 at 4:51 PM, Elad Ben Aharon 
>> wrote:
>>
>>> Please try:
>>>
>>> vdsClient -s 0 teardownImage   
>>>
>>
>>
>> How do I map spUUID, sdUUID and imgUUID ?
>>
>>
>
OK, thanks.
In my case, even after detaching and removing the original SD, putting the
target SD (now the only one attached to the DC) into maintenance, putting the
host into maintenance and rebooting the host, I still get these old LVs with
the "ao" flags (SAN visibility to the old source SD is still there... I'm going
to remove it once cleaned up).

Could it depend on the fact that there was another host in the DC that I
put into maintenance and then powered off (because at the moment only one
host was able to see the target SD where I moved all the disks)?
So a sort of retained lock?

Anyway, after restarting this host and activating it, with 2 VMs also running,
I have this kind of LV situation related to the disks.
Note that I also included vg_name, so you can see the 3 VGs:

be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 and f7b481c8-b744-43d4-9faa-0e494a308490
are the two VGs on the old source storage domains from which I moved the disks.

c0097b1a-a387-4ffa-a62b-f9e6972197ef is the VG on the new target storage domain
to which I moved the disks.

#  lvs -o vg_name,name,tags

  be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 9094cc5a-a05c-47e0-8ad9-0fef274ea97b
IU__remove_me_eb553b69-fafb-40de-9892-66b9f2826fd3,MD_6,PU_----
  be0c72ca-2dbc-4e02-ab48-5491ea0c01b7 c1250065-e137-46da-9c11-38b095ff93a7
IU__remove_me_de47a038-28f9-4131-9650-042f73088c46,MD_7,PU_----
  c0097b1a-a387-4ffa-a62b-f9e6972197ef 014ce954-d645-43f6-823a-73112b80f5ae
IU_72fb2b45-e1b3-4ea3-a95d-c19aa4b5aad4,MD_20,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 09bdad3f-eae1-4449-bb6e-3c3f85e690e9
IU_7e97309c-5aad-468c-81fc-e2179e4dea3a,MD_6,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 0afa6f2e-e4eb-481a-955c-4569e8eb737a
IU_4721b70f-7146-490f-b351-c4fbff05f62e,MD_7,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 45e20ac8-f3c6-462b-a2c3-30f0c8262f84
IU_a50a9024-e951-4755-9a71-c47b93f99d58,MD_5,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 4d27c0e1-546b-48ba-80fb-95dcd87b5583
IU_3f949bfa-749b-4201-aa94-13bbad45132a,MD_17,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 6d23ec3e-79fb-47f0-85ff-08480476ea68
IU_8eb435f3-e8c1-4042-8180-e9f342b2e449,MD_9,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 75aacb1d-5ce5-4b72-b481-646abfa7a652
IU_190135c5-710d-4ca4-9e0c-cf19e7ca0f8b,MD_11,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 7f695bba-eba1-4ec2-ad24-3a0a5f03ea2d
IU_2aef413b-b2d1-4a7c-8fbb-df81c93050ea,MD_19,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 8b48d5bb-a643-4c2b-9979-1a1be532ae71
IU_d258a707-0bb6-47d4-88b7-e0bf01b59f78,MD_16,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef 9094cc5a-a05c-47e0-8ad9-0fef274ea97b
IU_eb553b69-fafb-40de-9892-66b9f2826fd3,MD_14,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef a1ebb202-458a-4c3a-8020-adbd432cdc75
IU_bc0de77d-796b-4c8d-94b3-0829110a542f,MD_12,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef a20bb16e-7c7c-4ed4-85c0-cbf297048a8e
IU_6d8d41d5-2502-4392-8d4a-cca07581ec1e,MD_8,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef a2d5f043-55c2-4701-ac8c-ccb796a799d1
IU_c4cb42d2-c2ae-4336-8905-f743576869bc,MD_10,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef acb04c1e-488e-469f-9905-21540dfe4361
IU_48ca28fb-9c08-44ff-8599-7ba767a96b32,MD_13,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef c1250065-e137-46da-9c11-38b095ff93a7
IU_de47a038-28f9-4131-9650-042f73088c46,MD_15,PU_----

  c0097b1a-a387-4ffa-a62b-f9e6972197ef e26eac74-4445-4f3b-b2f5-ef048ae6d91b
IU_fe80f8f0-17d7-43b5-beb6-68983d9fd66c,MD_4,PU_----
  f7b481c8-b744-43d4-9faa-0e494a308490 45e20ac8-f3c6-462b-a2c3-30f0c8262f84
IU__remove_me_a50a9024-e951-4755-9a71-c47b93f99d58,MD_13,PU_----
  f7b481c8-b744-43d4-9faa-0e494a308490 4d27c0e1-546b-48ba-80fb-95dcd87b5583

Re: [ovirt-users] leftover of disk moving operation

2018-01-31 Thread Elad Ben Aharon
You can correlate according to /rhev/data-center/%spuuid%/%sduuid%/images/
The image id you can take from:

# lvs -o name,tags |less -S

IU, under LV Tags, is the LV's image id, for example:

  LV   LV Tags

 13f5b7c1-ad93-41f3-96eb-147709640d1a
IU_3646e381-d237-4940-a02d-90bb90b1d45a,MD_5,PU_----
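
In practice the mapping could be checked like this (the UUIDs are the ones from
the example above, used only as an illustration; %spuuid%/%sduuid% as before):

# lvs --noheadings -o vg_name,lv_name,lv_tags | grep IU_3646e381
# ls -l /rhev/data-center/%spuuid%/%sduuid%/images/3646e381-d237-4940-a02d-90bb90b1d45a

i.e. the part after IU_ in the tag is the directory name under images/, the VG
name is the sdUUID, and the LV name is the volume id inside that image.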

On Tue, Jan 30, 2018 at 5:58 PM, Gianluca Cecchi 
wrote:

> On Tue, Jan 30, 2018 at 4:51 PM, Elad Ben Aharon 
> wrote:
>
>> Please try:
>>
>> vdsClient -s 0 teardownImage   
>>
>
>
> How do I map spUUID, sdUUID and imgUUID ?
>
>


Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Gianluca Cecchi
On Tue, Jan 30, 2018 at 4:51 PM, Elad Ben Aharon 
wrote:

> Please try:
>
> vdsClient -s 0 teardownImage   
>


How do I map spUUID, sdUUID and imgUUID ?


Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Gianluca Cecchi
On Tue, Jan 30, 2018 at 4:36 PM, Gianluca Cecchi 
wrote:

> On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon 
> wrote:
>
>> Try to deactivate this LV:
>>
>> lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0
>> -85ff-08480476ea68
>>
>
> # lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
>   Logical volume 
> f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
> is used by another device.
>
>  # fuser /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
> #
>
> # ll /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
> lrwxrwxrwx. 1 root root 8 Jan 29 20:29 /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68 -> ../dm-42
> #
>
> # fuser /dev/dm-42
> #
>
> Can I try to restart vdsmd, perhaps?
>
> Gianluca
>


Further info about this LV:

# ls -l /sys/block/$(basename $(readlink
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68))/holders
total 0
lrwxrwxrwx. 1 root root 0 Jan 22 15:13 dm-47 -> ../../dm-47

which seems to be a different device from the dm-42 above.
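
To see what dm-47 actually is, something like this might help (dm-47 being the
holder reported above):

# ls -l /dev/mapper/ | grep -w dm-47
# dmsetup ls --tree

The first command shows which mapper name points at dm-47, the second prints the
whole device stacking, so whatever sits on top of the LV and keeps it open
should become visible.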


Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Elad Ben Aharon
Please try:

vdsClient -s 0 teardownImage   
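
If the teardown goes through, the image's volumes should end up deactivated; a
quick check before retrying the removal could be (the VG/LV below are the ones
already mentioned in this thread, used only as an example):

# lvs -o lv_name,lv_attr f7b481c8-b744-43d4-9faa-0e494a308490 | grep 6d23ec3e
# dmsetup info -c | grep 6d23ec3e

with the expectation that the 'a'/'o' attributes go away and no device-mapper
entry is left for that LV.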

On Tue, Jan 30, 2018 at 5:36 PM, Gianluca Cecchi 
wrote:

> On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon 
> wrote:
>
>> Try to deactivate this LV:
>>
>> lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0
>> -85ff-08480476ea68
>>
>
> # lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
>   Logical volume 
> f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
> is used by another device.
>
>  # fuser /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
> #
>
> # ll /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
> lrwxrwxrwx. 1 root root 8 Jan 29 20:29 /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68 -> ../dm-42
> #
>
> # fuser /dev/dm-42
> #
>
> Can I try to restart vdsmd, perhaps?
>
> Gianluca
>


Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Gianluca Cecchi
On Tue, Jan 30, 2018 at 4:29 PM, Elad Ben Aharon 
wrote:

> Try to deactivate this LV:
>
> lvchange -an /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-
> 47f0-85ff-08480476ea68
>

# lvchange -an
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
  Logical volume
f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
is used by another device.

 # fuser
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
#

# ll
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
lrwxrwxrwx. 1 root root 8 Jan 29 20:29
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
-> ../dm-42
#

# fuser /dev/dm-42
#

Can I try to restart vdsmd, perhaps?

Gianluca


Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Elad Ben Aharon
Try to deactivate this LV:

lvchange -an
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68

On Tue, Jan 30, 2018 at 5:25 PM, Gianluca Cecchi 
wrote:

> On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon 
> wrote:
>
>> In a case of disk migration failure with leftover LV on the destination
>> domain, lvremove is what is needed. Also, make sure to remove the image
>> directory on the destination domain (located under
>> /rhev/data-center/%spuuid%/%sduuid%/images/)
>>
>>
> OK, exactly the 2 steps I had done after noticing the broken link inside
> the image directory. I suspected something might have to be done at the rdbms
> level as well. Thanks for the confirmation.
> The bad part was that the VM crashed because of this error.
>
> Not a big problem in my case, as this is a test env.
> I had to move many disk images from SAN to SAN, about 200GB each, so
> I powered off the VMs and then moved the disks.
> I didn't get the problem at the destination anymore, but I did sometimes get
> the opposite: being unable to remove the LV at the source:
>
> command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume:
> (['Cannot remove Logical Volume: (u\'f7b481c8-b744-43d4-9faa-0e494a308490\',
> "[\'6d23ec3e-79fb-47f0-85ff-08480476ea68\']")'],)
>
> and I see that apparently the LV remains active/open (the "ao" flags) at the
> source...
> Even after putting the source storage domain, by now empty, into
> maintenance, at the host side I get this kind of thing:
>
>  # lvs f7b481c8-b744-43d4-9faa-0e494a308490
>   LV   VG
>  Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   45e20ac8-f3c6-462b-a2c3-30f0c8262f84 f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-ao 200.00g
>   4d27c0e1-546b-48ba-80fb-95dcd87b5583 f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-ao 380.00g
>   675d5f06-320a-4236-8d57-9ff7cc7eb200 f7b481c8-b744-43d4-9faa-0e494a308490
> -wi---  50.00g
>   6d23ec3e-79fb-47f0-85ff-08480476ea68 f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-ao 200.00g
>   823a79f2-b09a-4665-9dfa-8ccd2850225f f7b481c8-b744-43d4-9faa-0e494a308490
> -wi--- 128.00m
>   863062c3-b3b0-4494-ad2f-76a1c29c069a f7b481c8-b744-43d4-9faa-0e494a308490
> -wi--- 128.00m
>   a2d5f043-55c2-4701-ac8c-ccb796a799d1 f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-ao 200.00g
>   e26eac74-4445-4f3b-b2f5-ef048ae6d91b f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-ao 200.00g
>   ids  f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a- 128.00m
>   inboxf7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a- 128.00m
>   leases   f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a-   2.00g
>   master   f7b481c8-b744-43d4-9faa-0e494a308490
> -wi---   1.00g
>   metadata f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a- 512.00m
>   outbox   f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a- 128.00m
>   xleases  f7b481c8-b744-43d4-9faa-0e494a308490
> -wi-a-   1.00g
>
>
>  and for some hours (now I don't get it anymore, perhaps as a consequence of
> putting the SD into maintenance) the lvs command also exhibited this kind
> of output/error:
>
>
>   WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK on
> /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
> was already found on /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b.
>   WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl on
> /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
> was already found on /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84.
>   WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN on
> /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/6d23ec3e-79fb-47f0-85ff-08480476ea68
> was already found on /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68.
>   WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf on
> /dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/a2d5f043-55c2-4701-ac8c-ccb796a799d1
> was already found on /dev/f7b481c8-b744-43d4-9faa-
> 0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1.
>   WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK prefers device
> /dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
> because device is used by LV.
>   WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl prefers device
> /dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
> because device is used by LV.
>   WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN prefers device
> /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
> because device is used by LV.
>   WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf prefers device
> 

Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Gianluca Cecchi
On Tue, Jan 30, 2018 at 3:14 PM, Elad Ben Aharon 
wrote:

> In a case of disk migration failure with leftover LV on the destination
> domain, lvremove is what is needed. Also, make sure to remove the image
> directory on the destination domain (located under
> /rhev/data-center/%spuuid%/%sduuid%/images/)
>
>
OK, exactly the 2 steps I had done after noticing the broken link inside the
image directory. I suspected something might have to be done at the rdbms level
as well. Thanks for the confirmation.
The bad part was that the VM crashed because of this error.

Not a big problem in my case, as this is a test env.
I had to move many disk images from SAN to SAN, about 200GB each, so I
powered off the VMs and then moved the disks.
I didn't get the problem at the destination anymore, but I did sometimes get
the opposite: being unable to remove the LV at the source:

command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume:
(['Cannot remove Logical Volume:
(u\'f7b481c8-b744-43d4-9faa-0e494a308490\',
"[\'6d23ec3e-79fb-47f0-85ff-08480476ea68\']")'],)

and I see that apparently the LV remains active/open (the "ao" flags) at the
source...
Even after putting the source storage domain, by now empty, into
maintenance, at the host side I get this kind of thing:

 # lvs f7b481c8-b744-43d4-9faa-0e494a308490
  LV   VG
 Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  45e20ac8-f3c6-462b-a2c3-30f0c8262f84 f7b481c8-b744-43d4-9faa-0e494a308490
-wi-ao 200.00g
  4d27c0e1-546b-48ba-80fb-95dcd87b5583 f7b481c8-b744-43d4-9faa-0e494a308490
-wi-ao 380.00g
  675d5f06-320a-4236-8d57-9ff7cc7eb200 f7b481c8-b744-43d4-9faa-0e494a308490
-wi---  50.00g
  6d23ec3e-79fb-47f0-85ff-08480476ea68 f7b481c8-b744-43d4-9faa-0e494a308490
-wi-ao 200.00g
  823a79f2-b09a-4665-9dfa-8ccd2850225f f7b481c8-b744-43d4-9faa-0e494a308490
-wi--- 128.00m
  863062c3-b3b0-4494-ad2f-76a1c29c069a f7b481c8-b744-43d4-9faa-0e494a308490
-wi--- 128.00m
  a2d5f043-55c2-4701-ac8c-ccb796a799d1 f7b481c8-b744-43d4-9faa-0e494a308490
-wi-ao 200.00g
  e26eac74-4445-4f3b-b2f5-ef048ae6d91b f7b481c8-b744-43d4-9faa-0e494a308490
-wi-ao 200.00g
  ids  f7b481c8-b744-43d4-9faa-0e494a308490
-wi-a- 128.00m
  inboxf7b481c8-b744-43d4-9faa-0e494a308490
-wi-a- 128.00m
  leases   f7b481c8-b744-43d4-9faa-0e494a308490
-wi-a-   2.00g
  master   f7b481c8-b744-43d4-9faa-0e494a308490
-wi---   1.00g
  metadata f7b481c8-b744-43d4-9faa-0e494a308490
-wi-a- 512.00m
  outbox   f7b481c8-b744-43d4-9faa-0e494a308490
-wi-a- 128.00m
  xleases  f7b481c8-b744-43d4-9faa-0e494a308490
-wi-a-   1.00g
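
The 'o' in the attr column means the device is open. One way to see what is
stacked on top of one of those LVs (and might be what keeps it open) could be,
as a sketch:

# lsblk /dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68

If guest-level partitions or LVs show up as children there, the holder is most
likely something built on the guest's own data rather than anything from vdsm.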


 and for some hours (now I don't get it anymore, perhaps as a consequence of
putting the SD into maintenance) the lvs command also exhibited this kind
of output/error:


  WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b.
  WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84.
  WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/6d23ec3e-79fb-47f0-85ff-08480476ea68
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68.
  WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf on
/dev/c0097b1a-a387-4ffa-a62b-f9e6972197ef/a2d5f043-55c2-4701-ac8c-ccb796a799d1
was already found on
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1.
  WARNING: PV lp71QQ-9ozC-l2Yx-o9mS-ikpg-c8ft-DfUTyK prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/e26eac74-4445-4f3b-b2f5-ef048ae6d91b
because device is used by LV.
  WARNING: PV nRe20Y-XuD7-aFSl-INgl-WlkO-JGhK-zekrOl prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/45e20ac8-f3c6-462b-a2c3-30f0c8262f84
because device is used by LV.
  WARNING: PV NjcwGm-Rf1H-NV9p-eCjU-qmF8-WCkB-CdzLmN prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68
because device is used by LV.
  WARNING: PV 0dmwuU-1v6k-8weJ-yEg3-4Oup-w4Bp-GlNeRf prefers device
/dev/f7b481c8-b744-43d4-9faa-0e494a308490/a2d5f043-55c2-4701-ac8c-ccb796a799d1
because device is used by LV.
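
Those WARNINGs mean the host is seeing the guests' own PVs through both the old
and the new domain LVs, and that a guest LV is active on top of them ("used by
LV"), which would also explain the "ao" flags and the failed lvremove. If that
is confirmed, a possible cleanup sequence could be (only a sketch; %guest_vg%
stands for whatever VG 'pvs -o pv_name,vg_name' reports for those PVs):

# pvs -o pv_name,vg_name
# vgchange -an %guest_vg%
# lvchange -an f7b481c8-b744-43d4-9faa-0e494a308490/6d23ec3e-79fb-47f0-85ff-08480476ea68

after which the leftover LV should finally be removable with lvremove.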

I have been able to power on VMs with the new underlying storage domain
active.

Any hint on how to clean this up? I can shut down and restart this host (which
at the moment is the only one running), but a cleaner method could be of help
in other similar scenarios too...

Gianluca
Re: [ovirt-users] leftover of disk moving operation

2018-01-30 Thread Elad Ben Aharon
In a case of disk migration failure with leftover LV on the destination
domain, lvremove is what is needed. Also, make sure to remove the image
directory on the destination domain (located under
/rhev/data-center/%spuuid%/%sduuid%/images/)
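
Concretely, the two steps could look like this (the VG/LV UUIDs are the ones
from the error quoted below, while the image directory UUID is left as a
placeholder):

# lvremove c0097b1a-a387-4ffa-a62b-f9e6972197ef/a20bb16e-7c7c-4ed4-85c0-cbf297048a8e
# rm -rf /rhev/data-center/%spuuid%/c0097b1a-a387-4ffa-a62b-f9e6972197ef/images/%imgUUID%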

On Mon, Jan 29, 2018 at 5:25 PM, Gianluca Cecchi 
wrote:

> Hello,
> I had a problem during a disk migration from one storage to another in a
> 4.1.7 environment connected to SAN storage.
> Now, after deleting the live storage migration snapshot, I want to retry
> (with the VM powered off), but at the destination the logical volume still
> exists and was not pruned after the initial failure.
>
> I get
>
> HSMGetAllTasksStatusesVDS failed: Cannot create Logical Volume:
> ('c0097b1a-a387-4ffa-a62b-f9e6972197ef', u'a20bb16e-7c7c-4ed4-85c0-
> cbf297048a8e')
>
> I was able to move the other 4 disks that were part of this VM.
>
> Can I simply lvremove the target LV at the host side (I have only one host
> running at this moment) and try the move again, or do I have to execute
> anything more, e.g. at the engine rdbms level?
>
> Thanks,
> Gianluca
>
>
>