On Wed, Jan 31, 2018 at 5:01 PM, Elad Ben Aharon <ebena...@redhat.com>

> Just delete the image directory 
> (remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449)
> located under  /rhev/data-center/%spuuid%/%sduuid%/images/
> As for the LV, please try the following:
> dmsetup remove /dev/mapper/%device_name% --> device name could be fetched
> by 'dmsetup table'
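For the archives, the steps above as a concrete command sequence. This is only a sketch: the %spuuid%/%sduuid% placeholders and the device name must be substituted with your own values, and you should double-check what you are removing first.

```shell
# 1) Remove the leftover image directory (example UUID from this thread;
#    substitute the real pool/domain UUIDs for %spuuid%/%sduuid%)
cd /rhev/data-center/%spuuid%/%sduuid%/images/
rm -rf remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449

# 2) Find the stale LV's device-mapper name, then remove the mapping
dmsetup table                          # lists all device-mapper devices
dmsetup remove /dev/mapper/%device_name%
```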


For that oVirt environment I had already finished moving the disks from
source to target, so I was able to power off all the test infra; after the
node reboot the problem did not reappear (also because I had force-removed
the source storage domain), so I could not investigate further.

But I have "sort of" reproduced the problem in another environment based on
FC SAN storage.
The problem happened with a VM having 4 disks: one 50 GB boot disk and
three other disks of 100 GB, 200 GB, and 200 GB.
The VM was powered off, and deleting the three "big" disks (tried both with
and without deactivating the disk before removal) produced, for all of
them, the same error I got in my oVirt environment above during the move:

command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume: (['
Cannot remove Logical Volume:
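The full failure reason is usually in the vdsm log on the host that ran the task. A sketch of where to look (the log path is the default on RHV-H hosts; the VG name placeholder is mine, on block storage domains the VG name is the storage domain UUID):

```shell
# Look for the lvremove failure and its surrounding context in vdsm's log
grep -B2 -A10 "Cannot remove Logical Volume" /var/log/vdsm/vdsm.log

# Check whether the LV still exists and whether it is open (attr column)
lvs -o lv_name,lv_attr,lv_tags %sduuid%
```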

So I suspect the problem is related to the SAN itself, perhaps arising when
you work with relatively "big" disks.
Another suspect is LVM filtering at the hypervisor level, because all 3
disks had a PV/VG/LV structure inside them, created on the whole virtual
disk at VM level.
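If the hypervisor's LVM is scanning the guest PVs inside those virtual disks, the host can hold the guest VG's LVs open and lvremove then fails. A quick check is whether the guest VGs show up in pvs/vgs on the host. A filter in /etc/lvm/lvm.conf would prevent the scanning; this is only an example fragment, the accepted device paths depend on your host:

```shell
# /etc/lvm/lvm.conf (fragment) -- example only: accept the host's local
# disk and its multipath devices, reject everything else so that PVs
# created inside oVirt LVs are not scanned by the host
filter = [ "a|^/dev/sda.*|", "a|^/dev/mapper/mpath.*|", "r|.*|" ]
```

Newer vdsm versions can generate a suitable filter with `vdsm-tool config-lvm-filter`, if that command is available on your hosts.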

As this new environment is in RHEV with RHV-H hosts (layer
I opened support case #02034032, in case anyone is interested.

The big problem is that the disks have been removed on the VM side, but on
the storage domain side the space has not been released, so if you then
have to create other "big" disks, you could run out of space because of
this.
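To confirm the space was really not released, you can compare the VG free space before and after the removal and look for leftover LVs. A sketch (again, on block storage domains the VG name is the storage domain UUID, substitute your own):

```shell
# Total vs. free space in the storage domain's VG
vgs -o vg_name,vg_size,vg_free %sduuid%

# Leftover LVs for the deleted images (the tags carry the image UUIDs)
lvs -o lv_name,lv_size,lv_tags %sduuid%
```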
