On Tue, Feb 21, 2017 at 6:10 PM, Nir Soffer <nsof...@redhat.com> wrote:

> This is caused by active lvs on the removed storage domain that were not
> deactivated during the removal. This is a very old known issue.
>
> You have to remove the stale device mapper entries - you can see the devices
> using:
>
>     dmsetup status
>
> Then you can remove the mapping using:
>
>     dmsetup remove device-name
>
> Once you have removed the stale lvs, you will be able to remove the multipath
> device and the underlying paths, and lvm will not complain about read
> errors.
>
> Nir
>

OK Nir, thanks for the advice.

So this is what I ran successfully on the 2 hosts:

[root@ovmsrv05 vdsm]# for dev in $(dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49 | cut -d ":" -f 1)
do
   dmsetup remove $dev
done
[root@ovmsrv05 vdsm]#
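
A quick sanity check: the same grep should now return nothing, confirming
that no stale mappings for the removed domain are left:

    # no output expected once all stale mappings are gone
    dmsetup status | grep 900b1853--e192--4661--a0f9--7c7c396f6f49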

and now I can run

[root@ovmsrv05 vdsm]# multipath -f 3600a0b80002999020000cd3c5501458f
[root@ovmsrv05 vdsm]#
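
Same on the multipath side: listing the flushed map by its wwid should
print nothing now:

    # no output expected once the map has been flushed
    multipath -ll 3600a0b80002999020000cd3c5501458f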

Also, with device names that differ from host to host, the previous
multipath map and its single-path devices on ovmsrv05 were, for example:

3600a0b80002999020000cd3c5501458f dm-4 IBM     ,1814      FAStT
size=2.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=enabled
| |- 0:0:0:2 sdb        8:16  failed undef running
| `- 1:0:0:2 sdh        8:112 failed undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 0:0:1:2 sdg        8:96  failed undef running
  `- 1:0:1:2 sdn        8:208 failed undef running

And the removal of the single-path devices:

[root@ovmsrv05 root]# for dev in sdb sdh sdg sdn
do
  echo 1 > /sys/block/${dev}/device/delete
done
[root@ovmsrv05 vdsm]#
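
Note that the order matters here: I took the sd* names from the multipath
output above before flushing the map, because after "multipath -f" they are
no longer listed. Something like this could script it next time (just a
sketch; the grep pattern is an assumption based on the -ll output format):

    wwid=3600a0b80002999020000cd3c5501458f
    # collect the path devices *before* flushing the map
    paths=$(multipath -ll $wwid | grep -Eo '\bsd[a-z]+\b')
    multipath -f $wwid
    # then drop the now-orphaned scsi devices
    for dev in $paths; do
        echo 1 > /sys/block/${dev}/device/delete
    done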

All clean now... ;-)
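
And for the next time: since the root cause was lvs left active when the
domain was removed, deactivating them beforehand should avoid the stale
mappings entirely. A minimal sketch, assuming the volume group is named
after the storage domain UUID (dmsetup shows the same name with the dashes
doubled):

    # deactivate all lvs of the domain's VG before removing the device
    lvchange -an 900b1853-e192-4661-a0f9-7c7c396f6f49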

Thanks again,

Gianluca