On Thu, Mar 31, 2022 at 1:35 PM Gianluca Cecchi
<gianluca.cec...@gmail.com> wrote:
>
> Hello,
> I'm going to hot remove some LUNS that were used as storage domains from a 
> 4.4.7 environment.
> I have already removed them for oVirt.
> I think I would use the remove_mpath_device.yml playbook if I find it... it 
> seems it should be in examples dir inside ovirt ansible collections, but 
> there is not...
> Anyway I'm aware of the corresponding manual steps of (I think version 8 
> doesn't differ from 7 in this):
>
> . get the names of the disks that make up the multipath device to remove
>
> . remove the multipath device
> multipath -f "{{ lun }}"
>
> . flush I/O
> blockdev --flushbufs {{ item }}
> for every disk that was part of the multipath device
>
> . remove the disks
> echo 1 > /sys/block/{{ item }}/device/delete
> for every disk that was part of the multipath device
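>
> Putting it together, something like this is what I have in mind (untested
> sketch; the wwid is one of mine, and I'm reading the component disks from
> sysfs before flushing the map, since the slaves list disappears after that):
>
>   wwid=360002ac0000000000000013e0001894c
>   # resolve the dm-N node behind /dev/mapper/$wwid and list its slave disks
>   dm=$(basename "$(readlink -f /dev/mapper/$wwid)")
>   disks=$(ls /sys/block/$dm/slaves)      # e.g. sdb sdc sdd sde
>   # flush and remove the multipath map
>   multipath -f "$wwid"
>   # flush buffers and delete every underlying disk
>   for d in $disks; do
>       blockdev --flushbufs /dev/$d
>       echo 1 > /sys/block/$d/device/delete
>   done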
>
> My main doubt is about the LVM structure that I can see is still present
> on the multipath devices.
>
> E.g. for the multipath device 360002ac0000000000000013e0001894c:
> # pvs --config 'devices { filter = ["a|.*|" ] }' | grep 360002ac0000000000000013e0001894c
>   /dev/mapper/360002ac0000000000000013e0001894c a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a--    <4.00t <675.88g
>
> # lvs --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
>   LV                                   VG                                   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 900.00g
>   1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
>   3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  35.00g
>   7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 570.00g
>   94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------  55.00g
>   a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
>   de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.17t
>   f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 300.00g
>   ids                                  a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   inbox                                a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   leases                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   2.00g
>   master                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
>   metadata                             a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   outbox                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi------- 128.00m
>   xleases                              a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi-------   1.00g
>
> So the question is: before proceeding with the steps above, would it be
> better to execute something like
>
> lvremove for every LV lv_name:
> lvremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name
>
> vgremove:
> vgremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
>
> pvremove:
> pvremove --config 'devices { filter = ["a|.*|" ] }' /dev/mapper/360002ac0000000000000013e0001894c
>
> or nothing at all, since the OS itself doesn't "see" the LVs and it is
> only an oVirt view that is already "clean"?
> Also, LVM is not cluster aware, so after doing that on one node I would
> still have the problem of rescanning LVM on the other nodes...

Removing a storage domain requires moving the storage domain to maintenance
and detaching it. In this state oVirt does not use the domain, so it is safe
to remove the lvs and vg on any host in the cluster.
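
For example, a minimal sketch of that manual cleanup (untested; it reuses
the vg name and the permissive --config filter from your commands above):

    cfg='devices { filter = ["a|.*|" ] }'
    vg=a7f5cf77-5640-4d2d-8f6d-abf663431d01

    # remove every lv in the vg, then the vg itself and its pv
    for lv in $(lvs --config "$cfg" --noheadings -o lv_name "$vg"); do
        lvremove --config "$cfg" -y "$vg/$lv"
    done
    vgremove --config "$cfg" "$vg"
    pvremove --config "$cfg" /dev/mapper/360002ac0000000000000013e0001894c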

But if you remove the storage domain in engine with:

    [x] Format Domain, i.e. Storage Content will be lost!

vdsm will remove all the lvs and the vg for you.

If you forgot to format the domain when removing it, removing the lvs and
vg manually is fine.

Nir