[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 6:03 PM Gianluca Cecchi
 wrote:
>
> On Thu, Mar 31, 2022 at 4:45 PM Nir Soffer  wrote:
>>
>>
>>
>> Regarding removing the vg on other nodes - you don't need to do anything.
>> On the host, the vg is hidden since you use lvm filter. Vdsm can see the
>> vg since vdsm uses lvm filter with all the luns on the system. Vdsm will
>> see the change the next time it runs pvs, vgs, or lvs.
>>
>> Nir
>>
> Ok, thank you very much
> So I will:
> . remove LVM structures on one node (probably I'll use the SPM host, but as 
> you said it shouldn't matter)
> . remove multipath devices and paths on both hosts (hope the second host 
> doesn't complain about LVM presence, because actually it is hidden by 
> filter...)
> . have the SAN mgmt guys unpresent LUN from both hosts
> . rescan SAN from inside oVirt (to verify LUN not detected any more and at 
> the same time all expected LUNs/paths ok)
>
> I should have also the second host updated in regard of LVM structures... 
> correct?

The right order is:

1. Make sure the vg does not have any active lv on any host, since you removed
the domain in the past without formatting it, and some lvs may have been
activated by mistake since then.

   vgchange -an --config 'devices { filter = ["a|.*|" ] }' vg-name
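
   To double-check, something like this (illustrative, reusing the same
   permissive filter) lists the lv attributes; an "a" in the fifth character
   of the attr field means the lv is still active somewhere:

   lvs -o vg_name,lv_name,lv_attr --config 'devices { filter = ["a|.*|" ] }' vg-name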

2. Remove the vg on one of the hosts (assuming you don't need the data)

   vgremove -f --config 'devices { filter = ["a|.*|" ] }' vg-name

   If you don't plan to use this vg with lvm, you can remove the pvs.
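
   For example (reusing the multipath device name from your first mail;
   adjust it to your own LUN):

   pvremove --config 'devices { filter = ["a|.*|" ] }' /dev/mapper/360002ac0013e0001894c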

3. Have the SAN mgmt guys unpresent LUN from both hosts

   This should be done before removing the multipath devices, otherwise
   scsi rescan initiated by vdsm may discover the devices again and recreate
   the multipath devices.

4. Remove the multipath devices and the scsi devices related to these luns

   To verify, you can use lsblk on the hosts; the devices will disappear.
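
   For example (using the WWID from your first mail, just for illustration):

   lsblk -o NAME,SIZE,TYPE
   multipath -ll | grep 360002ac0013e0001894c   # should print nothing once removed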

   If you want to make sure the luns were unzoned, doing a rescan is a good idea.
   It can be done by opening the "new domain" or "manage domain" dialog in the
   oVirt UI, or by running:

   vdsm-client Host getDeviceList checkStatus=''
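
   For example, to check that the removed LUN's WWID no longer shows up in the
   output (WWID just for illustration):

   vdsm-client Host getDeviceList checkStatus='' | grep 360002ac0013e0001894c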

Nir


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Gianluca Cecchi
On Thu, Mar 31, 2022 at 4:45 PM Nir Soffer  wrote:

>
>
> Regarding removing the vg on other nodes - you don't need to do anything.
> On the host, the vg is hidden since you use lvm filter. Vdsm can see the
> vg since vdsm uses lvm filter with all the luns on the system. Vdsm will
> see the change the next time it runs pvs, vgs, or lvs.
>
> Nir
>
Ok, thank you very much.
So I will:
. remove LVM structures on one node (probably I'll use the SPM host, but as
you said it shouldn't matter)
. remove multipath devices and paths on both hosts (hope the second host
doesn't complain about LVM presence, because actually it is hidden by
filter...)
. have the SAN mgmt guys unpresent LUN from both hosts
. rescan SAN from inside oVirt (to verify LUN not detected any more and at
the same time all expected LUNs/paths ok)

The second host should then also be updated with regard to the LVM structures...
correct?

Gianluca


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 3:13 PM Gianluca Cecchi
 wrote:
>
> On Thu, Mar 31, 2022 at 1:30 PM Nir Soffer  wrote:
>>
>>
>>
>> Removing a storage domain requires moving the storage domain to maintenance
>> and detaching it. In this state oVirt does not use the domain, so it is safe
>> to remove the lvs and vg on any host in the cluster.
>>
>> But if you remove the storage domain in engine with:
>>
>> [x] Format Domain, i.e. Storage Content will be lost!
>>
>> vdsm will remove all the lvs and the vg for you.
>>
>> If you forgot to format the domain when removing it, removing manually
>> is fine.
>>
>> Nir
>>
>
> Thanks for answering, Nir.
> In fact I think I didn't select to format the domain and so the LVM structure 
> remained in place (I did it some time ago...)
> When you write "vdsm will remove all the lvs and the vg for you", how does 
> vdsm act and work in this case and how does it coordinate the nodes' view of 
> LVM structures so that they are consistent, with no cluster LVM in place?

oVirt has its own clustered lvm solution, using sanlock.

In oVirt only the SPM host creates, extends, deletes, or changes tags on
logical volumes. Other hosts only consume the logical volumes by activating
them for running vms or performing storage operations.

> I presume it is lvmlockd using sanlock as external lock manager,

lvmlockd is not involved. When oVirt was created, lvmlockd supported
only dlm, which does not scale for oVirt's use case. So oVirt uses sanlock
directly to manage cluster locks.
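
If you are curious, the locks sanlock currently holds on a host can be
inspected with (read-only, purely informational):

    sanlock client status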

> but how can I run LVM commands mimicking what vdsm probably does?
> Or is it automagic and I need only to run the LVM commands above without 
> worrying about it?

There is no magic, but you don't need to mimic what vdsm is doing.

> When I manually remove LVs, VG and PV on the first node, what to do on other 
> nodes? Simply a
> vgscan --config 'devices { filter = ["a|.*|" ] }'

Don't run this on oVirt hosts; a host should not scan all vgs without
a filter.

> or what?

When you remove a storage domain in engine, even without formatting it, no
host is using the logical volumes. Vdsm on all hosts can see the vg, but
never activates the logical volumes.

You can remove the vg on any host, since you are the only user of this vg.
Vdsm on other hosts can see the vg, but since it does not use the vg, it is
not affected.

The vg metadata is stored on one pv. When you remove a vg, lvm clears
the metadata on this pv. Other pvs cannot be affected by this change.
The only risk is trying to modify the same vg from multiple hosts at the
same time, which can corrupt the vg metadata.
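
If you want to see which pv actually holds the metadata copies, something
like this works (illustrative):

    pvs -o pv_name,vg_name,pv_mda_count --config 'devices { filter = ["a|.*|" ] }'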

Regarding removing the vg on other nodes - you don't need to do anything.
On the host, the vg is hidden since you use lvm filter. Vdsm can see the
vg since vdsm uses lvm filter with all the luns on the system. Vdsm will
see the change the next time it runs pvs, vgs, or lvs.
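
If you want to check the filter configured on a host, one way is vdsm's
helper, which analyzes the host and shows the recommended filter (it asks
for confirmation before changing anything):

    vdsm-tool config-lvm-filter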

Nir


[ovirt-users] Re: oVirt Local Repo

2022-03-31 Thread simon
Really appreciate the info thanks Sandro :)

Kind Regards

Simon...


[ovirt-users] Re: VM hangs after migration

2022-03-31 Thread Arik Hadas
Moshe, this sounds similar to what we've seen in your environment, no?
Did you manage to resolve it?

On Thu, Mar 31, 2022 at 12:57 PM Giorgio Biacchi 
wrote:

> Hi,
> I have a fresh Ovirt installation (4.4.10.7-1.el8 engine and oVirt Node
> 4.4.10) on a Dell VRTX chassis. There are 3 blades, two of them are
> identical hardware (PowerEdge M630) and the third is a little newer
> (PowerEdge M640). The third has different CPUs, more RAM, and slower
> NICs. I also have a bunch of data domains some on the shared PERC
> internal storage and others on an external iSCSI storage, all seems
> configured correctly and all the hosts are operational.
>
> I can migrate a VM back and forth from the first two blades without any
> problem, I can migrate a VM to the third blade but when I migrate a VM
> from the third blade to any of the other two the task terminate
> successfully, the VM is marked as up on the target host but the VM
> hangs, the console is frozen and the VM stops to respond to ping.
>
> I have no clues about why this is happening and I'm looking for
> suggestions about how to debug and hopefully fix this issue.
>
> Thanks in advance
> --
> gb
>
> PGP Key: http://pgp.mit.edu/
> Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Gianluca Cecchi
On Thu, Mar 31, 2022 at 1:30 PM Nir Soffer  wrote:

>
>
> Removing a storage domain requires moving the storage domain to maintenance
> and detaching it. In this state oVirt does not use the domain, so it is safe
> to remove the lvs and vg on any host in the cluster.
>
> But if you remove the storage domain in engine with:
>
> [x] Format Domain, i.e. Storage Content will be lost!
>
> vdsm will remove all the lvs and the vg for you.
>
> If you forgot to format the domain when removing it, removing manually
> is fine.
>
> Nir
>
>
Thanks for answering, Nir.
In fact I think I didn't select to format the domain and so the LVM
structure remained in place (I did it some time ago...)
When you write "vdsm will remove all the lvs and the vg for you", how does
vdsm act and work in this case and how does it coordinate the nodes' view
of LVM structures so that they are consistent, with no cluster LVM in place?
I presume it is lvmlockd using sanlock as external lock manager, but how
can I run LVM commands mimicking what vdsm probably does? Or is it
automagic and I need only to run the LVM commands above without worrying
about it?
When I manually remove LVs, VG and PV on the first node, what to do on
other nodes? Simply a
vgscan --config 'devices { filter = ["a|.*|" ] }'
or what?

Gianluca


[ovirt-users] Re: info about removal of LVM structures before removing LUNs

2022-03-31 Thread Nir Soffer
On Thu, Mar 31, 2022 at 1:35 PM Gianluca Cecchi
 wrote:
>
> Hello,
> I'm going to hot remove some LUNS that were used as storage domains from a 
> 4.4.7 environment.
> I have already removed them from oVirt.
> I think I would use the remove_mpath_device.yml playbook if I find it... it 
> seems it should be in examples dir inside ovirt ansible collections, but 
> there is not...
> Anyway I'm aware of the corresponding manual steps of (I think version 8 
> doesn't differ from 7 in this):
>
> . get disks name comprising the multipath device to remove
>
> . remove multipath device
> multipath -f "{{ lun }}"
>
> . flush I/O
> blockdev --flushbufs {{ item }}
> for every disk that was comprised in the multipath device
>
> . remove disks
> echo 1 > /sys/block/{{ item }}/device/delete
> for every disk that was comprised in the multipath device
>
> My main doubt is related to the LVM structure that I can see is yet present 
> on the multipath devices.
>
> Eg for a multipath device 360002ac0013e0001894c:
> # pvs --config 'devices { filter = ["a|.*|" ] }' | grep 
> 360002ac0013e0001894c
>   /dev/mapper/360002ac0013e0001894c 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a--<4.00t <675.88g
>
> # lvs --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01
>   LV   VG   
> Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 900.00g
>   1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  55.00g
>   3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  35.00g
>   7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 570.00g
>   94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---  55.00g
>   a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 300.00g
>   de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.17t
>   f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 300.00g
>   ids  a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   inboxa7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   leases   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   2.00g
>   master   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.00g
>   metadata a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   outbox   a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi--- 128.00m
>   xleases  a7f5cf77-5640-4d2d-8f6d-abf663431d01 
> -wi---   1.00g
>
> So the question is:
> would it be better to execute something like
> lvremove for every LV lv_name
> lvremove --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name
>
> vgremove
> vgremove --config 'devices { filter = ["a|.*|" ] }' 
> a7f5cf77-5640-4d2d-8f6d-abf663431d01
>
> pvremove
> pvremove --config 'devices { filter = ["a|.*|" ] }' 
> /dev/mapper/360002ac0013e0001894c
>
> and then proceed with the steps above or nothing at all as the OS itself 
> doesn't "see" the LVMs and it is only an oVirt view that is already "clean"?
> Also because LVM is not cluster aware, so after doing that on one node, I 
> would have the problem about LVM rescan on other nodes

Removing a storage domain requires moving the storage domain to maintenance
and detaching it. In this state oVirt does not use the domain, so it is safe
to remove the lvs and vg on any host in the cluster.

But if you remove the storage domain in engine with:

[x] Format Domain, i.e. Storage Content will be lost!

vdsm will remove all the lvs and the vg for you.

If you forgot to format the domain when removing it, removing manually
is fine.

Nir


[ovirt-users] info about removal of LVM structures before removing LUNs

2022-03-31 Thread Gianluca Cecchi
Hello,
I'm going to hot remove some LUNs that were used as storage domains from a
4.4.7 environment.
I have already removed them from oVirt.
I think I would use the remove_mpath_device.yml playbook if I could find it...
it seems it should be in the examples dir inside the ovirt ansible collections,
but it is not there...
Anyway, I'm aware of the corresponding manual steps (I think version 8 doesn't
differ from 7 in this):

. get the names of the disks comprising the multipath device to remove

. remove the multipath device
multipath -f "{{ lun }}"

. flush I/O
blockdev --flushbufs {{ item }}
for every disk that was part of the multipath device

. remove the disks
echo 1 > /sys/block/{{ item }}/device/delete
for every disk that was part of the multipath device
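
Putting those steps together, a minimal sketch (names and WWID purely
illustrative; collect the slave disk names before removing the map):

DEV=360002ac0013e0001894c
DM=$(basename "$(readlink -f /dev/mapper/$DEV)")   # e.g. dm-7
DISKS=$(ls /sys/block/$DM/slaves)                   # e.g. sdc sdk ...
multipath -f "$DEV"
for d in $DISKS; do
    blockdev --flushbufs /dev/$d
    echo 1 > /sys/block/$d/device/delete
done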

My main doubt is related to the LVM structures that I can see are still present
on the multipath devices.

Eg for a multipath device 360002ac0013e0001894c:
# pvs --config 'devices { filter = ["a|.*|" ] }' | grep 360002ac0013e0001894c
  /dev/mapper/360002ac0013e0001894c a7f5cf77-5640-4d2d-8f6d-abf663431d01 lvm2 a--  <4.00t <675.88g

# lvs --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01
  LV                                   VG                                   Attr   LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  067dd3d0-db3b-4fd0-9130-c616c699dbb4 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 900.00g
  1682612b-fcbb-4226-a821-3d90621c0dc3 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---  55.00g
  3b863da5-2492-4c07-b4f8-0e8ac943803b a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  47586b40-b5c0-4a65-a7dc-23ddffbc64c7 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---  35.00g
  7a5878fb-d70d-4bb5-b637-53934d234ba9 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 570.00g
  94852fc8-5208-4da1-a429-b97b0c82a538 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---  55.00g
  a2edcd76-b9d7-4559-9c4f-a6941aaab956 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  de08d92d-611f-445c-b2d4-836e33935fcf a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 300.00g
  de54928d-2727-46fc-81de-9de2ce002bee a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---   1.17t
  f9f4d24d-5f2b-4ec3-b7e3-1c50a7c45525 a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 300.00g
  ids                                  a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  inbox                                a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  leases                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---   2.00g
  master                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---   1.00g
  metadata                             a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  outbox                               a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi--- 128.00m
  xleases                              a7f5cf77-5640-4d2d-8f6d-abf663431d01 -wi---   1.00g

So the question is:
would it be better to execute something like

lvremove for every LV lv_name:
lvremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01/lv_name

vgremove:
vgremove --config 'devices { filter = ["a|.*|" ] }' a7f5cf77-5640-4d2d-8f6d-abf663431d01

pvremove:
pvremove --config 'devices { filter = ["a|.*|" ] }' /dev/mapper/360002ac0013e0001894c

and then proceed with the steps above, or nothing at all, since the OS itself
doesn't "see" the LVM structures and it is only an oVirt view that is already
"clean"?
Also, LVM is not cluster aware, so after doing that on one node I would have
the problem of an LVM rescan on the other nodes.

Thanks in advance,
Gianluca


[ovirt-users] VM hangs after migration

2022-03-31 Thread Giorgio Biacchi

Hi,
I have a fresh oVirt installation (4.4.10.7-1.el8 engine and oVirt Node
4.4.10) on a Dell VRTX chassis. There are 3 blades; two of them are
identical hardware (PowerEdge M630) and the third is a little newer
(PowerEdge M640). The third has different CPUs, more RAM, and slower
NICs. I also have a bunch of data domains, some on the shared PERC
internal storage and others on external iSCSI storage; all seems
configured correctly and all the hosts are operational.


I can migrate a VM back and forth between the first two blades without any
problem, and I can migrate a VM to the third blade, but when I migrate a VM
from the third blade to either of the other two, the task terminates
successfully and the VM is marked as up on the target host, yet the VM
hangs: the console is frozen and the VM stops responding to ping.


I have no clue why this is happening and I'm looking for suggestions on
how to debug and hopefully fix this issue.


Thanks in advance
--
gb

PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34

