[ovirt-users] Re: Failed to complete VM creation, help me!

2020-04-26 Thread Strahil Nikolov
On April 27, 2020 6:12:08 AM GMT+03:00, liu...@sina.com wrote:
>Storage domain usage is as follows:
>vgdisplay 3542ca36-747b-407c-aa5e-331510f81fd9
>  --- Volume group ---
>  VG Name               3542ca36-747b-407c-aa5e-331510f81fd9
>  System ID
>  Format                lvm2
>  Metadata Areas        2
>  Metadata Sequence No  15998
>  VG Access             read/write
>  VG Status             resizable
>  MAX LV                0
>  Cur LV                1954
>  Open LV               0
>  Max PV                0
>  Cur PV                1
>  Act PV                1
>  VG Size               113.97 TiB
>  PE Size               128.00 MiB
>  Total PE              933647
>  Alloc PE / Size       64930 / <7.93 TiB
>  Free  PE / Size       868717 / 106.04 TiB
>  VG UUID               GiL1LL-ggO5-a6mX-62uc-q597-B3X9-6bzC93
>__
>
>
>The following error occurred when I created the VM:
>VDSM H5 command HSMGetAllTasksStatusesVDS failed: Error creating a new
>volume: (u"Volume creation 62220be2-2781-4771-9373-84af5cc18f01 failed:
>(28, 'Sanlock resource write failure', 'No space left on device')",)
>Excuse me, what is the reason? Thank you!

If it was Gluster or NFS, I would suspect inodes.
Yet I'm not sure oVirt uses a filesystem for block storage.
What is the UI showing for disk utilization?
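For a block domain, one hedged guess: sanlock keeps every volume lease in
the domain's special "leases" LV, so with 1954 LVs in the VG the lease area
may simply be full even though the VG itself has plenty of free space. A
quick check (assuming the special LVs still carry their default names)
could be:

  # list the domain's special LVs; "leases" is where volume leases live
  lvs -o lv_name,lv_size,lv_attr 3542ca36-747b-407c-aa5e-331510f81fd9

  # show the lockspaces and resources sanlock currently holds
  sanlock client status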

Best Regards,
Strahil Nikolov


[ovirt-users] Failed to complete VM creation, help me!

2020-04-26 Thread liug74
Storage domain usage is as follows:
vgdisplay 3542ca36-747b-407c-aa5e-331510f81fd9
  --- Volume group ---
  VG Name               3542ca36-747b-407c-aa5e-331510f81fd9
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  15998
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1954
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               113.97 TiB
  PE Size               128.00 MiB
  Total PE              933647
  Alloc PE / Size       64930 / <7.93 TiB
  Free  PE / Size       868717 / 106.04 TiB
  VG UUID               GiL1LL-ggO5-a6mX-62uc-q597-B3X9-6bzC93
__


The following error occurred when I created the VM:
VDSM H5 command HSMGetAllTasksStatusesVDS failed: Error creating a new volume: 
(u"Volume creation 62220be2-2781-4771-9373-84af5cc18f01 failed: (28, 'Sanlock 
resource write failure', 'No space left on device')",)
Excuse me, what is the reason? Thank you!


[ovirt-users] Re: Virtual machine replica - DR

2020-04-26 Thread Christopher Cox

On 4/26/20 4:19 PM, ccesa...@blueit.com.br wrote:

Hello,
Does someone know any tool/method to replicate the VMs from a "Production" cluster to
a "Secondary" cluster, to provide a DR solution without a storage replication dependency
or Gluster storage, like the Veeam and Zerto tools do with other hypervisors?

Is there any way, tool, or product to do it?


In all fairness, most of your "likes" play some roulette.  Taking snapshots and 
copying is fine and dandy, but cannot ensure application data integrity.  Some 
might have agents for popular databases, but not necessarily everything (see 
next paragraph).


Ideally "writers" have to be quiesced (suspend write actions in a good way) 
somehow so that the data on disk prior to snapshot is integral.  Now, with that 
said, the roulette wheel is definitely slanted in your favor.  Just noting that 
the techniques that many things do aren't as fool proof as they might have you 
believe.


(btw, I think I just hinted at the common recipe that is used to pull this 
off (with the stated flaws), at least part of it)
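As a minimal sketch of what quiescing can look like on a Linux guest (the
mount point /data is hypothetical; the qemu-guest-agent does something
similar under the hood when a snapshot is taken with quiescing enabled):

  # flush in-flight writes and block new ones
  fsfreeze --freeze /data

  # ... take the snapshot on the hypervisor/storage side here ...

  # thaw as soon as the snapshot exists
  fsfreeze --unfreeze /data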




[ovirt-users] Virtual machine replica - DR

2020-04-26 Thread ccesario
Hello,
Does someone know any tool/method to replicate the VMs from a "Production"
cluster to a "Secondary" cluster, to provide a DR solution without a storage
replication dependency or Gluster storage, like the Veeam and Zerto tools do
with other hypervisors?

Is there any way, tool, or product to do it?

Regards
Carlos


[ovirt-users] Custom ignition in cloud-init?

2020-04-26 Thread Wesley Stewart
Before anyone tells me this is now included in 4.4.0: I saw that, but I
don't think I'm willing to update my CentOS 7 host to CentOS 8 yet. (But
perhaps that will be the answer.)  Currently on 4.3.9 / CentOS 7.

I am trying to run Fedora CoreOS to test it out, and the ISO installer
hangs on:
"failed to isolate default target freezing"

So I tried the QCOW2 image, but I'm looking at ways of running the Ignition
file.  I see in the "Run Once" section I can deploy a custom script,
and I was wondering: is this file-type agnostic?  If I generate a JSON
Ignition config and put it in there, will it get passed through correctly?
Or will I need to update to 4.4.0?
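For reference, the kind of minimal Ignition file I mean (a hypothetical
example against Ignition spec 3.0.0, with a placeholder SSH key):

  cat > config.ign <<'EOF'
  {
    "ignition": { "version": "3.0.0" },
    "passwd": {
      "users": [
        { "name": "core",
          "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"] }
      ]
    }
  }
  EOF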

I also found:
https://gerrit.ovirt.org/#/c/18/
This was abandoned for a "better solution" last September.  Was this
referring to the changes in 4.4.0?

Lastly, has anyone who was running 4.3.9 upgraded their CentOS host to
CentOS 8 and then upgraded oVirt without any issues?  I usually try to wait
at least a couple of minor version changes before switching to a new major
version.

Thanks guys!


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 Hi,

Thank you for your kind, detailed answers.
You helped me a lot.

Now I hope we can solve the problem.

Special thanks to Gianluca too.

csabany 
 Original message 
From: Nir Soffer <nsof...@redhat.com>
Date: 26 April 2020 17:39:36
Subject: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csab...@freemail.hu>
On Sun, Apr 26, 2020 at 3:00 PM Nyika Csaba  wrote:
>
>
>  Original message 
> From: Gianluca Cecchi <gianluca.cec...@gmail.com>
> Date: 26 April 2020 11:42:40
> Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
> To: Nyika Csaba <csab...@freemail.hu>
>
> On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csab...@freemail.hu> wrote:
>
> Thanks for the advice.
> The hypervisors are "fresh". But the management server arrived from version
> 3.6 step-by-step (we have used this oVirt setup since 2015).
> The issue occurred on different clusters, hosts, and different HV versions. For
> example the last-but-one VM was on an IBM x3650, oVirt Node v4.2 host, and
> the last on a Lenovo, oVirt Node v4.3.
> Best
>
>
> In theory, on a hypervisor node the only VG listed should be something like onn
> (as in oVirt Node Next Generation, I think)
>
> In my case I also have gluster volumes, but in your case with FC SAN you
> should only have onn
>
> [root@ovirt ~]# vgs
> VG #PV #LV #SN Attr VSize VFree
> gluster_vg_4t 1 2 0 wz--n- <3.64t 0
> gluster_vg_4t2 1 2 0 wz--n- <3.64t 0
> gluster_vg_nvme0n1 1 3 0 wz--n- 349.32g 0
> gluster_vg_nvme1n1 1 2 0 wz--n- 931.51g 0
> onn 1 11 0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> And also the command "lvs" should show only onn-related logical volumes...
>
> Gianluca
>
> Hi,
>
> I checked all nodes, and what I got back from the vgs command is literally
> "unbelievable".
>
> Some hosts look good:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
No, this is not good - these are VGs on shared storage, and the host
should not be able to access them.
> onn 1 11 0 wz--n- 277,46g 54,60g
Is this a guest VG (created inside the guest)? If so, this is bad.
> Others:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
Again, bad.
> onn 1 11 0 wz--n- 277,46g 54,60g
> vg_okosvaros 2 7 0 wz-pn- <77,20g 0
Bad if these are guest VGs.
> Others:
> VG #PV #LV #SN Attr VSize VFree
> 003b6a83-9133-4e65-9d6d-878d08e0de06 1 25 0 wz--n- <50,00t <44,86t
> 0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8 1 50 0 wz--n- <20,00t 4,57t
> 1831603c-e583-412a-b20e-f97b31ad9a55 1 122 0 wz--n- <25,00t <6,79t
> 3ff15d64-a716-4fad-94f0-abb69b5643a7 1 64 0 wz--n- <17,31t <4,09t
> 424fc43f-6bbf-47bb-94a0-b4c3322a4a90 1 68 0 wz--n- <14,46t <1,83t
> 4752cc9d-5f19-4cb1-b116-a62e3ee05783 1 81 0 wz--n- <28,00t <4,91t
> 567a63ec-5b34-425c-af20-5997450cf061 1 110 0 wz--n- <17,00t <2,21t
> 5f6dcc41-9a2f-432f-9de0-bed541cd6a03 1 71 0 wz--n- <20,00t <2,35t
> 8a4e4463-0945-430e-affd-c7ac2bbdc912 1 86 0 wz--n- <13,01t 2,85t
> c9543c8d-c6da-44be-8060-179e807f1211 1 55 0 wz--n- <18,00t 5,22t
> d5679d9d-ebf2-41ef-9e93-83d2cd9b027c 1 67 0 wz--n- <7,20t <1,15t
> onn 1 13 0 wz--n- <446,07g 88,39g
> vg_4trdb1p 3 7 0 wz-pn- 157,19g 0
> vg_4trdb1t 3 7 0 wz-pn- 157,19g 0
> vg_deployconfigrepo 3 7 0 wz-pn- 72,19g 0
> vg_ektrdb1p 3 7 0 wz-pn- 157,19g 0
> vg_ektrdb1t 3 7 0 wz-pn- 157,19g 0
> vg_empteszt 2 6 0 wz-pn- <77,20g <20,00g
> vg_helyiertekek 6 8 0 wz-pn- 278,11g 0
> vg_log 3 7 0 wz-pn- 347,19g 

[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nir Soffer
On Sun, Apr 26, 2020 at 3:00 PM Nyika Csaba  wrote:
>
>
>  Original message 
> From: Gianluca Cecchi <gianluca.cec...@gmail.com>
> Date: 26 April 2020 11:42:40
> Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
> To: Nyika Csaba <csab...@freemail.hu>
>
> On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csab...@freemail.hu> wrote:
>
> Thanks for the advice.
> The hypervisors are "fresh". But the management server arrived from version
> 3.6 step-by-step (we have used this oVirt setup since 2015).
> The issue occurred on different clusters, hosts, and different HV versions. For
> example the last-but-one VM was on an IBM x3650, oVirt Node v4.2 host, and
> the last on a Lenovo, oVirt Node v4.3.
> Best
>
>
> In theory, on a hypervisor node the only VG listed should be something like onn
> (as in oVirt Node Next Generation, I think)
>
> In my case I also have gluster volumes, but in your case with FC SAN you
> should only have onn
>
> [root@ovirt ~]# vgs
>   VG #PV #LV #SN Attr   VSizeVFree
>   gluster_vg_4t1   2   0 wz--n-   <3.64t  0
>   gluster_vg_4t2   1   2   0 wz--n-   <3.64t  0
>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g  0
>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g  0
>   onn  1  11   0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> And also the command "lvs" should show only onn-related logical volumes...
>
> Gianluca
>
>  Hi,
>
> I checked all nodes, and what I got back from the vgs command is literally
> "unbelievable".
>
> Some hosts look good:
>   VG   #PV #LV #SN Attr   VSize   VFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
>   567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
>   5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
>   8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
>   c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
>   d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t

No, this is not good - these are VGs on shared storage, and the host
should not be able to access them.

>   onn    1  11   0 wz--n- 277,46g  54,60g

Is this a guest VG (created inside the guest)? If so, this is bad.

> Others:
>   VG   #PV #LV #SN Attr   VSize   VFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
>   567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
>   5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
>   8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
>   c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
>   d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t

Again, bad.

>   onn    1  11   0 wz--n- 277,46g  54,60g
>   vg_okosvaros   2   7   0 wz-pn- <77,20g  0

Bad if these are guest VGs.

> Others:
>   VG   #PV #LV #SN Attr   VSizeVFree
>   003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
>   0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
>   1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
>   3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
>   424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
>   4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
>   567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
>   5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
>   8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
>   c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
>   d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
>   onn    1  13   0 wz--n- <446,07g  88,39g
>   vg_4trdb1p 

[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Strahil Nikolov
On April 26, 2020 4:30:33 PM GMT+03:00, Gianluca Cecchi 
 wrote:
>On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba 
>wrote:
>
>>
>> -[snip]
>>
>
>
>> In theory, on a hypervisor node the only VG listed should be something like
>> onn (as in oVirt Node Next Generation, I think)
>>
>> In my case I also have gluster volumes, but in your case with FC SAN you
>> should only have onn
>>
>> [root@ovirt ~]# vgs
>>   VG #PV #LV #SN Attr   VSizeVFree
>>   gluster_vg_4t1   2   0 wz--n-   <3.64t  0
>>   gluster_vg_4t2   1   2   0 wz--n-   <3.64t  0
>>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g  0
>>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g  0
>>   onn  1  11   0 wz--n- <228.40g <43.87g
>> [root@ovirt ~]#
>>
>> And also the command "lvs" should show only onn-related logical
>> volumes...
>>
>> Gianluca
>>
>>  Hi,
>>
>> I checked all nodes, and what I got back from the vgs command is literally
>> "unbelievable".
>>
>>
>Ok, so this is your problem.
>And the main Bugzilla, opened by a great guy (Germano from Red Hat support)
>back in the RHV 3.6 days when I first opened a case on this, was:
>https://bugzilla.redhat.com/show_bug.cgi?id=1374545
>
>If I remember correctly, you will see the problem only if, inside the VM, you
>configured a PV on the whole virtual disk (and not on its partitions) and if
>the disk of the VM was configured as preallocated.
>
>I don't have the detailed fix at hand right now, but for sure you
>will have to modify your LVM filters, rebuild the initramfs of the nodes,
>and reboot them one by one.
>Inside the Bugzilla there was a script for LVM filtering, and there is
>also this page for oVirt:
>
>https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/
>
>Fairly new installations should prevent these problems, in my opinion, but you
>could be impacted by wrong configurations carried over during upgrades.
>
>Gianluca

I wonder if you also have issues with live migration of VMs between hosts.
Have you noticed anything like that so far?
Best Regards,
Strahil Nikolov


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sun, Apr 26, 2020 at 2:00 PM Nyika Csaba  wrote:

>
> -[snip]
>


> In theory, on a hypervisor node the only VG listed should be something like
> onn (as in oVirt Node Next Generation, I think)
>
> In my case I also have gluster volumes, but in your case with FC SAN you
> should only have onn
>
> [root@ovirt ~]# vgs
>   VG #PV #LV #SN Attr   VSizeVFree
>   gluster_vg_4t1   2   0 wz--n-   <3.64t  0
>   gluster_vg_4t2   1   2   0 wz--n-   <3.64t  0
>   gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g  0
>   gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g  0
>   onn  1  11   0 wz--n- <228.40g <43.87g
> [root@ovirt ~]#
>
> And also the command "lvs" should show only onn-related logical
> volumes...
>
> Gianluca
>
>  Hi,
>
> I checked all nodes, and what I got back from the vgs command is literally
> "unbelievable".
>
>
Ok, so this is your problem.
And the main Bugzilla, opened by a great guy (Germano from Red Hat support)
back in the RHV 3.6 days when I first opened a case on this, was:
https://bugzilla.redhat.com/show_bug.cgi?id=1374545

If I remember correctly, you will see the problem only if, inside the VM, you
configured a PV on the whole virtual disk (and not on its partitions) and if
the disk of the VM was configured as preallocated.

I don't have the detailed fix at hand right now, but for sure you will
have to modify your LVM filters, rebuild the initramfs of the nodes, and
reboot them one by one.
Inside the Bugzilla there was a script for LVM filtering, and there is also
this page for oVirt:

https://blogs.ovirt.org/2017/12/lvm-configuration-the-easy-way/

Fairly new installations should prevent these problems, in my opinion, but you
could be impacted by wrong configurations carried over during upgrades.
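A sketch of that remediation on each host, one at a time (assuming a vdsm
recent enough to ship the config-lvm-filter verb, i.e. 4.2 or later):

  # have vdsm propose and install an LVM filter that hides guest/shared LVs
  vdsm-tool config-lvm-filter

  # rebuild the initramfs so the filter is also applied in early boot
  dracut -f

  # reboot the host, then move on to the next one
  reboot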

Gianluca


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 
 Original message 
From: Gianluca Cecchi <gianluca.cec...@gmail.com>
Date: 26 April 2020 11:42:40
Subject: Re: [ovirt-users] Re: Ovirt vs lvm?
To: Nyika Csaba <csab...@freemail.hu>
 
On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba <csab...@freemail.hu> wrote:
 
Thanks for the advice.
The hypervisors are "fresh". But the management server arrived from version 3.6
step-by-step (we have used this oVirt setup since 2015).
The issue occurred on different clusters, hosts, and different HV versions. For example
the last-but-one VM was on an IBM x3650, oVirt Node v4.2 host, and the last
on a Lenovo, oVirt Node v4.3.
Best
 
 
In theory, on a hypervisor node the only VG listed should be something like onn
(as in oVirt Node Next Generation, I think)
 
In my case I also have gluster volumes, but in your case with FC SAN you should
only have onn
 
[root@ovirt ~]# vgs
  VG                 #PV #LV #SN Attr   VSize    VFree  
  gluster_vg_4t        1   2   0 wz--n-   <3.64t      0
  gluster_vg_4t2       1   2   0 wz--n-   <3.64t      0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g      0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g      0
  onn                  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#
 
And also the command "lvs" should show only onn-related logical volumes...
 
Gianluca
 
 Hi,

I checked all nodes, and what I got back from the vgs command is literally
"unbelievable".

Some hosts look good:
  VG   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn    1  11   0 wz--n- 277,46g  54,60g

Others:
  VG   #PV #LV #SN Attr   VSize   VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n- <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n- <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n- <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n- <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n- <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n- <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n- <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n- <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n- <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n- <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-  <7,20t  <1,15t
  onn    1  11   0 wz--n- 277,46g  54,60g
  vg_okosvaros   2   7   0 wz-pn- <77,20g  0

Others:
  VG   #PV #LV #SN Attr   VSize    VFree  
  003b6a83-9133-4e65-9d6d-878d08e0de06   1  25   0 wz--n-  <50,00t <44,86t
  0cfed8c2-cdfd-4a57-bb8c-eabfbdbccdf8   1  50   0 wz--n-  <20,00t   4,57t
  1831603c-e583-412a-b20e-f97b31ad9a55   1 122   0 wz--n-  <25,00t  <6,79t
  3ff15d64-a716-4fad-94f0-abb69b5643a7   1  64   0 wz--n-  <17,31t  <4,09t
  424fc43f-6bbf-47bb-94a0-b4c3322a4a90   1  68   0 wz--n-  <14,46t  <1,83t
  4752cc9d-5f19-4cb1-b116-a62e3ee05783   1  81   0 wz--n-  <28,00t  <4,91t
  567a63ec-5b34-425c-af20-5997450cf061   1 110   0 wz--n-  <17,00t  <2,21t
  5f6dcc41-9a2f-432f-9de0-bed541cd6a03   1  71   0 wz--n-  <20,00t  <2,35t
  8a4e4463-0945-430e-affd-c7ac2bbdc912   1  86   0 wz--n-  <13,01t   2,85t
  c9543c8d-c6da-44be-8060-179e807f1211   1  55   0 wz--n-  <18,00t   5,22t
  d5679d9d-ebf2-41ef-9e93-83d2cd9b027c   1  67   0 wz--n-   <7,20t  <1,15t
  onn    1  13   0 wz--n- <446,07g  88,39g
  vg_4trdb1p 3   7   0 wz-pn-  157,19g  0
  vg_4trdb1t 3   7   0 wz-pn-  157,19g  0
  vg_deployconfigrepo    3   7   0 wz-pn-   72,19g  0
  vg_ektrdb1p    3   7   0 wz-pn-  157,19g  0
  vg_ektrdb1t    3   7   0 wz-pn-  157,19g  0
  vg_empteszt    2   6   0 wz-pn-  <77,20g <20,00g
  

[ovirt-users] Info about openstack staging-ovirt driver connection not released

2020-04-26 Thread Gianluca Cecchi
Hello,
I'm setting up an OpenStack Queens lab (to best match OSP 13) using oVirt
VMs as nodes.
At this time only the undercloud is configured, with 8 OpenStack nodes (VMs)
set as available for provisioning.
I'm using the staging-ovirt driver on the director node in a similar way to
the vbmc one.
I see from the oVirt active user sessions page that every minute I get one
connection per node (in my case 8) from the designated user (in my case
ostackpm).
But it seems they are never released.
How can I check the problem?

Director is CentOS 7 server and the staging-ovirt driver is provided by the
package:

[root@director ~]# rpm -q python-ovirt-engine-sdk4
python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64
[root@director ~]#

I didn't configure the oVirt repo but only installed the latest stable
available for 4.3.9:

wget
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/x86_64/python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64.rpm
 sudo yum localinstall python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64.rpm

Anyone with experience on this?

In the meantime, is there any way to use the API to kill the (I think) stale
sessions?
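For reference, the REST API does support reusable sessions; a rough sketch
(the engine FQDN and credentials are placeholders) of logging in once and
then reusing the same session, which is what the driver would ideally do:

  # log in once; the API honors persistent-auth and returns a session cookie
  curl -k -c cookies.txt -H 'Prefer: persistent-auth' \
       -u 'admin@internal:password' \
       https://engine.example.com/ovirt-engine/api

  # reuse that session instead of opening a new one per request
  curl -k -b cookies.txt https://engine.example.com/ovirt-engine/api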
Thanks,
Gianluca


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sun, Apr 26, 2020 at 11:06 AM Nyika Csaba  wrote:

>
>
>
> Thanks for the advice.
>
> The hypervisors are "fresh". But the management server arrived from
> version 3.6 step-by-step (we have used this oVirt setup since 2015).
>
> The issue occurred on different clusters, hosts, and different HV versions.
> For example the last-but-one VM was on an IBM x3650, oVirt Node v4.2 host,
> and the last on a Lenovo, oVirt Node v4.3.
>
> Best
>
>
In theory, on a hypervisor node the only VG listed should be something like
onn (as in oVirt Node Next Generation, I think)

In my case I also have gluster volumes, but in your case with FC SAN you
should only have onn

[root@ovirt ~]# vgs
  VG #PV #LV #SN Attr   VSizeVFree
  gluster_vg_4t1   2   0 wz--n-   <3.64t  0
  gluster_vg_4t2   1   2   0 wz--n-   <3.64t  0
  gluster_vg_nvme0n1   1   3   0 wz--n-  349.32g  0
  gluster_vg_nvme1n1   1   2   0 wz--n-  931.51g  0
  onn  1  11   0 wz--n- <228.40g <43.87g
[root@ovirt ~]#

And also the command "lvs" should show only onn-related logical
volumes...

Gianluca


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 
 Original message 
From: Gianluca Cecchi <gianluca.cec...@gmail.com>
Date: 26 April 2020 10:01:27
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csab...@freemail.hu
 
On Sat, Apr 25, 2020 at 10:08 PM <csab...@freemail.hu> wrote:
Hi,
Our production oVirt system looks like this: standalone management server, version
4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain (FC SAN
storages), CentOS 7 VMs, and some Windows VMs.
I have a recurring problem. Sometimes when I power a VM off and on again,
I get an error message on our Linux VMs (when we use LVM, of course): dracut:
Read-only locking type set. Write locks are prohibited., dracut: Can't get lock
for vg.
I can repair only 70% of the damaged VMs.
I tried to localize the problem, but I can't. The error has occurred randomly on
every cluster and every storage domain over the last 2 years.
Has anyone ever encountered such a problem?
 
 
I think one possible reason could be the hypervisor not correctly masking LVM at
the VM disk level.
There was a bug in the past about this.
Is this a fresh install, or did it arrive from previous versions?
 
Anyway, verify on all your hypervisors what the output of the command "vgs" is,
and be sure that you only see volume groups related to the hypervisors themselves
and not to inner VMs.
If you have a subset of VMs with the problem, identify whether it happens only on
particular clusters/hosts, so that you can narrow the analysis down to those
hypervisors.
 
HIH,
Gianluca

Thanks for the advice.

The hypervisors are "fresh". But the management server arrived from version 3.6
step-by-step (we have used this oVirt setup since 2015).

The issue occurred on different clusters, hosts, and different HV versions. For example
the last-but-one VM was on an IBM x3650, oVirt Node v4.2 host, and the last
on a Lenovo, oVirt Node v4.3.

Best
csabany


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Gianluca Cecchi
On Sat, Apr 25, 2020 at 10:08 PM  wrote:

> Hi,
>
> Our production oVirt system looks like this: standalone management server,
> version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain (FC
> SAN storages), CentOS 7 VMs, and some Windows VMs.
> I have a recurring problem. Sometimes when I power a VM off and on
> again, I get an error message on our Linux VMs (when we use LVM, of course):
> dracut: Read-only locking type set. Write locks are prohibited., dracut:
> Can't get lock for vg.
> I can repair only 70% of the damaged VMs.
> I tried to localize the problem, but I can't. The error has occurred randomly
> on every cluster and every storage domain over the last 2 years.
> Has anyone ever encountered such a problem?
>
>
I think one possible reason could be the hypervisor not correctly masking LVM
at the VM disk level.
There was a bug in the past about this.
Is this a fresh install, or did it arrive from previous versions?

Anyway, verify on all your hypervisors what the output of the command
"vgs" is, and be sure that you only see volume groups related to the
hypervisors themselves and not to inner VMs.
If you have a subset of VMs with the problem, identify whether it happens only
on particular clusters/hosts, so that you can narrow the analysis down to those
hypervisors.
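A trivial way to collect that from every node, assuming root ssh access
(the host names here are placeholders):

  # run vgs on each hypervisor and label the output
  for h in host1 host2 host3; do
      echo "== $h =="; ssh root@"$h" vgs
  done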

HIH,
Gianluca


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Nyika Csaba
 Original message 
From: Strahil Nikolov <hunter86...@yahoo.com>
Date: 26 April 2020 07:57:43
Subject: Re: [ovirt-users] Ovirt vs lvm?
To: csab...@freemail.hu
On April 25, 2020 11:07:23 PM GMT+03:00, csab...@freemail.hu wrote:
>Hi,
>
>Our production oVirt system looks like this: standalone management server,
>version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain
>(FC SAN storages), CentOS 7 VMs, and some Windows VMs.
>I have a recurring problem. Sometimes when I power a VM off and on
>again, I get an error message on our Linux VMs (when we use LVM, of
>course): dracut: Read-only locking type set. Write locks are
>prohibited., dracut: Can't get lock for vg.
>I can repair only 70% of the damaged VMs.
>I tried to localize the problem, but I can't. The error has occurred
>randomly on every cluster and every storage domain over the last 2 years.
>Has anyone ever encountered such a problem?
I haven't seen such an issue so far, but I can only recommend that you clone such a
VM next time, so you can try to figure out what is going on.
During the repair, have you tried rebuilding the initramfs after the issue
happens?
Best Regards,
Strahil Nikolov

Thanks for the advice!

Definitely yes, I build a new initramfs with dracut.
When this error occurs, the locking_type parameter in the dracut lvm.conf file
has changed to 4.
I set it back to 1: lvm vgchange -ay --config 'global {locking_type=1}'
then set locking_type in /etc/lvm/lvm.conf back to 1.
Then exit, and (if I'm having a lucky day) dracut -v -f builds a new initramfs
and the VM works fine.
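A cleaned-up sketch of those steps (assuming the dracut emergency shell,
and that lvm.conf contains a line reading "locking_type = 4"):

  # activate the VG despite the read-only locking type in the initramfs
  lvm vgchange -ay --config 'global {locking_type=1}'

  # once the root filesystem is available, persist the fix in the real lvm.conf
  sed -i 's/locking_type = 4/locking_type = 1/' /etc/lvm/lvm.conf

  # rebuild the initramfs so the corrected lvm.conf is embedded
  dracut -v -f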

The issue appears in pairs: when I find one VM with this "error", one of the
other running VMs has this error too.

csabany


[ovirt-users] Re: Move Hosted Engine VM to a different storage domain

2020-04-26 Thread Strahil Nikolov
On April 26, 2020 9:39:07 AM GMT+03:00, Yedidyah Bar David  
wrote:
>On Fri, Apr 24, 2020 at 1:04 PM Anton Louw
>
>wrote:
>
>>
>>
>> Hi All,
>>
>>
>>
>> I know this question has been asked before, by myself included. I was
>> hoping that someone has run through the exercise of moving the hosted
>> engine VM to a different storage domain. I have tried many routes,
>but the
>> backup and restore does not work for me.
>>
>
>The "standard answer" is backup and restore. Why does it not work?
>
>
>>
>>
>> Is there anybody that can perhaps give me some guidelines or a
>process I
>> can follow?
>>
>
>I didn't try that myself.
>
>The best guidelines I can give you are: Try first on a test system.
>Do
>the backup on the real machine, create some isolated VM (isolated so
>that
>it does not interfere with your hosts/storage) somewhere to be used as
>a
>test host (or a physical machine if you have one), some storage
>somewhere,
>and restore on it. Make it work. Document what you needed to do. Ask
>here
>with specific questions if/when you have them. Then do it on the
>production
>setup.
>
>Also clarify your needs. Do you need no-downtime for the VMs? If so,
>that's
>more complex. If you don't, it might be enough/simpler to deploy a new
>setup and just import the existing storage. Do you have HA VMs? etc.
>
>
>>
>>
>> The reason I need to move the HE VM is because we are decommissioning
>the
>> current storage array where the HE VM is located.
>>
>
>Good luck!
>
>Best regards,
>
>
>>
>>
>> Thank you very much
>>
>> *Anton Louw*
>> *Cloud Engineer: Storage and Virtualization* at *Vox*

If you change the gluster volume, you can use the 'hosted-engine' tool plus
migration of the data.

The example is for Gluster (the original hostnames were stripped by the
archive, so angle-bracket placeholders stand in for them):
hosted-engine --set-shared-config storage <server>:/engine

hosted-engine --set-shared-config mnt_options
backup-volfile-servers=<server2>:<server3>

Best Regards,
Strahil Nikolov


[ovirt-users] Re: Move Hosted Engine VM to a different storage domain

2020-04-26 Thread Yedidyah Bar David
On Fri, Apr 24, 2020 at 1:04 PM Anton Louw 
wrote:

>
>
> Hi All,
>
>
>
> I know this question has been asked before, by myself included. I was
> hoping that someone has run through the exercise of moving the hosted
> engine VM to a different storage domain. I have tried many routes, but the
> backup and restore does not work for me.
>

The "standard answer" is backup and restore. Why does it not work?


>
>
> Is there anybody that can perhaps give me some guidelines or a process I
> can follow?
>

I didn't try that myself.

The best guidelines I can give you are: Try first on a test system. Do
the backup on the real machine, create some isolated VM (isolated so that
it does not interfere with your hosts/storage) somewhere to be used as a
test host (or a physical machine if you have one), some storage somewhere,
and restore on it. Make it work. Document what you needed to do. Ask here
with specific questions if/when you have them. Then do it on the production
setup.

Also clarify your needs. Do you need no-downtime for the VMs? If so, that's
more complex. If you don't, it might be enough/simpler to deploy a new
setup and just import the existing storage. Do you have HA VMs? etc.
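As a sketch of that backup-and-restore route (the file names are
placeholders, and this assumes a hosted-engine recent enough to support
--restore-from-file):

  # on the current engine VM: take a full engine backup
  engine-backup --mode=backup --file=engine.bak --log=backup.log

  # redeploy hosted-engine on the new storage domain, restoring the
  # engine from that backup
  hosted-engine --deploy --restore-from-file=engine.bak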


>
>
> The reason I need to move the HE VM is because we are decommissioning the
> current storage array where the HE VM is located.
>

Good luck!

Best regards,


>
>
> Thank you very much
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*


-- 
Didi


[ovirt-users] Re: Ovirt vs lvm?

2020-04-26 Thread Strahil Nikolov
On April 25, 2020 11:07:23 PM GMT+03:00, csab...@freemail.hu wrote:
>Hi,
>
>Our production oVirt system looks like this: standalone management server,
>version 4.3.9, 6 clusters, 28 nodes (v4.2, v4.3), one storage domain
>(FC SAN storages), CentOS 7 VMs, and some Windows VMs.
>I have a recurring problem. Sometimes when I power a VM off and on
>again, I get an error message on our Linux VMs (when we use LVM, of
>course): dracut: Read-only locking type set. Write locks are
>prohibited., dracut: Can't get lock for vg.
>I can repair only 70% of the damaged VMs.
>I tried to localize the problem, but I can't. The error has occurred
>randomly on every cluster and every storage domain over the last 2 years.
>Has anyone ever encountered such a problem?

I haven't seen such an issue so far, but I can only recommend that you clone such
a VM next time, so you can try to figure out what is going on.
During the repair, have you tried rebuilding the initramfs after the issue
happens?

Best Regards,
Strahil Nikolov