[ovirt-users] Re: Re: Re: About the vm memory limit

2021-09-29 Thread Tommy Sway
In my scenario, the physical machine has a lot of memory (4 TB), with dozens of
virtual machines running on it. Each VM runs a database, and every VM is
configured with traditional huge pages.
In this case, it is still undecided whether huge pages also need to be
configured on the physical machine, and if so, what page size to use; I am
quite confused about this.
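(For a rough sense of the host-side sizing only; the figures below are invented for
illustration and are not from this thread. The reservation is simply the
huge-page-backed guest memory divided by the page size.)

  # hypothetical example: 30 VMs, each backed by 32 GiB of 2 MiB huge pages
  #   30 * 32 GiB = 960 GiB of huge-page-backed memory
  #   960 GiB / 2 MiB = 491520 pages
  echo "vm.nr_hugepages = 491520" > /etc/sysctl.d/90-hugepages.conf
  sysctl -p /etc/sysctl.d/90-hugepages.conf
  grep Huge /proc/meminfo    # check HugePages_Total / HugePages_Free

For a reservation of this size, passing the hugepage count on the kernel command
line at boot is usually more reliable than a runtime sysctl, because memory
fragmentation can prevent late allocation.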







-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Wednesday, September 29, 2021 8:50 PM
To: 'users' ; Tommy Sway 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit

I got a 3 TB host (physical) with Oracle without Traditional Hugepages. The DB 
will work even without hugepages... but how much memory will be lost - that's 
another story.

Disable the transparent Huge Pages and check this documentation - should be 
valid for oVirt 4.3 and OLVM 4.3 as they share the same source:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/configuring_high_performance_virtual_machines_templates_and_pools#Configuring_Huge_Pages
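As a rough outline of what that host-side preparation looks like on an EL-based
hypervisor (the authoritative steps are in the guide linked above; the 1 GiB page
count below is an arbitrary example, not a recommendation):

  # check the current THP mode; [never] means it is disabled
  cat /sys/kernel/mm/transparent_hugepage/enabled

  # disable THP persistently via the kernel command line
  grubby --update-kernel=ALL --args="transparent_hugepage=never"

  # optionally reserve boot-time huge pages as well (example: 64 x 1 GiB)
  grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=64"

  # reboot, then verify
  grep Huge /proc/meminfo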

Best Regards,
Strahil Nikolov





On Wednesday, 29 September 2021 at 15:20:54 GMT+3, Tommy Sway wrote:






I'm on 4.3, but the memory of the VM and its SGA is large (32 GB), so the VM
should use traditional hugepages for the database SGA.

I don't know how to set up huge pages on the KVM host.
If I configure traditional huge pages on the host, how much should I reserve?
Will the non-SGA memory usage of the many VMs be affected?
After all, not all memory used by QEMU needs to be backed by huge pages, and
traditional huge pages must be explicitly requested by the application rather
than being transparent.
 
These are all questions.
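For what it's worth, the guest-side half of this (sizing huge pages inside the VM
for the SGA) is plain Linux/Oracle configuration and does not depend on how the
host is set up. A minimal sketch, assuming a hypothetical 32 GiB SGA and the
default 2 MiB page size:

  # inside the guest: 32 GiB SGA / 2 MiB pages = 16384 pages, plus a small margin
  echo "vm.nr_hugepages = 16500" > /etc/sysctl.d/90-oracle-hugepages.conf
  sysctl --system

  # let the oracle user lock that much memory (values in KiB, example only)
  echo "oracle soft memlock 33792000" >> /etc/security/limits.conf
  echo "oracle hard memlock 33792000" >> /etc/security/limits.conf

  # after restarting the instance, confirm the SGA landed in huge pages
  grep HugePages /proc/meminfo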
 
 
 
 
From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Wednesday, September 29, 2021 5:39 PM
To: Tommy Sway ; 'users' 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
 
If you are on 4.3 -> disable transparent hugepages on both the Hypervisor and the VM.
If you are using 4.4 -> disable transparent hugepages and also configure
regular huge pages.
 
Best Regards,
Strahil Nikolov
> On Wed, Sep 29, 2021 at 5:34, Tommy Sway  wrote:
> From the Oracle OLVM support:
> 
> Configuring hugepages for the guest VMs should suffice; however, the KVM
> hosts need to be configured with hugepages too.
> If they are not, you may end up with issues while starting the guest VMs.
> 
> I really don't know what to do now.
> 
> 
> 
> 
> 
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of 
> Strahil Nikolov via Users
> Sent: Tuesday, September 28, 2021 3:39 PM
> To: 'users' ; Tommy Sway 
> Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
> 
> I think that if you run VMs with databases, you must disable transparent huge
> pages at the hypervisor level and at the VM level. Yet, if you wish, you can
> use regular huge pages at the VM level.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Tuesday, 28 September 2021 at 09:21:09 GMT+3, Tommy Sway wrote:
> 
> 
> 
> 
> 
> 
> What problem will appear if I keep the default (transparent huge pages
> enabled) on the physical hosts, but configure traditional huge page memory in
> the virtual machine for the database SGA?
> Or is it better to disable transparent huge pages on the physical machines and
> still use traditional huge page memory in the virtual machines?
> 
> Which one is preferred?
> 
> 
> From: users-boun...@ovirt.org  On Behalf Of 
> Strahil Nikolov via Users
> Sent: Tuesday, September 28, 2021 12:05 AM
> To: tommy ; 'users' 
> Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
> 
> https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html
> 
> https://access.redhat.com/solutions/1320153 (requires RH dev 
> subscription or other type of subscription) -> In short add 
> 'transparent_hugepage=never' to the kernel params
> 
> SLES11/12/15 -> 
> https://www.suse.com/c/sles-1112-os-tuning-optimisation-guide-part-1/
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
>> On Mon, Sep 27, 2021 at 16:33, tommy wrote:
>> thank you!
>>  
>> and how to get the config info of Transparent huge pages and how to 
>> disable it?
>>  
>>  
>>  
>> 
>> 
>> Sent from my Huawei phone
>> 
>> 
>>  Original Message 
>> From: Strahil Nikolov 
>> Date: Monday, 27 September 2021, 21:13
>> To: 'users' , Tommy Sway 
>> Subject: Re: [ovirt-users] Re: About the vm memory limit
>>> Transparent huge pages are enabled by default, so you need to stop them.
>>> I would use huge pages on both host and VM, but theoretically it
>>> shouldn't be a problem running a VM with enabled HugePages without
>>> configuring them on the Host.
>>> Best Regards, Strahil Nikolov
>>> On Saturday, 25 September 2021 at 15:25:10 GMT+3, Tommy Sway wrote:
>>>> Transparent huge pages are not used on the VM and the physical host.
>>>> But, can I enable hugepage memory on virtual

[ovirt-users] Re: Host reboots when network switch goes down

2021-09-29 Thread Strahil Nikolov via Users
Tinkering with timeouts could be risky, so in case you can't have a second 
switch - your solution (shutting down all VMs, maintenance, etc) should be the 
safest.
If possible, test it on a cluster of VMs, so you get used to the whole procedure.

Best Regards, Strahil Nikolov


On Wed, Sep 29, 2021 at 16:16, cen wrote:
On 29. 09. 21 13:31, Vojtech Juranek wrote:
> this is possible, but changing sanlock timeouts can be very tricky and can
> have unwanted/unexpected consequences, so be very careful. Here is a guideline
> how to do it:
>
> https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md

Thank you for your feedback, this seems to be exactly what is happening.

After reading the doc, my gut feeling tells me it would be smarter to 
shut down our VMs, go into maintenance mode and then perform any switch 
upgrades/reboots instead of trying to tweak the timeouts to survive a 
possible 3min+ reboot. We don't have any serious uptime requirements so 
this seems like the easiest and safest way forward.


Best regards,

cen


[ovirt-users] Re: Host reboots when network switch goes down

2021-09-29 Thread cen

On 29. 09. 21 13:31, Vojtech Juranek wrote:

this is possible, but changing sanlock timeouts can be very tricky and can
have unwanted/unexpected consequences, so be very careful. Here is a guideline
how to do it:

https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md


Thank you for your feedback, this seems to be exactly what is happening.

After reading the doc, my gut feeling tells me it would be smarter to 
shut down our VMs, go into maintenance mode and then perform any switch 
upgrades/reboots instead of trying to tweak the timeouts to survive a 
possible 3min+ reboot. We don't have any serious uptime requirements so 
this seems like the easiest and safest way forward.



Best regards,

cen


[ovirt-users] Re: Re: Re: About the vm memory limit

2021-09-29 Thread Strahil Nikolov via Users
I got a 3 TB host (physical) with Oracle without Traditional Hugepages. The DB 
will work even without hugepages... but how much memory will be lost - that's 
another story.

Disable the transparent Huge Pages and check this documentation - should be 
valid for oVirt 4.3 and OLVM 4.3 as they share the same source:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/configuring_high_performance_virtual_machines_templates_and_pools#Configuring_Huge_Pages

Best Regards,
Strahil Nikolov





On Wednesday, 29 September 2021 at 15:20:54 GMT+3, Tommy Sway wrote:






I'm on 4.3, but the memory of the VM and its SGA is large (32 GB), so the VM
should use traditional hugepages for the database SGA.

I don't know how to set up huge pages on the KVM host.
If I configure traditional huge pages on the host, how much should I reserve?
Will the non-SGA memory usage of the many VMs be affected?
After all, not all memory used by QEMU needs to be backed by huge pages, and
traditional huge pages must be explicitly requested by the application rather
than being transparent.
 
These are all questions.
 
 
 
 
From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Wednesday, September 29, 2021 5:39 PM
To: Tommy Sway ; 'users' 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
 
If you are on 4.3 -> disable transparent hugepages on both the Hypervisor and the VM.
If you are using 4.4 -> disable transparent hugepages and also configure
regular huge pages.
 
Best Regards,
Strahil Nikolov
> On Wed, Sep 29, 2021 at 5:34, Tommy Sway
>  wrote:
> From the Oracle OLVM support:
> 
> Configuring hugepages for the guest VMs should suffice; however, the KVM
> hosts need to be configured with hugepages too.
> If they are not, you may end up with issues while starting the guest VMs.
> 
> I really don't know what to do now.
> 
> 
> 
> 
> 
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of Strahil 
> Nikolov via Users
> Sent: Tuesday, September 28, 2021 3:39 PM
> To: 'users' ; Tommy Sway 
> Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
> 
> I think that if you run VMs with databases, you must disable transparent huge
> pages at the hypervisor level and at the VM level. Yet, if you wish, you can
> use regular huge pages at the VM level.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Tuesday, 28 September 2021 at 09:21:09 GMT+3, Tommy Sway wrote:
> 
> 
> 
> 
> 
> 
> What problem will appear if I keep the default (transparent huge pages
> enabled) on the physical hosts, but configure traditional huge page memory in
> the virtual machine for the database SGA?
> Or is it better to disable transparent huge pages on the physical machines and
> still use traditional huge page memory in the virtual machines?
> 
> Which one is preferred?
> 
> 
> From: users-boun...@ovirt.org  On Behalf Of Strahil 
> Nikolov via Users
> Sent: Tuesday, September 28, 2021 12:05 AM
> To: tommy ; 'users' 
> Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
> 
> https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html
> 
> https://access.redhat.com/solutions/1320153 (requires RH dev subscription or 
> other type of subscription) -> In short add 'transparent_hugepage=never' to 
> the kernel params
> 
> SLES11/12/15 -> 
> https://www.suse.com/c/sles-1112-os-tuning-optimisation-guide-part-1/
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
>> On Mon, Sep 27, 2021 at 16:33, tommy wrote:
>> thank you!
>>  
>> and how to get the config info of Transparent huge pages and how to 
>> disable it?
>>  
>>  
>>  
>> 
>> 
>> Sent from my Huawei phone
>> 
>> 
>>  Original Message 
>> From: Strahil Nikolov 
>> Date: Monday, 27 September 2021, 21:13
>> To: 'users' , Tommy Sway 
>> Subject: Re: [ovirt-users] Re: About the vm memory limit
>>> Transparent huge pages are enabled by default, so you need to stop them.
>>> I would use huge pages on both host and VM, but theoretically it
>>> shouldn't be a problem running a VM with enabled HugePages without
>>> configuring them on the Host.
>>> Best Regards, Strahil Nikolov
>>> On Saturday, 25 September 2021 at 15:25:10 GMT+3, Tommy Sway wrote:
>>>> Transparent huge pages are not used on the VM and the physical host.
>>>> But, can I enable hugepage memory on virtual machines but not on a
>>>> physical machine? For a database running on a VM, it needs hugepages
>>>> configured.
>>>> From: Strahil Nikolov 
>>>> Sent: Saturday, September 25, 2021 5:32 PM
>>>> To: Tommy Sway 
>>>> Subject: Re: [ovirt-users] About the vm memory limit
>>>>> It depends on the NUMA configuration of the host. If you have 256G per
>>>>> CPU, it's best to stay within that range. Also, consider disabling
>>>>> transparent huge pages on the host & VM. Since 4.4, regular Huge Pages
>>>>> (do not confuse them with THP) can be used on the Hypervisors, while on
>>>>> 4.3 there were some issues but I can't provide any details.
>>>>> Best Regards, Strahil

[ovirt-users] Using third-party certificate: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

2021-09-29 Thread nicolas

Hi,

I'm doing a bare metal oVirt installation, version 4.4.8.
The 'ovirt-engine' command finishes well; however, we're using a third-party
certificate (from Let's Encrypt) both for the Apache server and the
ovirt-websocket-proxy, so we changed the configuration files for httpd
and ovirt-websocket-proxy.


Once the configurations were changed, if I try to log in to the oVirt engine, 
I get a "PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to 
find valid certification path to requested target" error.


In prior versions we used to add the chain to the 
/etc/pki/ovirt-engine/.truststore file, however, simply listing the 
current certificates seems not to be working on 4.4.8.


  # LANG=C keytool -list -keystore /etc/pki/ovirt-engine/.truststore 
-alias intermedia_le -storepass mypass

  keytool error: java.io.IOException: Invalid keystore format
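One thing worth ruling out (an assumption on my part, not something verified here)
is that the .truststore shipped with 4.4 is no longer in the JKS format that
keytool assumes by default; keytool can produce exactly this error when the store
type does not match. Forcing the type makes it easy to check:

  # try both store types; whichever one lists entries is the real format
  LANG=C keytool -list -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass -storetype JKS
  LANG=C keytool -list -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass -storetype PKCS12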

Is there something I'm missing here?

Thanks.


[ovirt-users] Re: Re: Re: About the vm memory limit

2021-09-29 Thread Tommy Sway
I'm on 4.3, but the memory of the VM and its SGA is large (32 GB), so the VM
should use traditional hugepages for the database SGA.

I don't know how to set up huge pages on the KVM host.

If I configure traditional huge pages on the host, how much should I reserve?
Will the non-SGA memory usage of the many VMs be affected?

After all, not all memory used by QEMU needs to be backed by huge pages, and
traditional huge pages must be explicitly requested by the application rather
than being transparent.

 

These are all questions.

 

 

 

 

From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Wednesday, September 29, 2021 5:39 PM
To: Tommy Sway ; 'users' 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit

 

If you are on 4.3 -> disable transparent hugepages on both the Hypervisor and the VM.

If you are using 4.4 -> disable transparent hugepages and also configure
regular huge pages.

 

Best Regards,

Strahil Nikolov

On Wed, Sep 29, 2021 at 5:34, Tommy Sway wrote:

From the Oracle OLVM support:

Configuring hugepages for the guest VMs should suffice; however, the KVM hosts
need to be configured with hugepages too.
If they are not, you may end up with issues while starting the guest VMs.

I really don't know what to do now.





-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Strahil Nikolov via Users
Sent: Tuesday, September 28, 2021 3:39 PM
To: 'users' ; Tommy Sway 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit

I think that if you run VMs with databases, you must disable transparent huge
pages at the hypervisor level and at the VM level. Yet, if you wish, you can
use regular huge pages at the VM level.

Best Regards,
Strahil Nikolov






On Tuesday, 28 September 2021 at 09:21:09 GMT+3, Tommy Sway wrote:






What problem will appear if I keep the default (transparent huge pages enabled)
on the physical hosts, but configure traditional huge page memory in the
virtual machine for the database SGA?
Or is it better to disable transparent huge pages on the physical machines and
still use traditional huge page memory in the virtual machines?

Which one is preferred?


From: users-boun...@ovirt.org  On Behalf Of Strahil Nikolov via Users
Sent: Tuesday, September 28, 2021 12:05 AM
To: tommy ; 'users' 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit

https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html

https://access.redhat.com/solutions/1320153  
 (requires RH dev subscription 
or other type of subscription) -> In short add 'transparent_hugepage=never' to 
the kernel params

SLES11/12/15 -> 
https://www.suse.com/c/sles-1112-os-tuning-optimisation-guide-part-1/


Best Regards,
Strahil Nikolov





> On Mon, Sep 27, 2021 at 16:33, tommy wrote:
> thank you!
>  
> and how to get the config info of Transparent huge pages and how to 
> disable it?
>  
>  
>  
> 
> 
> Sent from my Huawei phone
> 
> 
>  Original Message 
> From: Strahil Nikolov 
> Date: Monday, 27 September 2021, 21:13
> To: 'users' , Tommy Sway 
> Subject: Re: [ovirt-users] Re: About the vm memory limit
>> Transparent huge pages are enabled by default, so you need to stop them.
>> I would use huge pages on both host and VM, but theoretically it
>> shouldn't be a problem running a VM with enabled HugePages without
>> configuring them on the Host.
>> Best Regards, Strahil Nikolov
>> On Saturday, 25 September 2021 at 15:25:10 GMT+3, Tommy Sway wrote:
>>> Transparent huge pages are not used on the VM and the physical host.
>>> But, can I enable hugepage memory on virtual machines but not on a
>>> physical machine? For a database running on a VM, it needs hugepages
>>> configured.
>>> From: Strahil Nikolov 
>>> Sent: Saturday, September 25, 2021 5:32 PM
>>> To: Tommy Sway 
>>> Subject: Re: [ovirt-users] About the vm memory limit
>>>> It depends on the NUMA configuration of the host. If you have 256G per
>>>> CPU, it's best to stay within that range. Also, consider disabling
>>>> transparent huge pages on the host & VM. Since 4.4, regular Huge Pages
>>>> (do not confuse them with THP) can be used on the Hypervisors, while on
>>>> 4.3 there were some issues but I can't provide any details.
>>>> Best Regards, Strahil Nikolov
>>>>> On Fri, Sep 24, 2021 at 6:40, Tommy Sway wrote:
>>>>> I would like to ask if there is any limit on the memory size of
>>>>> virtual machines, or a performance curve or something like that?
>>>>> As long as there is memory on

[ovirt-users] Re: Host reboots when network switch goes down

2021-09-29 Thread Vojtech Juranek
On Wednesday, 29 September 2021 09:43:56 CEST cen wrote:
> Hi,
> 
> we are experiencing a weird issue with our Ovirt setup. We have two 
> physical hosts (DC1 and DC2) and mounted Lenovo NAS storage for all VM
> data.
 
> They are connected via a managed network switch.
> 
> What happens is that if switch goes down for whatever reason (firmware 
> update etc), physical host reboots. Not sure if this is an action 
> performed by Ovirt but I suspect it is because connection to mounted 
> storage is lost and it  performs some kind of an emergency action. I 
> would need to get some direction pointers to find out
> 
> a) who triggers the reboot and why

sanlock (or rather wdmd), because it cannot renew the lease of some HA resource
(it renews it by writing to the storage) and failed to kill the process using
this resource (it should first try to kill the process and reboot the host only
if that fails)

> c) a way to prevent reboots by increasing storage? timeouts

this is possible, but changing sanlock timeouts can be very tricky and can 
have unwanted/unexpected consequences, so be very careful. Here is a guideline 
how to do it:

https://github.com/oVirt/vdsm/blob/master/doc/io-timeouts.md
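For orientation, the change that document describes amounts to a small vdsm
configuration drop-in applied on every host while it is in maintenance; the
section/key names and safe values below are only my recollection of the doc and
must be checked against io-timeouts.md before use:

  # on each host (in maintenance), as root -- verify names/values against io-timeouts.md
  printf '[sanlock]\nio_timeout = 15\n' > /etc/vdsm/vdsm.conf.d/99-local.conf
  systemctl restart vdsmd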


> Switch reboot takes 2-3 minutes.
> 
> 
> These are the host /var/log/messages just before reboot occurs:
> 
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
> [10993]: s11 check_our_lease warning 72 last_success 7690912
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
> [10993]: s3 check_our_lease warning 76 last_success 7690908
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
> [10993]: s1 check_our_lease warning 68 last_success 7690916
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
> [27983]: s11 delta_renew read timeout 10 sec offset 0 
> /var/run/vdsm/storage/15514c65-5d45-4ba7-bcd4-cc772351c940/fce598a8-11c3-44f
> 9-8aaf-8712c96e00ce/65413499-6970-4a4c-af04-609ef78891a2
 Sep 28 16:20:00
> ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 [27983]: s11
> renewal error -202 delta_length 20 last_success 7690912 Sep 28 16:20:00
> ovirtnode02 wdmd[11102]: test warning now 7690984 ping 7690970 close
> 7690980 renewal 7690912 expire 7690992 client 10993 sanlock_hosted-engine:2
> Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping 
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:01 ovirtnode02 systemd: Created slice User Slice of root.
> Sep 28 16:20:01 ovirtnode02 systemd: Started Session 15148 of user root.
> Sep 28 16:20:01 ovirtnode02 systemd: Removed slice User Slice of root.
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
> [10993]: s11 check_our_lease warning 73 last_success 7690912
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
> [10993]: s3 check_our_lease warning 77 last_success 7690908
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
> [10993]: s1 check_our_lease warning 69 last_success 7690916
> Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping 
> 7690970 close 7690980 renewal 7690912 expire 7690992 client 10993 
> sanlock_hosted-engine:2
> Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping 
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
> [10993]: s11 check_our_lease warning 74 last_success 7690912
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
> [10993]: s3 check_our_lease warning 78 last_success 7690908
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
> [10993]: s1 check_our_lease warning 70 last_success 7690916
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
> 7690970 close 7690980 renewal 7690916 expire 7690996 client 10993 
> sanlock_15514c65-5d45-4ba7-bcd4-cc772351c940:2
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
> 7690970 close 7690980 renewal 7690912 expire 7690992 client 10993 
> sanlock_hosted-engine:2
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
> [10993]: s11 check_our_lease warning 75 last_success 7690912
> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
> [10993]: s3 check_our_lease warning 79 last_success 7690908
> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
> [10993]: s1 check_our_lease warning 71 last_success 7690916
> 
> 

[ovirt-users] Re: Host reboots when network switch goes down

2021-09-29 Thread Nir Soffer
On Wed, Sep 29, 2021 at 2:08 PM cen  wrote:
>
> Hi,
>
> we are experiencing a weird issue with our Ovirt setup. We have two
> physical hosts (DC1 and DC2) and mounted Lenovo NAS storage for all VM data.
>
> They are connected via a managed network switch.
>
> What happens is that if switch goes down for whatever reason (firmware
> update etc), physical host reboots. Not sure if this is an action
> performed by Ovirt but I suspect it is because connection to mounted
> storage is lost and it  performs some kind of an emergency action. I
> would need to get some direction pointers to find out
>
> a) who triggers the reboot and why
>
> c) a way to prevent reboots by increasing storage? timeouts
>
> Switch reboot takes 2-3 minutes.
>
>
> These are the host /var/log/messages just before reboot occurs:
>
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
> [10993]: s11 check_our_lease warning 72 last_success 7690912
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
> [10993]: s3 check_our_lease warning 76 last_success 7690908
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
> [10993]: s1 check_our_lease warning 68 last_success 7690916
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
> [27983]: s11 delta_renew read timeout 10 sec offset 0
> /var/run/vdsm/storage/15514c65-5d45-4ba7-bcd4-cc772351c940/fce598a8-11c3-44f9-8aaf-8712c96e00ce/65413499-6970-4a4c-af04-609ef78891a2
> Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984
> [27983]: s11 renewal error -202 delta_length 20 last_success 7690912
> Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping
> 7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
> sanlock_hosted-engine:2
> Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:01 ovirtnode02 systemd: Created slice User Slice of root.
> Sep 28 16:20:01 ovirtnode02 systemd: Started Session 15148 of user root.
> Sep 28 16:20:01 ovirtnode02 systemd: Removed slice User Slice of root.
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
> [10993]: s11 check_our_lease warning 73 last_success 7690912
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
> [10993]: s3 check_our_lease warning 77 last_success 7690908
> Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985
> [10993]: s1 check_our_lease warning 69 last_success 7690916
> Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping
> 7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
> sanlock_hosted-engine:2
> Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
> [10993]: s11 check_our_lease warning 74 last_success 7690912
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
> [10993]: s3 check_our_lease warning 78 last_success 7690908
> Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986
> [10993]: s1 check_our_lease warning 70 last_success 7690916
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
> 7690970 close 7690980 renewal 7690916 expire 7690996 client 10993
> sanlock_15514c65-5d45-4ba7-bcd4-cc772351c940:2
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
> 7690970 close 7690980 renewal 7690912 expire 7690992 client 10993
> sanlock_hosted-engine:2
> Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping
> 7690970 close 7690980 renewal 7690908 expire 7690988 client 10993
> sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
> [10993]: s11 check_our_lease warning 75 last_success 7690912
> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
> [10993]: s3 check_our_lease warning 79 last_success 7690908

Leases on lockspace s3 will expire in one second after this message...

> Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987
> [10993]: s1 check_our_lease warning 71 last_success 7690916

When leases expire, sanlock tries to terminate the lease owner (e.g. vdsm, qemu).
If the owner of the lease cannot be terminated within ~40 seconds, sanlock must
reboot the host.
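Rough numbers, assuming the default sanlock io_timeout of 10 seconds and the
8 x io_timeout lease lifetime described in the io-timeouts document mentioned
earlier in this thread (both worth double-checking there):

  lease expiry  = 8 * io_timeout = 8 * 10 s = 80 s
                  (the check_our_lease warnings above count up towards 80)
  switch reboot = 2-3 minutes = 120-180 s, i.e. well past the lease lifetime

So with the defaults, a 2-3 minute switch outage outlives the leases, which is
why the hosts end up being rebooted as described above.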

So the host running hosted engine may be rebooted because storage is
inaccessible
and qemu is stuck on storage.

Other hosts may have the same issue if they run HA VMs, serve as the SPM, or run
storage tasks that use a lease.

To understand if this is the case, we need complete sanlock.log and
vdsm.log from
the hosts, when the issue happens.

Please file ovirt vdsm bug for this, and attach relevant logs.

Nir

[ovirt-users] Host reboots when network switch goes down

2021-09-29 Thread cen

Hi,

we are experiencing a weird issue with our Ovirt setup. We have two 
physical hosts (DC1 and DC2) and mounted Lenovo NAS storage for all VM data.


They are connected via a managed network switch.

What happens is that if switch goes down for whatever reason (firmware 
update etc), physical host reboots. Not sure if this is an action 
performed by Ovirt but I suspect it is because connection to mounted 
storage is lost and it  performs some kind of an emergency action. I 
would need to get some direction pointers to find out


a) who triggers the reboot and why

c) a way to prevent reboots by increasing storage? timeouts

Switch reboot takes 2-3 minutes.


These are the host /var/log/messages just before reboot occurs:

Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
[10993]: s11 check_our_lease warning 72 last_success 7690912
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
[10993]: s3 check_our_lease warning 76 last_success 7690908
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
[10993]: s1 check_our_lease warning 68 last_success 7690916
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
[27983]: s11 delta_renew read timeout 10 sec offset 0 
/var/run/vdsm/storage/15514c65-5d45-4ba7-bcd4-cc772351c940/fce598a8-11c3-44f9-8aaf-8712c96e00ce/65413499-6970-4a4c-af04-609ef78891a2
Sep 28 16:20:00 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:00 7690984 
[27983]: s11 renewal error -202 delta_length 20 last_success 7690912
Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping 
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993 
sanlock_hosted-engine:2
Sep 28 16:20:00 ovirtnode02 wdmd[11102]: test warning now 7690984 ping 
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2

Sep 28 16:20:01 ovirtnode02 systemd: Created slice User Slice of root.
Sep 28 16:20:01 ovirtnode02 systemd: Started Session 15148 of user root.
Sep 28 16:20:01 ovirtnode02 systemd: Removed slice User Slice of root.
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
[10993]: s11 check_our_lease warning 73 last_success 7690912
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
[10993]: s3 check_our_lease warning 77 last_success 7690908
Sep 28 16:20:01 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:01 7690985 
[10993]: s1 check_our_lease warning 69 last_success 7690916
Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping 
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993 
sanlock_hosted-engine:2
Sep 28 16:20:01 ovirtnode02 wdmd[11102]: test warning now 7690985 ping 
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
[10993]: s11 check_our_lease warning 74 last_success 7690912
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
[10993]: s3 check_our_lease warning 78 last_success 7690908
Sep 28 16:20:02 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:02 7690986 
[10993]: s1 check_our_lease warning 70 last_success 7690916
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
7690970 close 7690980 renewal 7690916 expire 7690996 client 10993 
sanlock_15514c65-5d45-4ba7-bcd4-cc772351c940:2
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
7690970 close 7690980 renewal 7690912 expire 7690992 client 10993 
sanlock_hosted-engine:2
Sep 28 16:20:02 ovirtnode02 wdmd[11102]: test warning now 7690986 ping 
7690970 close 7690980 renewal 7690908 expire 7690988 client 10993 
sanlock_3cb12f04-5d68-4d79-8663-f33c0655baa6:2
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
[10993]: s11 check_our_lease warning 75 last_success 7690912
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
[10993]: s3 check_our_lease warning 79 last_success 7690908
Sep 28 16:20:03 ovirtnode02 sanlock[10993]: 2021-09-28 16:20:03 7690987 
[10993]: s1 check_our_lease warning 71 last_success 7690916





[ovirt-users] Ovirt 4.3 Upload of Image fails

2021-09-29 Thread Mark Morgan
Hi, I am trying to upload an image to an oVirt 4.3 instance, but it keeps
failing.

After a few seconds it says "paused by system".
The test connection is successful in the upload image window, so we have
installed the certificate properly.
Due to an older thread
(https://www.mail-archive.com/users@ovirt.org/msg50954.html) I also checked
whether it has something to do with Wi-Fi, but I am not even using a Wi-Fi
connection.
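A sketch of where one might look next (the service and unit names below are my
recollection of the 4.3-era imageio components, so treat them as assumptions):
both the engine-side proxy and the host-side daemon have to be healthy for an
upload to avoid being paused by the system.

  # on the engine machine
  systemctl status ovirt-imageio-proxy
  journalctl -u ovirt-imageio-proxy --since "1 hour ago"

  # on the host performing the transfer
  systemctl status ovirt-imageio-daemon
  journalctl -u ovirt-imageio-daemon --since "1 hour ago"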


Here is a small part of the log, where you can see the transfer failing.

2021-09-29 11:44:43,011+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-96804) [d370a18b-bb12-4992-9fc8-7ce6607358f8] Running 
command: TransferImageStatusCommand internal: false. Entities affected 
:  ID: aaa0----123456789aaa Type: SystemAction group 
CREATE_DISK with role type USER
2021-09-29 11:44:43,055+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Running 
command: TransferImageStatusCommand internal: false. Entities affected 
:  ID: aaa0----123456789aaa Type: SystemAction group 
CREATE_DISK with role type USER
2021-09-29 11:44:43,056+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] 
(default task-96804) [1cbc3b4f-b1d4-428a-965a-b9745fd0e108] Updating 
image transfer 0681f799-f44f-4b1e-8369-4d1033bd81e6 (image 
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Resuming (message: 'Sent 
0MB')
2021-09-29 11:44:47,096+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-96801) [50849f1b-ef18-41ab-9380-e2c7980a1f73] Running 
command: TransferImageStatusCommand internal: false. Entities affected 
:  ID: aaa0----123456789aaa Type: SystemAction group 
CREATE_DISK with role type USER
2021-09-29 11:44:48,878+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Resuming transfer for Upload disk 
'CentOS-8.4.2105-x86_64-boot.iso' (disk id: 
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id: 
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,896+02 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] EVENT_ID: 
TRANSFER_IMAGE_RESUMED_BY_USER(1,074), Image transfer was resumed by 
user (admin@internal-authz).
2021-09-29 11:44:48,902+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Renewing transfer ticket for 
Upload disk 'CentOS-8.4.2105-x86_64-boot.iso' (disk id: 
'ce221b1f-46aa-4eb4-b159-0e0adb762102', image id: 
'45896ce1-a602-49f5-9774-4dc17d960589')
2021-09-29 11:44:48,903+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] START, 
ExtendImageTicketVDSCommand(HostName = virthost01, 
ExtendImageTicketVDSCommandParameters:{hostId='15d10fdf-4dc1-4a4c-a12f-cab50c492974', 
ticketId='8d09cf8c-baf9-4497-8b52-ea53a97b4a19', timeout='300'}), log 
id: 197aba7
2021-09-29 11:44:48,908+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ExtendImageTicketVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] FINISH, 
ExtendImageTicketVDSCommand, return: StatusOnlyReturn [status=Status 
[code=0, message=Done]], log id: 197aba7
2021-09-29 11:44:48,908+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Transfer session with ticket id 
8d09cf8c-baf9-4497-8b52-ea53a97b4a19 extended, timeout 300 seconds
2021-09-29 11:44:48,920+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] 
(EE-ManagedThreadFactory-engineScheduled-Thread-80) 
[6c5f2ed0-976c-4722-a6fb-86f3d9eb1c3b] Updating image transfer 
0681f799-f44f-4b1e-8369-4d1033bd81e6 (image 
ce221b1f-46aa-4eb4-b159-0e0adb762102) phase to Transferring (message: 
'Sent 0MB')
2021-09-29 11:44:51,379+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-96801) [e2247750-524d-40e4-bffb-1176ff13f1f5] Running 
command: TransferImageStatusCommand internal: false. Entities affected 
:  ID: aaa0----123456789aaa Type: SystemAction group 
CREATE_DISK with role type USER
2021-09-29 11:44:55,376+02 INFO 
[org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] 
(default task-96801) [f9b3dec1-9aac-4695-ba39-43e5e66bdccd] Running 
command: TransferImageStatusCommand internal: false. Entities affected 
:  ID: aaa0----123456789aaa Type: SystemAction group 
CREATE_DISK with role type 

[ovirt-users] Re: Re: Re: About the vm memory limit

2021-09-29 Thread Strahil Nikolov via Users
If you are on 4.3 -> disable transparent hugepages on both the Hypervisor and the VM.
If you are using 4.4 -> disable transparent hugepages and also configure
regular huge pages.

Best Regards, Strahil Nikolov
 
 
On Wed, Sep 29, 2021 at 5:34, Tommy Sway wrote:

From the Oracle OLVM support:

Configuring hugepages for the guest VMs should suffice; however, the KVM hosts
need to be configured with hugepages too.
If they are not, you may end up with issues while starting the guest VMs.

I really don't know what to do now.





-Original Message-
From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Tuesday, September 28, 2021 3:39 PM
To: 'users' ; Tommy Sway 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit

I think that if you run VMs with databases, you must disable transparent huge
pages at the hypervisor level and at the VM level. Yet, if you wish, you can
use regular huge pages at the VM level.

Best Regards,
Strahil Nikolov






On Tuesday, 28 September 2021 at 09:21:09 GMT+3, Tommy Sway wrote:






What problem will appear if I keep the default (transparent huge pages enabled)
on the physical hosts, but configure traditional huge page memory in the
virtual machine for the database SGA?
Or is it better to disable transparent huge pages on the physical machines and
still use traditional huge page memory in the virtual machines?

Which one is preferred?
 
 
From: users-boun...@ovirt.org  On Behalf Of Strahil 
Nikolov via Users
Sent: Tuesday, September 28, 2021 12:05 AM
To: tommy ; 'users' 
Subject: [ovirt-users] Re: Re: Re: About the vm memory limit
 
https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html
 
https://access.redhat.com/solutions/1320153 (requires RH dev subscription or 
other type of subscription) -> In short add 'transparent_hugepage=never' to the 
kernel params
 
SLES11/12/15 -> 
https://www.suse.com/c/sles-1112-os-tuning-optimisation-guide-part-1/
 
 
Best Regards,
Strahil Nikolov
 
 
 
 
 
> On Mon, Sep 27, 2021 at 16:33, tommy wrote:
> thank you!
>  
> and how to get the config info of Transparent huge pages and how to 
> disable it?
>  
>  
>  
> 
> 
> Sent from my Huawei phone
> 
> 
>  Original Message 
> From: Strahil Nikolov 
> Date: Monday, 27 September 2021, 21:13
> To: 'users' , Tommy Sway 
> Subject: Re: [ovirt-users] Re: About the vm memory limit
>> Transparent huge pages are enabled by default, so you need to stop them.
>> I would use huge pages on both host and VM, but theoretically it
>> shouldn't be a problem running a VM with enabled HugePages without
>> configuring them on the Host.
>> Best Regards, Strahil Nikolov
>> On Saturday, 25 September 2021 at 15:25:10 GMT+3, Tommy Sway wrote:
>>> Transparent huge pages are not used on the VM and the physical host.
>>> But, can I enable hugepage memory on virtual machines but not on a
>>> physical machine? For a database running on a VM, it needs hugepages
>>> configured.
>>> From: Strahil Nikolov 
>>> Sent: Saturday, September 25, 2021 5:32 PM
>>> To: Tommy Sway 
>>> Subject: Re: [ovirt-users] About the vm memory limit
>>>> It depends on the NUMA configuration of the host. If you have 256G per
>>>> CPU, it's best to stay within that range. Also, consider disabling
>>>> transparent huge pages on the host & VM. Since 4.4, regular Huge Pages
>>>> (do not confuse them with THP) can be used on the Hypervisors, while on
>>>> 4.3 there were some issues but I can't provide any details.
>>>> Best Regards, Strahil Nikolov
>>>>> On Fri, Sep 24, 2021 at 6:40, Tommy Sway wrote:
>>>>> I would like to ask if there is any limit on the memory size of
>>>>> virtual machines, or a performance curve or something like that?
>>>>> As long as there is memory on the physical machine, the more virtual
>>>>> machines the better?
>>>>> In our usage scenario, there are many virtual machines with databases,
>>>>> and their memory varies greatly. For some virtual machines, 4G memory
>>>>> is enough, while for some virtual machines, 64GB memory is needed.
>>>>> I want to know what is the best use of memory for a virtual machine,
>>>>> since the virtual machine is just a QEMU emulation process on a
>>>>> physical machine, and I worry that it is not using as much memory as a
>>>>> physical machine. Understanding this will let us develop guidelines
>>>>> for optimal memory usage scenarios for virtual machines.
>>>>> Thank you!

[ovirt-users] Failed to update OVF disks / Failed to update VMs/Templates OVF data for Storage Domain

2021-09-29 Thread nicolas

Hi,

We upgraded from oVirt 4.3.8 to 4.4.8 and sometimes we're finding events 
like these in the event log (3-4 times/day):


Failed to update OVF disks 77818843-f72e-4d40-9354-4e1231da341f, OVF 
data isn't updated on those OVF stores (Data Center KVMRojo, Storage 
Domain pv04-003).
Failed to update VMs/Templates OVF data for Storage Domain pv02-002 
in Data Center KVMRojo.


I found [1], however, it seems not to solve the issue. I restarted all 
the hosts and we're still getting the messages.


We couldn't upgrade hosts to 4.4 yet, FWIW. Maybe it's caused by this?

If someone could shed some light about this, I'd be grateful.

Thanks.

  [1]: https://access.redhat.com/solutions/3353011