[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-28 Thread Strahil Nikolov via Users
I would put that code in vm_create.yml and then you can: - name: Create VMs include_tasks: vm_create.yml loop: - VM1 - VM2 loop_control: loop_var: vm_name Then inside your vm_create.yml you will use "{{ vm_name }}" as the name of the VM. Best Regards, Strahil Nikolov On Sun,
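Re-indented for readability, the loop sketched above would look like this (vm_create.yml and the VM names are the thread's own placeholders):

```yaml
# Call vm_create.yml once per VM name; inside that file,
# refer to the current name as "{{ vm_name }}".
- name: Create VMs
  include_tasks: vm_create.yml
  loop:
    - VM1
    - VM2
  loop_control:
    loop_var: vm_name
```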

[ovirt-users] Re: Attaching LVM logical volume to VM

2021-11-28 Thread Strahil Nikolov via Users
Usually in oVirt you want your VMs to be on shared storage, so they can live migrate between the hosts. Such shared storage options are: iSCSI, SAN, NFS & GlusterFS (Ceph is still "experimental") or any other POSIX filesystem. If you wish to use local storage, previously (I'm not sure if this is

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-27 Thread Strahil Nikolov via Users
Does it work when you remove the 'custom_script' section ? Best Regards,Strahil Nikolov On Sat, Nov 27, 2021 at 7:35, Sina Owolabi wrote: No errors at all Same results again Screenshot attached for a better view, but this is where it's at right now:       sso: true        

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-26 Thread Strahil Nikolov via Users
YAML is picky... In write_files: - path: /tmp/setup.sh permissions: '0755' content: |, the permissions and content keys should be at the same indentation level as path (- path / permissions / content). What is the error you receive? Best Regards, Strahil Nikolov On Sat, Nov 27,
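Spelled out, the indentation rule above gives a cloud-config fragment like this (the path, mode and script body are the thread's own example):

```yaml
#cloud-config
write_files:
  - path: /tmp/setup.sh
    permissions: '0755'   # same indentation level as path and content
    content: |
      #!/bin/bash
      echo "$(hostnamectl)" >> /tmp/myhostname.txt
```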

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-26 Thread Strahil Nikolov via Users
Yep. Your code: cloud_init: custom_script: | host_name: "{{ vm_fqdn }}" user_name: myadmin user_password: write_files: - path: /tmp/setup.sh permissions: '0755' content: | #!/bin/bash echo "$(hostnamectl)" >> /tmp/myhostname.txt Yet, in "custom_script" there is no "host_name"
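The point above is that host_name and the credentials are parameters of the cloud_init section itself, while custom_script holds only the extra cloud-config text. A hedged sketch of the corrected layout, keeping the thread's own keys (check the ovirt.ovirt.ovirt_vm module documentation for the exact parameter names):

```yaml
cloud_init:
  host_name: "{{ vm_fqdn }}"
  user_name: myadmin
  user_password: "{{ my_password }}"   # placeholder variable, not from the thread
  custom_script: |
    write_files:
      - path: /tmp/setup.sh
        permissions: '0755'
        content: |
          #!/bin/bash
          echo "$(hostnamectl)" >> /tmp/myhostname.txt
```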

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-26 Thread Strahil Nikolov via Users
I don't see any Ansible code below. Best Regards, Strahil Nikolov On Fri, Nov 26, 2021 at 17:04, Sina Owolabi wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:

[ovirt-users] Re: broker.log file filling up

2021-11-25 Thread Strahil Nikolov via Users
The broker logs can be changed... No need to shutdown the VM. Just use the hosted-engine tool to put the vm in global maintenance, then change the log level and restart the broker/agent service. After several minutes  (wait at least 5 min) 'hosted-engine --vm-status' should show that the

[ovirt-users] Re: what are the best practices to delete vdisks in a 100% used gluster storage domain

2021-11-25 Thread Strahil Nikolov via Users

[ovirt-users] Re: what are the best practices to delete vdisks in a 100% used gluster storage domain

2021-11-25 Thread Strahil Nikolov via Users
Usually, gluster has the following options: cluster.min-free-disk and cluster.min-free-inodes. The cluster.min-free-disk option is set to 10%, which means that you have the option to: - identify VMs/disks/objects that can be quickly deleted (snapshot deletion is not fast enough) - set the option to lower

[ovirt-users] Re: Add static route to ovirt nodes

2021-11-20 Thread Strahil Nikolov via Users
Hi, according to https://access.redhat.com/solutions/5982361 (which can be seen if you obtain a free RH developer subscription) you will need to use nmstate. First create a yaml like this one (indentation must follow yaml requirements): routes: config: - destination: 10.9.0.0/16
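A minimal nmstate policy matching the snippet above; the destination is the thread's example, while the next-hop address and interface below are placeholders you would replace with your own values:

```yaml
routes:
  config:
    - destination: 10.9.0.0/16
      next-hop-address: 10.9.1.1    # placeholder gateway
      next-hop-interface: eth0      # placeholder interface
```

Such a file would typically be applied with `nmstatectl apply <file>.yml`.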

[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-12 Thread Strahil Nikolov via Users
As I mentioned in the slack, the safest approach is to: 1. Reduce the volume to replica 1 (there is no need to keep the arbiter until resynchronization): gluster volume remove-brick VOLUME replica 1 beclovkvma02.bec.net:/data/brick2/brick2 beclovkvma03.bec.net:/data/brick1/brick2

[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-11 Thread Strahil Nikolov via Users
Please provide  'gluster volume info datastore1' and specify which bricks you want to remove. Best Regards,Strahil Nikolov On Thu, Nov 11, 2021 at 6:13, dhanaraj.ramesh--- via Users wrote: Hi Strahil Nikolov Thank you for the suggestion but it does not help... [root@beclovkvma01

[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-10 Thread Strahil Nikolov via Users
You have to specify the volume type. When you remove 1 brick from a replica 3 volume, you are actually converting it to replica 2. As you got 2 data bricks + 1 arbiter, just remove the arbiter brick and the missing node's brick: gluster volume remove-brick VOL replica 1 node2:/brick

[ovirt-users] Re: oVirt networks

2021-11-09 Thread Strahil Nikolov via Users
Have you thought about bonding mode 2 (with option 'xmit_hash_policy=layer3+4') or mode 6? Best Regards, Strahil Nikolov On Tue, Nov 9, 2021 at 18:22, Sandro Bonazzola wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to

[ovirt-users] Re: Issues with resizing Gluster Volumes with sharding and lookup-optimize enabled

2021-11-08 Thread Strahil Nikolov via Users
What about listing volumes via: http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/gluster_volumes -> list and changing the option via: http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/gluster_volume -> setoption Best Regards, Strahil Nikolov On Monday, November 8,

[ovirt-users] Issues with resizing Gluster Volumes with sharding and lookup-optimize enabled

2021-11-07 Thread Strahil Nikolov via Users
Hello All, I recently found a RH solution regarding issues with resizing Gluster volumes with lookup-optimize enabled and I think it's worth sharing: https://access.redhat.com/solutions/5896761 and https://github.com/gluster/glusterfs/issues/1918 @Sandro, do you think it's worth checking and disabling

[ovirt-users] Re: New Install. 3 node HCI using 4.4.8 ISO

2021-11-03 Thread Strahil Nikolov via Users
Check if you have the glusterfs-selinux rpm installed. Best Regards, Strahil Nikolov On Tue, Nov 2, 2021 at 8:05, shane.kr...@gmail.com wrote: Hello everyone. First time posting. Sorry if this has already been spoken to... I didn't see anything with a quick search that I did. 3 nodes running

[ovirt-users] Re: how to forcibly remove dead node from Gluster 3 replica 1 arbiter setup?

2021-11-03 Thread Strahil Nikolov via Users
In order to remove a dead host you will need to: - Remove all bricks (originating from that host) in all volumes of the TSP: gluster volume remove-brick engine host3:/gluster_bricks/brick1/ force - Remove the host from the TSP: gluster peer detach host3 - Next remove the host from oVirt. In some

[ovirt-users] Re: question about engine deployment success rate

2021-11-02 Thread Strahil Nikolov via Users
40-50% is low. Usually it should be above 90%, but some versions are buggier than others. 4.3.10 is the latest 4.3 version and the only one supported for migration to 4.4/4.5, so consider altering the deployment process. When an engine constantly restarts -> put it in

[ovirt-users] Re: Gluster Install Fail again :(

2021-10-30 Thread Strahil Nikolov via Users
OK, that's odd. Can you check the following:
On all nodes:
grep storage[1-3].private /etc/hosts
for i in {1..3}; do host storage${i}.private.net; done
On the first node:
gluster peer probe storage1.private.net
gluster peer probe storage2.private.net
gluster peer probe storage3.private.net
gluster

[ovirt-users] Re: question about engine deployment success rate

2021-10-30 Thread Strahil Nikolov via Users
Well, it (the install process) requires some polishing. Any reason not to use 4.3.10? This is the only supported version for migration to 4.4. Can you share what the errors were? Best Regards, Strahil Nikolov On Sat, Oct 30, 2021 at 20:25, Henning Sprang wrote: Hi Strahil, Thanks for

[ovirt-users] Re: Gluster Install Fail again :(

2021-10-30 Thread Strahil Nikolov via Users
What is the output of: gluster peer list (from all nodes)? Output from the ansible run will be useful. Best Regards, Strahil Nikolov I have been working on getting this up and running for about a week now and I am totally frustrated. I am not sure even where to begin. Here is the error I

[ovirt-users] Re: Q: oVirt guest agent + spice-vdagent on Debian 11 Bullseye

2021-10-30 Thread Strahil Nikolov via Users
You need qemu-guest-agent, as the oVirt guest agent is no longer needed, nor available. Best Regards, Strahil Nikolov On Fri, Oct 29, 2021 at 12:33, Andrei Verovski wrote: Hi, Anyone have compiled these deb packages for Debian 11 Bullseye? oVirt guest agent + spice-vdagent Packages from Buster

[ovirt-users] Re: Engine VM FQDN will not validate.

2021-10-30 Thread Strahil Nikolov via Users
Do you have a PTR record for the engine's IP ? Best Regards,Strahil Nikolov Hello, I am trying to install the Hosted engine using the wizard.  I am NOT using the hyper converged.  When I add the fqdn  of  vmengine1.domain.com  I get the error. localhost is not a valid address.  When I

[ovirt-users] Re: question about engine deployment success rate

2021-10-30 Thread Strahil Nikolov via Users
If you want to increase your deployment success, you will need to use repository management and freeze your OS & oVirt repos to a working level. For example, if you use RHEL 8.4 and the current level of oVirt, you will have dependency issues until RHEL 8.5 is released. Once this happens and your

[ovirt-users] Re: The Engine VM (/32) and this host (/32) will not be in the same IP subnet.

2021-10-27 Thread Strahil Nikolov via Users
Prefix of '32' means that there are no other IPs in the subnet -> usually this prefix is used in firewalls to indicate that the rule is only for that IP. It doesn't seem legit to me. Best Regards,Strahil Nikolov Hi list, 'The Engine VM (10.200.30.5/32) and this host (10.200.30.3/32)

[ovirt-users] Re: upgrade dependency issues

2021-10-25 Thread Strahil Nikolov via Users
Hi, what is the output from: 'yum repolist'? Best Regards, Strahil Nikolov On Monday, October 25, 2021, 17:12:58 GMT+3, John Florian wrote: I recently upgraded my engine to 4.4.9 following the usual engine-setup procedure. Once that was done, I tried to do a dnf

[ovirt-users] Re: Unable to put host in maintenance mode & unable to login to Postgres

2021-10-23 Thread Strahil Nikolov via Users
That's why I put the user and the pass in single quotes... Like this: 'user@domain':'pass' Best Regards, Strahil Nikolov On Thu, Oct 21, 2021 at 23:47, David White via Users wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an

[ovirt-users] Re: TPM 2.0 Support in Ovirt

2021-10-21 Thread Strahil Nikolov via Users
Have you tried  https://www.ovirt.org/documentation/virtual_machine_management_guide/#Adding_TPM_devices ? Best Regards,Strahil Nikolov On Thu, Oct 21, 2021 at 19:21, bob.franzke--- via Users wrote: Need to deploy a VM of a Windows 11 guest. Windows 11 requires TPM 2.0 support for it to

[ovirt-users] Re: Hosted Engine Deployment failure

2021-10-21 Thread Strahil Nikolov via Users
Do you use proxy ? Best Regards,Strahil Nikolov On Thu, Oct 21, 2021 at 5:07, Raj P wrote: Hi Sandro, Am not sure where the problem is, I am on a 1gig link with 500 mbps upload download on average. I think its just happening with the repos and its a different error every time.

[ovirt-users] Re: Unable to put host in maintenance mode & unable to login to Postgres

2021-10-20 Thread Strahil Nikolov via Users
Try with curl -u 'admin@internal':'pass' ... Best Regards, Strahil Nikolov On Thu, Oct 21, 2021 at 2:17, David White via Users wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-20 Thread Strahil Nikolov via Users
It was later discovered that the selinux policy was removed from the selinux packages. You will need glusterfs-selinux, which should be available in the latest version of oVirt. Best Regards, Strahil Nikolov On Wed, Oct 20, 2021 at 3:17, ad...@foundryserver.com wrote: I have the same

[ovirt-users] Re: Unable to put host in maintenance mode & unable to login to Postgres

2021-10-17 Thread Strahil Nikolov via Users
Try unlock_entity.sh with '-t all -r'. Best Regards, Strahil Nikolov On Sat, Oct 16, 2021 at 13:43, David White via Users wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement:

[ovirt-users] Re: cannot stop vm normally

2021-10-14 Thread Strahil Nikolov via Users
Shutdown needs integration and cooperation from the VM OS. Did you install the qemu-guest-agent, and is it working properly? Best Regards, Strahil Nikolov On Fri, Oct 15, 2021 at 4:12, zhou...@vip.friendtimes.net wrote:

[ovirt-users] Re: Engine insists on running on 1 host in cluster when that host is online

2021-10-14 Thread Strahil Nikolov via Users
For the score issue you can check https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf and then identify your problem and fix it. For the he_local, you can use hosted-engine

[ovirt-users] Re: Engine insists on running on 1 host in cluster when that host is online

2021-10-12 Thread Strahil Nikolov via Users
>How do I reconfigure oVirt to use the 2nd replica as a secondary mount point?
Verify that your engine's volume is really OK:
gluster volume info engine
gluster volume status engine
gluster volume heal engine info summary
>I cannot migrate the engine off of c. And if the engine is running on the

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Strahil Nikolov via Users
Actually it seems that glusterfs-selinux should fix the problem. Best Regards,Strahil Nikolov On Mon, Oct 11, 2021 at 0:28, Strahil Nikolov via Users wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-10 Thread Strahil Nikolov via Users
Hey Sandro, do we know why this has been done ? Best Regards,Strahil Nikolov On Sun, Oct 10, 2021 at 16:48, Ax Olmos wrote: The problem is that the ‘glusterd_brick_t’ file context is missing from selinux-policy-targeted 3.14.3-80 on CentOS 8 Stream. It exists in the CentOS 8.4 version:

[ovirt-users] Re: Slow gluster replication

2021-10-10 Thread Strahil Nikolov via Users
gluster volume set help | grep shd Most probably you want to change cluster.shd-max-threads, if your hardware can support it. Best Regards, Strahil Nikolov On Sun, Oct 10, 2021 at 2:26, David White via Users wrote: I can't remember if I've asked this already, or if someone else has brought

[ovirt-users] Re: API how to increase extend resize disk VM

2021-10-04 Thread Strahil Nikolov via Users
I would check the API guide at https://ovirt.somedomain/ovirt-engine/apidoc/#/ Best Regards, Strahil Nikolov Hello, please, how do I increase (extend/resize) the disk of a VM? I can work with ansible or the REST API. My working Ansible is here, but I have not found a manual for updating the size:
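On the Ansible side, extending a disk is usually just a matter of re-running ovirt.ovirt.ovirt_disk with a larger size; a sketch under the assumption that the disk and VM names below exist (they are placeholders, not from the thread):

```yaml
- name: Extend an existing VM disk to 50 GiB
  ovirt.ovirt.ovirt_disk:
    auth: "{{ ovirt_auth }}"   # obtained earlier via ovirt.ovirt.ovirt_auth
    name: mydisk               # placeholder disk name
    vm_name: myvm              # placeholder VM name
    size: 50GiB                # disks can only be grown, not shrunk
```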

[ovirt-users] Re: About the vm memory limit

2021-10-04 Thread Strahil Nikolov via Users
Original message. From: Strahil Nikolov, Date: Sun, Oct 3, 2021 17:05, To: sz_cui...@163.com, Strahil Nikolov via Users, Cc: Simon Coter, Subject: Re: [ovirt-users] Re: About the vm memory limit. For 4.3 - yes. This is due to the issues in 4.3. Once you move to 4.4, you should be able to enable

[ovirt-users] Re: About the vm memory limit

2021-10-03 Thread Strahil Nikolov via Users
if you are eligible for the developer subscription you can subscribe at: https://developers.redhat.com/register P.S.: It now includes up to 16 RHEL machines for production usage. Best Regards, Strahil Nikolov On Sun, Oct 3, 2021 at 12:57, Tommy Sway wrote:

[ovirt-users] Re: About the vm memory limit

2021-10-03 Thread Strahil Nikolov via Users
For 4.3 - yes. This is due to the issues in 4.3. Once you move to 4.4, you should be able to enable HugePages (not THP) on the hypervisors without troubles. I assume that your workload is 'in production' and stability (no downtime) is more important than some performance gain from HugePages on

[ovirt-users] Re: About the vm memory limit

2021-10-02 Thread Strahil Nikolov via Users
HugePages filter -> expects preallocated Huge Pages (not THP). So either use dynamic allocation with the filter disabled or, on the contrary, a fixed amount of pages. Best Regards, Strahil Nikolov On Sat, Oct 2, 2021 at 14:25, Tommy Sway wrote: ___

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
I just checked the module source and it should be working with 'glusterd_brick_t'. Do you have glusterfs-server installed on all nodes? Best Regards, Strahil Nikolov On Sat, Oct 2, 2021 at 23:13, Strahil Nikolov via Users wrote: ___ Users

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Don't you have a task just like  https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/mount.yml#L64-L70 ? Best Regards,Strahil Nikolov On Sat, Oct 2, 2021 at 23:00, Woo Hsutung wrote: ___ Users mailing list --

[ovirt-users] Re: About the vm memory limit

2021-10-02 Thread Strahil Nikolov via Users
vdsm supports dynamic allocation of Huge Pages, but https://access.redhat.com/solutions/4904441 indicates the issues in 4.3 related to hugepages (you can see the solution via an RH dev subscription). If you needed to set them manually and everything works normally, I would have picked something

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Also, you can edit the /etc/fstab entries by adding the mount option: context="system_u:object_r:glusterd_brick_t:s0" Then remount the bricks (umount; mount). This tells the kernel to skip selinux lookups and assume everything on the brick is labeled as gluster brick files, which will reduce the I/O. Best

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-02 Thread Strahil Nikolov via Users
Most probably it's in a variable. Just run the following:
semanage fcontext -a -t glusterd_brick_t "/gluster_bricks(/.*)?"
restorecon -RFvv /gluster_bricks/
Best Regards, Strahil Nikolov On Sat, Oct 2, 2021 at 3:08, Woo Hsutung wrote: Strahil, Thanks for your

[ovirt-users] Re: About the vm memory limit

2021-10-01 Thread Strahil Nikolov via Users
-- From: Strahil Nikolov, Date: Fri, Oct 1, 2021 22:52, To: tommy sway, Strahil Nikolov via Users, Subject: Re: [ovirt-users] Re: About the vm memory limit. Actually yes - should speed up the process. I've edited that draft too many times :) On Fri, Oct 1, 2021 at 7:32, tommy sway

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-10-01 Thread Strahil Nikolov via Users
In the cockpit installer the last step allows you to edit the ansible before running it. Just search for glusterd_brick_t and replace it. Best Regards, Strahil Nikolov On Fri, Oct 1, 2021 at 17:48, Woo Hsutung wrote: Same issue happens when I deploy on a single node. And I can't find where I can

[ovirt-users] Re: gluster 5834 Unsynced entries present

2021-10-01 Thread Strahil Nikolov via Users
Put ovnode2 in maintenance (tick the option to stop gluster), wait till all VMs evacuate and the host is really in maintenance, then activate it back. Restarting glusterd should also do the trick, but it's always better to ensure no gluster processes have been left running (including the mount

[ovirt-users] Re: About the vm memory limit

2021-10-01 Thread Strahil Nikolov via Users
Actually yes - should speed up the process. I've edited that draft too many times :) On Fri, Oct 1, 2021 at 7:32, tommy sway wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy

[ovirt-users] Re: about the expiration time of the oVirt certs

2021-10-01 Thread Strahil Nikolov via Users
I was thinking the same. Would you open a feature request on bugzilla.redhat.com? I know that certmonger can renew all certs automatically via an external CA, so that would be a great feature. Best Regards, Strahil Nikolov On Fri, Oct 1, 2021 at 7:41, tommy sway wrote:

[ovirt-users] Re: about the expiration time of the oVirt certs

2021-09-30 Thread Strahil Nikolov via Users
I think you are looking for certmonger, but it will require some manual steps: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system-level_authentication_guide/certmonger Best Regards, Strahil Nikolov On Thu, Sep 30, 2021 at 10:17, Tommy Sway wrote: As you

[ovirt-users] Re: About the vm memory limit

2021-09-30 Thread Strahil Nikolov via Users
een determined, I am also very confused. -Original Message- From: users-boun...@ovirt.org On Behalf Of Strahil Nikolov via Users Sent: Wednesday, September 29, 2021 8:50 PM To: 'users'; Tommy Sway Subject: [ovirt-users] Re: About the vm memory limit I got a 3 TB host (physi

[ovirt-users] Re: Host reboots when network switch goes down

2021-09-29 Thread Strahil Nikolov via Users
Tinkering with timeouts could be risky, so in case you can't have a second switch - your solution (shutting down all VMs, maintenance, etc) should be the safest. If possible test it on a cluster on VMs, so you get used to the whole procedure.  Best Regards,Strahil Nikolov On Wed, Sep 29,

[ovirt-users] Re: About the vm memory limit

2021-09-29 Thread Strahil Nikolov via Users
ince, if not you may end with issues while starting the guest VMs. > > I really don't know what to do now. > > > > > > -Original Message- > From: users-boun...@ovirt.org On Behalf Of Strahil > Nikolov via Users > Sent: Tuesday, September 28, 2021 3:39 P

[ovirt-users] Re: About the vm memory limit

2021-09-29 Thread Strahil Nikolov via Users
Behalf Of Strahil Nikolov via Users Sent: Tuesday, September 28, 2021 3:39 PM To: 'users'; Tommy Sway Subject: [ovirt-users] Re: About the vm memory limit I think that if you run VMs with databases, you must disable transparent huge pages on the hypervisor level and on the VM level. Yet, if you wi

[ovirt-users] Re: oVirt / Hyperconverged

2021-09-28 Thread Strahil Nikolov via Users
Yes, you can use it with 4 nodes. You have to check what caused the crash before starting over or losing the logs. Best Regards, Strahil Nikolov On Tuesday, September 28, 2021, 09:56:30 GMT+3, wrote: I have 4 servers of identical hardware. The documentation says "you

[ovirt-users] Re: About the vm memory limit

2021-09-28 Thread Strahil Nikolov via Users
page memory on virtual machines? Which one is preferred? From: users-boun...@ovirt.org On Behalf Of Strahil Nikolov via Users Sent: Tuesday, September 28, 2021 12:05 AM To: tommy; 'users' Subject: [ovirt-users] Re: About the vm memory limit https://docs.oracle.com/en/database

[ovirt-users] Re: About the vm memory limit

2021-09-27 Thread Strahil Nikolov via Users
https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/disabling-transparent-hugepages.html https://access.redhat.com/solutions/1320153 (requires RH dev subscription or other type of subscription) -> In short add 'transparent_hugepage=never' to the kernel params SLES11/12/15 ->

[ovirt-users] Re: about the Live Storage Migration

2021-09-27 Thread Strahil Nikolov via Users
Sadly, libgfapi has some limits and some of them are like your case. If it's a linux VM, you can create new disk from the Gluster Storage and then attach and clone/pvmove it from within the guest. Another approach is to disable libgfapi, shutdown the vm, power on the VM, migrate the disk and

[ovirt-users] Re: about the Live Storage Migration

2021-09-27 Thread Strahil Nikolov via Users
Admin Portal -> Storage -> Disks -> Select Disk -> upper right corner -> Move -> follow the wizard. Best Regards, Strahil Nikolov On Sunday, September 26, 2021, 14:06:23 GMT+3, Tommy Sway wrote: From the document: Overview of Live Storage Migration: Virtual disks can be

[ovirt-users] Re: About the vm memory limit

2021-09-27 Thread Strahil Nikolov via Users
I can't recall - it was discussed here in the list and some users had troubles. I hope some of the devs can chime in. Best Regards, Strahil Nikolov On Sunday, September 26, 2021, 07:04:29 GMT+3, Tommy Sway wrote: In fact, I am very interested in the part you mentioned,

[ovirt-users] Re: About the vm memory limit

2021-09-27 Thread Strahil Nikolov via Users
Transparent huge pages are enabled by default, so you need to stop them. I would use huge pages on both host and VM, but theoretically it shouldn't be a problem to run a VM with HugePages enabled without configuring them on the host. Best Regards, Strahil Nikolov On Saturday, September 25, 2021,

[ovirt-users] Re: why cannot set the power management proxy server ?

2021-09-27 Thread Strahil Nikolov via Users
According to https://portal.nutanix.com/page/documents/kbs/details?targetId=kA060008T3jCAE: 'The lanplus parameter is required to enable the IPMI 2.0 RMCP+ protocol'. According to https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface: 'RMCP+ (a UDP-based protocol with

[ovirt-users] Re: Reinstall without dataloss

2021-09-27 Thread Strahil Nikolov via Users
Just take a CentOS DVD, select troubleshooting, and once you drop to a shell -> you can mount /proc, /sys, /dev & /run with the bind option and chroot. Then just follow the procedure for your boot type (EFI vs Legacy) and recover the missing files. Best Regards, Strahil Nikolov

[ovirt-users] Re: why cannot set the power management proxy server ?

2021-09-25 Thread Strahil Nikolov via Users
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-ipmi-ca True or 1. If blank, then value is False. It is recommended that you enable Lanplus to improve the security of your connection if your hardware supports it. Most

[ovirt-users] Re: VM hanging at sustained high throughput

2021-09-23 Thread Strahil Nikolov via Users
What happens if you define a tmpfs and then create the qemu disk on top of that ramdisk? Does qemu hang again? Best Regards, Strahil Nikolov On Thu, Sep 23, 2021 at 18:25, Shantur Rathore wrote: ___ Users mailing list -- users@ovirt.org To

[ovirt-users] Re: about the power management of the hosts

2021-09-23 Thread Strahil Nikolov via Users
When systems go 'crazy' there is no guarantee that they will be completely unresponsive. HA VMs should be fine, but regular VMs won't be restarted as the engine won't know if the host is dead or not (and no fencing is configured to guarantee that). Also, storage tasks could fail if that host is

[ovirt-users] Re: about the power management of the hosts

2021-09-22 Thread Strahil Nikolov via Users
It is possible, but without the SPM host being fenced you won't be able to do any storage-related tasks. Even snapshot management will be impossible without manual intervention (reboot the host from the remote management and then mark the host as restarted). Best Regards, Strahil Nikolov On

[ovirt-users] Re: about the power management of the hosts

2021-09-18 Thread Strahil Nikolov via Users
Values are based on the method you want to use to fence the node. What type of server do you have? Best Regards, Strahil Nikolov On Sat, Sep 18, 2021 at 15:31, Tommy Sway wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to

[ovirt-users] Re: Hosted Engine cluster version compatib.

2021-09-18 Thread Strahil Nikolov via Users
Have you tried the windows fix (a.k.a. Engine restart)? Best Regards, Strahil Nikolov On Fri, Sep 17, 2021 at 16:43, Andrea Chierici wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy

[ovirt-users] Re: about the power management of the hosts

2021-09-18 Thread Strahil Nikolov via Users
I think that the following documentation explains it far better than a single person can do: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/technical_reference/power_management Best Regards,Strahil Nikolov On Sat, Sep 18, 2021 at 9:00, Tommy Sway wrote:

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
It should be   system_u:object_r:glusterd_brick_t:s0 Best Regards,Strahil Nikolov I'm having this same issue on 4.4.8 with a fresh 3-node install as well. Same errors as the OP.  Potentially relevant test command: [root@ovirt-node0 ~]# semanage fcontext -a -t glusterd_brick_t

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-16 Thread Strahil Nikolov via Users
I think that in the UI there is an option to edit. Find and replace glusterd_brick_t with system_u:object_r:glusterd_brick_t:s0 and run again. Best Regards, Strahil Nikolov On Thu, Sep 16, 2021 at 12:33, bgrif...@affinityplus.org wrote: I had the same issue with a new 3-node deploy on

[ovirt-users] Re: about the power management of the hosts

2021-09-16 Thread Strahil Nikolov via Users
If your host is stale and is holding the SPM role -> some tasks will fail. Also, the VMs on such a host won't be recovered unless VM HA is enabled (with a storage lease). For prod, I would set that up. Keep in mind that fencing is issued by another host in the cluster, so you need a minimum of 2 hosts

[ovirt-users] Re: Deployment failed, ovirt-engine-appliance haved beend installed by rpm on host

2021-09-15 Thread Strahil Nikolov via Users
You need to specify the following variable: he_offline_deployment: true Best Regards, Strahil Nikolov [ INFO ] TASK [ovirt.ovirt.engine_setup : Gather facts on installed packages] [ INFO ] ok: [localhost -> 192.168.1.248] [ INFO ] TASK [ovirt.ovirt.engine_setup : Fail when firewall manager is

[ovirt-users] Re: Poor gluster performances over 10Gbps network

2021-09-10 Thread Strahil Nikolov via Users
in user benchmarks were impressive, and the underlying qemu/libvirt bugs are now fixed or were close to be. Guillaume Pavese Ingénieur Système et RéseauInteractiv-Group On Thu, Sep 9, 2021 at 7:00 PM Strahil Nikolov via Users wrote: Dis you enable libgfapi ?engine-config -s LibgfApiSupported=true

[ovirt-users] Re: Cannot Start VM After Pausing due to Storage I/O Error

2021-09-10 Thread Strahil Nikolov via Users
Can you provide the output from all nodes:
gluster pool list
gluster peer status
gluster volume status
Best Regards, Strahil Nikolov On Fri, Sep 10, 2021 at 0:50, marcel d'heureuse wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe

[ovirt-users] Re: Poor gluster performances over 10Gbps network

2021-09-09 Thread Strahil Nikolov via Users
Did you enable libgfapi? engine-config -s LibgfApiSupported=true Note: power off and then power on the VM. The qemu process should not use the '/rhev' mountpoints. Also, share your current setup: - disks - hw controller - did you storage-align your block devices (hw raid only) - tuned-profile -

[ovirt-users] Re: Poor gluster performances over 10Gbps network

2021-09-09 Thread Strahil Nikolov via Users
Did you check  https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7I3PQVERQZT6Q6CXDWJEWCY2ELEGRHY/ ? Best Regards,Strahil Nikolov On Wed, Sep 8, 2021 at 14:25, Staniforth, Paul wrote: ___ Users mailing list -- users@ovirt.org To

[ovirt-users] Re: fresh hyperconverged Gluster setup failed in ovirt 4.4.8

2021-09-09 Thread Strahil Nikolov via Users
When setting up over the UI, the last step shows the ansible tasks. Can you find your version of 'Set Gluster specific SeLinux context on the bricks' and print it here? Best Regards, Strahil Nikolov On Wed, Sep 8, 2021 at 12:43, dhanaraj.ramesh--- via Users wrote: Hi Team, I'm trying to

[ovirt-users] Re: Time Drift Issues

2021-09-07 Thread Strahil Nikolov via Users
I had similar issues where the battery of the motherboard was running out of 'juice' and the hardware clock was just going crazy. NTP couldn't fix it, so a replacement of the battery was the only option. Best Regards, Strahil Nikolov On Tue, Sep 7, 2021 at 7:26, dhanaraj.ramesh--- via

[ovirt-users] Re: Hostedengine Database broken

2021-09-06 Thread Strahil Nikolov via Users
As long as the old engine is completely offline, I think that you can import them without issues. Best Regards, Strahil Nikolov On Mon, Sep 6, 2021 at 17:12, mar...@deheureu.se wrote: so oVirt is set up. The GlusterFS where the VMs are located is also in, but I now have an illegal disk from a

[ovirt-users] Re: how to remove a failed backup operation

2021-09-03 Thread Strahil Nikolov via Users
This looks like a bug. It should have 'recovered' from the failure. I'm not sure which logs would help identify the root cause. Best Regards,Strahil Nikolov On Fri, Sep 3, 2021 at 16:45, Gianluca Cecchi wrote: Hello,I was trying incremental backup with the provided

[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-09-03 Thread Strahil Nikolov via Users
That's really odd. Maybe you can try to clone it and then experiment on the clone itself. Once the reason is found out, you can try with the original. My first look is to check all logs on the engine and the SPM for clues. Best Regards,Strahil Nikolov On Fri, Sep 3, 2021 at 11:42, David

[ovirt-users] Re: NFS Synology NAS (DSM 7)

2021-08-31 Thread Strahil Nikolov via Users
I guess you need to try: all_squash + anonuid=36 + anongid=36 Best Regards, Strahil Nikolov On Fri, Aug 27, 2021 at 23:44, Alex K wrote: ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy
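The suggested options map vdsm:kvm (UID/GID 36, which oVirt expects on NFS storage) onto every client write. A hedged sketch of an /etc/exports-style entry, where the export path and client range are placeholders (DSM 7 configures this via its own UI rather than a plain exports file):

```
/volume1/ovirt 192.168.1.0/24(rw,all_squash,anonuid=36,anongid=36)
```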

[ovirt-users] Re: [External] : Re: about the hugepage setting of the KVM server

2021-08-31 Thread Strahil Nikolov via Users
Don't mix THP with HP. THP is a mechanism of the kernel to "create" hugepages, but it's inefficient. Disable THP on the VM. Also, if you have large VMs on the host -> consider disabling it there too. Best Regards, Strahil Nikolov Sent from Yahoo Mail on Android On Thu, Aug 26, 2021 at 16:26,
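A sketch of disabling transparent hugepages as suggested above (inside the VM, and optionally on hosts running large VMs); the persistence step is a common approach, not something stated in the thread:

```shell
# Disable THP for the current boot:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# To persist across reboots, add "transparent_hugepage=never" to the
# kernel command line, or disable it via a tuned profile.
```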

[ovirt-users] Re: NFS storage was locked for 45 minutes after I attempted a clone operation

2021-08-31 Thread Strahil Nikolov via Users
Hi David, how big are your VM disks ? I suppose you have several very large ones. Best Regards,Strahil Nikolov Sent from Yahoo Mail on Android On Thu, Aug 26, 2021 at 3:27, David White via Users wrote: I have an HCI cluster running on Gluster storage. I exposed an NFS share into oVirt

[ovirt-users] Re: Add nodes to single node gluster hyperconverged

2021-08-29 Thread Strahil Nikolov via Users
I know that the GUI should work in most setups, as oVirt is the upstream of the Red Hat Gluster Storage Console. Despite RH being an open-source company, it's also trying to sell support contracts and thus it's not an open-documentation company - they need to make money after all. I know that

[ovirt-users] Re: glusterfs not starting

2021-08-24 Thread Strahil Nikolov via Users
I can't call it "resolved", but it's up to you. I would look at the gluster logs for clues. Best Regards, Strahil Nikolov Sent from Yahoo Mail on Android Its resolved but had to rebuild the cluster and lost some data. ___ Users mailing list --

[ovirt-users] Re: Moving datacenters & IP addresses with minimal downtime

2021-08-23 Thread Strahil Nikolov via Users
You can 'clone' by using Gluster/oVirt's DR functionality (keep in mind that the receiving volume is in read-only mode, so when the cut-over comes you have to change it). Best Regards, Strahil Nikolov Sent from Yahoo Mail on Android On Sat, Aug 21, 2021 at 12:36, David White via Users
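A sketch of the cut-over step mentioned above: a geo-replicated target volume stays read-only until you flip the flag (VOLNAME is a placeholder):

```shell
# On the receiving cluster, make the DR target volume writable:
gluster volume set VOLNAME features.read-only off
```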

[ovirt-users] Re: glusterfs not starting

2021-08-22 Thread Strahil Nikolov via Users
Hi, first check that the brick is mounted. Then you can force-start the volume, which will force the brick to be started. Best Regards, Strahil Nikolov Sent from Yahoo Mail on Android On Fri, Aug 20, 2021 at 22:13, eev...@digitaldatatechs.com wrote: I have an ovirt 4.3 on 3 Centos 7 with
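A sketch of the recovery sequence described above; VOLNAME and the brick mount path are placeholders for your own setup:

```shell
mount | grep /gluster_bricks        # 1. confirm the brick filesystem is mounted
gluster volume start VOLNAME force  # 2. force-start to bring the brick process up
gluster volume status VOLNAME       # 3. verify the brick now shows as online
```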

[ovirt-users] Re: oVirt Hosted Engine Offline Deployment

2021-08-15 Thread Strahil Nikolov via Users
The offline install mode should work in both 4.3 and 4.4. About CentOS Stream ... it is as it is. You can use any EL8 distro (for example RHEL8 with the new developer subscription; the company can have up to 16 physical prod systems). 4.3 is not supported, which means that it has no bug fixes,

[ovirt-users] Re: oVirt Hosted Engine Offline Deployment

2021-08-15 Thread Strahil Nikolov via Users
I would try latest 4.4 (I think it was 4.4.7). Best Regards,Strahil Nikolov On Sun, Aug 15, 2021 at 16:46, Andrew Lamarra wrote: Thank you, both, for the help! I see that there's a version 4.3.10. Will that version not work? Andrew ___ Users

[ovirt-users] Re: oVirt Hosted Engine Offline Deployment

2021-08-14 Thread Strahil Nikolov via Users
I think that first you need to install the appliance and then in offline mode it should skip connecting to yum repos. Best Regards, Strahil Nikolov On Fri, Aug 13, 2021 at 19:56, Andrew Lamarra wrote: Hi there. I'm trying to get oVirt up & running on a server in a network that has no

[ovirt-users] Re: Resize iSCSI LUN and Storage Domain

2021-08-09 Thread Strahil Nikolov via Users
If I wish to resize a non-oVirt iSCSI LUN I would: - resize the block paths pointing to the iSCSI LUN - resize the multipath device that was aggregated I guess you can give it a try. Best Regards, Strahil Nikolov On Monday, August 9, 2021, 19:32:11 GMT+3, Shantur Rathore
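A hedged sketch of those two steps; "sdX" and "mpatha" are placeholders (the real device and map names come from `multipath -ll`), and the exact commands may vary by distro:

```shell
# 1. Rescan each block path to the LUN so the kernel sees the new size:
echo 1 > /sys/block/sdX/device/rescan
# (or rescan all paths in the session: iscsiadm -m session --rescan)

# 2. Grow the aggregated multipath device on top of the paths:
multipathd resize map mpatha
```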

[ovirt-users] Re: Add nodes to single node gluster hyperconverged

2021-08-09 Thread Strahil Nikolov via Users
I have never managed gluster via the UI, but theoretically it should work from there. Maybe there is a bug. You can prepare the new bricks (don't forget to "mkfs.xfs -i size=512 /dev/vg/lv") manually, but it's more error-prone. Let's see if someone else can confirm that this (UI) behavior is
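A sketch of the manual brick preparation mentioned above; only the mkfs invocation is from the reply, and the VG/LV, mount point, and volume names are placeholders:

```shell
# 512-byte inodes, as recommended for gluster bricks (from the reply):
mkfs.xfs -i size=512 /dev/vg/lv
mkdir -p /gluster_bricks/newbrick
mount /dev/vg/lv /gluster_bricks/newbrick
# Add a matching /etc/fstab entry so the brick survives reboots, then
# attach it (for a replicated volume, also pass the new replica count):
gluster volume add-brick VOLNAME host:/gluster_bricks/newbrick/brick
```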
