[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Volenbovskyi, Konstantin via Users
Hi,
We switched from Gluster to NFS provided by a SAN array: maybe it was a matter of a 
combination of factors (configuration/version/whatever),
but Gluster was unstable for us.
SPICE/QXL in RHEL 9: yeah, I understand that it is important for some people (I 
saw that someone is maintaining forks or the like).

I think that oVirt 4.5 (nightly builds) might be OK for some time, but I 
think that the alternatives are:
-OpenStack for larger setups (but be careful with the distribution - as I remember, 
Red Hat is abandoning TripleO and introducing OpenShift-based tooling for installing 
OpenStack)
-Proxmox and CloudStack for all sizes
-Maybe XCP-ng + (paid?) Xen Orchestra, but I trust KVM/QEMU more than Xen
-OpenShift Virtualization/OKD Virtualization - I don't know...
It would actually be good if someone could comment specifically on going from oVirt 
to OpenShift Virtualization/OKD Virtualization.

Not sure if the statement below (https://news.ycombinator.com/item?id=32832999) 
is still correct, and what exactly the consequences of 'OpenShift 
Virtualization is just there to give a path/time to migrate to containers' would be:
"The whole purpose behind OpenShift Virtualization is to aid in organization 
modernization as a way to consolidate workloads onto a single platform while 
giving app dev time to migrate their work to containers and microservice based 
deployments."



BR,
Konstantin

On 13.07.23 at 09:10, "Alex McWhirter" <a...@triadic.us> wrote:


We still have a few oVirt and RHV installs kicking around, but between
this and some core features we use being removed from el8/9 (gluster,
spice/qxl, and probably others soon at this rate) we've been heavily
shifting gears away from both Red Hat and oVirt. Not to mention the
recent drama...


In the past we toyed around with the idea of helping maintain oVirt, but
with the list of things we'd need to support growing beyond oVirt and
into other bits as well, we aren't equipped to fight on multiple fronts
so to speak.


For the moment we've found a home with SUSE / Apache CloudStack, and
when el7 EOLs that's likely going to be our entire stack moving
forward.


On 2023-07-13 02:21, eshwa...@gmail.com wrote:
> I am beginning to have very similar thoughts. It's working fine for
> me now, but at some point something big is going to break. I already
> have VMWare running, and in fact, my two ESXi nodes have the exact
> same hardware as my two KVM nodes. Would be simple to do, but I
> really don't want to go just yet. At the same time, I don't want to
> be the last person turning off the lights. Difficult times.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJFIRAT6TNCS5TZUFPGBV5UZSCBW6LE4/


[ovirt-users] Re: ovirt 4.5.4 deploy self-hosted engine

2023-07-05 Thread Volenbovskyi, Konstantin via Users
Hi,
I gave some pointers in 
https://www.mail-archive.com/users@ovirt.org/msg72371.html , but the person who 
asked the initial question hasn't provided any updates.

Several thoughts around that:

  1.  You had several attempts (the log file you attached contains attempts from 
the 4th of July, and the events below are from the 5th of July), and I think that in 
the attempt below you didn't specify the number of vCPUs, so it took the value 'max':
| he_vcpus | max | The amount of CPUs used on the engine VM |
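To avoid relying on the 'max' default, the vCPU count can be pinned up front in an answer file. A minimal sketch, assuming the OVEHOSTED_VM/* key names seen in answer files that hosted-engine-setup writes under /var/lib/ovirt-hosted-engine-setup/answers/ - verify the exact keys on your host before using this:

```shell
# Write a small answer file pinning the engine VM's vCPU count
# (key names are assumptions; check an answer file from a previous run).
answers=$(mktemp)
cat > "$answers" <<'EOF'
[environment:default]
OVEHOSTED_VM/vmVCPUs=int:8
EOF
# Then feed it to the deploy:
#   hosted-engine --deploy --config-append="$answers"
```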

  2.  Based on what you see in https://gitlab.com/libvirt/libvirt/-/issues/324 
I am pretty sure that you can disable cgroups v2; then you will know for sure 
that you are dealing with this fault, and this might be an acceptable workaround.
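A quick sketch of how to check which cgroup hierarchy the host is running, plus the usual EL9 kernel argument for falling back to v1 - this is a generic systemd mechanism, not an oVirt-documented procedure:

```shell
# cgroup2fs means the unified hierarchy (cgroups v2); tmpfs means legacy v1.
fstype=$(stat -fc %T /sys/fs/cgroup)
echo "cgroup filesystem: $fstype"
# To fall back to cgroups v1 (reboot required):
#   grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```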


  3.  The libvirt versions specified in the commit addressing that start with 
v9.0.0. Hmm, so you use CentOS Stream 9, and in 
https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/Packages/ it is 
libvirt 9.
But maybe libvirt here comes from the oVirt repo?...
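To settle which repository the installed libvirt actually came from (and whether it is older than the fixed v9.0.0), something like this on the affected host - package name libvirt-daemon is an assumption, adjust to what rpm -qa | grep libvirt shows:

```shell
# Show the installed libvirt version...
rpm -q libvirt-daemon
# ...and which repo dnf installed it from ("From repo" field).
dnf info --installed libvirt-daemon 2>/dev/null | grep -i 'from repo' || true
```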


BR,
Konstantin


From: Jorge Visentini 
Date: Wednesday, 5 July 2023, 15:59
To: users 
Subject: [ovirt-users] ovirt 4.5.4 deploy self-hosted engine

Hi.

I'm trying to deploy the engine, but I'm getting some errors that I couldn't 
identify.
I don't know if it's an incompatibility with my hardware or some libvirt bug.

Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48505)
Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Domain id=1 
name='HostedEngineLocal' uuid=922a156c-7f4c-4815-a645-54ed07794451 is tainted: 
custom-ga-command
Jul 05 10:06:21 ksmmi1r02ovirt36.kosmo.cloud virtlogd[630980]: Client hit max 
requests limit 1. This may result in keep-alive timeouts. Consider tuning the 
max_client_requests server parameter
Jul 05 10:06:22 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid value 
'-1' for 'cpu.max': Invalid argument
Jul 05 10:06:26 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48500)
Jul 05 10:06:31 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48495)
Jul 05 10:06:31 ksmmi1r02ovirt36.kosmo.cloud systemd[1]: 
systemd-timedated.service: Deactivated successfully.
Jul 05 10:06:36 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48490)
Jul 05 10:06:37 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid value 
'-1' for 'cpu.max': Invalid argument
Jul 05 10:06:41 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48485)
Jul 05 10:06:46 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48480)
Jul 05 10:06:51 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48475)
Jul 05 10:06:52 ksmmi1r02ovirt36.kosmo.cloud libvirtd[701878]: Invalid value 
'-1' for 'cpu.max': Invalid argument
Jul 05 10:06:56 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48470)
Jul 05 10:07:01 ksmmi1r02ovirt36.kosmo.cloud ansible-async_wrapper.py[690916]: 
690917 still running (48465)

My config:
CPU: 2 x Intel(R) Xeon(R) Platinum 8276M CPU @ 2.20GHz
Memory: 4TB
Disk: 120GB RAID 1
ISO: ovirt-node-ng-installer-4.5.4-2022120615.el9.iso

Packages:
kernel-5.14.0-202.el9.x86_64
libvirt-8.9.0-2.el9.x86_64
centos-release-ovirt45-9.1-3.el9s.noarch
python3-ovirt-engine-sdk4-4.6.0-1.el9.x86_64
ovirt-imageio-common-2.4.7-1.el9.x86_64
ovirt-imageio-client-2.4.7-1.el9.x86_64
ovirt-openvswitch-ovn-2.15-4.el9.noarch
ovirt-openvswitch-ovn-common-2.15-4.el9.noarch
ovirt-imageio-daemon-2.4.7-1.el9.x86_64
ovirt-openvswitch-ovn-host-2.15-4.el9.noarch
python3-ovirt-setup-lib-1.3.3-1.el9.noarch
ovirt-vmconsole-1.0.9-1.el9.noarch
ovirt-vmconsole-host-1.0.9-1.el9.noarch
ovirt-openvswitch-2.15-4.el9.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el9.noarch
ovirt-node-ng-nodectl-4.4.2-1.el9.noarch
ovirt-ansible-collection-3.0.0-1.el9.noarch
ovirt-python-openvswitch-2.15-4.el9.noarch
ovirt-openvswitch-ipsec-2.15-4.el9.noarch
ovirt-hosted-engine-ha-2.5.0-1.el9.noarch
ovirt-provider-ovn-driver-1.2.36-1.el9.noarch
ovirt-host-dependencies-4.5.0-3.el9.x86_64
ovirt-hosted-engine-setup-2.7.0-1.el9.noarch
ovirt-host-4.5.0-3.el9.x86_64
ovirt-release-host-node-4.5.4-1.el9.x86_64
ovirt-node-ng-image-update-placeholder-4.5.4-1.el9.noarch
ovirt-engine-appliance-4.5-20221206125848.1.el9.x86_64

For a better understanding, the deploy log is attached.
I'd appreciate any tips that might help.

Thank you!
--
Att,
Jorge Visentini
+55 55 98432-9868