I think you may be right here. I decided to just start over and use the
actual oVirt Node installation media rather than the CentOS Stream installation
media. Hopefully that gets the software side squared away. Thanks for the pointers.
From: Strahil Nikolov
Sent:
>Correct. Was added in an upgrade script. There might be a bug/issue/change
>in the order that the upgrade script runs vs create_views_* above.
>Adding Aviv and Shirly.
I found this problem in the log:
UPDATE 0
2022-01-21 16:02:41,589+0900 Running upgrade sql script
Hello, thanks for your answer.
>This indeed looks like the root cause for the failure. Can you please
>share the full setup log? Thanks.
I will send the log file by mail.
>I'd like to know as well. Generally speaking, if the engine is down,
>all VMs should still be up, but there is no
and after a few minutes, the status of the engine looks like:
{"vm": "down_unexpected", "health": "bad", "detail": "Down", "reason": "bad vm
status"}
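That status line is plain JSON, so it can be picked apart programmatically. A minimal sketch (field names taken verbatim from the status above; the interpretation in the comment is my assumption):

```python
import json

# The engine-status string as reported in this thread
engine_status = '{"vm": "down_unexpected", "health": "bad", "detail": "Down", "reason": "bad vm status"}'

es = json.loads(engine_status)
# "vm": "down_unexpected" together with "health": "bad" suggests the HA agent
# could not confirm the engine VM is running (interpretation is an assumption).
print(es["vm"], es["health"], es["reason"])  # down_unexpected bad bad vm status
```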
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy
Hello,
when I check --vm-status, the engine is up but the health status is bad
yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s \
  libvirt-daemon-driver-qemu-6.0.0-33.el8s \
  qemu-kvm-common-6.0.0-33.el8s \
  qemu-kvm-hw-usbredir-6.0.0-33.el8s \
  qemu-kvm-ui-opengl-6.0.0-33.el8s \
  qemu-kvm-block-rbd-6.0.0-33.el8s \
  qemu-img-6.0.0-33.el8s \
  qemu-kvm-6.0.0-33.el8s
cluster.server-quorum-ratio is set to 51%, which in your case means that you
can afford only 1 server down (for both a 3-node and a 4-node TSP).
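The arithmetic behind that claim can be sketched as follows, assuming quorum holds while the share of reachable servers is at least the configured ratio (my reading of the ratio semantics):

```python
import math

def tolerated_failures(total_servers: int, quorum_ratio_pct: float = 51.0) -> int:
    """How many servers can go down before server quorum is lost,
    assuming quorum requires reachable/total * 100 >= ratio."""
    needed = math.ceil(total_servers * quorum_ratio_pct / 100.0)
    return total_servers - needed

# With a 51% ratio, both 3-node and 4-node TSPs
# tolerate only a single server going down.
print(tolerated_failures(3))  # 1
print(tolerated_failures(4))  # 1
```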
Best Regards,
Strahil Nikolov
On Sun, Jan 23, 2022 at 22:46, Strahil Nikolov via Users wrote:
I've seen this.
Ensure that all qemu-related packages are coming from
centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
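One way to check where each installed package came from (a sketch; on CentOS 8 Stream, `dnf list installed` prints the source repo of each package, prefixed with `@`, in its third column):

```shell
# Show installed qemu/libvirt packages and the repo each was installed from;
# anything tagged @appstream instead of the advanced-virtualization repo
# is a candidate for the downgrade mentioned elsewhere in the thread.
dnf list installed 'qemu*' 'libvirt*' | awk 'NR > 1 { print $1, $3 }'
```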
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c
Thanks for the response. How can I verify this? Has something with the
installation procedures changed recently?
From: Strahil Nikolov
Sent: Sunday, January 23, 2022 3:41 PM
To: users ; Robert Tongue
Subject: Re: [ovirt-users] Failed HostedEngine Deployment
Ahh, I ran some repoquery commands and can see that a good bit of the qemu*
packages are coming from appstream rather than
ovirt-4.4-centos-stream-advanced-virtualization.
What's the recommended fix?
From: Strahil Nikolov
Sent: Sunday, January 23, 2022 3:41 PM
To: users ;
The oVirt setup on Gluster has server quorum enabled. If fewer than 50% + 1 of
the servers are up -> all bricks will shut down.
If the compute-only node is part of the TSP (Gluster's cluster), it will also
be counted for quorum, and in most cases you don't want that.
If it's so - just
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately
after a weekend spent trying to get this far, I am finally stuck, and cannot
figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is
GlusterFS,
Hi Thomas,
On Fri, Jan 21, 2022 at 11:16 PM Thomas Hoberg wrote:
>
> In the recent days, I've been trying to validate the transition from CentOS 8
> to Alma, Rocky, Oracle and perhaps soon Liberty Linux for existing HCI
> clusters.
>
> I am using nested virtualization on a VMware workstation
On Fri, Jan 21, 2022 at 12:02 PM wrote:
>
> Hello,
>
> I tried to upgrade oVirt to 4.4.9 from 4.4.3
>
> when I run 'engine-setup' I get an error
>
On Sat, Jan 22, 2022 at 11:41 PM ravi k wrote:
> Hello team,
>
Hi,
> Thank you for all the wonderful work you've been doing. I'm starting out
> new with oVirt and OVN. So please excuse me if the questions are too naive.
> We intend to do a POC to check if we can migrate VMs off our current
>
On Fri, Jan 21, 2022 at 6:28 PM Gianluca Cecchi wrote:
>
> Hello,
> after updating the external engine from CentOS 8.4 and 4.4.8 to Rocky Linux
> 8.5 and 4.4.9 as outlined here:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YUDJRC22SQPAPAIURQIVSEMGITDRQOOM/
> I went further and