Hello everyone,
we have a self-hosted engine environment (oVirt 4.4.5) that uses a replica 2 +
arbiter GlusterFS. These servers are both GlusterFS nodes and oVirt hosts of the
hosted engine.
For an upgrade we followed this guide
Hello everyone!
We have less than 500 hosts and ~800 VMs handled by our engine, and we could
benefit from deploying another engine to split the workload (e.g. we perform
daily and weekly backups via a Python script, and scheduled snapshots
(delete+add)). Is there any way to migrate/move a host
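On the scheduled snapshot (delete+add) mentioned above, here is a minimal sketch
with the Python SDK (ovirtsdk4). The engine URL, credentials, VM name and the
'scheduled-snap' description are illustrative placeholders, not the original
script:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL/credentials)
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    snaps_service = vms_service.vm_service(vm.id).snapshots_service()

    # Delete the previous scheduled snapshot(s), skipping the active layer
    for snap in snaps_service.list():
        if (snap.description == 'scheduled-snap'
                and snap.snapshot_type != types.SnapshotType.ACTIVE):
            snaps_service.snapshot_service(snap.id).remove()

    # Add the new snapshot (without memory state)
    snaps_service.add(types.Snapshot(description='scheduled-snap',
                                     persist_memorystate=False))
finally:
    connection.close()

In a real script you would wait for the removal job to finish before adding the
new snapshot, since both operations are asynchronous.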
Hi all,
as I read in the documentation
https://www.ovirt.org/documentation/administration_guide/index.html:
"Support for OpenStack Glance is now deprecated. This functionality will be
removed in a later release."
Do you know any alternative to Glance for a "single point" image archive for
Hi all,
I'm running a GlusterFS setup v8.6 with two nodes and one arbiter. Both nodes
and the arbiter are CentOS 8 Stream with oVirt 4.4. Under Gluster I have an LVM
thin partition.
VMs running in this cluster have really poor write performance, while a test
performed directly on the disk scores
Hi all,
I'm using websockify + noVNC to expose the VM console via browser, getting the
graphicsconsoles ticket via the API. Everything works fine for every other host
that I have (more than 200; the console works both via the oVirt engine and via
the browser), but just for a single host (CentOS Stream
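For reference, a minimal sketch of fetching the graphicsconsoles ticket with the
Python SDK (ovirtsdk4); the connection details and VM name are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    consoles_service = vms_service.vm_service(vm.id).graphics_consoles_service()

    # current=True asks for the runtime view, so address/port are populated
    console = next(c for c in consoles_service.list(current=True)
                   if c.protocol == types.GraphicsType.VNC)

    # One-shot ticket (password) to hand over to websockify/noVNC
    ticket = consoles_service.console_service(console.id).ticket()
    print(console.address, console.port, ticket.value, ticket.expiry)
finally:
    connection.close()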
Hi all,
I'm trying to remove a snapshot from an HA VM in a setup with GlusterFS (2 nodes
CentOS 8 Stream oVirt 4.4 + 1 arbiter CentOS 8). The error that appears in the
vdsm log of the host is:
2022-01-10 09:33:03,003+0100 ERROR (jsonrpc/4) [api] FINISH merge error=Merge
failed: {'top':
Hi all,
I'm trying to delete, via the vdsm-client tool, an illegal volume that is not
listed in the engine database. The volume ID is 5cb3fe58-3e01-4d32-bc7c-5907a4f858a8:
[root@ovirthost ~]# vdsm-tool dump-volume-chains
e25db7d0-060a-4046-94b5-235f38097cd8
Images volume chains (base volume first)
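Before deleting anything, a small cross-check with the Python SDK (ovirtsdk4)
can confirm the volume ID really is absent from the engine, looking at both
disks and disk snapshots. The connection details are placeholders; the storage
domain ID is the one passed to dump-volume-chains above:

import ovirtsdk4 as sdk

TARGET = '5cb3fe58-3e01-4d32-bc7c-5907a4f858a8'
SD_ID = 'e25db7d0-060a-4046-94b5-235f38097cd8'

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    system = connection.system_service()

    # Disk IDs (image group IDs) known to the engine
    disk_ids = {d.id for d in system.disks_service().list()}

    # Volume IDs exposed as disk snapshots on this storage domain
    sd_service = system.storage_domains_service().storage_domain_service(SD_ID)
    snap_ids = {s.id for s in sd_service.disk_snapshots_service().list()}

    if TARGET in disk_ids or TARGET in snap_ids:
        print('volume is still referenced by the engine, do not delete')
    else:
        print('volume not found in the engine database')
finally:
    connection.close()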
Hi,
I have an issue with a VM (Windows Server 2016) running on CentOS 8, oVirt host
4.4.8, oVirt engine 4.4.5. I used to perform regular snapshots (deleting the
previous one) on this VM, but starting from 25/10 the task fails with the errors
that I'll attach at the bottom. The volume ID mentioned
Hi all,
resuming the "dead" thread "Importing VM from Xen Server 6.5"
(https://lists.ovirt.org/pipermail/users/2016-August/075213.html) I'm trying to
import via GUI a VM from Xen Server 7.1 in a host Centos 8.4, oVirt 4.4.
Created the SSH key for vdsm user, added the IP in the host target
On Fri, Feb 5, 2021 at 11:50, francesco--- via Users wrote:
Hi all,
I'm experiencing random reboots on several oVirt nodes (CentOS 7/8,
oVirt 4.3/4.4 as well). Some
Hi all,
I'm experiencing random reboots on several oVirt nodes (CentOS 7/8, oVirt
4.3/4.4 as well). Sometimes it happens three times in a day, and the more hosts
I add to my pool, the more I notice it.
The logs are not helpful: it's like a brutal poweroff, because there are no
entries at all
)
at Unknown.eval(webadmin-0.js)
On 23/11/2020 09:50, Francesco via Users wrote:
A tiny little "up", because it's driving me crazy
Francesco
On 19/11/2020 10:57, francesco--- via Users wrote:
Hi all,
I'm using the oVirt Python SDK for retrieving info about storage domains, in several
A tiny little "up", because it's driving me crazy
Francesco
On 19/11/2020 10:57, francesco--- via Users wrote:
Hi all,
I'm using the oVirt Python SDK for retrieving info about storage domains on several
hosts (centos7/ovirt4.3 and centos8/ovirt4.4), but the script exits with the
Hi all,
I'm using the oVirt Python SDK for retrieving info about storage domains on
several hosts (centos7/ovirt4.3 and centos8/ovirt4.4), but the script exits with
the following error on some of them:
Traceback (most recent call last):
  File "get_uuid.py", line 70, in
    storage_domain =
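For comparison, a minimal sketch of reading storage domain info with the Python
SDK (ovirtsdk4) that guards against unset attributes; the connection details are
placeholders. On an inactive or unattached domain, fields such as available/used
can be None, which is one plausible way a script like the one above crashes:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    sds_service = connection.system_service().storage_domains_service()
    for sd in sds_service.list():
        # available/used are byte counts and may be None for inactive domains
        avail = sd.available if sd.available is not None else 0
        used = sd.used if sd.used is not None else 0
        print(sd.id, sd.name, avail, used)
finally:
    connection.close()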
Ok, solved.
Simply, the server node2 could not mount via NFS the data domain of
node1. I added node1 to the node2 firewall and to /etc/exports, tested,
and everything went fine.
Regards,
Francesco
On 21/09/2020 17:44, francesco--- via Users wrote:
Hi Everyone,
In a test environment
Hi Everyone,
In a test environment I'm trying to deploy a single-node self-hosted engine 4.4
on CentOS 8 from a 4.3 backup. The actual setup is:
- node1 with CentOS 7, oVirt 4.3 with a working SH engine. The data domain is a
local NFS;
- node2 with CentOS 8, where we are trying to deploy the
Hi All,
I'm facing a really slow export of VMs hosted on a single-node cluster, on
local storage. The VM disk is 600 GB and the effective usage is around 300 GB.
I estimated that the following process would take about 15 hours to complete:
vdsm 25338 25332 99 04:14 pts/0 07:40:09