On Mon, 21 Nov 2022 at 20:05, Alex McWhirter wrote:
> I have some manpower I'm willing to throw at oVirt, but I somewhat need to
> know if what the community wants and what we want are in line.
>
> 1. We'd bring back SPICE and maybe QXL. We are already maintaining forks
> of the ov
Hi Users,
I removed the VM and checked the box including disks.
oVirt no longer knows the VM and its disks.
Can I manually remove the UUID-named files from the remove_me folder and also
from the shard folder?
Br
Marcel
Sorry, I used the wrong sender address.
Br
Marcel
On 21 November 2022 at 23:28:02 CET
Sorry, I can't answer this. I've only ever done oVirt/RHV on iSCSI.
On Tue, 22 Nov 2022 at 12:59, Marcel d'Heureuse
wrote:
> Hi,
>
> I removed the VM and checked the box including disks.
>
> oVirt no longer knows the VM and its disks.
>
> Can I manually remove the UUID-named files from the remove_me fol
Any ideas, please? @Strahil @Sandro @Yedidyah
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/communit
Hi Users,
I have got a Problem:
Vmstore is 97% full (3 TB).
I want to clean up, and I have deleted 2 VMs with around 500 GB of disk space
together on a single node.
In gluster .shard/.remove_me there are 6 files, but after 6 h I got no new free
disk space. What am I doing wrong? Can I force the clean
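Assuming the volume is named vmstore with a typical brick path (both are placeholders here, not taken from the post), one way to check the shard translator's background deletion and speed it up is:

```shell
# Assumptions: volume name "vmstore" and brick path below are examples;
# adjust both to your deployment.
VOL=vmstore
BRICK=/gluster_bricks/vmstore/vmstore

# Files queued for background deletion by the shard translator
ls -l "$BRICK/.shard/.remove_me"

# How many shards get unlinked per batch (default 100)
gluster volume get "$VOL" features.shard-deletion-rate

# Optionally raise the rate so space from large disks is reclaimed faster
gluster volume set "$VOL" features.shard-deletion-rate 500
```

Deleting files from .shard or .remove_me directly on the brick bypasses GlusterFS and is risky; letting the shard translator drain the queue (possibly at a higher deletion rate) is the safer path.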
Hello everyone, I hope you can help me.
I have 3 servers with Rocky Linux 8.6 and GlusterFS in replica. Everything is
working correctly except for the snapshots. When I take a snapshot, it ends up
saying "VM snapshot xxx failed", and in the storage and disks section, the
snapshot of the memory and
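A first diagnostic pass for failed snapshots on a Gluster-backed domain might look like the following (the volume name vmstore is a placeholder, not from the post):

```shell
# Assumption: the data domain's gluster volume is called "vmstore".
VOL=vmstore

# Pending heals on a replica volume are a common cause of snapshot failures
gluster volume heal "$VOL" info summary

# Find the real error behind the generic "VM snapshot failed" message
grep -i snapshot /var/log/ovirt-engine/engine.log | tail -n 20   # on the engine
grep -i snapshot /var/log/vdsm/vdsm.log | tail -n 20             # on the host
```

The vdsm.log entry on the host that ran the snapshot usually names the failing step (memory volume creation, libvirt error, etc.), which narrows things down considerably.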
I have some manpower I'm willing to throw at oVirt, but I somewhat need
to know if what the community wants and what we want are in line.
1. We'd bring back SPICE and maybe QXL. We are already maintaining forks
of the oVirt and RH kernels for this. We use oVirt currently for a lot
of VDI soluti
Hello, good afternoon!
Install the ansible-core package in version 2.12 to work around the issue.
On Mon, 21 Nov 2022 at 11:40, wrote:
> Hello:
>
> I am following Sandro's guide to deploy hosted engine using Ceph and
> iSCSI.
>
> I have deployed a 3-node Ceph cluster and set up the i
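The workaround suggested above (pinning ansible-core to 2.12) could be done like this on an EL8 host; the exact build available depends on your repos:

```shell
# See which ansible-core builds your enabled repos offer
dnf list --showduplicates ansible-core

# Install a 2.12 build (glob matches whatever 2.12.x your repos carry)
dnf install -y 'ansible-core-2.12*'
```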
I use Rocky Linux for the engine. I had to install and versionlock this:
https://vault.centos.org/centos/8/AppStream/aarch64/os/Packages/postgresql-jdbc-42.2.3-3.el8_2.noarch.rpm
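Assuming the dnf versionlock plugin (it ships as a separate package on EL8), installing and pinning that exact build could look like:

```shell
# The versionlock plugin is packaged separately on EL8
dnf install -y 'dnf-command(versionlock)'

# Install the exact JDBC driver build from CentOS vault (URL from the post)
dnf install -y \
  https://vault.centos.org/centos/8/AppStream/aarch64/os/Packages/postgresql-jdbc-42.2.3-3.el8_2.noarch.rpm

# Pin it so a later `dnf update` cannot replace it
dnf versionlock add postgresql-jdbc
```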
Thanks Sandro,
That's good to hear as I've always had very quick feedback and "support" :)
Kind Regards
Simon
Thank you Murilo,
I have just stripped everything back and created a new internal ansible branch
we use for automated oVirt installations, copied in the relevant inventory
files and it worked.
Will investigate any differences later, but I'm assuming file corruption, as the
same inventory files w
On Mon, 21 Nov 2022 at 11:27, wrote:
> Is all oVirt Support now gone - is oVirt dead in the water?
>
>
"oVirt Support" never existed, so it can't be gone :-)
No, oVirt is not dead in the water, but as said on different occasions, Red
Hat developers who previously worked exclusively
I have upgraded all 3 nodes to the latest 4.5.3 and am still unable to add
additional gluster volumes to the same thinpool/VG.
There was never an issue prior to 4.5.
TASK [gluster.infra/roles/backend_setup : Create volume groups]
**
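Before rerunning the role, it can help to inspect what LVM already sees on the hosts, since the backend_setup task fails when the VG/thin pool state doesn't match what the inventory describes:

```shell
# Existing volume groups and how much free space each has
vgs -o vg_name,pv_count,vg_size,vg_free

# Logical volumes, including thin pools and their fill level
lvs -a -o lv_name,vg_name,lv_size,pool_lv,data_percent

# Which physical volumes back which VG
pvs -o pv_name,vg_name,pv_size,pv_free
```

Comparing this output against the `gluster_infra_*` variables in the inventory usually shows whether the role is trying to recreate a VG that already exists or to extend one with a device that is already claimed.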
Hello:
I am following Sandro's guide to deploy hosted engine using Ceph and
iSCSI.
I have deployed a 3-node Ceph cluster and set up the iSCSI gateway. I am
using Rocky Linux 8.6 as host OS.
These are the steps I have taken:
Install Rocky Linux on the hosts.
Deploy Ceph Quincy cluster using
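For readers following along, a minimal sketch of the Ceph side of those steps on Quincy with cephadm could look like this; the IP, pool name, and iSCSI credentials are all placeholders, not values from the post:

```shell
# Bootstrap the first node (placeholder monitor IP)
cephadm bootstrap --mon-ip 192.0.2.10

# Create and initialize an RBD pool to back the iSCSI LUNs
ceph osd pool create ovirt-vmstore
rbd pool init ovirt-vmstore

# Deploy an iSCSI gateway service (example API credentials)
ceph orch apply iscsi ovirt-vmstore admin 'secret-password'
```

From there, the hosted-engine deployment would point at a LUN exported by that gateway rather than at Ceph directly.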
Could you post the deploy logs? By the way, have you tried to deploy with
the bridge (ovirtmgmt) created?
On Mon, 21 Nov 2022 at 07:27, wrote:
> Is all oVirt Support now gone - is oVirt dead in the water?
Hi, I was in the process of upgrading 4.3 to 4.4. First I upgraded the engine,
and the hosts are still 4.3. The engine has started fencing my hosts after
there was 1 (one) connection problem to VDSM on the host. Basically, the
engine at some random time connects to vdsm, this crashes (didn't o
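To separate a genuine VDSM failure from an engine-side connectivity blip, a quick check on both sides might be:

```shell
# On the (still 4.3) host: is vdsmd actually up and answering requests?
systemctl status vdsmd supervdsmd
vdsm-client Host getCapabilities > /dev/null && echo "vdsm responds"

# On the engine: correlate the fencing events with the connection loss
grep -iE 'fenc|not responding' /var/log/ovirt-engine/engine.log | tail -n 20
```

As a stopgap while investigating, fencing can also be disabled per host (Power Management settings) or per cluster in the Administration Portal, so a single missed heartbeat can't reboot a host.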
Is all oVirt Support now gone - is oVirt dead in the water?