Hi,
I guess this will need to be patched to make it work.
SPICE is also removed in CentOS 9/RHEL 9, so the same applies to oVirt.
I would suggest reconfiguring all the VMs on the source to VNC, and
then exporting/importing them.
But if you have Oracle Linux, I guess you can just open a support case.
Hi,
Are you using qcow2 and discard option?
Jean-Louis
On 23/02/2024 13:03, rootpenteste...@gmail.com wrote:
Hi team,
I am using oVirt 4.4 and iSCSI as my storage domain. Some of my VMs have actual
sizes showing more than their virtual size, and when we look inside the VM the
used space is much
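A common cause of actual size growing past virtual size on thin-provisioned qcow2 disks is that blocks freed inside the guest are never reclaimed unless discard is enabled and the guest runs `fstrim`. A sketch of the relevant libvirt disk XML, shown purely for illustration (roughly what enabling the "Enable Discard" option on a disk results in; device paths are hypothetical):

```xml
<disk type='block' device='disk'>
  <!-- discard='unmap' passes TRIM/UNMAP from the guest down to the
       qcow2 image, so blocks freed in the guest can be reclaimed -->
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source dev='/dev/mapper/example-lv'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

With discard enabled end to end, running `fstrim -av` inside the guest (or mounting with the `discard` option) should let the freed space be returned.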
Engine works fine on el9s
On 21/02/2024 16:24, Nathanaël Blanchet via Users wrote:
Hello,
According to official doc, only el8 is supported for engine.
So, has anyone experienced a successful el9 installation, and if not,
what is the challenge in finalizing it?
The fix got merged into the 42.2.x branch:
https://github.com/pgjdbc/pgjdbc/commits/release/42.2/
So I guess we just need to bump the dependency in the pom.
But as far as I can see the code doesn't use the PreferQueryMode flag, so
we're safe.
Jean-Louis
On 21/02/2024 09:30, Fabrice Bacchella via Users
https://bugzilla.redhat.com/show_bug.cgi?id=1679333
You can find the config I use in this bug report :)
Jean-Louis
On 14/02/2024 13:24, marek wrote:
Can you publish your sql_exporter configuration?
I found this exporter https://github.com/czerwonk/ovirt_exporter and
will give it a try.
Marek
Hi Marek,
In fact all the data you need is already collected by oVirt/VDSM itself
and saved into the DWH database.
I configured sql_exporter for prometheus which does queries on the DWH
database to gather the data I need.
This is exported to prometheus, and there I can query all the data and
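The actual configuration is attached to the bug report linked below; purely as an illustration of the shape of a sql_exporter setup against the DWH database, something like the following (the view and column names here are hypothetical, and the credentials are placeholders):

```yaml
# Illustrative sketch only - the real config is in the linked bug report.
target:
  data_source_name: 'postgres://dwh_reader:secret@localhost/ovirt_engine_history?sslmode=disable'
  collectors: [ovirt-dwh]

collectors:
  - collector_name: ovirt-dwh
    metrics:
      - metric_name: ovirt_vm_cpu_usage_percent
        type: gauge
        help: 'Latest CPU usage per VM, taken from the DWH samples.'
        key_labels: [vm_name]
        values: [cpu_usage_percent]
        query: |
          SELECT vm_name, cpu_usage_percent
          FROM example_vm_samples_view  -- hypothetical view name
```

Prometheus then scrapes the exporter, and the data can be queried and graphed there.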
On 14/12/2023 13:17, Jirka Simon wrote:
source machine is clear, but the destination host (the updated)
Couldn't destroy incoming VM: Domain not found: no domain with
matching uuid '77f85710-45e7-43ca-b0f4-69f87766cc43' (vm:4054)
2023-12-14 12:35:44,492+0100 INFO (libvirt/events)
Best to look in the vdsm logs on both source and destination.
Engine gives no clues :)
Thanks
On 14/12/2023 11:12, Jirka Simon wrote:
Hello there,
after today's update I have a problem with live migration to this
host, with the message:
2023-12-14 10:00:01,089+01 INFO
The ONLY way Rocky Linux 9 will be supported is by somebody doing a
complete check of everything (ovirt-engine, vdsm, etc.), making it
compatible with RL9, and getting all the changes/fixes merged.
Supporting RL9 also does not mean we cannot support any other distro
next to it. For example
Then pay for redhat?
I really don't understand this. You want a free OS with a team of people
behind it that helps you within x hours if you have an issue, but you don't
want to pay for it.
Either pay, or use centos stream and report the issues/debug them
yourselves.
On 12/12/2023 11:29,
Hi,
There is no manual testing anymore. There is even no development anymore at
this moment.
If you want to check what tests are run automatically, for example
when a PR is created, I would advise checking the GitHub pipelines.
These contain all the information about which checks are run, etc., for a
Maybe the best way to find out how it builds, is by checking the CI
pipeline which builds the packages.
This way you can also see how you could build it locally for example.
On 24/10/2023 08:13, Sandro Bonazzola wrote:
This is a bit outdated, as it mentions Bugzilla, which is no longer used,
but
On 11/10/2023 21:05, Jon Sattelberger wrote:
Hi,
VM xxx has been paused due to unknown storage error.
Migration failed due to a failed validation: [Migrating a VM in paused status
due to I/O error is not supported.] (VM: xxx, Source: yyy).
Up until recently oVirt 4.5.4 has been running fine
Feel free to post patches to build oVirt on Fedora, for example. What is
needed for it? Is it a big change? Can't we just support FC and CS if
the need is that high?
The SPICE argument is a non-issue imo, there are already some people
that rebuild the qemu packages for CS9 with SPICE enabled,
Depends on the error of course :)
The patches I created remove the need to install the oVirt CA on your
local machine.
Since the console.vv file contains the oVirt CA, with the patches that
CA is now used to verify the certificate when connecting.
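For context, a console.vv file is an INI-style file consumed by remote-viewer; a simplified, illustrative example (host, port, and certificate values are placeholders):

```ini
[virt-viewer]
type=vnc
host=host1.example.com
port=5900
password=...
# The engine CA is embedded here as newline-escaped PEM, so
# remote-viewer can verify the host certificate with it.
ca=-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----
delete-this-file=1
```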
Jean-Louis
On 13/06/2023 16:05, Klaas
Fixed in https://github.com/oVirt/ovirt-engine/pull/862
On 31/03/2023 22:04, karl.mor...@gmail.com wrote:
Seeing the following on the engine when attempting to start a noVNC console:
2023-03-31 11:57:34,988-07 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
This is just normal. The /dev/sdX names are never stable, as they are
assigned during boot time.
And it depends on which device module is loaded first, so names might change.
If you need stable naming, then use /dev/disk/by-xxx/ symlinks.
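For example, a filesystem can be mounted by UUID in /etc/fstab instead of by /dev/sdX (the UUID and mount point below are illustrative; get the real UUID from `blkid`):

```
# /etc/fstab - stable naming via UUID instead of /dev/sdX
UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /data  xfs  defaults  0 0
```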
On 7/02/2023 15:14, si...@justconnect.ie wrote:
I should
Hi All,
We back up our VMs with a custom script based on the
https://github.com/oVirt/python-ovirt-engine-sdk4/blob/main/examples/backup_vm.py
example.
This works fine, but we are starting to see scaling issues.
On VMs with a lot of dirty blocks, the transfer goes really
slowly
There is some info in the bug report
https://bugzilla.redhat.com/show_bug.cgi?id=1934158
On 1/02/2023 11:40, Laurent Duparchy wrote:
Thanks for your quick reply,
What's behind "some... might lose"? Is there something to check
beforehand, to know which ones will lose and which ones will not?
Something seems to have gone wrong with the building.
The rpm is uploaded to the repository, but not added in the repodata.
Can this be fixed?
Thanks
On 6/12/2022 17:43, Sandro Bonazzola wrote:
The GitHub runners returned online: fresh new oVirt Node and oVirt
appliances have been released
Hi,
This weekend we had an issue with a disk that did not get extended
anymore, for the following reason:
2022-12-17 12:12:27,852+0100 INFO (libvirt/events) [virt.vm]
(vmId='a2926e08-d19b-4e3a-98fa-cc1ee241a037') abnormal vm stop device
ua-6ccc3ee1-02e5-4fad-b1b7-9f2d6c187416 error
Every LUN is a PV in LVM terms.
And if you have multiple LUNs for a storage domain, then all those
LUNs are combined into one single VG (storage domain).
On 29/11/2022 05:25, pet...@mdg-it.com wrote:
Thanks Jean-Louis,
The system we’re working with definitely has multiple LUNs in the one
Hi Peter,
The question is somewhat unclear here.
First of all, a storage domain on iSCSI maps 1:1 to a LUN. So 1 LUN = 1
Storage domain.
The storage domain is configured with LVM and each VM disk is a Logical
Volume.
If the LUN/Storage domain goes out of space, then no new space can be
Got fixed :)
On 8/11/2022 16:08, Jean-Louis Dupond via Users wrote:
Hi
Same here, seems like something broke down!
Jean-Louis
On 8/11/2022 07:29, jb wrote:
Hello community,
since yesterday I have been trying to update my cluster, but when I try it
through the GUI it never finishes, and when I try it manually I get:
[MIRROR]
https://www.ovirt.org/release/4.5.3/ -> 404 :)
Thanks for the new release!
On 18/10/2022 16:52, Lev Veyde wrote:
The oVirt project is excited to announce the general availability of
oVirt 4.5.3, as of October 18th, 2022.
This release unleashes an altogether more powerful and flexible
Seems like this is a libvirt/qemu issue:
2022-08-30 08:18:31,121+ ERROR (libvirt/events) [virt.vm]
(vmId='53adff44-8506-41e7-86d1-5a6ca760721e') Block job
89eab626-9c32-48fb-b006-dbc09cb0026a type COMMIT for drive sdb has
failed (vm:5972)
Can you also share the qemu/libvirt logs?
On
Hi,
To find out which guest NIC is mapped to which oVirt NIC, you can do the
following API call:
/ovirt-engine/api/vms/4c1d50b2-4eee-46d6-a1b1-e4d9e21edaa6/nics/76ff8008-ae2e-46da-aaf4-8cc589dd0c12/reporteddevices
But I would expect the reporteddevices to only show the one from the
specified
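A minimal sketch of mapping guest NICs by MAC address from such a reporteddevices response. The XML below is a simplified, hypothetical example of the payload shape, not a verbatim API response:

```python
# Map reported guest NICs (MAC -> IPs) from a reporteddevices-style
# XML payload. SAMPLE is a simplified, hypothetical response.
import xml.etree.ElementTree as ET

SAMPLE = """
<reported_devices>
  <reported_device>
    <name>eth0</name>
    <mac><address>56:6f:1a:2b:3c:4d</address></mac>
    <ips>
      <ip><address>192.168.1.10</address></ip>
    </ips>
  </reported_device>
</reported_devices>
"""

def mac_to_ips(xml_text):
    """Return {mac_address: [ip, ...]} for every reported device."""
    root = ET.fromstring(xml_text)
    result = {}
    for dev in root.findall('reported_device'):
        mac = dev.findtext('mac/address')
        ips = [ip.findtext('address') for ip in dev.findall('ips/ip')]
        result[mac] = ips
    return result

print(mac_to_ips(SAMPLE))
# {'56:6f:1a:2b:3c:4d': ['192.168.1.10']}
```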
Hi,
We would like to install oVirt Node via some automated way.
I think the best option for this is a PXE boot, and running the
installation from there.
But I would like to know how you can customize the installation of oVirt
Node, so that no manual intervention is needed.
Is it just creating a
Which QEMU version are you using?
Because it might be related to
https://bugzilla.redhat.com/show_bug.cgi?id=1994494
Jean-Louis
On 10/09/2021 19:59, Strahil Nikolov via Users wrote:
Can you provide the output from all nodes:
gluster pool list
gluster peer status
gluster volume status
Best
Hi All,
While debugging a QEMU 6 issue, I was asked to remove the
error_policy from the libvirt XML.
See https://bugzilla.redhat.com/show_bug.cgi?id=1999051#c12
Now, to get this working, I wrote a small Python hook for VDSM
which adjusts the XML before startup.
Just added the
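As an illustration of what such a hook could look like (a sketch, not the exact hook from the thread): VDSM runs executables dropped into the before_vm_start hook directory, where they rewrite the domain XML. The transformation itself can be a plain function; in the real hook you would read and write the DOM via VDSM's `hooking` module rather than strings:

```python
# Sketch of a before_vm_start VDSM hook body that strips the
# error_policy attribute from disk <driver> elements. In an actual
# hook (installed under /usr/libexec/vdsm/hooks/before_vm_start/)
# the DOM would come from hooking.read_domxml() and be saved back
# with hooking.write_domxml().
import xml.dom.minidom


def strip_error_policy(domxml_str):
    """Return the domain XML with error_policy removed from all <driver> elements."""
    dom = xml.dom.minidom.parseString(domxml_str)
    for driver in dom.getElementsByTagName('driver'):
        if driver.hasAttribute('error_policy'):
            driver.removeAttribute('error_policy')
    return dom.documentElement.toxml()


if __name__ == '__main__':
    xml_in = ("<domain><devices><disk>"
              "<driver name='qemu' error_policy='stop'/>"
              "</disk></devices></domain>")
    # Prints the domain XML with error_policy removed.
    print(strip_error_policy(xml_in))
```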