Can anyone explain how affinity labels work in 4.4?
I created a label containing a host and a VM. I had assumed that would require
the VM to run on that host, but the VM continues to run on any host in the
cluster.
I then checked the 4.4 documentation and it says that I need the filters
What exactly are you trying to achieve? LVM is just an implementation detail of
oVirt, so you shouldn't be interacting with it directly in normal operation.
If you want to share block devices between VMs then you can create a shareable
disk.
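For illustration, a shareable disk can be created through the REST API with a request along these lines (the name, size and storage domain below are placeholders, not values from this thread; shareable disks on block storage generally need to be raw/preallocated):

```
POST /ovirt-engine/api/disks
<disk>
  <name>shared_disk</name>
  <format>raw</format>
  <provisioned_size>10737418240</provisioned_size>
  <shareable>true</shareable>
  <storage_domains>
    <storage_domain>
      <name>my_data_domain</name>
    </storage_domain>
  </storage_domains>
</disk>
```

The disk can then be attached to each VM that needs it from the VM's disk attachments tab (or sub-collection).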
On Wed, 07 Jun 2023 13:25:41 +0100 wrote ---
Modify creation of the VM to create one vNIC with link state=down and
then use ATTR{operstate}
BR,
Konstantin
From: Alan G <mailto:alan+ov...@griff.me.uk>
Date: Wednesday, 24 May 2023, 15:46
To: "Volenbovskyi, Konstantin" <mailto:konstantin.volenbovs...@haufe.com>
PCI address than
vNIC1; however vNIC3 has a higher PCI address than vNIC4)? I am surprised about
that…
BR,
Konstantin
From: Alan G <mailto:alan+ov...@griff.me.uk>
Date: Wednesday, 24 May 2023, 10:50
To: Guillaume Pavese <mailto:guillaume.pav...@interactiv-group.com>
Cc: user
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Wed, May 24, 2023 at 1:46 AM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
Is there any way to enforce NIC ordering so the vNICs match the ordering in the
Engine UI?
I found this, but it's not clear whether it was ever implemented:
https://www.ovirt.org/develop/release-management/features/network/predictable-vnic-order.html
I believe Oracle offers its spin of oVirt, called Oracle Linux Virtualization
Manager.
https://www.oracle.com/uk/a/ocom/docs/oracle-linux-virtualization-manager-ds-final.pdf
On Fri, 21 Apr 2023 10:14:39 +0100 masood.ahmed--- via Users
wrote ---
Hi,
I am working on a project
Hi,
Trying to create a VM while attaching an existing disk. I can create the VM
then attach the disk with an additional call, but I thought it should be
possible to do it in one hit?
My code is:
vm = vms_service.add(
    types.Vm(
        name='alma8.7',
        description='AlmaLinux 8.7',
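As far as I'm aware, an existing disk is attached through the VM's diskattachments sub-collection, so it is a second call rather than part of the Vm body. A sketch of that follow-up request (the interface and IDs below are placeholders, adjust to your setup):

```
POST /ovirt-engine/api/vms/{vm_id}/diskattachments
<disk_attachment>
  <bootable>true</bootable>
  <interface>virtio_scsi</interface>
  <active>true</active>
  <disk id="existing-disk-uuid"/>
</disk_attachment>
```

In the Python SDK the equivalent is, I believe, vms_service.vm_service(vm.id).disk_attachments_service().add(...) with a types.DiskAttachment.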
, when I trigger an export that a snapshot is created. Thanks.
On Sun, 08 May 2022 13:00:26 +0100 Arik Hadas wrote
On Fri, May 6, 2022 at 3:00 PM Alan G <mailto:alan+ov...@griff.me.uk> wrote:
>
> Hi,
>
> Is there a way to export an OVA from a snapshot? Or only from the "Active VM"
> image?
Hi,
Is there a way to export an OVA from a snapshot? Or only from the "Active VM"
image?
Thanks,
Alan
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement:
Depends what you mean by "sync" and "swap".
If you can keep the same NFS endpoint, but swap-in your upgraded storage under
the hood then I think you can do as you propose. Stop all VMs, put the storage
domain in maintenance, make your changes and bring the domain out of maintenance.
Sorry, I missed the fact that you also have HE in the same domain.
I think the previous statement still stands. But you'd be advised to stop the
ovirt-ha-agent and ovirt-ha-broker on all hosts as well.
On Wed, 09 Feb 2022 06:07:44 + Pascal D wrote
I need to upgrade my
2021 11:18:57 + Alan G wrote
Hi,
I sent this a while back and never got a response. We've since upgraded to 4.3
and the issue persists.
2021-03-24 10:53:48,934+ ERROR (periodic/2) [virt.periodic.Operation]
operation failed
(periodic:188)
Traceback (most recent call last
tely hit a brick wall with it. We've had to disable fencing on both
nodes as sometimes they get erroneously fenced when vdsm stops functioning
correctly. At this point I'm thinking about replacing the servers with different
models, in case it's something in the NIC drivers...
Alan
On Mon, 0
Hi,
Is there any support for custom zones in firewalld? The only reference I can
find is this from years ago
https://lists.ovirt.org/pipermail/users/2015-May/032791.html
We're trying to put ovirtmgmt into a zone other than the default zone, but vdsm
keeps reverting the config whenever
You might find affinity labels are a better fit for this use case. They would
let you add/remove/move hosts in the future much more quickly than hard-coding
affinity. But that still leaves the bulk configuration tasks.
The only way to do this quickly is with the REST API or one of the SDKs
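As a sketch of the REST route (the label name and host UUID below are placeholders): create the label, then add hosts and VMs to it.

```
POST /ovirt-engine/api/affinitylabels
<affinity_label>
  <name>rack1</name>
</affinity_label>

POST /ovirt-engine/api/affinitylabels/{label_id}/hosts
<host id="host-uuid"/>
```

Looping the second call over a list of host UUIDs in a small script is what makes this quick compared to clicking through the UI.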
The "long-winded" way is the only way I know of doing it.
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-backups_and_migration
Provided you plan in advance, I don't think it's really that onerous. And it
means you have a stable
You can definitely build a bare metal Engine and do a DB restore onto it. I've
done this before when I had problems. Obviously you will need a sane backup
from your hosted engine.
I'm surprised the NFS build didn't work though. You probably need to remove the
stale iSCSI hosted_storage
as allocated memory and it is a bug.
I am working on a fix right now.
Regards,
Lucia
On Wed, Apr 15, 2020 at 1:21 PM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
Hi,
I seem to have found an issue when trying to setup a high performance VM
utilising hugepages and NUMA pinning.
Hi,
I seem to have found an issue when trying to setup a high performance VM
utilising hugepages and NUMA pinning.
The VM is configured for 32GB RAM and uses hugepages of size 1G.
The host has two NUMA nodes each having 64GB RAM (for 128GB total system RAM).
No other VMs are running
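The sizing arithmetic from the figures above can be sketched as follows (the assumption being checked is that the VM should fit within a single NUMA node's reserved 1 GiB pages):

```python
# Illustrative check: can a 32 GiB VM backed by 1 GiB hugepages be satisfied
# from a single 64 GiB NUMA node, assuming enough 1 GiB pages were reserved there?
vm_ram_gib = 32          # VM memory from the report
hugepage_size_gib = 1    # hugepage size from the report
node_ram_gib = 64        # RAM per NUMA node from the report

pages_needed = vm_ram_gib // hugepage_size_gib
fits_on_one_node = pages_needed * hugepage_size_gib <= node_ram_gib
print(pages_needed, fits_on_one_node)  # 32 pages; fits if 32 x 1 GiB pages are free on that node
```

The arithmetic fits comfortably, so if pinning still fails the likely culprit is how many 1 GiB pages were actually reserved per node at boot, not total RAM.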
, 16 Mar 2020 14:39:16 + Michal Skrivanek
wrote
On 13 Mar 2020, at 18:55, Alan G <mailto:alan+ov...@griff.me.uk> wrote:
I've observed that oVirt considers cache/buffer memory as "used".
Where do you see that?
So a host can report, for example, 10% memory
Look at Network Filters in the vNIC profile for the network. I haven't tested
it but there is one called clean-traffic-gateway, which I believe allows only
communication between a VM and the designated gateway.
On Mon, 16 Mar 2020 10:11:57 + Hendrik Peyerl
wrote
We do have
I've observed that oVirt considers cache/buffer memory as "used". So a host can
report, for example, 10% memory utilisation when hosting 0 VMs. A reboot of the
host will of course free all that memory and the host will again report
something close to 0%.
This caused me a shock a few weeks
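On Linux the distinction is visible in /proc/meminfo: MemFree excludes cache/buffers, while MemAvailable estimates how much is actually reclaimable. A minimal sketch (Linux-only, stdlib-only):

```python
# Read /proc/meminfo (Linux) and compare MemFree with MemAvailable.
# Cache/buffer pages are reclaimable, so MemAvailable is the more honest
# answer to "how much memory can this host still give out?".
def meminfo_kib():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # values are in kiB
    return values

m = meminfo_kib()
print("MemFree:", m["MemFree"], "kiB")
print("MemAvailable:", m["MemAvailable"], "kiB")
```

On a host that has been up a while, MemAvailable will typically be far larger than MemFree, which is the gap behind the "10% used with 0 VMs" observation.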
Hi,
I hit a few issues while performing a recent HE install of 4.3. While I managed
to find solutions/workarounds to all the problems I thought I might share them
here
* As defined in the Ansible defaults the temp dir for building the local HE VM
is /var/tmp. I was 80M short of the required
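A preflight check along these lines would have caught the shortfall; the required size below is a placeholder, not the real appliance requirement:

```python
# Sketch: check free space in the HE build temp dir before starting the deploy.
import shutil

required_mib = 10 * 1024  # placeholder requirement; check your appliance image size
free_mib = shutil.disk_usage("/var/tmp").free // (1024 * 1024)
print(f"/var/tmp: {free_mib} MiB free, {required_mib} MiB required")
if free_mib < required_mib:
    print("Too little space: point the build at a larger temp dir first")
```

If space is short, redirecting the build directory (rather than growing /var/tmp) is usually the quicker fix.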
is not really
technically possible.
On Mon, 24 Feb 2020 13:34:49 + Nir Soffer wrote
On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor <mailto:gegui...@redhat.com>
wrote:
>
> On 22/02, Nir Soffer wrote:
> > On Sat, Feb 22, 2020, 13:02 Alan G <mailto:alan+ov.
the domain is full way
before it actually is.
Not clear if this is handled natively in oVirt or by the underlying LVM?
On Fri, 21 Feb 2020 21:35:06 + Nir Soffer wrote
On Fri, Feb 21, 2020, 17:14 Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
Hi,
I have an oVirt c
Hi,
I have an oVirt cluster with a storage domain hosted on a FC storage array that
utilises block de-duplication technology. oVirt reports the capacity of the
domain as though the de-duplication factor was 1:1, which of course is not the
case. So what I would like to understand is the
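For illustration only (both figures below are assumed, not from the array in question), the gap between reported and effective capacity is simple arithmetic:

```python
# Illustrative: oVirt reports raw (1:1) capacity; the array's de-dup ratio
# determines how much logical data actually fits.
reported_tib = 10.0   # capacity as oVirt sees it (assumed figure)
dedup_ratio = 3.0     # array-reported reduction ratio, 3:1 (assumed figure)

effective_tib = reported_tib * dedup_ratio
print(f"{reported_tib} TiB reported -> ~{effective_tib} TiB effective at {dedup_ratio}:1")
```

The practical consequence is that oVirt's free-space warnings fire against the raw figure, so monitoring the array's own utilisation remains necessary.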
Hi,
I have issues with one host where supervdsm is failing in network_caps.
I see the following trace in the log.
MainProcess|jsonrpc/1::ERROR::2020-01-06
03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error
in network_caps
Traceback (most recent call last):
I've had to do this a couple of times and always ended up with a working system
in the end.
As a fall back option (although I've never had to use it) I have a backup
engine VM running completely outside of oVirt (an ESXi host in my case). Then if
the hosted_engine deploy fails for any reason
I have the same issue and haven't been able to find a solution. It seems that
the initial LUN is hard coded into /etc/ovirt-hosted-engine/hosted-engine.conf
and there's no way to add additional paths to it.
On Mon, 11 Nov 2019 17:42:17 + Francesco Castellano
wrote
Hi
Hi,
Yesterday I had an AddDisk operation fail. The task is no longer running on the
SPM but it's stuck in the engine DB and the failed disk is now locked. I've
tried to clean it out using taskcleaner.sh, but either I'm using it incorrectly
or it doesn't handle my specific scenario.
There
de can reliably ping your gateway IP,
failures there will cause nodes to bounce.
A starting place rather than a solution, but the first places to look. Good luck!
-Darrell
On May 7, 2019, at 5:14 AM, Alan G <mailto:alan+ov...@griff.me.uk> wrote:
Hi,
We have a dev cluster running
Hi,
Trying to re-deploy Hosted Engine into a new storage domain. "hosted-engine
--deploy --noansible" has completed and the engine is up, but I cannot remove
the existing hosted_storage domain to allow the new one to be imported.
I cannot remove the domain until the old HostedEngine VM is
Hi,
We have a dev cluster running 4.2. It had to be powered down as the building
was going to lose power. Since we've brought it back up it has been massively
unstable (hosts constantly switching state, VMs migrating all the time).
I now have one host running (with HE) and all others in
Hi,
I have some Cisco C-220M4 servers with CIMC (not connected to UCS fabric).
I can get the fencing to work on the command line like this
fence_ipmilan -a XXX -l YYY -p ZZZ --hexadecimal-kg=ABC -o status -P
But when I try to configure & test in the Engine UI I get, "Test failed:
thing, we
had a serious bug around this[1]
and it was fixed in 4.2, I am not sure it applies to your case as well, as
there are multiple factors in play, so best test it first on some other disk
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1574346
On Mon, Mar 18, 2019 at 2:51 PM
r cold migration?
which version?
currently the best way (and probably the only one we have) is to kill the
qemu-img convert process (if you are doing cold migration), unless there is a
bug in your version, it should roll back properly
On Mon, Mar 18, 2019 at 2:10 PM Alan G <mailto:
Hi,
I accidentally triggered a storage migration for a large vdisk that will take
some hours to complete. Is there a way to cleanly cancel the task, such that
the vdisk will remain on the original domain?
Thanks,
Alan
Hi,
I made an error when provisioning a new FC storage domain and gave it LUN ID 0
by mistake. I now need to move it to another ID.
Can I do this:
1. Shutdown all VMs on the domain.
2. Put the domain in maintenance.
3. Change the LUN ID.
4. Force SCSI rescan on every host.
5. Bring the
Can you provide the output of vdsm-tool dump-volume-chains?
On Wed, Feb 27, 2019 at 11:45 AM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
egal volume:
('5f5b436d-6c48-4b9f-a68c-f67d666741ab',)
On Tue, 26 Feb 2019 18:28:02 + Benny Zlotnik
wrote ----
Can you remove the snapshot now?
On Tue, Feb 26, 2019 at 7:06 PM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
Because the VM is down, you can manually activate it using
$ lvchange -a y vgname/lvname
remember to deactivate it afterwards
On Tue, Feb 26, 2019 at 6:15 PM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
I tried that initially but I'm not sure how to access the image on block
storage?
You can try to run
$ qemu-img check -r leaks
(make sure to have it backed up)
On Tue, Feb 26, 2019 at 5:40 PM Alan G <mailto:alan%2bov...@griff.me.uk> wrote:
Hi,
I performed the following:
1. Shut down the VM.
2. Take a snapshot.
3. Create a clone from the snapshot.
4. Start the clone. The clone starts fine.
5. Attempt to delete the snapshot from the original VM; this fails.
6. Attempt to start the original VM; this fails with "Bad volume specification".
This was logged
"lun": "0"
}],
"devtype": "iSCSI",
"physicalblocksize": "4096",
"pvUUID": "cWucdo-DYZc-IlLU-VuED-6FAa-iLdx-dq3RWU",
"serial": "SNETAPP_LUN
Hi, I'm setting up a lab with oVirt 4.2. All hosts are disk-less and boot from
a NetApp using iSCSI. All storage domains are also iSCSI, to the same NetApp as
BFS. Whenever I put a host into maintenance vdsm seems to try to unmount all
iSCSI partitions including the OS partition, causing the
Great, thanks for clarification.
On Thu, 25 Oct 2018 13:07:58 +0100 Simone Tiraboschi wrote
On Thu, Oct 25, 2018 at 1:31 PM Alan G wrote:
Hi, I have a 4.1 cluster with FC block storage and hosted engine. Last night a
host went unreachable due to a driver/firmware issue with the NIC
Hi, I have a 4.1 cluster with FC block storage and hosted engine. Last night a
host went unreachable due to a driver/firmware issue with the NIC card. The
Engine spotted this, the host was fenced and everything behaved as expected.
However, it got me thinking - if the affected host had been the
I managed to import it in the end using the old import-to-ovirt.pl script. We
are in the process of upgrading production but it will take a while. On
Thu, 28 Jun 2018 08:14:10 +0100 Daniel Erez wrote On
Wed, Jun 27, 2018 at 1:04 PM Alan G wrote: Hi, I'm
trying to import a KVM VM
Hi, I'm trying to import a KVM VM into oVirt. First I tried the GUI VM import
functionality and this failed with the error below. However other VMs from the
same source host were imported fine. read-32893::ERROR::2018-06-27
09:43:48,703::v2v::679::root::(_run) Job