It's an effect that had me puzzled for a long time as well: to my understanding,
the gluster volume command should only ever show peers that contribute bricks to a
volume, not peers in general.
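For what it's worth, a minimal way to compare the two views on a node (the volume name 'myvol' is just a placeholder):

gluster pool list                                 # every peer this glusterd knows about
gluster volume info myvol | grep '^Brick[0-9]'    # only the peers contributing bricks to that volume
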
Now perhaps an exception needs to be made for hosts that have been enabled to
run the management engine, as
Sharing disks typically requires coordinating their use at a layer above the disk
itself.
So did you consider sharing a file system instead?
Members of my team have been using NetApp for their entire careers and are quite
used to sharing files, even for databases.
And since Gluster HCI basically
ovirt-hosted-engine-cleanup will only operate on the host you run it on.
In a cluster that might have side effects, but as a rule it will try to undo
all the configuration changes that made a Linux host an HCI member or just a
host under oVirt management.
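A minimal sketch of the order I'd follow (my own notes, not an official procedure, so verify against your setup):

# On the engine: move the host to maintenance and remove it from the cluster first (GUI or API).
# Then on the host itself -- the tool only cleans up the local machine:
ovirt-hosted-engine-cleanup
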
While the GUI will try to do the
That very much describes my own situation two years ago... just with a slight time
and geographic offset, as my home is near Frankfurt and my work is in Lyon. I
had been doing 70:1 consolidation via virtualization based on OpenVZ
(containers, but with an IaaS abstraction) since 2006, because it was
I've just given it a try: it works just fine for me.
But I did notice that I chose virtio-scsi when I created the disk. I don't know
if that makes any difference, but as an old-timer, I still have "SCSI" ingrained
as "better than ATA".
I chose FreeBSD 9.2 x64 as the OS type while creating the VM (nothing
Well, that's why I really want a theory of operation here, because removing a
host as a gluster peer might just break something in oVirt... And trying to
fix that may not be trivial either.
It's one of those cases where I'd just really love to have nested
virtualization work better so I can
My understanding is that in an HCI environment the storage nodes should be
rather static, but that the pure compute nodes can be much more dynamic or
opportunistic: those should/could actually even be switched off and restarted
as part of oVirt's resource optimization.
The 'pure compute'
Hi Strahil,
when you said "The Gluster documentation on the topic is quite extensive", I
wasn't quite sure if that was meant to be ironic: you typically are not.
At the moment the only documentation I can see navigating from the
documentation menu on ovirt.org is this:
11.6. Preparing and
Hi Strahil,
I did actually find the matching RHV documentation now.
The reason I didn't before seems to be that this documentation was only added
for RHHI 1.8 or oVirt 4.4 and did not exist for RHHI 1.7 or oVirt 4.3
oVirt may have started as a vSphere 'look-alike', but it graduated to a Nutanix
'clone', at least in terms of marketing.
IMHO that means the 3-node hyperconverged default oVirt setup (2 replicas and 1
arbiter) deserves special love in terms of documenting failure scenarios.
3-node HCI is
I personally consider the fact that you gave up on 4.3/CentOS 7 before CentOS 8
could even remotely be considered reliable enough to run "a free open-source
virtualization solution for your entire enterprise" a rather violent breach of
trust.
I understand Red Hat's motivation with Python 2/3 etc., but
I am glad you got it done!
I find that oVirt resembles an adventure game (with all its huge emotional
rewards once you prevail) more than a streamlined machine that just works every
time you push a button.
Those are boring, sure, but really what I am looking for when the mission is to
run
It's important to understand the oVirt design philosophy.
That may be somewhat understated in the documentation, because I am afraid they
copied that from VMware's vSphere, which might have copied it from Nutanix, who
might have copied it from who knows who else... which might explain why they are a
Export domain should work, with the usual constraints that you have to
detach/attach the whole domain and you'd probably want to test with one or a
few pilot VMs first.
There could be issues with 'base' templates etc. for VMs that were created as
new on 4.4: be sure to try every machine type
Roman, I believe the bug is in
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml
- name: Set Max memory
  set_fact:
    max_mem: "{{ free_mem.stdout|int + cached_mem.stdout|int - he_reserved_memory_MB + he_avail_memory_grace_MB }}"
If these
Yup, that's a bug in the ansible code that I've come across on hosts with 512GB
of RAM.
I quite simply deleted the checks from the ansible code and re-ran the wizard.
I can't read YAML or Python or whatever it is that Ansible uses, but my
impression is that things are 'cast' or converted into
> On Wed, Jan 27, 2021 at 9:14 AM
>
> Ok, I think I found it, at least for Nvidia. You can follow what is described
> for RHV:
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/...
>
> In the same manual there are also instructions for vGPU.
>
> There is also the guide for
A geographically distributed cluster is a very expensive random number
generator: any cluster critically depends on the assumption that the
communication between nodes is at least an order of magnitude more reliable
than the node itself. Otherwise you just multiply the chance of failures.
That
To your question "Is it worth building a new virtualization platform based on
ovirt?" Sandro answered "currently there's no change in the Red Hat's support
of the oVirt project".
That may technically be true, but I believe it doesn't really answer your
question.
oVirt is a management layer
The main issue isn't oVirt, but Nvidia's drivers inside a virtual machine: if you
use 'unapproved' GPUs in what the Nvidia drivers recognise as a VM, they'll refuse
to load.
RTX series cards should be ok, I've tried K40 and P100 and they are just as
fine with oVirt pass-through.
V100 tests are still
I would agree, if CentOS 8 could still be considered a proper base to build on.
But since that now goes EOL at the end of this year, oVirt is no longer viable,
because it's beta-on-beta.
I can only hope that IBM will read the writing on the wall before you guys are
out of a job.
Unlocked the snapshot, deleted the VM... done!
That was easier than I thought.
Thanks for your help in any case!
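For anyone searching later: one way to find and unlock such stuck entities is the dbutils helper shipped with the engine. A rough sketch (path and flags from memory, so check the help output on your version first):

cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -h          # print the usage to confirm the options your engine ships
./unlock_entity.sh -t all -q   # query mode: list everything the engine still considers locked
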
And I should have guessed that this is only how trouble starts =:-O
I immediately removed the locked snapshot image (actually they pretty much
seemed to disappear by themselves, as delete operations might have been pending),
but the VM that was doing the snapshot is still there, even if the
Thanks Shani,
I had just found that myself and even managed to unlock and remove the images.
Somehow the reference to https://ovirt.org/develop/developer-guide/ doesn't seem
to be reachable from the ovirt.org site navigation, but now I have discovered it
via another post here and found a treasure
On one oVirt 4.3 farm I have three locked images I'd like to clear out.
One is an ISO image that somehow never completed the transfer due to a slow
network. It's occupying little space, except in the GUI where it sticks out and
irritates. I guess it would just be an update somewhere on the
I am glad you think so and that it works for you. But I'd also guess that you put
more than a partial FTE into the project.
I got attracted via the HCI angle they started pushing as a result of Nutanix
creating a bit of a stir. The ability to use discarded production boxes for a
lab with the
Hi Strahil,
OpenVZ is winding down, unfortunately. They haven't gone near CentOS 8 yet, and I
don't see that happening either. It's very unfortunate, because I really loved
that project, and I always preferred its container abstraction as well as its
resource management tools, because scale-in is
The major issue with that is that oVirt 4.3 is out of maintenance, with Python2
EOL being a main reason.
CentOS reboots are happening, but they will be of little avail as long as the
oVirt team won't support them.
It's a mess that does lots of damage to this project, but IBM might just have
different
I'd put my money on a fall-through error condition where TSC is simply the last
case with a 'good' error message to point to. I have clusters with CPUs that are
both 10 years and 10x apart in performance performing migrations between
themselves quite happily (Sandy Bridge dual quads to Skylake 56
Hi, that CPU looks rather good to me: KVM/oVirt should love it (as would I)!
I am actually more inclined to believe that it may be a side channel mitigation
issue: KVM is distinguishing between base CPUs and CPUs with enabled
mitigations to ensure that you won't accidentally migrate a VM from
I came to oVirt thinking that it was like CentOS: there might be bugs, but
given the mainline usage in home and corporate labs with light workloads and
nothing special, the chances of hitting one should be pretty minor: I like looking
for new frontiers on top of my OS, not inside it.
I have been running
On 10/15/20 11:27 AM, Jeff Bailey wrote:
On 10/15/2020 12:07 PM, Michael Thomas wrote:
On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
Getting closer...
I recreated the storage domain and added rbd_default_features=3 to
ceph.conf. Now I see the new
On 10/15/20 10:19 AM, Jeff Bailey wrote:
On 10/15/2020 10:01 AM, Michael Thomas wrote:
Getting closer...
I recreated the storage domain and added rbd_default_features=3 to
ceph.conf. Now I see the new disk being created with (what I think
is) the correct set of features:
# rbd info
guessing that it's being accessed by the vdsm user?
--Mike
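For reference, the ceph.conf change being discussed looks roughly like this (a sketch only; the section placement is my assumption, and it affects newly created images only):

# only enable features the kernel rbd client can map (layering + striping)
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
rbd default features = 3
EOF
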
On 10/14/20 10:59 AM, Michael Thomas wrote:
Hi Benny,
You are correct, I tried attaching to a running VM (which failed), then
tried booting a new VM using this disk (which also failed). I'll use
the workaround in the bug report going
r this, but it is currently
> not possible. The options are indeed to either recreate the storage
> domain or edit the database table
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
>
>
>
>
> On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas wrote:
>>
On 10/14/20 3:30 AM, Benny Zlotnik wrote:
Jeff is right, it's a limitation of kernel rbd, the recommendation is
to add `rbd default features = 3` to the configuration. I think there
are plans to support rbd-nbd in cinderlib which would allow using
additional features, but I'm not aware of
When you ran `engine-setup` did you enable cinderlib preview (it will
not be enabled by default)?
It should handle the creation of the database automatically; if you
didn't, you can enable it by running:
`engine-setup --reconfigure-optional-components`
On Wed, Sep 30, 2020 at 1:58 AM Mich
Thanks a lot for this feedback!
I've never had any practical experience with Ceph, MooseFS, BeeGFS or Lustre:
GlusterFS to me mostly had the charm of running on 1/2/3 nodes and then scaling to
anything beyond that with balanced benefits in terms of resilience vs.
performance... in theory, of course.
And on
Thank you very much for your story!
It has very much confirmed a few suspicions that have been gathering over the
last... oh my God! Has it been two years already?
1. Don't expect plug-and-play, unless you're on SAN or NFS (even HCI doesn't
seem to be close to the heart of the oVirt team)
2. Don't expect
From what I observed (but it's not something I try often), if you try to enable
maintenance on a host and have VMs on it, it will try migrating the VMs first,
which is a copy-first, state-transfer-afterwards process. So if there is no
migration target available or if the copying and
This is a shot in the dark, but it's possible that your dnf command was
running off of cached repo metadata.
Try running 'dnf clean metadata' before 'dnf upgrade'.
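In other words, something along these lines on the affected host:

dnf clean metadata   # drop the cached repository metadata
dnf upgrade          # retry against freshly downloaded metadata
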
--Mike
On 10/2/20 12:38 PM, Erez Zarum wrote:
Hey,
A bunch of hosts installed from the oVirt Node image; I have upgraded the
As far as I know, there is no alternative to reinstalling it... The management
engine inherits its configuration from the cluster, and in this case, once you
have changed it, it is missing the virtual hardware it needs to boot, and since
you have no GUI you can't change it back either... You are not the first one who has
handle the creation of the database automatically, if you
didn't you can enable it by running:
`engine-setup --reconfigure-optional-components`
On Wed, Sep 30, 2020 at 1:58 AM Michael Thomas wrote:
Hi Benny,
Thanks for the confirmation. I've installed openstack-ussuri and ceph
Octopus. Then I
or.centos.org/centos/8/cloud/x86_64/openstack-ussuri/Packages/p/
On Tue, Sep 29, 2020 at 10:37 PM Michael Thomas wrote:
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt
I'm looking for the latest documentation for setting up a Managed Block
Device storage domain so that I can move some of my VM images to ceph rbd.
I found this:
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
...but it has a big note at the top that it
I can hear you saying: "You did understand that single node HCI is just a toy,
right?"
For me the primary use of a single node HCI is adding some disaster resilience
in small server edge type scenarios, where a three node HCI provides the fault
tolerance: 3+1 with a bit of distance, warm or
I was looking forward to that presentation for exactly that reason, but it
completely bypassed the HCI scenario, was very light on details and of course
assumed that everything would just work, because there is no easy fail-back and
you're probably better off taking down the complete farm
I am hoping for a miracle like that, too.
In the meantime I am trying to make sure that all variants of exports and
imports, from *.ova to re-attachable NFS domains, work properly, in case I have
to start from scratch.
HCI upgrades don't get the special love you'd expect after RHV's proud
Not to give you any false hope, but when I recently reinstalled my oVirt
4.4.2 cluster, I left the gluster disks alone and only reformatted the
OS disks. Much to my surprise, after running the oVirt HCI wizard on
this new installation (using the exact same gluster settings as before),
the
I can see new posts coming in via e-mail, but updates on the web site have
stopped and posts don't disappear?
On 14.09.2020 at 15:23, tho...@hoberg.net wrote:
Sorry, twice over now:
1. It is a duplicate post, because the delay for posts to show up on the
web site keeps getting longer (as I am responding via mail, the first post is
still not shown...)
2. It seems to have been a wild goose chase: The gluster
Sorry if it's a duplicate: I got an error on posting... And yes, I posted it on the
Gluster Slack first, but I am using Gluster only because the marketing on oVirt
HCI worked so well...
I got 3 recycled servers for an oVirt test environment first and set those up
as 3-node HCI using defaults
Yes, I've also posted this on the Gluster Slack. But I am using Gluster mostly
because it's part of oVirt HCI, so don't just send me away, please!
Problem: GlusterD refusing to start due to quorum issues for volumes where it
isn’t contributing any brick
(I've had this before on a different
> On Sun, Aug 30, 2020 at 7:13 PM
> Using an export domain is not a single click, but it is not that complicated.
> But this is good feedback anyway.
>
>
> I think the issue is gluster, not qemu-img.
>
>
> How did you try? Transfer via the UI is completely different from
> transfer using the
Is there a CLI for setting up a hyperconverged environment with
glusterfs? The docs that I've found detail how to do it using the
cockpit interface[1], but I'd prefer to use a cli similar to
'hosted-engine --deploy' if it is available.
Thanks,
--Mike
In this specific case I even used virgin hardware originally.
Once I managed to kill the hosted-engine by downgrading the datacenter cluster
to legacy, I re-installed all gluster storage from the VDO level up. No traces
of a file system should be left with LVM and XFS on top, even if I didn't
I've just tried to verify what you said here.
As a baseline I started with the 1nHCI Gluster setup. Of the four VMs (two
legacy, two Q35) on the single-node Gluster, one survived the import, one failed
silently with an empty disk, and two failed somewhere in the middle of qemu-img
trying to write
Thanks for the suggestion, I tried that, but I didn't get very far on a single
node HCI cluster...
And I'm afraid it won't be much better on HCI in general, which is really the
use case I am most interested in.
Silently converting VMs is something rather unexpected from a hypervisor, doing
it
Not sure it will actually do that, but if you create a new network profile, you
can select a 'port mirroring' option: It is my understanding that is what you
need. You may also want to deselect the filtering rules in the same place.
I like playing with shiny new toys and VDO is one :-)
Actually with that it shouldn't matter if whatever is put on top actually has
sparse allocation in the file format or just blocks with zeros, which makes for
a reusable HDD in case I am working with imageio images that are raw.
Good to know
On a machine that survived the import, that worked as expected.
You may want to add to a checklist that the original machine type for legacy VMs
isn't carried over on import but needs to be set explicitly.
I might have found one: You can set the emulated machine to 'legacy' before the
first launch.
No idea yet if it will actually work, because the boot disk was copied empty...
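A quick way to check what machine type a VM actually ended up with, on the host where it runs (the VM name is a placeholder):

# read-only libvirt query, so no vdsm credentials needed; works while the VM is running
virsh -r dumpxml my-imported-vm | grep -o "machine='[^']*'"
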
Testing the 4.3 to 4.4 migration... what I describe here as fact is mostly
observation and conjecture, and could be wrong; stating it plainly just makes
writing easier...
While 4.3 seems to maintain a default emulated machine type
(pc-i440fx-rhel7.6.0 by default), it doesn't actually allow setting it in the
After this
(https://devconfcz2020a.sched.com/event/YOtG/ovirt-4k-teaching-an-old-dog-new-tricks)
I sure do not expect this (log below):
Actually I am trying to evaluate just how portable oVirt storage is, and in this
case I had prepared a USB3 HDD with VDO, which I could literally move between
BTW: This is the message I get on the import:
VDSM nucvirt command HSMGetAllTasksStatusesVDS failed: value=low level Image
copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none',
'-T', 'none', '-f', 'qcow2',
> On Fri, Aug 28, 2020 at 2:31 AM
> You should really try the attach/detach storage domain, this is the
> recommended way to move
> vms from one ovirt system to another.
>
> You could detach the entire domain with all vms from the old system,
> and connect it to the new
> system, without
Struggling with bugs and issues on OVA export/import (my clear favorite
otherwise, especially when moving VMs between different types of hypervisors),
I've tried pretty much everything else, too.
Export domains are deprecated and require quite a bit of manual handling.
Unfortunately the
Thanks for diving into that mess first, because it allowed me to understand
what I had done as well...
In my case the issue was that a VM moved from 4.3 to 4.4 seemed to be silently
upgraded from "default" (whatever the default was on 4.3) to "Q35", which seems to
be the new default in 4.4.
But that
While I am trying to prepare a migration from 4.3 to 4.4 with the base OS
switch, I am exploring all variants of moving VMs.
OVA export/import and export domains have issues and failures so now I am
trying backup domains and fail to understand how they are to be used and the
sparse
I am testing the migration from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4.
Exporting all VMs to OVAs, and re-importing them on a new cluster built from
scratch seems the safest and best method, because in the step-by-step
migration, there are simply far too many things that can go wrong and no easy
Just fresh from a single node HCI setup (CentOS 8 + oVirt 4.4.1) I did
yesterday on an Intel i7-8559U NUC with a single 1TB NVMe storage stick and
64GB of RAM:
The HCI wizard pretty much assumes a "second disk", but a partition will
actually do. And it doesn't even have to be in /dev/mapper,
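Roughly what that looks like in practice; a sketch only, with the device name and partition type as examples rather than anything the wizard mandates:

# add one more partition on the NVMe that already carries the OS and hand it to the wizard:
sgdisk --new=0:0:0 --typecode=0:8e00 --change-name=0:gluster_brick /dev/nvme0n1
partprobe /dev/nvme0n1
lsblk /dev/nvme0n1    # note the new partition (nvme0n1p4 or similar) and use that as the brick device
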
It would be interesting to know how the previous team got to six nodes: I
don't remember seeing any documentation on how to do that easily...
However, this state of affairs also seems to be quite normal whenever I reboot
a single node HCI setup: I've seen that with two systems now, one running
Replicated is pretty much hard-coded into all the Ansible scripts for the HCI
setup. So you can't do anything but replicated, and only choose between arbiter
or full replica there. Distributed doesn't give you anything with three nodes, but with
five, seven, nine or really high numbers, it becomes quite
I have not been able to find answers to a couple of questions in the
self-hosted engine documentation[1].
* When installing a new Enterprise Linux host for ovirt, what are the
network requirements? Specifically, am I supposed to set up the
ovirtmgmt bridge myself on new hosts, or am I
Howto: Create a new network profile that doesn't have the MAC spoofing filter
included. In my case I used one without any filters; you may want to be more
careful.
Background:
I had tried the nested approach with the default settings and found that the
hosted-engine setup worked just fine to
One of the many cases where nested virtualization would be quite handy for
testing...
If only it worked with oVirt on top of oVirt...
>
> Since 4.4 is the last feature release, this will never change even if it is
> not
> documented.
>
> Nir
Hi Nir,
could you please clarify: what does "last feature release" mean here? Is oVirt
being feature-frozen in preparation for something more drastic?
Hi Strahil, no updates, especially since I am stuck on 4.3.11 and they tell us
it's final.
Glusterfs is 6.9-1.el7.
Apart from those three VMs and the inability to copy disks the whole farm runs
fine so far.
Where would I configure the verbosity of logging? Can't find an obvious config
option.
Thanks for putting in the effort!
I learned a lot of new things.
I also learned that I need to learn a few more now.
The table could use some alternating background or a grid: Too easy to get lost.
Environments change over time, e.g. you find you really should have split the
management,
I came across this in an environment where the proxy was very slow. The
downloaded image for the appliance gets deleted every so often (and perhaps
also via the cleanup script), so when it needs to be reloaded from the
internet, it can take its time, because it's quite large (>1GB).
Another
While trying to diagnose an issue with a set of VMs that get stopped for I/O
problems at startup, I try to deal with the fact that their boot disks cause
this issue, no matter where I connect them. They might have been the first
disks I ever tried to sparsify and I was afraid that might have
If the default blank template from a fresh install had enabled
high-availability, that would be a bug.
But if someone had set this on their blank template, I can see how that would
cause such an issue and I can just imagine the drama behind finding it... It
seems common enough to code for it!
ven't seen the need
when moving VMs between VB/VMware/Hyper-V. And with Windows VMs, starting with
W7/W2012 it's become almost a no-op, just pop it in and it will adjust with
little more than a reboot, unless you want those virtio drivers for speed.
>
> Thanks Thomas, this is very useful fee
Here is the explanation, I think:
root 12319 12313 15 16:59 pts/0 00:00:56 qemu-img convert -O qcow2
/dev/loop0
/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/3be7c1bb-377c-4d5e-b4f6-1a6574b8a52b/845cdd93-def8-4d84-9a08-f8c991f89fe3
This
Empty disks from exports that went wrong didn't help. But that's fixed now,
even if I can't fully validate the OVA exports on VMware and VirtualBox.
The export/import target for the *.ova files is an SSD hosted xfs file system
on a pure compute Xeon D oVirt node, exported and automounted to the
Not yet, because I was fighting a nasty outage, evidently created from
importing, starting and deleting images too quickly for my Atoms + Xeon-D based
test farm...
But it's on the list :-)
Ciao, Thomas
After applying the OVA export patch that ensured disk content was actually
written into the OVA image, I have been able to transfer *.ova VMs between two
oVirt clusters. There are still problems that I will report once I have fully
tested what's going on, but in the meantime, for all those who
I have done that, even added five nodes that contribute a separate Gluster file
system using dispersed (erasure codes, more efficient) mode.
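For context, a dispersed volume over five such nodes can be created along these lines (host names, volume name and brick paths are placeholders, not my exact commands; 4+1 erasure coding):

gluster volume create tank disperse 5 redundancy 1 \
  node{1..5}:/gluster_bricks/tank/brick
gluster volume start tank
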
But in another cluster with such a 3-node HCI base, I had a lot (3 or 4) of
compute nodes that were actually dual-boot or just shut off when not used:
If OVA export and import work for you, you get to choose between the two at
import.
After OVA export/import was a) recommended against and b) not working with the
current 4.3 on CentOS 7, I am trying to make sure I keep working copies of
critical VMs before I test whether the OVA export now works properly with the
Red Hat fix from 4.4 applied to 4.3.
Long story short, I have an
Hi Gianluca,
I have had a look at the change and it's a single line of code added on the
hosted-engine. I'll verify that it's working on 4.3 and will make a note of
checking it at engine upgrades, which for now seems the less troublesome
approach.
Hopefully it will get integrated/backported
Dear Nir,
I am sorry if that sounded too harsh. I do appreciate what you are doing!
The thing is that my only chance of getting my employer to go with the
commercial variant depends a lot on the upstream project already showing
demonstrable usability in the research lab.
Just try to
Nir, first of all, thanks a lot for the detailed description and the quick fix
in 4.4!
I guess I'll be able to paste that single-line fix into the 4.3 variant
myself, but I'd rather see it included in the next 4.3 release, too: how
would that work?
---
"OVA is not a backup solution":
From
Unfortunately that means OVA export/import doesn't seem to be a QA test case
for oVirt releases... which would make me want to drop it like a hot potato, if
there was any alternative at this point.
I guess my biggest mistake was to assume that oVirt was like CentOS.
I confirm, I get "qemu-img: /dev/loop0: error while converting qcow2: Could not
open device: Permission denied", too.
You've nailed it and Nir has pasted your log into the bug report 1862115.
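If someone wants to poke at this while an export is running, a crude pair of checks (nothing oVirt-specific, just comparing device ownership with the user running the convert):

ls -l /dev/loop0                 # owner/group of the loop device qemu-img is reading from
ps -o user=,cmd= -C qemu-img     # the user actually running the convert (vdsm, presumably?)
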
Wow, that seems like a good find for a potential cause!
I'd totally subscribe to that cause if it only affected my Atom HCI clusters,
which have had timing issues so bad that even the initial 3-node HCI setup had to
be run on a 5 GHz Kaby Lake to succeed, because my J5005 Atoms evidently were too
slow.
Hi Nir,
performance is not really an issue here, we're most interested in functional
testing and migration support.
That's where nesting 4.4 on top of 4.3 would be a crucial migration enabler,
especially since you don't support a 7/8 or 4.3/4.4 mix that much elsewhere.
Currently my tests
Just looking at the release notes, using anything 'older' in the case of 4.4
seems very rough going...
Testing 4.4 was actually how I got into the nesting domain this time around,
because I didn't have physical hardware left for another HCI cluster.
Testing of oVirt 4.4 on top of oVirt 4.3
That's exactly the point: Running oVirt on top of a vanilla KVM seems to be the
only case supported, because the Red Hat engineers use that themselves for
development, testing and demoing.
Running virtual machines nested inside an oVirt VM via KVM also works per se,
as HostedEngineLocal during