Hello,
Is there a way to start a VM directly from the KVM host if the oVirt Engine is
down or temporarily unavailable due to network issues or other reasons?
thanks for any advice.
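For what it's worth, the usual workaround looks roughly like this on a vdsm-managed host (the SASL user and VM name are illustrative):

  # the hosted engine VM has its own dedicated commands:
  hosted-engine --vm-status
  hosted-engine --vm-start

  # ordinary VMs: read-only queries need no authentication:
  virsh -r list --all

  # write operations need a local SASL user (create once per host):
  saslpasswd2 -a libvirt admin
  virsh start myvm    # authenticate as 'admin' when prompted

Just keep in mind the engine won't know about state changes made behind its back once it comes up again.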
You are aware that you're trying to ride an arguably dead horse to new
frontiers, right?
I'm trying something similar with Proxmox on ARM using an Orange PI 5+ and a
Raspberry Pi5, where nearly everything works, except live migration.
But there are a lot of things still missing in QEMU
And I might have misread where your problems actually are...
Because oVirt was born on SAN but tries to be storage agnostic, it creates its
own overlay abstraction, a block layer that is then managed within oVirt even
when you use NFS or GlusterFS underneath.
"The ISO domain" has actually been
Hi Tim,
HA, HCI and failover either require or at least benefit from consistent storage.
The original NFS reduces the risk of inconsistency to single files, Gluster puts
the onus of consistency mostly on the clients, and I guess Ceph is similar.
iSCSI has been described as a bit of the worst of everyth
oVirt isn't exactly a trivial piece of software.
Actually I'd say it's not even a piece of software, as the integration of the
various companies' fully independent products that now make up oVirt never
fully happened.
oVirt is Redhat Linux, Qumranet (KVM+Spice), Ansible (Ansible), GlusterFS (Z
Simon Coter just told me I'm all wrong and that 4.5 still supports HCI as well
as both kernels.
So, please test and prove me wrong!
Hi Simon!
I'd given up on ever finding any real person or back-channel on the Oracle side
of oVirt, so you're saying there is actually such a thing!
I'd [have] been more than happy to feed back all those results I was collecting
in my desperate attempts to maintain a HCI infra with all those sh
HCI was deprecated years ago, but somehow the code survived until oVirt
4.5.5 or so.
Which means it's still present in Oracle's 4.4 derivative, but not in their 4.5
release.
On that base (make sure to switch to the Redhat kernel on all hosts and
the management engine to avoid troub
I've tried to re-deploy oVirt 4.3 on CentOS7 servers because I had managed to
utterly destroy a HCI farm, where most VMs had migrated to Oracle's variant of
RHV 4.4 on Oracle Linux. I guess I grew a bit careless towards its end.
Mostly it was just an academic exercise to see if it could be resurr
I think I may have just messed up my cluster.
I'm running an older 4.4.2.6 cluster on CentOS-8 with 4 nodes and a
self-hosted engine. I wanted to assemble the spare drives on 3 of the 4
nodes into a new gluster volume for extra VM storage.
Unfortunately, I did not look closely enough at one o
In theory, if oVirt supports it, the Oracle variant would do it too... unless
they manage to break it.
And since there is zero information on what they test, that could happen at any
time.
Same for HCI with GlusterFS or VDO. HCI has been removed as "a tested feature",
but if you use the Cockpit
> Thomas, your e-mail created too much food for thought... as usual I would
> say, remembering the past ;-)
> I try to reply to some of them inline below, putting my own personal
> considerations on the table
>
Hi Gianluca, nice to meet you again!
> On Thu, Dec 21, 2023 at 11:4
Redhat's decision to shut down RHV caught Oracle pretty unprepared, I'd guess;
they had just shut down their own vSphere clone in favor of a RHV clone a
couple of years ago.
Oracle is even less vocal about their "Oracle Virtualization" strategy, they
don't even seem to have a proper naming conve
Oracle VM is based on RHV 4.4, which has been declared end of life.
I'm afraid the chances of Oracle taking over oVirt and doing releases on EL9++
are slim.
Is a change in /etc/pki/vdsm/cert/cacert.pem on the nodes going to
disrupt the communications between nodes and the engine?
The procedure I followed blew away all of /etc/pki/vdsm on each node. I
saved the old one.
Jason
On 8/4/23 14:38, Jason P. Thomas wrote:
I restarted vdsmd and
On 4. 08. 23 20:12, Jason P. Thomas wrote:
I updated the VDSM certs on the hosts and the apache cert on the
engine. I'm guessing something is wrong with however the engine
interacts with vdsm, I just don't know exactly what to do about it.
Jason
On 8/4/23 14:00, Derek Atkin
I believe oVirt draws the line at Nehalem, which contained important
improvements to VM performance like extended page tables. Your Core 2 based
Xeon is below that line and you'd have to change the code to make it work.
Ultimately oVirt is just using KVM, so if KVM works, oVirt can be made to wo
I restarted vdsmd and libvirtd after the cert update on each host.
Jason
On 8/4/23 14:34, Derek Atkins wrote:
Did you restart vdsm after updating the certs?
-derek
On Fri, August 4, 2023 2:12 pm, Jason P. Thomas wrote:
I updated the VDSM certs on the hosts and the apache cert on the
engine
dated.. Or possibly even the
Engine CA Cert.
-derek
On Fri, August 4, 2023 1:45 pm, Jason P. Thomas wrote:
Konstantin,
Right after I sent the email I got the engine running. The
libvirt-spice certs had incorrect ownership. It still is not connecting
to anything. Error in Events on the Eng
Konstantin,
Right after I sent the email I got the engine running. The
libvirt-spice certs had incorrect ownership. It still is not connecting
to anything. Error in Events on the Engine is now: "VDSM
command Get Host Capabilities failed: General SSLEngine
problem"
So status right now is,
and I'm afraid I'm one power outage
away from complete disaster. I need to keep the old location up and
functioning for another 4-6 months, so any insights would be greatly
appreciated.
Sincerely,
Jason P. Thomas
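For anyone else hitting the "General SSLEngine problem": a quick sanity check, assuming the stock certificate layout, is to verify that hosts and engine still share a CA:

  # on each host: does the vdsm cert chain to the CA vdsm presents?
  openssl verify -CAfile /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/certs/vdsmcert.pem

  # compare CA fingerprints between a host and the engine:
  openssl x509 -noout -fingerprint -in /etc/pki/vdsm/certs/cacert.pem
  openssl x509 -noout -fingerprint -in /etc/pki/ovirt-engine/ca.pem    # on the engine

If the fingerprints differ, the hosts and the engine no longer trust the same CA, and re-enrolling the host certificates from the engine is probably the cleanest way out.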
There is little chance you'll get much response here, because it's probably not
considered an oVirt issue.
It's somewhere between your BIOS, the host kernel and KVM and I'd start by
breaking it down to passing each GPU separately.
From the PCI-ID it seems to be V100 SXM2 variants that would req
In my experience OVA exports and imports saw very little QA, even within oVirt
itself, right up to OVA exports full of zeros on the last 4.3 release (in
preparation for a migration to 4.4).
The OVA format also shows very little practical interoperability, I've tried
and failed in pretty much e
I have seen this type of behavior when building a HCI cluster on Atoms.
The problem is that at this point the machine that is generated for the
management engine has a machine type that is above what is actually supported
in hardware.
Since it's not the first VM that is run during the setup pro
Live migration across major releases sounds like the sort of feature everybody
would just love to have, but oVirt supports it about as little as it supports
operating clusters with mixed-release nodes.
AFAIK HCI upgrades from 4.3 to 4.4 were never even described and definitely
didn't involve live VMs.
I export and manually assign both networks and successfully activate the hosts.
I'm not sure if issues will arise during future oVirt host updates,
however everything is working fine at this point.
On Mon, Sep 5, 2022 at 7:18 AM Thomas Simmons wrote:
> Hello,
> I am trying to deploy the l
Hello,
I am trying to deploy the latest oVirt (4.5.2), on a fully patched Rocky
8.6 system and am having an issue where "ovirt-hosted-engine-setup" is
failing when it tries to create the ovirtmgmt network with the error
"error: Must be number, not str"}]". When this happens, the engine setup
pause
This can also happen with a misconfigured logrotate config. If a
process is writing to a large log file, and logrotate comes along and
removes it, then the process still has an open filehandle to the large
file even though you can't see it. The space won't get removed until
the process closes
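A rough sketch of how to confirm and reclaim the space (the PID and fd number are illustrative):

  # list open files that have already been deleted:
  lsof +L1

  # or inspect a suspect process directly; deleted targets are marked:
  ls -l /proc/12345/fd | grep deleted

  # truncating through the proc handle frees the space without a restart:
  : > /proc/12345/fd/4

The longer-term fix is usually the 'copytruncate' directive in the logrotate config, so the process keeps writing to the same filehandle.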
Hi Roberto
We also used the ElRepo drivers in production - most of our e-learning
servers including BBB are on oVirt Gen8 hosts. No problems whatsoever
The problem is that I could not find drivers for the kernel version of
the 4.4.10 release.
Thomas
On Sat, 16 Apr 2022, 19:58 Roberto
Hi all
It looks like Gen9 is also not supported in RHEL8. It's a shame, from our
side it looks like a deal breaker since we are not ready to part with our
old but reliable servers.
Maybe we will try Proxmox.
Thank you all for your help
BR
Thomas
On Fri, 15 Apr 2022, 00:04 Strahil
ProLiant BL460c Gen8
BR
Thomas
On Thu, Apr 14, 2022 at 5:38 PM Strahil Nikolov
wrote:
> What gen is your hardware ?
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Apr 14, 2022 at 9:00, Thomas Kamalakis
> wrote:
you can skip all this kernel business.
BR
Thomas
On Wed, Apr 13, 2022 at 10:35 PM Darrell Budic
wrote:
> I hit this too, RH appears to have limited the supported PCI ids in their
> be2net and no longer includes the Emulex OneConnect in the list of
> supported IDs. I switched
> On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg wrote:
> k8s does not dictate anything regarding the workload. There is just a
> scheduler which can or can not schedule your workload to nodes.
>
One of these days I'll have to dig deep and see what it does.
"Scheduling" c
> On 21/02/2022 at 17:15, Klaas Demter wrote:
> Thank you, it is ok now but... we
> are facing the first side effects of
> an upstream distribution that continuously ships newer packages and
> finally breaks dependencies (at least repos) in a stable oVirt release.
which is the effect I wa
This is very cryptic: care to expand a little?
oVirt supports live migration--of VMs, meaning the (smaller) RAM contents--and
tries to avoid (larger) storage migration.
The speed for VM migration has the network as an upper bound, not sure how
intelligently unused (ballooned?) RAM is excluded o
I'm glad you made it work!
My main lesson from oVirt from the last two years is: It's not a turnkey
solution.
Unless you are willing to dive deep and understand how it works (not so easy,
because there are few up-to-date materials to explain the concepts) *AND* spend
a significant amount of tim
sorry, a typo there: s/have both ends move/have both ends BOOT the Clonezilla
ISO...
> So as title states I am moving VMs from an old system of ours with a lot of
> issues to a new
> 4.4 HC Gluster envr although it seems I am running into what I have learnt is
> a 4.3 bug of
> some sort with exporting OVAs.
The latest release of 4.3 still contained a bug, essentially a race condi
> On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi wrote:
>
>
> Just to clarify the state of things a little: It is not only technically
> there. KubeVirt supports pci passthrough, GPU passthrough and
> SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> can compete wi
I think Patrick already gave quite sound advice.
I'd only want to add that you should strictly separate dealing with Gluster
and oVirt: the integration isn't strong and oVirt just uses Gluster and won't
try to fix it intelligently.
Changing hostnames on an existing Gluster is "not supported" I
That's exactly the direction I originally understood oVirt would go, with the
ability to run VMs and container side-by-side on the bare metal or nested with
containers inside VMs for stronger resource or security isolation and network
virtualization. To me it sounded especially attractive with a
> The impression I've got from this mailing list is they are
intentional design decisions to enforce "correctness" of the cluster.
My understanding of cluster (ever since the VAX) is that it's a fault-tolerance
mechanism and that was originally one of the major selling points of these
hypervisor
> On Tue, Feb 15, 2022 at 8:50 PM Thomas Hoberg
> For quite some time, ovirt-system-tests did test also HCI, routinely.
> Admittedly, this flow never had the breadth of the "plain" (separate
> storage) flows.
I've known virtualization from the days of the VM/370.
Am I pessimistic about the future of oVirt? Quite honestly, yes.
Do I want it to fail? Absolutely not! In fact I wanted it to be a viable and
reliable product and live up to its motto "designed to manage your entire
enterprise infrastructure".
It turned out to be very mixed: It has bugs, I con
Comments & motivational stuff were moved to the end...
Source/license:
Xen the hypervisor has moved to the Linux Foundation. Perpetual open source,
free to use.
Xcp-ng is a distribution of Xen, produced by a small French company, using
(currently) a Linux 4.19 LTS kernel and an EL7 fr
> Wait a minute.
>
> Use of GlusterFS as a storage backend is now deprecated and will be
> removed in a future update?
>
> What are those whose deployments have GlusterFS as their storage
> backend supposed to use as a replacement?
>
They are to fully understand the opportunities and risks that
> On Mon, Feb 7, 2022 at 3:04 PM Sandro Bonazzola wrote:
>
>
> The oVirt storage team never worked on HCI and we don't plan to work on
> it in the future. HCI was designed and maintained by Gluster folks. Our
> contribution for HCI was adding 4k support, enabling usage of VDO.
>
> Improving on
There I always pictured you two throwing paper balls at each other across the
office or going for a coffee together...
In the past that difference wouldn't have mattered, I guess.
But with upstream vs downstream your disagreement opens a chasm oVirt can ill
afford.
Sandro, I am ever so glad you're fighting on, take heart!
Yes, please write a blog post on how oVirt could develop without a commercial
downstream product that pays your salaries.
Ideally you'd add a perspective for current HCI users, many of which chose this
approach, because a fault-tolera
Alas, Ceph seems to take up an entire brain and mine regularly overflows just
looking at their home page.
I just hit across the fact that XOSAN (the "native" HCI solution for XCP-ng) is
in fact LinStor...
That's what's behind the €6000/year support fee, but there is a community beta
that I'll try for now.
> Oh i have spent years looking.
>
> ProxMox is probably the closest option, but has no multi-clustering
> support. The clusters are more or less isolated from each other, and
> would need another layer if you needed the ability to migrate between
> them.
Also been looking at ProxMox for ages.
> I wonder if Oracle would not be interested in keeping the ovirt. It will
> really be too bad that ovirt is discontinued.
>
> https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-man...
>
>
> Em sáb., 5 de fev. de 2022 09:43, Thomas Hoberg escreveu:
Xen came before KVM, but ultimately Redhat played a heavy hand to swing much of
the market; with Citrix behind it, Xen managed to survive (so far).
XCP-ng is a recent open source spin-off, which attempts to gather a larger
community.
Their XOSAN storage aims to deliver a HCI solution somewhat like Gl
There is unfortunately no formal announcement on the fate of oVirt, but with
RHGS and RHV having a known end-of-life, oVirt may well shut down in Q2.
So it's time to hunt for an alternative for those of us who came to oVirt
because they had already rejected vSAN or Nutanix.
Let's post what we fi
Please have a look here:
https://access.redhat.com/support/policy/updates/rhev/
Without a commercial product to pay the vast majority of the developers, there
is just no chance oVirt can survive (unless you're ready to take over). RHV 4.4
full support ends this August and that very likely means
With Gluster gone, you could still use SAN and NFS storage, just like before
they tried to compete with Nutanix and vSphere.
Can you imagine IBM sponsoring oVirt, which doesn't make any money without RHV,
which evidently isn't profitable enough?
Most likely oVirt will lead RHV, in this case to
I just read this message: https://bugzilla.redhat.com/show_bug.cgi?id=2016359
I am shocked but not surprised. And very, very sad.
But I believe this decision needs to be communicated more prominently, as
people should not get aboard a project already axed.
Actually the inability to mix CPU vendors is increasingly becoming an issue,
and probably not just for me.
Of course this isn't an oVirt topic, not even a KVM-only topic, but reaches
deep into the OS and even applications.
I guess Intel rather likes adding extensions and proprietary registers/f
It was this near endless range of possibilities via permutation of the parts
that originally attracted me to oVirt.
Being clearly a member of the original Lego generation I imagined how you could
simply add blocks of this and that to rebuild into something new and fantastic...,
limitless gluster scal
> On Tue, Feb 1, 2022 at 7:55 PM Richard W.M. Jones wrote:
>
> Would you like to file a doc bug about this?
>
> oVirt on RHEL is not such a common combination..
Well IBM seems bent on changing that (see the developer license post below)
>
> In CI we only test on Centos Stream (8, hopefully soon
> you have 16 developer self support subscriptions from RH, those are more than
> enough to
> use with ovirt as a cluster/s.
I'd consider that an off-topic post.
And whilst we are off-topic, one of the main attractions of using TrueCentOS
(the downstream Community ENterprise Operating System) e
> Hi Emilio,
>
> Yes, looks like the patch that should fix this issue is already here:
> https://github.com/oVirt/ovirt-release/pull/93 , but indeed it still hasn't
> been reviewed and merged yet.
>
> I hope that we'll have a fixed version very soon, but meanwhile you can try
> to simply apply th
In the recent days, I've been trying to validate the transition from CentOS 8
to Alma, Rocky, Oracle and perhaps soon Liberty Linux for existing HCI clusters.
I am using nested virtualization on a VMware workstation host, because I
understand snapshotting and linked clones much better on VMware,
Unfortunately I have no answer to your problem.
But I'd like to know: where does that leave you?
Are your servers still running with normal operational tasks performing, are
you just not able to handle migrations or restarts, or is your environment down
until this gets fixed?
Or were you able to
I'm running ovirt 4.4.2 on CentOS 8.2. My ovirt nodes have two network
addresses, ovirtmgmt and a second used for normal routed traffic to the
cluster and WAN.
After the ovirt nodes were set up, I found that I needed to add an extra
static route to the cluster interface to allow the hosts to s
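In case it helps the archive: a persistent extra route can be added through NetworkManager, roughly like this (connection name and addresses are illustrative). Be aware that oVirt/VDSM manages host networking and may overwrite changes made behind its back:

  nmcli connection modify "cluster-nic" +ipv4.routes "10.0.50.0/24 10.0.40.1"
  nmcli connection up "cluster-nic"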
Hello,
I exported all of the VMs on my 4.3.10 cluster to OVAs and while a few have
imported into my new 4.4.9 cluster just fine, most are failing to import with
the error below being logged in my engine.log.
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on table
"vm_stat
Hello,
I am running into the same issue while attempting a new install of oVirt 4.4 to
replace my hyperconverged 4.3 cluster. I am running into the issue on one of
the initial steps (yum/dnf install cockpit-ovirt-dashboard vdsm-gluster
ovirt-host). After some digging, I found that the version of
Actually quite a few of my 3 node HCI deployments wound up with only the first
host showing up in oVirt: Neither the hosts nor the gluster nodes were visible
for nodes #2 and #3.
Now that could be because I am too impatient and self-discovery will eventually
add them or it could be because I am
Hi Strahil, I am not as confident as you are that this is actually what the
single-node is "designed" for. As a matter of fact, any "design purpose
statement" for the single-node setup seems missing.
The even more glaring omission is any official guide on how to increase HCI
from 1 to 9 in ste
Ubuntu support: I feel ready to bet a case of beer that that won't happen.
oVirt lives in a niche, which doesn't have a lot of growth left.
It's really designed to run VMs on premise, but once you're fully VM and
containers, cloud seems even more attractive and then why bother with oVirt
(whic
You would do well to mirror everything that oVirt is using, especially if you
want to install/rebuild while remaining offline.
The 1.1 GB file you mention is the oVirt appliance initial machine image, which
unfortunately seems to get explicitly deleted from time to time, most likely
the officia
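A minimal mirroring sketch using dnf's reposync plugin; the repo id and paths are illustrative, so check 'dnf repolist' for the real ones:

  # mirror an enabled oVirt repo including its metadata:
  dnf reposync --repoid=ovirt-4.5 --download-metadata -p /srv/mirror

  # fetch the appliance package explicitly, so it survives upstream cleanups:
  dnf download ovirt-engine-appliance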
For me this is one of the scenarios where I'd want to use OVA export and import.
Unfortunately a full bidirectional set of tests between oVirt, VMware, Xen
Server or VirtualBox isn't within oVirt's release pipeline, so very little
seems to work, not even within oVirt instances.
I think I did ge
You're welcome!
My machine learning team members that I am maintaining oVirt for tend to load
training data in large sequential batches, which means bandwidth is nice to
have. While I give them local SSD storage on the compute nodes, I also give
them lots of HDD/VDO based gluster file space, wh
In the two years that I have been using oVirt, I've been yearning for some nice
architecture primer myself, but I have not been able to find a nice "textbook
style" architecture document.
And it does not help that some of the more in-depth information on the oVirt
site doesn't seem navigable
Honestly, this sounds like a $1000 advice!
Thanks for sharing!
You found the issue!
VirtIOSCSI can only do its magic, when it's actually used. And once the boot
disk was running using AHCI emulation, it's a little hard to make it
"re-attach" to SCSI.
I am pretty sure it could be done, like you could make Windows disks switch
from IDE to SATA/AHCI with a b
You gave some different details in your other post, but here you mention use of
GPU pass through.
Any pass through will lose you the live migration ability, but unfortunately
with GPUs, that's just how it is these days: while those could in theory be
moved when the GPUs were identical (because
If you manage to export the disk image via the GUI, the result should be a
qcow2 format file, which you can mount/attach to anything Linux (well, if the
VM was Linux... you didn't say)
But it's perhaps easier to simply try to attach the disk of the failed VM as a
secondary to a live VM to recove
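If you do go the export route, the image can be inspected on any Linux box via qemu-nbd, roughly like this (paths illustrative):

  modprobe nbd max_part=8
  qemu-nbd --connect=/dev/nbd0 /path/to/exported-disk.qcow2
  fdisk -l /dev/nbd0                # find the partition layout
  mount -o ro /dev/nbd0p1 /mnt      # read-only is safer for recovery
  umount /mnt && qemu-nbd --disconnect /dev/nbd0    # when done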
First off, I have very little hope you'll be able to recover your data working
at gluster level...
And then there is a lot of information missing between the lines: I guess you
are using a 3 node HCI setup and were adding new disks (/dev/sdb) on all three
nodes and trying to move the glusterfs
>
> The caveat with local storage is that I can only use the remaining free
> space in /var/ for disk images. The result is the 1TB SSD has around
> 700GB remaining free space.
>
> So I was wondering about simply passing through the nvme ssd (PCI) to the
> guest, so the guest can utilise the fil
This looks to me like something I've been stumbling across several times...
When trying to redo a failed partial installation of HCI, I often stumbled
across volume setups not working, even if I had cleared "everything" via the
'cleanup partial install' button (I don't recall literally what it is
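What tended to work for me was a manual scrub on every node before retrying; the device and volume group names below follow what the HCI wizard typically creates, so verify them on your own nodes first:

  vgs && lvs                     # look for leftover gluster_vg_* groups
  vgremove -y gluster_vg_sdb     # as created by the previous attempt
  wipefs -a /dev/sdb             # clear LVM/filesystem signatures
  sed -i '/gluster_bricks/d' /etc/fstab    # drop stale mount entries
  rm -rf /gluster_bricks/*       # remove stale brick directories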
It's better when you post distinct problems in distinct posts.
I'll answer on the CPU aspect, which may not be related to the networking topic
at all.
Sounds like you're adding Haswell parts to a farm that was built on Skylakes.
In order for VMs to remain mobile across hosts, oVirt needs to est
Do you think it would add significant value to your use of oVirt if
- single node HCI could easily promote to 3-node HCI?
- single increments of HCI nodes worked with "sensible solution of quota
issues"?
- extra HCI nodes (say beyond 6) could easily transition into erasure coding
for good quota m
just go on as they do now, I consider a death knell to oVirt.
Kind regards,
Thomas
and you expect newcomers to find that significant bit of information within the
reference that you quote as they try to evaluate if oVirt is the right tool for
the job?
I only found out once I tried to add dispersed volumes to an existing 3 node
HCI and dug through the log files.
Of course, I
ch locations, that act as compute-only nodes
initially (to avoid split quotas), but can be promoted to replace a storage
node that failed without hands-on intervention.
oVirt HCI is as close as it gets to LEGO computers, but right now it's doing
LEGO
Thank you Gianluca, for supporting my claim: it's patchwork and not "a solution
designed for the entire enterprise".
Instead it's more of "a set of assets where two major combinations from a
myriad of potential permutations have received a bit of testing and might be
useful somewhere in your en
>
> You're welcome to help with oVirt project design and discuss with the
> community the parts that you think should benefit from a re-design.
I consider these pesky little comments part of the discussion, even if I know
they are not the best style.
But how much is there to discuss, if Redhat
Sigh, please ignore my blabbering about PCI vs PCIe, it seems that the VirtIO
adapters are all PCI, not PCIe, independent of the chipset chosen...
In any case I posted the KVM xml configs generated via e-mail to the list and
they should arrive here shortly.
I am attaching both working configs here.
From: Nur Imam Febrianto
Sent: Tuesday, 20 April 2021 14:14
To: Thomas Hoberg ; users@ovirt.org
Subject: [ovirt-users] Re: FreeBSD 13 and virtio
Seems strange. I want to use q35, but whenever I try even to start the
installation (vm disk
I tried again with a 440FX chipset and it still worked fine with VirtIO-SCSI
and the virtual NIC.
I also discovered the other reason I prefer VirtIO-SCSI, which is support for
discard, always appreciated by SSDs.
It would seem that the virtio family of storage and network adapters support
both
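You can confirm discard actually reaches the guest disk, roughly like this from inside the guest (device name illustrative):

  # non-zero DISC-GRAN/DISC-MAX columns mean discard is supported:
  lsblk --discard /dev/sda

  # trim the mounted filesystem and report how much was released:
  fstrim -v /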
I have used these tools to get rid of snapshots that wouldn't go away any other
way:
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
q35 with BIOS as that is the cluster default with >4.3.
Running the dmesg messages through my mind as I remember them, the virtio
hardware may be all PCIe based, which would explain why this won't work on a
virtual 440FX system, because those didn't have PCIe support AFAIK.
Any special reason w
I'd say very good luck, concentration and coffee...
Would you mind reporting back how it went?
I'd only hazard that the pass-through virtualization settings have zero effect
on anything network, unless you're actually running a nested VM.
SR-IOV would be an entirely different issue if that is actually used and not
just enabled.
This is where a design philosophy chapter in the documentation would really
help, especially since its brilliance would make for a very nice read.
The self hosted engine (SHE) is in fact extremely highly available, because it
always leaves behind a fully working 'testament' on what needs to run
As long as CentOS was downstream of RHEL, it was a base so solid it might have
been better than the oVirt node image, even if that was theoretically going
through some full stack QA testing.
But with CentOS [Up]Stream you get beta quality for the base and then the
various acquired parts that ma