Hi Roberto
We also used the ElRepo drivers in production - most of our e-learning
servers, including BBB, are on oVirt Gen8 hosts. No problems whatsoever.
The problem is that I could not find drivers for the kernel version of
the 4.4.10 release.
Thomas
On Sat, 16 Apr 2022, 19:58 Roberto
Hi all
It looks like Gen9 is also not supported in RHEL 8. It's a shame; from our
side it looks like a deal breaker, since we are not ready to part with our
old but reliable servers.
Maybe we will try Proxmox.
Thank you all for your help
BR
Thomas
On Fri, 15 Apr 2022, 00:04 Strahil
ProLiant BL460c Gen8
BR
Thomas
On Thu, Apr 14, 2022 at 5:38 PM Strahil Nikolov
wrote:
> What gen is your hardware?
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Apr 14, 2022 at 9:00, Thomas Kamalakis
> wrote:
it if you can skip all this kernel business.
BR
Thomas
On Wed, Apr 13, 2022 at 10:35 PM Darrell Budic
wrote:
> I hit this too, RH appears to have limited the supported PCI IDs in their
> be2net driver and no longer includes the Emulex OneConnect in the list of
> supported IDs. I switched
> On Tue, Feb 22, 2022 at 1:25 PM Thomas Hoberg
> k8s does not dictate anything regarding the workload. There is just a
> scheduler which can or can not schedule your workload to nodes.
>
One of these days I'll have to dig deep and see what it does.
"Scheduling" can en
> On 21/02/2022 at 17:15, Klaas Demter wrote:
> Thank you, it is OK now but... we
> are facing the first side effects of
> an upstream distribution that continuously ships newer packages and
> finally breaks dependencies (at least repos) in a stable oVirt release.
which is the effect I
This is very cryptic: care to expand a little?
oVirt supports live migration--of VMs, meaning the (smaller) RAM contents--and
tries to avoid (larger) storage migration.
The speed of a VM migration has the network as an upper bound; I'm not sure how
intelligently unused (ballooned?) RAM is excluded
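For what it's worth, if you want to trigger a live migration programmatically rather than from the GUI, a minimal sketch with the Python SDK (ovirtsdk4) could look like the following; the engine URL, credentials and VM name are placeholders and I haven't run this exact snippet.

    import ovirtsdk4 as sdk

    # Connect to the engine API (URL and credentials are placeholders).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,  # or point ca_file at the engine CA certificate
    )

    vms_service = connection.system_service().vms_service()

    # Look up the VM by name and ask the engine to live-migrate it;
    # without a host argument the scheduler picks the target host.
    vm = vms_service.list(search='name=myvm')[0]
    vms_service.vm_service(vm.id).migrate()

    connection.close()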
I'm glad you made it work!
My main lesson from oVirt from the last two years is: It's not a turnkey
solution.
Unless you are willing to dive deep and understand how it works (not so easy,
because there are few up-to-date materials to explain the concepts) *AND* spend
a significant amount of
sorry, a typo there: s/have both ends move/have both ends BOOT the Clonezilla
ISO...
> So as the title states, I am moving VMs from an old system of ours with a lot of
> issues to a new
> 4.4 HC Gluster environment, although it seems I am running into what I have learnt is
> a 4.3 bug of
> some sort with exporting OVAs.
The latest release of 4.3 still contained a bug, essentially a race
> On Tue, Feb 22, 2022 at 9:48 AM Simone Tiraboschi wrote:
>
>
> Just to clarify the state of things a little: It is not only technically
> there. KubeVirt supports PCI passthrough, GPU passthrough and
> SRIOV (including live-migration for SRIOV). I can't say if the OpenShift UI
> can compete
I think Patrick already gave quite sound advice.
I'd only add that you should strictly separate dealing with Gluster
and oVirt: the integration isn't strong and oVirt just uses Gluster and won't
try to fix it intelligently.
Changing hostnames on an existing Gluster is "not supported"
That's exactly the direction I originally understood oVirt would go, with the
ability to run VMs and containers side by side on bare metal or nested with
containers inside VMs for stronger resource or security isolation and network
virtualization. To me it sounded especially attractive with
> The impression I've got from this mailing list is that they are
intentional design decisions to enforce "correctness" of the cluster.
My understanding of a cluster (ever since the VAX) is that it's a fault-tolerance
mechanism, and that was originally one of the major selling points of these
> On Tue, Feb 15, 2022 at 8:50 PM Thomas Hoberg
> For quite some time, ovirt-system-tests also routinely tested HCI.
> Admittedly, this flow never had the breadth of the "plain" (separate
> storage) flows.
I've known virtualization since the days of the VM/370.
Am I pessimistic about the future of oVirt? Quite honestly, yes.
Do I want it to fail? Absolutely not! In fact I wanted it to be a viable and
reliable product and live up to its motto "designed to manage your entire
enterprise infrastructure".
It turned out to be very mixed: It has bugs, I
Comments & motivational stuff were moved to the end...
Source/license:
Xen, the hypervisor, has moved to the Linux Foundation. Perpetual open source,
free to use.
XCP-ng is a distribution of Xen, produced by a small French company, based on
Xen using (currently) a Linux 4.19 LTS kernel and an EL7
> Wait a minute.
>
> Use of GlusterFS as a storage backend is now deprecated and will be
> removed in a future update?
>
> What are those whose deployments have GlusterFS as their storage
> backend supposed to use as a replacement?
>
They are to fully understand the opportunities and risks
> On Mon, Feb 7, 2022 at 3:04 PM Sandro Bonazzola wrote:
>
>
> The oVirt storage team never worked on HCI and we don't plan to work on
> it in the future. HCI was designed and maintained by Gluster folks. Our
> contribution to HCI was adding 4K support and enabling usage of VDO.
>
> Improving on
There I always pictured you two throwing paper balls at each other across the
office or going for a coffee together...
In the past that difference wouldn't have mattered, I guess.
But with upstream vs downstream your disagreement opens a chasm oVirt can ill
afford.
Sandro, I am ever so glad you're fighting on, buon coraggio!
Yes, please write a blog post on how oVirt could develop without a commercial
downstream product that pays your salaries.
Ideally you'd add a perspective for current HCI users, many of whom chose this
approach because a
Alas, Ceph seems to take up an entire brain and mine regularly overflows just
looking at their home page.
I just came across the fact that XOSAN (the "native" HCI solution for XCP-ng) is
in fact LinStor...
That's what's behind the €6000/year support fee, but there is a community beta
that I'll try for now.
> Oh i have spent years looking.
>
> ProxMox is probably the closest option, but has no multi-clustering
> support. The clusters are more or less isolated from each other, and
> would need another layer if you needed the ability to migrate between
> them.
Also been looking at ProxMox for ages.
> I wonder if Oracle would not be interested in keeping oVirt going. It would
> really be too bad if oVirt were discontinued.
>
> https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-man...
>
>
> On Sat, 5 Feb 2022, 09:43, Thomas Hoberg wrote
Xen came before KVM, but ultimately Red Hat played a heavy hand to swing much of
the market; with Citrix, Xen managed to survive (so far).
XCP-ng is a recent open source spin-off, which attempts to gather a larger
community.
Their XOSAN storage aims to deliver an HCI solution somewhat like
There is unfortunately no formal announcement on the fate of oVirt, but with
RHGS and RHV having a known end-of-life, oVirt may well shut down in Q2.
So it's time to hunt for an alternative for those of us who came to oVirt
because we had already rejected vSAN or Nutanix.
Let's post what we
Please have a look here:
https://access.redhat.com/support/policy/updates/rhev/
Without a commercial product to pay the vast majority of the developers, there
is just no chance oVirt can survive (unless you're ready to take over). RHV 4.4
full support ends this August and that very likely
With Gluster gone, you could still use SAN and NFS storage, just like before
they tried to compete with Nutanix and vSphere.
Can you imagine IBM sponsoring oVirt, which doesn't make any money without RHV,
which evidently isn't profitable enough?
Most likely oVirt will lead RHV, in this case to
I just read this message: https://bugzilla.redhat.com/show_bug.cgi?id=2016359
I am shocked but not surprised. And very, very sad.
But I believe this decision needs to be communicated more prominently, as
people should not get aboard a project already axed.
Actually the inability to mix CPU vendors is increasingly becoming an issue,
and probably not just for me.
Of course this isn't an oVirt topic, not even a KVM-only topic, but reaches
deep into the OS and even applications.
I guess Intel rather likes adding extensions and proprietary
It was this near endless range of possibilities via permutation of the parts
that originally attracted me to oVirt.
Being clearly a member of the original Lego generation, I imagined how you could
simply add blocks of this and that to rebuild into something new and fantastic...,
limitless gluster
> On Tue, Feb 1, 2022 at 7:55 PM Richard W.M. Jones wrote:
>
> Would you like to file a doc bug about this?
>
> oVirt on RHEL is not such a common combination..
Well, IBM seems bent on changing that (see the developer license post below)
>
> In CI we only test on CentOS Stream (8, hopefully
> you have 16 developer self-support subscriptions from RH; those are more than
> enough to
> use with oVirt as a cluster/s.
I'd consider that an off-topic post.
And whilst we are off-topic, one of the main attractions of using TrueCentOS
(the downstream Community ENterprise Operating System)
> Hi Emilio,
>
> Yes, looks like the patch that should fix this issue is already here:
> https://github.com/oVirt/ovirt-release/pull/93 , but indeed it still hasn't
> been reviewed and merged yet.
>
> I hope that we'll have a fixed version very soon, but meanwhile you can try
> to simply apply
In recent days, I've been trying to validate the transition from CentOS 8
to Alma, Rocky, Oracle and perhaps soon Liberty Linux for existing HCI clusters.
I am using nested virtualization on a VMware Workstation host, because I
understand snapshotting and linked clones much better on VMware,
Unfortunately I have no answer to your problem.
But I'd like to know: where does that leave you?
Are your servers still running and performing normal operational tasks, are
you just not able to handle migrations or restarts, or is your environment down
until this gets fixed?
Or were you able to
I'm running oVirt 4.4.2 on CentOS 8.2. My oVirt nodes have two network
addresses, ovirtmgmt and a second one used for normal routed traffic to the
cluster and WAN.
After the ovirt nodes were set up, I found that I needed to add an extra
static route to the cluster interface to allow the hosts to
Hello,
I exported all of the VMs on my 4.3.10 cluster to OVAs and while a few have
imported into my new 4.4.9 cluster just fine, most are failing to import with
the error below being logged in my engine.log.
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on table
Hello,
I am running into the same issue while attempting a new install of oVirt 4.4 to
replace my hyperconverged 4.3 cluster. I am running into the issue on one of
the initial steps (yum/dnf install cockpit-ovirt-dashboard vdsm-gluster
ovirt-host). After some digging, I found that the version
Actually quite a few of my 3 node HCI deployments wound up with only the first
host showing up in oVirt: Neither the hosts nor the gluster nodes were visible
for nodes #2 and #3.
Now that could be because I am too impatient and self-discovery will eventually
add them or it could be because I
Hi Strahil, I am not as confident as you are that this is actually what the
single-node setup is "designed" for. As a matter of fact, any "design purpose
statement" for the single-node setup seems missing.
The even more glaring omission is any official guide on how to increase HCI
from 1 to 9 in
Ubuntu support: I feel ready to bet a case of beer that that won't happen.
oVirt lives in a niche which doesn't have a lot of growth left.
It's really designed to run VMs on premises, but once you're fully VMs and
containers, cloud seems even more attractive, and then why bother with oVirt
You would do well to mirror everything that oVirt is using, especially if you
want to install/rebuild while remaining offline.
The 1.1 GB file you mention is the oVirt appliance initial machine image, which
unfortunately seems to get explicitly deleted from time to time, most likely
the
For me this is one of the scenarios where I'd want to use OVA export and import.
Unfortunately a full bidirectional set of tests between oVirt, VMware, Xen
Server or VirtualBox isn't within oVirt's release pipeline, so very little
seems to work, not even between oVirt instances.
I think I did
You're welcome!
The machine learning team members that I maintain oVirt for tend to load
training data in large sequential batches, which means bandwidth is nice to
have. While I give them local SSD storage on the compute nodes, I also give
them lots of HDD/VDO based Gluster file space,
In the two years that I have been using oVirt, I've been yearning for some nice
architecture primer myself, but I have not been able to find a nice "textbook
style" architecture document.
And it does not help that some of the more in-depth information on the oVirt
site doesn't seem
Honestly, this sounds like a $1000 advice!
Thanks for sharing!
You found the issue!
VirtIO-SCSI can only do its magic when it's actually used. And once the boot
disk was running using AHCI emulation, it's a little hard to make it
"re-attach" to SCSI.
I am pretty sure it could be done, like you could make Windows disks switch
from IDE to SATA/AHCI with a
You gave some different details in your other post, but here you mention use of
GPU passthrough.
Any passthrough will cost you the live migration ability, but unfortunately
with GPUs, that's just how it is these days: while those could in theory be
moved when the GPUs were identical (because
If you manage to export the disk image via the GUI, the result should be a
qcow2 format file, which you can mount/attach to anything Linux (well, if the
VM was Linux... you didn't say).
But it's perhaps easier to simply try to attach the disk of the failed VM as a
secondary to a live VM to
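If you go the 'attach as a secondary disk' route and prefer the API over the GUI, a rough sketch with the Python SDK (ovirtsdk4) might look like this; the rescue VM name and disk alias are placeholders, and this is untested, so treat it as an outline only.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    system_service = connection.system_service()
    vms_service = system_service.vms_service()

    # The live VM that should receive the failed VM's disk (name is a placeholder).
    rescue_vm = vms_service.list(search='name=rescue-vm')[0]

    # The disk of the failed VM, looked up by its alias (also a placeholder).
    failed_disk = system_service.disks_service().list(search='alias=failed-vm_Disk1')[0]

    # Attach it as a secondary, non-bootable disk so it can be inspected
    # from inside the rescue VM.
    attachments_service = vms_service.vm_service(rescue_vm.id).disk_attachments_service()
    attachments_service.add(
        types.DiskAttachment(
            disk=types.Disk(id=failed_disk.id),
            interface=types.DiskInterface.VIRTIO_SCSI,
            bootable=False,
            active=True,
        )
    )

    connection.close()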
First off, I have very little hope you'll be able to recover your data working
at the Gluster level...
And then there is a lot of information missing between the lines: I guess you
are using a 3-node HCI setup and were adding new disks (/dev/sdb) on all three
nodes and trying to move the
>
> The caveat with local storage is that I can only use the remaining free
> space in /var/ for disk images. The result is the 1TB SSD has around
> 700GB remaining free space.
>
> So I was wondering about simply passing through the nvme ssd (PCI) to the
> guest, so the guest can utilise the
This looks to me like something I've been stumbling across several times...
When trying to redo a failed partial installation of HCI, I often stumbled
across volume setups not working, even if I had cleared "everything" via the
'cleanup partial install' button (I don't recall literally what it
It's better when you post distinct problems in distinct posts.
I'll answer on the CPU aspect, which may not be related to the networking topic
at all.
Sounds like you're adding Haswell parts to a farm that was built on Skylakes.
In order for VMs to remain mobile across hosts, oVirt needs to
Do you think it would add significant value to your use of oVirt if
- single node HCI could easily promote to 3-node HCI?
- single increments of HCI nodes worked with "sensible solution of quota
issues"?
- extra HCI nodes (say beyond 6) could easily transition into erasure coding
for good quota
y do now, I consider a death knell to oVirt.
Kind regards,
Thomas
and you expect newcomers to find that significant bit of information within the
reference that you quote as they try to evaluate if oVirt is the right tool for
the job?
I only found out once I tried to add dispersed volumes to an existing 3-node
HCI and dug through the log files.
Of course, I
initially (to avoid split quotas), but can be promoted to replace a storage
node that failed without hands-on intervention.
oVirt HCI is as close as it gets to LEGO computers, but right now it's doing
LEGO with your hands tied behind your back.
Kind regar
Thank you Gianluca, for supporting my claim: it's patchwork and not "a solution
designed for the entire enterprise".
Instead it's more of "a set of assets where two major combinations from a
myriad of potential permutations have received a bit of testing and might be
useful somewhere in your
>
> You're welcome to help with oVirt project design and discuss with the
> community the parts that you think should benefit from a re-design.
I consider these pesky little comments part of the discussion, even if I know
they are not the best style.
But how much is there to discuss, if Red Hat
Sigh, please ignore my blabbering about PCI vs PCIe; it seems that the VirtIO
adapters are all PCI, not PCIe, independent of the chipset chosen...
In any case I posted the KVM XML configs generated via e-mail to the list and
they should arrive here shortly.
I am attaching both working configs here.
From: Nur Imam Febrianto
Sent: Tuesday, 20 April 2021 14:14
To: Thomas Hoberg ; users@ovirt.org
Subject: [ovirt-users] Re: FreeBSD 13 and virtio
Seems strange. I want to use q35, but whenever I try even to start the
installation (vm disk
I tried again with a 440FX chipset and it still worked fine with VirtIO-SCSI
and the virtual NIC.
I also discovered the other reason I prefer VirtIO-SCSI, which is support for
discard, always appreciated by SSDs.
It would seem that the virtio family of storage and network adapters support
I have used these tools to get rid of snapshots that wouldn't go away any other
way:
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
q35 with BIOS as that is the cluster default with >4.3.
Running the dmesg messages through my mind as I remember them, the virtio hardware
may be all PCIe based, which would explain why this won't work on a virtual
440FX system, because those didn't have PCIe support AFAIK.
Any special reason
I'd say very good luck, concentration and coffee...
Would you mind reporting back how it went?
I'd only hazard that the pass-through virtualization settings have zero effect
on anything network, unless you're actually running a nested VM.
SR-IOV would be an entirely different issue if that is actually used and not
just enabled.
This is where a design philosophy chapter in the documentation would really
help, especially since its brilliance would make for a very nice read.
The self-hosted engine (SHE) is in fact extremely highly available, because it
always leaves behind a fully working 'testament' on what needs to run
As long as CentOS was downstream of RHEL, it was a base so solid it might have
been better than the oVirt node image, even if that was theoretically going
through some full stack QA testing.
But with CentOS [Up]Stream you get beta quality for the base and then the
various acquired parts that
The last oVirt 4.3 release contains a bug which will export OVAs with empty
disks. Just do a du -h on the OVA to see if it contains more than the XML
header and tons of zeros.
Hopefully the original VMs are still with you, because you'll need to fix the
Python code that does the export: It's a single line
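As a quick sanity check of an already exported OVA (complementary to du -h), something along these lines can report how much of each disk payload inside the tar is actually zeros; the file name is a placeholder and this only samples the beginning of each disk, so take it as a rough indicator, not proof.

    import tarfile

    OVA = 'myvm.ova'  # placeholder path to the exported OVA

    with tarfile.open(OVA) as tar:
        for member in tar.getmembers():
            if member.name.endswith('.ovf'):
                continue  # skip the XML descriptor, only look at disk payloads
            f = tar.extractfile(member)
            # Sample the first 64 MiB; a sample that is (almost) all zeros
            # is a strong hint that you hit the empty-disk export bug.
            sample = f.read(64 * 1024 * 1024)
            zero_fraction = sample.count(0) / max(len(sample), 1)
            print(f"{member.name}: {member.size} bytes, "
                  f"zero bytes in first sample: {zero_fraction:.1%}")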
It's an effect that also had me puzzled for a long time: To my understanding,
the 'gluster volume' command should only ever show peers that contribute bricks to a
volume, not peers in general.
Now perhaps an exception needs to be made for hosts that have been enabled to
run the management engine, as
Sharing disks typically requires that you coordinate their use above
the disk.
So did you consider sharing a file system instead?
Members in my team have been using NetApp for their entire career and are quite
used to sharing files even for databases.
And since Gluster HCI basically
ovirt-hosted-engine-cleanup will only operate on the host you run it on.
In a cluster that might have side-effects, but as a rule it will try to undo
all configuration settings that had a Linux host become an HCI member or just a
host under oVirt management.
While the GUI will try to do the
That very much describes my own situation two years ago..., just a slight time
and geographic offset, as my home is near Frankfurt and my work is in Lyon. I
had been doing 70:1 consolidation via virtualization based on OpenVZ
(containers, but with an IaaS abstraction) since 2006, because it was
I've just given it a try: it works just fine for me.
But I did notice that I chose virtio-scsi when I created the disk; I don't know
if that makes any difference, but as an old-timer, I still have "SCSI" ingrained
as "better than ATA".
I chose FreeBSD 9.2 x64 as OS type while creating the VM (nothing
Well, that's why I really want a theory of operation here, because removing a
host as a gluster peer might just break something in oVirt... And trying to
fix that may not be trivial either.
It's one of those cases where I'd just really love to have nested
virtualization work better so I can
My understanding is that in an HCI environment the storage nodes should be
rather static, but that the pure compute nodes can be much more dynamic or
opportunistic: actually those should/could even be switched off and restarted
as part of oVirt's resource optimization.
The 'pure compute'
Hi Strahil,
when you said "The Gluster documentation on the topic is quite extensive", I
wasn't quite sure if that was meant to be ironic: you typically are not.
At the moment the only documentation I can see navigating from the
documentation menu on ovirt.org is this:
11.6. Preparing and
Hi Strahil,
I did actually find the matching RHV documentation now.
The reason I didn't before seems to be that this documentation was only added
for RHHI 1.8 or oVirt 4.4 and did not exist for RHHI 1.7 or oVirt 4.3
oVirt may have started as a vSphere 'look-alike', but it graduated to a Nutanix
'clone', at least in terms of marketing.
IMHO that means the 3-node hyperconverged default oVirt setup (2 replicas and 1
arbiter) deserves special love in terms of documenting failure scenarios.
3-node HCI is
I personally consider the fact that you gave up on 4.3/CentOS 7 before CentOS 8
could have even been remotely reliable to run "a free open-source
virtualization solution for your entire enterprise" a rather violent break of
trust.
I understand Red Hat's motivation with Python 2/3 etc., but
I am glad you got it done!
I find that oVirt resembles more an adventure game (with all its huge emotional
rewards once you prevail) than a streamlined machine that just works every
time you push a button.
Those are boring, sure, but really what I am looking for when the mission is to
run
It's important to understand the oVirt design philosophy.
That may be somewhat understated in the documentation, because I am afraid they
copied that from VMware's vSphere, who might have copied it from Nutanix, who
might have copied it from who knows who else... which might explain why they are a
An export domain should work, with the usual constraints that you have to
detach/attach the whole domain and you'd probably want to test with one or a
few pilot VMs first.
There could be issues with 'base' templates etc. for VMs that were created as
new on 4.4: be sure to try every machine type
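If you end up scripting the move, the attach side on the receiving engine is roughly this with the Python SDK (ovirtsdk4); the data center and domain names are placeholders, the domain must already be detached from the old engine, and this sketch is untested.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://new-engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    system_service = connection.system_service()

    # Find the target data center and the (already detached) export domain.
    dc = system_service.data_centers_service().list(search='name=Default')[0]
    sd = system_service.storage_domains_service().list(search='name=export1')[0]

    # Attach the domain to the data center; the engine usually activates it
    # automatically once attached, after which the VMs on it can be imported.
    attached_sds_service = system_service.data_centers_service() \
        .data_center_service(dc.id).storage_domains_service()
    attached_sds_service.add(types.StorageDomain(id=sd.id))

    connection.close()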
Roman, I believe the bug is in
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/pre_checks/validate_memory_size.yml
- name: Set Max memory
  set_fact:
    max_mem: "{{ free_mem.stdout|int + cached_mem.stdout|int - he_reserved_memory_MB + he_avail_memory_grace_MB }}"
If these
Yup, that's a bug in the Ansible code; I've come across it on hosts that had 512 GB
of RAM.
I quite simply deleted the checks from the Ansible code and re-ran the wizard.
I can't read YAML or Python or whatever it is that Ansible uses, but my
impression is that things are 'cast' or converted into
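For illustration only (these names and values are made up, not taken from the playbook): the symptom is consistent with command output staying a string and then being combined with integers, which is exactly what the |int filter in the task above is supposed to prevent.

    # Hypothetical illustration of the casting issue, not the actual Ansible code.
    free_mem_stdout = "515768"      # command output is always a string
    he_reserved_memory_mb = 512     # a plain integer default

    # Without an explicit conversion this raises TypeError in Python
    # (and in Jinja2 it can silently produce a wrong comparison/result):
    try:
        total = free_mem_stdout + he_reserved_memory_mb
    except TypeError as err:
        print("string + int fails:", err)

    # With the equivalent of the |int filter it behaves as intended:
    total = int(free_mem_stdout) + he_reserved_memory_mb
    print("converted correctly:", total)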
> On Wed, Jan 27, 2021 at 9:14 AM
>
> Ok, I think I found it, at least for Nvidia. You can follow what is described for
> RHV:
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/...
>
> In the same manual there are also instructions for vGPU.
>
> There is also the guide for
A geographically distributed cluster is a very expensive random number
generator: any cluster critically depends on the assumption that the
communication between nodes is at least an order of magnitude more reliable
than the node itself. Otherwise you just multiply the chance of failures.
That
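A back-of-the-envelope illustration of that point, with purely made-up numbers: if the inter-site link is less reliable than a node, the combined availability of 'node reachable over the WAN' drops below that of a single standalone node.

    # Toy availability figures, purely illustrative.
    node_availability = 0.999   # one host: roughly 8.8 hours of downtime per year
    wan_availability = 0.99     # a flaky inter-site link: roughly 3.7 days per year

    # Reaching the remote node requires the node *and* the link to be up,
    # so the combined availability is the product of the two:
    combined = node_availability * wan_availability

    for name, a in [("single local node", node_availability),
                    ("remote node over WAN", combined)]:
        print(f"{name}: availability {a:.4f}, "
              f"expected downtime {(1 - a) * 8760:.1f} h/year")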
To your question "Is it worth building a new virtualization platform based on
ovirt?" Sandro answered "currently there's no change in the Red Hat's support
of the oVirt project".
That may technically be true, but it doesn't really answer your question, I
believe.
oVirt is a management layer
The main issue isn't oVirt, but Nvidia's drivers inside a virtual machine: if you
use 'unapproved' GPUs in what Nvidia drivers recognise as a VM, they'll refuse
to load.
RTX series cards should be OK; I've tried K40 and P100 and they are just as
fine with oVirt pass-through.
V100 tests are still
I would agree if CentOS 8 could still be considered a proper base to build on.
But since that is now EOL at the end of this year, oVirt is no longer viable,
because it's beta-on-beta.
I can only hope that IBM will read the writing on the wall before you guys are
out of a job.
Unlocked the snapshot, deleted the VM... done!
That was easier than I thought.
Thanks for your help in any case!
And I should have guessed that this is only how trouble starts =:-O
I immediately removed the locked snapshot image (actually they pretty much
seemed to disappear by themselves, as delete operations might have been pending),
but the VM that was doing the snapshot is still there, even if the
Thanks Shani,
I had just found that myself and even managed to unlock and remove the images.
Somehow the reference to https://ovirt.org/develop/developer-guide/ seems not
to be reachable from the oVirt site navigation, but now I have discovered it
via another post here and found a treasure
On one oVirt 4.3 farm I have three locked images I'd like to clear out.
One is an ISO image that somehow never completed the transfer due to a slow
network. It's occupying little space, except in the GUI, where it sticks out and
irritates. I guess it would just be an update somewhere on the
I am glad you think so and it works for you. But I'd also guess that you put
more than a partial FTE into the project.
I got attracted via the HCI angle they started pushing as a result of Nutanix
creating a bit of a stir. The ability to use discarded production boxes for a
lab with the
Hi Strahil,
OpenVZ is winding down, unfortunately. They haven't gone near CentOS 8 yet and I
don't see that happening either. It's very unfortunate, because I really loved
that project and I always preferred its container abstraction as well as the
resource management tools, because scale-in is
The major issue with that is that oVirt 4.3 is out of maintenance, with Python 2
EOL being a main reason.
CentOS reboots are happening, but will be to little avail when the oVirt team
won't support them.
It's a mess that does lots of damage to this project, but IBM might just have
different
I'd put my money on a fall-through error condition where TSC is simply the last
one with a 'good' error message pointed to. I have clusters with CPUs that are
both 10 years and 10x apart in performance performing migrations between
themselves quite happily (Sandy Bridge dual quads to Skylake 56