I guess this is a little late now... but:
I wanted to do the same, especially because the additional servers (beyond 3)
would lower the relative storage cost overhead when using erasure coding, but
it's 'off the trodden path' and I cannot recommend it, after seeing just how
fragile the whole ho
Hi Christian,
I'd say that the CPUs aren't perfectly uniform in terms of capabilities and
microcode patches.
"ssbd" is a speculative store bypass, as far as I know and if your host doesn't
have the µ-code patches installed but your cluster definition has them (based
typically on the machine use
After re-reading...
The primary host determines the CPU base requirements. But in this case the
base may be newer than what the canned hosted image for the hosted-engine
supports initially (before you update it).
So by deactivating the mitigations temporarily via a boot flag on the host, you
c
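For reference, a minimal sketch of deactivating the mitigations via a boot flag on a CentOS/RHEL host; 'mitigations=off' assumes a kernel new enough to understand the umbrella flag, older kernels may need individual flags like 'spec_store_bypass_disable=off' instead:

    # Add the flag to every installed kernel's command line (reboot required)
    grubby --update-kernel=ALL --args="mitigations=off"
    # After the reboot, confirm the flag is active
    cat /proc/cmdline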
I have a somewhat similar issue because I use J5005 based Atom boxes for oVirt
in the home-lab, but these fail strangely during installation and are just
hair-tearingly slow during installation.
So I move to a Kaby-Lake desktop for the installation and then need to
downgrade all the way to Neha
At least you're not alone: https://bugzilla.redhat.com/show_bug.cgi?id=1745181
Now since that Epyc is useless: Do you want to swap for a J5005?
(sorry... I couldn't stop myself again)
All that code is Python somewhere, so you can find who adds that tag and
suppress it until they fix
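In that spirit, a hedged sketch of how one might find the offending spot on a CentOS 7 host; the vdsm paths are assumptions and depend on how your release packages its Python:

    # Search vdsm's Python sources for whatever adds/requires the flag
    grep -rn "ssbd" /usr/lib/python2.7/site-packages/vdsm/ \
                    /usr/lib64/python2.7/site-packages/vdsm/ 2>/dev/null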
caching with 550GB size (note
that it fails to apply that value beyond the first node).
Has anyone else tried a hyperconverged 3-node with SSD caching with success
recently?
Thanks for your feedback and help so far,
Thomas
Three Gen8 HP360 recalled from retirement with single 1TB TLC SATA SSD for boot
and oVirt /engine and 7x4TB HDD RAID6 for /vmstore and /data, 10Gbit NICs and
network.
All CentOS 7.7 updated daily.
These machines may not be used exclusively for oVirt so I don't want to
re-install the OS, when a
I am having problems installing a 3-node HCI cluster on machines that used to
work fine and on a fresh set of servers, too.
After a series of setbacks on a set of machines with failed installations and
potentially failed clean-ups, I am using a fresh set of servers that had never
run oVirt
Thanks Strahil, for your suggestions.
Actually, I was far beyond the pick-up point you describe, as the Gluster had
all been prepared and was operable, even the local VM was already running and
accessible via the GUI.
But I picked up your hint to try to continue with the scripted variant, and
After spending another couple of hours trying to track down the problem, I have
found that the "lost connection" seems due to KVM shutting down, because it
cannot find the certificates for the Spice and VNC connections in
/etc/pki/vdsm/*, where 'ovirt-hosted-engine-cleanup' deleted them.
So now
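A quick, read-only way to confirm that diagnosis before re-running the setup; the directory layout under /etc/pki/vdsm is from memory and may differ between versions:

    # The Spice/VNC certificates qemu expects live under /etc/pki/vdsm
    ls -lR /etc/pki/vdsm/
    # Empty 'certs'/'keys' directories mean the certificates have to be
    # re-enrolled before VMs with graphical consoles can start again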
The error message "/dev/sdXX has been excluded by a filter" is potentially very
misleading, because it catches all sorts of conditions.
Basically any known storage signature on the storage you may recycle (perhaps
from a previous attempt) will trigger this.
The functionality is more of a feature,
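A hedged sketch for dealing with that; 'wipefs -n' is read-only, 'wipefs -a' is destructive, so triple-check the device name first:

    # List leftover filesystem/RAID/LVM signatures without touching anything
    wipefs -n /dev/sdX
    lsblk -f /dev/sdX
    # Once certain the disk holds nothing you need, erase the old signatures
    # so the installer's filter stops excluding the device (DESTRUCTIVE)
    wipefs -a /dev/sdX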
Yes, I have had the same and posted about it here somewhere: I believe it's an
incompatible Ansible change.
Here is the critical part of the message below:
"The 'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
renamed one no longer returns ansible_facts"
and that change
This is the reference
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/YWAPIHYM7RMJIWFBC5WCAAWG3EQIGTRZ/
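A hedged way to check whether the roles installed on your machine still use the removed module; the search paths are assumptions for a CentOS 7 / oVirt 4.3 host:

    # Find playbooks/roles that still reference the renamed module
    grep -rn "ovirt_host_facts" /usr/share/ovirt-hosted-engine-setup/ \
                                /usr/share/ansible/ 2>/dev/null
    # Until the roles catch up, staying on (or downgrading to) Ansible 2.8
    # is the usual workaround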
Actually oVirt simply makes the node you initially install the HostedEngine on
the baseline, which is why it says in the docs that you should run it from your
"oldest" machine to ensure it doesn't fall over in case an even older host wins
the contest for running the management engine VM after "H
Hi Strahil, first of all, thanks for following up on this...
I think I'll put that list of yours on the wall: it's a key piece of
documentation that I found missing. Perhaps you could reconstruct it from
systemd dependencies, but...
I may not have rebooted... it takes a long time on these olde
Some documentation, especially on older RHEV versions, seems to indicate that
Gluster storage roles and compute server roles in an oVirt cluster are actually
exclusive.
Yet HCI is all about doing both, which is slightly confusing when you try to
overcome HCI issues simply by running the managemen
This seems to be a much bigger generic issue with Ansible 2.9. Here is an
excerpt from the release notes:
"Renaming from _facts to _info
Ansible 2.9 renamed a lot of modules from _facts to
_info, because the modules do not return Ansible facts. Ansible
facts relate to a specific host. For exam
You may just be able to use the username and password already created during
the installation: vdsm@ovirt/shibboleth.
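For anyone trying it: read-only virsh queries need no credentials at all, while full access prompts for the SASL pair mentioned above (whether that default still applies to your version is an assumption):

    # Read-only query, no authentication needed
    virsh -r list --all
    # Full access prompts for SASL credentials; per the post above,
    # vdsm@ovirt / shibboleth are created during installation
    virsh list --all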
Thanks Martin, that actually helps a lot, because I was afraid of the
implications.
So the error must really be elsewhere. I am having another look at the logs
from the HostedEngineLocal and it seems to complain that no Gluster members are
up, not even the initial one.
I also saw no entries in
What got me derailed was the "ERROR" tag and the fact that it was the last
thing to happen on the outside ("waiting for engine to be up"), while the
HostedEngineLocal on the inside was looking for Gluster members it couldn't
find...
I'd basically like to second the "request" or question: I started on a /28
allocation, because that was the only one available at the time. In the mean-time I
managed to get hold of a full class C net, which I'd like to use for the VMs on
a 3-node HCI with a couple of extra compute and gluster stor
I want to run containers and VMs side by side and not necessarily nested. The
main reason for that is GPUs, Voltas mostly, used for CUDA machine learning not
for VDI, which is what most of the VM orchestrators like oVirt or vSphere seem
to focus on. And CUDA drivers are notorious for refusing to
Hi Strahil,
color me surprised, too, especially considering where things are supposed to go
in terms of roadmap.
Yet again, both oVirt and Docker could be excused for thinking that they "own the
hardware" they are running on, because it's a rather natural assumption, even
if there are good reason
The general idea was, that with oVirt I'd get a little more automation and
benefits than with just using VirtualBox as a GUI for KVMs.
Boy did I underestimate the amount of intellectual investment for the first
value return. Turned out quite a bit bigger than vSphere, but much more
intriguing,
For my home-lab I operate a 3 node HCI cluster on 100% passive Atoms, mostly to
run light infrastructure services such as LDAP and NextCloud.
I then add workstations or even laptops as pure compute hosts to the cluster
for bigger but temporary things that might actually run a different OS most
Just like Strahil, I would expect this to work just fine. RAM differences are
actually the smallest concern, unless you run out of it in the mean-time. And
there you may want to be careful and perhaps move VMs around manually with such
a small HCI setup.
oVirt will properly optimize the VMs and
OK ;-)
3 node HCI 2+1 data/arbiter
added 3 compute-only nodes via host install without HE support which add no
storage to the gluster (install still adds them as peers).
With 2 compute-only nodes inactive/down I updated the third compute node (no
contributing bricks) and saw all VMs pausing and
My enthusiasm for CentOS8 is limited.
My enthusiasm for a hard migration even more so.
So how much time do I have before 4.3 becomes inoperable?
Ciao Sandro,
just tried to re-install a CentOS 7 based HCI cluster because it had moved to
another network.
And from what I can tell it fails because Gobinda Das introduced a hot-fix into
'gluster-ansible-infra/roles/backend_setup/tasks/vdo_create.yml' that adds a
'--maxDiscardSize 16M' option
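If you run into the same failure, a hedged stopgap is stripping the option from the installed role until it is fixed upstream; the install path of the role varies between gluster-ansible packages, hence the find first:

    # Locate the task file that adds the option (path varies by package)
    find /usr/share/ansible /etc/ansible -name vdo_create.yml 2>/dev/null
    # Then remove the '--maxDiscardSize 16M' argument in place, keeping a
    # backup (substitute the path the find returned)
    sed -i.bak 's/ --maxDiscardSize 16M//' /path/from/find/vdo_create.yml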
Running a couple of oVirt clusters on left-over hardware in an R&D niche of the
data center. Lots of switches/proxies still at 100Mbit and just checking for
updates via 'yum update' can take a while, even time out two times out of three.
The network between the nodes is 10Gbit though, faster than any o
I'd say you were close!
I tried fiddling with the penalties, but that didn't do anything good.
But once I found that hosted-engine --vm-status displayed the score across the
hosts, I found them to be very low constantly, the 1600 gateway penalty seems a
proper match.
I then reinstalled the clus
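For anyone else chasing this, the check that finally pointed the right way; 3400 is the score a healthy host reports, and the 1600 deduction matches the gateway penalty mentioned above:

    # Show each host's HA score; 3400 = healthy, 3400-1600 = gateway penalty
    hosted-engine --vm-status | grep -iE "hostname|score"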
Not sure I understand your interest for an older repo. I am still regularly
doing oVirt 4.3 installs just by not choosing the 4.4 repo, which is exclusively
CentOS 8 (while 4.3 is exclusively CentOS 7: You cannot mix).
Here is my current ovirt-4.3.repo:
[root@ yum.repos.d]# more ovirt-4.3.repo
[o
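The file contents are cut off above; rather than reconstructing them, the simpler route on a fresh CentOS 7 box is the release RPM, which lays down the canonical repo definitions:

    # Installs the ovirt-4.3*.repo files with the official mirror list
    yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm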
Somehow I get the feeling that what you have in mind is a turnkey solution
that you can sell on directly...
From what I have gathered, Redhat prefers to work with layers of abstraction.
NVMe, whether it's direct attached or over a nice (Mellanox operated dual
personality?) fabric will
I have successfully imported OVA files which came from VirtualBox before,
something I'd consider the slightly greater hurdle.
Exporting a VM to an OVA file and then importing it on another platform can
obviously create compatibility issues, but oVirt <-> oVirt exports and imports
with the same
I tried using nested virtualization, too, a couple of weeks ago.
I was using a 3 node HCI CentOS 7.8 cluster and I got pretty far. Configuring
KVM to work with nested page tables isn't all that well documented but I got
there; I even installed some host extensions that seemed required.
Even the
From what I have observed, OVA import seems to have two modes:
If the OVA is a 'foreign' format, the ova file needs to be converted and that
conversion effort is then logged into /var/log/vdsm/import on the oVirt node
where the import runs. Import failures are then mostly an issue with the
inabi
I honestly don't know, because I have not tried myself, so you might just want
to stop reading right here.
But from what I understand about the design philosophy of oVirt, it should be
ok to change it, while nobody probably ever tested it and everybody would tell
you it's a bad idea to do so, u
I believe the oVirt team thought it easier to implement on GlusterFS, which
AFAIK does pretty much what you want to do, but comes supported as part of the
product as an (almost-)ready-to-use HCI.
Hi Damien, I'm afraid nobody will be able to help you here. What made the CPU
jump into a sea of "FF FF FF FF..." can't be diagnosed remotely, but it's KVM
that stopped, because that's not legal x86-64 code, nothing oVirt can do about
it.
Salut Damien, I'm afraid that no one will be able to help you
I have the same issue, it seems, but AFAIK the problem is on the export side:
While the OVA logical file size matches the size of the VM's disk, the actual
storage usage on the VDO compressed and deduplicated NFS I use as export target
per 'du -h *.ova' is a mere 28KB, while 'strings *.ova' repor
I might have run into similar issues at one point...
And it might be where the network configuration on the installation host has
already been partially changed to include the bridge that's used to communicate
with HostedEngineLocal.
The installation is not able to pick up cleanly from every po
Sorry Strahil, that I didn't get back to you on this...
I dare not repeat the exercise, because I have no idea if I'll get out of such
a complete break-down again cleanly.
Since I don't have duplicate physical infrastructure to just test this
behavior, I was going to use a big machine to run a
Hi Jp, while the project looks intriguing, it also looks still very wet behind
the ears and nothing I could do on a side-job time budget.
I've tested this now in three distinct farms with CentOS7.8 and the latest
oVirt 4.3 release: OVA export files only contain an XML header and then lots of
zeros where the disk images should be.
Where an 'ls -l *.ova' shows a file about the size of the disk, 'du -h
*.ova' shows mere kilobytes, 'st
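For anyone who wants to reproduce the check: an OVA is just a tar archive, so a hedged sanity test on the export target looks like this (file name is a placeholder):

    # Apparent size vs. allocated size: a huge gap means mostly sparse zeros
    ls -l myvm.ova
    du -h myvm.ova
    # List the archive members: the OVF descriptor plus the disk image(s)
    tar tvf myvm.ova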
Buongiorno Gianluca,
I am entering the data in bugzilla right now...
Logs: There is a problem, I have not found any log evidence of this export and
import, I don't even know which component is actually doing this.
For 'foreign' imports like VMware and other *.vmdk producing hypervisors, VDSM
se
Here is the ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1862115
Export/Import/Transfer via export domain worked just fine. Just moved my last
surviving Univention domain controller from the single node HCI
disaster-recovery farm to the three node HCI primary, where I can now try to
make it primary and start building new backups...
Backups that do not work a
Thanks to your log post, I was able to identify the proper place to find
information about the export process.
OVA exports don't fail for me, but result in empty disks, which is perhaps even
worse.
Two things I can suggest: Try with a new and smaller VM to see if it's actually
a size related i
That's exactly the point: Running oVirt on top of a vanilla KVM seems to be the
only case supported, because the Redhat engineers use that themselves for
development, testing and demoing.
Running virtual machines nested inside an oVirt VM via KVM also works per se,
as HostedEngineLocal during t
Just looking at the release notes, using anything 'older' in the case of 4.4
seems very rough going...
Testing 4.4 was actually how I got into the nesting domain this time around,
because I didn't have physical hardware left for another HCI cluster.
Testing of oVirt 4.4 on top of oVirt 4.3 inc
Hi Nir,
performance is not really an issue here, we're most interested in functional
testing and migration support.
That's where nesting 4.4 on top of 4.3 would be a crucial migration enabler,
especially since you don't support a 7/8 or 4.3/4.4 mix that much elsewhere.
Currently my tests revea
Wow, that seems like a good find for a potential reason!
I'd totally subscribe to the cause, if it only affected my Atom HCI clusters,
which have had timing issues so bad, even the initial 3 node HCI setup had to be
run on a 5 GHz Kaby Lake to succeed, because my J5005 Atoms evidently were too
slow.
Bu
I confirm, I get "qemu-img: /dev/loop0: error while converting qcow2: Could not
open device: Permission denied", too.
You've nailed it and Nir has pasted your log into the bug report 1862115.
Unfortunately that means OVA export/import doesn't seem to be a QA test case
for oVirt releases... which would make me want to drop it like a hot potato, if
there was any alternative at this point.
I guess my biggest mistake was to assume that oVirt was like CentOS.
Nir, first of all, thanks a lot for the detailed description and the quick fix
in 4.4!
I guess I'll be able to paste that single line fix into the 4.3 variant
myself, but I'd rather see that included in the next 4.3 release, too: How
would that work?
---
"OVA is not a backup solution":
From
Dear Nir,
I am sorry if that sounded too harsh. I do appreciate what you are doing!
The thing is that my only chances of getting my employer to go with the
commercial variant depend a lot on the upstream project already showing
demonstrable usability in the research lab.
Just try to imagin
Hi Gianluca,
I have had a look at the change and it's a single line of code added on the
hosted-engine. I'll verify that it's working on 4.3 and will make a note of
checking it with engine upgrade, which for now seems the less troublesome
approach.
Hopefully it will get integrated/backported also
After OVA export/import was a) recommended against b) not working with the
current 4.3 on CentOS 7, I am trying to make sure I keep working copies of
critical VMs before I test if the OVA export now works properly, with the
Redhat fix from 4.4 applied to 4.3.
Long story short, I have an export
If OVA export and import work for you, you get to choose between the two at
import.
I have done that, even added five nodes that contribute a separate Gluster file
system using dispersed (erasure codes, more efficient) mode.
But in another cluster with such a 3-node-HCI base, I had a lot (3 or 4) of
compute nodes that were actually dual-boot or just shut off when not used:
Ev
After applying the OVA export patch, that ensured disk content was actually
written into the OVA image, I have been able to transfer *.ova VMs between two
oVirt clusters. There are still problems that I will report once I have fully
tested what's going on, but in the mean-time for all those, who
Not yet, because I was fighting a nasty outage, evidently created from
importing, starting and deleting images too quickly for my Atoms + Xeon-D based
test farm...
But it's on the list :-)
Ciao, Thomas
Empty disks from exports that went wrong didn't help. But that's fixed now,
even if I can't fully validate the OVA exports on VMware and VirtualBox.
The export/import target for the *.ova files is an SSD hosted xfs file system
on a pure compute Xeon D oVirt node, exported and automounted to the
Here is the explanation, I think:
root 12319 12313 15 16:59 pts/0    00:00:56 qemu-img convert -O qcow2
/dev/loop0
/rhev/data-center/mnt/glusterSD/192.168.0.91:_vmstore/9d1b8774-c5dc-46a8-bfa2-6a6db5851195/images/3be7c1bb-377c-4d5e-b4f6-1a6574b8a52b/845cdd93-def8-4d84-9a08-f8c991f89fe3
This
t. Linux is so hypervisor
enabled and comes with built-in guest additions, that I haven't seen the need
when moving VMs between VB/VMware/Hyper-V. And with Windows VMs, starting with
W7/W2012 it's become almost a no-op, just pop it in and it will adjust with
little more than a reboot
If the default blank template from a fresh install had enabled
high-availability, that would be a bug.
But if someone had set this on their blank template, I can see how that would
cause such an issue and I can just imagine the drama behind finding it... It
seems common enough to code for it!
While trying to diagnose an issue with a set of VMs that get stopped for I/O
problems at startup, I try to deal with the fact that their boot disks cause
this issue, no matter where I connect them. They might have been the first
disks I ever tried to sparsify and I was afraid that might have mes
I came across this in an environment where the proxy was very slow. The
downloaded image for the appliance gets deleted every so often (and perhaps
also via the cleanup script), so when it needs to be reloaded from the
internet, it can take its time, because it's quite large (>1GB).
Another sour
Thanks for putting in the effort!
I learned a lot of new things.
I also learned that I need to learn a few more now.
The table could use some alternating background or a grid: Too easy to get lost.
Environments change over time, e.g. you find you really should have split the
management, storage,
Hi Strahil, no updates, especially since I am stuck on 4.3.11 and they tell us
it's final.
Glusterfs is 6.9-1.el7.
Apart from those three VMs and the inability to copy disks the whole farm runs
fine so far.
Where would I configure the verbosity of logging? Can't find an obvious config
option.
>
> Since 4.4 is the last feature release, this will never change even if it is
> not
> documented.
>
> Nir
Hi Nir,
could you please clarify: What does "last feature release" mean here: Is oVirt
being feature frozen in preparation to something more drastic?
One of the many cases where nested virtualization would be quite handy for
testing...
If only it worked with oVirt on top of oVirt...
Howto: Create a new network profile that doesn't have the MAC spoofing filter
included. In my case I used one without any filters, you may want to be more
careful.
Background:
I had tried the nested approach with the default settings and found that the
hosted-engine setup worked just fine to th
Replicated is pretty much hard coded into all the Ansible scripts for the HCI
setup. So you can't do anything but replicated and only choose between arbiter
or full replica there. Distributed gives you anything with three nodes, but with
five, seven, nine or really high numbers, it becomes quite
It would be interesting to know how the previous team got to six nodes: I
don't remember seeing any documentation how to do that easily...
However, this state of affairs also seems to be quite normal, whenever I reboot
a single node HCI setup: I've seen that with two systems now, one running
4
Just fresh from a single node HCI setup (CentOS 8 + oVirt 4.4.1) I did
yesterday on an Intel i7-8559U NUC with a single 1TB NVMe storage stick and
64GB of RAM:
The HCI wizard pretty much assumes a "second disk", but a partition will
actually do. And it doesn't even have to be in /dev/mapper, ev
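A minimal sketch of carving such a partition out of the NVMe stick before launching the wizard; device name, sizes, and a GPT label are assumptions, not from the post:

    # Create one extra partition for Gluster behind the OS partitions
    parted -s /dev/nvme0n1 mkpart gluster xfs 200GiB 100%
    partprobe /dev/nvme0n1 && lsblk /dev/nvme0n1
    # In the cockpit HCI wizard, enter the new partition (e.g. /dev/nvme0n1p4)
    # where it asks for the storage device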
I am testing the migration from CentOS7/oVirt 4.3 to CentOS8/oVirt 4.4.
Exporting all VMs to OVAs, and re-importing them on a new cluster built from
scratch seems the safest and best method, because in the step-by-step
migration, there are simply far too many things that can go wrong and no easy
While I am trying to prepare a migration from 4.3 to 4.4 with the base OS
switch, I am exploring all variants of moving VMs.
OVA export/import and export domains have issues and failures so now I am
trying backup domains and fail to understand how they are to be used and the
sparse documentatio
Thanks for diving into that mess first, because it allowed me to understand
what I had done as well...
In my case the issue was a VM moved from 4.3 to 4.4 seemed to be silently
upgraded from "default" (whatever was default on 4.3) to "Q35", which seems to
be the new default of 4.4.
But that ha
I found OVA export/import to be rather tricky.
On the final 4.3 release there still remains a bug, which can lead to the OVA
files containing nothing but zeros for the disks, a single line fix that only
made it into 4.4
https://bugzilla.redhat.com/show_bug.cgi?id=1813028
Once I fixed that on 4.
Struggling with bugs and issues on OVA export/import (my clear favorite
otherwise, especially when moving VMs between different types of hypervisors),
I've tried pretty much everything else, too.
Export domains are deprecated and require quite a bit of manual handling.
Unfortunately the buttons
> On Fri, Aug 28, 2020 at 2:31 AM
> You should really try the attach/detach storage domain, this is the
> recommended way to move
> vms from one ovirt system to another.
>
> You could detach the entire domain with all vms from the old system,
> and connect it to the new
> system, without copying
BTW: This is the message I get on the import:
VDSM nucvirt command HSMGetAllTasksStatusesVDS failed: value=low level Image
copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none',
'-T', 'none', '-f', 'qcow2',
'/rhev/data-center/mnt/petitcent.mtk.hoberg.net:_flash_export/fe9fb0
After this
(https://devconfcz2020a.sched.com/event/YOtG/ovirt-4k-teaching-an-old-dog-new-tricks)
I sure do not expect this (log below):
Actually I am trying to evaluate just how portable oVirt storage is, and in this
case I had prepared a USB3 HDD with VDO, which I could literally move between
f
Testing the 4.3 to 4.4 migration... what I describe here as fact is mostly
observation and conjecture and could be wrong; stating it plainly just makes writing easier...
While 4.3 seems to maintain a default emulated machine type
(pc-i440fx-rhel7.6.0 by default), it doesn't actually allow setting it in the
cluster
I might have found one: You can set the emulated machine to 'legacy' before the
first launch.
No idea yet, if it will actually work, because the boot disk was copied empty...
On a machine that survived the import, that worked as expected.
Might be worth adding to a checklist: the original machine type for legacy
VMs isn't carried over after an import but needs to be set explicitly.
I like playing with shiny new toys and VDO is one :-)
Actually with that it shouldn't matter if whatever is put on top actually has
sparse allocation in the file format or just blocks with zeros, which makes for
a reusable HDD in case I am working with imageio images that are raw.
Good to know t
Not sure it will actually do that, but if you create a new network profile, you
can select a 'port mirroring' option: It is my understanding that is what you
need. You may also want to deselect the filtering rules in the same place.
Thanks for the suggestion, I tried that, but I didn't get very far on a single
node HCI cluster...
And I'm afraid it won't be much better on HCI in general, which is really the
use case I am most interested in.
Silently converting VMs is something rather unexpected from a hypervisor, doing
it t
I've just tried to verify what you said here.
As a base line I started with the 1nHCI Gluster setup. From four VMs, two
legacy, two Q35 on the single node Gluster, one survived the import, one failed
silently with an empty disk, two failed somewhere in the middle of qemu-img
trying to write the
In this specific case I even used virgin hardware originally.
Once I managed to kill the hosted-engine by downgrading the datacenter cluster
to legacy, I re-installed all gluster storage from the VDO level up. No traces
of a file system should be left with LVM and XFS on top, even if I didn't
ac
> On Sun, Aug 30, 2020 at 7:13 PM
> Using export domain is not a single click, but it is not that complicated.
> But this is good feedback anyway.
>
>
> I think the issue is gluster, not qemu-img.
>
>
> How did you try? transfer via the UI is completely different than
> transfer using the pyt
Yes, I've also posted this on the Gluster Slack. But I am using Gluster mostly
because it's part of oVirt HCI, so don't just send me away, please!
Problem: GlusterD refusing to start due to quorum issues for volumes where it
isn’t contributing any brick
(I've had this before on a different farm
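For context, a hedged set of read-only first steps for looking at that state; nothing here changes the configuration:

    # Did glusterd start, and what did it log while refusing to?
    systemctl status glusterd
    journalctl -u glusterd --no-pager | tail -n 50
    # Which peers does this node see, and which volumes claim bricks here?
    gluster peer status
    gluster volume info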
Sorry if it's a duplicate: I got an error on post... And yes I posted it on the
Gluster slack first, but I am using Gluster only because the marketing on oVirt
HCI worked so well...
I got 3 recycled servers for an oVirt test environment first and set those up
as 3-node HCI using defaults mostly,
I can see new posts coming in via e-mail, but updates on the web site have
stopped and new posts don't appear?
I am hoping for a miracle like that, too.
In the mean-time I am trying to make sure that all variants of exports and
imports from *.ova to re-attachable NFS domains work properly, in case I have
to start from scratch.
HCI upgrades don't get the special love you'd expect after RHV's proud
annou
I was looking forward to that presentation for exactly that reason: But it
completely bypassed the HCI scenario, was very light on details and of course
assumed that everything would just work, because there is no easy fail-back and
you're probably better off taking down the complete farm during
I can hear you saying: "You did understand that single node HCI is just a toy,
right?"
For me the primary use of a single node HCI is adding some disaster resilience
in small server edge type scenarios, where a three node HCI provides the fault
tolerance: 3+1 with a bit of distance, warm or eve
As far as I know, there is no alternative to reinstalling it... The management
engine inherits its configuration from the cluster, and in this case, once you
have changed it, it lacks the virtual hardware to boot; and since you have no
GUI, you can't change it back either... You're not the first one who has fallen