Hi Strahil, first of all, thanks for following up on this...
I think I'll put that list of yours on the wall: it's a key piece of
documentation that I found missing. Perhaps you could reconstruct it from
systemd dependencies, but...
I may not have rebooted... it takes a long time on these older machines.
Most probably one of vdsm's or supervdsm's ExecStartPre tasks is doing it (they
have several, so you can run them manually until you find the right one).
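One way to list those pre-start tasks is to dump the unit files and grep for
the ExecStartPre= lines (a sketch; it assumes the vdsmd/supervdsmd units exist
on the host and prints a note instead when they don't):

```shell
# List each service's ExecStartPre= entries so you can run them by hand
# and find the one that regenerates the certificates.
# Note: the fallback message also fires if a unit has no ExecStartPre lines.
for unit in vdsmd supervdsmd; do
    systemctl cat "$unit" 2>/dev/null | grep -i '^ExecStartPre' \
        || echo "unit $unit not found here"
done
```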
Just try the following:
systemctl stop vdsmd supervdsmd
systemctl start supervdsmd
Check for certs
systemctl start vdsmd
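To make the "check for certs" step concrete, here is a small sketch; the paths
are the usual vdsm PKI locations (assumed from the thread's mention of
/etc/pki/vdsm/*), so adjust them if your setup differs:

```shell
# Check whether the restart regenerated the vdsm certificate and key.
# PKI_DIR can be overridden for testing on a non-oVirt host.
PKI_DIR="${PKI_DIR:-/etc/pki/vdsm}"
check() {
    if [ -s "$1" ]; then
        echo "present: $1"
    else
        echo "MISSING: $1"
    fi
}
check "$PKI_DIR/certs/vdsmcert.pem"
check "$PKI_DIR/keys/vdsmkey.pem"
```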
Keep in mind that the chain of events (a
If you manage to get your VM on the gluster - you are almost done.
I had a similar situation and, using virsh/hosted-engine, I managed to reach
the GUI.
From there we can get a clue about what is going on.
Usually there is a dependency:
- Master storage domain should be UP
- This allows the DC to become Up
After spending another couple of hours trying to track down the problem, I have
found that the "lost connection" seems to be due to KVM shutting down because it
cannot find the certificates for the Spice and VNC connections in
/etc/pki/vdsm/*, which 'ovirt-hosted-engine-cleanup' deleted.
So now
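If you want to confirm what the cleanup left behind, a quick sketch like this
prints the subject and validity dates of the cert vdsm would use for
Spice/VNC (the path is assumed from the thread; requires openssl):

```shell
# Print subject and validity dates of a cert, or note that it is gone.
show_cert() {
    if [ -r "$1" ]; then
        openssl x509 -in "$1" -noout -subject -dates
    else
        echo "no cert at $1"
    fi
}
show_cert /etc/pki/vdsm/certs/vdsmcert.pem
```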
Thanks Strahil, for your suggestions.
Actually, I was far beyond the pick-up point you describe: the Gluster had
already been fully prepared and was operable, and the local VM was already
running and accessible via the GUI.
But I picked up your hint to try to continue with the scripted variant, and
I think that you can go on with the installation (as far as I remember, the
next phase is the HostedEngine deployment) on the same node. You should not use
the single-node setup, but the other one. At the end, the engine (once migrated
to the gluster volume and started up by the ovirt-ha-broker/ovirt