Update:
Removing all gluster mounts from /etc/fstab solves the boot problem. I am
then able to manually mount all gluster bricks and bring up glusterd
properly. I'm trying to deploy the hosted VM again now, but I suspect the
problem there, as well, is going to be that it's trying to mount a glus
Which logs?
The nodes hang on boot at "Started Flush Journal to Persistent Storage".
This would normally be followed by the gluster mounts coming up (before
networking, which still doesn't make sense to me...), but of course they
all fail because networking is down.
The gluster logs, post node failure, simply state
On Thu, Feb 7, 2019 at 5:19 PM feral wrote:
I've never managed to get a connection to the engine via VNC/Spice (works
fine for my other hypervisors...)
As I said, the network setup is super simple. All three nodes have one
interface each (eth0). They are all set with static IPs, with matching
DHCP reservations on the DHCP server, with matchi
On Wed, Feb 6, 2019 at 11:07 PM feral wrote:
> I have no idea what's wrong at this point. Very vanilla install of 3
> nodes. I run the Hyperconverged wizard and it completes fine. I run the
> engine deployment; it takes hours and eventually fails with:
>
> [ INFO ] TASK [oVirt.hosted-engine-setup : Check engine
Update: when the node is rebooted, it fails with "timed out waiting for
device dev-gluster_vg_vdb-gluster_lv_data.device".
The node also comes up with no networking, which is probably the cause of
the gluster failure.
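If the timeout is on the LV device itself rather than the network, it may also help to give the device more time, or to let boot continue without it, via nofail and x-systemd.device-timeout. Again only a sketch; the mount point and filesystem type here are assumptions, not from my config:

```
# /etc/fstab -- brick mount; /gluster_bricks/data and xfs are assumptions
/dev/gluster_vg_vdb/gluster_lv_data  /gluster_bricks/data  xfs  defaults,nofail,x-systemd.device-timeout=60s  0 0
```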
On Wed, Feb 6, 2019 at 2:04 PM feral wrote: