Hi Sahina, Strahil,
thank you for the information; I managed to
start the heal and restore both the hosted engine and the VMs.
These are
the logs on all nodes:
[2019-03-26 08:30:58.462329] I [MSGID: 104045]
[glfs-master.c:91:notify] 0-gfapi: New graph
676c6e6f-6465-3032-2e61-736370642e6c
Hi Andrea,
My guess is that while node2 was in maintenance, node3's brick(s) died, or
there were some pending heals.
For backup, you can use anything that works for KVM, but the hard part is to
get the configuration of each VM. If the VM is running, you can use 'virsh
dumpxml domain' to dump its XML definition.
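A minimal sketch of that backup step, looping over the running domains; the backup directory is an assumption, pick whatever path suits you:

```shell
# Dump the XML definition of every running VM so it can later be
# re-created with 'virsh define <file>'. /root/vm-backups is an
# assumed destination, not something from the thread.
mkdir -p /root/vm-backups
for dom in $(virsh list --name); do
    virsh dumpxml "$dom" > "/root/vm-backups/${dom}.xml"
done
```

The disk images themselves still need to be backed up separately; the XML only captures the VM configuration.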
You will first need to restore connectivity between the gluster peers
for heal to work. So restart glusterd on all hosts as Strahil
mentioned, and check if "gluster peer status" returns the other nodes
as connected. If not, please check the glusterd log to see what's
causing the issue. Share the glusterd log here if the cause isn't clear.
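The restart-and-verify steps above, run on each host, would look roughly like this (the log path is the usual glusterfs default and is an assumption about your setup):

```shell
# Restart the gluster management daemon on this host.
systemctl restart glusterd

# Every other node should show "Peer in Cluster (Connected)".
gluster peer status

# If a peer stays disconnected, inspect the glusterd log for the cause.
less /var/log/glusterfs/glusterd.log
```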
Hi Andrea,
The cluster volumes might have sharding enabled, and thus files larger than
the shard size can be recovered only through the gluster volume itself, not
directly from the bricks.
You can try to restart gluster on all nodes and force heal:
1. Kill gluster processes:
systemctl stop glusterd
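The rest of the message is cut off; a hedged sketch of the usual restart-and-force-heal sequence, assuming a volume named "engine" (replace with your actual volume names) and that brief downtime on the node is acceptable:

```shell
# 1. Stop the management daemon, then any brick/self-heal processes
#    it left running (glusterfsd = bricks, glusterfs = shd/clients).
systemctl stop glusterd
pkill glusterfsd
pkill glusterfs

# 2. Start glusterd again; it respawns the brick processes.
systemctl start glusterd

# 3. Trigger a full heal and watch the pending entries drain.
gluster volume heal engine full
gluster volume heal engine info
```

Run this one node at a time so the volume keeps quorum while each node restarts.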
During maintenance of a machine, the hosted engine crashed.
At that point there was no longer any way to manage anything.
The VMs paused and were no longer manageable.
I restarted the machine, but at one point all the bricks were no longer reachable.
Now I am in a situation where the engine