On 01/22/2019 02:57 PM, Martin Toth wrote:
Yes, it should. Before you `shutdown` a node, kill all the gluster
processes on it, i.e. `pkill gluster`.
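For example, a minimal sketch of that sequence on node1 (assuming a volume named <volname>; adjust names to your setup):

    # make sure nothing is pending heal before taking the node down
    gluster volume heal <volname> info
    # stop all gluster processes on this node (glusterd, glusterfsd, glusterfs)
    pkill gluster
    # then power the node off for maintenance
    shutdown -h now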
> I just want to make sure I understand how the self-healing process works,
> because I need to take one of my nodes down for maintenance.
> I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per
> node (ZFS pool). All nodes run QEMU VMs, and the VM disks live on the Gluster volume.
> I want to take node1 down for maintenance. If I migrate all VMs to node2 and
> node3 and shut down node1, I assume everything will keep running without
> downtime (2 of the 3 nodes will be online).
The list of files needing heal on node1 is captured on the other 2
nodes that were up, so the self-heal daemons on those nodes will do the
healing once node1 comes back online.
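While node1 is down, you can list those pending entries from node2 or node3 with something like this (again assuming the volume is called <volname>):

    # files recorded as needing heal towards the offline brick
    gluster volume heal <volname> info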
> My question: when I start node1 up again after maintenance and it is back
> online, this will trigger the self-healing process on all the disk files of
> all the VMs. Will this healing run only on node1?
Yes. You might notice some performance drop if there are a lot of heals
pending.
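If that impact worries you, a rough sketch of how you could watch and throttle the heal once node1 is back (option availability depends on your gluster version):

    # number of entries still pending heal, per brick
    gluster volume heal <volname> statistics heal-count
    # keep the self-heal daemon at its most conservative parallelism
    gluster volume set <volname> cluster.shd-max-threads 1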
> Can node2 and node3 run VMs without problems while node1 is healing these
> files?
Heal won't block I/O from clients indefinitely. If both are writing to
an overlapping offset, one of them (i.e. either the heal or the client I/O)
will get the lock, do its job and release the lock so that the other can
acquire it and continue.
> I want to make sure these files (the VM disks) will not get “locked” on node2
> and node3 while self-healing is in progress on node1.
> Thanks in advance for the clarification.