On 01/22/2019 02:57 PM, Martin Toth wrote:
Hi all,

I just want to make sure I understand exactly how the self-healing process works, because I 
need to take one of my nodes down for maintenance.
I have a replica 3 setup. Nothing complicated: 3 nodes, 1 volume, 1 brick per 
node (ZFS pool). All nodes run Qemu VMs, and the VM disks are on the Gluster 
volume.

I want to take node1 down for maintenance. If I migrate all VMs to node2 
and node3 and shut down node1, I suppose everything will keep running without 
downtime (2 of the 3 nodes will be online).
Yes, it should. Before you `shutdown` a node, kill all the gluster processes on it, e.g. `pkill gluster`.
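A rough sketch of the pre-shutdown steps on node1 (run as root; `pkill gluster` matches the glusterd, glusterfsd and glusterfs processes):

    # stop all gluster processes on this node
    pkill gluster

    # confirm nothing gluster-related is still running (should print nothing)
    pgrep -l gluster

    # only then power the node off
    shutdown -h now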

My question: when node1 comes back online after maintenance, this will trigger the 
self-healing process on the disk files of all VMs. Will this healing process run 
only on node1?
The list of files needing heal on node1 is captured on the other 2 nodes that stayed up, so the self-heal daemons on those nodes will do the heals.
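
For example, once node1 is back you can watch the pending heals from any node. Assuming the volume is named `myvol` (substitute your actual volume name):

    # list the files/gfids still pending heal, per brick
    gluster volume heal myvol info

    # optionally trigger a heal right away instead of waiting for the
    # self-heal daemon's next crawl
    gluster volume heal myvol
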
Can node2 and node3 run VMs without problems while node1 is healing these 
files?
Yes. You might notice some performance drop if a lot of heals are happening, though.
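
If you want to gauge the size of the heal backlog (and watch it shrink), something like this should work from any node; again, `myvol` is a placeholder:

    # per-brick count of entries still needing heal
    gluster volume heal myvol statistics heal-count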

I want to make sure these files (VM disks) will not get “locked” on node2 
and node3 while self-healing is in progress on node1.
Heal won't block I/O from clients indefinitely. If both are writing to an overlapping region, one of them (i.e. either heal or client I/O) will take the lock, do its job, and release it so that the other can acquire the lock and continue.
HTH,
Ravi

Thanks for clarification in advance.

BR!