Hi Martin,

By default Gluster will proactively start healing every 10 minutes, so the
behaviour you are describing is not OK.
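
If you want to double-check the interval on your setup, the self-heal
daemon's crawl period is exposed as the cluster.heal-timeout volume option
(600 seconds by default, if I remember correctly):

    gluster volume get <volname> cluster.heal-timeout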

Usually I do not wait for that to get triggered and run 'gluster volume heal
<volname> full' myself (I use replica 3 with 4 MB sharding, the oVirt default).
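
Roughly like this (just a sketch; replace <volname> with your volume's name):

    # trigger a full self-heal crawl across all bricks
    gluster volume heal <volname> full

    # watch the list of pending entries drain
    gluster volume heal <volname> info

If 'heal info' stays empty even though a brick was down, it is worth checking
that the Self-heal Daemon shows as online in 'gluster volume status <volname>'
on every node.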

Best Regards,
Strahil Nikolov

On Jun 3, 2019 19:58, Martin <snowmai...@gmail.com> wrote:
>
> Hi all,
>
> I need someone to explain whether my Gluster behaviour is correct; I am not
> sure it works as it should. I have a simple Replica 3 volume (Number of
> Bricks: 1 x 3 = 3).
>
> When one of my hypervisors is disconnected as a peer, i.e. the gluster
> process is down but the bricks keep running, the other two healthy nodes
> start signalling that they lost one peer. This is correct.
> Next, I restart the gluster process on the node where it failed. I thought
> this should trigger healing of files on that node, but nothing happens.
>
> I run VM disks on this gluster volume. No healing is triggered after the
> gluster restart; the remaining two nodes get the peer back and everything
> keeps running without downtime.
> Even the VMs running on the “failed” node, where the gluster process was
> down (but the bricks were up), keep running without downtime.
>
> Is this behaviour correct? I mean, no healing is triggered after the peer
> is reconnected, and the VMs just keep running.
>
> Thanks for the explanation.
>
> BR!
> Martin 
>