My VMs use Gluster as storage through the libgfapi support in QEMU, but I don't see any healing on the reconnected brick.
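For reference, this is roughly what I run to check and kick off healing; "vmstore" is just a placeholder for my actual volume name:

  gluster volume heal vmstore info          # list entries pending heal on each brick
  gluster volume heal vmstore info summary  # per-brick counts (on newer gluster releases)
  gluster volume heal vmstore               # trigger an index heal
  gluster volume heal vmstore full          # force a full crawl if the index shows nothing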
Thanks Karthik / Ravishankar in advance!

> On 10 Jun 2019, at 16:07, Hari Gowtham <[email protected]> wrote:
>
> On Mon, Jun 10, 2019 at 7:21 PM snowmailer <[email protected]> wrote:
>>
>> Can someone advise on this, please?
>>
>> BR!
>>
>> On 3 Jun 2019, at 18:58, Martin <[email protected]> wrote:
>>
>>> Hi all,
>>>
>>> I need someone to explain whether my gluster behaviour is correct. I am not sure my gluster works as it should. I have a simple Replica 3 volume - Number of Bricks: 1 x 3 = 3.
>>>
>>> When one of my hypervisors is disconnected as a peer, i.e. the gluster process is down but the bricks keep running, the other two healthy nodes start signalling that they have lost one peer. This is correct.
>>> Next, I restart the gluster process on the node where it failed. I thought this would trigger healing of files on the failed node, but nothing happens.
>>>
>>> I run VM disks on this gluster volume. No healing is triggered after the gluster restart; the remaining two nodes get the peer back after the restart and everything keeps running without downtime.
>>> Even the VMs running on the "failed" node where the gluster process was down (bricks were up) keep running without downtime.
>
> I assume your VMs use gluster as the storage. In that case, the gluster volume is probably mounted on all the hypervisors.
> The mount/client is smart enough to serve the correct data from the other two machines which were always up.
> This is the reason things are working fine.
>
> Gluster should heal the brick.
> Adding people who can help you better with the heal part.
> @Karthik Subrahmanya @Ravishankar N, do take a look and answer this part.
>
>>>
>>> Is this behaviour correct? I mean, no healing is triggered after the peer is reconnected, and the VMs keep running.
>>>
>>> Thanks for the explanation.
>>>
>>> BR!
>>> Martin
>
> --
> Regards,
> Hari Gowtham.
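Following up on the point about the client staying connected to the two healthy bricks, peer and brick state can be double-checked with something like the following (again, "vmstore" is only a placeholder for the volume name):

  gluster peer status            # all nodes should report State: Peer in Cluster (Connected)
  gluster volume status vmstore  # every brick and each Self-heal Daemon should show Online: Y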
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
