On 10/14/2015 09:39 AM, Lindsay Mathieson wrote:
> On 13 October 2015 at 22:33, Krutika Dhananjay <[email protected]> wrote:
>>> However I managed to create a state where a file was being
>>> healed on all three nodes (probably by live migrating a VM
>>> while it was being healed). I didn't think that was possible
>>> without creating a split-brain problem, but it eventually got
>>> all the way to being healed.
>> I don't think it is possible for heal of this image to be
>> happening on all three nodes.
> I should have recorded the info output, but it did show the same file
> being "possibly healed" on all three nodes.
There seems to be some confusion about how to interpret the output
of "gluster volume heal <volname> info". We will address it by either
improving the output or adding documentation on how to interpret it.
For now, all it means is that one of the self-heal daemons or the mount
is doing the heal on that file. The main point of confusion seems to be
that the same output is seen on multiple bricks. Computing the
intersection/union of the results across bricks would cost a LOT more
iops, so we went with printing the same entry multiple times (when the
same info is present on more than one brick), as perceived by each
brick. It does not mean that each of the bricks is doing the heal: AFR
takes the necessary locks to make sure parallel heals don't happen on
the same file.
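To make the point concrete, the per-brick duplicates can be collapsed client-side into one set of files needing heal. This is only an illustrative sketch: the sample text below is hypothetical, and the real output of "gluster volume heal <volname> info" varies by Gluster version.

```python
# Sketch: collapse per-brick "heal info"-style entries into the union of
# files needing heal. The sample output is hypothetical, not captured
# from a real cluster; real output varies by Gluster version.
sample = """\
Brick node1:/bricks/b1
/images/vm1.qcow2
Number of entries: 1

Brick node2:/bricks/b1
/images/vm1.qcow2
Number of entries: 1

Brick node3:/bricks/b1
/images/vm1.qcow2
Number of entries: 1
"""

def files_needing_heal(text):
    """Return the union of file entries across all brick sections."""
    entries = set()
    for line in text.splitlines():
        line = line.strip()
        # Skip brick headers, entry counts, and blank lines; keep paths.
        if not line or line.startswith(("Brick ", "Number of entries")):
            continue
        entries.add(line)
    return entries

print(files_needing_heal(sample))  # one file, even though three bricks list it
```

The same file appearing under all three bricks collapses to a single entry, which matches the explanation above: one heal is in progress, reported once per brick that knows about it.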
Pranith
> Gluster 3.6.6
> --
> Lindsay
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users