On 13.07.2016 08:36, Pranith Kumar Karampuri wrote:
On Wed, Jul 13, 2016 at 9:35 AM, Dmitry Melekhov <[email protected]> wrote:
On 13.07.2016 01:52, Anuradha Talur wrote:
----- Original Message -----
From: "Dmitry Melekhov" <[email protected] <mailto:[email protected]>>
To: "Pranith Kumar Karampuri" <[email protected]
<mailto:[email protected]>>
Cc: "gluster-users" <[email protected]
<mailto:[email protected]>>
Sent: Tuesday, July 12, 2016 9:27:17 PM
Subject: Re: [Gluster-users] 3.7.13, index healing broken?
On 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
Wow, what are the steps to recreate the problem?
Just set the file length to zero; it is always reproducible.
If you are setting the file length to 0 on one of the bricks (which looks like the case here), it is not a bug.
Index heal relies on failures seen from the mount point(s) to identify the files that need heal. It won't recognize any file modification done directly on the bricks. The same goes for the heal info command, which is why heal info also shows 0 entries.
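For illustration (a sketch only; the .glusterfs/indices/xattrop location is the standard brick layout and is my assumption, it is not shown anywhere in this thread), the entries index heal works from can be listed directly on a brick:

[root@father brick]# ls /wall/pool/brick/.glusterfs/indices/xattrop

A truncation done directly on the brick records nothing there, so the self-heal daemon has nothing to crawl, which matches the "Number of entries: 0" output further down in this thread.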
Well, this makes self-heal useless then: if any file is accidentally corrupted or deleted (yes, a file deleted directly from a brick is not recognized by index heal either), it will not be self-healed, because self-heal uses index heal.
It is better to look into the bit-rot feature if you want to guard against these kinds of problems.
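(For reference, a minimal sketch of turning it on, using the volume name from this thread; scrubber settings are left at their defaults:)

[root@father ~]# gluster volume bitrot pool enable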
Bit-rot detection catches bit-level corruption, not missing files or wrong file lengths, i.e. it is overhead for such a simple task.
Thank you!
Heal full, on the other hand, individually compares certain aspects of all files/dirs to identify files to be healed. This is why heal full works in this case but index heal doesn't.
OK, thank you for the explanation, but, once again, what about self-healing and data consistency?
And if I access this deleted or broken file from a client, then it will be healed; I guess this is what self-heal needs to do.
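A way to test that guess (a sketch; the client mount point /mnt/pool is my assumption, it is not mentioned in this thread) would be to stat the file through the mount and then re-check the brick to see whether the original size comes back:

[root@client ~]# stat /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm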
Thank you!
On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov <[email protected]> wrote:
On 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
What was "gluster volume heal <volname> info" showing when
you saw this
issue?
Just reproduced:
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on volume pool has been successful
Use heal info commands to check status
[root@father brick]# gluster volume heal pool info
Brick father:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick son:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick spirit:/wall/pool/brick
Status: Connected
Number of entries: 0
[root@father brick]#
On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov <[email protected]> wrote:
Hello!
3.7.13, 3-brick volume.
Inside one of the bricks:
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
So now the file has 0 length.
Try to heal:
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on volume pool has been successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
nothing!
[root@father brick]# gluster volume heal pool full
Launching heal operation to perform full self heal on volume pool has been successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
Full heal is OK.
But self-heal is supposed to do an index heal, according to
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
Is this a bug?
As far as I remember, it worked in 3.7.10...
--
Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users