On 13.07.2016 07:46, Pranith Kumar Karampuri wrote:
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov <[email protected]> wrote:
On 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
Wow, what are the steps to recreate the problem?
Just set a file's length to zero directly on the brick; it's always reproducible.
Changing things on the brick directly, i.e. not through the gluster volume
mount, is not something you want to do. In the worst case (I have seen this
only once in the last 5 years, though), doing this can lead to data loss as
well. So please be aware of it.
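If you need to modify a file, do it from a client mount instead, so that
the replication layer records the operation on all the bricks. A rough
sketch, assuming a fuse mount at /mnt/pool (the mount path here is
hypothetical):
[root@father ~]# mount -t glusterfs father:/pool /mnt/pool
[root@father ~]# > /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm
A truncate done this way is replicated, so the copies never diverge in the
first place.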
Data replication with gluster is a way to avoid data loss, right? Or no?
If not, why use gluster then?
I thought that gluster self-healing would heal, or at least report, missing
files or files with wrong lengths, i.e. corruption visible just by reading
the brick's directory, without comparing data the way bit rot detection
does...
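(As far as I understand, bit rot detection is a separate, opt-in scrubber
that has to be enabled per volume, something like:
[root@father ~]# gluster volume bitrot pool enable
so it is not something I'd expect to be on by default.)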
If this is not a bug, then gluster is not what I expected :-(
Thank you!
On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov <[email protected]> wrote:
On 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
What was "gluster volume heal <volname> info" showing when
you saw this issue?
Just reproduced it:
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on volume
pool has been successful
Use heal info commands to check status
[root@father brick]# gluster volume heal pool info
Brick father:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick son:/wall/pool/brick
Status: Connected
Number of entries: 0
Brick spirit:/wall/pool/brick
Status: Connected
Number of entries: 0
[root@father brick]#
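(In case it is relevant: as far as I understand, heal info is driven by the
on-brick index, so the pending queue can also be inspected directly,
assuming the default brick layout:
[root@father brick]# ls /wall/pool/brick/.glusterfs/indices/xattrop/
Anything listed there, apart from a possible xattrop-<gfid> base file, is
waiting for an index heal.)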
On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov <[email protected]> wrote:
Hello!
3.7.13, a 3-brick volume.
Inside one of the bricks:
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
So now the file has 0 length.
Try to heal:
[root@father brick]# gluster volume heal pool
Launching heal operation to perform index self heal on
volume pool has been successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
nothing!
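My guess is that, because the truncation bypassed the client, no changelog
was ever recorded. The AFR xattrs can be checked right on the brick:
[root@father brick]# getfattr -d -m . -e hex gstatus-0.64-3.el7.x86_64.rpm
If the trusted.afr.pool-client-* changelogs are all zero, no replica
accuses this one, so index heal has nothing to pick up. (The exact xattr
names are my assumption, based on the default <volume>-client-<N> naming.)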
[root@father brick]# gluster volume heal pool full
Launching heal operation to perform full self heal on
volume pool has been successful
Use heal info commands to check status
[root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
-rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]#
Full heal is OK.
But self-heal does an index heal, according to
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
Is this a bug?
As far as I remember, it worked in 3.7.10...
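For what it's worth, if I read the CLI help correctly, the self-heal
daemon's crawls can be inspected with the statistics sub-command, which
should show whether the full crawl actually healed the entry:
[root@father brick]# gluster volume heal pool statistics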
--
Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users