Hello,

I am stuck with a failing heal on my GlusterFS 2x replica volume, with messages like the following in glustershd.log:

[2018-11-21 05:28:07.813003] E [MSGID: 114031] [client-rpc-fops.c:1646:client3_3_entrylk_cbk] 0-gv1-client-0: remote operation failed [Transport endpoint is not connected]

When this message appears on either of the replica nodes, my command "watch gluster volume heal <my volume> statistics" shows no more progress, and the status remains unchanged afterward. I am running GlusterFS on top of ZFS, basically as storage for small read-only files. There was a thread on this list with Shyam Ranganathan and Reiner Keller where the core of the problem was the storage running out of inodes and returning "no space left" errors, which obviously cannot be my case since I am on ZFS. The similarity between us, however, is that we were previously on 3.10 and, after various issues with that version, upgraded to 3.12 on Ubuntu 16.04 with kernel 4.4.0-116-generic.

Has anybody faced the issue above? Can you advise what can be done? It has been over a month with no effective self-heal completing...
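In case it helps with diagnosis, the checks I can run from either node (assuming the standard gluster CLI; the brick port 49152 below is just an example, the real port is shown by "gluster volume status") look roughly like this:

```shell
# Confirm both bricks and the self-heal daemon are online
gluster volume status gv1

# List entries still pending heal, and any in split-brain
gluster volume heal gv1 info
gluster volume heal gv1 info split-brain

# From the node reporting "Transport endpoint is not connected",
# verify TCP reachability of the peer's brick port
# (49152 is an example; use the port from 'gluster volume status')
nc -zv IMG-01 49152
```

I can post the output of any of these if that is useful.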

Here is my gluster cluster info:

Volume Name: gv1
Type: Replicate
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: IMG-01:/images/storage/brick1
Brick2: IMG-02:/images/storage/brick1
Options Reconfigured:
cluster.self-heal-daemon: enable
cluster.eager-lock: off
client.event-threads: 4
performance.cache-max-file-size: 8
features.scrub: Inactive
features.bitrot: off
network.inode-lru-limit: 50000
nfs.disable: true
performance.readdir-ahead: on
server.statedump-path: /tmp
cluster.background-self-heal-count: 32
performance.md-cache-timeout: 30
cluster.readdir-optimize: on
cluster.shd-max-threads: 4
cluster.lookup-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
server.event-threads: 4

Thank you


--
Hamid Safe
www.devopt.net
+989361491768

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
