Thanks Ravi. I have now manually created the missing "dirty" directory and no
longer get any error messages in the GlusterFS self-heal daemon log file.

There is still one warning message which I see often in my brick log file, and
I would be thankful if you could let me know what it means or what the problem
could be:

[2017-05-07 11:26:22.465194] W [dict.c:1223:dict_foreach_match] 
(-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_foreach_match+0x65) 
[0x7f8795902f45] 
-->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.11/xlator/features/index.so(+0x31d0) 
[0x7f878d6971d0] 
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_foreach_match+0xe1) 
[0x7f8795902fc1] ) 0-dict: dict|match|action is NULL [Invalid argument]

Any ideas?

-------- Original Message --------
Subject: Re: [Gluster-users] glustershd: unable to get index-dir on 
myvolume-client-0
Local Time: May 3, 2017 3:09 AM
UTC Time: May 3, 2017 1:09 AM
From: ravishan...@redhat.com
To: mabi <m...@protonmail.ch>
Gluster Users <gluster-users@gluster.org>

On 05/02/2017 11:48 PM, mabi wrote:
Hi Ravi,

Thanks for the pointer, you are totally right: the "dirty" directory is missing
on my node1. Here is the output of "ls -la" on both nodes:

node1:
drw------- 2 root root 2 Apr 28 22:15 entry-changes
drw------- 2 root root 2 Mar 6 2016 xattrop

node2:
drw------- 2 root root 3 May 2 19:57 dirty
drw------- 2 root root 2 Apr 28 22:15 entry-changes
drw------- 2 root root 3 May 2 19:57 xattrop

Now what would be the procedure for adding the "dirty" directory on node1?
Can I simply do a "mkdir dirty" in the indices directory, or do I need to stop
the volume first?

mkdir should work. The folders are created whenever the brick process is 
started, so I'm wondering how it went missing in the first place.
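Something like the following sketch should do it (assuming a hypothetical
brick root of /data/myvolume/brick; substitute your actual brick path, and
note the 0600 mode matches the directory listing you posted):

# create the missing index subdirectory on node1 (path is an assumption)
mkdir /data/myvolume/brick/.glusterfs/indices/dirty
# match the owner and 0600 mode seen on node2
chown root:root /data/myvolume/brick/.glusterfs/indices/dirty
chmod 600 /data/myvolume/brick/.glusterfs/indices/dirty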
-Ravi

Regards,
M.

-------- Original Message --------
Subject: Re: [Gluster-users] glustershd: unable to get index-dir on 
myvolume-client-0
Local Time: May 2, 2017 10:56 AM
UTC Time: May 2, 2017 8:56 AM
From: ravishan...@redhat.com
To: mabi <m...@protonmail.ch>, Gluster Users <gluster-users@gluster.org>

On 05/02/2017 01:08 AM, mabi wrote:
Hi,
I have a two-node GlusterFS 3.8.11 replicated volume and just noticed today a
lot of the following warning messages in the glustershd.log log file:

[2017-05-01 18:42:18.004747] W [MSGID: 108034] 
[afr-self-heald.c:479:afr_shd_index_sweep] 0-myvolume-replicate-0: unable to 
get index-dir on myvolume-client-0
[2017-05-01 18:52:19.004989] W [MSGID: 108034] 
[afr-self-heald.c:479:afr_shd_index_sweep] 0-myvolume-replicate-0: unable to 
get index-dir on myvolume-client-0
[2017-05-01 19:02:20.004827] W [MSGID: 108034] 
[afr-self-heald.c:479:afr_shd_index_sweep] 0-myvolume-replicate-0: unable to 
get index-dir on myvolume-client-0

Does someone understand what this means and whether I should be concerned or
not? Could it be related to the fact that I use ZFS rather than XFS as the
filesystem?

In replicate volumes, the /<path-to-backend-brick>/.glusterfs/indices directory
of each brick must contain these subdirectories: 'dirty', 'entry-changes' and
'xattrop'. From the messages, it looks like these are missing from your first
brick (myvolume-client-0). Can you check if that is the case?
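For example, you could run something like this on each node (substituting your
actual brick path for the placeholder) and compare the output:

ls -la /<path-to-backend-brick>/.glusterfs/indices/

All three subdirectories should be present, with the same owner and mode on
both nodes.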

-Ravi

Best regards,
M.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users