On 12/02/2014 10:29 PM, Emmanuel Dreyfus wrote:
> Hi
> 
> I have been tracking down a bug reported by 
> /tests/basic/afr/entry-self-heal.t 
> on NetBSD, and now I wonder how glustershd is supposed to work. 
> 
> In xlators/cluster/afr/src/afr-self-heald.c, we create a healer for
> each AFR subvolume. In afr_selfheal_tryinodelk(), each healer performs 
> the INODELK for each AFR subvolume, using AFR_ONALL().
> 
> The result is that healers compete for the locks on the same inodes
> in the subvolumes. They sometimes conflict, and if we have only two 
> subvolumes, we run into this condition:
>                 if (ret < AFR_SH_MIN_PARTICIPANTS) {
>                         /* Either less than two subvols available, or another
>                            selfheal (from another server) is in progress. Skip
>                            for now in any case there isn't anything to do.
>                         */             
>                         ret = -ENOTCONN;
>                         goto unlock;
>                 }
> 
> Since there is no glustershd doing the work on another server, the entry
> will remain unhealed.

The index healer threads scan the .glusterfs/indices/xattrop directory of
their respective bricks in a while loop with a sleep(1). Even if only one
entry is present in it, wouldn't it be unlikely that contention happens
every time? One of the threads should succeed in getting the inodelks
eventually.

> I believe this is exactly the same problem I am
> trying to address in http://review.gluster.org/9074
> 
> What is wrong here? Should there really be healers for each subvolume, 
> or is it the AFR_ONALL() usage that is wrong? Or did I completely miss
> the thing?
> 

One healer per subvol (i.e. child/brick) of an AFR subvolume is required
because it may very well be the case that the
'.glusterfs/indices/xattrop/<gfid-name>' entry for the file that needs heal
is present on only one of the bricks. The index healer for that brick then
reads it and triggers the heal.


_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
