Another possibility is that those files are under constant modification. At the 
moment, self-heal info shows some false positives: files that do not need 
self-heal but are undergoing normal I/O still appear in the output. This has 
been a problem seen by a lot of users for a while now, and the following 
patches have been submitted upstream/3.5 and are under review. With these 
changes, false positives for files undergoing data changes (writes/truncates) 
will no longer be reported.
Upstream:
http://review.gluster.org/6637
http://review.gluster.org/6624
http://review.gluster.org/6603
http://review.gluster.org/6530
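
Until those patches are merged, one rough way to separate transient entries
from genuinely pending heals is to snapshot the heal info output twice and
keep only the entries that show up both times. A minimal sketch (the volume
name gv0 comes from this thread; the 60-second interval and the /tmp paths
are arbitrary choices, not anything prescribed here):

    # Snapshot heal info twice, one minute apart. Sorting is required
    # for comm below.
    gluster volume heal gv0 info | sort > /tmp/heal-1.txt
    sleep 60
    gluster volume heal gv0 info | sort > /tmp/heal-2.txt

    # Entries present in both snapshots are more likely to genuinely
    # need self-heal; entries that appear only once were probably just
    # under normal I/O when the first snapshot was taken.
    comm -12 /tmp/heal-1.txt /tmp/heal-2.txt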
Pranith
----- Original Message -----
> From: "Vijay Bellur" <vbel...@redhat.com>
> To: "Diep Pham Van" <i...@favadi.com>, gluster-users@gluster.org
> Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> Sent: Monday, January 6, 2014 3:16:58 PM
> Subject: Re: [Gluster-users] One brick always has a lot of files that need to 
> be healed
> 
> On 01/06/2014 01:18 PM, Diep Pham Van wrote:
> > Hello,
> > I have a cluster with 40 bricks running glusterfs 3.4.0 on CentOS 6.4.
> > When I run "gluster volume heal gv0 info" to see how many files need
> > healing, I notice that one brick always has about 20-30 entries,
> > while all other bricks have 0 or 1 entries. This problem has been
> > happening for two weeks.
> >
> > Is this some sign of a hardware problem? How can I find out?
> >
> 
> You can observe glustershd.log in the glusterfs log directory of the
> server hosting the relevant brick to determine why self-heals are not
> happening.
> 
> -Vijay
> 
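
To act on Vijay's suggestion above, something like the following can be run
on the server hosting the suspect brick (a minimal sketch;
/var/log/glusterfs/glustershd.log is the default self-heal daemon log
location on a stock install, and the grep pattern is only a starting point,
not anything prescribed in this thread):

    # Scan the most recent self-heal daemon messages for anything that
    # looks like a failed or stuck heal.
    tail -n 500 /var/log/glusterfs/glustershd.log \
        | grep -iE "error|warning|split-brain|failed"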