Okay, just a general suggestion:
For the files that have link count 1 on the back-end on the 'bad' brick, remove the file from the back end on that brick and perform a stat on the file from the *mount* on the 'good' brick. This should create the file and the .glusterfs hard link on the bad brick too. If that works, you can do the same for all files.

But you have to be sure that there are no pending heals on the parent directory before you do this or a reverse heal will happen (and the file will disappear from the good brick too!). If you are not sure, the safest way is to keep a copy of the file (from the good brick) elsewhere before doing this.
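Roughly, the per-file steps above could be sketched like this. This is an untested illustration, not gluster tooling: the recover_file helper and all paths are placeholders you would adapt to your brick/mount layout, and `stat -c %h` (GNU coreutils) is just one way to read the hard-link count on the backend.

```shell
#!/bin/sh
# Sketch of the recovery step described above, under the assumption
# that a backend file with link count 1 is missing its .glusterfs
# hard link. recover_file and the paths are illustrative placeholders.
recover_file() {
    brick=$1    # backend path on the 'bad' brick
    mount=$2    # FUSE mount served by the 'good' brick
    file=$3     # file path relative to the brick/mount root
    backup=$4   # safety-copy directory

    # Keep a copy first, in case a reverse heal happens and the file
    # disappears from the good brick too.
    cp -a "$mount/$file" "$backup/"

    # Only touch backend files whose link count is 1, i.e. whose
    # .glusterfs hard link is missing.
    links=$(stat -c %h "$brick/$file")
    if [ "$links" -eq 1 ]; then
        rm "$brick/$file"                 # drop the orphaned backend copy
        stat "$mount/$file" > /dev/null   # stat via the mount should re-create it
    fi
}
```

Files whose backend copy already has link count 2 (file plus its .glusterfs link) are left alone by the link-count check.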



On 01/29/2016 08:31 PM, Ronny Adsetts wrote:
Ravishankar N wrote on 29/01/2016 14:35:
What version of gluster are you using? Was there a chance there were
directory renames from the client?
Currently running 3.6.8-1 from gluster.org on Debian Wheezy, arch is amd64. My 
other reply has a history of the upgrades.

The directory containing the bulk of the files (win_patches) has not been 
renamed since files were copied there a few weeks ago when the volume was 
created. There are affected files all over the volume.

The gluster volumes are not really being used in anger yet. This 'software' volume is 
being used by our patch management software via the samba-shared fuse-mounted volume. I 
had noticed problems within the app when running "3.2.7-3+deb7u1~bpo60+1" from 
Debian Squeeze but had not investigated as the system upgrade was pending anyway.

It's possible that node reboots have coincided with the patch management 
software running scheduled patch downloads, though not to the extent that ~50% 
of the node files would be in some sort of indeterminate state.

There was a bug which Pranith fixed quite some time back:
http://review.gluster.org/#/c/7879/ for missing .glusterfs link
files.
Ronny


_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users