Last Friday, I rebooted one of my gluster nodes and it didn't properly
mount the filesystem holding its brick (I had forgotten to add it to
fstab...), so, when I got back to work on Monday, its root filesystem
was full and the gluster heal info showed around 25000 entries needing
to be healed.
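(For what it's worth, the missing fstab line was something along these
lines; the device, mount point, and filesystem type here are
placeholders rather than my actual paths:

    /dev/sdb1  /data/brick1  xfs  defaults  0  0
)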

I got the filesystems straightened out and, within a matter of minutes,
the number of entries waiting to be healed in that subvolume dropped to
59.  (The list shows up twice, of course: the cluster is replica 2 plus
arbiter, so the other full replica and the arbiter both report the same
set of entries.)  Over a full day later, it's still at 59.
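In case it matters, I've been watching the count with the heal-count
statistics, roughly like this, where "myvol" stands in for the real
volume name:

    gluster volume heal myvol statistics heal-count

Each brick reports its own number, which is why the same 59 entries
appear twice.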

Is there anything I can do to kick the self-heal back into action and
get those final 59 entries cleaned up?
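The only triggers I've found so far are the index and full heals, along
these lines ("myvol" again a placeholder):

    gluster volume heal myvol          # trigger an index heal
    gluster volume heal myvol full     # force a full crawl of the volume

but I'm not sure whether either is appropriate here, or whether
something else entirely (restarting the self-heal daemon, say) is the
right move.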

-- 
Dave Sherohman
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users