I'm digging through them now but haven't seen anything. The shd log
shows nothing.
On 12/21/18 11:26 AM, John Strunk wrote:
I think the next step is to look through the logs...
- brick logs
- glusterd logs
- self-heal logs
Also the output from gluster vol heal info may be helpful (for the vol
w/ 85 pending).
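For anyone following along, the steps above can be sketched roughly as below. The log paths assume a stock RPM install (log directory /var/log/glusterfs), and "myvol" is a placeholder volume name:

```shell
# Default GlusterFS log locations (paths assume a standard install;
# "myvol" is a placeholder volume name)
less /var/log/glusterfs/glusterd.log     # glusterd management daemon log
less /var/log/glusterfs/glustershd.log   # self-heal daemon (shd) log
ls /var/log/glusterfs/bricks/            # one log file per local brick

# Pending-heal detail for the volume with 85 entries
gluster volume heal myvol info
gluster volume heal myvol info summary
```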
-John
On Fri, Dec 21, 2018 at 4:36 AM Brett Holcomb <[email protected]> wrote:
No changes. Still stuck on the same numbers after many hours.
On 12/20/18 8:26 PM, John Strunk wrote:
Assuming your bricks are up... yes, the heal count should be
decreasing.
There is/was a bug wherein self-heal would stop healing but would
still be running. I don't know whether your version is affected,
but the remedy is to just restart the self-heal daemon.
Force start one of the volumes that has heals pending. The bricks
are already running, but it will cause shd to restart and,
assuming this is the problem, healing should begin...
$ gluster vol start my-pending-heal-vol force
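To confirm the force start actually brought shd back, one way (volume name is a placeholder) might be:

```shell
# Verify the self-heal daemon came back online after the force start
# ("my-pending-heal-vol" is a placeholder volume name)
gluster vol status my-pending-heal-vol   # Self-heal Daemon should show Online: Y
ps -ef | grep glustershd                 # the shd process should be running

# Then watch the pending count, which should start decreasing
watch -n 60 'gluster vol heal my-pending-heal-vol info summary'
```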
Others could better comment on the status of the bug.
-John
On Thu, Dec 20, 2018 at 5:45 PM Brett Holcomb <[email protected]> wrote:
I have one volume that has 85 pending entries in healing and two more
volumes with 58,854 entries pending. These numbers come from the
volume heal info summary command, and they have stayed constant for
two days now. I've read the Gluster docs and many others; the Gluster
docs just give some commands, and the non-Gluster docs basically
repeat them.

Given that it appears no self-healing is going on for my volumes, I am
confused as to why.

1. If a self-heal daemon is listed on a host (all of mine show one
with a volume status command), can I assume it's enabled and running?

2. I assume the volume that has all the self-heals pending has some
serious issues, even though I can access the files and directories on
it. If self-heal is running, shouldn't the numbers be decreasing?

It appears to me that self-heal is not working properly, so how do I
get it to start working, or should I delete the volume and start over?

I'm running Gluster 5.2 on CentOS 7, latest and updated.
Thank you.
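For reference, the checks behind questions 1 and 2 could look something like this ("myvol" is a placeholder volume name):

```shell
# Question 1: is the self-heal daemon listed and online on each node?
gluster volume status myvol | grep -i "self-heal"

# Is the self-heal daemon option enabled for the volume?
gluster volume get myvol cluster.self-heal-daemon

# Question 2: does the pending count move over time?
gluster volume heal myvol info summary
```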
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users