That makes sense ^_^
Unfortunately I haven't kept the interesting data you need.
Basically I had some write errors on my gluster clients when my
monitoring tool tested mkdir and file creation.
The server's load was huge during the healing (CPU at 100%), and the
disk latency increased a lot.
That may be the source of my write errors; we'll know for sure next
time... I'll keep and post all the data you asked for.
Is there no way on the client side to force the gluster mount to use a single peer?
Thanks for your help Karthik!
Quentin
On 09/10/2017 at 12:10, Karthik Subrahmanya wrote:
Hi,
There is no way to isolate the healing peer. Healing happens from the
good brick to the bad brick.
I guess your replica bricks are on different peers. If you try to
isolate the healing peer, it will stop the healing process itself.
What is the error you are getting while writing? It would help us
debug the issue if you could provide the output of the following
commands:
gluster volume info <vol_name>
gluster volume heal <vol_name> info
And also provide the client & heal logs.
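For reference, a minimal sketch of how the requested outputs and logs might be gathered in one place before replying (the volume name `gv0` and the log paths under `/var/log/glusterfs/` are assumptions; adjust them for your setup):

```shell
#!/bin/sh
# Collect the diagnostics requested above into a single directory.
VOL=gv0                               # assumed volume name; replace with yours
OUT=/tmp/gluster-debug-$(date +%Y%m%d)
mkdir -p "$OUT"

# Volume layout and current heal backlog
gluster volume info "$VOL"      > "$OUT/volume-info.txt"
gluster volume heal "$VOL" info > "$OUT/heal-info.txt"

# Self-heal daemon and client mount logs (default locations; adjust if needed)
cp /var/log/glusterfs/glustershd.log "$OUT/" 2>/dev/null
cp /var/log/glusterfs/*mnt*.log      "$OUT/" 2>/dev/null
```

Then tar up the directory and attach it to the reply.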
Thanks & Regards,
Karthik
On Mon, Oct 9, 2017 at 3:02 PM, ML <[email protected]> wrote:
Hi everyone,
I've been using gluster for a few months now, on a simple two-peer
replicated infrastructure, 22 TB each.
One of the peers was offline for 10 hours last week (RAID
resync after a disk crash), and while my gluster server was
healing bricks, I could see some write errors on my gluster clients.
I couldn't find a way to isolate my healing peer, in the
documentation or anywhere else.
Is there a way to avoid that? Detach the peer while healing?
Some tuning on the client side maybe?
I'm using gluster 3.9 on Debian 8.
Thank you for your help.
Quentin
_______________________________________________
Gluster-users mailing list
[email protected] <mailto:[email protected]>
http://lists.gluster.org/mailman/listinfo/gluster-users
<http://lists.gluster.org/mailman/listinfo/gluster-users>