You have replica 2, so you can't really take 50% of your cluster down
without turning off quorum (and risking split brain). So detaching the
rebuilding peer is really not an option.
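(For reference, these are the quorum knobs in play; a minimal sketch
with <volname> as a placeholder -- verify the exact names and defaults
on your version with `gluster volume set help`:

  # client-side quorum (none / auto / fixed); 'auto' needs roughly a
  # majority of the replica bricks to be up before allowing writes
  gluster volume get <volname> cluster.quorum-type
  # server-side quorum; glusterd stops its bricks when too few peers
  # in the trusted pool are reachable
  gluster volume get <volname> cluster.server-quorum-type
)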
If you had replica 3 or an arbiter, you CAN detach or isolate the problem
peer. I've done things like change the
I only got this issue with this one template (CentOS 7); I checked other
templates and they worked fine.
I didn't quite understand what you need. I have one UUID and three files:
one is the img, one is the .meta, and the other is the .lease.
I got the output of all three files which represent the VM I
On Mon, Oct 09, 2017 at 03:29:41PM +0200, ML wrote:
> The server's load was huge during the healing (CPU at 100%), and the
> disk latency increased a lot.
Depending on the file sizes, you might want to consider changing the
heal algorithm. It might be better to just re-download the whole file
instead of computing diffs.
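Something like this, as a sketch (<volname> is a placeholder; 'diff'
and 'full' are the usual values, but check `gluster volume set help`):

  # 'full' copies the entire file from the good copy instead of
  # computing rolling checksums to sync only changed blocks ('diff')
  gluster volume set <volname> cluster.data-self-heal-algorithm full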
That makes sense ^_^
Unfortunately I haven't kept the interesting data you need.
Basically I had some write errors on my gluster clients when my
monitoring tool tested mkdir and file creation.
The server's load was huge during the healing (CPU at 100%), and the
disk latency increased a lot.
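(If the heal load itself is the problem, one thing that is sometimes
done is to leave healing to the self-heal daemon only, so client I/O
doesn't trigger heals inline. A hedged sketch, assuming these options
default to 'on' and with <volname> as a placeholder:

  # stop clients from performing heals as a side effect of their own
  # I/O; the self-heal daemon keeps healing in the background
  gluster volume set <volname> cluster.data-self-heal off
  gluster volume set <volname> cluster.metadata-self-heal off
  gluster volume set <volname> cluster.entry-self-heal off

Remember to switch them back on once the volume is healthy.)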
On 09/22/2017 07:27 PM, Niels de Vos wrote:
On Fri, Sep 22, 2017 at 12:27:46PM +0530, Ravishankar N wrote:
Hello,
In AFR we currently allow look-ups to pass through without taking into
account whether the lookup is served from the good or bad brick. We always
serve from the good brick
OK.
Is this problem unique to templates for a particular guest OS type? Or is
this something you see for all guest OSes?
Also, can you get the output of `getfattr -d -m . -e hex <path>` for the
following two "paths" from all of the bricks:
path to the file representing the VM created off this template
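For example, a sketch with hypothetical paths (run it on each brick
host against the brick's backend directory, not the fuse mount; the
real image path depends on your storage-domain layout):

  getfattr -d -m . -e hex /gluster/brick1/<image-uuid>/<volume-uuid>

The trusted.afr.* xattrs in that output are what show which copy AFR
considers good or pending heal.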
Hi,
There is no way to isolate the healing peer. Healing happens from the good
brick to the bad brick.
I guess your replica bricks are on different peers. If you try to isolate
the healing peer, it will stop the healing process itself.
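You can watch what is still pending instead; as a sketch (<volname> is
a placeholder, and the exact output varies by release):

  # files queued for heal, per brick
  gluster volume heal <volname> info
  # per-brick counts, on newer releases
  gluster volume heal <volname> statistics heal-count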
What is the error you are getting while writing? It would
Hi everyone,
I've been using gluster for a few months now, on a simple two-peer
replicated infrastructure, 22TB each.
One of the peers was offline for 10 hours last week (RAID resync after
a disk crash), and while my gluster server was healing bricks, I could
see some write errors on