‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, October 26, 2020 2:56 PM, Diego Zuccato <diego.zucc...@unibo.it> 
wrote:

> The volume is built from 26 10TB disks holding genetic data. I don't
> have exact numbers at hand, but it's still early days, so a bit less
> than 10TB is actually used.
> But you're only removing the arbiters; you always have two copies of
> your files. The worst that can happen is a split-brain condition
> (avoidable by requiring a 2-node quorum, in which case the worst is
> that the volume goes read-only).
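About that 2-node quorum: I assume you mean the client-side cluster.quorum-*
volume options, so roughly something like this (untested on my side, and
<VOLNAME> is just a placeholder)?

gluster volume set <VOLNAME> cluster.quorum-type fixed
gluster volume set <VOLNAME> cluster.quorum-count 2

That way writes should only be allowed while both data bricks of a replica
pair are up, and the volume simply goes read-only otherwise. Please correct
me if you had something else in mind.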

Right, seen like that, this sounds reasonable. Do you remember the exact 
command you ran to remove the bricks? I was thinking it should be:

gluster volume remove-brick <VOLNAME> <BRICK> force

but should I use "force" or "start"?
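My guess from the docs (untested, and the brick paths below are just 
placeholders) is that dropping back to plain replica 2 also needs the new 
replica count on the command line, with one arbiter brick listed per replica 
set, something like:

gluster volume remove-brick <VOLNAME> replica 2 \
  arbiternode:/bricks/arb1 arbiternode:/bricks/arb2 force

and that "start" is only meant for removals that have to migrate data off the 
bricks, which shouldn't apply here since the arbiters hold no file data. Does 
that match what you did?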

> IIRC it took about 3 days, but the arbiters are on a VM (8 CPUs, 8GB RAM)
> that uses an iSCSI disk, with more than 80% continuous load on both CPU and RAM.

That's quite long, I must say, and I'm in the same situation as you: my 
arbiter is also a VM.
