Re: [Gluster-users] Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica

2018-11-23 Thread Anh Vo
pending self-heal that would have made hashed mode worse, or is it about as bad as any brick selection policy? Thanks On Thu, Nov 22, 2018 at 7:59 PM Ravishankar N wrote: > > > On 11/22/2018 07:07 PM, Anh Vo wrote: > > Thanks Ravi, I will try that option. > One question: > Let
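A minimal sketch of how one might check whether pending self-heals are skewing read-brick selection, assuming a volume named gv0 (the output format varies slightly across gluster versions):

    # Files each brick still needs to heal
    gluster volume heal gv0 info

    # Just the per-brick counts of entries awaiting heal
    gluster volume heal gv0 statistics heal-count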

Re: [Gluster-users] Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica

2018-11-22 Thread Anh Vo
re > are no self-heals pending). > > Hope this helps, > Ravi > > [1] https://review.gluster.org/#/c/glusterfs/+/19698/ > > On 11/22/2018 10:20 AM, Anh Vo wrote: > > Hi, > Our setup: We have a distributed replicated setup of 3 replica. The total > number of servers va
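A hedged sketch of inspecting and changing the AFR read policy, assuming the volume is named gv0; the accepted values for cluster.read-hash-mode depend on the gluster version (the gerrit change linked above extends them):

    # Current read-related settings
    gluster volume get gv0 cluster.read-hash-mode
    gluster volume get gv0 cluster.choose-local

    # 2 = hash on gfid and client PID, so different clients spread reads
    # across the bricks of a replica set
    gluster volume set gv0 cluster.read-hash-mode 2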

[Gluster-users] Gluster distributed replicated setup does not serve read from all bricks belonging to the same replica

2018-11-21 Thread Anh Vo
Hi, Our setup: We have a distributed replicated setup with replica 3. The total number of servers varies between clusters; in some cases we have a total of 36 (12 x 3) servers, in others we have 12 servers (4 x 3). We're using gluster 3.12.15. In all instances what I am noticing is that only
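One way to confirm that reads are being served from a single brick of each replica set is gluster's built-in profiler; a sketch assuming the volume is named gv0 (profiling adds some overhead, so stop it afterwards):

    gluster volume profile gv0 start
    # ...run the normal read workload for a while...
    gluster volume profile gv0 info    # compare READ fop counts across bricks
    gluster volume profile gv0 stop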

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
4, 2018 at 9:01 AM, Ravishankar N wrote: > > > On 07/04/2018 09:20 PM, Anh Vo wrote: > > I forgot to mention we're using 3.12.10 > > On Wed, Jul 4, 2018 at 8:45 AM, Anh Vo wrote: > >> If I run "sudo gluster volume heal gv0 split-brain latest-mtime /" I g
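When the latest-mtime policy cannot be applied to the root directory, one alternative is to name a source brick explicitly. A sketch with a hypothetical brick path; it assumes you have already inspected both copies and trust the one on node1:

    # node1:/data/brick/gv0 is a placeholder for the brick holding the good copy
    gluster volume heal gv0 split-brain source-brick node1:/data/brick/gv0 /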

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
I forgot to mention we're using 3.12.10 On Wed, Jul 4, 2018 at 8:45 AM, Anh Vo wrote: > If I run "sudo gluster volume heal gv0 split-brain latest-mtime /" I get > the following: > > Lookup failed on /:Invalid argument. > Volume heal failed. > > node2 was not
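To see which entries gluster itself currently flags as split-brain, and on which bricks, a sketch assuming the volume is named gv0:

    gluster volume heal gv0 info split-brain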

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-04 Thread Anh Vo
te? > > 3. As for the discrepancy in output of heal info, is node2 connected to > the other nodes? Does heal info still print the details of all 3 bricks > when you run it on node2 ? > -Ravi > > > On 07/04/2018 01:47 AM, Anh Vo wrote: > > Actually we just discovered that th
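A quick sketch of the connectivity checks implied by the question above, run from node2 (volume name gv0 assumed):

    gluster peer status          # peers should show "Peer in Cluster (Connected)"
    gluster volume status gv0    # bricks and self-heal daemons should be Online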

Re: [Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-03 Thread Anh Vo
02 trusted.afr.gv0-client-2=0x trusted.gfid=0x0001 trusted.glusterfs.dht=0x0001 trusted.glusterfs.volume-id=0x7fa3aac372d543f987ed0c66b77f02e2 Where do I go from here? Thanks On Tue, Jul 3, 2018 at 11:54 AM, Anh Vo wrot
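Xattr dumps like the one above are typically taken with getfattr directly against each brick's copy of the path (not the fuse mount); the brick path below is a placeholder. The trusted.afr.<volume>-client-N values encode pending data, metadata and entry heal counters, so non-zero counters on both copies blaming each other indicate split-brain:

    # run on each server hosting a replica of the file
    getfattr -d -m . -e hex /data/brick/gv0/path/to/file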

[Gluster-users] Failed to mount nfs due to split-brain and Input/Output Error

2018-07-03 Thread Anh Vo
I am trying to mount the gluster volume over NFS and mount.nfs fails. Looking at nfs.log I am seeing these entries. Heal info does not show the mentioned gfid ( 00000000-0000-0000-0000-000000000001 ) being in split-brain. [2018-07-03 18:16:27.694953] W [MSGID: 112199]
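The gfid in question is the volume root. A hedged sketch for correlating the NFS server's view with the self-heal view, assuming the default gluster NFS log location and a volume named gv0:

    # entries the gluster NFS server flagged; the log path may differ on your install
    grep -i split-brain /var/log/glusterfs/nfs.log | tail

    # what the self-heal daemon currently considers split-brain
    gluster volume heal gv0 info split-brain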

Re: [Gluster-users] gluster becomes too slow, need frequent stop-start or reboot

2018-06-25 Thread Anh Vo
FINODELK We have about one or two months before we need to make a decision on whether to keep Gluster, and so far it has been a lot of headache. On Thu, Jun 14, 2018 at 10:18 AM, Anh Vo wrote: > Our gluster keeps getting to a state where it becomes painfully slow and > many of our applications ti
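FINODELK pile-ups can be examined with a statedump; a sketch assuming the volume is gv0 and the default statedump directory (the path is configurable, so it may differ):

    gluster volume statedump gv0
    # dumps land under /var/run/gluster by default; blocked inode locks show up as BLOCKED
    grep -i blocked /var/run/gluster/*.dump.*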

[Gluster-users] gluster becomes too slow, need frequent stop-start or reboot

2018-06-14 Thread Anh Vo
Our gluster keeps getting to a state where it becomes painfully slow and many of our applications time out on read/write calls. When this happens, a simple ls at the top-level directory from the mount takes somewhere between 8-25s (normally it is very fast, at most 1-2s). The top-level directory only
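For the slow top-level listing, gluster's top command shows which directories are being hit hardest on each brick; a sketch assuming the volume is named gv0:

    gluster volume top gv0 opendir list-cnt 10
    gluster volume top gv0 readdir list-cnt 10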

[Gluster-users] Rebalance state stuck or corrupted

2018-05-23 Thread Anh Vo
We have had a rebalance operation going on for a few days. After a couple of days the rebalance status said "failed". We stopped the rebalance operation by doing gluster volume rebalance gv0 stop. The rebalance log indicated gluster did try to stop the rebalance. However, when we now try to stop the
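The commands involved, sketched for volume gv0; the log path shown is the usual default on the node that ran the rebalance, but may differ on your install:

    gluster volume rebalance gv0 status
    gluster volume rebalance gv0 stop
    # per-volume rebalance log on the rebalancing node
    tail -n 50 /var/log/glusterfs/gv0-rebalance.log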

Re: [Gluster-users] Expand distributed replicated volume with new set of smaller bricks

2018-04-04 Thread Anh Vo
led for the volume and run rebalance with the start force option. > Which version of gluster are you running (we fixed a bug around this a > while ago)? > > Regards, > Nithya > > On 4 April 2018 at 11:36, Anh Vo <vtq...@gmail.com> wrote: > >> We currently have a 3 nod
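The truncated reply above appears to refer to an option that should be enabled before rebalancing onto the smaller bricks; a hedged sketch assuming it is cluster.weighted-rebalance (which weights new file placement by brick size), on a volume named gv0:

    gluster volume get gv0 cluster.weighted-rebalance
    gluster volume set gv0 cluster.weighted-rebalance on
    gluster volume rebalance gv0 start force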

[Gluster-users] Expand distributed replicated volume with new set of smaller bricks

2018-04-04 Thread Anh Vo
We currently have a 3-node gluster setup; each node has a 100TB brick (total 300TB, usable 100TB due to replica factor 3). We would like to expand the existing volume by adding another 3 nodes, but each will only have a 50TB brick. I think this is possible, but will it affect gluster performance, and if
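A minimal sketch of the expansion itself, with placeholder host names and brick paths; new bricks have to be added in multiples of the replica count (three here):

    gluster volume add-brick gv0 replica 3 \
        node4:/data/brick1/gv0 node5:/data/brick1/gv0 node6:/data/brick1/gv0

    # then spread existing data onto the new replica set
    gluster volume rebalance gv0 start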