Re: [Gluster-users] / - is in split-brain

2019-03-20 Thread Pablo Schandin
diagnostics.client-log-level: WARNING On Wed., Mar. 20, 2019 at 0:16, Nithya Balachandran () wrote: > Hi, > > What is the output of gluster volume info? > > Thanks, > Nithya > > On Wed, 20 Mar 2019 at 01:58, Pablo Schandin wrote: > >> Hello all! >>
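A minimal sketch of gathering the output requested above, assuming the volume name gv1 taken from the thread below (run on any node in the cluster):

    gluster volume info gv1                      # volume type, brick list, and options
    gluster volume heal gv1 info split-brain     # lists only the entries in split-brain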

[Gluster-users] / - is in split-brain

2019-03-19 Thread Pablo Schandin
Hello all! I had a volume with only a local brick running VMs and recently added a second (remote) brick to the volume. After adding the brick, the heal command reported the following: root@gluster-gu1:~# gluster volume heal gv1 info > Brick gluster-gu1:/mnt/gv_gu1/brick > / - Is in split-brain
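For a split-brain on the root directory of a replica, the usual CLI-based resolution is to pick one brick as the source. A sketch, assuming the local brick from the thread holds the good copy (that choice is an assumption and must be verified before running):

    gluster volume heal gv1 info split-brain                                   # confirm "/" is listed
    gluster volume heal gv1 split-brain source-brick gluster-gu1:/mnt/gv_gu1/brick /
    gluster volume heal gv1 info                                               # verify the entry clears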

[Gluster-users] Gluster trying to heal /

2018-11-22 Thread Pablo Schandin
Hello! I'm seeing something strange. When I executed a volume heal info on my volume I saw this: root@gluster-gu3:~# gluster volume heal gv3 info Brick gluster-gu3.xcade.net:/mnt/gv_gu3/brick / Status: Connected Number of entries: 1 Brick gluster-gu1.xcade.net:/mnt/gv_gu3/brick Status: Connected
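When "/" sits in the heal queue without being flagged as split-brain, inspecting the AFR changelog xattrs on each brick root usually shows which side holds pending changes. A sketch using the brick paths from the thread (run on each node):

    getfattr -d -m . -e hex /mnt/gv_gu3/brick
    # A non-zero trusted.afr.gv3-client-* value means that brick has pending
    # heals recorded against the other replica.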

Re: [Gluster-users] Upgrade gluster 3.7 to 4.1

2018-09-05 Thread Pablo Schandin
Hi! Sorry to bump this, but does anyone have any suggestions? I am more inclined to try to go directly from 3.7 to 4.1. Do you see any issues with that? Thanks! Pablo. On Tue., Aug. 28, 2018 at 9:45, Pablo Schandin (< pablo.schan...@avature.net>) wrote: > Hello! > > I have s

[Gluster-users] Upgrade gluster 3.7 to 4.1

2018-08-28 Thread Pablo Schandin
Hello! I have some old 2-node replicated gluster clusters on 3.7 and need to upgrade them to 4.1. Is it safe to jump that many versions directly following the 'online' (no downtime) upgrade documentation? Or do you think I need to go through all the intermediate versions? For example, upgrade the
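A sketch of the offline (per-node, with brief downtime) upgrade path, generally the safer choice across a jump this large. Package handling assumes Ubuntu with the upstream PPA already switched to 4.1, and the volume name gv1 is a placeholder; heals must finish before moving to the next node:

    systemctl stop glusterd                 # on one node at a time
    killall glusterfsd glusterfs            # stop remaining brick/client processes
    apt-get update && apt-get install glusterfs-server
    systemctl start glusterd
    gluster volume heal gv1 info            # wait for 0 entries before the next node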

Re: [Gluster-users] Issues in AFR and self healing

2018-08-21 Thread Pablo Schandin
! If I have any other news I will let you know. Pablo. On 08/16/2018 01:06 AM, Ravishankar N wrote: On 08/15/2018 11:07 PM, Pablo Schandin wrote: I found another log that I wasn't aware of in /var/log/glusterfs/brick, that is the mount log; I had confused the log files. In this file I see a lot

Re: [Gluster-users] Issues in AFR and self healing

2018-08-15 Thread Pablo Schandin
91-2018/08/15-16:41:03:103872-gv1-client-0-0-0 So I see a lot of disconnections, right? Might this be why self-healing is triggered all the time? Thanks! Pablo. On 08/14/2018 09:15 AM, Pablo Schandin wrote: Thanks for the info! I cannot see any logs
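A sketch of how to quantify those disconnects and check the ping timeout that governs them, assuming the volume name gv1 from the thread (exact log phrasing varies by GlusterFS version):

    grep -cE "disconnected from|ping timer expired" /var/log/glusterfs/bricks/*.log
    gluster volume get gv1 network.ping-timeout    # default is 42 seconds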

Re: [Gluster-users] Issues in AFR and self healing

2018-08-14 Thread Pablo Schandin
wrote: On 08/10/2018 11:25 PM, Pablo Schandin wrote: Hello everyone! I'm having some trouble with something, but I'm not quite sure what yet. I'm running GlusterFS 3.12.6 on Ubuntu 16.04. I have two servers (nodes) in the cluster in replica mode. Each server has 2 bricks

[Gluster-users] Issues in AFR and self healing

2018-08-10 Thread Pablo Schandin
Hello everyone! I'm having some trouble with something, but I'm not quite sure what yet. I'm running GlusterFS 3.12.6 on Ubuntu 16.04. I have two servers (nodes) in the cluster in replica mode. Each server has 2 bricks. As the servers are KVM hosts running several VMs, one brick has some
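For replica volumes hosting VM images, a commonly recommended starting point is the "virt" option group that ships with glusterfs-server. A sketch assuming the volume gv1 named later in the thread (review the resulting options before relying on them):

    gluster volume set gv1 group virt     # applies a set of VM-friendly options
                                          # (caching, eager-lock, quorum) in one step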