diagnostics.client-log-level: WARNING
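For reference, that client log level is set per volume with `volume set`; a minimal sketch, assuming a volume named gv1 (the volume name is an assumption, not taken from this output):

```shell
# Set the client-side log level for volume gv1 (volume name assumed)
gluster volume set gv1 diagnostics.client-log-level WARNING

# Verify the current value
gluster volume get gv1 diagnostics.client-log-level
```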
On Wed., Mar. 20, 2019 at 0:16, Nithya Balachandran ()
wrote:
> Hi,
>
> What is the output of gluster volume info?
>
> Thanks,
> Nithya
>
> On Wed, 20 Mar 2019 at 01:58, Pablo Schandin wrote:
>
>> Hello all!
>>
Hello all!
I had a volume with only a local brick running vms and recently added a
second (remote) brick to the volume. After adding the brick, the heal
command reported the following:
root@gluster-gu1:~# gluster volume heal gv1 info
Brick gluster-gu1:/mnt/gv_gu1/brick
/ - Is in split-brain
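When heal info reports "/ - Is in split-brain", the gluster CLI offers resolution policies. A hedged sketch, assuming the volume gv1 and brick from the output above; the file path is a placeholder, and you should check the split-brain documentation for your version before running any of these:

```shell
# List only the entries that are actually in split-brain
gluster volume heal gv1 info split-brain

# Resolve a file split-brain by keeping the copy with the latest mtime
# (/some/file is a placeholder path, not from this thread)
gluster volume heal gv1 split-brain latest-mtime /some/file

# Or resolve by declaring one brick the source for a given file
gluster volume heal gv1 split-brain source-brick gluster-gu1:/mnt/gv_gu1/brick /some/file
```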
Hello!
I'm seeing something strange. When I ran a volume heal info on my volume I
saw this:
root@gluster-gu3:~# gluster volume heal gv3 info
Brick gluster-gu3.xcade.net:/mnt/gv_gu3/brick
/
Status: Connected
Number of entries: 1
Brick gluster-gu1.xcade.net:/mnt/gv_gu3/brick
Status: Connected
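To watch whether that single pending entry drains over time, the per-brick pending-heal counters can be polled; a sketch assuming the volume gv3 from the output above:

```shell
# Number of entries still pending heal on each brick of gv3
gluster volume heal gv3 statistics heal-count

# Full heal statistics (crawl history per brick)
gluster volume heal gv3 statistics
```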
Hi! Sorry to bump this, but does anyone have any suggestions? I am more
inclined to try going directly from 3.7 to 4.1.
Do you see any issues with that?
Thanks!
Pablo.
On Tue., Aug. 28, 2018 at 9:45, Pablo Schandin (<
pablo.schan...@avature.net>) wrote:
Hello!
I have some old 2-node replicated gluster clusters on 3.7 and would
need to upgrade them to 4.1. Is it safe to jump that many versions
directly following the 'online' (i.e. no downtime) documentation?
Or do you think I need to go through all the intermediate versions? For
example, upgrade the
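For what it's worth, after any major-version jump the cluster operating version also has to be raised once every node runs the new binaries; a sketch (the value 40100 matches the 4.1 release numbering, but verify it against the release notes for your actual target version):

```shell
# After upgrading all nodes, check the current cluster op-version
gluster volume get all cluster.op-version

# Bump it to the 4.1 level (40100 assumed from the 4.1 release numbering)
gluster volume set all cluster.op-version 40100
```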
! If I have any other news I will let you know.
Pablo.
On 08/16/2018 01:06 AM, Ravishankar N wrote:
On 08/15/2018 11:07 PM, Pablo Schandin wrote:
I found another log that I wasn't aware of in
/var/log/glusterfs/brick, that is the mount log; I had confused the log
files. In this file I see a lot of lines like
91-2018/08/15-16:41:03:103872-gv1-client-0-0-0
So I am seeing a lot of disconnections, right? Might this be why
self-healing is triggered all the time?
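To get a rough count of those disconnects, grepping the mount log works; a minimal sketch over a fabricated sample log (the log path and message format below are assumptions for illustration, not taken from this thread):

```shell
# Fabricated sample of client log lines (format assumed for illustration)
cat > /tmp/sample-mount.log <<'EOF'
[2018-08-15 16:41:03.103872] I 0-gv1-client-0: disconnected from gv1-client-0
[2018-08-15 16:45:10.000001] I 0-gv1-client-0: connected to gv1-client-0
[2018-08-15 16:50:22.000002] I 0-gv1-client-0: disconnected from gv1-client-0
EOF

# Count disconnect events; on a real system, point this at the actual
# mount log under /var/log/glusterfs/ instead of the sample file
grep -c "disconnected from" /tmp/sample-mount.log   # prints 2
```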
Thanks!
Pablo.
Avature
Get Engaged to Talent
On 08/14/2018 09:15 AM, Pablo Schandin wrote:
Thanks for the info!
I cannot see any logs
On 08/10/2018 11:25 PM, Pablo Schandin wrote:
Hello everyone!
I'm having some trouble with something, but I'm not quite sure with
what yet. I'm running GlusterFS 3.12.6 on Ubuntu 16.04. I have two
servers (nodes) in the cluster in replica mode. Each server has 2
bricks. As the servers are KVM hosts running several VMs, one brick has some