All,
I notice on the main page, Gluster 3.9 is listed as
Gluster 3.9 is the latest major release as of November 2016.
Yet, if you click on the download link, there is no mention of Gluster 3.9 on
that page at all. It says:
GlusterFS version 3.8 is the latest version at the moment.
Is there going
Cc: gluster-users@gluster.org; Andrus, Brian Contractor <bdand...@nps.edu>
Subject: Re: [Gluster-users] Replace replicate in single brick
On 10 Jan 2017 at 05:59, "Ravishankar N"
<ravishan...@redhat.com<mailto:ravishan...@redhat.com>> wrote:
If you are using glusterfs
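For reference, a minimal sketch of the usual approach to replacing one brick of
a replica: point the volume at a fresh, empty brick and let self-heal
repopulate it (VOLNAME and the brick paths are placeholders):

  gluster volume replace-brick VOLNAME node1:/bricks/old node1:/bricks/new commit force
  gluster volume heal VOLNAME info    # watch heal progress on the new brick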
All,
I am a little confused here...
I have a 7.2 terabyte mirrored gluster implementation. The backing filesystem
is ZFS.
This is what I am seeing:
node45: Filesystem   Size  Used  Avail  Use%  Mounted on
node45: /dev/sdb1    190M   33M   148M   19%  /boot
node45: /dev/sdb3    854G   12G
want to try and
track it down.
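A quick way to cross-check the sizes gluster itself reports for each brick
(a minimal sketch; VOLNAME is a placeholder):

  gluster volume status VOLNAME detail    # per-brick size, free space, and inode counts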
Brian Andrus
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, January 28, 2016 10:36 PM
To: Andrus, Brian Contractor <bdand...@nps.edu>; gluster-users@gluster.org
Subject: Re: [Gluster-users] Moved files from one directory to another
All,
It seems that snapshotting for volumes based on ZFS is still 'in the works'. Is
that the case?
snapshot create: failed: Snapshot is supported only for thin provisioned LV.
Ensure that all bricks of DATA are thinly provisioned LV.
Using glusterfs-3.7.6-1.el6.x86_64
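For reference, a minimal sketch of the thin-provisioned LVM layout the snapshot
feature expects, since ZFS-backed bricks do not qualify (device and names are
placeholders):

  pvcreate /dev/sdX
  vgcreate gluster_vg /dev/sdX
  lvcreate -L 100G --thinpool gluster_pool gluster_vg       # thin pool
  lvcreate -V 90G --thin -n brick1 gluster_vg/gluster_pool  # thin LV for the brick
  mkfs.xfs /dev/gluster_vg/brick1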
Brian Andrus
All,
I have a glusterfs setup with a disperse volume over 3 ZFS bricks, one on each
node.
I just did a 'mv' of some log files from one directory to another and when I
look in the directory, they are not there at all!
Neither is any of the data I used to have. It is completely empty. I try
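One hedged way to check whether the files ever landed on disk is to look at the
bricks directly on each node and ask gluster what it thinks needs healing
(VOLNAME and the brick path are placeholders):

  ls -la /bricks/data/newdir/       # inspect the brick path itself, not the mount
  gluster volume heal VOLNAME info  # any entries pending heal?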
All,
I have one gluster node (of 4) that is using almost all of the available memory
on my box. Memory usage has been growing and is up to 89%.
I have already done 'echo 2 > /proc/sys/vm/drop_caches'
There seems to be no effect.
Are there any gotchas to just restarting glusterd?
This is a CentOS 6.6 system
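For what it's worth, glusterd is only the management daemon; the brick
(glusterfsd) processes keep serving I/O while it restarts, so on CentOS 6 a
plain restart is generally considered safe:

  service glusterd restart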
All,
I have a small gluster filesystem on 3 nodes.
I have a perl program that multi-threads, and each thread writes its output to
one of 3 files depending on some results.
My trouble is that I am seeing missing lines from the output.
The input is a file of 500 lines. Depending on the line, it
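One hedged guess, independent of gluster: multiple threads writing to a shared
file without serialization (flock, or an append-mode open) commonly drop lines.
A quick sanity check on the totals (output file names are hypothetical):

  wc -l out1.txt out2.txt out3.txt   # the three counts should sum to 500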
All,
I am seeing, VERY consistently, that when I do a 'gluster peer status' or
'gluster pool list', the system 'hangs' for up to 1 minute before spitting back
results.
I have 10 nodes all on the same network and currently ZERO volumes or bricks
configured. Just trying to get good performance
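Hangs like this are often name-resolution timeouts rather than gluster itself;
a hedged first check is that every peer resolves quickly from every node
(hostnames are placeholders):

  time getent hosts node01     # should return instantly for all 10 peers
  time gluster pool list       # compare against the resolution time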
All,
I am trying to do an install of gluster 3.7.6
OS is CentOS 6.5
I start doing peer probe commands to add the servers, but I am constantly
getting intermittent rpc_clnt_ping_timer_expired messages in the logs and
arbitrary servers show 'Disconnected' when I try doing 'gluster pool list'
I
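A hedged first check for intermittent disconnects is whether the glusterd
management port is reachable between all peers; glusterd listens on 24007/tcp:

  iptables -L -n | grep 24007   # is the port allowed on CentOS 6?
  telnet otherserver 24007      # quick reachability test from each node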
All,
I am seeing a problem with conflicting ports.
I am running a relatively simple gluster implementation (4 x 2 = 8)
But on the same nodes I also run lustre.
I find that since gluster starts first, it seems to take over port 988, which
lnet needs.
Unfortunately, I do not see where I can affect
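For context: gluster clients bind privileged source ports (below 1024) by
default, which is how they can land on 988. A hedged workaround is to allow
unprivileged ports instead (VOLNAME is a placeholder):

  gluster volume set VOLNAME server.allow-insecure on
  # and in /etc/glusterfs/glusterd.vol on every node, then restart glusterd:
  #   option rpc-auth-allow-insecure on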
I had this as well. It was a BIG pain for me. FWIW, after upgrading to gluster
3.7, I have not had an issue.
YMMV
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
-----Original Message-----
From: gluster-users-boun...@gluster.org
Usually works:
yum downgrade packagename
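For example, to step back to a specific earlier build (package names and
version are illustrative):

  yum downgrade glusterfs-3.7.6-1.el6 glusterfs-server-3.7.6-1.el6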
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kingsley
Sent:
Andrus, Brian Contractor wrote:
I have similar issues with gluster and am starting to wonder if it really is
stable for VM images.
My setup is simple: 1X2=2
I am mirroring a disk, basically.
Trouble has been that the VM images (qcow2 files) go split-brained when one of
the VMs gets busier than usual. Once that happens, heal
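A hedged note: for VM image workloads the usual recommendation is the virt
option group, which enables quorum and related settings tuned for qcow2 files,
and heal info can list the affected images (VOLNAME is a placeholder):

  gluster volume set VOLNAME group virt
  gluster volume heal VOLNAME info split-brain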
All,
I have noticed that one of my HDs in a 1x2 replica is throwing sector errors.
Now I am wondering about the best way to fix or replace it. The backing
filesystem is XFS, so xfs_repair can only be used when it is offline.
So, can I unmount the brick, run the repairs to block out the bad sectors
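A minimal sketch of the usual offline-repair sequence for one brick of a
replica (volume name, paths, and device are placeholders): stop just that
brick's glusterfsd, repair, bring it back, then heal.

  gluster volume status VOLNAME        # find the PID of the failing brick
  kill PID                             # stop only that brick process
  umount /bricks/data
  xfs_repair /dev/sdX1
  mount /bricks/data
  gluster volume start VOLNAME force   # restarts the downed brick
  gluster volume heal VOLNAME full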