Our bricks are 50TB each, running ZoL with 16-disk raidz2 pools. They work OK with
Gluster now that the xattr handling has been fixed.
8k writes with fsync: 170 MB/s; reads: 335 MB/s.
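For reference, numbers like that can be reproduced with fio against the FUSE mount;
a rough sketch (the mount point, job names and sizes are just examples, not our exact run):

# 8k sequential writes, fsync after every write
fio --name=fsync-write --directory=/mnt/gluster --ioengine=sync --rw=write --bs=8k --size=4G --fsync=1
# 8k sequential reads
fio --name=seq-read --directory=/mnt/gluster --ioengine=sync --rw=read --bs=8k --size=4G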
On Tue, 2014-10-07 at 14:24 +1000, Dan Mons wrote:
> We have 6 nodes with one brick per node (2x3 replicate-distribute).
> 35TB per brick, for 107
We have 6 nodes with one brick per node (2x3 replicate-distribute).
35TB per brick, for 107TB total usable.
Not sure if our low brick count (or maybe large brick per node?)
contributes to the slowdown when full.
We're looking to add more nodes by the end of the year. After that,
I'll look this t
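For anyone curious what that layout looks like on the CLI, a minimal sketch (hostnames
and brick paths are made up; replica 2 across 3 distribute subvolumes is assumed, which
matches 107TB usable out of 6 x 35TB bricks):

gluster volume create prodvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 \
    node3:/bricks/b1 node4:/bricks/b1 \
    node5:/bricks/b1 node6:/bricks/b1
gluster volume start prodvol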
Not an issue for us; we're at 92% on an 800TB distributed volume, 16
bricks spread across 4 servers. Lookups can be a bit slow but raw IO
hasn't changed.
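A quick way to keep an eye on per-brick fill levels (the volume name and brick paths
are just examples):

# Disk usage as Gluster reports it for each brick
gluster volume status bigvol detail | grep -E 'Brick|Disk Space'
# Or straight from the brick filesystems on each server
df -h /bricks/*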
On Tue, 2014-10-07 at 09:16 +1000, Dan Mons wrote:
> On 7 October 2014 08:56, Jeff Darcy wrote:
> > I can't think of a good reason for such a
On 7 October 2014 08:56, Jeff Darcy wrote:
> I can't think of a good reason for such a steep drop-off in GlusterFS.
> Sure, performance should degrade somewhat due to fragmenting, but not
> suddenly. It's not like Lustre, which would do massive preallocation
> and fall apart when there was no lon
> Yup, pretty common for us. Once we hit ~90% on either of our two
> production clusters (107 TB usable each), performance takes a beating.
>
> I don't consider this a problem, per se. Most file systems (clustered
> or otherwise) are the same. I consider a high water mark for any
> production f
Same here, we try to keep them under 80% too.
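For what it's worth, the DHT option that helps enforce that kind of headroom is
cluster.min-free-disk; a minimal sketch, assuming a volume called myvol:

# Ask DHT to avoid placing new files on bricks with less than 20% free
# (the option accepts a percentage or an absolute size)
gluster volume set myvol cluster.min-free-disk 20%
gluster volume info myvol    # the reconfigured option shows up here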
2014-10-06 19:40 GMT-03:00 Dan Mons:
> Yup, pretty common for us. Once we hit ~90% on either of our two
> production clusters (107 TB usable each), performance takes a beating.
>
> I don't consider this a problem, per se. Most file systems (cluster
Yup, pretty common for us. Once we hit ~90% on either of our two
production clusters (107 TB usable each), performance takes a beating.
I don't consider this a problem, per se. Most file systems (clustered
or otherwise) are the same. I consider a high water mark for any
production file system t
Hello,
My glusterfs-3.4.2-1.el6 setup is having a performance issue. It was working fine
until the 100TB file system hit ~90% full. I had been seeing around 90Mb/s for the
last 10 months. This then dropped to 40Mb/s. Since nothing changed on the
system, I focused on the transition to the 90% full file syst
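In case it helps, the kind of quick check I have in mind for before/after comparisons
is roughly this (the mount point is an example, not my exact path):

df -h /mnt/gluster                     # confirm how full the volume is
dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync
dd if=/mnt/gluster/ddtest of=/dev/null bs=1M
rm /mnt/gluster/ddtest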
You are correct... Typo on my part. It happened when I installed
3.6.0-beta3.
I'll file the bug report so that fuse installation is dependent on attr
being installed... Thanks...
David
-- Original Message --
From: "Niels de Vos"
To: "David F. Robinson"
Cc: gluster-users@gluster.org
On Mon, Oct 06, 2014 at 02:30:11PM, David F. Robinson wrote:
> When I installed the 3.5.3beta on my HPC cluster, I get the following
> warnings during the mounts:
>
> WARNING: getfattr not found, certain checks will be skipped..
> I do not have attr installed on my compute nodes. Is this so
Hi all,
I have two Red Hat EL6 servers and installed glusterfs:
[~]# rpm -qa | grep gluster |sort
glusterfs-3.5.1-1.el6.x86_64
glusterfs-api-3.5.1-1.el6.x86_64
glusterfs-cli-3.5.1-1.el6.x86_64
glusterfs-fuse-3.5.1-1.el6.x86_64
glusterfs-libs-3.5.1-1.el6.x86_64
glusterfs-server-3.5.1-1.el6.x86_64
I create o
When I installed the 3.5.3beta on my HPC cluster, I get the following
warnings during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this something
that I need in order for gluster to work properly, or can it safely be ignored?
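If attr does turn out to be required, installing it on EL6 is just a matter of the
following (the brick path in the last line is only an example):

yum install -y attr
getfattr --version
# getfattr is also what you use to inspect Gluster's own xattrs on a brick
getfattr -d -m . -e hex /bricks/b1/some/file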
Testcase tests/bugs/bug-887145.t fails due to a permission issue.
Here is a snippet,
root@fractal-0025:/home/kiran/glusterfs# touch /mnt/glusterfs/0/dir/file
touch: cannot touch '/mnt/glusterfs/0/dir/file': Permission denied
root@fractal-0025:/home/kiran/glusterfs#
root@fractal-0025:/home/kiran/gluste
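For reference, the test can be re-run on its own from a glusterfs source checkout,
roughly like this (assuming the test framework's prerequisites are installed):

prove -vf tests/bugs/bug-887145.t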
Hello,
http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
page has been updated with Ubuntu 14.04 steps, so you can now run the test
suite on Ubuntu.
I ran the test suite and the results are below.
Gluster version: v3.4.5
OS: Ubuntu 14.04 LTS
Test Summary Report
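For anyone following along, the suite was kicked off roughly as the wiki page
describes, from the top of a glusterfs source checkout:

sudo ./run-tests.sh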
Hi Tom,
Which version of Gluster are you running? I talked with my operations team, and
they don't recall a log entry mentioning afr_dir_exclusive_crawl, but the AFR prefix
points to self-heal activity.
I therefore suspect you're using Gluster in a very similar way to how we do,
which means a lot of file entries in a
On 2014-10-03 16:23, Niels de Vos wrote:
On Fri, Oct 03, 2014 at 03:26:04PM +0200, Peter Haraldson wrote:
Hi all!
Hi Peter!
I'm rather new to glusterfs, trying it out for redundant storage for my very
small company.
I have a minimal setup of glusterfs, 2 servers (storage1 & storage2) with
o
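For context, the setup is roughly along these lines (brick paths and volume name are
examples, not my exact ones):

gluster peer probe storage2                         # run on storage1
gluster volume create gv0 replica 2 \
    storage1:/export/brick1 storage2:/export/brick1
gluster volume start gv0
mount -t glusterfs storage1:/gv0 /mnt/gv0           # on the clients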
Hi Jocelyn,
Thanks for your response. I noticed this 100% CPU WAIT on server01 and
decided to reboot it.
After booting I noticed these two messages:
glustershd.log:[2014-10-03 05:05:46.969650] I
[afr-self-heald.c:1180:afr_dir_exclusive_crawl] 0-myvol-replicate-0:
Another crawl is in progress
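If it helps anyone looking at the same thing, the self-heal backlog and the shd logs
can be checked with something like this (the volume name is an example):

gluster volume heal myvol info
less /var/log/glusterfs/glustershd.log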