Steve,
If you are going to grow to that size, then you should build a larger test
system. I run a 6x2 distributed-replicated setup and sustain 400-575 MB/s
write speeds and 780-800 MB/s read speeds. I would share my settings with you,
but my system is tuned for 100 MB - 2 GB files. To give you an idea, I roll
roughly 4 TB of data every night, writing new data in and deleting old data
out.
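For a rough sense of what that layout looks like, a 6x2 distributed-replicated
volume gets created along these lines (the hostnames and brick paths below are
placeholders, not my actual config):

  # 12 bricks with replica 2 -> 6 distribute subvolumes, each mirrored
  # across a pair of servers
  gluster volume create bigvol replica 2 \
      server1:/export/brick1  server2:/export/brick1 \
      server3:/export/brick1  server4:/export/brick1 \
      server5:/export/brick1  server6:/export/brick1 \
      server7:/export/brick1  server8:/export/brick1 \
      server9:/export/brick1  server10:/export/brick1 \
      server11:/export/brick1 server12:/export/brick1
  gluster volume start bigvol

Consecutive bricks in the list become replica pairs, so the order matters if
you want each mirror on a different server.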
Many people try to test Gluster with a minimal setup, but you don't really
start seeing the benefits of Gluster until you scale it in both the number of
clients and the number of bricks.
Just thought I would share my experiences.
Bryan
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Steve Thompson
Sent: Wednesday, September 26, 2012 4:57 PM
To: [email protected]
Subject: Re: [Gluster-users] GlusterFS performance
On Wed, 26 Sep 2012, Joe Landman wrote:
> Read performance with the gluster client isn't that good; write
> performance (effectively write caching at the brick layer) is pretty good.
Yep. I found out today that if I set up a 2-brick distributed non-replicated
volume using two servers, GlusterFS read performance is good from the server
that does _not_ contain a copy of the file. In fact, I got 148 MB/sec, largely
due to the two servers having dual-bonded gigabit links (balance-alb mode) to
each other via a common switch. From the server that _does_ have a copy of the
file, of course read performance is excellent (over 580 MB/sec).
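If anyone wants to reproduce the test, there was nothing exotic about the
setup; roughly this (hostnames, paths, and the bonding line are illustrative,
not my exact config):

  # plain 2-brick distributed volume: no "replica" keyword, so each file
  # lands whole on one brick or the other
  gluster volume create testvol server1:/export/brick1 server2:/export/brick1
  gluster volume start testvol

  # balance-alb bonding on each server (RHEL-style ifcfg-bond0 entry)
  BONDING_OPTS="mode=balance-alb miimon=100"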
It remains that read performance on another client (same subnet, but an extra
switch hop) is too low to be usable, and I can point the finger at GlusterFS
here, since NFS on the same client gets good performance, as does MooseFS
(although MooseFS has other issues). And with a replicated volume, GlusterFS
write performance is too low to be usable as well.
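For the comparison above, I mounted the same volume both ways on the client,
roughly like this (names are placeholders, and as I understand it Gluster's
built-in NFS server only speaks NFSv3):

  # native FUSE client
  mount -t glusterfs server1:/testvol /mnt/gluster-fuse

  # Gluster's built-in NFS server
  mount -t nfs -o vers=3,tcp server1:/testvol /mnt/gluster-nfs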
> I know it's a generalization, but this is basically what we see. In
> the best-case scenario, we can tune it pretty hard to get within 50%
> of native speed. But it takes lots of work to get it to that point, as
> well as an application which streams large IO. Small IO is (still)
> bad on the system IMO.
I'm very new to GlusterFS, so it looks like I have my work cut out for me.
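If I understand the knobs involved, the tuning happens through volume options,
something along these lines (the values are guesses I would still have to
benchmark, not anything I have tested):

  # larger io-cache and write-behind buffers for streaming large files
  gluster volume set testvol performance.cache-size 512MB
  gluster volume set testvol performance.write-behind-window-size 4MB
  gluster volume set testvol performance.io-thread-count 32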
My aim was ultimately to build a large (100 TB) file system with redundancy for
Linux home directories and Samba shares. I've already given up on MooseFS after
several months' work.
Thanks for all your comments,
Steve
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users