On 05/11/2011 10:22 AM, Burnash, James wrote:

Standard disclaimers apply ... we build really fast storage systems and storage clusters and have a financial interest in these things, so take what we say with that context in mind.

Answers inline below as well :-)

Hope this helps.

James Burnash, Unix Engineering

*From:*[email protected]
[mailto:[email protected]] *On Behalf Of *Nyamul Hassan
*Sent:* Wednesday, May 11, 2011 10:04 AM
*To:* [email protected]
*Subject:* Re: [Gluster-users] [SPAM?] Storage Design Overview

Thank you for the prompt and insightful answer, James. My remarks are
inline.

    1. Can we mount a GlusterFS on a client and expect it to provide
    sustained throughput near wirespeed? <No>
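For reference, a native-client mount of the kind being discussed looks roughly like the following (server and volume names are placeholders):

    # on the client, using the native FUSE client (Gluster 3.1.x syntax)
    mount -t glusterfs server1:/myvolume /mnt/gluster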

In your scenario, what were the maximum read speeds that you observed?

                 Read (using dd): approximately 60 MB/s to 100 MB/s.
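For what it's worth, a dd test of that sort typically looks something like this sketch (paths and sizes are illustrative; dropping the page cache first keeps local caching from inflating the read number):

    # write a large test file, then read it back through the mount
    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=10000
    echo 3 > /proc/sys/vm/drop_caches    # as root: flush local caches
    dd if=/mnt/gluster/testfile of=/dev/null bs=1M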

Depends upon many things in a long chain ... network performance, local stack performance, remote disk performance, etc.

In our experience, the majority of poorly performing self-designed systems we have observed contain a significant (often severe, and designed-in) bottleneck that actively prevents users from achieving anything more than moderate speed.

We have measured up to 700 MB/s for simple dd runs over an SDR InfiniBand network using Gluster 3.1.3, and about 500 MB/s over 10GbE. It is achievable, but you have to start with a good design. Good designs aren't buzzword-enabled ... there is method to the madness, as it were. This is what we provide to our customer base.
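As a back-of-the-envelope sanity check on those numbers:

    SDR InfiniBand: 10 Gb/s signalling, 8b/10b encoding -> 8 Gb/s, ~1 GB/s payload
                    700 MB/s observed is roughly 70% of wire speed
    10GbE:          10 Gb/s -> 1.25 GB/s raw
                    500 MB/s observed is roughly 40% of wire speed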


    3. Does it put extra pressure on the client? <What do you mean by
    pressure? My clients (HP ProLiant DL360 G5 Quad Core with 32GB RAM)
    show up to 2GB of memory usage when the native Gluster client is
    used for mounts, but that depends on what you set the client
    cache maximum to; in my case, 2GB. CPU utilization is usually
    negligible on my systems; network bandwidth utilization and I/O
    throughput ... depend on what the file sizes and access patterns look
    like>
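If it helps, the client-side cache ceiling James refers to is, as far as I know, the io-cache translator's limit, set per volume from a server (volume name is a placeholder):

    gluster volume set myvolume performance.cache-size 2GB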

Heavy I/O will fill up work queue slots in the kernel. This is true of every file system.
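A quick way to watch that pressure on a client while a load runs (nothing gluster-specific here, just a sketch):

    vmstat 1                       # 'b' column: tasks blocked on I/O; 'wa': iowait %
    grep -i dirty /proc/meminfo    # Dirty/Writeback: pages queued for flushing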


Thanks for the insight. Can you describe your current deployment a bit
more, like the configs of the storage nodes and the client nodes, and what
type of application you are using it for? I don't want to be too
intrusive, just to get an idea of what others are doing.

All on Gluster 3.1.3

We are also at 3.1.3 in the lab after experiencing problems with 3.1.4 and 3.2.0. Have a few bugs filed.



--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: [email protected]
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
