From my interactions during a similar evaluation with the gluster.com folks, 
I've been told that striping carries a heavy performance penalty, so it was 
suggested that I set up using just distribute on the client on top of the 
subvolumes.

I was also told to use the read-ahead and write-behind translators, as well as 
the io-cache, locks, and posix ones (which you're probably already using).

The pertinent bit from my client config file is below:

<snip>
volume distribute
  type cluster/distribute
  subvolumes client3 client4
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size 1GB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
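
The locks and posix translators mentioned above live on the server side rather 
than in the client volfile. For completeness, a minimal server-side volfile 
along those lines might look something like the below - note that the export 
path and volume names here are just placeholders, not my actual config:

<snip>
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.locks.allow *
  subvolumes locks
end-volume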

Hopefully this helps. I have seen a measurable performance improvement under 
iozone (though not under bonnie++) with these translators configured.

James Burnash
Unix SA
x2248

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of [email protected]
Sent: Tuesday, January 19, 2010 6:07 AM
To: [email protected]
Subject: [Gluster-users] stripe nfs performance

I'm in the process of evaluating gluster, and to that end I installed
Gluster Storage Platform 3.0 (using the "USB method") on four servers. I
created a striped volume and exported it via NFS (and glusterfs).

I NFS-mounted this volume on a node and copied a file of about 400 MB;
this took about twice as long as copying to an ordinary NFS-mounted
partition. Are these timings reasonable?

Then I tried copying one instance of this file from several nodes
simultaneously, but those timings did not indicate superior performance
for the gluster-mounted file system: whether copying from two or three
nodes at the same time, completing all the copies to the gluster-mounted
system took about twice as long as copying to a single NFS-mounted
partition.

Hardware is nothing fancy - the disks are old SATA drives (about 35 GB), and
all nodes sit on a gigabit switch.

I guess I'm doing something wrong here, but since I'm using the Gluster
Storage Platform there shouldn't be too many ways to go wrong ...

Regards,

/jon

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

