On Sat, 21 Feb 2009, Keith Freedman wrote:

> Is direct a locally attached hard drive?
> A network filesystem will NEVER perform as well as a locally attached disk.

Odd, actually you should be able to make it perform BETTER: as you scale Lustre, the I/O for even a single file scales.
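To illustrate the scaling point: with a striped volume, a single file's I/O fans out across servers, so aggregate bandwidth grows as you add boxes. A rough GlusterFS volfile sketch of the idea; the subvolume names are made up, and the option syntax should be checked against your release:

volume stripe0
  type cluster/stripe
  option block-size 1MB    # stripe unit per file; syntax varies by release
  subvolumes remote1 remote2
end-volume

With two servers each good for roughly the 167 MB/s you measured locally, a stripe like this can in principle stream a single file faster than either disk alone, network permitting.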

> I think your numbers aren't too unreasonable.
> You could probably improve your performance by adding some performance translators.

I have them on the server; should I also put them on the client?

> Write-behind would likely help you a bit.
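On the client side, write-behind is just one more translator stanza layered over the protocol/client volume in glusterfs.vol. A minimal sketch, with a made-up subvolume name and no tuning options (option names differ between releases, so check the docs for your version):

volume writebehind
  type performance/write-behind
  # defaults only; the size-related options vary by release
  subvolumes client0
end-volume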

Also, even DRBD does better, and it is running over IPoIB, not raw InfiniBand:

[r...@xen0 share]# dd if=/dev/zero of=/share/bar bs=1G count=8
8+0 records in
8+0 records out
8589934592 bytes (8.6 GB) copied, 60.8988 seconds, 141 MB/s
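If you want to rule the transport in or out, it helps to benchmark the link by itself. Assuming OFED's perftest tools and iperf are installed (host name is just the one from the prompts above):

ib_write_bw              # run on xen0 first; it acts as the server
ib_write_bw xen0         # run on the peer; reports raw RDMA write bandwidth

iperf -s                 # on xen0
iperf -c xen0            # on the peer, against the IPoIB interface's address

That separates what the wire can do, raw and over IPoIB, from what the filesystem is doing with it.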


> At 06:59 PM 2/21/2009, Nathan Stratton wrote:

>> Direct:
>> [r...@xen0 unify]# dd if=/dev/zero of=/sdb2/bar bs=1G count=8
>> 8+0 records in
>> 8+0 records out
>> 8589934592 bytes (8.6 GB) copied, 51.3145 seconds, 167 MB/s
>>
>> Gluster:
>> [r...@xen0 unify]# dd if=/dev/zero of=/unify/foo bs=1G count=8
>> 8+0 records in
>> 8+0 records out
>> 8589934592 bytes (8.6 GB) copied, 87.7885 seconds, 97.8 MB/s
>>
>> Boxes are connected with 10-gigabit InfiniBand, so that should not be an issue.
>>
>> http://share.robotics.net/glusterfs.vol
>> http://share.robotics.net/glusterfsd.vol

<>
Nathan Stratton                                CTO, BlinkMind, Inc.
nathan at robotics.net                         nathan at blinkmind.com
http://www.robotics.net                        http://www.blinkmind.com


_______________________________________________
Gluster-users mailing list
[email protected]
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
