Maybe it's cheating by writing sparse files or something of the sort because it
knows the data is all zeros? Create some files locally from /dev/urandom and
copy them instead; I think you'll see much lower performance. Better yet, use
bonnie++.
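
Something along these lines should do it (the mount point, sizes, and user
below are just placeholders for whatever your setup uses):

  # Generate incompressible test data locally, then copy it onto the
  # gluster mount and time the transfer.
  dd if=/dev/urandom of=/tmp/rand.1g bs=1M count=1024
  time cp /tmp/rand.1g /mnt/gluster/rand.1g

  # Or let bonnie++ drive the I/O against the mount directly,
  # skipping the small-file creation tests:
  bonnie++ -d /mnt/gluster -s 8g -n 0 -u nobody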

Jeff White - Linux/Unix Systems Engineer
University of Pittsburgh - CSSD


On 03/29/2012 03:34 PM, Jeff Darcy wrote:
On 03/29/2012 03:30 PM, Harry Mangalam wrote:
I'm doing some perf tests on a small gluster filesystem - 5 bricks on 4
servers, all single-homed on the private net.

I've spawned up to 70 simultaneous jobs on our cluster nodes writing files of
various sizes from /dev/zero to the gluster fs to see what the effect on the
aggregate bandwidth is, and the data is slightly unbelievable in that it seems
to exceed the theoretical max of the network. (I used /dev/zero instead of
/dev/urandom since /dev/urandom couldn't generate data fast enough.)
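
Each writer job was roughly of this shape (paths, counts, and sizes are
illustrative, not the actual job script):

  for i in $(seq 1 10); do
      dd if=/dev/zero of=/mnt/gluster/test.$(hostname).$i bs=1M count=512 &
  done
  wait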

The 35,000 files of the right size do hit the filesystem (of course they're all
zeros) but the speed at which they transfer exceeds (by quite a bit) the
theoretical max of a 1 Gb network.

Does gluster (or anything else) do transparent compression? What else would
explain this oddity?

How do you define "theoretical max of a 1Gb network"?  If it's a switched
network, the actual maximum throughput depends on the capabilities of the
switch but is likely to be far in excess of 1Gb/s.  Could that be it?  Could
you give more detail about the actual traffic patterns and results?
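
For rough numbers (guessing at your layout: four servers, each on its own 1 GbE
switch port):

  # Each 1 Gb/s link is roughly 125 MB/s, so writes fanned out across four
  # servers through a non-blocking switch could legitimately approach:
  echo "$(( 4 * 1000 / 8 )) MB/s aggregate (4 x 1 GbE, ignoring protocol overhead)"

which is well beyond what any single 1 GbE link could carry.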

BTW, this is my favorite message title ever.  Thanks for that.  :)

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users