Hi glusterfs users,

I am still testing stripe performance...

In a previous email I concluded that glusterfs stripe works favorably for big 
files, because a lot of small files introduce a lot of overhead. To test 
streaming performance I ran a very simple write/read test. My configuration is 
as follows:
- a big client machine with a 10 Gb NIC and 16 GB of memory
- 4 glusterfs servers with 1 Gb NICs and 4 GB of memory, each exporting one brick
- all connected to the same switch

Config of glusterfs 3.1.1 with a 4-node striped volume:
# gluster volume info
 
Volume Name: testvol
Type: Stripe
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node20.storage.xx.nl:/data1
Brick2: node30.storage.xx.nl:/data1
Brick3: node40.storage.xx.nl:/data1
Brick4: node50.storage.xx.nl:/data1
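
For completeness, the volume was created along these lines (from memory, so 
this may not be the exact command):

# gluster volume create testvol stripe 4 transport tcp \
    node20.storage.xx.nl:/data1 node30.storage.xx.nl:/data1 \
    node40.storage.xx.nl:/data1 node50.storage.xx.nl:/data1
# gluster volume start testvol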

I write a stream and then read the same file back, both with "dd". This gives 
me the raw streaming performance. I use a file big enough to eliminate cache 
effects on both the client and the servers. Between the write and the read I 
drop the OS buffer cache.
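
The client mounts the volume with the native glusterfs client, something like:

[root@client ~]# mount -t glusterfs node20.storage.xx.nl:/testvol /gluster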

Results:
[root@client ~]# echo 3 > /proc/sys/vm/drop_caches
[root@client ~]# dd if=/dev/zero of=/gluster/file1 bs=1M count=33k
33792+0 records in
33792+0 records out
35433480192 bytes (35 GB) copied, 146.224 seconds, 242 MB/s

The write gives a nice result. I tested this also on a storage brick's local 
disk and got 96 MB/s, which gives a theoretical max of 4 x 96 = 384 MB/s for 
this 4-node stripe. Of course there is much more overhead, like the tcp/ip 
stack etc., so the 242 MB/s could be better but is OK.
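
The local disk numbers come from a plain dd on a brick, roughly like this 
(oflag=direct keeps the page cache out of the write measurement):

[root@node20 ~]# dd if=/dev/zero of=/data1/ddtest bs=1M count=8k oflag=direct
[root@node20 ~]# echo 3 > /proc/sys/vm/drop_caches
[root@node20 ~]# dd if=/data1/ddtest of=/dev/null bs=1M
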
The read performance of a local disk is 121 MB/s. That is impossible over a 
single 1 Gb NIC, but across the four NICs of the stripe we should still get 
nice aggregate read numbers too:

[root@client ~]# echo 3 > /proc/sys/vm/drop_caches
[root@client ~]# dd of=/dev/null if=/gluster/file1 bs=1M count=33k
33792+0 records in
33792+0 records out
35433480192 bytes (35 GB) copied, 358.679 seconds, 98.8 MB/s

And this is very disappointing! Any idea what is happening here? I can't 
believe this is normal behavior.
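
My next step will probably be to watch the per-NIC traffic on the client and 
the servers during the read, to see whether all four bricks are actually 
serving in parallel, e.g. with sar from the sysstat package:

[root@client ~]# sar -n DEV 1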

Greetings

Peter Gotwalt

P.S. I didn't do any tuning on the client or the server side.

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
