This test is pretty easy to replicate anywhere -- it only takes one disk, one machine, and one tarball. Untarring directly to local disk is about 4.5x faster than untarring through Gluster. At first I thought this might be due to a slow host (2.4 GHz Opteron), but it isn't: the same configuration on a much faster machine (dual 3.33 GHz Xeon) yields the numbers below.

####THIS TEST WAS TO A LOCAL DISK THRU GLUSTER####
[r...@ac33 jenos]# time tar xzf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real    0m41.290s
user    0m14.246s
sys     0m2.957s

####THIS TEST WAS TO A LOCAL DISK (BYPASS GLUSTER)####
[r...@ac33 jenos]# cd /export/jenos/
[r...@ac33 jenos]# time tar xzf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real    0m8.983s
user    0m6.857s
sys     0m1.844s
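For the record, here is how I worked out the slowdown from the two wall-clock times above (41.290s through Gluster vs 8.983s direct to disk):

```shell
# Ratio of gluster wall-clock time to direct-to-disk time, from the runs above
awk 'BEGIN { printf "%.1f\n", 41.290 / 8.983 }'
# prints 4.6
```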

####THESE ARE TEST FILE DETAILS####
[r...@ac33 jenos]# tar tzvf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz |wc -l
109
[r...@ac33 jenos]# ls -l /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
-rw-r--r-- 1 jenos ac 804385203 2010-02-07 06:32 /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
[r...@ac33 jenos]#

These are the relevant performance options I'm using in my .vol file:

#------------Performance Options-------------------

volume readahead
  type performance/read-ahead
  option page-count 4           # 2 is default option
  option force-atime-update off # default is off
  subvolumes ghome
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes readahead
end-volume

volume cache
  type performance/io-cache
  option cache-size 1GB
  subvolumes writebehind
end-volume
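One thing I have not tried yet is stacking an io-threads translator on top of the cache volume, which I have seen suggested for workloads with many concurrent file operations. An untested sketch (the thread-count value is a guess -- please check the translator docs for your GlusterFS version):

```
volume iothreads
  type performance/io-threads
  option thread-count 8         # guess; default varies by version
  subvolumes cache
end-volume
```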

What can I do to improve gluster's performance?

    Jeremy

_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
