I also neglected to mention that the underlying filesystem is ext3.

On 3/24/2010 3:44 AM, Jeremy Enos wrote:
I haven't tried with all performance options disabled yet; I can try that tomorrow when the resource frees up. I was asking first, before blindly trying different configuration matrices, in case there's a clear direction I should take. I'll let you know.

    Jeremy

On 3/24/2010 2:54 AM, Stephan von Krawczynski wrote:
Hi Jeremy,

have you tried to reproduce the problem with all performance options disabled?
They are possibly not a good idea on a local system.
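For example, a stripped-down ghome.vol along these lines would take all three
performance translators out of the picture (just a sketch: it is the client
volfile from your mail below with read-ahead, write-behind, and io-cache
removed):

volume ghome
    type protocol/client
    option transport-type ib-verbs/client
    option remote-host acfs
    option remote-subvolume raid
end-volume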
What local fs do you use?


--
Regards,
Stephan


On Tue, 23 Mar 2010 19:11:28 -0500
Jeremy Enos<[email protected]>  wrote:

Stephan is correct: I primarily did this test to show a demonstrable
example of the overhead I'm trying to eliminate.  It's pronounced enough
that it can be seen on a single-disk / single-node configuration, which
is good in a way (so anyone can easily repro).

My distributed/clustered solution would be ideal if it were fast enough
for small-block I/O as well as large-block; I was hoping that single-node
systems would achieve that, hence the single-node test.  Because the
single-node test performed poorly, I eventually reduced down to a single
disk to see if the overhead could still be seen, and it clearly can be.
Perhaps it's something in my configuration? I've pasted my config files
below.
thx-

      Jeremy

######################glusterfsd.vol######################
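# stack (bottom to top): storage/posix -> features/locks -> performance/io-threads,
# exported over both ib-verbs and tcp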
volume posix
    type storage/posix
    option directory /export
end-volume

volume locks
    type features/locks
    subvolumes posix
end-volume

volume disk
    type performance/io-threads
    option thread-count 4
    subvolumes locks
end-volume

volume server-ib
    type protocol/server
    option transport-type ib-verbs/server
    option auth.addr.disk.allow *
    subvolumes disk
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp/server
    option auth.addr.disk.allow *
    subvolumes disk
end-volume

######################ghome.vol######################
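# stack (bottom to top): protocol/client -> read-ahead -> write-behind -> io-cache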

#-----------IB remotes------------------
volume ghome
    type protocol/client
    option transport-type ib-verbs/client
#  option transport-type tcp/client
    option remote-host acfs
    option remote-subvolume raid
end-volume

#------------Performance Options-------------------

volume readahead
    type performance/read-ahead
    option page-count 4           # default is 2
    option force-atime-update off # default is off
    subvolumes ghome
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 1MB
    subvolumes readahead
end-volume

volume cache
    type performance/io-cache
    option cache-size 1GB
    subvolumes writebehind
end-volume

######################END######################
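In case it helps anyone reproduce this, the volfiles above are loaded in the
usual way for this generation of glusterfs; roughly like so (the volfile
paths and the /ghome mount point here are assumptions):

# start the server-side daemon with glusterfsd.vol
glusterfsd -f /etc/glusterfs/glusterfsd.vol

# mount the client volfile (on the same machine for a single-node test)
glusterfs -f /etc/glusterfs/ghome.vol /ghome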



On 3/23/2010 6:02 AM, Stephan von Krawczynski wrote:
On Tue, 23 Mar 2010 02:59:35 -0600 (CST)
"Tejas N. Bhise" <[email protected]> wrote:


Out of curiosity, if you want to do stuff on only one machine,
why do you want to use a distributed, multi-node, clustered
file system?

Because what he does is a very good way to show the overhead produced by
glusterfs and nothing else (i.e., no network involved).
A pretty relevant test scenario, I would say.

--
Regards,
Stephan



Am I missing something here?

Regards,
Tejas.

----- Original Message -----
From: "Jeremy Enos" <[email protected]>
To: [email protected]
Sent: Tuesday, March 23, 2010 2:07:06 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
Subject: [Gluster-users] gluster local vs local = gluster x4 slower

This test is pretty easy to replicate anywhere: it only takes one disk, one
machine, and one tarball. Untarring directly to local disk is about 4.5x
faster than untarring through gluster. At first I thought this might be due
to a slow host (2.4 GHz Opteron), but it's not: the same configuration on a
much faster machine (dual 3.33 GHz Xeon) yields the performance below.

####THIS TEST WAS TO A LOCAL DISK THRU GLUSTER####
[r...@ac33 jenos]# time tar xzf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real    0m41.290s
user    0m14.246s
sys     0m2.957s

####THIS TEST WAS TO A LOCAL DISK (BYPASS GLUSTER)####
[r...@ac33 jenos]# cd /export/jenos/
[r...@ac33 jenos]# time tar xzf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz

real    0m8.983s
user    0m6.857s
sys     0m1.844s
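(For reference: 41.290 s vs. 8.983 s wall-clock works out to a factor of
about 4.6, consistent with the ~4.5x figure above.)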

####THESE ARE TEST FILE DETAILS####
[r...@ac33 jenos]# tar tzvf /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz | wc -l
109
[r...@ac33 jenos]# ls -l /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
-rw-r--r-- 1 jenos ac 804385203 2010-02-07 06:32 /scratch/jenos/intel/l_cproc_p_11.1.064_intel64.tgz
[r...@ac33 jenos]#

These are the relevant performance options I'm using in my .vol file:

#------------Performance Options-------------------

volume readahead
     type performance/read-ahead
     option page-count 4           # default is 2
     option force-atime-update off # default is off
     subvolumes ghome
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 1MB
     subvolumes readahead
end-volume

volume cache
     type performance/io-cache
     option cache-size 1GB
     subvolumes writebehind
end-volume

What can I do to improve gluster's performance?

       Jeremy





_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
