[ This didn't seem to come through the first time - sorry if you get two copies 
:) ]

----- Forwarded Message -----
From: "Darren Austin" <[email protected]>
To: [email protected]
Sent: Tuesday, 28 June, 2011 11:05:38 AM
Subject: Performance + Translators

Hi,
I've been measuring the performance of a GlusterFS-backed Apache
instance and noticed some quite concerning performance issues.

Should the GlusterFS client be hitting BOTH of the replicated servers
for every httpd request that comes in? For every request that is made,
the client seems to fetch the file from both replicated servers in the
cluster - is that normal?

Also, even though the requested file is exactly the same every time,
Gluster seems to be re-requesting it from the server(s) for every httpd
request.

I've read that the quick-read translator is supposed to cache file
access requests to improve performance, but it doesn't seem to have any
effect here - though that may be due to a configuration error on my part.

I'm not even sure whether the quick-read translator is being enabled -
or whether it has to be enabled in some way on the client.
I've attached logs from the point of setting up a new volume and
mounting it from the client. Even though the vol file sent to the
client contains references to the quick-read translator, I don't know if
it's actually being used - is there some way to tell?

Also, as a test, I ran 'volume set XXX performance.quick-read off' and
re-mounted from the client. The same performance tests showed no change,
so either the quick-read translator is having no effect, or it's not
being enabled on the client.

I'm pretty new to GlusterFS, so can anyone offer some insight into how
client-side translators get enabled and disabled? My impression from the
documentation is that you set their options with 'volume set' on the
server and the client picks them up - but that doesn't seem to be the
case here.
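[For reference, the workflow the documentation seems to describe is roughly the one below. This is a non-authoritative sketch: the mount point /mnt/data and the log path are assumptions, and a 3.1-era FUSE client only picks up translator changes when it re-fetches the volfile, i.e. after a remount.]

```shell
# Assumed workflow for toggling a client-side performance translator
# (volume name taken from the volfile below; paths are examples).

# 1. Toggle the option on any server in the trusted pool.
gluster volume set data-volume performance.quick-read on

# 2. Re-mount on the client so it fetches the regenerated volfile.
umount /mnt/data
mount -t glusterfs 10.234.158.226:/data-volume /mnt/data

# 3. Verify the translator appears in the new volfile dump in the
#    client log (log filename follows the mount point; assumed here).
grep 'performance/quick-read' /var/log/glusterfs/mnt-data.log
```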

Thanks,
Darren.

-- 
Darren Austin - Systems Administrator, Widgit Software.
Tel: +44 (0)1926 333680.    Web: http://www.widgit.com/
26 Queen Street, Cubbington, Warwickshire, CV32 7NA.

Attachment: data-volume-fuse.vol
Description: Binary data

Attachment: data-volume.10.49.14.115.data.vol
Description: Binary data

Attachment: data-volume.10.234.158.226.data.vol
Description: Binary data

[2011-06-27 15:56:25.535850] W [write-behind.c:3023:init] 0-data-volume-write-behind: disabling write-behind for first 0 bytes
[2011-06-27 15:56:25.541966] I [client.c:1935:notify] 0-data-volume-client-0: parent translators are ready, attempting connect on transport
[2011-06-27 15:56:25.542842] I [client.c:1935:notify] 0-data-volume-client-1: parent translators are ready, attempting connect on transport
Given volfile:
+------------------------------------------------------------------------------+
  1: volume data-volume-client-0
  2:     type protocol/client
  3:     option remote-host 10.234.158.226
  4:     option remote-subvolume /data
  5:     option transport-type tcp
  6: end-volume
  7: 
  8: volume data-volume-client-1
  9:     type protocol/client
 10:     option remote-host 10.49.14.115
 11:     option remote-subvolume /data
 12:     option transport-type tcp
 13: end-volume
 14: 
 15: volume data-volume-replicate-0
 16:     type cluster/replicate
 17:     subvolumes data-volume-client-0 data-volume-client-1
 18: end-volume
 19: 
 20: volume data-volume-write-behind
 21:     type performance/write-behind
 22:     subvolumes data-volume-replicate-0
 23: end-volume
 24: 
 25: volume data-volume-read-ahead
 26:     type performance/read-ahead
 27:     subvolumes data-volume-write-behind
 28: end-volume
 29: 
 30: volume data-volume-io-cache
 31:     type performance/io-cache
 32:     subvolumes data-volume-read-ahead
 33: end-volume
 34: 
 35: volume data-volume-quick-read
 36:     type performance/quick-read
 37:     subvolumes data-volume-io-cache
 38: end-volume
 39: 
 40: volume data-volume-stat-prefetch
 41:     type performance/stat-prefetch
 42:     subvolumes data-volume-quick-read
 43: end-volume
 44: 
 45: volume data-volume
 46:     type debug/io-stats
 47:     option latency-measurement off
 48:     option count-fop-hits off
 49:     subvolumes data-volume-stat-prefetch
 50: end-volume

+------------------------------------------------------------------------------+
[2011-06-27 15:56:25.547290] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-data-volume-client-0: changing port to 24009 (from 0)
[2011-06-27 15:56:25.547917] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 0-data-volume-client-1: changing port to 24009 (from 0)
[2011-06-27 15:56:29.534327] I [client-handshake.c:1080:select_server_supported_programs] 0-data-volume-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-27 15:56:29.535528] I [client-handshake.c:1080:select_server_supported_programs] 0-data-volume-client-1: Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-27 15:56:29.535709] I [client-handshake.c:913:client_setvolume_cbk] 0-data-volume-client-0: Connected to 10.234.158.226:24009, attached to remote volume '/data'.
[2011-06-27 15:56:29.535738] I [afr-common.c:2514:afr_notify] 0-data-volume-replicate-0: Subvolume 'data-volume-client-0' came back up; going online.
[2011-06-27 15:56:29.543152] I [fuse-bridge.c:3316:fuse_graph_setup] 0-fuse: switched to graph 0
[2011-06-27 15:56:29.543369] I [fuse-bridge.c:2897:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.15
[2011-06-27 15:56:29.543426] I [client-handshake.c:913:client_setvolume_cbk] 0-data-volume-client-1: Connected to 10.49.14.115:24009, attached to remote volume '/data'.
[2011-06-27 15:56:29.544968] I [afr-common.c:836:afr_fresh_lookup_cbk] 0-data-volume-replicate-0: added root inode
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
