I've now got GlusterFS 3.0.4 up and running on my servers. 

Setup: 

gluster1 - back-end file server 
gluster2 - ditto (redundant; the pair is set up with the --raid 1 option) 
app1 - app server; mounts /data/export on /home 
app2 - ditto 

The relevant conf files are attached (export.vol is identical on both gluster servers; mount.vol is used by the app servers). 

My simple test involves reading an input file and writing it to the 
Gluster-mounted /home versus local /tmp. 

Writing is roughly 65x slower; reading roughly 28x slower. Obviously there's 
some overhead inherent to Gluster, and the absolute numbers aren't bad for 
real-world use, but I'm wondering what performance tweaks, if any, I could 
apply to help this out. 

The app servers are far more read-heavy than write-heavy. Writes are mostly 
small appends to files (audit logging, uploaded images, temp files written 
out by the application). 

glusterTest output: 

Writing 4879 lines to /home/output 
11.4823799133 
Writing 4879 lines to /tmp/output 
0.173233985901 
Reading /home/output 
Read 43911 lines in 0.757484197617 seconds. 
Reading /tmp/output 
Read 43911 lines in 0.0274169445038 seconds. 
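
The glusterTest script itself isn't shown, but for anyone wanting to reproduce the numbers, a minimal sketch of an equivalent timing harness might look like the following (hypothetical reconstruction; note the real script evidently reads a larger file than it writes, given the 43911-line read):

```python
import sys
import time


def timed_write(path, lines):
    """Write the given lines to path and return the elapsed seconds."""
    start = time.time()
    with open(path, "w") as f:
        f.writelines(lines)
    return time.time() - start


def timed_read(path):
    """Read all lines from path; return (line_count, elapsed_seconds)."""
    start = time.time()
    with open(path) as f:
        count = len(f.readlines())
    return count, time.time() - start


if __name__ == "__main__":
    # Usage: glusterTest.py /home/output /tmp/output
    lines = ["line %d\n" % i for i in range(4879)]
    for target in sys.argv[1:]:
        print("Writing %d lines to %s" % (len(lines), target))
        print(timed_write(target, lines))
    for target in sys.argv[1:]:
        print("Reading %s" % target)
        count, secs = timed_read(target)
        print("Read %d lines in %s seconds." % (count, secs))
```
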
## file auto generated by /usr/local/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name journyx-gluster --raid 1 gluster1.int.journyx.com:/data/export gluster2.int.journyx.com:/data/export

volume posix1
    type storage/posix
    option directory /data/export
end-volume

volume locks1
    type features/locks
    subvolumes posix1
end-volume

volume brick1
    type performance/io-threads
    option thread-count 8
    subvolumes locks1
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option auth.addr.brick1.allow *
    option transport.socket.listen-port 6996
    option transport.socket.nodelay on
    subvolumes brick1
end-volume
## file auto generated by /usr/local/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/local/bin/glusterfs-volgen --name journyx-gluster --raid 1 gluster1.int.journyx.com:/data/export gluster2.int.journyx.com:/data/export

# RAID 1
# TRANSPORT-TYPE tcp
volume gluster1.int.journyx.com-1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.100.71
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume gluster2.int.journyx.com-1
    type protocol/client
    option transport-type tcp
    option remote-host 192.168.100.72
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume mirror-0
    type cluster/replicate
    subvolumes gluster1.int.journyx.com-1 gluster2.int.journyx.com-1
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes mirror-0
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size 1GB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
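
Since the workload is read-heavy with small appends, one variation of the client volfile I've been considering is raising io-cache's revalidation timeout and enabling flush-behind in write-behind. These are untested guesses to experiment with, not recommendations:

```
volume writebehind
    type performance/write-behind
    option cache-size 4MB
    option flush-behind on    # batch small appends; trades some safety for latency
    subvolumes mirror-0
end-volume

volume iocache
    type performance/io-cache
    option cache-size 1GB
    option cache-timeout 60   # revalidate cached data less often (was 1s above)
    subvolumes readahead
end-volume
```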
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
