Hi

We have successfully deployed GlusterFS: two servers hosting a single 
replicated volume.  Our virtual environment is Citrix XenServer, and in 
XenServer we have created a storage repository over NFS backed by the 
GlusterFS volume.  This works, and XenServer now uses that storage 
repository for storing virtual machine images.
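For anyone wanting to reproduce the setup: a sketch of how such an NFS 
storage repository can be created from the XenServer CLI (the name label 
and server/path values below are illustrative, not our exact ones):

```shell
# Create an NFS-backed storage repository pointing at the Gluster NFS export.
# server= and serverpath= are placeholders -- substitute your own volume.
xe sr-create type=nfs name-label="gluster-sr" content-type=user \
    device-config:server=scluster01.example.com \
    device-config:serverpath=/storage-pool-01
```

XenServer then stores VHD images for virtual machines inside that SR.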

We are finding that storage performance for the virtual machine images 
stored across the network is too slow.  I am comparing this against a 
plain Linux NFS server in the same scenario.  GlusterFS is very appealing 
as a replacement for NFS because of its built-in NFS server support and 
its redundancy.
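For the comparison, a rough sequential-write test like the one below can 
be run against both mounts (the mount points are examples, not our actual 
paths):

```shell
#!/bin/sh
# Rough sequential-write comparison between two NFS mounts.
# The mount points below are placeholders -- substitute your own.
for mnt in /mnt/gluster-nfs /mnt/plain-nfs; do
    echo "== $mnt =="
    # Write 1 GiB in 1 MiB blocks; conv=fdatasync forces a flush before
    # dd reports, so the timing reflects real network/disk throughput
    # rather than the client page cache.
    dd if=/dev/zero of="$mnt/ddtest.img" bs=1M count=1024 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$mnt/ddtest.img"
done
```

This only measures streaming writes; VM workloads are also sensitive to 
small random I/O, so it is an indicative rather than complete comparison.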

Virtual machine images range from 10GB to 100GB; most are 10GB.
Both GlusterFS servers run on hardware RAID 6 arrays, and we get great 
local storage performance out of them.

Current volume configuration:
gluster volume info
Volume Name: storage-pool-01
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: scluster01.corp.assureprograms.com.au:/gluster-export/storage-pool-01
Brick2: scluster02.corp.assureprograms.com.au:/gluster-export/storage-pool-01
Options Reconfigured:
nfs.volume-access: read-write
auth.allow: 192.168.10.*
nfs.ports-insecure: on
nfs.addr-namelookup: on
server.allow-insecure: on
nfs.export-volumes: on

As you can see above, we have already applied some volume configuration 
changes such as "nfs.volume-access: read-write".  Does anyone have any 
suggested configuration changes for GlusterFS that would improve I/O 
performance in this scenario?
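For reference, the options listed under "Options Reconfigured" were set 
with the standard Gluster CLI; e.g. (volume name from the output above):

```shell
# Apply a volume option; takes effect without restarting the volume.
gluster volume set storage-pool-01 nfs.volume-access read-write
gluster volume set storage-pool-01 auth.allow '192.168.10.*'

# Confirm the current settings.
gluster volume info storage-pool-01
```

So any suggested option changes in that form would be easy for us to try.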


Kind regards 
Stewart Campbell 
_______________________________________________
Gluster-users mailing list
[email protected]
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users