Edy,
I have been working on this on and off for some time now, but have yet to find a
working configuration. Upon failover I always end up with an inaccessible
datastore within VMware. Have you seen this?
But to answer your question: have you looked at sharding? Storing large files as
smaller chunks reduces the sync times between nodes.
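Something along these lines might be worth trying (untested on my side; the 64MB
block size is just GlusterFS's default, shown for illustration):

# Enable sharding on the existing volume; only files created
# after this point are sharded, existing files are untouched.
gluster volume set gv0 features.shard on
# Shard block size: each file is stored as chunks of this size.
# 64MB is the default; larger values are often suggested for VM images.
gluster volume set gv0 features.shard-block-size 64MB

One caveat: the docs warn against disabling sharding again once sharded files
exist on the volume, so try it on a scratch volume first.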
Jon
On Wednesday, 8 August 2018, 15:11:12 BST, Pui Edylie <[email protected]> wrote:

Dear All,

I recently set up GlusterFS 4.1.2 with 3 nodes, using nfs-ganesha with
storhaug to export the volume over NFS to VMware 6.7 as a datastore.
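For context, the export on each node follows the usual FSAL_GLUSTER pattern,
roughly like this (a simplified sketch; the Export_Id, paths and hostname are
illustrative placeholders rather than my exact values):

EXPORT {
    Export_Id = 1;                # unique ID per export
    Path = "/gv0";
    Pseudo = "/gv0";              # NFSv4 pseudo-fs path
    Access_Type = RW;
    Squash = No_root_squash;      # ESXi mounts datastores as root
    Protocols = "3";              # ESXi 6.7 mounts datastores over NFSv3 (or 4.1)
    Transports = "tcp";
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost";   # gluster server this ganesha instance talks to
        Volume = "gv0";
    }
}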

The following are my volume settings:

Volume Name: gv0
Type: Replicate
Volume ID: b1b57ff2-b81f-4625-846a-87064023cf22
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.0.3:/brick1685/gv0
Brick2: 192.168.0.2:/brick1684/gv0
Brick3: 192.168.0.1:/brick1683/gv0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
network.ping-timeout: 1
cluster.enable-shared-storage: enable


Do you have any suggestions for tuning the volume to optimise it for NFSv3 and
for use as a VMware ESXi 6.7 datastore?
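For what it is worth, I have seen the built-in "virt" option group suggested as
a starting point for VM-image workloads, though I have not validated it on this
setup:

# Applies the bundled virtualisation profile (shard, cache and
# eager-lock settings commonly recommended for VM images)
gluster volume set gv0 group virt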

Thank you!

Regards,
Edy

_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
  