This is my fstab:

localhost:/root /mnt/root glusterfs defaults,direct-io-mode=enable 0 0
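For reference, the same mount can also be done as a one-off from the command line; this is a sketch of the equivalent of the fstab entry above (it assumes a volume named "root" is being served from localhost):

```shell
# One-off mount equivalent to the fstab line above
# (assumes the GlusterFS volume "root" exists on localhost)
mkdir -p /mnt/root
mount -t glusterfs -o direct-io-mode=enable localhost:/root /mnt/root
```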
--
Regards, LG

On Wed, Jun 22, 2016 at 9:49 AM, ML mail <[email protected]> wrote:
> Luciano, how do you enable direct-io-mode?
>
>
> On Wednesday, June 22, 2016 7:09 AM, Luciano Giacchetta <[email protected]> wrote:
>
> Hi,
>
> I have a similar scenario: a car-classifieds site with millions of small
> files, mounted with the GlusterFS native client in a replica configuration.
> The Gluster server has 16 GB of RAM and 4 cores, and it mounts the volume
> with direct-io-mode=enable. I then export it to all the servers (Windows
> included, via CIFS).
>
> performance.cache-refresh-timeout: 60
> performance.read-ahead: enable
> performance.write-behind-window-size: 4MB
> performance.io-thread-count: 64
> performance.cache-size: 12GB
> performance.quick-read: on
> performance.flush-behind: on
> performance.write-behind: on
> nfs.disable: on
>
> --
> Regards, LG
>
> On Sat, May 28, 2016 at 6:46 AM, Gandalf Corvotempesta <[email protected]> wrote:
>> If I remember correctly, each stat() on a file has to be sent to every
>> host in the replica to check whether the copies are in sync.
>> Is this true for both the Gluster native client and NFS-Ganesha?
>> Which is best for shared-hosting storage with many millions of small
>> files? About 15,000,000 small files in 800 GB? Or even for Maildir hosting?
>> Ganesha can be configured for HA and load balancing, so the biggest issue
>> that was present in standard NFS is now gone.
>> Is there any advantage to the native Gluster client over Ganesha? Removing
>> the FUSE requirement should also be a performance advantage for Ganesha
>> over the native client.
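For anyone wanting to reproduce the tuning quoted above: those options are volume options, so they would be applied with `gluster volume set` on the server. A sketch, assuming a volume named "gv0" (the volume name is not given in the thread):

```shell
# Apply the quoted performance options to a volume
# ("gv0" is a placeholder volume name, not from the original message)
gluster volume set gv0 performance.cache-refresh-timeout 60
gluster volume set gv0 performance.read-ahead enable
gluster volume set gv0 performance.write-behind-window-size 4MB
gluster volume set gv0 performance.io-thread-count 64
gluster volume set gv0 performance.cache-size 12GB
gluster volume set gv0 performance.quick-read on
gluster volume set gv0 performance.flush-behind on
gluster volume set gv0 performance.write-behind on
gluster volume set gv0 nfs.disable on

# Verify the resulting configuration
gluster volume info gv0
```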
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
