Hi,

Speaking from the shard translator's POV, one thing you can do to improve performance is to use preallocated images. This at least eliminates the need for shard to perform multiple steps as part of each write - creating the shard, writing to it, and then updating the aggregated file size - each of which requires a network call, and those calls get amplified into many more once they reach AFR (replicate). It also means that, with preallocation, performance with and without shard should be the same.
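For instance (just a sketch - the tool, path, and size below are placeholders, not a prescription for your setup), a raw VM image can be fully preallocated at creation time, so shard creation and the size updates happen once up front rather than on the guest's write path:

    # fully preallocate a 100G raw image on the FUSE mount (illustrative path/size)
    qemu-img create -f raw -o preallocation=full /mnt/glusterfs/vm1.img 100G

    # or, if the file already exists or qemu-img is not at hand:
    fallocate -l 100G /mnt/glusterfs/vm1.img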
Also, could you enable client-io-threads and see if that improves performance? (I've put an example 'volume set' command below the quoted message.)

There's also a patch that is part of 3.11.1 which we found to improve performance for VM workloads in our testing - https://review.gluster.org/#/c/17391/ - so you could give that version a try.

-Krutika

On Mon, Sep 4, 2017 at 7:48 PM, Roei G <[email protected]> wrote:
> Hey everyone!
> I have deployed gluster on 3 nodes with 4 SSDs each and a 10Gb Ethernet
> connection.
>
> The storage is configured with 3 gluster volumes; every volume has 12
> bricks (4 bricks on every server, 1 per SSD in the server).
>
> With the 'features.shard' option off, my write speed (using the 'dd'
> command) is approximately 250 MB/s, and when the feature is on, the write
> speed is around 130 MB/s.
>
> --------- gluster version 3.8.13 --------
>
> Volume Name: data
> Number of Bricks: 4 * 3 = 12
> Bricks:
> Brick1: server1:/brick/data1
> Brick2: server1:/brick/data2
> Brick3: server1:/brick/data3
> Brick4: server1:/brick/data4
> Brick5: server2:/brick/data1
> .
> .
> .
> Options Reconfigured:
> performance.strict-o-direct: off
> cluster.nufa: off
> features.shard-block-size: 512MB
> features.shard: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: on
> performance.readdir-ahead: on
>
> Any idea on how to improve my performance?
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://lists.gluster.org/mailman/listinfo/gluster-users
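Re the client-io-threads suggestion above - a minimal example, assuming your volume is named 'data' as in your volume info output:

    # enable client-side io-threads on the volume (run from any node in the pool)
    gluster volume set data performance.client-io-threads on

    # to revert the experiment later:
    gluster volume set data performance.client-io-threads off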
