Sharding has one benefit for me (oVirt) -> a faster heal after maintenance.
Otherwise, imagine a 150 GB VM disk: while you reboot a recently patched node,
all the files on the running replicas will be marked for healing.
The heal will then either consume a lot of CPU (to find the necessary offsets) or
run a full heal and replicate the whole file.

With sharding, only the shards that actually changed need to heal - quite simple and fast.
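
To give an idea, it is just two volume options (using the volume name from the
thread below; keep in mind sharding only applies to files created after it is
turned on):

gluster volume set glusterfs features.shard on
gluster volume set glusterfs features.shard-block-size 64MB

After the node comes back, 'gluster volume heal glusterfs info' then lists only
the shards that actually changed instead of the whole disk image.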

Best Regards,
Strahil Nikolov

On Apr 18, 2019 16:13, Martin Toth <[email protected]> wrote:
>
> Hi, 
>
> I am curious about your setup and settings also. I have exactly the same
> setup and use case.
>
> - why do you use sharding on replica 3? Do you have various sizes of
> bricks (disks) per node?
>
> Wonder if someone will share settings for this setup. 
>
> BR! 
>
> > On 18 Apr 2019, at 09:27, [email protected] wrote: 
> > 
> > Hi, 
> > 
> > We've been using the same settings, found in an old email here, since
> > v3.7 of gluster for our VM hosting volumes. They've been working fine,
> > but since we've just installed a v6 for testing, I figured there might
> > be new settings I should be aware of.
> > 
> > So for access through libgfapi (qemu), for VM hard drives, are these
> > settings still optimal and recommended?
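> >
> > For reference, the VMs attach with qemu's gluster:// driver, roughly like
> > this (the image name is just an example):
> >
> > qemu-system-x86_64 -drive file=gluster://ips1adm.X/glusterfs/vm01.qcow2,format=qcow2,if=virtio,cache=none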
> > 
> > Volume Name: glusterfs 
> > Type: Replicate 
> > Volume ID: b28347ff-2c27-44e0-bc7d-c1c017df7cd1 
> > Status: Started 
> > Snapshot Count: 0 
> > Number of Bricks: 1 x 3 = 3 
> > Transport-type: tcp 
> > Bricks: 
> > Brick1: ips1adm.X:/mnt/glusterfs/brick 
> > Brick2: ips2adm.X:/mnt/glusterfs/brick 
> > Brick3: ips3adm.X:/mnt/glusterfs/brick 
> > Options Reconfigured: 
> > performance.readdir-ahead: on 
> > cluster.quorum-type: auto 
> > cluster.server-quorum-type: server 
> > network.remote-dio: enable 
> > cluster.eager-lock: enable 
> > performance.quick-read: off 
> > performance.read-ahead: off 
> > performance.io-cache: off 
> > performance.stat-prefetch: off 
> > features.shard: on 
> > features.shard-block-size: 64MB 
> > cluster.data-self-heal-algorithm: full 
> > network.ping-timeout: 30 
> > diagnostics.count-fop-hits: on 
> > diagnostics.latency-measurement: on 
> > transport.address-family: inet 
> > nfs.disable: on 
> > performance.client-io-threads: off 
> > 
> > Thanks!
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
