Hi,

Ideally, both bricks in a replica set should be of the same size.
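For what it's worth, here is a rough back-of-the-envelope sketch (plain Python, not anything Gluster ships) that assumes, as Serkan guessed below, that the usable capacity of each replica-2 pair is the smaller of its two bricks:

    # Rough sketch only: assumes the usable size of a replica-2 pair is
    # the smaller of its two bricks (sizes in GB).
    def effective_capacity(pairs):
        return sum(min(a, b) for a, b in pairs)

    # Daniele's original four-brick test; pairs follow the order of bricks
    # on the "gluster volume create" command line:
    #   (gluster1, gluster2) and (gluster3, gluster4)
    print(effective_capacity([(1, 2), (5, 3)]))            # 4  -> matches the ~4 GB df shows

    # The proposed pairing of the six new nodes:
    #   (node1, node3), (node4, node6), (node2, node5)
    print(effective_capacity([(1, 4), (8, 7), (10, 15)]))  # 18 -> the 18 GB Daniele expects

This only sanity-checks the arithmetic under that assumption; it is not a statement about how DHT actually accounts for free space.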
Ravi, can you confirm?

Regards,
Nithya

On 21 February 2017 at 16:05, Daniele Antolini <[email protected]> wrote:
> Hi Serkan,
>
> thanks a lot for the answer.
>
> So, if you are correct, in a distributed-replicate environment the best
> practice is to pair the nodes with the smallest sizes together?
>
> For example:
>
> node1 1 GB
> node2 10 GB
> node3 4 GB
> node4 8 GB
> node5 15 GB
> node6 7 GB
>
> So:
>
> node1 with node3 (smallest is 1 GB)
> node4 with node6 (smallest is 7 GB)
> node2 with node5 (smallest is 10 GB)
>
> The command to launch:
>
> gluster volume create gv0 replica 2 node1:/opt/data/gv0 node3:/opt/data/gv0
> node4:/opt/data/gv0 node6:/opt/data/gv0 node2:/opt/data/gv0
> node5:/opt/data/gv0
>
> Right? In this way I should have 18 GB of free space on the mounted volume
> (1 GB + 7 GB + 10 GB).
>
> 2017-02-21 11:30 GMT+01:00 Serkan Çoban <[email protected]>:
>
>> I think gluster1 and gluster2 became a replica pair, and the smallest size
>> between them is the effective size (1 GB).
>> Same for gluster3 and gluster4 (3 GB). Total: 4 GB of space available.
>> This is just a guess, though.
>>
>> On Tue, Feb 21, 2017 at 1:18 PM, Daniele Antolini <[email protected]> wrote:
>> > Hi all,
>> >
>> > first of all, nice to meet you. I'm new here and I'm subscribing to ask a
>> > very simple question.
>> >
>> > I don't completely understand how, in a distributed-replicate
>> > environment, heterogeneous bricks are handled.
>> >
>> > I've just done a test with four bricks:
>> >
>> > gluster1 1 GB
>> > gluster2 2 GB
>> > gluster3 5 GB
>> > gluster4 3 GB
>> >
>> > Each partition is mounted locally at /opt/data
>> >
>> > I've created a gluster volume with:
>> >
>> > gluster volume create gv0 replica 2 gluster1:/opt/data/gv0
>> > gluster2:/opt/data/gv0 gluster3:/opt/data/gv0 gluster4:/opt/data/gv0
>> >
>> > and then mounted it on a client:
>> >
>> > testgfs1:/gv0 4,0G 65M 4,0G 2% /mnt/test
>> >
>> > I see 4 GB of free space but I cannot understand how this space has been
>> > allocated.
>> > Can someone please explain to me how this can happen?
>> >
>> > Thanks a lot
>> >
>> > Daniele
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
