I am thinking of disabling this option altogether. It was developed to
give better reporting when the same disk is shared by multiple volumes
(an option discussed for container use-cases, and for the +1 scaling
feature).

But considering the number of bugs and the confusion it has introduced,
and as we are still a bit far from +1 scaling, how about disabling this
option?

-Amar


---------- Forwarded message ---------
From: M. <[email protected]>
Date: Wed, Mar 25, 2020 at 12:08 PM
Subject: Re: [gluster/glusterfs] Disperse volume only showing one third of
the available space on client (#1131)
To: gluster/glusterfs <[email protected]>
Cc: Subscribed <[email protected]>


Hello and thanks for your help. Enclosed you will find the
shared-brick-count settings for all bricks on all nodes (with the
correct names this time):

Node 1:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 3
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 0

Node 2:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 2
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 2
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 0

Node 3:

/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc01.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc02.mnt-gluster-sdd-brick.vol:
   option shared-brick-count 0
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdb-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sdc-brick.vol:
   option shared-brick-count 1
/var/lib/glusterd/vols/glusterpoc/glusterpoc.glusterpoc03.mnt-gluster-sde-brick.vol:
   option shared-brick-count 1
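The per-node listings above look like the output of a grep over the
glusterd volfiles. A minimal, self-contained sketch of reproducing it
(the temp directory stands in for /var/lib/glusterd/vols/glusterpoc,
which only exists on an actual gluster node):

```shell
# On a real node you would run:
#   grep 'shared-brick-count' /var/lib/glusterd/vols/glusterpoc/*.vol
# Here we build a tiny stand-in so the command can be tried anywhere.
voldir=$(mktemp -d)
printf '    option shared-brick-count 3\n' \
    > "$voldir/glusterpoc.glusterpoc01.mnt-gluster-sdb-brick.vol"
printf '    option shared-brick-count 0\n' \
    > "$voldir/glusterpoc.glusterpoc02.mnt-gluster-sdb-brick.vol"
# -H prints the filename before each match, as in the listings above.
grep -H 'shared-brick-count' "$voldir"/*.vol
rm -rf "$voldir"
```

Note that each node reports a non-zero count only for its own local
bricks, which is why the listings differ per node.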

—
View the comment on GitHub:
<https://github.com/gluster/glusterfs/issues/1131#issuecomment-603668047>


--
https://kadalu.io
Container Storage made easy!
_______________________________________________
maintainers mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/maintainers
