On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <[email protected]> wrote:
> It is the overall time, 8TB data disk healed 2x faster in 8+2
> configuration.

Wow, that is counterintuitive for me. I will need to explore this to find
out why that could be. Thanks a lot for this feedback!

> On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
> <[email protected]> wrote:
> >
> > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <[email protected]>
> > wrote:
> >>
> >> Healing gets slower as you increase m in an m+n configuration.
> >> We are using a 16+4 configuration without any problems other than
> >> heal speed.
> >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
> >> 8+2 are faster by 2x.
> >
> > As you increase the number of nodes that participate in an EC set, the
> > number of parallel heals increases. Is the heal speed you saw improved
> > per file, or in the overall time it took to heal the data?
> >
> >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey <[email protected]>
> >> wrote:
> >> >
> >> > 8+2 and 8+3 configurations are not a limitation but just
> >> > suggestions. You can create a 16+3 volume without any issue.
> >> >
> >> > Ashish
> >> >
> >> > ________________________________
> >> > From: "Alastair Neil" <[email protected]>
> >> > To: "gluster-users" <[email protected]>
> >> > Sent: Friday, May 5, 2017 2:23:32 AM
> >> > Subject: [Gluster-users] disperse volume brick counts limits in RHES
> >> >
> >> > Hi,
> >> >
> >> > We are deploying a large (24-node/45-brick) cluster and noted that
> >> > the RHES guidelines limit the number of data bricks in a disperse
> >> > set to 8. Is there any reason for this? I am aware that you want
> >> > this to be a power of 2, but as we have a large number of nodes we
> >> > were planning on going with 16+3. Dropping to 8+2 or 8+3 will be a
> >> > real waste for us.
> >> >
> >> > Thanks,
> >> >
> >> > Alastair
> >
> > --
> > Pranith

--
Pranith
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
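
For reference, a 16+3 disperse volume of the kind discussed in this thread
is normally created with GlusterFS's disperse-data and redundancy options.
The sketch below is illustrative only: the volume name, server names, and
brick paths are placeholders, not taken from the thread, and gluster may
insist on "force" if several of the 19 bricks land on the same server:

    # one disperse set of 16 data + 3 redundancy bricks (19 bricks total)
    gluster volume create bigvol disperse-data 16 redundancy 3 \
        server{1..19}:/bricks/brick1/bigvol
    gluster volume start bigvol

    # inspect pending self-heal entries while a failed brick is rebuilt
    gluster volume heal bigvol info

As for the capacity trade-off behind the "real waste" remark: 16+3 keeps
16/19 (about 84%) of the raw space usable, versus 8/10 (80%) for 8+2 and
8/11 (about 73%) for 8+3.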
