I can ask our other engineer, but I don't have those figures. -Alastair
On 30 June 2017 at 13:52, Serkan Çoban <[email protected]> wrote:
> Did you test healing by increasing disperse.shd-max-threads?
> What are your heal times per brick now?
>
> On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <[email protected]> wrote:
> > We are using 3.10 and have a 7 PB cluster. We decided against 16+3 as
> > the rebuild times are bottlenecked by matrix operations, which scale as
> > the square of the number of data stripes. There are some savings because
> > of larger data chunks, but we ended up using 8+3, and heal times are
> > about half compared to 16+3.
> >
> > -Alastair
> >
> > On 30 June 2017 at 02:22, Serkan Çoban <[email protected]> wrote:
> >> > Thanks for the reply. We will mainly use this for archival - near-cold
> >> > storage.
> >> Archival usage is good for EC.
> >>
> >> > Anything, from your experience, to keep in mind while planning large
> >> > installations?
> >> I am using 3.7.11 and the only problem is slow rebuild time when a disk
> >> fails. It takes 8 days to heal an 8TB disk. (This might be related to
> >> my EC configuration, 16+4.)
> >> 3.9+ versions have some improvements about this, but I cannot test them
> >> yet...
> >>
> >> On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <[email protected]> wrote:
> >> > Thanks for the reply. We will mainly use this for archival - near-cold
> >> > storage.
> >> >
> >> > Anything, from your experience, to keep in mind while planning large
> >> > installations?
> >> >
> >> > Sent from my Verizon, Samsung Galaxy smartphone
> >> >
> >> > -------- Original message --------
> >> > From: Serkan Çoban <[email protected]>
> >> > Date: 6/29/17 4:39 AM (GMT-05:00)
> >> > To: Jason Kiebzak <[email protected]>
> >> > Cc: Gluster Users <[email protected]>
> >> > Subject: Re: [Gluster-users] Multi petabyte gluster
> >> >
> >> > I am currently using a 10PB single volume without problems; 40PB is on
> >> > the way. EC is working fine.
> >> > You need to plan ahead with large installations like this. Do complete
> >> > workload tests and make sure your use case is suitable for EC.
> >> >
> >> > On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <[email protected]> wrote:
> >> >> Has anyone scaled to a multi-petabyte gluster setup? How well does
> >> >> erasure code do with such a large setup?
> >> >>
> >> >> Thanks
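As a back-of-envelope sketch of the 8+3 vs. 16+3 trade-off Alastair describes (this is the thread's stated assumption, not a measurement): if decode cost per stripe grows as k^2 for k data fragments, while each stripe carries k data chunks, heal cost per byte grows roughly linearly with k.

```shell
# Rough model only: cost per stripe ~ k^2, chunks per stripe ~ k,
# so heal cost per byte ~ k^2 / k = k.
heal_cost_per_byte() {
  local k=$1
  echo $(( k * k / k ))
}
# Compare 16 data fragments (16+3) against 8 (8+3):
ratio=$(( $(heal_cost_per_byte 16) / $(heal_cost_per_byte 8) ))
echo "$ratio"   # prints 2
```

The factor of 2 is consistent with "heal times are about half" for 8+3, with the larger per-stripe chunks offsetting part of the quadratic matrix cost.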
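For Serkan's question about disperse.shd-max-threads: the option (added alongside the 3.9-era heal improvements he mentions) raises the number of parallel self-heal threads per disperse subvolume. A minimal sketch of checking and raising it, using a hypothetical volume name "bigvol":

```shell
# Show the current value (default is 1).
gluster volume get bigvol disperse.shd-max-threads

# Raise it to speed up healing, at the cost of more CPU/disk load
# on the bricks during heal.
gluster volume set bigvol disperse.shd-max-threads 8

# Watch heal progress to judge the effect.
gluster volume heal bigvol statistics
```

These commands obviously need a live cluster; test on a non-production volume first, since more heal threads compete with client I/O.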
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
