> Thanks for the reply. We will mainly use this for archival - near-cold
> storage.

Archival usage is good for EC.
> Anything, from your experience, to keep in mind while planning large
> installations?

I am using 3.7.11, and the only problem is slow rebuild time when a disk
fails. It takes 8 days to heal an 8TB disk (this might be related to my
EC configuration, 16+4). The 3.9+ versions have some improvements in this
area, but I cannot test them yet...

On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <[email protected]> wrote:
> Thanks for the reply. We will mainly use this for archival - near-cold
> storage.
>
> Anything, from your experience, to keep in mind while planning large
> installations?
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
> -------- Original message --------
> From: Serkan Çoban <[email protected]>
> Date: 6/29/17 4:39 AM (GMT-05:00)
> To: Jason Kiebzak <[email protected]>
> Cc: Gluster Users <[email protected]>
> Subject: Re: [Gluster-users] Multi petabyte gluster
>
> I am currently using a 10PB single volume without problems. 40PB is on
> the way. EC is working fine.
> You need to plan ahead with large installations like this. Do complete
> workload tests and make sure your use case is suitable for EC.
>
> On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <[email protected]> wrote:
>> Has anyone scaled to a multi petabyte gluster setup? How well does erasure
>> code do with such a large setup?
>>
>> Thanks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://lists.gluster.org/mailman/listinfo/gluster-users
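For anyone sizing a similar setup, the heal figures and EC layout mentioned in this thread reduce to simple arithmetic. This is only an illustrative back-of-the-envelope sketch (plain math, nothing Gluster-specific), based on the numbers quoted above: an 8TB disk healed in 8 days on a 16+4 dispersed volume.

```python
# Back-of-the-envelope check on the figures from this thread.
# Illustrative arithmetic only, not measurements from any cluster.

def heal_throughput_mb_s(disk_tb: float, days: float) -> float:
    """Average heal rate in MB/s for a disk_tb-TB disk healed in `days` days."""
    bytes_total = disk_tb * 1e12       # TB -> bytes (decimal units)
    seconds = days * 86400             # days -> seconds
    return bytes_total / seconds / 1e6

def ec_usable_fraction(data: int, redundancy: int) -> float:
    """Usable fraction of raw capacity for a data+redundancy EC config."""
    return data / (data + redundancy)

rate = heal_throughput_mb_s(8, 8)      # ~11.6 MB/s average heal rate
eff = ec_usable_fraction(16, 4)        # 0.8 -> only 20% raw-capacity overhead
print(f"{rate:.1f} MB/s average heal rate, {eff:.0%} of raw capacity usable")
# prints: 11.6 MB/s average heal rate, 80% of raw capacity usable
```

The ~11.6 MB/s figure makes the complaint concrete: the heal is running far below a single disk's sequential bandwidth, which is why wide EC stripes (16+4 touches 20 bricks per heal) trade excellent capacity efficiency for slow rebuilds.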
