The latter one is the one I have been referring to. And it is pretty dangerous, IMHO.
On 31 Aug 2017 01:19, <[email protected]> wrote:

> Solved as of 3.7.12. The only bug left is when adding new bricks to
> create a new replica set; not sure where we are now on that bug, but
> that's not a common operation (well, at least for me).
>
> On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote:
> > There has been a bug associated with sharding that led to VM corruption
> > and that has been around for a long time (difficult to reproduce, as I
> > understood). I have not seen reports on it for some time after the last
> > fix, so hopefully VM hosting is now stable.
> >
> > 2017-08-30 3:57 GMT+02:00 Everton Brogliatto <[email protected]>:
> >
> > > Hi Gionatan,
> > >
> > > I run Gluster 3.10.x (replica 3 or replica 2 + 1 arbiter) to provide
> > > storage for oVirt 4.x and I have had no major issues so far.
> > > I have done online upgrades a couple of times, survived power losses,
> > > maintenance, etc., with no issues. Overall, it is very resilient.
> > >
> > > An important thing to keep in mind is your network. I run the Gluster
> > > nodes on a redundant network using bonding mode 1, and I have performed
> > > maintenance on my switches, bringing one of them off-line at a time,
> > > without causing problems in my Gluster setup or in my running VMs.
> > > Gluster's recommendation is to enable jumbo frames across the
> > > subnet/servers/switches you use for Gluster operations. Your switches
> > > must support an MTU of at least 9000 + 208.
> > >
> > > There were two occasions where I purposely caused a split-brain
> > > situation, and I was able to heal the files manually.
> > >
> > > Volume performance tuning can make a significant difference in Gluster.
> > > As others have mentioned previously, sharding is recommended when
> > > running VMs, as it splits big files into smaller pieces, making it
> > > easier for healing to occur.
> > > When you enable sharding, the default shard block size is 4MB, which
> > > will significantly reduce your write speeds. oVirt recommends a shard
> > > block size of 512MB.
> > > The volume options you are looking for here are:
> > > features.shard on
> > > features.shard-block-size 512MB
> > >
> > > I had an experimental setup in replica 2 using an older version of
> > > Gluster a few years ago; it was unstable, corrupted data and crashed
> > > many times. Do not use replica 2. As others have already said, the
> > > minimum is replica 2 + 1 arbiter.
> > >
> > > If you have any questions that I perhaps can help with, drop me an
> > > email.
> > >
> > > Regards,
> > > Everton Brogliatto
> > >
> > > On Sat, Aug 26, 2017 at 1:40 PM, Gionatan Danti <[email protected]>
> > > wrote:
> > >
> > >> On 26-08-2017 07:38 Gionatan Danti wrote:
> > >>
> > >>> I'll surely take a look at the documentation. I have the "bad" habit
> > >>> of not putting into production anything I know how to repair/cope
> > >>> with.
> > >>>
> > >>> Thanks.
> > >>
> > >> Mmmm, this should read as:
> > >>
> > >> "I have the "bad" habit of not putting into production anything I do
> > >> NOT know how to repair/cope with"
> > >>
> > >> Really :D
> > >>
> > >> Thanks.
> > >>
> > >> --
> > >> Danti Gionatan
> > >> Technical Support
> > >> Assyoma S.r.l. - www.assyoma.it
> > >> email: [email protected] - [email protected]
> > >> GPG public key ID: FF5F32A8
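
For reference, the two sharding options Everton mentions map onto the standard gluster CLI roughly as sketched below. The volume name "vmstore" is just a placeholder, and keep in mind that the shard block size only applies to files created after the change:

    # enable sharding and set a 512MB shard block size (run on any node in the trusted pool)
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 512MB

    # confirm the values that actually took effect
    gluster volume get vmstore features.shard
    gluster volume get vmstore features.shard-block-size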
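
On the "minimum is replica 2 + 1 arbiter" point, creating such a volume looks roughly like this; the hostnames and brick paths (server1..server3, /bricks/...) are placeholders, and the last brick listed becomes the arbiter:

    gluster volume create vmstore replica 3 arbiter 1 \
        server1:/bricks/vmstore/brick \
        server2:/bricks/vmstore/brick \
        server3:/bricks/vmstore/arbiter
    gluster volume start vmstore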
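
As for healing split-brain manually, as Everton describes, 3.7+/3.10 releases expose this through the heal command; again a sketch with placeholder volume, brick and file names:

    # list files currently in split-brain
    gluster volume heal vmstore info split-brain

    # resolve a single file, either by picking the bigger copy...
    gluster volume heal vmstore split-brain bigger-file /path/inside/volume/disk.img

    # ...or by declaring one brick the source of truth
    gluster volume heal vmstore split-brain source-brick server1:/bricks/vmstore/brick /path/inside/volume/disk.img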
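
Finally, for the jumbo-frames recommendation, something along these lines on each storage-facing interface ("bond0" and "server2" are example names; make the change persistent in your distro's network config and raise the MTU on the switches first):

    # raise the MTU on the Gluster-facing interface
    ip link set dev bond0 mtu 9000

    # verify end-to-end: 8972 = 9000 minus 28 bytes of IP/ICMP headers
    ping -M do -s 8972 -c 3 server2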
_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
