On Fri, Jan 19, 2018 at 6:19 AM, Shyam Ranganathan <[email protected]> wrote:
> On 01/18/2018 07:34 PM, Ravishankar N wrote:
> >
> > On 01/18/2018 11:53 PM, Shyam Ranganathan wrote:
> >> On 01/02/2018 11:08 AM, Shyam Ranganathan wrote:
> >>> Hi,
> >>>
> >>> As release 3.13.1 is announced, here are the needed details for
> >>> 3.13.2
> >>>
> >>> Release date: 19th Jan, 2018 (20th is a Saturday)
> >>
> >> Heads up, this is tomorrow.
> >>
> >>> Tracker bug for blockers:
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.13.2
> >>
> >> The one blocker bug has had its patch merged, so I am assuming there are
> >> no more that should block this release.
> >>
> >> As usual, shout out in case something needs attention.
> >
> > Hi Shyam,
> >
> > 1. There is one patch, https://review.gluster.org/#/c/19218/, which
> > introduces full locks for AFR writevs. We're introducing this as a
> > GD_OP_VERSION_3_13_2 option. Please wait for it to be merged on the
> > 3.13 branch today. Karthik, please backport the patch.
>
> Do we need this behind an option, if the existing behavior causes split
> brains? Or is the option being added for workloads that do not have
> multiple clients, or whose clients write to non-overlapping regions, and
> thus should not pay a performance penalty? But they should not pay one
> anyway, as a single client plus AFR eager locks should ensure the lock is
> taken only once for the lifetime of the file being accessed, right?
>
> Basically, I would like to keep options out of backports where possible,
> as adding an option changes the gluster op-version and requires extra
> upgrade steps before users can actually use it. That means more reading
> and execution of upgrade steps for our users. Hence the concern!
>
> > 2. I'm also backporting https://review.gluster.org/#/c/18571/. Please
> > consider merging it too today if it is ready.

Let's take this one in 3.13.3; I think we need to test a few more cases that I missed at the time of review.

>
> This should be fine.
>
> > We will attach the relevant BZs to the tracker bug.
> >
> > Thanks
> > Ravi

--
Pranith
_______________________________________________
Gluster-devel mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-devel
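
[Editor's note on the op-version concern discussed above: before an option gated on a new op-version can be set, every node must be upgraded and the cluster's operating version raised once. A minimal sketch of those upgrade steps follows. The `gluster volume get/set all cluster.op-version` commands are standard glusterd CLI; the number 31302 for GD_OP_VERSION_3_13_2 and the option key for the new AFR full-lock behavior are assumptions here, so check the 3.13.2 release notes and the patch for the actual values.]

```shell
# Check the cluster's current operating version.
gluster volume get all cluster.op-version

# After every server and client in the cluster runs 3.13.2, raise the
# op-version. 31302 is assumed to correspond to GD_OP_VERSION_3_13_2;
# confirm against the release notes before running this.
gluster volume set all cluster.op-version 31302

# Only now can options gated on the new op-version be enabled. The key
# below is hypothetical; see https://review.gluster.org/#/c/19218/ for
# the actual option name introduced by the patch.
# gluster volume set <volname> cluster.full-lock on
```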
