Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Atin Mukherjee
I still see that the release notes for 3.8.1 & 3.7.13 do not reflect this
change.

Niels, Kaushal,

Shouldn't we highlight this to the users as early as possible, given that the
release notes are the best possible medium to capture all the known issues
and their workarounds?
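
For reference, the workaround quoted below boils down to something like the
following after 'yum update' (a rough sketch only, assuming the default
glusterd log path and a yum/RPM based install; the '*' is quoted here so the
shell does not expand it):

    # check whether glusterd hit the gsyncd failure during the upgrade
    if [ "$(grep -c 'geo-replication module not working as desired' \
            /var/log/glusterfs/etc-glusterfs-glusterd.vol.log)" -ne 0 ]; then
        # stop glusterd if it is still running (ps aux | grep glusterd)
        systemctl stop glusterd 2>/dev/null || service glusterd stop
        # regenerate the volfiles once in upgrade mode
        glusterd --xlator-option '*.upgrade=on' -N
    fi
    # then continue with the remaining steps from the upgrade guide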


~Atin


On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:

> We have hit a bug, 1347250, downstream (applicable upstream too) where
> glusterd did not regenerate the volfiles when it was brought up in the
> interim with upgrade mode by yum. The log file shows that gsyncd
> --version failed to execute, so glusterd init could not proceed to the
> volfile regeneration. Since the return code is not handled in the spec
> file, users would not come to know about this, and going forward it is
> going to cause major problems with healing and greatly increases the
> likelihood of split-brains.
>
> Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd
> failed here because of a compatibility issue: gsyncd had not yet been
> upgraded whereas glusterfs-server had, and the failure was mainly due to
> a change in the mem type enum. We have seen a similar issue for RDMA as
> well (probably a year back). So, generically, this can happen on any
> upgrade path from one version to another where a new mem type is
> introduced. We have seen this from 3.7.8 to 3.7.12 and 3.8. People
> upgrading from 3.6 to 3.7/3.8 will also experience this issue.
>
> Till we work on this fix, I suggest all the release managers highlight
> this in the release notes of the latest releases with the following
> workaround after yum update:
>
> 1. grep -irns "geo-replication module not working as desired" 
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
>
>  If the output is non-zero, then go to step 2; otherwise follow the rest
> of the steps as per the guide.
>
> 2. Check whether a glusterd instance is running with 'ps aux | grep
> glusterd'; if it is, stop the glusterd service.
>
>  3. glusterd --xlator-option *.upgrade=on -N
>
> and then proceed with the rest of the steps as per the guide.
>
> Thoughts?
>
> P.S.: this email is limited to maintainers till we decide on the approach
> to highlight this issue to the users.
>
>
> --
> Atin
> Sent from iPhone
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] gluster_strfmt - Build # 4 - Failure!

2016-07-11 Thread Atin Mukherjee
The cli-rpc-ops.c warnings are from cli_populate_req_dict_for_delete(),
which is related to snapshot functionality.

Rajesh/Avra - could you take a look at it?
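
If it helps while looking, a rough way to reproduce the format-string
warnings locally (just a sketch; the CI job may pass different compiler
flags, and this assumes a plain autotools build from a glusterfs checkout):

    ./autogen.sh && ./configure
    make 2>&1 | grep -i -- '-Wformat'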


On Tue, Jul 12, 2016 at 9:43 AM,  wrote:

> String formatting warnings have been detected. See the attached
> warnings.txt for details.
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] gluster_strfmt - Build # 4 - Failure!

2016-07-11 Thread ci
String formatting warnings have been detected. See the attached warnings.txt 
for details.

warnings.txt
Description: Binary data
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Jenkins build is back to normal : regression-test-burn-in #1317

2016-07-11 Thread jenkins
See 

___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #1316

2016-07-11 Thread jenkins
  -  6 second
./tests/bugs/cli/bug-1004218.t  -  6 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  6 second
./tests/bugs/access-control/bug-1051896.t  -  6 second
./tests/bitrot/bug-internal-xattrs-check-1243391.t  -  6 second
./tests/bitrot/bug-1221914.t  -  6 second
./tests/basic/jbr/jbr-volgen.t  -  6 second
./tests/basic/gfid-access.t  -  6 second
./tests/basic/fops-sanity.t  -  6 second
./tests/basic/ec/nfs.t  -  6 second
./tests/basic/afr/arbiter-cli.t  -  6 second
./tests/bugs/glusterd/bug-00.t  -  5 second
./tests/bugs/fuse/bug-1283103.t  -  5 second
./tests/bugs/core/bug-903336.t  -  5 second
./tests/bugs/core/bug-1135514-allow-setxattr-with-null-value.t  -  5 second
./tests/bugs/core/949327.t  -  5 second
./tests/bugs/cli/bug-977246.t  -  5 second
./tests/bugs/cli/bug-961307.t  -  5 second
./tests/bugs/cli/bug-949298.t  -  5 second
./tests/bugs/cli/bug-921215.t  -  5 second
./tests/bugs/cli/bug-764638.t  -  5 second
./tests/bugs/cli/bug-1047378.t  -  5 second
./tests/basic/rpm.t  -  1 second
./tests/basic/posixonly.t  -  1 second
./tests/basic/first-test.t  -  1 second
./tests/basic/netgroup_parsing.t  -  0 second
./tests/basic/exports_parsing.t  -  0 second

Result is 1

+ RET=1
++ wc -l
++ ls -l '/*.core'
+ cur_count=0
++ ls '/*.core'
+ cur_cores=
+ '[' 0 '!=' 0 ']'
+ '[' 1 -ne 0 ']'
+ filename=logs/glusterfs-logs-20160711:13:02:45.tgz
+ tar -czf /archives/logs/glusterfs-logs-20160711:13:02:45.tgz 
/var/log/glusterfs /var/log/messages /var/log/messages-20160619 
/var/log/messages-20160626 /var/log/messages-20160703 /var/log/messages-20160710
tar: Removing leading `/' from member names
+ echo Logs archived in 
http://slave32.cloud.gluster.org/logs/glusterfs-logs-20160711:13:02:45.tgz
Logs archived in 
http://slave32.cloud.gluster.org/logs/glusterfs-logs-20160711:13:02:45.tgz
+ case $(uname -s) in
++ uname -s
+ /sbin/sysctl -w kernel.core_pattern=/%e-%p.core
kernel.core_pattern = /%e-%p.core
+ exit 1
+ RET=1
+ '[' 1 = 0 ']'
+ V=-1
+ VERDICT=FAILED
+ '[' 0 -eq 1 ']'
+ exit 1
Build step 'Execute shell' marked build as failure
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Jenkins build is back to normal : regression-test-burn-in #1314

2016-07-11 Thread jenkins
See 

___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Niels de Vos
On Mon, Jul 11, 2016 at 12:56:24PM +0530, Kaushal M wrote:
> On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:

...

> GlusterD depends on the cluster op-version when generating volfiles,
> to insert new features/xlators into the volfile graph.
> This was done to make sure that the homogeneity of the volfiles is
> preserved across the cluster.
> This behaviour makes running GlusterD in upgrade mode after a package
> upgrade essentially a no-op.
> The cluster op-version doesn't change automatically when packages are
> upgraded, so the regenerated volfiles in the post-upgrade section are
> basically the same as before.
> (If something is getting added into the volfiles after this, it is
> incorrect; that's something I have yet to check.)
> 
> The correct time to regenerate the volfiles is after all members of
> the cluster have been upgraded and the cluster op-version has been
> bumped.
> (Bumping op-version doesn't regenerate anything, it is just an
> indication that the cluster is now ready to use new features.)
> 
> We don't have a direct way to get volfiles regenerated on all members
> with a single command yet. We can implement such a command with
> relative ease.
> For now, volfiles can be regenerated by using the `volume set`
> command, by setting a `user.upgrade` option on a volume.
> Options in the `user.` namespace are passed on to hook scripts and not
> added into any volfiles, but setting such an option on a volume causes
> GlusterD to regenerate volfiles for the volume.
> 
> My suggestion would be to stop using glusterd in upgrade mode during
> post-upgrade to regenerate volfiles, and document the above way to get
> volfiles regenerated across the cluster correctly.
> We could do away with upgrade mode itself, but it could be useful for
> other things (Though I can't think of any right now).
> 
> What do the other maintainers feel about this?

Would it make sense to have the volfiles regenerated when changing the
op-version? For environments where multiple volumes are used, I do not
like the need to regenerate them manually for all of them.
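
For context, bumping the op-version today is a single command along these
lines (the value shown is just an example for a 3.7.13 cluster):

    gluster volume set all cluster.op-version 30713

so hooking volfile regeneration into that step would indeed be convenient
for setups with many volumes.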

On the other hand, a regenerate+reload/restart results in a short
interruption. This may not be suitable for all volumes at the same time.
A per-volume option might be preferred by some users. Getting
feedback from users would be good before deciding on an approach.

Running GlusterD in upgrade mode while updating the installed binaries
is something that easily gets forgotten. I'm not even sure it is done in
all packages, and I guess it is skipped a lot when people install from
source. We should probably put the exact steps in our release notes to
remind everyone.

Thanks,
Niels


signature.asc
Description: PGP signature
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Atin Mukherjee
My intention in initiating this email was more about how to prevent users
from hitting this problem, with a proper workaround captured in the release
notes. We can fork a separate thread on the approach to fixing this issue.
On the fix itself, my take is that since glusterd comes up in upgrade mode
only to regenerate the volfiles, we don't need to call gsyncd --version
when upgrade=ON; that should solve this specific issue. From a long-term
perspective, however, we do need to think about versioning the other
libraries, as pointed out by Kaushal/Niels.

The BZ is not yet filed upstream; I think Kotresh will be taking care of
that.

~Atin

On Mon, Jul 11, 2016 at 1:19 PM, Niels de Vos  wrote:

> On Mon, Jul 11, 2016 at 12:56:24PM +0530, Kaushal M wrote:
> > On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:
> > > We have hit a bug, 1347250, downstream (applicable upstream too) where
> > > glusterd did not regenerate the volfiles when it was brought up in the
> > > interim with upgrade mode by yum. The log file shows that gsyncd
> > > --version failed to execute, so glusterd init could not proceed to the
> > > volfile regeneration. Since the return code is not handled in the spec
> > > file, users would not come to know about this, and going forward it is
> > > going to cause major problems with healing and greatly increases the
> > > likelihood of split-brains.
> > >
> > > Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd
> > > failed here because of a compatibility issue: gsyncd had not yet been
> > > upgraded whereas glusterfs-server had, and the failure was mainly due
> > > to a change in the mem type enum. We have seen a similar issue for
> > > RDMA as well (probably a year back). So, generically, this can happen
> > > on any upgrade path from one version to another where a new mem type
> > > is introduced. We have seen this from 3.7.8 to 3.7.12 and 3.8. People
> > > upgrading from 3.6 to 3.7/3.8 will also experience this issue.
> > >
> > > Till we work on this fix, I suggest all the release managers highlight
> > > this in the release notes of the latest releases with the following
> > > workaround after yum update:
> > >
> > > 1. grep -irns "geo-replication module not working as desired"
> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
> > >
> > >  If the output is non-zero, then go to step 2; otherwise follow the
> > > rest of the steps as per the guide.
> > >
> > > 2. Check whether a glusterd instance is running with 'ps aux | grep
> > > glusterd'; if it is, stop the glusterd service.
> > >
> > >  3. glusterd --xlator-option *.upgrade=on -N
> > >
> > > and then proceed with the rest of the steps as per the guide.
> > >
> > > Thoughts?
> >
> > Proper .so versioning of libglusterfs should help with problems like
> > this. I don't know how to do this though.
>
> We could provide the 'current' version of libglusterfs with the same
> number as the op-version. For 3.7.13 it would be 030713; dropping the
> leading 0 gives 30713, so libglusterfs.so.30713. The same should
> probably be done for all other internal libraries.
>
> Some more details about library versioning can be found here:
>
> https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/versioning.md
>
> Note that libgfapi uses symbol versioning, which is a more fine-grained
> solution. It avoids the need for applications using the library to be
> recompiled. Details about that, and the more involved changes to get
> that to work correctly are in this document:
>
> https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/gfapi-symbol-versions.md
>
> Is there already a bug filed to get this fixed?
>
> Thanks,
> Niels
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Niels de Vos
On Mon, Jul 11, 2016 at 12:56:24PM +0530, Kaushal M wrote:
> On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:
> > We have hit a bug, 1347250, downstream (applicable upstream too) where
> > glusterd did not regenerate the volfiles when it was brought up in the
> > interim with upgrade mode by yum. The log file shows that gsyncd
> > --version failed to execute, so glusterd init could not proceed to the
> > volfile regeneration. Since the return code is not handled in the spec
> > file, users would not come to know about this, and going forward it is
> > going to cause major problems with healing and greatly increases the
> > likelihood of split-brains.
> >
> > Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd
> > failed here because of a compatibility issue: gsyncd had not yet been
> > upgraded whereas glusterfs-server had, and the failure was mainly due to
> > a change in the mem type enum. We have seen a similar issue for RDMA as
> > well (probably a year back). So, generically, this can happen on any
> > upgrade path from one version to another where a new mem type is
> > introduced. We have seen this from 3.7.8 to 3.7.12 and 3.8. People
> > upgrading from 3.6 to 3.7/3.8 will also experience this issue.
> >
> > Till we work on this fix, I suggest all the release managers highlight
> > this in the release notes of the latest releases with the following
> > workaround after yum update:
> >
> > 1. grep -irns "geo-replication module not working as desired"
> > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
> >
> >  If the output is non-zero, then go to step 2; otherwise follow the rest
> > of the steps as per the guide.
> >
> > 2. Check whether a glusterd instance is running with 'ps aux | grep
> > glusterd'; if it is, stop the glusterd service.
> >
> >  3. glusterd --xlator-option *.upgrade=on -N
> >
> > and then proceed with the rest of the steps as per the guide.
> >
> > Thoughts?
> 
> Proper .so versioning of libglusterfs should help with problems like
> this. I don't know how to do this though.

We could provide the 'current' version of libglusterfs with the same
number as the op-version. For 3.7.13 it would be 030713; dropping the
leading 0 gives 30713, so libglusterfs.so.30713. The same should
probably be done for all other internal libraries.
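
As a point of reference, the soname we ship today can be checked with
something like this (a sketch; it assumes an RPM-based install with the
library installed as /usr/lib64/libglusterfs.so.0, adjust the path to your
setup):

    objdump -p /usr/lib64/libglusterfs.so.0 | grep SONAME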

Some more details about library versioning can be found here:
  
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/versioning.md

Note that libgfapi uses symbol versioning, which is a more fine-grained
solution. It avoids the need for applications using the library to be
recompiled. Details about that, and the more involved changes to get
that to work correctly are in this document:
  
https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/gfapi-symbol-versions.md

Is there already a bug filed to get this fixed?

Thanks,
Niels


signature.asc
Description: PGP signature
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Kaushal M
On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:
> We have hit a bug, 1347250, downstream (applicable upstream too) where
> glusterd did not regenerate the volfiles when it was brought up in the
> interim with upgrade mode by yum. The log file shows that gsyncd
> --version failed to execute, so glusterd init could not proceed to the
> volfile regeneration. Since the return code is not handled in the spec
> file, users would not come to know about this, and going forward it is
> going to cause major problems with healing and greatly increases the
> likelihood of split-brains.
>
> Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd
> failed here because of a compatibility issue: gsyncd had not yet been
> upgraded whereas glusterfs-server had, and the failure was mainly due to
> a change in the mem type enum. We have seen a similar issue for RDMA as
> well (probably a year back). So, generically, this can happen on any
> upgrade path from one version to another where a new mem type is
> introduced. We have seen this from 3.7.8 to 3.7.12 and 3.8. People
> upgrading from 3.6 to 3.7/3.8 will also experience this issue.
>
> Till we work on this fix, I suggest all the release managers highlight
> this in the release notes of the latest releases with the following
> workaround after yum update:
>
> 1. grep -irns "geo-replication module not working as desired"
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
>
>  If the output is non-zero, then go to step 2; otherwise follow the rest
> of the steps as per the guide.
>
> 2. Check whether a glusterd instance is running with 'ps aux | grep
> glusterd'; if it is, stop the glusterd service.
>
>  3. glusterd --xlator-option *.upgrade=on -N
>
> and then proceed with the rest of the steps as per the guide.
>
> Thoughts?

Proper .so versioning of libglusterfs should help with problems like
this. I don't know how to do this though.

But I do have some thoughts to share on using GlusterD's upgrade mode.

GlusterD depends on the cluster op-version when generating volfiles,
to decide which new features/xlators to insert into the volfile graph.
This was done to make sure that the volfiles stay homogeneous across
the cluster.
This behaviour makes running GlusterD in upgrade mode after a package
upgrade essentially a no-op.
The cluster op-version doesn't change automatically when packages are
upgraded, so the regenerated volfiles in the post-upgrade section are
basically the same as before.
(If something is getting added into the volfiles after this, it is
incorrect; that's something I have yet to check.)

The correct time to regenerate the volfiles is after all members of
the cluster have been upgraded and the cluster op-version has been
bumped.
(Bumping op-version doesn't regenerate anything, it is just an
indication that the cluster is now ready to use new features.)

We don't have a direct way to get volfiles regenerated on all members
with a single command yet. We can implement such a command with
relative ease.
For now, volfiles can be regenerated by using the `volume set`
command, by setting a `user.upgrade` option on a volume.
Options in the `user.` namespace are passed on to hook scripts and not
added into any volfiles, but setting such an option on a volume causes
GlusterD to regenerate the volfiles for that volume.
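
A minimal sketch of that workaround, assuming a volume named 'testvol'
(the name is just an example) and the behaviour described above:

    # run once per volume, after all nodes are upgraded and the
    # cluster op-version has been bumped
    gluster volume set testvol user.upgrade on
    # optionally remove the marker option again
    gluster volume reset testvol user.upgrade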

My suggestion would be to stop using glusterd in upgrade mode during
post-upgrade to regenerate volfiles, and document the above way to get
volfiles regenerated across the cluster correctly.
We could do away with upgrade mode itself, but it could be useful for
other things (Though I can't think of any right now).

What do the other maintainers feel about this?

~kaushal

PS: If this discussion is distracting from the original conversation,
I'll start a new thread.

>
> P.S.: this email is limited to maintainers till we decide on the approach to
> highlight this issue to the users.
>
>
> --
> Atin
> Sent from iPhone
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Glusterfs-3.7.13 release plans

2016-07-11 Thread Raghavendra Gowdappa


- Original Message -
> From: "Oleksandr Natalenko" 
> To: "Kaushal M" 
> Cc: "Raghavendra Gowdappa" , maintainers@gluster.org, 
> "Gluster Devel"
> 
> Sent: Friday, July 8, 2016 6:31:57 PM
> Subject: Re: [Gluster-devel] [Gluster-Maintainers] Glusterfs-3.7.13 release 
> plans
> 
> Does this issue have some fix pending, or is there just a bug report?

We have an RCA that strongly points to the issue. I'll be working on
testing the hypothesis and sending out a fix.
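
In the meantime, a rough way to watch for the leak described in the bug
quoted below is via client statedumps (a sketch only; paths and section
names are from memory, so adjust to what your statedump actually contains):

    # on the fuse client host, ask the mount process for a statedump
    kill -USR1 $(pidof glusterfs)   # pick the right PID if there are several
    # dumps usually land under /var/run/gluster; count active inode entries
    grep -c 'itable.active' /var/run/gluster/glusterdump.*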

> 
On 08.07.2016 15:12, Kaushal M wrote:
> > On Fri, Jul 8, 2016 at 2:22 PM, Raghavendra Gowdappa
> >  wrote:
> >> There seems to be a major inode leak in fuse-clients:
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1353856
> >> 
> >> We have found an RCA through code reading (though we have high
> >> confidence in the RCA). Do we want to include this in 3.7.13?
> > 
> > I'm not going to delay the release any further. I'll be adding this
> > issue to the release notes as a known issue.
> > 
> >> 
> >> regards,
> >> Raghavendra.
> >> 
> >> - Original Message -
> >>> From: "Kaushal M" 
> >>> To: "Pranith Kumar Karampuri" 
> >>> Cc: maintainers@gluster.org, "Gluster Devel"
> >>> 
> >>> Sent: Friday, July 8, 2016 11:51:11 AM
> >>> Subject: Re: [Gluster-Maintainers] Glusterfs-3.7.13 release plans
> >>> 
> >>> On Fri, Jul 8, 2016 at 9:59 AM, Pranith Kumar Karampuri
> >>>  wrote:
> >>> > Could you take in http://review.gluster.org/#/c/14598/ as well? It is
> >>> > ready
> >>> > for merge.
> >>> >
> >>> > On Thu, Jul 7, 2016 at 3:02 PM, Atin Mukherjee 
> >>> > wrote:
> >>> >>
> >>> >> Can you take in http://review.gluster.org/#/c/14861 ?
> >>> 
> >>> Can you get one of the maintainers to give it a +2?
> >>> 
> >>> >>
> >>> >>
> >>> >> On Thursday 7 July 2016, Kaushal M  wrote:
> >>> >>>
> >>> >>> On Thu, Jun 30, 2016 at 11:08 AM, Kaushal M 
> >>> >>> wrote:
> >>> >>> > Hi all,
> >>> >>> >
> >>> >>> > I'm (or was) planning to do a 3.7.13 release on schedule today.
> >>> >>> > 3.7.12
> >>> >>> > has a huge issue with libgfapi, solved by [1].
> >>> >>> > I'm not sure if this fixes the other issues with libgfapi noticed
> >>> >>> > by
> >>> >>> > Lindsay on gluster-users.
> >>> >>> >
> >>> >>> > This patch has been included in the 3.7.12 packages built for
> >>> >>> > CentOS, Fedora, Ubuntu, Debian and SUSE. I guess Lindsay is using
> >>> >>> > one of these packages, so it might be that the issue seen is new.
> >>> >>> > So I'd like to do a quick release once we have a fix.
> >>> >>> >
> >>> >>> > Maintainers can merge changes into release-3.7 that follow the
> >>> >>> > criteria given in [2]. Please make sure the bugs for the patches
> >>> >>> > you are merging are added as dependencies of the 3.7.13 tracker
> >>> >>> > bug [3].
> >>> >>> >
> >>> >>>
> >>> >>> I've just merged the fix for the gfapi breakage into release-3.7, and
> >>> >>> hope to tag 3.7.13 soon.
> >>> >>>
> >>> >>> The current head for release-3.7 is commit bddf6f8. 18 patches have
> >>> >>> been merged since 3.7.12 for the following components,
> >>> >>>  - gfapi
> >>> >>>  - nfs (includes ganesha related changes)
> >>> >>>  - glusterd/cli
> >>> >>>  - libglusterfs
> >>> >>>  - fuse
> >>> >>>  - build
> >>> >>>  - geo-rep
> >>> >>>  - afr
> >>> >>>
> >>> >>> I need an acknowledgement from the maintainers of the above
> >>> >>> components that they are ready.
> >>> >>> If any maintainers know of any other issues, please reply here. We'll
> >>> >>> decide how to address them for this release here.
> >>> >>>
> >>> >>> Also, please don't merge any more changes into release-3.7. If you
> >>> >>> need
> >>> >>> to get something merged, please inform me.
> >>> >>>
> >>> >>> Thanks,
> >>> >>> Kaushal
> >>> >>>
> >>> >>> > Thanks,
> >>> >>> > Kaushal
> >>> >>> >
> >>> >>> > [1]: https://review.gluster.org/14822
> >>> >>> > [2]: https://public.pad.fsfe.org/p/glusterfs-release-process-201606
> >>> >>> > under the GlusterFS minor release heading
> >>> >>> > [3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.13
> >>> >>> ___
> >>> >>> maintainers mailing list
> >>> >>> maintainers@gluster.org
> >>> >>> http://www.gluster.org/mailman/listinfo/maintainers
> >>> >>
> >>> >>
> >>> >>
> >>> >> --
> >>> >> Atin
> >>> >> Sent from iPhone
> >>> >>
> >>> >> ___
> >>> >> maintainers mailing list
> >>> >> maintainers@gluster.org
> >>> >> http://www.gluster.org/mailman/listinfo/maintainers
> >>> >>
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > Pranith
> >>> ___
> >>> maintainers mailing list
> >>> maintainers@gluster.org
> >>>