Re: [Gluster-Maintainers] Time to review the MAINTAINERS file

2019-10-29 Thread Kaushal M
I have been meaning to propose my removal from the maintainers list for
some time, but never got around to it. So this is okay with me. A couple of
things:
1. I'm still listed as maintainer of glusterd2.
2. I still have admin access in Gerrit and Github org. I'd like to have
that removed too.

I've had a wonderful time in the Gluster community for over 7 years. Thank
you for having me.

~kaushal

On Tue, Oct 29, 2019, 15:19 Xavi Hernandez  wrote:

> Hi Amar,
>
> the changes seem ok to me.
>
> On Sat, Oct 26, 2019 at 11:25 AM Amar Tumballi  wrote:
>
>> Hello,
>>
>> It has been 28 months since we last committed major changes to the
>> MAINTAINERS file. A lot of water has flowed under the bridge since then:
>> new people have joined, and some people have left the Gluster Project to find
>> more interesting projects.
>>
>> In my opinion, we should have a practice of reviewing our
>> maintainers list regularly for the successful progress of the project. The list
>> should reflect the recently active contributors, and ideally (if you compare to any
>> other active project), we should be reviewing the list *every year*.
>> But two years is not bad, considering the time we took to change
>> things for v2.0 (from v1.0).
>>
>> I am attaching the proposed patch (it can be broken into different patches
>> per component if one wants). I am planning to send this for review next week
>> to every one of the maintainers (everyone whose name shifts places). But I
>> thought letting the maintainers know about this through email first was a
>> good idea.
>>
>> Feel free to agree, disagree, or ignore. If you think some changes are not
>> valid, please give your reasons. Please open the patch, where I have tried
>> to capture the details too.
>>
>> It would be great if we close on this soon. I say let's time-box it to 15
>> days for everyone to raise objections, if any, after which it would get
>> merged (15 days from the date of submission of the patch).
>>
>> Regards,
>> Amar
>>
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Fwd: Bug#914125: Peering on glusterfs-server (version >5) impossible under IPv4 on Buster

2018-11-29 Thread Kaushal M
The IPv6 implementation in Gluster requires that IPv6 be enabled in the
kernel and that getaddrinfo works for AF_INET6. This is why IPV6_DEFAULT is
disabled by default and is only enabled if GlusterFS is configured with
`--with-ipv6-default`.
This has been the case since the feature was introduced in 3.13.

The change in the v5.0 Debian packages that is causing the failures seems to be
that glusterfs is configured with `--with-ipv6-default` enabled. This needs
to be disabled for now. The packages provided by other distributions do not
enable this.
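
For the Debian packages, the fix is simply to configure without that flag. As a
rough sketch (--enable-firewalld is shown only because the current debian/rules,
quoted further down in this thread, passes it; everything else about the build
stays the same):

    # note: no --with-ipv6-default here
    ./configure --enable-firewalld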

The IPv6 changes were contributed to GlusterFS by Facebook and are
designed to run on their internal networks, which are all IPv6 enabled. The
`--with-ipv6-default` flag is provided for their convenience and is
disabled by default. There haven't been any further contributions to make
it suitable for everyone, as there hasn't been much community demand
for it.

On Thu, Nov 22, 2018 at 2:19 PM Patrick Matthäi 
wrote:

> Hi,
>
> It looks like my MUA didn't send the message.
>
> Our users have problems with the --with-ipv6-default option; they are not
> able to connect at all. Could you help here?
>
>  Forwarded Message 
> Subject: Bug#914125: Peering on glusterfs-server (version >5) impossible
> under IPv4 on Buster
> Resent-Date: Mon, 19 Nov 2018 17:15:02 +
> Resent-From: Emmanuel Quemener 
> 
> Resent-To: debian-bugs-d...@lists.debian.org
> Resent-Cc: Patrick Matthäi 
> 
> Date: Mon, 19 Nov 2018 18:12:46 +0100
> From: Emmanuel Quemener 
> 
> Reply-To: Emmanuel Quemener 
> , 914...@bugs.debian.org
> To: sub...@bugs.debian.org
>
> Package: glusterfs-server
> Version: 5.1-1
>
> 1)  When I install glusterfs-server with a version >5, peering is
>     impossible over IPv4 on the buster distribution.
>
> # apt install glusterfs-server
>
> # systemctl start glusterd
>
> # systemctl start glustereventsd
>
> # gluster peer probe 140.77.79.200
> peer probe: success. Probe on localhost not needed
>
> # gluster peer probe 140.77.79.185
> peer probe: failed: Probe returned with Transport endpoint is not connected
>
> When I look inside the logs in /var/log/glusterfs/glusterd.log
>
> [2018-11-19 16:50:37.051633] E [MSGID: 101075] [common-
> utils.c:508:gf_resolve_ip6] 0-resolver: getaddrinfo failed (Address
> family for hostname not supported)
> [2018-11-19 16:50:37.051638] E
> [name.c:258:af_inet_client_get_remote_sockaddr] 0-management: DNS
> resolution failed on host 140.77.79.185
>
> 2)  It works with glusterfs-server version 4, but without the firewalld
>     package.
>
> 3)  It works with glusterfs-server version 5.1 when I REMOVE the two
>     following options inside debian/rules AND without firewalld:
>
> DEB_CONFIGURE_EXTRA_FLAGS := \
> --enable-firewalld \
> --with-ipv6-default
>
> I also tried, without any success, to use the following options in
> /etc/glusterfs/glusterd.vol to fall back to IPv4:
>
> option transport-type socket/inet,rdma
> option transport.address-family inet
>
> Best regards.
>
> EQ
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA tomorrow!

2018-10-23 Thread Kaushal M
On Tue, Oct 23, 2018 at 8:24 PM Kaleb S. KEITHLEY  wrote:
>
> On 10/23/18 10:49 AM, Kaleb S. KEITHLEY wrote:
> > On 10/23/18 10:38 AM, Kaushal M wrote:
> >> On Mon, Oct 22, 2018 at 7:41 PM Kaleb S. KEITHLEY  
> >> wrote:
> >>>
> >>> On 10/22/18 9:29 AM, Shyam Ranganathan wrote:
> >>>>>
> >>>>> @Kaushal, if we tag GA tomorrow EDT, would it be possible to tag GD2
> >>>>> today, for the packaging team to pick the same up?
> >>>>>
> >>>>
> >>>> @GD2 team can someone tag/branch GD2 for release-5, else we are stuck
> >>>> with the RC1 tag for the same.
> >>>
> >>> And build packages in Fedora for f29 and f30, to match the glusterfs-5
> >>> packages in f29 and f30. According to
> >>> https://src.fedoraproject.org/rpms/glusterd2 it looks like only Kaushal
> >>> can do this. (Perhaps consider adding some other people so that Kaushal
> >>> isn't a bottleneck here?)

Release has been tagged [1], and f29-candidate[2] and rawhide[3]
packages have been built.

[1]: https://github.com/gluster/glusterd2/releases/tag/v5.0
[2]: https://koji.fedoraproject.org/koji/buildinfo?buildID=1155389
[3]: https://koji.fedoraproject.org/koji/buildinfo?buildID=1155387

> >>
> >> Kaleb, I've added you (kkeithle) as an admin.
> >> Additionally, is there a 'gluster' fedora packagers team that I can
> >> give permissions to?
> >
> > There's not a team as such. In addition to me there is Niels and Anoop C
> > S who are Fedora gluster packagers.
> >
> > (And jsteffan who I think dates back to before I took over glusterfs
> > packaging, but he has done nothing in nearly seven years and I doubt
> > he'd be interested.)
>
> But I was really hoping that someone else on the gd2 team would
> "volunteer." :-)
>
> --
>
> Kaleb
>
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 5: GA tomorrow!

2018-10-23 Thread Kaushal M
On Mon, Oct 22, 2018 at 7:41 PM Kaleb S. KEITHLEY  wrote:
>
> On 10/22/18 9:29 AM, Shyam Ranganathan wrote:
> >>
> >> @Kaushal, if we tag GA tomorrow EDT, would it be possible to tag GD2
> >> today, for the packaging team to pick the same up?
> >>
> >
> > @GD2 team can someone tag/branch GD2 for release-5, else we are stuck
> > with the RC1 tag for the same.
>
> And build packages in Fedora for f29 and f30, to match the glusterfs-5
> packages in f29 and f30. According to
> https://src.fedoraproject.org/rpms/glusterd2 it looks like only Kaushal
> can do this. (Perhaps consider adding some other people so that Kaushal
> isn't a bottleneck here?)

Kaleb, I've added you (kkeithle) as an admin.
Additionally, is there a 'gluster' fedora packagers team that I can
give permissions to?

>
> Please make the branch release-5 (not release-5.0) and the version tag
> v5.0 (not v5.0.0) to match the branch and tags used for glusterfs.
>
> Thanks,
>
> --
>
> Kaleb
> ___
> maintainers mailing list
> maintainers@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainer's meeting series

2018-07-02 Thread Kaushal M
On Mon, Jul 2, 2018 at 6:32 PM Pranith Kumar Karampuri
 wrote:
>
>
>
> On Mon, Jul 2, 2018 at 5:01 PM, Amar Tumballi  wrote:
>>
>>
>>
>> On Mon, Jul 2, 2018 at 4:51 PM, Nigel Babu  wrote:
>>>
>>> This discussion sort of died, so I'm going to propose 1300 UTC and 1500 UTC 
>>> on Mondays. If you cannot make it to *both* those times, please chime in.
>>>
>>
>> Thanks for bringing this back to life. It was on my todo list to finalize.
>>
>> If there are no other voices, I will pick 1300 UTC on Monday and schedule events
>> starting next week.
>> I don't have any concerns with 1500 UTC either. I would like to hear from others.
>
>
> 1300 UTC works better for me on Monday.

1300 UTC will not work for me next Monday (Jul 9), but after that I'm okay.

>
>>
>>
>>>
>>> On Wed, Jun 20, 2018 at 12:09 AM Vijay Bellur  wrote:



 On Tue, Jun 19, 2018 at 3:08 AM Nigel Babu  wrote:
>
> I propose that we alternate times for every other meeting so that we can 
> accommodate people across all timezones. We're never going to find one 
> single timezone that works for everyone. The next best compromise that 
> I've seen projects make is to have the edge timezones take a compromise 
> every other meeting.


 +1. Other models that we can consider:

 - Choose a time slot that works for the majority of maintainers.
 - Have two different meetings to accommodate various TZs.

 Thanks,
 Vijay


>
> On Tue, Jun 19, 2018 at 2:36 PM Amar Tumballi  wrote:
>>
>> Hi All,
>>
>> On the fun side, it seems that other than 2 people, not many people
>> noticed the end of the recurring maintainers' meetings on Wednesdays.
>>
>> Overall, there were 20+ maintainers' meetings in the last year, and in those
>> meetings we tried to keep all the discussion open (the agenda was shared beforehand
>> for everyone to make a point, and the meeting minutes were shared along with the
>> BJ download link). This also helped us take certain decisions which
>> would otherwise have taken a long time to achieve, and helped with some
>> release-related discussions, keeping the release timelines
>> sane.
>>
>> I propose bringing the biweekly maintainers' meeting back to life, and
>> this time, considering some requests from the previous thread, I would like to
>> keep it on Monday 9AM EST (I would recommend keeping it at 9AM EDT as well),
>> or else Thursday 10AM EST. I know it wouldn't be a great time for many
>> maintainers in India, but considering we now have a presence from the US West
>> Coast to India... I guess these are the times we can consider.
>>
>> I am avoiding the Tuesday/Wednesday slots mainly because members of Red Hat,
>> the project's major sponsor, would be busy with multiple meetings during
>> that time.
>>
>> Happy to hear the thoughts, and comments.
>>
>> Regards,
>> Amar
>> --
>> Amar Tumballi (amarts)
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
> --
> nigelb
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>>>
>>>
>>>
>>> --
>>> nigelb
>>
>>
>>
>>
>> --
>> Amar Tumballi (amarts)
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>>
>
>
>
> --
> Pranith
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-18 Thread Kaushal M
On Mon, Jun 18, 2018 at 11:30 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/18/2018 12:03 PM, Kaushal M wrote:
> >
> > GD2 packages have been built for Fedora 28 and are available from the
> > updates-testing repo, and will soon be in the updates repo.
> > Packages are also available for Fedora 29/Rawhide.
> >
> I built GD2 rpms for Fedora 27 using the -vendor tar file. They are
> available at [1].
>
> Attempts to build from the non-vendor tar file failed. Logs from one of
> the failed builds are at [2] for anyone who cares to examine them to see
> why they failed.

It's because a few of the required, updated packages were/are
still in updates-testing.
I've found that koji uses dependencies from updates-testing when building packages,
so when I built the package previously, I didn't notice that the
dependencies weren't actually available in the updates repo.
This has now been corrected: the dependencies in updates-testing have
moved to updates, and one additional missing dependency has been added
to updates-testing, which should move into updates soon.
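
If anyone wants to verify where a given dependency currently lives, something
like the following works (the package name is only an example):

    dnf --enablerepo=updates-testing list golang-github-pkg-errors-devel

The repo column in the output shows whether the package comes from updates or
updates-testing.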

>
>
> [1] https://download.gluster.org/pub/gluster/glusterd2/4.1/
> [2] https://koji.fedoraproject.org/koji/taskinfo?taskID=27705828
>
>
> --
>
> Kaleb
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-18 Thread Kaushal M
On Fri, Jun 15, 2018 at 6:26 PM Niels de Vos  wrote:
>
> On Fri, Jun 15, 2018 at 05:03:38PM +0530, Kaushal M wrote:
> > > On Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
> > >
> > > On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> > > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > > As brick-mux tests were failing (and still are on master), this was
> > > > > holding up the release activity.
> > > > >
> > > > > We now have a final fix [1] for the problem, and the situation has
> > > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > > >
> > > > > So we hope to branch RC0 today, and give a week for package and 
> > > > > upgrade
> > > > > testing, before getting to GA. The revised calendar stands as follows,
> > > > >
> > > > > - RC0 Tagging: 31st May, 2018
> > > > > - RC0 Builds: 1st June, 2018
> > > > > - June 4th-8th: RC0 testing
> > > > > - June 8th: GA readiness callout
> > > > > - June 11th: GA tagging
> > > >
> > > > GA has been tagged today, and is off to packaging.
> > >
> > > The glusterfs packages should land in the testing repositories from the
> > > CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
> > > Please test with the instructions from
> > > http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
> > >
> > > Thanks!
> > > Niels
> >
> > GlusterD2-v4.1.0 has been tagged and released [1].
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0
>
> Packages should become available in the CentOS Storage SIGs
> centos-gluster41-test repository (el7 only) within an hour or so.
> Testing can be done with the description from
> http://lists.gluster.org/pipermail/packaging/2018-June/000553.html, the
> package is called glusterd2.
>
> Please let me know if the build is functioning as required and I'll mark
> if for release.

GD2 packages have been built for Fedora 28 and are available from the
updates-testing repo, and will soon be in the updates repo.
Packages are also available for Fedora 29/Rawhide.

>
> Thanks,
> Niels
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-15 Thread Kaushal M
On Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
>
> On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
> > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > As brick-mux tests were failing (and still are on master), this was
> > > holding up the release activity.
> > >
> > > We now have a final fix [1] for the problem, and the situation has
> > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > >
> > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > testing, before getting to GA. The revised calendar stands as follows,
> > >
> > > - RC0 Tagging: 31st May, 2018
> > > - RC0 Builds: 1st June, 2018
> > > - June 4th-8th: RC0 testing
> > > - June 8th: GA readiness callout
> > > - June 11th: GA tagging
> >
> > GA has been tagged today, and is off to packaging.
>
> The glusterfs packages should land in the testing repositories from the
> CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
> Please test with the instructions from
> http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
>
> Thanks!
> Niels

GlusterD2-v4.1.0 has been tagged and released [1].

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0

> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 10:55 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/04/2018 11:32 AM, Kaushal M wrote:
>
> >
> > We have a proper release this time. Source tarballs are available from [1].
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
>
> I didn't wait for you to do COPR builds.
>
> There are rpm packages for RHEL/CentOS 7, Fedora 27, Fedora 28, and
> Fedora 29 at [1].
>
> If you really want to use COPR builds instead, let me know and I'll
> replace the ones I built with your COPR builds.
>
> I think you will find (as I did) that Fedora 28 (still) doesn't have all
> the dependencies and you'll need to build from the -vendor tar file.
> Ditto for Fedora 27. If you believe this should not be the case please
> let me know.

I did find this as well. Builds failed for F27 and F28, but succeeded
on rawhide.
What makes this strange is that our dependency versions haven't
changed since 4.0,
and I was able to build on all Fedora versions then.
I'll need to investigate this.

>
> [1] https://download.gluster.org/pub/gluster/glusterd2/qa-releases/4.1rc0/
>
> --
>
> Kaleb
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 8:54 PM Kaushal M  wrote:
>
> On Mon, Jun 4, 2018 at 8:39 PM Kaushal M  wrote:
> >
> > On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
> > >
> > > On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> > > wrote:
> > > >
> > > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > > As brick-mux tests were failing (and still are on master), this was
> > > > > holding up the release activity.
> > > > >
> > > > > We now have a final fix [1] for the problem, and the situation has
> > > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > > >
> > > > > So we hope to branch RC0 today, and give a week for package and 
> > > > > upgrade
> > > > > testing, before getting to GA. The revised calendar stands as follows,
> > > > >
> > > > > - RC0 Tagging: 31st May, 2018
> > > >
> > > > RC0 Tagged and off to packaging!
> > >
> > > GD2 has been tagged as well. [1]
> > >
> > > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > I've just realized I've made a mistake. I've pushed just the tags,
> > without updating the branch.
> > And now, the branch has landed new commits without my additional commits.
> > So, I've unintentionally created a different branch.
> >
> > I'm planning on deleting the tag, and updating the branch with the
> > release commits, and tagging once again.
> > Would this be okay?
>
> Oh well. Another thing I messed up in my midnight release-attempt.
> I forgot to publish the release-draft once I'd uploaded the tarballs.
> But this makes it easier for me. Because of this no one has the
> mis-tagged release.
> I'll do what I planned above, and do a proper release this time.

We have a proper release this time. Source tarballs are available from [1].

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

>
> >
> > >
> > > >
> > > > > - RC0 Builds: 1st June, 2018
> > > > > - June 4th-8th: RC0 testing
> > > > > - June 8th: GA readiness callout
> > > > > - June 11th: GA tagging
> > > > > - +2-4 days release announcement
> > > > >
> > > > > Thanks,
> > > > > Shyam
> > > > >
> > > > > [1] Last fix for mux (and non-mux related):
> > > > > https://review.gluster.org/#/c/20109/1
> > > > >
> > > > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > > > >> Here is the current release activity calendar,
> > > > >>
> > > > >> - RC0 tagging: May 14th
> > > > >> - RC0 builds: May 15th
> > > > >> - May 15th - 25th
> > > > >>   - Upgrade testing
> > > > >>   - General testing and feedback period
> > > > >> - (on need basis) RC1 build: May 26th
> > > > >> - GA readiness call out: May, 28th
> > > > >> - GA tagging: May, 30th
> > > > >> - +2-4 days release announcement
> > > > >>
> > > > >> Thanks,
> > > > >> Shyam
> > > > >>
> > > > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > > > >>> Hi,
> > > > >>>
> > > > >>> Release 4.1 has been branched, as it was done later than 
> > > > >>> anticipated the
> > > > >>> calendar of tasks below would be reworked accordingly this week and
> > > > >>> posted to the lists.
> > > > >>>
> > > > >>> Thanks,
> > > > >>> Shyam
> > > > >>>
> > > > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > > > >>>> Hi,
> > > > >>>>
> > > > >>>> As we have completed potential scope for 4.1 release (reflected 
> > > > >>>> here [1]
> > > > >>>> and also here [2]), it's time to talk about the schedule.
> > > > >>>>
> > > > >>>> - Branching date (and hence feature exception date): Apr 16th
> > > > >>>> - Week of Apr 16th release notes updated for all features in the 
> > > > >>>> release
> > > > >>>> - RC0 tagging: Apr 23rd
> > > > >>>> - We

Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 8:39 PM Kaushal M  wrote:
>
> On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
> >
> > On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> > wrote:
> > >
> > > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > > As brick-mux tests were failing (and still are on master), this was
> > > > holding up the release activity.
> > > >
> > > > We now have a final fix [1] for the problem, and the situation has
> > > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > > >
> > > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > > testing, before getting to GA. The revised calendar stands as follows,
> > > >
> > > > - RC0 Tagging: 31st May, 2018
> > >
> > > RC0 Tagged and off to packaging!
> >
> > GD2 has been tagged as well. [1]
> >
> > [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
>
> I've just realized I've made a mistake. I've pushed just the tags,
> without updating the branch.
> And now, the branch has landed new commits without my additional commits.
> So, I've unintentionally created a different branch.
>
> I'm planning on deleting the tag, and updating the branch with the
> release commits, and tagging once again.
> Would this be okay?

Oh well. Another thing I messed up in my midnight release-attempt.
I forgot to publish the release-draft once I'd uploaded the tarballs.
But this makes it easier for me. Because of this no one has the
mis-tagged release.
I'll do what I planned above, and do a proper release this time.

>
> >
> > >
> > > > - RC0 Builds: 1st June, 2018
> > > > - June 4th-8th: RC0 testing
> > > > - June 8th: GA readiness callout
> > > > - June 11th: GA tagging
> > > > - +2-4 days release announcement
> > > >
> > > > Thanks,
> > > > Shyam
> > > >
> > > > [1] Last fix for mux (and non-mux related):
> > > > https://review.gluster.org/#/c/20109/1
> > > >
> > > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > > >> Here is the current release activity calendar,
> > > >>
> > > >> - RC0 tagging: May 14th
> > > >> - RC0 builds: May 15th
> > > >> - May 15th - 25th
> > > >>   - Upgrade testing
> > > >>   - General testing and feedback period
> > > >> - (on need basis) RC1 build: May 26th
> > > >> - GA readiness call out: May, 28th
> > > >> - GA tagging: May, 30th
> > > >> - +2-4 days release announcement
> > > >>
> > > >> Thanks,
> > > >> Shyam
> > > >>
> > > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > > >>> Hi,
> > > >>>
> > > >>> Release 4.1 has been branched, as it was done later than anticipated 
> > > >>> the
> > > >>> calendar of tasks below would be reworked accordingly this week and
> > > >>> posted to the lists.
> > > >>>
> > > >>> Thanks,
> > > >>> Shyam
> > > >>>
> > > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > > >>>> Hi,
> > > >>>>
> > > >>>> As we have completed potential scope for 4.1 release (reflected here 
> > > >>>> [1]
> > > >>>> and also here [2]), it's time to talk about the schedule.
> > > >>>>
> > > >>>> - Branching date (and hence feature exception date): Apr 16th
> > > >>>> - Week of Apr 16th release notes updated for all features in the 
> > > >>>> release
> > > >>>> - RC0 tagging: Apr 23rd
> > > >>>> - Week of Apr 23rd, upgrade and other testing
> > > >>>> - RCNext: May 7th (if critical failures, or exception features 
> > > >>>> arrive late)
> > > >>>> - RCNext: May 21st
> > > >>>> - Week of May 21st, final upgrade and testing
> > > >>>> - GA readiness call out: May, 28th
> > > >>>> - GA tagging: May, 30th
> > > >>>> - +2-4 days release announcement
> > > >>>>
> > > >>>> and, review focus. As in older releases, I am starring reviews that 
> > > >>>> are
> > > >>>

Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Sat, Jun 2, 2018 at 12:11 AM Kaushal M  wrote:
>
> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
> >
> > On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > > As brick-mux tests were failing (and still are on master), this was
> > > holding up the release activity.
> > >
> > > We now have a final fix [1] for the problem, and the situation has
> > > improved over a series of fixes and reverts on the 4.1 branch as well.
> > >
> > > So we hope to branch RC0 today, and give a week for package and upgrade
> > > testing, before getting to GA. The revised calendar stands as follows,
> > >
> > > - RC0 Tagging: 31st May, 2018
> >
> > RC0 Tagged and off to packaging!
>
> GD2 has been tagged as well. [1]
>
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

I've just realized I've made a mistake. I've pushed just the tags,
without updating the branch.
And now, the branch has landed new commits without my additional commits.
So, I've unintentionally created a different branch.

I'm planning on deleting the tag, and updating the branch with the
release commits, and tagging once again.
Would this be okay?

>
> >
> > > - RC0 Builds: 1st June, 2018
> > > - June 4th-8th: RC0 testing
> > > - June 8th: GA readiness callout
> > > - June 11th: GA tagging
> > > - +2-4 days release announcement
> > >
> > > Thanks,
> > > Shyam
> > >
> > > [1] Last fix for mux (and non-mux related):
> > > https://review.gluster.org/#/c/20109/1
> > >
> > > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> > >> Here is the current release activity calendar,
> > >>
> > >> - RC0 tagging: May 14th
> > >> - RC0 builds: May 15th
> > >> - May 15th - 25th
> > >>   - Upgrade testing
> > >>   - General testing and feedback period
> > >> - (on need basis) RC1 build: May 26th
> > >> - GA readiness call out: May, 28th
> > >> - GA tagging: May, 30th
> > >> - +2-4 days release announcement
> > >>
> > >> Thanks,
> > >> Shyam
> > >>
> > >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> > >>> Hi,
> > >>>
> > >>> Release 4.1 has been branched, as it was done later than anticipated the
> > >>> calendar of tasks below would be reworked accordingly this week and
> > >>> posted to the lists.
> > >>>
> > >>> Thanks,
> > >>> Shyam
> > >>>
> > >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
> > >>>> Hi,
> > >>>>
> > >>>> As we have completed potential scope for 4.1 release (reflected here 
> > >>>> [1]
> > >>>> and also here [2]), it's time to talk about the schedule.
> > >>>>
> > >>>> - Branching date (and hence feature exception date): Apr 16th
> > >>>> - Week of Apr 16th release notes updated for all features in the 
> > >>>> release
> > >>>> - RC0 tagging: Apr 23rd
> > >>>> - Week of Apr 23rd, upgrade and other testing
> > >>>> - RCNext: May 7th (if critical failures, or exception features arrive 
> > >>>> late)
> > >>>> - RCNext: May 21st
> > >>>> - Week of May 21st, final upgrade and testing
> > >>>> - GA readiness call out: May, 28th
> > >>>> - GA tagging: May, 30th
> > >>>> - +2-4 days release announcement
> > >>>>
> > >>>> and, review focus. As in older releases, I am starring reviews that are
> > >>>> submitted against features, this should help if you are looking to help
> > >>>> accelerate feature commits for the release (IOW, this list is the watch
> > >>>> list for reviews). This can be found handy here [3].
> > >>>>
> > >>>> So, branching is in about 4 weeks!
> > >>>>
> > >>>> Thanks,
> > >>>> Shyam
> > >>>>
> > >>>> [1] Issues marked against release 4.1:
> > >>>> https://github.com/gluster/glusterfs/milestone/5
> > >>>>
> > >>>> [2] github project lane for 4.1:
> > >>>> https://github.com/gluster/glusterfs/projects/1#column-1075416
> > >>>>
> > >>>> [3] Review focus dashboard:
> > >>>> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
> > >>>> ___
> > >>>> maintainers mailing list
> > >>>> maintainers@gluster.org
> > >>>> http://lists.gluster.org/mailman/listinfo/maintainers
> > >>>>
> > >>> ___
> > >>> maintainers mailing list
> > >>> maintainers@gluster.org
> > >>> http://lists.gluster.org/mailman/listinfo/maintainers
> > >>>
> > >> ___
> > >> Gluster-devel mailing list
> > >> gluster-de...@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >>
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> > >
> > ___
> > maintainers mailing list
> > maintainers@gluster.org
> > http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 5:29 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/02/2018 07:47 AM, Niels de Vos wrote:
> > On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
> >> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> >> wrote:
> >>>
> >>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> >>>> As brick-mux tests were failing (and still are on master), this was
> >>>> holding up the release activity.
> >>>>
> >>>> We now have a final fix [1] for the problem, and the situation has
> >>>> improved over a series of fixes and reverts on the 4.1 branch as well.
> >>>>
> >>>> So we hope to branch RC0 today, and give a week for package and upgrade
> >>>> testing, before getting to GA. The revised calendar stands as follows,
> >>>>
> >>>> - RC0 Tagging: 31st May, 2018
> >>>
> >>> RC0 Tagged and off to packaging!
> >>
> >> GD2 has been tagged as well. [1]
> >>
> >> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> > directly on CentOS, or does it need additional dependencies? (Note that
> > CentOS does not allow dependencies from Fedora EPEL.)
> >
>
> I checked, and was surprised to see that gd2 made it into Fedora[1]. I
> guess I missed the announcement.

This happened in time for the 4.0 release. I did send out an announcement IIRC.

>
> But I was disappointed to see that packages have only been built for
> Fedora29/rawhide. We've been shipping glusterfs-4.0 in Fedora28 and even
> if [2] didn't say so, I would think it would be obvious that we should
> have packages for gd2 in F28 too.
>

The package got accepted during the F28 freeze. So I was only able to
request a branch for F27 when 4.0 happened.
I should have gotten around to requesting a branch for F28, but I forgot.

> And it's good that RC0 was tagged in a timely matter. Who is building
> those packages?

I can build the RPMs. I'll build them on the COPR I've been
maintaining. But I don't believe that those can be used as the
official RPM sources.
So where should I build and how should they be distributed?

>
> [1] https://koji.fedoraproject.org/koji/packageinfo?packageID=26508
> [2] https://docs.gluster.org/en/latest/Install-Guide/Community_Packages/
> --
>
> Kaleb
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-04 Thread Kaushal M
On Mon, Jun 4, 2018 at 7:05 PM Kaleb S. KEITHLEY  wrote:
>
> On 06/02/2018 07:47 AM, Niels de Vos wrote:
> > On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
> >> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  
> >> wrote:
> >>>
> >>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> >>>> As brick-mux tests were failing (and still are on master), this was
> >>>> holding up the release activity.
> >>>>
> >>>> We now have a final fix [1] for the problem, and the situation has
> >>>> improved over a series of fixes and reverts on the 4.1 branch as well.
> >>>>
> >>>> So we hope to branch RC0 today, and give a week for package and upgrade
> >>>> testing, before getting to GA. The revised calendar stands as follows,
> >>>>
> >>>> - RC0 Tagging: 31st May, 2018
> >>>
> >>> RC0 Tagged and off to packaging!
> >>
> >> GD2 has been tagged as well. [1]
> >>
> >> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> >
> > What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> > directly on CentOS, or does it need additional dependencies? (Note that
> > CentOS does not allow dependencies from Fedora EPEL.)
> >
>
> My recollection of how this works is that one would need to build from
> the "bundled vendor" tarball.
>
> Except when I tried to download the vendor bundle tarball I got the same
> bits as the unbundled tarball.
>
> ISTR Kaushal had to do something extra to generate the vendor bundled
> tarball. It doesn't appear that that occurred.

That is right. For CentOS/EL, the default in the spec is to use the
vendored tarball. Using this, the only requirement to build GD2 is
golang>=1.8.
Are you sure you're downloading the right tarball [1]?

[1]: 
https://github.com/gluster/glusterd2/releases/download/v4.1.0-rc0/glusterd2-v4.1.0-rc0-vendor.tar.xz
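
For reference, a rough local-build sketch using that tarball (the directory
name and make target are from memory, so they may differ slightly):

    curl -LO https://github.com/gluster/glusterd2/releases/download/v4.1.0-rc0/glusterd2-v4.1.0-rc0-vendor.tar.xz
    tar -xf glusterd2-v4.1.0-rc0-vendor.tar.xz
    cd glusterd2-*/ && make   # only needs golang >= 1.8; dependencies come from the bundled vendor/ directory
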
>
> --
>
> Kaleb
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 4.1: Branched

2018-06-01 Thread Kaushal M
On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
>
> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
> > As brick-mux tests were failing (and still are on master), this was
> > holding up the release activity.
> >
> > We now have a final fix [1] for the problem, and the situation has
> > improved over a series of fixes and reverts on the 4.1 branch as well.
> >
> > So we hope to branch RC0 today, and give a week for package and upgrade
> > testing, before getting to GA. The revised calendar stands as follows,
> >
> > - RC0 Tagging: 31st May, 2018
>
> RC0 Tagged and off to packaging!

GD2 has been tagged as well. [1]

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0

>
> > - RC0 Builds: 1st June, 2018
> > - June 4th-8th: RC0 testing
> > - June 8th: GA readiness callout
> > - June 11th: GA tagging
> > - +2-4 days release announcement
> >
> > Thanks,
> > Shyam
> >
> > [1] Last fix for mux (and non-mux related):
> > https://review.gluster.org/#/c/20109/1
> >
> > On 05/09/2018 03:41 PM, Shyam Ranganathan wrote:
> >> Here is the current release activity calendar,
> >>
> >> - RC0 tagging: May 14th
> >> - RC0 builds: May 15th
> >> - May 15th - 25th
> >>   - Upgrade testing
> >>   - General testing and feedback period
> >> - (on need basis) RC1 build: May 26th
> >> - GA readiness call out: May, 28th
> >> - GA tagging: May, 30th
> >> - +2-4 days release announcement
> >>
> >> Thanks,
> >> Shyam
> >>
> >> On 05/06/2018 09:20 AM, Shyam Ranganathan wrote:
> >>> Hi,
> >>>
> >>> Release 4.1 has been branched, as it was done later than anticipated the
> >>> calendar of tasks below would be reworked accordingly this week and
> >>> posted to the lists.
> >>>
> >>> Thanks,
> >>> Shyam
> >>>
> >>> On 03/27/2018 02:59 PM, Shyam Ranganathan wrote:
>  Hi,
> 
>  As we have completed potential scope for 4.1 release (reflected here [1]
>  and also here [2]), it's time to talk about the schedule.
> 
>  - Branching date (and hence feature exception date): Apr 16th
>  - Week of Apr 16th release notes updated for all features in the release
>  - RC0 tagging: Apr 23rd
>  - Week of Apr 23rd, upgrade and other testing
>  - RCNext: May 7th (if critical failures, or exception features arrive 
>  late)
>  - RCNext: May 21st
>  - Week of May 21st, final upgrade and testing
>  - GA readiness call out: May, 28th
>  - GA tagging: May, 30th
>  - +2-4 days release announcement
> 
>  and, review focus. As in older releases, I am starring reviews that are
>  submitted against features, this should help if you are looking to help
>  accelerate feature commits for the release (IOW, this list is the watch
>  list for reviews). This can be found handy here [3].
> 
>  So, branching is in about 4 weeks!
> 
>  Thanks,
>  Shyam
> 
>  [1] Issues marked against release 4.1:
>  https://github.com/gluster/glusterfs/milestone/5
> 
>  [2] github project lane for 4.1:
>  https://github.com/gluster/glusterfs/projects/1#column-1075416
> 
>  [3] Review focus dashboard:
>  https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
>  ___
>  maintainers mailing list
>  maintainers@gluster.org
>  http://lists.gluster.org/mailman/listinfo/maintainers
> 
> >>> ___
> >>> maintainers mailing list
> >>> maintainers@gluster.org
> >>> http://lists.gluster.org/mailman/listinfo/maintainers
> >>>
> >> ___
> >> Gluster-devel mailing list
> >> gluster-de...@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-devel
> >>
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Meeting minutes (7th March)

2018-03-07 Thread Kaushal M
On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi  wrote:
> Meeting date: 03/07/2018 (March 7th, 2018. 19:30 IST, 14:00 UTC, 09:00 EST)
>
> BJ Link
>
> Bridge: https://bluejeans.com/205933580
> Download : https://bluejeans.com/s/mOGb7
>
> Attendance
>
> [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de Vos,
> Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh
>
> Agenda
>
> AI from previous meeting:
>
> Email on version numbers: Still pending - Amar/Shyam
>
> Planning to do this by Friday (9th March)
>
> can we run regression suite with GlusterD2
>
> OK with failures, but can we run?
> Nigel to run tests and give outputs

Apologies for not attending this meeting.

I can help get this up and running.

But I also wanted to set up a smoke job to run GD2 CI against glusterfs patches.
This will help us catch changes that adversely affect GD2, in
particular changes to the option_t and xlator_api_t structs.
It will not be a particularly long test to run. On average, the current
GD2 centos-ci jobs finish in under 4 minutes, and
I expect that building glusterfs will add about 5 minutes more.
This job should be simple enough to set up, and I'd like it if we can
set this one up first.
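
Roughly, such a job only needs to do something like the following (a sketch; the
exact make targets and paths would come from the existing centos-ci job
definitions):

    # build and install the glusterfs patch under test
    ./autogen.sh && ./configure --prefix=/usr && make -j && make install
    # then build GD2 against it and run its test suite
    cd /path/to/glusterd2 && make && make test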

>
> Line coverage tests:
>
> SIGKILL was sent to processes, so the output was not proper.
> Patch available, Nigel to test with the patch and give output before
> merging.
> [Nigel] what happens with GD2 ?
>
> [Shyam] https://github.com/gojp/goreportcard
> [Shyam] (what I know)
> https://goreportcard.com/report/github.com/gluster/glusterd2
>
> Gluster 4.0 is tagged:
>
> Retrospect meeting: Can this be google form?
>
> It usually is, let me find and paste the older one:
>
> 3.10 retro:
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
> 3.11 retro: https://www.gluster.org/3-11-retrospectives/
>
> [Nigel] Can we do it a less of form, and keep it more generic?
> [Shyam] Thats what mostly the form tries to do. Prefer meeting & Form
>
> Gluster Infra team is testing the distributed testing framework contributed
> from FB
>
> [Nigel] Any issues, would like to collaborate
> [Jeff] Happy to collaborate, let me know.
>
> Call out for features on 4-next
>
> should the next release be LTM and 4.1 and then pick the version number
> change proposal later.
>
> Bugzilla Automation:
>
> Planning to test it out next week.
> AI: send the email first, and target to take the patches before next
> maintainers meeting.
>
> Round Table
>
> [Kaleb] space is tight on download.gluster.org
> * may we delete, e.g. purpleidea files? experimental (freebsd stuff from
> 2014)?
> * any way to get more space?
> * [Nigel] Should be possible to do it, file a bug
> * AI: Kaleb to file a bug
> *
>
> yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not owned
> by root:root. They were rsync_aide:rsync_aide. Was there an aborted rsync
> job or something that left them like that?
>
> most glusterfs 4.0 packages are on download.g.o now. Starting on gd2
> packages now.
>
> el7 packages are on the buildroot if someone (shyam?) wants to get a head start
> on testing them
>
> [Nigel] Testing IPv6 (with IPv4 on too), only 4 tests are consistently
> failing. Need to look at it.
>
>
>
> --
> Amar Tumballi (amarts)
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-4.0.0 released

2018-03-06 Thread Kaushal M
On Tue, Mar 6, 2018 at 9:19 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Tue, Mar 6, 2018 at 8:59 PM, Shyam Ranganathan <srang...@redhat.com> wrote:
>> On 03/06/2018 10:25 AM, jenk...@build.gluster.org wrote:
>>> SRC: 
>>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.tar.gz
>>> HASH: 
>>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.sha512sum
>>>
>>> This release is made off jenkins-release-45
>>
>> Some call outs!
>>
>> 1) @kaushal GD2 tags for GA need to be created and shared (if not
>> already done)
>
> Done. https://github.com/gluster/glusterd2/releases/tag/v4.0.0

Also, kicked off COPR builds at
https://copr.fedorainfracloud.org/coprs/kshlm/glusterd2/build/724903/

>
>>
>> 2) @humble Once the packages are created for CentOS SIG, and validated
>> (usually by me), you would need to crank out the container images
>>
>> Thanks!
>>
>>>
>>>
>>>
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/maintainers
>>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-4.0.0 released

2018-03-06 Thread Kaushal M
On Tue, Mar 6, 2018 at 8:59 PM, Shyam Ranganathan  wrote:
> On 03/06/2018 10:25 AM, jenk...@build.gluster.org wrote:
>> SRC: 
>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.tar.gz
>> HASH: 
>> https://build.gluster.org/job/release-new/45/artifact/glusterfs-4.0.0.sha512sum
>>
>> This release is made off jenkins-release-45
>
> Some call outs!
>
> 1) @kaushal GD2 tags for GA need to be created and shared (if not
> already done)

Done. https://github.com/gluster/glusterd2/releases/tag/v4.0.0

>
> 2) @humble Once the packages are created for CentOS SIG, and validated
> (usually by me), you would need to crank out the container images
>
> Thanks!
>
>>
>>
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterD2 v4.0rc0 tagged

2018-01-31 Thread Kaushal M
Hi all,

GlusterD2 v4.0rc0 has been tagged and a release made in anticipation
of GlusterFS-v4.0rc0. The release and source tarballs are available
from [1].

There aren't any specific release notes for this release.

Thanks.
~kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0rc0
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-10 Thread Kaushal M
On Thu, Jan 11, 2018 at 1:56 AM, Kaleb S. KEITHLEY  wrote:
> comments inline
>
> On 01/10/2018 02:08 PM, Shyam Ranganathan wrote:
>
> Hi, (GD2 team, packaging team, please read)
>
> Here are some things we need to settle so that we can ship/release GD2
> along with Gluster 4.0 release (considering this is a separate
> repository as of now).
>
> 1) Generating release package (read as RPM for now) to go with Gluster
> 4.0 release
>
> Proposal:
>   - GD2 makes github releases, as in [1]
>
>   - GD2 Releases (tagging etc.) are made in tandem with Gluster releases
> - So, when a beta1/RC0 is tagged for a gluster release, this will
> receive a coordinated release (if required) from the GD2 team
> - GD2 team will receive *at-least* a 24h notice on a tentative
> Gluster tagging date/time, to aid the GD2 team to prepare the required
> release tarball in github
>
> This is a no-op. In github creating a tag or a release automatically creates
> the tar source file.

While true, this tarball isn't enough. The GD2 build scripts look up
version information from git tags or from a VERSION file (same as glusterfs),
neither of which is present in the tarball GitHub generates.
The GD2 release script generates tarballs that have everything
required to build a properly versioned GD2.
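
As an illustration of the kind of versioning the release script bakes in
(a sketch, not the actual script):

    # derive a version string from git tags and write it into the tarball as VERSION
    VERSION="$(git describe --tags --always --match 'v*')"
    echo "${VERSION#v}" > VERSION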

>
>   - Post a gluster tag being created, and the subsequent release job is
> run for gluster 4.0, the packaging team will be notified about which GD2
> tag to pick up for packaging, with this gluster release
> - IOW, a response to the Jenkins generated packaging job, with the
> GD2 version/tag/release to pick up
>
>   - GD2 will be packaged as a sub-package of the glusterfs package, and
> hence will have appropriate changes to the glusterfs spec file (or other
> variants of packaging as needed), to generate one more package (RPM) to
> post in the respective download location
>
>   - The GD2 sub-package version would be the same as the release version
> that GD2 makes (it will not be the gluster package version, at least for
> now)
>
> IMO it's clearer if the -glusterd2 sub-package has the same version as the
> rest of the glusterfs-* packages.
>

+1. We will follow glusterfs versioning not just for the packages, but
for the source itself.

> The -glusterd2 sub-package's Summary and/or its %description can be used to
> identify the version of GD2.
>
> Emphasis on IMO. It is possible for the -glusterd sub-package to have a
> version that's different than the parent package(s).
>
>   - For now, none of the gluster RPMs would be dependent on the GD2 RPM
> in the downloads, so any user wanting to use GD2 would have to install
> the package specifically and then proceed as needed
>
>   - (thought/concern) Jenkins smoke job (or other jobs) that builds RPMs
> will not build GD2 (as the source is not available) and will continue as
> is (which means there is enough spec file magic here that we can specify
> during release packaging to additionally build GD2)
>
> 2) Generate a quick start or user guide, to aid using GD2 with 4.0
>
> @Kaushal if this is generated earlier (say with beta builds of 4.0
> itself) we could get help from the community to test drive the same and
> provide feedback to improve the guide for users by the release (as
> discussed in the maintainers meeting)
>
> One thing not covered above is what happens when GD2 fixes a high priority
> bug between releases of glusterfs.
>
> One option is we wait until the next release of glusterfs to include the
> update to GD2.
>
> Or we can respin (rerelease) the glusterfs packages with the updated GD2.
> I.e. glusterfs-4.0.0-1 (containing GD2-1.0.0) -> glusterfs-4.0.0-2
> (containing GD2-1.0.1).
>
> Or we can decide not to make a hard rule and do whatever makes the most
> sense at the time. If the fix is urgent, we respin. If the fix is not urgent
> it waits for the next Gluster release. (From my perspective though I'd
> rather not do respins, I've already got plenty of work doing the regular
> releases.)
>
> The alternative to all of the above is to package GD2 in its own package.
> This entails opening a New Package Request and going through the packaging
> reviews. All in all it's a lot of work. If GD2 source is eventually going to
> be moved into the main glusterfs source though this probably doesn't make
> sense.
>
> --
>
> Kaleb
>
>
>
>
> ___
> packaging mailing list
> packag...@gluster.org
> http://lists.gluster.org/mailman/listinfo/packaging
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GD2 integration - Commands

2017-09-22 Thread Kaushal M
Hey all,

We are continuing our development of the core of GD2, and will soon
come to the stage where we can begin integrating and developing other
commands for GD2.

As the GD2 team, we are developing and implementing the core commands
for managing Gluster. These include the basic volume management
commands and cluster management commands. To implement the other
Gluster feature commands (snapshot, geo-rep, quota, NFS, gfproxy etc.)
we will be depending on the feature maintainers.

We are not yet ready to begin full-fledged implementation of the new
commands, but we can begin getting ready by designing the various
requirements and parts of command implementation. We have prepared a
document [1] to help begin this design process. The document explains
what is involved in implementing a new command and how it will be
structured. This will help developers come up with a skeleton design
for their command and its operation, which will help speed up
implementation later.

We request everyone to start this process soon, so we can deliver on
target. If there are any questions, feel free to ask on this thread or
reach out to us.

Thanks.

[1]: https://github.com/gluster/glusterd2/wiki/New-Commands-Guide
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Community Meeting 2017-09-13 - Volunteers to host?

2017-09-13 Thread Kaushal M
Hi all,

We need a volunteer to stand in as the host of today's community meeting, as
I will not be able to host or attend it.

If anyone wants to volunteer, let the rest of us know by replying here.
Also reply if you're willing to regularly host or co-host the meetings.

Who's ready to host?

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Changing Submit Type on review.gluster.org

2017-09-07 Thread Kaushal M
On 7 Sep 2017 6:25 pm, "Niels de Vos"  wrote:

On Thu, Sep 07, 2017 at 04:41:54PM +0530, Nigel Babu wrote:
> On Thu, Sep 07, 2017 at 12:43:28PM +0200, Niels de Vos wrote:
> >
> > Q: Can patches of a series be merged before all patches in the series
> > have a +2? Initial changes that prepare things, or add new (unused) core
> > functionalities should be mergable so that follow-up patches can be
> > posted against the HEAD of the branch.
> >
> > A: Nigel?
> >
>
> If you have patches that are dependent like this:
>
> A -> B -> C -> D
>
> where A is the first patch and B is based on top of A and so forth.
>
> Merging A is not dependent on B. It can be merged any time you have Code
> Review and Regression votes.
>
> However, you cannot merge B until A is ready or merged. If A is unmerged
> but is ready to merge, then when you merge B, Gerrit will merge them in
> order, i.e. first merge A, and then B automatically.
>
> Does this answer your question? If it helps, I can arrange for staging to
> be online so more people can test this out.

That answers my question, I don't need to try it out myself.

Thanks!
Niels
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers



Gerrit still provides all the meta-information about patches as git notes
under a special ref. Git can be configured to display these notes along
with commit messages, so you would still effectively get the same experience
as before.

More information is available at [1]. This depends on a Gerrit plugin, but
I believe it's enabled by default.

[1]
https://gerrit.googlesource.com/plugins/reviewnotes/+/master/src/main/resources/Documentation/refs-notes-review.md
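
For example, something like the following should work (assuming the plugin
publishes the notes under refs/notes/review, as the documentation above
describes):

    git fetch origin refs/notes/review:refs/notes/review
    git log --notes=review -1
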
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] 4.0 discussions: Need time slots

2017-07-31 Thread Kaushal M
I'm okay with the Wednesday and Friday slots.

On Mon, Jul 31, 2017 at 12:36 PM, Amar Tumballi  wrote:
> Hi All,
>
> It would be great to have everyone's participation in the 4.0 discussions,
> considering these would be significant decisions for the project. Hence, having
> a meeting slot which doesn't conflict with the majority would be great.
>
> Below are the time slots I am considering for the 4.0 discussions.
>
> Tuesday (1st Aug) - 8:30pm-9:30pm IST (11am - 12pm EDT).
> Wednesday (2nd Aug) - 5pm - 6pm IST (7:30am - 8:30am EDT)
> Wednesday (2nd Aug) - 7:30pm - 8:30pm IST (10am - 11am EDT)
> Friday (4th Aug) - 5pm - 7pm IST (7:30am - 9:30am EDT)
>
> Please respond to this email, so I can create the meeting slot.
>
> For now, I will create a calendar invite for Wednesday 5pm-6pm IST.
>
> Regards,
> Amar
>
>
> --
> Amar Tumballi (amarts)
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Kaushal M
On Thu, May 4, 2017 at 6:40 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
>> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>>> > <pkara...@redhat.com> wrote:
>>> > >
>>> > >
>>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>>> > >>
>>> > >> Hi,
>>> > >>
>>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>>> > >>
>>> > >> We have ~4weeks to release of 3.11, and a week to backport features 
>>> > >> that
>>> > >> slipped the branching date (May-5th).
>>> > >>
>>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. 
>>> > >> Request
>>> > >> that any bug that is determined as a blocker for the release be noted
>>> > as a
>>> > >> "blocks" against this bug.
>>> > >>
>>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>>> > >> identified that should prevent the release, need to be tracked against
>>> > this
>>> > >> tracker bug.
>>> > >>
>>> > >> We are not building beta1 packages, and will build out RC0 packages 
>>> > >> once
>>> > >> we cross the backport dates. Hence, folks interested in testing this
>>> > out can
>>> > >> either build from the code or wait for (about) a week longer for the
>>> > >> packages (and initial release notes).
>>> > >>
>>> > >> Features tracked as slipped and expected to be backported by 5th May
>>> > are,
>>> > >>
>>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>> > >>
>>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>> > >>   - Needs a +2 on https://review.gluster.org/13762
>>> > >>
>>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>>> > >> dirents #174 (@skoduri)
>>> > >>
>>> > >> 4) Halo - Initial version (@pranith)
>>> > >
>>> > >
>>> > > I merged the patch on master. Will send out the port on Thursday. I have
>>> > to
>>> > > leave like right now to catch train and am on leave tomorrow, so will be
>>> > > back on Thursday and get the port done. Will also try to get the other
>>> > > patches fb guys mentioned post that preferably by 5th itself.
>>> >
>>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>>> > patch. This shouldn't have happened.
>>> > The IPv6 patch is currently stalled because it depends on an internal
>>> > FB library. The IPv6 bits that made it in pull this dependency.
>>> > This would have lead to a -2 on the HALO patch by me, but as I wasn't
>>> > aware of it, the patch was merged.
>>> >
>>> > The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
>>> > to affect anything HALO. So they should be easily removable and should
>>> > be removed.
>>> >
>>>
>>> As per the configure.ac the macro is enabled only when we are building
>>> gluster with "--with-fb-extras", which I don't think we do anywhere, so
>>> didn't think they are important at the moment. Sorry for the confusion
>>> caused because of this. Thanks to Kaushal for the patch. I will backport
>>> that one as well when I do the 3.11 backport of HALO. So will wait for the
>>> backport until Kaushal's patch is merged.
>>
>> Note that there have been discussions about preventing special vendor
>> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
>> is not acceptable. Someone was interested in providing a "site.h"
>> configuration file that different vendors can use to fine-tune certain
>> things that are too detailed for ./configure options.
>>
>> We should remove the --with

Re: [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Kaushal M
On Thu, May 4, 2017 at 4:38 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>> > <pkara...@redhat.com> wrote:
>> > >
>> > >
>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam <srang...@redhat.com> wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>> > >>
>> > >> We have ~4weeks to release of 3.11, and a week to backport features that
>> > >> slipped the branching date (May-5th).
>> > >>
>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> > >> that any bug that is determined as a blocker for the release be noted
>> > as a
>> > >> "blocks" against this bug.
>> > >>
>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>> > >> identified that should prevent the release, need to be tracked against
>> > this
>> > >> tracker bug.
>> > >>
>> > >> We are not building beta1 packages, and will build out RC0 packages once
>> > >> we cross the backport dates. Hence, folks interested in testing this
>> > out can
>> > >> either build from the code or wait for (about) a week longer for the
>> > >> packages (and initial release notes).
>> > >>
>> > >> Features tracked as slipped and expected to be backported by 5th May
>> > are,
>> > >>
>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>> > >>
>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>> > >>   - Needs a +2 on https://review.gluster.org/13762
>> > >>
>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>> > >> dirents #174 (@skoduri)
>> > >>
>> > >> 4) Halo - Initial version (@pranith)
>> > >
>> > >
>> > > I merged the patch on master. Will send out the port on Thursday. I have
>> > to
>> > > leave like right now to catch train and am on leave tomorrow, so will be
>> > > back on Thursday and get the port done. Will also try to get the other
>> > > patches fb guys mentioned post that preferably by 5th itself.
>> >
>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>> > patch. This shouldn't have happened.
>> > The IPv6 patch is currently stalled because it depends on an internal
>> > FB library. The IPv6 bits that made it in pull this dependency.
>> > This would have lead to a -2 on the HALO patch by me, but as I wasn't
>> > aware of it, the patch was merged.
>> >
>> > The IPV6 changes are in rpcsvh.{c,h} and configure.ac, and don't seem
>> > to affect anything HALO. So they should be easily removable and should
>> > be removed.
>> >
>>
>> As per the configure.ac the macro is enabled only when we are building
>> gluster with "--with-fb-extras", which I don't think we do anywhere, so
>> didn't think they are important at the moment. Sorry for the confusion
>> caused because of this. Thanks to Kaushal for the patch. I will backport
>> that one as well when I do the 3.11 backport of HALO. So will wait for the
>> backport until Kaushal's patch is merged.
>
> Note that there have been discussions about preventing special vendor
> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
> is not acceptable. Someone was interested in providing a "site.h"
> configuration file that different vendors can use to fine-tune certain
> things that are too detailed for ./configure options.
>
> We should remove the --with-fb-extras as well, specially because it is
> not useful for anyone that does not have access to the forked fbtirpc
> library.
>
> Kaushal mentioned he'll update the patch that removed the IPv6 default
> define, to also remove the --with-fb-extras and related bits.

The patch removing IPV6 and fbextras is at
https://review.gluster.org/17174 waiting for regression tests to run.

I've merged the Selinux backports, https://review.gluster.org/17159
and https:

Re: [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-03 Thread Kaushal M
On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
 wrote:
>
>
> On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
>>
>> Hi,
>>
>> Release 3.11 for gluster has been branched [1] and tagged [2].
>>
>> We have ~4weeks to release of 3.11, and a week to backport features that
>> slipped the branching date (May-5th).
>>
>> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> that any bug that is determined as a blocker for the release be noted as a
>> "blocks" against this bug.
>>
>> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> weeks need not be reflected against the blocker, *only* blocker bugs
>> identified that should prevent the release, need to be tracked against this
>> tracker bug.
>>
>> We are not building beta1 packages, and will build out RC0 packages once
>> we cross the backport dates. Hence, folks interested in testing this out can
>> either build from the code or wait for (about) a week longer for the
>> packages (and initial release notes).
>>
>> Features tracked as slipped and expected to be backported by 5th May are,
>>
>> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>>
>> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>>   - Needs a +2 on https://review.gluster.org/13762
>>
>> 3) Enhance handleops readdirplus operation to return handles along with
>> dirents #174 (@skoduri)
>>
>> 4) Halo - Initial version (@pranith)
>
>
> I merged the patch on master. Will send out the port on Thursday. I have to
> leave like right now to catch train and am on leave tomorrow, so will be
> back on Thursday and get the port done. Will also try to get the other
> patches fb guys mentioned post that preferably by 5th itself.

Niels found that the HALO patch has pulled in a little bit of the IPv6
patch. This shouldn't have happened.
The IPv6 patch is currently stalled because it depends on an internal
FB library. The IPv6 bits that made it in pull this dependency.
This would have led to a -2 on the HALO patch by me, but as I wasn't
aware of it, the patch was merged.

The IPv6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
to affect anything in HALO. So they should be easily removable and should
be removed.

>
>>
>>
>> Thanks,
>> Kaushal, Shyam
>>
>> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
>>
>> [2] Tag for 3.11.0beta1 :
>> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
>>
>> [3] Tracker BZ for 3.11.0 blockers:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
>
>
>
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Stop sending patches for and merging patches on release-3.7

2017-02-01 Thread Kaushal M
Hi all,

GlusterFS-3.7.20 is intended to be the final release for release-3.7.
3.7 enters EOL with the expected release of 3.10 in about 2 weeks.

Once 3.10 is released I'll be closing any open bugs on 3.7 and
abandoning any patches on review.

So as the subject says, developers please stop sending changes to
release-3.7, and maintainers please don't merge any more changes onto
release-3.7.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS-3.7.20 will be tagged later today

2017-01-30 Thread Kaushal M
I'll be tagging this release later today. This should be the final 3.7
release after over a year and a half of updates.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.8.8 released

2017-01-14 Thread Kaushal M
Daryl, on the gluster-users list, tested out the Storage SIG packages
for 3.8.8 [1]. You can push the packages to release and announce the
release now.

[1]: https://lists.gluster.org/pipermail/gluster-users/2017-January/029667.html

On Wed, Jan 11, 2017 at 10:09 PM, Niels de Vos  wrote:
> On Wed, Jan 11, 2017 at 05:15:43AM -0800, Gluster Build System wrote:
>>
>>
>> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.8.8.tar.gz
>
> Packages have been built and tagged into the "centos-gluster38-test"
> repository. If there are no problems reported, I'll probably announce the
> release tomorrow.
>
> Thanks,
> Niels
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.7.19 released

2017-01-11 Thread Kaushal M
On Mon, Jan 9, 2017 at 9:41 PM, Niels de Vos  wrote:
> On Mon Jan 9 16:57:02 2017 GMT+0100, Kaleb S. KEITHLEY wrote:
>> On 01/09/2017 01:44 AM, Gluster Build System wrote:
>> >
>> >
>> > SRC: 
>> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.19.tar.gz
>> >
>> > This release is made off jenkins-release-180
>>
>> packages for everything (excepting CentOS Storage SIG) listed at
>> http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
>> are available now at their respective locations.
>
> Packages for the CentOS Storage SIG are in the "centos-gluster37-test" 
> repository. Let me know when they should be marked for releasing as an update.

They install and upgrade okay. Simple volume IO works. This can be
marked for release.

>
> Niels
>
> --
> Sent from my Jolla w/ the Open Source friendly services from kolabnow.com
> ___
> packaging mailing list
> packag...@gluster.org
> http://www.gluster.org/mailman/listinfo/packaging
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS-3.7.19 tagging approaching

2017-01-02 Thread Kaushal M
3.7.19 was supposed to be tagged on 30th December, but didn't happen
due to the holidays.

I'll be tagging the release before the end of this week. To do this,
no more patches will be merged after tomorrow's meeting. If anyone has
anything to get merged before then, please bring it to my notice.

Right now, there have been 6 commits since .18 and the current
release-3.7 HEAD is at 2892fb430.

Thanks.

Kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.7.18 released

2016-12-12 Thread Kaushal M
On Mon, Dec 12, 2016 at 12:35 PM, Niels de Vos  wrote:
> On Thu, Dec 08, 2016 at 04:17:59AM -0500, Kaleb Keithley wrote:
>>
>> Packages for Fedora 23 are queued for testing in Bodhi; will land soon in 
>> Updates-Testing and then Updates repos.
>> Packages for Fedora 24 and Fedora 25 are on download.gluster.org
>> Packages for Fedora 26 (rawhide) will be available soon on 
>> download.gluster.org pending resolution of a build issue.
>>
>> Packages for RHEL/CentOS 5, 6, and 7 are on download.gluster.org, and will 
>> also be available soon in the CentOS Storage SIG.
>
> Packages for the CentOS Storage SIG have been available in the testing
> repository for a couple of days now. I can mark them as released and
> have them pushed to the mirrors if someone agrees with that.

+1 to do this. I'll be making the announcement today.

>
> Thanks,
> Niels
>
>
>>
>> Packages for Debian Wheezy/7, Jessie/8, and Stretch/9 are on 
>> download.gluster.org.
>>
>> Packages for Ubuntu Trusty/14.04, Wily/15/10, and Xenial/16.04 are in the 
>> Ubuntu Launchpad PPA.
>>
>> Packages for SuSE will be available soon.
>>
>> The .../glusterfs/LTM-3.7 and .../glusterfs/3.7/LATEST symlinks on 
>> download.gluster.org have been set to 3.7.18
>>
>>
>>
>>
>> - Original Message -
>> > From: "Gluster Build System" 
>> > To: sbair...@redhat.com, maintainers@gluster.org, packag...@gluster.org
>> > Sent: Wednesday, December 7, 2016 2:01:48 PM
>> > Subject: [gluster-packaging] glusterfs-3.7.18 released
>> >
>> >
>> >
>> > SRC:
>> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.18.tar.gz
>> >
>> > This release is made off jenkins-release-178
>> >
>> > -- Gluster Build System
>> > ___
>> > packaging mailing list
>> > packag...@gluster.org
>> > http://www.gluster.org/mailman/listinfo/packaging
>> >
>> ___
>> packaging mailing list
>> packag...@gluster.org
>> http://www.gluster.org/mailman/listinfo/packaging
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 3:29 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>>
>>>> I made a mistake.
>>>>
>>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>>> not correct.
>>>> So I corrected it with a new commit, c11131f, directly on top of my
>>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>>> tagged this commit as 3.7.17.
>>>>
>>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>>> updated branch to release-3.7. Because of this I inadvertently created
>>>> a new (virtual) branch.
>>>> Any new changes merged in release-3.7 since have happened on top of
>>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>>> v3.7.17 exists as a virtual branch now.
>>>>
>>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>>
>>>> | release-3.7 CURRENT HEAD
>>>> |
>>>> | new commits
>>>> |   | c11131f (tag: v3.7.17)
>>>> 8b95eba /
>>>> |
>>>> | old commits
>>>>
>>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>>> push this as the new release-3.7.
>>>>
>>>>  | release-3.7 NEW HEAD
>>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>>> ||
>>>> | new commits*   |
>>>> || c11131f (tag: v3.7.17)
>>>> | 8b95eba ---/
>>>> |
>>>> | old commits
>>>>
>>>> I'd like to avoid doing a rebase because it would lead to changes
>>>> commit-ids, and break any existing clones.
>>>>
>>>> The actual commands I'll be doing on my local system are:
>>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>>> to the 3.7.17 branch in the picture above)
>>>> ```
>>>> $ git fetch origin # fetch latest origin
>>>> $ git checkout release-3.7 # checking out my local release-3.7
>>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>>> local release-3.7. This will create a merge commit.
>>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>>> commit.
>>>> ```
>>>>
>>>> After this users with existing clones should get changes done on their
>>>> next `git pull`.
>>>
>>> I've tested this out locally, and it works.
>>>
>>>>
>>>> I'll do this in the next couple of hours, if there are no objections.
>>>>
>>
>> If forgot to give credit. Thanks JoeJulian and gnulnx for noticing
>> this and bringing attention to it.
>
> I'm going ahead with the plan. I've not gotten any bad feedback. On
> JoeJulian and Niels said it looks okay.

This is now done. A merge commit 94ba6c9 was created which merges back
v3.7.17 into release-3.7. The head of release-3.7 now points to this
merge commit.
Future pulls of release-3.7 will not be affected. If anyone faces
issues, please let me know.

>
>>
>>>> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>
>>> I made a mistake.
>>>
>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>> not correct.
>>> So I corrected it with a new commit, c11131f, directly on top of my
>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>> tagged this commit as 3.7.17.
>>>
>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>> updated branch to release-3.7. Because of this I inadvertently created
>>> a new (virtual) branch.
>>> Any new changes merged in release-3.7 since have happened on top of
>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>> v3.7.17 exists as a virtual branch now.
>>>
>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>
>>> | release-3.7 CURRENT HEAD
>>> |
>>> | new commits
>>> |   | c11131f (tag: v3.7.17)
>>> 8b95eba /
>>> |
>>> | old commits
>>>
>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>> push this as the new release-3.7.
>>>
>>>  | release-3.7 NEW HEAD
>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>> ||
>>> | new commits*   |
>>> || c11131f (tag: v3.7.17)
>>> | 8b95eba ---/
>>> |
>>> | old commits
>>>
>>> I'd like to avoid doing a rebase because it would lead to changes
>>> commit-ids, and break any existing clones.
>>>
>>> The actual commands I'll be doing on my local system are:
>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>> to the 3.7.17 branch in the picture above)
>>> ```
>>> $ git fetch origin # fetch latest origin
>>> $ git checkout release-3.7 # checking out my local release-3.7
>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>> local release-3.7. This will create a merge commit.
>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>> commit.
>>> ```
>>>
>>> After this users with existing clones should get changes done on their
>>> next `git pull`.
>>
>> I've tested this out locally, and it works.
>>
>>>
>>> I'll do this in the next couple of hours, if there are no objections.
>>>
>
> If forgot to give credit. Thanks JoeJulian and gnulnx for noticing
> this and bringing attention to it.

I'm going ahead with the plan. I've not gotten any bad feedback.
JoeJulian and Niels said it looks okay.

>
>>> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>
>> I made a mistake.
>>
>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>> not correct.
>> So I corrected it with a new commit, c11131f, directly on top of my
>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>> tagged this commit as 3.7.17.
>>
>> Unfortunately, when pushing I just pushed the tags and didn't push my
>> updated branch to release-3.7. Because of this I inadvertently created
>> a new (virtual) branch.
>> Any new changes merged in release-3.7 since have happened on top of
>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>> v3.7.17 exists as a virtual branch now.
>>
>> The current branching for release-3.7 and v3.7.17 looks like this.
>>
>> | release-3.7 CURRENT HEAD
>> |
>> | new commits
>> |   | c11131f (tag: v3.7.17)
>> 8b95eba /
>> |
>> | old commits
>>
>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>> push this as the new release-3.7.
>>
>>  | release-3.7 NEW HEAD
>> |release-3.7 CURRENT HEAD -->| Merge commit
>> ||
>> | new commits*   |
>> || c11131f (tag: v3.7.17)
>> | 8b95eba ---/
>> |
>> | old commits
>>
>> I'd like to avoid doing a rebase because it would lead to changes
>> commit-ids, and break any existing clones.
>>
>> The actual commands I'll be doing on my local system are:
>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>> to the 3.7.17 branch in the picture above)
>> ```
>> $ git fetch origin # fetch latest origin
>> $ git checkout release-3.7 # checking out my local release-3.7
>> $ git merge origin/release-3.7 # merge updates from origin into my
>> local release-3.7. This will create a merge commit.
>> $ git push origin release-3.7:release-3.7 # push my local branch to
>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>> commit.
>> ```
>>
>> After this users with existing clones should get changes done on their
>> next `git pull`.
>
> I've tested this out locally, and it works.
>
>>
>> I'll do this in the next couple of hours, if there are no objections.
>>

I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
this and bringing attention to it.

>> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M <kshlms...@gmail.com> wrote:
> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>
> I made a mistake.
>
> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
> not correct.
> So I corrected it with a new commit, c11131f, directly on top of my
> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
> tagged this commit as 3.7.17.
>
> Unfortunately, when pushing I just pushed the tags and didn't push my
> updated branch to release-3.7. Because of this I inadvertently created
> a new (virtual) branch.
> Any new changes merged in release-3.7 since have happened on top of
> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
> v3.7.17 exists as a virtual branch now.
>
> The current branching for release-3.7 and v3.7.17 looks like this.
>
> | release-3.7 CURRENT HEAD
> |
> | new commits
> |   | c11131f (tag: v3.7.17)
> 8b95eba /
> |
> | old commits
>
> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
> push this as the new release-3.7.
>
>  | release-3.7 NEW HEAD
> |release-3.7 CURRENT HEAD -->| Merge commit
> ||
> | new commits*   |
> || c11131f (tag: v3.7.17)
> | 8b95eba ---/
> |
> | old commits
>
> I'd like to avoid doing a rebase because it would lead to changes
> commit-ids, and break any existing clones.
>
> The actual commands I'll be doing on my local system are:
> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
> to the 3.7.17 branch in the picture above)
> ```
> $ git fetch origin # fetch latest origin
> $ git checkout release-3.7 # checking out my local release-3.7
> $ git merge origin/release-3.7 # merge updates from origin into my
> local release-3.7. This will create a merge commit.
> $ git push origin release-3.7:release-3.7 # push my local branch to
> remote and point remote release-3.7 to my release-3.7 ie. the merge
> commit.
> ```
>
> After this users with existing clones should get changes done on their
> next `git pull`.

I've tested this out locally, and it works.

>
> I'll do this in the next couple of hours, if there are no objections.
>
> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
IMPORTANT: Till this is fixed please stop merging changes into release-3.7

I made a mistake.

When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
not correct.
So I corrected it with a new commit, c11131f, directly on top of my
local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
tagged this commit as 3.7.17.

Unfortunately, when pushing I just pushed the tags and didn't push my
updated branch to release-3.7. Because of this I inadvertently created
a new (virtual) branch.
Any new changes merged in release-3.7 since have happened on top of
8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
v3.7.17 exists as a virtual branch now.

The current branching for release-3.7 and v3.7.17 looks like this.

| release-3.7 CURRENT HEAD
|
| new commits
|   | c11131f (tag: v3.7.17)
8b95eba /
|
| old commits

The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
push this as the new release-3.7.

 | release-3.7 NEW HEAD
|release-3.7 CURRENT HEAD -->| Merge commit
||
| new commits*   |
|| c11131f (tag: v3.7.17)
| 8b95eba ---/
|
| old commits

I'd like to avoid doing a rebase because it would lead to changed
commit-ids, and break any existing clones.

The actual commands I'll be doing on my local system are:
(NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
to the 3.7.17 branch in the picture above)
```
$ git fetch origin # fetch latest origin
$ git checkout release-3.7 # checking out my local release-3.7
$ git merge origin/release-3.7 # merge updates from origin into my
local release-3.7. This will create a merge commit.
$ git push origin release-3.7:release-3.7 # push my local branch to
remote and point remote release-3.7 to my release-3.7 ie. the merge
commit.
```

After this, users with existing clones should get the changes on their
next `git pull`.
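
One quick way to double-check the result afterwards (just a sketch; it only verifies that the tag is now reachable from the branch):

```
git fetch origin
git merge-base --is-ancestor v3.7.17 origin/release-3.7 && echo "v3.7.17 is in release-3.7"
```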

I'll do this in the next couple of hours, if there are no objections.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Please pause merging patches to 3.9 waiting for just one patch

2016-11-13 Thread Kaushal M
Pranith,

This change [1] removing experimental xlators isn't merged yet. It
should be taken in before you do your release.

[1]: https://review.gluster.org/15750

On Fri, Nov 11, 2016 at 4:23 PM, Niels de Vos  wrote:
> On Thu, Nov 10, 2016 at 04:52:29PM -0500, Kaleb S. KEITHLEY wrote:
>> On 11/10/2016 04:12 PM, Vijay Bellur wrote:
>> > On Thu, Nov 10, 2016 at 11:56 AM, Niels de Vos  wrote:
>> > >
>> > > The packages from the CentOS Storage SIG will by default provide the
>> > > latest LTM release. The STM release is provided in addition, and needs
>> > > an extra step to enable.
>> > >
>> > > I am not sure how we can handle this in other distributions (or also
>> > > with the packages on d.g.o.).
>> >
>> > Maybe we should not flip the LATEST for non-RPM distributions in
>> > d.g.o? or should we introduce LTM/LATEST and encourage users to change
>> > their repository files to point to this?
>>
>> I like having LATEST and LTM symlinks, but---
>>
>> Did we decide that after 3.8 the next LTM release will be 3.10? (Or 4.0
>> whenever that lands?) And an LTM release is maintained for 12 or 18 months?
>>
>> If so there probably will be two active LTM releases, assuming we can ship
>> the next releases on time.
>
> Yes, and we have is documented (with diagrams!) on
> https://www.gluster.org/community/release-schedule/ , see the "Post-3.8"
> section.
>
>> We should have LTM-3.8 and eventually LTM-3.10 symlinks then. Or are there
>> other ideas?
>>
>> > Packaging in distributions would be handled by package maintainers and
>> > I presume they can decide the appropriateness of a release for
>> > packaging?
>>
>> Indeed. Well, that's the status quo, and beyond our control in any event.
>
> We should probably send out a reminder to the packaging list as that
> should contain all known packagers for different distributions.
> Including 3.9 in a distribution might be appropriate for some, as long
> as the distribution/version goes EOL before our STM release.
>
> Niels
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-09 Thread Kaushal M
On Thu, Nov 10, 2016 at 1:11 PM, Atin Mukherjee  wrote:
>
>
> On Thu, Nov 10, 2016 at 1:04 PM, Pranith Kumar Karampuri
>  wrote:
>>
>> I am trying to understand the criticality of these patches. Raghavendra's
>> patch is crucial because gfapi workloads(for samba and qemu) are affected
>> severely. I waited for Krutika's patch because VM usecase can lead to disk
>> corruption on replace-brick. If you could let us know the criticality and we
>> are in agreement that they are this severe, we can definitely take them in.
>> Otherwise next release is better IMO. Thoughts?
>
>
> If you are asking about how critical they are, then the first two are
> definitely not, but the third one is critical: if a user upgrades
> from 3.6 to the latest with quota enabled, further peer probes get rejected, and
> the only workaround is to disable quota and re-enable it.
>

If a workaround is present, I don't consider it a blocker for the release.

> On a different note, the 3.9 head is not static and keeps moving forward. So
> if you are really expecting that only critical patches go in, that's not
> happening, just a word of caution!
>
>>
>> On Thu, Nov 10, 2016 at 12:56 PM, Atin Mukherjee 
>> wrote:
>>>
>>> Pranith,
>>>
>>> I'd like to see following patches getting in:
>>>
>>> http://review.gluster.org/#/c/15722/
>>> http://review.gluster.org/#/c/15714/
>>> http://review.gluster.org/#/c/15792/
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 10, 2016 at 7:12 AM, Pranith Kumar Karampuri
>>>  wrote:

 hi,
   The only problem left was EC taking more time. This should affect
 small files a lot more. Best way to solve it is using compound-fops. So for
 now I think going ahead with the release is best.

 We are waiting for Raghavendra Talur's
 http://review.gluster.org/#/c/15778 before going ahead with the release. If
 we missed any other crucial patch please let us know.

 Will make the release as soon as this patch is merged.

 --
 Pranith & Aravinda

 ___
 maintainers mailing list
 maintainers@gluster.org
 http://www.gluster.org/mailman/listinfo/maintainers

>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
>
> ~ Atin (atinm)
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Reminder to add meeting updates

2016-11-01 Thread Kaushal M
Hi all,

This is a reminder to all to add their updates to the weekly meeting pad
[1]. Please make sure you find some time to add updates about your
components, features or things you are working on.

Thanks,
Kaushal

[1] https://public.pad.fsfe.org/p/gluster-community-meetings
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Gluster Test Thursday - Release 3.9

2016-10-28 Thread Kaushal M
I'm continuing testing GlusterD for 3.9.0rc2. I wasted a lot of my
time earlier this morning testing 3.8.5 because of an oversight.

I have found one issue so far: the cluster.op-version defaults to 4.
This isn't how it's supposed to be; it needs to be set to 39000 for
3.9.0.

I'll send out a patch to fix this.
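
For anyone else verifying rc2, a minimal sketch of how to check and bump the value once all peers run the new bits (the placeholder op-version is intentionally left generic here):

```
# check the effective cluster op-version
gluster volume get all cluster.op-version

# bump it explicitly after every peer has been upgraded
gluster volume set all cluster.op-version <op-version>
```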

On Fri, Oct 28, 2016 at 11:29 AM, Raghavendra Gowdappa
 wrote:
> Thanks to "Tirumala Satya Prasad Desala" , we were able 
> to run tests for Plain distribute and didn't see any failures.
>
> Ack Plain distribute.
>
> - Original Message -
>> From: "Kaleb S. KEITHLEY" 
>> To: "Aravinda" , "Gluster Devel" 
>> , "GlusterFS Maintainers"
>> 
>> Sent: Thursday, October 27, 2016 8:51:36 PM
>> Subject: Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday -   
>>  Release 3.9
>>
>>
>> Ack on nfs-ganesha bits. Tentative ack on gnfs bits.
>>
>> Conditional ack on build, see:
>>http://review.gluster.org/15726
>>http://review.gluster.org/15733
>>http://review.gluster.org/15737
>>http://review.gluster.org/15743
>>
>> There will be backports to 3.9 of the last three soon. Timely reviews of
>> the last three will accelerate the availability of backports.
>>
>> On 10/26/2016 10:34 AM, Aravinda wrote:
>> > Gluster 3.9.0rc2 tarball is available here
>> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz
>> >
>> > regards
>> > Aravinda
>> >
>> > On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:
>> >> Hi,
>> >>
>> >> Since Automated test framework for Gluster is in progress, we need
>> >> help from Maintainers and developers to test the features and bug
>> >> fixes to release Gluster 3.9.
>> >>
>> >> In last maintainers meeting Shyam shared an idea about having a Test
>> >> day to accelerate the testing and release.
>> >>
>> >> Please participate in testing your component(s) on Oct 27, 2016. We
>> >> will prepare the rc2 build by tomorrow and share the details before
>> >> Test day.
>> >>
>> >> RC1 Link:
>> >> http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
>> >> Release Checklist:
>> >> https://public.pad.fsfe.org/p/gluster-component-release-checklist
>> >>
>> >>
>> >> Thanks and Regards
>> >> Aravinda and Pranith
>> >>
>> >
>> > ___
>> > maintainers mailing list
>> > maintainers@gluster.org
>> > http://www.gluster.org/mailman/listinfo/maintainers
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] GlusterFS-3.7.16 release approaching

2016-10-02 Thread Kaushal M
A quick update on the status of 3.7.16.

I fell sick and couldn't do the release as scheduled on the 30th. I'll
try to get the release done over the next two days.

Niels, one of the changes you listed has a merge conflict. I'll merge
the other two only if you can confirm they are all independent of
each other.
If you could confirm before this time tomorrow, I'll take them in.

Thanks,
Kaushal

On Fri, Sep 30, 2016 at 9:00 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Mon, Sep 26, 2016 at 03:08:42PM +0530, Kaushal M wrote:
>> Hi all,
>>
>> GlusterFS-3.7.16 is on target to be released on Sep 30, 4 days from now.
>>
>> In preparation for the release, maintainers please stop merging
>> anymore changes into release-3.7.
>> If any developer has a change that needs to be merged, please reply to
>> this email before end of day Sep 28.
>>
>> At this moment, 16 changes have been merged on top of v3.7.15. There
>> are still 5 patches under review that have been added since the last
>> release [2].
>
> I've backported and posted some additional patches that were waiting on
> mainline reviews. That has finally happened and the 3.7 versions are
> available in Gerrit now:
>   
> http://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+topic:bug-1347715+NOT+status:abandoned
>
> Regression tests are slowly passing (and smoke rechecked). It would be
> most welcome if these changes can be included in the 3.7.16 release.
>
> Thanks,
> Niels
>
>
>>
>> Thanks,
>> Kaushal
>>
>> [1] 
>> http://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+status:open
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS-3.7.16 release approaching

2016-09-26 Thread Kaushal M
Hi all,

GlusterFS-3.7.16 is on target to be released on Sep 30, 4 days from now.

In preparation for the release, maintainers please stop merging
anymore changes into release-3.7.
If any developer has a change that needs to be merged, please reply to
this email before end of day Sep 28.

At this moment, 16 changes have been merged on top of v3.7.15. There
are still 5 patches under review that have been added since the last
release [2].

Thanks,
Kaushal

[1] 
http://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+status:open
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Volunteer required for managing 3.7.16

2016-09-15 Thread Kaushal M
On Thu, Sep 8, 2016 at 1:55 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi all,
>
> This is a call for volunteers to manage the 3.7.16 release.
>
> I want to spend some more time on GD2, and so I want to temporarily
> give up release management duties.
>
> I'll be available to help with the release process if you are not
> clear about it.

Bumping this again.

Is there anyone out there up for this?

>
> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Volunteer required for managing 3.7.16

2016-09-08 Thread Kaushal M
Hi all,

This is a call for volunteers to manage the 3.7.16 release.

I want to spend some more time on GD2, and so I want to temporarily
give up release management duties.

I'll be available to help with the release process if you are not
clear about it.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-3.7.15 released

2016-09-07 Thread Kaushal M
On Tue, Aug 30, 2016 at 9:16 PM, Niels de Vos  wrote:
> On Tue, Aug 30, 2016 at 07:14:48AM -0700, Gluster Build System wrote:
>>
>>
>> SRC: 
>> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.15.tar.gz
>
> Packages for the CentOS Storage SIG have been build for the
> centos-gluster37-test repositories and should arrive there over the next
> hours.
>
> Please report test results so that I can mark them as released and they
> get signed+pushed to the mirrors.
>

I tested the packages, and they are working well. I tested updates,
network encryption, snapshot, healing (afr and ec) and rebalancing,
and everything worked.
You can push the packages now.

> Thanks,
> Niels
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] 3.9 branching problem

2016-09-01 Thread Kaushal M
Also, remember to edit the rfc.sh script to point to the correct branch.
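
For reference, a minimal sketch of the remote cleanup suggested in the quoted mail below, assuming the remote alias is still called "public" as in the quoted config and that your git has `remote set-url --delete`:

```
# drop the GitHub push URL so the new branch is pushed to Gerrit only
git remote set-url --delete public git@github.com:gluster/glusterfs.git
git push -u public release-3.9
```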

On Thu, Sep 1, 2016 at 11:40 AM, Kaushal M <kshlms...@gmail.com> wrote:
> You should be pushing to git.gluster.org, not github. Remove the
> `g...@github.com:gluster/glusterfs.git` url from the remote. And try
> again.
>
> On Thu, Sep 1, 2016 at 10:53 AM, Pranith Kumar Karampuri
> <pkara...@redhat.com> wrote:
>> hi,
>>   We were waiting for last minute patches to be merged and now when we
>> tried to create the branch we found that Aravinda and I don't have
>> permissions for pushing to github. What is the procedure to get these
>> permissions?
>>
>> We followed these steps for pushing to upstreams:
>> 1) Have the following alias in my .git/config
>> [remote "public"]
>> url = g...@github.com:gluster/glusterfs.git
>> url = ssh://@git.gluster.org/glusterfs.git
>>
>> 2) git branch release-3.9
>> 3) git push -u public release-3.9
>>
>> I get the following error when executing the command in step-3)
>>
>> Permission denied (publickey).
>> fatal: Could not read from remote repository.
>>
>> Please make sure you have the correct access rights
>> and the repository exists.
>>
>> --
>> Pranith
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.7.15 released

2016-08-31 Thread Kaushal M
Thanks, Kaleb.
I'll make the announcement now.

On Wed, Aug 31, 2016 at 9:03 PM, Kaleb S. KEITHLEY  wrote:
> On 08/30/2016 10:14 AM, Gluster Build System wrote:
>>
>>
>> SRC: 
>> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.15.tar.gz
>>
>> This release is made off jenkins-release-168
>>
>> -- Gluster Build System
>
> Fedora 23 RPMs queued for testing, f24, f25,f26 RPMs on d.g.o
> Debian .dpkgs on d.g.o
> Ubuntu .dpkgs in launchpad
> SuSE RPMs in SuSE build system.
>
> 3.7/LATEST symlink on d.g.o. moved to 3.7.15 (from 3.7.14)
>
> Reminder No EL5, EL6, EL7 RPMs starting with 3.7.15 as discussed; get
> EL6, EL7 RPMs from CentOS Storage SIG.
>
> --
>
> Kaleb
> ___
> packaging mailing list
> packag...@gluster.org
> http://www.gluster.org/mailman/listinfo/packaging
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Update on GlusterFS-3.7.15

2016-08-22 Thread Kaushal M
Hi all.

We have 1 more week till the scheduled 30th August release date for
GlusterFS-3.7.15.

Till today, 34 new commits have been merged into release-3.7 since the
tagging of 3.7.14. Gerrit has ~30 open patches on release-3.7 [1],
about 10 of which have been submitted after 3.7.14.

Notify the maintainers and me of any changes you need merged. You can
reply to this thread to notify. Try to ensure that your changes get
merged before this weekend.

Maintainers are free to merge patches into release-3.7 till this
weekend. Ensure that the patches satisfy the backport criteria [2].
I'll send out another announcement notifying when to stop merging
patches.

Let's have another good release.

~kaushal


[1] 
https://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+status:open
[2] 
https://github.com/kshlm/glusterdocs/blob/abbe527d05745fe55b28b80169e9436e78c052c9/Contributors-Guide/GlusterFS-Release-process.md
(under review at https://github.com/gluster/glusterdocs/pull/139)
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS-3.6 bug screening

2016-08-22 Thread Kaushal M
Let's try this again.

We are doing a final screening of the 3.6 bug list after the next
bug-triage meeting (1200UTC 23 Aug 2016, ie tomorrow).

All maintainers are requested to attend this meeting and screen bugs
for their components. The list of bugs is available at [1]. Bugs that
have been stricken out have been already screened. We have ~80 bugs to
be screened.

This time is being setup so that maintainers have a dedicated time to
get together and cleanup this list. But maintainers are free to screen
bugs in their own time before the meeting. Other developers are also
free to attend and screen bugs.

I hope to see better attendance tomorrow than last week.

Thanks.
Kaushal

[1] https://public.pad.fsfe.org/p/gluster-3.6-final-bug-screen
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Updates to the GlusterFS release process document

2016-08-07 Thread Kaushal M
On Tue, Aug 2, 2016 at 2:21 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi All.
>
> We've been discussing about improvements to our release process and
> schedules for a while now. As a result of these discussions I had
> started an etherpad [1] to put together a release process document.
>
> I've created a pull-request [2] to glusterdocs based on this etherpad,
> to make it official. The document isn't complete yet. I'll be adding
> more information and doing cleanups as required. Most of the required
> information regarding the release-process has been added. I'd like the
> maintainers to go over the pull-request and give comments.

Bumping this again. I request all maintainers to please review the
document and provide your approvals.
Currently I've gotten comments only from Niels and Aravinda.

~kaushal

>
> Thanks,
> Kaushal
>
> [1] https://public.pad.fsfe.org/p/glusterfs-release-process-201606
> [2] https://github.com/gluster/glusterdocs/pull/139
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Updates to the GlusterFS release process document

2016-08-02 Thread Kaushal M
Hi All.

We've been discussing improvements to our release process and
schedules for a while now. As a result of these discussions I had
started an etherpad [1] to put together a release process document.

I've created a pull-request [2] to glusterdocs based on this etherpad,
to make it official. The document isn't complete yet. I'll be adding
more information and doing cleanups as required. Most of the required
information regarding the release-process has been added. I'd like the
maintainers to go over the pull-request and give comments.

Thanks,
Kaushal

[1] https://public.pad.fsfe.org/p/glusterfs-release-process-201606
[2] https://github.com/gluster/glusterdocs/pull/139
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-3.7.14 released

2016-08-02 Thread Kaushal M
Thanks for the quick builds! This is going to be the fastest release
ever to go from tagging to announcement.

On Mon, Aug 1, 2016 at 10:52 PM, Kaleb S. KEITHLEY  wrote:
> On 08/01/2016 02:16 AM, Gluster Build System wrote:
>>
>>
>> SRC: 
>> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.14.tar.gz
>>
>> This release is made off jenkins-release-165
>>
>> -- Gluster Build System
>
> Packages for Fedora 23 are queued for testing in Fedora Koji/Bodhi. They
> will appear first via dnf in the Updates-Testing repo, then in the
> Updates repo.
>
> Packages for Fedora 24, 25, 26; epel 5, 6, 7; debian wheezy, jessie, and
> stretch, are available now on download.gluster.org.
>
> Packages for Ubuntu Trusty, Wily, and Xenial are available now in Launchpad.
>
> Packages for SuSE SLES-12, OpenSuSE 13.1, and Leap42.1 are available now
> in the SuSE build system.
>
> See the READMEs in the respective subdirs at
> https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.14/ for more
> details on how to obtain them.
>
> --
>
> Kaleb
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] GlusterFS-v3.7.14 tagging approaching

2016-07-27 Thread Kaushal M
Hi all,

3.7.14 is scheduled to be tagged on the 30th of this month
(30/Jul/2016), ie. in 3 more days.
Please ensure that you get the required changes merged by 1200 UTC on the 30th.

We have a good amount of fixes already with 18 commits into
release-3.7 since 3.7.13.

Make sure to add bugs for changes to be merged to the tracking bug [1].

Thanks,
Kaushal

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.14
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs

2016-07-11 Thread Kaushal M
On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee  wrote:
> We have hit a bug 1347250 in downstream (applicable upstream too) where it
> was seen that glusterd didn't regenerate the volfiles when it was temporarily
> brought up in upgrade mode by yum. The log file captured that gsyncd --version
> failed to execute and hence glusterd init couldn't proceed till the volfile
> regeneration. Since the return code is not handled here in the spec file, users
> wouldn't come to know about this, and going forward this is going to cause
> major issues in healing and finally opens up the possibility of
> split brains.
>
> Further analysis by Kotresh & Raghavendra Talur reveals that gsyncd failed
> here because of the compatibility issue where gsyncd was still not upgraded
> where as glusterfs-server was and this failure was mainly because of change
> in the mem type enum. We have seen a similar issue for RDMA as well
> (probably a year back). So to be very generic this can happen in any upgrade
> path from one version to another where new mem type is introduced. We have
> seen this from 3.7.8 to 3.7.12 and 3.8. People upgrading from 3.6 to 3.7/3.8
> will also experience this issue.
>
> Till we work on this fix, I suggest all the release managers to highlight
> this in the release note of the latest releases with the following work
> around after yum update:
>
> 1. grep -irns "geo-replication module not working as desired"
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | wc -l
>
>  If the output is non-zero, then go to step 2 else follow the rest of the
> steps as per the guide.
>
> 2.Check if glusterd instance is running or not by 'ps aux | grep glusterd',
> if it is, then stop the glusterd service.
>
>  3. glusterd --xlator-option *.upgrade=on -N
>
> and then proceed ahead with rest of the steps as per the guide.
>
> Thoughts?

Proper .so versioning of libglusterfs should help with problems like
this. I don't know how to do this though.

But I do have some thoughts to share on using GlusterD's upgrade-mode.

GlusterD depends on the cluster op-version when generating volfiles,
to insert new features/xlators into the volfile graph.
This was done to make sure that the homogeneity of the volfiles is
preserved across the cluster.
This behaviour makes running GlusterD in upgrade mode after a package
upgrade, essentially a noop.
The cluster op-version doesn't change automatically when packages are upgraded,
so the regenerated volfiles in the post-upgrade section are basically
the same as before.
(If something is getting added into volfiles after this, it is
incorrect, and is something I'm yet to check).

The correct time to regenerate the volfiles is after all members of
the cluster have been upgraded and the cluster op-version has been
bumped.
(Bumping op-version doesn't regenerate anything, it is just an
indication that the cluster is now ready to use new features.)

We don't have a direct way to get volfiles regenerated on all members
with a single command yet. We can implement such a command with
relative ease.
For now, volfiles can be regenerated by making use of the `volume set`
command, by setting a `user.upgrade` option on a volume.
Options in the `user.` namespace are passed on to hook scripts and not
added into any volfiles, but setting such an option on a volume causes
GlusterD to regenerate volfiles for the volume.
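
A rough sketch of what that documented procedure could look like (the volume name and op-version value are placeholders, and the reset step is only optional cleanup):

```
# 1. once all peers are upgraded, bump the cluster op-version
gluster volume set all cluster.op-version <new-op-version>

# 2. toggle a user.* option per volume; glusterd regenerates that volume's volfiles
gluster volume set <VOLNAME> user.upgrade on
# optional cleanup: clear the marker option again
gluster volume reset <VOLNAME> user.upgrade
```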

My suggestion would be to stop using glusterd in upgrade mode during
post-upgrade to regenerate volfiles, and document the above way to get
volfiles regenerated across the cluster correctly.
We could do away with upgrade mode itself, but it could be useful for
other things (Though I can't think of any right now).

What do the other maintainers feel about this?

~kaushal

PS: If this discussion is distracting from the original conversation,
I'll start a new thread.

>
> P.S : this email is limited to maintainers till we decide on the approach to
> highlight this issues to the users
>
>
> --
> Atin
> Sent from iPhone
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Glusterfs-3.7.13 release plans

2016-07-08 Thread Kaushal M
On Fri, Jul 8, 2016 at 2:22 PM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
> There seems to be a major inode leak in fuse-clients:
> https://bugzilla.redhat.com/show_bug.cgi?id=1353856
>
> We have found an RCA through code reading (though we have high confidence in
> the RCA). Do we want to include this in 3.7.13?

I'm not going to be delaying the release anymore. I'll be adding this
issue into the release-notes as a known-issue.

>
> regards,
> Raghavendra.
>
> ----- Original Message -
>> From: "Kaushal M" <kshlms...@gmail.com>
>> To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> Cc: maintainers@gluster.org, "Gluster Devel" <gluster-de...@gluster.org>
>> Sent: Friday, July 8, 2016 11:51:11 AM
>> Subject: Re: [Gluster-Maintainers] Glusterfs-3.7.13 release plans
>>
>> On Fri, Jul 8, 2016 at 9:59 AM, Pranith Kumar Karampuri
>> <pkara...@redhat.com> wrote:
>> > Could you take in http://review.gluster.org/#/c/14598/ as well? It is ready
>> > for merge.
>> >
>> > On Thu, Jul 7, 2016 at 3:02 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>> >>
>> >> Can you take in http://review.gluster.org/#/c/14861 ?
>>
>> Can you get one of the maintainers to give it a +2?
>>
>> >>
>> >>
>> >> On Thursday 7 July 2016, Kaushal M <kshlms...@gmail.com> wrote:
>> >>>
>> >>> On Thu, Jun 30, 2016 at 11:08 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> >>> > Hi all,
>> >>> >
>> >>> > I'm (or was) planning to do a 3.7.13 release on schedule today. 3.7.12
>> >>> > has a huge issue with libgfapi, solved by [1].
>> >>> > I'm not sure if this fixes the other issues with libgfapi noticed by
>> >>> > Lindsay on gluster-users.
>> >>> >
>> >>> > This patch has been included in the packages 3.7.12 built for CentOS,
>> >>> > Fedora, Ubuntu, Debian and SUSE. I guess Lindsay is using one of these
>> >>> > packages, so it might be that the issue seen is new. So I'd like to do
>> >>> > a quick release once we have a fix.
>> >>> >
>> >>> > Maintainers can merge changes into release-3.7 that follow the
>> >>> > criteria given in [2]. Please make sure to add the bugs for patches
>> >>> > you are merging are added as dependencies for the 3.7.13 tracker bug
>> >>> > [3].
>> >>> >
>> >>>
>> >>> I've just merged the fix for the gfapi breakage into release-3.7, and
>> >>> hope to tag 3.7.13 soon.
>> >>>
>> >>> The current head for release-3.7 is commit bddf6f8. 18 patches have
>> >>> been merged since 3.7.12 for the following components,
>> >>>  - gfapi
>> >>>  - nfs (includes ganesha related changes)
>> >>>  - glusterd/cli
>> >>>  - libglusterfs
>> >>>  - fuse
>> >>>  - build
>> >>>  - geo-rep
>> >>>  - afr
>> >>>
>> >>> I need an acknowledgement from the maintainers of the above
>> >>> components that they are ready.
>> >>> If any maintainers know of any other issues, please reply here. We'll
>> >>> decide how to address them for this release here.
>> >>>
>> >>> Also, please don't merge anymore changes into release-3.7. If you need
>> >>> to get something merged, please inform me.
>> >>>
>> >>> Thanks,
>> >>> Kaushal
>> >>>
>> >>> > Thanks,
>> >>> > Kaushal
>> >>> >
>> >>> > [1]: https://review.gluster.org/14822
>> >>> > [2]: https://public.pad.fsfe.org/p/glusterfs-release-process-201606
>> >>> > under the GlusterFS minor release heading
>> >>> > [3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.13
>> >>> ___
>> >>> maintainers mailing list
>> >>> maintainers@gluster.org
>> >>> http://www.gluster.org/mailman/listinfo/maintainers
>> >>
>> >>
>> >>
>> >> --
>> >> Atin
>> >> Sent from iPhone
>> >>
>> >> ___
>> >> maintainers mailing list
>> >> maintainers@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/maintainers
>> >>
>> >
>> >
>> >
>> > --
>> > Pranith
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Glusterfs-3.7.13 release plans

2016-07-08 Thread Kaushal M
On Fri, Jul 8, 2016 at 9:59 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
> Could you take in http://review.gluster.org/#/c/14598/ as well? It is ready
> for merge.
>
> On Thu, Jul 7, 2016 at 3:02 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>
>> Can you take in http://review.gluster.org/#/c/14861 ?

Can you get one of the maintainers to give it a +2?

>>
>>
>> On Thursday 7 July 2016, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> On Thu, Jun 30, 2016 at 11:08 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>> > Hi all,
>>> >
>>> > I'm (or was) planning to do a 3.7.13 release on schedule today. 3.7.12
>>> > has a huge issue with libgfapi, solved by [1].
>>> > I'm not sure if this fixes the other issues with libgfapi noticed by
>>> > Lindsay on gluster-users.
>>> >
>>> > This patch has been included in the packages 3.7.12 built for CentOS,
>>> > Fedora, Ubuntu, Debian and SUSE. I guess Lindsay is using one of these
>>> > packages, so it might be that the issue seen is new. So I'd like to do
>>> > a quick release once we have a fix.
>>> >
>>> > Maintainers can merge changes into release-3.7 that follow the
>>> > criteria given in [2]. Please make sure to add the bugs for patches
>>> > you are merging are added as dependencies for the 3.7.13 tracker bug
>>> > [3].
>>> >
>>>
>>> I've just merged the fix for the gfapi breakage into release-3.7, and
>>> hope to tag 3.7.13 soon.
>>>
>>> The current head for release-3.7 is commit bddf6f8. 18 patches have
>>> been merged since 3.7.12 for the following components,
>>>  - gfapi
>>>  - nfs (includes ganesha related changes)
>>>  - glusterd/cli
>>>  - libglusterfs
>>>  - fuse
>>>  - build
>>>  - geo-rep
>>>  - afr
>>>
>>> I need an acknowledgement from the maintainers of the above
>>> components that they are ready.
>>> If any maintainers know of any other issues, please reply here. We'll
>>> decide how to address them for this release here.
>>>
>>> Also, please don't merge anymore changes into release-3.7. If you need
>>> to get something merged, please inform me.
>>>
>>> Thanks,
>>> Kaushal
>>>
>>> > Thanks,
>>> > Kaushal
>>> >
>>> > [1]: https://review.gluster.org/14822
>>> > [2]: https://public.pad.fsfe.org/p/glusterfs-release-process-201606
>>> > under the GlusterFS minor release heading
>>> > [3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.13
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>>
>> --
>> Atin
>> Sent from iPhone
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>
>
>
> --
> Pranith
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Glusterfs-3.7.13 release plans

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 1:57 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, Jun 30, 2016 at 12:46:57PM +0530, Atin Mukherjee wrote:
>> On Thu, Jun 30, 2016 at 11:56 AM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>
>> >
>> >
>> > On Thu, Jun 30, 2016 at 11:08 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> >
>> >> Hi all,
>> >>
>> >> I'm (or was) planning to do a 3.7.13 release on schedule today. 3.7.12
>> >> has a huge issue with libgfapi, solved by [1].
>> >> I'm not sure if this fixes the other issues with libgfapi noticed by
>> >> Lindsay on gluster-users.
>> >>
>> >> This patch has been included in the packages 3.7.12 built for CentOS,
>> >> Fedora, Ubuntu, Debian and SUSE. I guess Lindsay is using one of these
>> >> packages, so it might be that the issue seen is new. So I'd like to do
>> >> a quick release once we have a fix.
>> >>
>> >
>> >  http://review.gluster.org/14835 probably is the one you are looking for.
>> >
>> >
>>
>> Ignore it. I had a chance to talk to Poornima and she mentioned that this
>> is a different problem.
>
> The patch that fixes the problem is http://review.gluster.org/14822 and
> I've merged it yesterday. The problem was introduced by
> http://review.gluster.org/14822 (similar subject as 12835 above).

This probably should be another review, the same change cannot
possibly introduce and fix a problem.

But are you sure that the VM pauses observed were due to buffer
overflows, which the patch fixes?
I think this is a different problem, as I'm pretty sure Lindsay was
using packages that included this patch.

> Unfortunately none of the libgfapi maintainer did completely review the
> change before it got merged. It also seems that minimal testing was done
> after the change got included (last minute change in 3.8, quickly
> backported as well).
>
> In order to make Gluster more stable, and prevent problems like this
> again, we really need to work on automating test cases. I hope all
> maintainers are thinking about how they want to test the components they
> are responsible for. For example, I'm planning to run the upstream QEMU
> tests against our nightly builds (libgfapi), and similar for the
> connectathon tests (Gluster/NFS). At one point it should be possible to
> wrap these in DiSTAF, but the DiSTAF job in the CentOS CI is not ready
> yet.
>
> Thanks,
> Niels
>
>
>>
>>
>> >
>> >> Maintainers can merge changes into release-3.7 that follow the
>> >> criteria given in [2]. Please make sure to add the bugs for patches
>> >> you are merging are added as dependencies for the 3.7.13 tracker bug
>> >> [3].
>> >>
>> >> Thanks,
>> >> Kaushal
>> >>
>> >> [1]: https://review.gluster.org/14822
>> >> [2]: https://public.pad.fsfe.org/p/glusterfs-release-process-201606
>> >> under the GlusterFS minor release heading
>> >> [3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.13
>> >> ___
>> >> maintainers mailing list
>> >> maintainers@gluster.org
>> >> http://www.gluster.org/mailman/listinfo/maintainers
>> >>
>> >
>> >
>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Glusterfs-3.7.13 release plans

2016-06-29 Thread Kaushal M
Hi all,

I'm (or was) planning to do a 3.7.13 release on schedule today. 3.7.12
has a huge issue with libgfapi, solved by [1].
I'm not sure if this fixes the other issues with libgfapi noticed by
Lindsay on gluster-users.

This patch has been included in the packages 3.7.12 built for CentOS,
Fedora, Ubuntu, Debian and SUSE. I guess Lindsay is using one of these
packages, so it might be that the issue seen is new. So I'd like to do
a quick release once we have a fix.

Maintainers can merge changes into release-3.7 that follow the
criteria given in [2]. Please make sure the bugs for the patches
you are merging are added as dependencies of the 3.7.13 tracker bug
[3].

Thanks,
Kaushal

[1]: https://review.gluster.org/14822
[2]: https://public.pad.fsfe.org/p/glusterfs-release-process-201606
under the GlusterFS minor release heading
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.13
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.7.12 released

2016-06-28 Thread Kaushal M
On Mon, Jun 27, 2016 at 11:40 PM, Kaleb S. KEITHLEY  wrote:
> On 06/24/2016 07:26 AM, Gluster Build System wrote:
>>
>>
>> SRC: 
>> http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.7.12.tar.gz
>>
>> This release is made off jenkins-release-162
>
> Packages of 3.7.12 for the following linux distributions are now available:
>
>   Fedora 23: pending testing (Updates-Testing repo),
>   Fedora 24, 25: download.gluster.org
>   Debian Wheezy, Jessie, Stretch: download.gluster.org
>   Ubuntu trusty, wily, xenial: Launchpad PPA
>   RHEL/CentOS 5, 6, 7: download.gluster.org
>
> SuSE packages coming in a bit.

Thanks! I'll announce the release on the users and devel lists now.

>
> All packages include http://review.gluster.org/14779

Why has a patch that has not been merged into any branch, including
master, been included?
This makes it very easy for the patch to be missed in future releases.

>
> N.B. .../3.7/LATEST symlink has been updated to point to .../3.7/3.7.12
>
> --
>
> Kaleb
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12

2016-06-24 Thread Kaushal M
On Fri, Jun 24, 2016 at 3:09 PM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
>
>
> - Original Message -
>> From: "Kaushal M" <kshlms...@gmail.com>
>> To: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> Cc: maintainers@gluster.org
>> Sent: Friday, June 24, 2016 1:47:01 PM
>> Subject: Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12
>>
>> On Thu, Jun 23, 2016 at 3:47 PM, Raghavendra Gowdappa
>> <rgowd...@redhat.com> wrote:
>> >
>> >
>> > - Original Message -
>> >> From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> >> To: "Kaushal M" <kshlms...@gmail.com>
>> >> Cc: maintainers@gluster.org
>> >> Sent: Thursday, June 23, 2016 10:16:52 AM
>> >> Subject: Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12
>> >>
>> >>
>> >>
>> >> - Original Message -
>> >> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> >> > To: "Kaushal M" <kshlms...@gmail.com>
>> >> > Cc: maintainers@gluster.org
>> >> > Sent: Thursday, June 23, 2016 10:10:22 AM
>> >> > Subject: Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12
>> >> >
>> >> >
>> >> >
>> >> > - Original Message -
>> >> > > From: "Kaushal M" <kshlms...@gmail.com>
>> >> > > To: maintainers@gluster.org, "Vijay Bellur" <vbel...@redhat.com>
>> >> > > Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Raghavendra
>> >> > > Gowdappa"
>> >> > > <rgowd...@redhat.com>, "Xavier Hernandez"
>> >> > > <xhernan...@datalab.es>
>> >> > > Sent: Wednesday, June 22, 2016 7:07:56 PM
>> >> > > Subject: Re: Maintainers acks needed for 3.7.12
>> >> > >
>> >> > > On Wed, Jun 15, 2016 at 7:37 PM, Kaushal M <kshlms...@gmail.com>
>> >> > > wrote:
>> >> > > > Hi all,
>> >> > > >
>> >> > > > If anyone doesn't know about it yet, the release process for a
>> >> > > > release
>> >> > > > has changed. Refer to the mail thread [1] for more information.
>> >> > > >
>> >> > > > tl;dr, component maintainers need to now provide an acknowledgement
>> >> > > > that their component is ready to the release manager, for the
>> >> > > > manager
>> >> > > > to make the release. The maintainers are expected to test their
>> >> > > > components and make sure that it is not obviously broken. In the
>> >> > > > future, we expect to make this testing automated, which will require
>> >> > > > maintainers to provide these tests (using DiSTAF).
>> >> > > >
>> >> > > > Vijay, who's managing 3.7.12, sent out a call for acks [2], which
>> >> > > > hasn't received any acks apart from Atin for GlusterD. It is
>> >> > > > possible that most of you missed this as it was part of another
>> >> > > > thread.
>> >> > > >
>> >> > > > I'm starting this thread as the official call for acks, so that it
>> >> > > > is
>> >> > > > more visible to all developers. I cc'd component maintainers (from
>> >> > > > MAINTAINERS) to make sure no one misses this.
>> >> > > >
>> >> > > > Use the tag v3.7.12rc1 to verify your components and provide your
>> >> > > > acks
>> >> > > > by replying to this thread.
>> >> > >
>> >> > > I'll be doing the release on behalf of Vijay.
>> >> > > I had hoped to do the release right now (22-Jun-2016 UTC 1330), but
>> >> > > I'm postponing it to tomorrow, to give my hurting hand some rest.
>> >> > >
>> >> > > This gives maintainers a little more time to provide the missing ACKs.
>> >> > >
>> >> > > ACKs have been obtained for GlusterD, Geo-Rep, Snapshots, Tiering,
>> >> > > NFS, transports(tcp,rdma).
>> >> > >
>> >> > > Quota has an iffy ACK, as 2 more chang

Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12

2016-06-24 Thread Kaushal M
On Thu, Jun 23, 2016 at 3:47 PM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
>
>
> - Original Message -
>> From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> To: "Kaushal M" <kshlms...@gmail.com>
>> Cc: maintainers@gluster.org
>> Sent: Thursday, June 23, 2016 10:16:52 AM
>> Subject: Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12
>>
>>
>>
>> ----- Original Message -
>> > From: "Raghavendra Gowdappa" <rgowd...@redhat.com>
>> > To: "Kaushal M" <kshlms...@gmail.com>
>> > Cc: maintainers@gluster.org
>> > Sent: Thursday, June 23, 2016 10:10:22 AM
>> > Subject: Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12
>> >
>> >
>> >
>> > - Original Message -
>> > > From: "Kaushal M" <kshlms...@gmail.com>
>> > > To: maintainers@gluster.org, "Vijay Bellur" <vbel...@redhat.com>
>> > > Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Raghavendra
>> > > Gowdappa"
>> > > <rgowd...@redhat.com>, "Xavier Hernandez"
>> > > <xhernan...@datalab.es>
>> > > Sent: Wednesday, June 22, 2016 7:07:56 PM
>> > > Subject: Re: Maintainers acks needed for 3.7.12
>> > >
>> > > On Wed, Jun 15, 2016 at 7:37 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> > > > Hi all,
>> > > >
>> > > > If anyone doesn't know about it yet, the release process for a release
>> > > > has changed. Refer to the mail thread [1] for more information.
>> > > >
>> > > > tl;dr, component maintainers need to now provide an acknowledgement
>> > > > that their component is ready to the release manager, for the manager
>> > > > to make the release. The maintainers are expected to test their
>> > > > components and make sure that it is not obviously broken. In the
>> > > > future, we expect to make this testing automated, which will require
>> > > > maintainers to provide these tests (using DiSTAF).
>> > > >
>> > > > Vijay, who's managing 3.7.12, sent out a call for acks [2], which
>> > > > hasn't received any acks apart from Atin for GlusterD. It is
>> > > > possible that most of you missed this as it was part of another
>> > > > thread.
>> > > >
>> > > > I'm starting this thread as the official call for acks, so that it is
>> > > > more visible to all developers. I cc'd component maintainers (from
>> > > > MAINTAINERS) to make sure no one misses this.
>> > > >
>> > > > Use the tag v3.7.12rc1 to verify your components and provide your acks
>> > > > by replying to this thread.
>> > >
>> > > I'll be doing the release on behalf of Vijay.
>> > > I had hoped to do the release right now (22-Jun-2016 UTC 1330), but
>> > > I'm postponing it to tomorrow, to give my hurting hand some rest.
>> > >
>> > > This gives maintainers a little more time to provide the missing ACKs.
>> > >
>> > > ACKs have been obtained for GlusterD, Geo-Rep, Snapshots, Tiering,
>> > > NFS, transports(tcp,rdma).
>> > >
>> > > Quota has an iffy ACK, as 2 more changes have been requested for
>> > > merging, but they don't have proper all the flags to be merged yet.
>> > > The ACK doesn't say if these 2 changes were included.
>> > >
>> > > Major components yet to receive ACKs are AFR, DHT and EC.
>>
>> Thanks to QE from Redhat, specifically "Krishnaram Karthick Ramdoss"
>> <kramd...@redhat.com> and "Anil Shah" <as...@redhat.com>, we have some
>> sanity tests being run on Quota and DHT (including the list of patches which
>> I compiled). Once the tests are complete, I'll let you know the results.
>
> DHT tests have passed. DHT rebalance tests are still running. The update is 
> that it will take 12+ hours for the tests to complete.
>

Any update on this?

>>
>> > >
>> > > DHT also has a couple of changes that were requested for merging. I've
>> > > merged one, the other doesn't have all the flags.
>> >
>> > Sorry for being illiterate on the flags. What are the flags needed? I can
>> > try
>> > to get them.

The reviews were missing Code-Review+2s.

>> >
>> > >
>> > >
>> > > ~kaushal
>> > >
>> > > >
>> > > > Thanks,
>> > > > Kaushal
>> > > >
>> > > > [1]
>> > > > https://www.gluster.org/pipermail/maintainers/2016-April/000679.html
>> > > > [2] https://www.gluster.org/pipermail/maintainers/2016-June/000847.html
>> > >
>> > ___
>> > maintainers mailing list
>> > maintainers@gluster.org
>> > http://www.gluster.org/mailman/listinfo/maintainers
>> >
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12

2016-06-22 Thread Kaushal M
On Wed, Jun 15, 2016 at 7:37 PM, Kaushal M <kshlms...@gmail.com> wrote:
> Hi all,
>
> If anyone doesn't know about it yet, the release process for a release
> has changed. Refer to the mail thread [1] for more information.
>
> tl;dr, component maintainers need to now provide an acknowledgement
> that their component is ready to the release manager, for the manager
> to make the release. The maintainers are expected to test their
> components and make sure that it is not obviously broken. In the
> future, we expect to make this testing automated, which will require
> maintainers to provide these tests (using DiSTAF).
>
> Vijay, who's managing 3.7.12, sent out a call for acks [2], which
> hasn't received any acks apart from Atin for GlusterD. It is
> possible that most of you missed this as it was part of another
> thread.
>
> I'm starting this thread as the official call for acks, so that it is
> more visible to all developers. I cc'd component maintainers (from
> MAINTAINERS) to make sure no one misses this.
>
> Use the tag v3.7.12rc1 to verify your components and provide your acks
> by replying to this thread.

I'll be doing the release on behalf of Vijay.
I had hoped to do the release right now (22-Jun-2016 UTC 1330), but
I'm postponing it to tomorrow, to give my hurting hand some rest.

This gives maintainers a little more time to provide the missing ACKs.

ACKs have been obtained for GlusterD, Geo-Rep, Snapshots, Tiering,
NFS, transports(tcp,rdma).

Quota has an iffy ACK, as 2 more changes have been requested for
merging, but they don't yet have all the flags needed to be merged.
The ACK doesn't say if these 2 changes were included.

Major components yet to receive ACKs are AFR, DHT and EC.

DHT also has a couple of changes that were requested for merging. I've
merged one; the other doesn't have all the flags.


~kaushal

>
> Thanks,
> Kaushal
>
> [1] https://www.gluster.org/pipermail/maintainers/2016-April/000679.html
> [2] https://www.gluster.org/pipermail/maintainers/2016-June/000847.html
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Maintainers acks needed for 3.7.12

2016-06-16 Thread Kaushal M
Were these patches merged in with Vijay's approval? Changes should
ideally not be merged by other maintainers when release candidates are
being tracked. This makes the release maintainer's job hard.

In any case, I guess for this time we can do it at the HEAD of
release-3.7, which is commit dca4de3 right now.

I request all maintainers not to merge any more changes into
release-3.7 till further notice. If you need any change merged, please
inform the release maintainer, who will take the call on merging it.

Thanks.



On Thu, Jun 16, 2016 at 11:24 AM, Aravinda <avish...@redhat.com> wrote:
> Thanks Kaushal for the reminder.
>
> Following patches merged in Geo-rep after v3.7.12rc1, which are required for
> 3.7.12 since these are regressions caused after the merge of
> http://review.gluster.org/#/c/14322/
>
> http://review.gluster.org/14641
> http://review.gluster.org/14710
> http://review.gluster.org/14637
>
> I will acknowledge the working of Geo-replication components with the above
> mentioned patches. Let me know if that is fine.
>
> regards
> Aravinda
>
>
> On 06/15/2016 07:37 PM, Kaushal M wrote:
>>
>> Hi all,
>>
>> If anyone doesn't know about it yet, the release process for a release
>> has changed. Refer to the mail thread [1] for more information.
>>
>> tl;dr, component maintainers need to now provide an acknowledgement
>> that their component is ready to the release manager, for the manager
>> to make the release. The maintainers are expected to test their
>> components and make sure that it is not obviously broken. In the
>> future, we expect to make this testing automated, which will require
>> maintainers to provide these tests (using DiSTAF).
>>
>> Vijay, who's managing 3.7.12, sent out a call for acks [2], which
>> hasn't received any acks apart from Atin for GlusterD. It is
>> possible that most of you missed this as it was part of another
>> thread.
>>
>> I'm starting this thread as the official call for acks, so that it is
>> more visible to all developers. I cc'd component maintainers (from
>> MAINTAINERS) to make sure no one misses this.
>>
>> Use the tag v3.7.12rc1 to verify your components and provide your acks
>> by replying to this thread.
>>
>> Thanks,
>> Kaushal
>>
>> [1] https://www.gluster.org/pipermail/maintainers/2016-April/000679.html
>> [2] https://www.gluster.org/pipermail/maintainers/2016-June/000847.html
>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Backport acceptance criteria

2016-06-15 Thread Kaushal M
On Fri, Jun 10, 2016 at 6:39 AM, Niels de Vos <nde...@redhat.com> wrote:
> On Wed, Jun 08, 2016 at 07:12:12PM +0530, Kaushal M wrote:
>> On Sat, May 7, 2016 at 2:07 PM, Niels de Vos <nde...@redhat.com> wrote:
>> > Hi,
>> >
>> > with the close to getting released 3.8, I would like to make sure that
>> > we do not start to backport features and make invasive changes. This
>> > should ensure us that most developers work on the next version with a
>> > broader feature set, and spend less time on backporting and testing
>> > those backported changes. We should make sure to address bugs in the
>> > stable 3.8 release, but keep away from user (or automation) visible
>> > changes.
>> >
>> > Based a little on RFC 2119 [1], I'm proposing several categories that
>> > describe if a backport to a stable branch is acceptable or not. These
>> > "backport acceptance criteria" should be added to our documentation
>> > after a brief discussion among the maintainers of the project.
>> >
>> > I'd like to encourage all maintainers to review and comment on my
>> > current proposed criteria. It is my expectation that others add more
>> > items to the list.
>> >
>> > Maintainers that do not share their opinion, are assumed to be in
>> > agreement. I plan to have this list ready for sharing on the devel list
>> > before the next community meeting on Wednesday.
>> >
>>
>> This is really, really late, but this is good list for backport criteria.
>> I agree with almost all the criteria, and have commented inline on
>> which I don't (yet).
>
> Great, that is REALLY much appreciated!
>
>> > Thanks,
>> > Niels
>> >
>> >
>> > Patches for a stable branch have the following requirements:
>> >
>> >  * a change MUST fix a bug that users have reported or are very likely
>> >to hit
>> >
>> >  * each change SHOULD have a public test-case (.t or DiSTAF)
>> >
>> >  * a change MUST NOT add a new FOP
>> >
>> >  * a change MUST NOT add a new xlator
>> >
>> >  * a change SHOULD NOT add a new volume option, unless a public
>> >discussion was kept and several maintainers agree that this is the
>> >only right approach
>> >
>> >  * a change MAY add new values for existing volume options, these need
>> >to be documented in the release notes and be marked as a 'minor
>> >feature enhancement' or similar
>>
>> This should be allowed only if the new values don't break old 
>> clients/servers.
>> If it does, the change would require a public discussion and agreement
>> from maintainers to be accepted, and the breakage should be noted
>> clearly in the release notes and documentation.
>
> Yes, of course.
>
>> >  * it is NOT RECOMMENDED to modify the contents of existing log
>> >messages, automation and log parsers can depend on the phrasing
>>
>> I would also add a criteria for CLI output as well. We cannot have CLI
>> outputs changing too much on minor releases.
>
> Indeed! Some status checking scripts parse the output of the CLI (in
> XML-format or not) and we should prevent breaking those.
>
>> >  * a change SHOULD NOT have more than approx. 100 lines changed,
>> >additional public discussion and agreement among maintainers is
>> >required to get big changes approved
>>
>>
>> I'd like it if this criteria became stricter as release deadline
>> approached. This will help things move faster earlier in the release
>> cycle.
>> For example, during the initial part of a release cycle large patches
>> can get in if the maintainer of the component is okay.
>> Towards the end, (say 1 week away from an RC), such changes will need
>> to get agreement from a majority of the maintainers to be merged.
>> After an RC is done, such changes will not be merged at all.
>
> This was mainly written with the stable/bugfix releases in mind, and not
> so much for new major versions. Even for new versions I would be
> hesitant to backport anything that does not match the criteria from the
> 1st email. Any major changes suggest that the feature was not ready at
> the time of branching, and it may be a better decision to move it to the
> next version instead (or at least have it marked as experimental).
>
>> >  * a change MUST NOT modify existing structures or parameters that get
>> >sent over the network
>> >
>> >  * existing structures or parameters 

[Gluster-Maintainers] Maintainers acks needed for 3.7.12

2016-06-15 Thread Kaushal M
Hi all,

If anyone doesn't know about it yet, the release process for a release
has changed. Refer to the mail thread [1] for more information.

tl;dr, component maintainers now need to provide the release manager
with an acknowledgement that their component is ready, before the
manager makes the release. The maintainers are expected to test their
components and make sure that it is not obviously broken. In the
future, we expect to make this testing automated, which will require
maintainers to provide these tests (using DiSTAF).

Vijay, who's managing 3.7.12, sent out a call for acks [2], which
hasn't received any acks apart from Atin for GlusterD. It is
possible that most of you missed this as it was part of another
thread.

I'm starting this thread as the official call for acks, so that it is
more visible to all developers. I cc'd component maintainers (from
MAINTAINERS) to make sure no one misses this.

Use the tag v3.7.12rc1 to verify your components and provide your acks
by replying to this thread.

Thanks,
Kaushal

[1] https://www.gluster.org/pipermail/maintainers/2016-April/000679.html
[2] https://www.gluster.org/pipermail/maintainers/2016-June/000847.html
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Random voting in Gerrit - Check votes before merging

2016-06-09 Thread Kaushal M
On Thu, Jun 9, 2016 at 12:13 PM, Kaushal M <kshlms...@gmail.com> wrote:
> A heads up to all maintainers and developers.
>
> As all of you probably already know, reviews in Gerrit are getting
> random votes for jobs that ran for other patchsets.
>
> We've had people noticing these votes only when they've been negative.
> But these votes can be positive as well (I've got an example in the
> forwarded mail below).
>
> Maintainers need to make sure that any positive vote given to a
> review is correct and for a job that ran for the particular review,
> before merging it.
>
> To make sure that changes that have been given such a bogus vote don't
> get merged, any developer finding such a vote, can give a Verified-1
> to the review to block it from merging. I've changed the Verified flag
> so that a Verified-1 blocks a review from being merged. I'll remove
> this change after we figure out what's happening.
>
> I'll be posting updates to the infra-list to the mail-thread I've
> forwarded below.

This (and the random build failures) should be fixed now.

There should no longer be any random votes/comments. Anyone who's had
incorrect votes, please re-trigger the jobs.

I'll leave the Verified-1 configuration around, as a way to block
changes being merged in the future, if similar situations occur.

~kaushal

PS: For anyone curious about what happened (tl;dr: zombie-jenkins),
please refer to the thread
'Investigating random votes in Gerrit' in the gluster-infra list.

>
> ~kaushal
>
>
> -- Forwarded message --
> From: Kaushal M <kshlms...@gmail.com>
> Date: Thu, Jun 9, 2016 at 11:52 AM
> Subject: Investigating random votes in Gerrit
> To: gluster-infra <gluster-in...@gluster.org>
>
>
> In addition to the builder issues we're having, we are also facing
> problems with jenkins voting/commenting randomly.
>
> The comments generally link to older jobs for older patchsets, which
> were run about 2 months back (beginning of April). For example,
> https://review.gluster.org/14665 has a netbsd regression +1 vote, from
> a job run in April for review 13873, and which actually failed.
>
> Another observation that I've made is that these fake votes sometimes
> provide a -1 Verified. Jenkins shouldn't be using this flag anymore.
>
> These 2 observations, make me wonder if another jenkins instance is
> running somewhere, from our old backups possibly? Michael, could this
> be possible?
>
> To check from where these votes/comments were coming from, I tried
> checking the Gerrit sshd logs. This wasn't helpful, because all logins
> apparently happen from 127.0.0.1. This is probably some firewall rule
> that has been setup, post migration, because I see older logs giving
> proper IPs. I'll require Michael's help with fixing this, if possible.
>
> I'll continue to investigate, and update this thread with anything I find.
>
> ~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Random voting in Gerrit - Check votes before merging

2016-06-09 Thread Kaushal M
A heads up to all maintainers and developers.

As all of you probably already know, reviews in Gerrit are getting
random votes for jobs that ran for other patchsets.

We've had people noticing these votes only when they've been negative.
But these votes can be positive as well (I've got an example in the
forwarded mail below).

Maintainers need to make sure that any positive vote given to a
review is correct and comes from a job that ran for that particular
review, before merging it.

To make sure that changes that have been given such a bogus vote don't
get merged, any developer finding such a vote, can give a Verified-1
to the review to block it from merging. I've changed the Verified flag
so that a Verified-1 blocks a review from being merged. I'll remove
this change after we figure out what's happening.
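
If it helps, such a vote can also be cast from the command line over
Gerrit's SSH API, roughly like this (assuming your SSH key is
registered in Gerrit, that 29418 is the SSH port, and that the
Verified label is exposed over SSH; 14665,1 is only an example
change,patchset):

    ssh -p 29418 <user>@review.gluster.org gerrit review --verified -1 14665,1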

I'll be posting updates to the infra-list to the mail-thread I've
forwarded below.

~kaushal


-- Forwarded message --
From: Kaushal M <kshlms...@gmail.com>
Date: Thu, Jun 9, 2016 at 11:52 AM
Subject: Investigating random votes in Gerrit
To: gluster-infra <gluster-in...@gluster.org>


In addition to the builder issues we're having, we are also facing
problems with jenkins voting/commenting randomly.

The comments generally link to older jobs for older patchsets, which
were run about 2 months back (beginning of April). For example,
https://review.gluster.org/14665 has a netbsd regression +1 vote, from
a job run in April for review 13873, and which actually failed.

Another observation that I've made is that these fake votes sometimes
provide a -1 Verified. Jenkins shouldn't be using this flag anymore.

These 2 observations make me wonder if another jenkins instance is
running somewhere, from our old backups possibly? Michael, could this
be possible?

To check where these votes/comments were coming from, I tried
checking the Gerrit sshd logs. This wasn't helpful, because all logins
apparently happen from 127.0.0.1. This is probably some firewall rule
that was set up post-migration, because I see older logs giving
proper IPs. I'll require Michael's help with fixing this, if possible.

I'll continue to investigate, and update this thread with anything I find.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Backport acceptance criteria

2016-06-08 Thread Kaushal M
On Sat, May 7, 2016 at 2:07 PM, Niels de Vos  wrote:
> Hi,
>
> with the close to getting released 3.8, I would like to make sure that
> we do not start to backport features and make invasive changes. This
> should ensure us that most developers work on the next version with a
> broader feature set, and spend less time on backporting and testing
> those backported changes. We should make sure to address bugs in the
> stable 3.8 release, but keep away from user (or automation) visible
> changes.
>
> Based a little on RFC 2119 [1], I'm proposing several categories that
> describe if a backport to a stable branch is acceptable or not. These
> "backport acceptance criteria" should be added to our documentation
> after a brief discussion among the maintainers of the project.
>
> I'd like to encourage all maintainers to review and comment on my
> current proposed criteria. It is my expectation that others add more
> items to the list.
>
> Maintainers that do not share their opinion, are assumed to be in
> agreement. I plan to have this list ready for sharing on the devel list
> before the next community meeting on Wednesday.
>

This is really, really late, but this is a good list of backport criteria.
I agree with almost all the criteria, and have commented inline on
the ones I don't (yet) agree with.

> Thanks,
> Niels
>
>
> Patches for a stable branch have the following requirements:
>
>  * a change MUST fix a bug that users have reported or are very likely
>to hit
>
>  * each change SHOULD have a public test-case (.t or DiSTAF)
>
>  * a change MUST NOT add a new FOP
>
>  * a change MUST NOT add a new xlator
>
>  * a change SHOULD NOT add a new volume option, unless a public
>discussion was kept and several maintainers agree that this is the
>only right approach
>
>  * a change MAY add new values for existing volume options, these need
>to be documented in the release notes and be marked as a 'minor
>feature enhancement' or similar

This should be allowed only if the new values don't break old clients/servers.
If they do, the change would require a public discussion and agreement
from maintainers to be accepted, and the breakage should be noted
clearly in the release notes and documentation.

>
>  * it is NOT RECOMMENDED to modify the contents of existing log
>messages, automation and log parsers can depend on the phrasing

I would also add a criterion for CLI output. We cannot have CLI
output changing too much in minor releases.

>
>  * a change SHOULD NOT have more than approx. 100 lines changed,
>additional public discussion and agreement among maintainers is
>required to get big changes approved


I'd like this criterion to become stricter as the release deadline
approaches. This will help things move faster earlier in the release
cycle.
For example, during the initial part of a release cycle large patches
can get in if the maintainer of the component is okay with them.
Towards the end (say 1 week away from an RC), such changes will need
to get agreement from a majority of the maintainers to be merged.
After an RC is done, such changes will not be merged at all.

>
>  * a change MUST NOT modify existing structures or parameters that get
>sent over the network
>
>  * existing structures or parameters MAY get extended with additional
>values (i.e. new flags in a bitmap/mask) if the extensions are
>optional and do not affect older/newer client/server combinations
>
> NOTE: Changes to experimental features (as announced on the roadmap and
>   in the release notes) are exempted from these criteria, except for
>   the MOST NOT requirements. These features explicitly may change
>   their behaviour, configuration and management interface while
>   experimenting to find the optimal solution.
>
> 1. https://tools.ietf.org/html/rfc2119

I'd like a similar set of criteria for the merge window for a release
following the proposed release timelines.
I think I'll start writing a proper document for the proposal, using
the language from rfc2119. It should help drive better discussions
around it.

>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Release Management Process change - proposal

2016-05-10 Thread Kaushal M
On Tue, May 10, 2016 at 12:01 AM, Vijay Bellur  wrote:
> Hi All,
>
> We are blocked on 3.7.12 owing to this proposal. Appreciate any
> feedback on this!
>
> Thanks,
> Vijay
>
> On Thu, Apr 28, 2016 at 11:58 PM, Vijay Bellur  wrote:
>> Hi All,
>>
>> We have encountered a spate of regressions in recent 3.7.x releases. The
>> 3.7.x maintainers are facing additional burdens to ensure functional,
>> performance and upgrade correctness. I feel component maintainers should own
>> these aspects of stability as we own the components and understand our
>> components better than anybody else. In order to have more active
>> participation from maintainers for every release going forward, I propose
>> this process:
>>
>> 1. All component maintainers will need to provide an explicit ack about the
>> content and quality of their respective components before a release is
>> tagged.
>>
>> 2. A release will not be tagged if any component is not acked by a
>> maintainer.
>>
>> 3. Release managers will co-ordinate getting acks from maintainers and
>> perform necessary housekeeping (closing bugs etc.).
>>
>> This is not entirely new and a part of this process has been outlined in the
>> Guidelines for Maintainers [1] document. I am inclined to enforce this
>> process with more vigor to ensure that we do better on quality & stability.
>>
>> Thoughts, questions and feedback about the process are very welcome!
>>

+1 from me. Spreading out the verification duties will help us do
better releases.

>> Thanks,
>> Vijay
>>
>> [1]
>> http://www.gluster.org/community/documentation/index.php/Guidelines_For_Maintainers
>>
>>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Don't merge changes on release-3.7

2016-04-15 Thread Kaushal M
I've merged this change as it had passed all the jobs, and doesn't
seem (to me) to have any unexpected side effects.

On Sat, Apr 16, 2016 at 7:24 AM, Kaushal M <kshlms...@gmail.com> wrote:
> As I understand it's just a regression in the log messages. Does it
> have any impact on functionality?
>
> On Fri, Apr 15, 2016 at 6:42 PM, Vijaikumar Mallikarjuna
> <vmall...@redhat.com> wrote:
>> There is a regression bug:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1325822 with recent fix.
>>
>> This regression bug has been fixed in master, can we take this patch for
>> 3.7.11?
>>
>> Here is the 3.7 patch: http://review.gluster.org/#/c/13962/
>>
>> Thanks,
>> Vijay
>>
>>
>> On Thu, Apr 7, 2016 at 10:14 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> Hi All,
>>>
>>> I'm tagging 3.7.11, so please don't merge anymore changes on the
>>> release-3.7 branch.
>>>
>>> I'm currently on commit 458d4ba, which will be tagged as v3.7.11.
>>>
>>> I'll notify once the tagging is done, and merging changes can be resumed.
>>>
>>> Thanks,
>>> Kaushal
>>> ___
>>> maintainers mailing list
>>> maintainers@gluster.org
>>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Don't merge changes on release-3.7

2016-04-15 Thread Kaushal M
As I understand it, it's just a regression in the log messages. Does it
have any impact on functionality?

On Fri, Apr 15, 2016 at 6:42 PM, Vijaikumar Mallikarjuna
<vmall...@redhat.com> wrote:
> There is a regression bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1325822 with recent fix.
>
> This regression bug has been fixed in master, can we take this patch for
> 3.7.11?
>
> Here is the 3.7 patch: http://review.gluster.org/#/c/13962/
>
> Thanks,
> Vijay
>
>
> On Thu, Apr 7, 2016 at 10:14 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>
>> Hi All,
>>
>> I'm tagging 3.7.11, so please don't merge anymore changes on the
>> release-3.7 branch.
>>
>> I'm currently on commit 458d4ba, which will be tagged as v3.7.11.
>>
>> I'll notify once the tagging is done, and merging changes can be resumed.
>>
>> Thanks,
>> Kaushal
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Kaushal M
On Fri, Apr 15, 2016 at 2:36 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Fri, Apr 15, 2016 at 01:32:23PM +0530, Kaushal M wrote:
>> Some more (bad) news on the status of 3.7.11.
>>
>> I've been doing some more tests with release-3.7, and found that the
>> fix for solving daemons failing to start when management encryption is
>> enabled doesn't work in all cases.
>>
>> Now I've got 2 options I can take, and would like some opinions on
>> which I should take.
>>
>> 1. Delay the release a little more, and fix the issue completely. I
>> don't know how long a proper fix is going to take.
>>
>> Or,
>> 2. Revert the IPv6 patch that exposed this problem, and release
>> immediately. We can then work on getting the issue fixed on master,
>> and then backport the IPv6 change again.
>>
>> What do other maintainers feel? Hopefully I get some opinions before
>> the weekend.
>
> I'm all for reverting the patch and release 3.7.11 as soon as possible.
>
> It is also not clear to me how much the IPv6 change was a fix, or a
> feature enhancement. Anything that gets backported to a stable branch
> should be done with extremely high confidence that nothing can break.
> Any patch, even the really simple ones, can have unexpected
> side-effects. A patch that gets backported and breaks the release (or
> the release schedule) might not have been suitable for inclusion in the
> stable branch in the first place.

One of the reasons we didn't catch the failures earlier is that our
regression test VMs have IPv6 disabled, IIRC. I don't know why this was
disabled, but it should be enabled again.
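
In case it helps whoever looks at the builders, a quick way to check
and flip this (assuming IPv6 was turned off via sysctl and not via the
ipv6.disable=1 kernel parameter):

    # A value of 1 means IPv6 is disabled on the builder.
    sysctl net.ipv6.conf.all.disable_ipv6

    # Re-enable it for the running system; persist via /etc/sysctl.d/
    # or /etc/sysctl.conf as appropriate for the distro.
    sysctl -w net.ipv6.conf.all.disable_ipv6=0
    sysctl -w net.ipv6.conf.default.disable_ipv6=0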

>
> Thanks,
> Niels
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] More news on 3.7.11

2016-04-15 Thread Kaushal M
Some more (bad) news on the status of 3.7.11.

I've been doing some more tests with release-3.7, and found that the
fix for daemons failing to start when management encryption is
enabled doesn't work in all cases.
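
For anyone who wants to try reproducing this, the rough check I've been
using looks like the following (a sketch; it assumes TLS certificates
are already in place under /etc/ssl/glusterfs.{pem,key,ca} and that a
test volume exists):

    # Enable management encryption on every node and restart glusterd,
    # which respawns the NFS and self-heal daemons.
    touch /var/lib/glusterd/secure-access
    systemctl restart glusterd

    # On the broken builds the NFS Server and Self-heal Daemon entries
    # stay offline instead of showing 'Y' under Online.
    gluster volume status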

Now I've got 2 options I can take, and would like some opinions on
which I should take.

1. Delay the release a little more, and fix the issue completely. I
don't know how long a proper fix is going to take.

Or,
2. Revert the IPv6 patch that exposed this problem, and release
immediately. We can then work on getting the issue fixed on master,
and then backport the IPv6 change again.

What do other maintainers feel? Hopefully I get some opinions before
the weekend.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Another regression in release-3.7 and master

2016-04-07 Thread Kaushal M
On Thu, Apr 7, 2016 at 7:24 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Apr 7, 2016 at 6:23 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Thu, Apr 7, 2016 at 6:00 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>>
>>>
>>> On 04/07/2016 05:37 PM, Kaushal M wrote:
>>>>
>>>> On 7 Apr 2016 5:36 p.m., "Niels de Vos" <nde...@redhat.com
>>>> <mailto:nde...@redhat.com>> wrote:
>>>>>
>>>>> On Thu, Apr 07, 2016 at 05:13:54PM +0530, Kaushal M wrote:
>>>>> > On Thu, Apr 7, 2016 at 5:11 PM, Kaushal M <kshlms...@gmail.com
>>>> <mailto:kshlms...@gmail.com>> wrote:
>>>>> > > We've hit another regression.
>>>>> > >
>>>>> > > With management encryption enabled, daemons like NFS and SHD don't
>>>>> > > start on the current heads of release-3.7 and master branches.
>>>>> > >
>>>>> > > I still have no clear root cause for it, and would appreciate some
>>>> help.
>>>>> >
>>>>> > This was working with 3.7.9 from what I've heard.
>>>>>
>>>>> Do we have a simple test-case for this? If someone write a script, we
>>>>> should be able to "git bisect" it pretty quickly.
>>>>
>>>> I am doing this right now.
>>> "b33f3c9 glusterd: Bug fixes for IPv6 support" has caused this
>>> regression. I am yet to find the RCA though.
>>
>> git-bisect agrees with this as well.
>>
>> I initially thought it was because GlusterD didn't listen on IPv6
>> (checked using `ss`).
>> This change makes it so that connections to localhost use ::1 instead
>> of 127.0.0.1, and so the connection failed.
>> This should have caused all connection attempts to fail, irrespective
>> of it being encrypted or not.
>> But the failure only happens when management encryption is enabled.
>> So this theory doesn't make sense.
>
> This is the part of the problem!
>
> The initial IPv6 connection to ::1 fails for non encrypted connections as 
> well.
> But these connections correctly retry connect with the next address
> once the first connect attempt fails.
> Since the next address is 127.0.0.1, the connection succeeds, volfile
> is fetched and the daemon starts.
>
> Encrypted connections on the other hand, give up after the first
> failure and don't attempt a reconnect.
> This is somewhat surprising to me, as I'd recently fixed an issue
> which caused crashes when encrypted connections attempted a reconnect
> after a failure to connect.
>
> I'll diagnose this a little bit more and try to find a solution.

Found the full problem. This is mainly a result of the fix I did that
I mentioned above.
(A slight correction: it wasn't actually crashes that it fixed,
but an encrypted reconnect issue in GlusterD.)

I'm posting the root-cause as I described in the commit message for
the fix for this.
"""
With commit d117466 socket_poller() wasn't launched from
socket_connect
(for encrypted connections), if connect() failed. This was done to
prevent the socket private data from being double unreffed, from the
cleanups in both socket_poller() and socket_connect(). This allowed
future reconnects to happen successfully.

If a socket reconnects is sort of decided by the rpc notify function
registered. The above change worked with glusterd, as the glusterd rpc
notify function (glusterd_peer_rpc_notify()) continuously allowed
reconnects on failure.

mgmt_rpc_notify(), the rpc notify function in glusterfsd, behaves
differently.

For a DISCONNECT event, if more volfile servers are available or if
more
addresses are available in the dns cache, it allows reconnects. If not
it terminates the program.

For a CONNECT event, it attempts to do a volfile fetch rpc request. If
sending this rpc fails, it immediately terminates the program.

One side effect of commit d117466, was that the encrypted socket was
registered with epoll, unintentionally, on a connect failure.  A weird
thing happens because of this. The epoll notifier notifies
mgmt_rpc_notify() of a CONNECT event, instead of a DISCONNECT as
expected. This causes mgmt_rpc_notify() to attempt an unsuccessful
volfile fetch rpc request, and terminate.
(I still don't know why the epoll raises the CONNECT event)

Commit 46bd29e fixed some issues with IPv6 in GlusterFS. This caused
address resolution in GlusterFS to also request of IPv6 addresses
(AF_UNSPEC) instead of just IPv4. On most systems, this causes the
IPv6
addresses to be returned first.

GlusterD listens on 0.0.0.0:24007 by default. While this attaches to
all
interfaces, it only l

Re: [Gluster-Maintainers] [Gluster-devel] Another regression in release-3.7 and master

2016-04-07 Thread Kaushal M
On Thu, Apr 7, 2016 at 6:23 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Thu, Apr 7, 2016 at 6:00 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>
>>
>> On 04/07/2016 05:37 PM, Kaushal M wrote:
>>>
>>> On 7 Apr 2016 5:36 p.m., "Niels de Vos" <nde...@redhat.com
>>> <mailto:nde...@redhat.com>> wrote:
>>>>
>>>> On Thu, Apr 07, 2016 at 05:13:54PM +0530, Kaushal M wrote:
>>>> > On Thu, Apr 7, 2016 at 5:11 PM, Kaushal M <kshlms...@gmail.com
>>> <mailto:kshlms...@gmail.com>> wrote:
>>>> > > We've hit another regression.
>>>> > >
>>>> > > With management encryption enabled, daemons like NFS and SHD don't
>>>> > > start on the current heads of release-3.7 and master branches.
>>>> > >
>>>> > > I still have no clear root cause for it, and would appreciate some
>>> help.
>>>> >
>>>> > This was working with 3.7.9 from what I've heard.
>>>>
>>>> Do we have a simple test-case for this? If someone write a script, we
>>>> should be able to "git bisect" it pretty quickly.
>>>
>>> I am doing this right now.
>> "b33f3c9 glusterd: Bug fixes for IPv6 support" has caused this
>> regression. I am yet to find the RCA though.
>
> git-bisect agrees with this as well.
>
> I initially thought it was because GlusterD didn't listen on IPv6
> (checked using `ss`).
> This change makes it so that connections to localhost use ::1 instead
> of 127.0.0.1, and so the connection failed.
> This should have caused all connection attempts to fail, irrespective
> of it being encrypted or not.
> But the failure only happens when management encryption is enabled.
> So this theory doesn't make sense.

This is part of the problem!

The initial IPv6 connection to ::1 fails for non-encrypted connections as well.
But these connections correctly retry the connect with the next address
once the first connect attempt fails.
Since the next address is 127.0.0.1, the connection succeeds, the volfile
is fetched and the daemon starts.

Encrypted connections, on the other hand, give up after the first
failure and don't attempt a reconnect.
This is somewhat surprising to me, as I'd recently fixed an issue
which caused crashes when encrypted connections attempted a reconnect
after a failure to connect.

I'll diagnose this a little bit more and try to find a solution.
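
As an aside, the resolution order is easy to see on a host; getent
ahosts asks for both address families (like the AF_UNSPEC lookups the
IPv6 patch introduced) and prints the addresses in the order the
resolver returns them, with ::1 typically coming first:

    # Shows the resolved addresses for localhost in resolver order.
    getent ahosts localhost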

>
> One other thing was on my laptop, even bricks failed to start when
> glusterd was started with management encryption.
> But on a VM, the bricks started, but other daemons failed.
>
>>>
>>>>
>>>> Niels
>>>
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Another regression in release-3.7 and master

2016-04-07 Thread Kaushal M
On 7 Apr 2016 5:36 p.m., "Niels de Vos" <nde...@redhat.com> wrote:
>
> On Thu, Apr 07, 2016 at 05:13:54PM +0530, Kaushal M wrote:
> > On Thu, Apr 7, 2016 at 5:11 PM, Kaushal M <kshlms...@gmail.com> wrote:
> > > We've hit another regression.
> > >
> > > With management encryption enabled, daemons like NFS and SHD don't
> > > start on the current heads of release-3.7 and master branches.
> > >
> > > I still have no clear root cause for it, and would appreciate some
help.
> >
> > This was working with 3.7.9 from what I've heard.
>
> Do we have a simple test-case for this? If someone write a script, we
> should be able to "git bisect" it pretty quickly.

I am doing this right now.
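
For the record, the bisect boils down to something like this, assuming
a small reproducer script (repro.sh here, hypothetical) that exits
non-zero when the daemons fail to start with management encryption
enabled:

    # v3.7.9 is known good, the current release-3.7 head is known bad.
    git bisect start
    git bisect bad HEAD
    git bisect good v3.7.9
    git bisect run ./repro.sh
    git bisect reset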

>
> Niels
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [Gluster-devel] Another regression in release-3.7 and master

2016-04-07 Thread Kaushal M
I'd rather not take in new changes. Right now my priority is to get
this new regression fixed.

On Thu, Apr 7, 2016 at 5:21 PM, Ravishankar N <ravishan...@redhat.com> wrote:
> On 04/07/2016 05:11 PM, Kaushal M wrote:
>>
>> As earlier, please don't merge any more changes on the release-3.7
>> branch till this is fixed and 3.7.11 is released.
>
> http://review.gluster.org/#/c/13925/ (and its corresponding patch in master)
> has to be merged for 3.7.11. It fixes a performance issue in arbiter. The
> patch does not affect any other component (including non-arbiter replicate
> volumes).
>
> -Ravi
>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Another regression in release-3.7 and master

2016-04-07 Thread Kaushal M
On Thu, Apr 7, 2016 at 5:11 PM, Kaushal M <kshlms...@gmail.com> wrote:
> We've hit another regression.
>
> With management encryption enabled, daemons like NFS and SHD don't
> start on the current heads of release-3.7 and master branches.
>
> I still have no clear root cause for it, and would appreciate some help.

This was working with 3.7.9 from what I've heard.

>
> With another regression, 3.7.11 will be delayed a little while longer.
> As earlier, please don't merge any more changes on the release-3.7
> branch till this is fixed and 3.7.11 is released.
>
> Thanks,
> Kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Don't merge changes on release-3.7

2016-04-06 Thread Kaushal M
Hi All,

I'm tagging 3.7.11, so please don't merge any more changes on the
release-3.7 branch.

I'm currently on commit 458d4ba, which will be tagged as v3.7.11.

I'll notify once the tagging is done, and merging changes can be resumed.

Thanks,
Kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Stop merging changes - Not all smoke tests are reporting status to gerrit

2016-04-01 Thread Kaushal M
Hi All,

There has been a recent change which has caused failures to build
RPMs. This change was also unknowingly backported to release-3.7,
because the failures were not reported back to gerrit.

Rpmbuild results haven't been reported back to gerrit since we brought
in the new flags for voting. None of us seems to have noticed it since
then.

So I'll be taking some time to fix this issue. Till then please don't
merge any changes on any of the branches.

I'll update the list once this has been fixed.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.

2016-04-01 Thread Kaushal M
While I was waiting for https://review.gluster.org/13861/, a change I
didn't know about was merged, and it has broken RPM builds.

The offending change is 3d34c49  (cluster/ec: Rebalance hangs during
rename) by Ashish.
The same change had earlier also broken building RPMs on master.

For now, to proceed with 3.7.10, I'm going to revert the offending
change. Please make sure this change is merged in for the next
release.

~kaushal

On Thu, Mar 31, 2016 at 8:28 PM, Kotresh Hiremath Ravishankar
<khire...@redhat.com> wrote:
> Point noted, will keep informed from next time!
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
>> From: "Kaushal M" <kshlms...@gmail.com>
>> To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
>> Cc: "Aravinda" <avish...@redhat.com>, "Gluster Devel" 
>> <gluster-de...@gluster.org>, maintainers@gluster.org
>> Sent: Thursday, March 31, 2016 7:32:58 PM
>> Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be 
>> tagged at 2200PDT 30th March.
>>
>> This is a really hard to hit issue, that requires a lot of things to
>> be in place for it to happen.
>> But it is an unexpected data loss issue.
>>
>> I'll wait tonight for the change to be merged, though I really don't like it.
>>
>> You could have informed me on this thread earlier.
>> Please, in the future, keep release-managers/maintainers updated about
>> any critical changes.
>>
>> The only reason this is getting merged now, is because of the Jenkins
>> migration which got completed surprisingly quickly.
>>
>> On Thu, Mar 31, 2016 at 7:08 PM, Kotresh Hiremath Ravishankar
>> <khire...@redhat.com> wrote:
>> > Kaushal,
>> >
>> > I just replied to Aravinda's mail. Anyway pasting the snippet if someone
>> > misses that.
>> >
>> > "In the scenario mentioned by aravinda below, when an unlink comes on a
>> > entry, in changelog xlator, it's 'loc->pargfid'
>> > was getting modified to "/". So consequence is that , when it hits
>> > posix, the 'loc->pargfid' would be pointing
>> > to "/" instead of actual parent. This is not so terrible yet, as we are
>> > saved by posix. Posix checks
>> > for "loc->path" first, only if it's not filled, it will use
>> > "pargfid/bname" combination. So only for
>> > clients like self-heal who does not populate 'loc->path' and the same
>> > basename exists on root, the
>> > unlink happens on root instead of actual path."
>> >
>> > Thanks and Regards,
>> > Kotresh H R
>> >
>> > - Original Message -
>> >> From: "Kaushal M" <kshlms...@gmail.com>
>> >> To: "Aravinda" <avish...@redhat.com>
>> >> Cc: "Gluster Devel" <gluster-de...@gluster.org>, maintainers@gluster.org,
>> >> "Kotresh Hiremath Ravishankar"
>> >> <khire...@redhat.com>
>> >> Sent: Thursday, March 31, 2016 6:56:18 PM
>> >> Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be
>> >> tagged at 2200PDT 30th March.
>> >>
>> >> Kotresh, Could you please provide the details?
>> >>
>> >> On Thu, Mar 31, 2016 at 6:43 PM, Aravinda <avish...@redhat.com> wrote:
>> >> > Hi Kaushal,
>> >> >
>> >> > We have a Changelog bug which can lead to data loss if Glusterfind is
>> >> > enabled(To be specific,  when changelog.capture-del-path and
>> >> > changelog.changelog options enabled on a replica volume).
>> >> >
>> >> > http://review.gluster.org/#/c/13861/
>> >> >
>> >> > This is very corner case. but good to go with the release. We tried to
>> >> > merge
>> >> > this before the merge window for 3.7.10, but regressions not yet
>> >> > complete
>> >> > :(
>> >> >
>> >> > Do you think we should wait for this patch?
>> >> >
>> >> > @Kotresh can provide more details about this issue.
>> >> >
>> >> > regards
>> >> > Aravinda
>> >> >
>> >> >
>> >> > On 03/31/2016 01:29 PM, Kaushal M wrote:
>> >> >>
>> >> >> The last change for 3.7.10 has been merged now. Commit 2cd5b75 will be
>> &

Re: [Gluster-Maintainers] [Gluster-users] glusterfs-3.6.9 released

2016-03-09 Thread Kaushal M
Hey Johnny,

Could you please do the bugzilla cleanup for the 3.6.9 release (close
bugs, close tracker) and open the 3.6.10 tracker?
This hasn't been done yet, and should be performed when announcing a release.

Thanks.

On Fri, Mar 4, 2016 at 9:02 PM, FNU Raghavendra Manjunath
 wrote:
> Hi,
>
> glusterfs-3.6.9 has been released and the packages for RHEL/Fedora/Centos
> can be found here.
> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
>
> Requesting people running 3.6.x to please try it out and let us know if
> there are any issues.
>
> This release supposedly fixes the bugs listed below since 3.6.8 was made
> available. Thanks to all who submitted patches, reviewed the changes.
>
> 1302541 - Problem when enabling quota : Could not start quota auxiliary
> mount
> 1302310 - log improvements: enabling quota on a volume reports numerous
> entries of "contribution node list is empty which is an error" in brick logs
>
> 1308806 - tests : Modifying tests for crypt xlator
>
> 1304668 - Add missing release-notes on the 3.6 branch
> 1296931 - Installation of glusterfs-3.6.8 fails on CentOS-7
>
>
> Regards,
>
> Raghavendra Bhat
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Need merge access to Gluster repo

2016-03-09 Thread Kaushal M
I've added you to the maintainers lists now.

On Wed, Mar 9, 2016 at 11:15 AM, Vijaikumar Mallikarjuna
 wrote:
> Hi,
>
> I will be the maintainer for quota and marker component and the same is
> updated in the Maintainer's List.
> Could you please provide merge access to the Gluster repo?
>
> Thanks,
> Vijay
>
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Updating Maintainers Guide

2016-03-02 Thread Kaushal M
Hello maintainers!

If you didn't know, we have a maintainers guide at [1]. This document
describes what is expected from a GlusterFS maintainer, and is a quick
guide on maintainer-ship.

But this document is very light at the moment, and can be improved.
So, I'd like to know what other information can be added to the doc.
I'll start.

- Properly describe the responsibilities of the different types of
maintainers: sub-maintainers, release-maintainers, etc.
- Define the process for becoming a maintainer (Niels came up with
this, actually)

Please go through the document and reply with your opinions.

Thanks,
Kaushal

[1] 
https://gluster.readthedocs.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Need new manager for 3.7.9

2016-02-24 Thread Kaushal M
On Feb 25, 2016 6:18 AM, "Vijay Bellur" <vbel...@redhat.com> wrote:
>
> On 02/24/2016 07:45 PM, Vijay Bellur wrote:
>>
>> On 02/24/2016 08:50 AM, Raghavendra Talur wrote:
>>>
>>>
>>> On Feb 24, 2016 7:01 PM, "Vijay Bellur" <vbel...@redhat.com
>>> <mailto:vbel...@redhat.com>> wrote:
>>>  >
>>>  > On 02/24/2016 07:20 AM, Kaushal M wrote:
>>>  >>
>>>  >> Hi All.
>>>  >>
>>>  >> Raghavendra had volunteered to be the release manager last week. But
>>>  >> unfortunately, he cannot find enough time in the next couple of
weeks
>>>  >> (because of other work commitments) to perform the manager tasks.
>>>  >>
>>>  >> So we now need a volunteer to be the new release manager. Anyone?
>>>  >>
>>>  >> Keep in mind, 3.7.9 is targeted for 30th of February. So this
>>> requires
>>>  >> time and effort in the immediate two weeks.
>>>  >>
>>>  >
>>>  > You certainly intended to mean 30th of March, right? [1]
>>>
>>> Actually he meant February 29th.
>>>

I wasn't thinking when I wrote this 

>>
>> OK, are we releasing 3.7.9 in the first week of March and 3.7.10 by
>> March 30th to adhere to the release cadence?
>
>
> s/cadence/schedule/

Thanks for picking this up Vijay.

>
> -Vijay
>
>
>
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Need new manager for 3.7.9

2016-02-24 Thread Kaushal M
Hi All.

Raghavendra had volunteered to be the release manager last week. But
unfortunately, he cannot find enough time in the next couple of weeks
(because of other work commitments) to perform the manager tasks.

So we now need a volunteer to be the new release manager. Anyone?

Keep in mind, 3.7.9 is targeted for 30th of February. So this requires
time and effort in the immediate two weeks.

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-3.6.9 release plans

2016-02-24 Thread Kaushal M
Hey Raghu,

Now that the patch submission issues are solved, could you provide an
update on 3.6.9? We're past the 20th now.

~kaushal

On Wed, Feb 17, 2016 at 7:18 PM, Kaushal M <kshlms...@gmail.com> wrote:
> I'm online now. We can figure out what the problem is.
>
> On Feb 17, 2016 7:17 PM, "FNU Raghavendra Manjunath" <rab...@redhat.com>
> wrote:
>>
>> Hi, Kaushal,
>>
>> I have been trying to merge a few patches. But every time I try (i.e. do a
>> cherry pick in gerrit), a new patch set gets submitted. I need some help in
>> resolving it.
>>
>> Regards,
>> Raghavendra
>>
>>
>> On Wed, Feb 17, 2016 at 8:31 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>>
>>> Hey Johnny,
>>>
>>> Could you please provide an update on the 3.6.9 release plans?
>>>
>>> The GlusterFS release schedule has 3.6 releases happening every month
>>> on the week of the 20th. We're less than a week away to the release as
>>> per the schedule, so we'd like to know if it is still on track.
>>>
>>> Thanks.
>>>
>>> Kaushal
>>
>>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-3.6.9 release plans

2016-02-17 Thread Kaushal M
I'm online now. We can figure out what the problem is.
On Feb 17, 2016 7:17 PM, "FNU Raghavendra Manjunath" <rab...@redhat.com>
wrote:

> Hi, Kaushal,
>
> I have been trying to merge a few patches. But every time I try (i.e. do a
> cherry pick in gerrit), a new patch set gets submitted. I need some help in
> resolving it.
>
> Regards,
> Raghavendra
>
>
> On Wed, Feb 17, 2016 at 8:31 AM, Kaushal M <kshlms...@gmail.com> wrote:
>
>> Hey Johnny,
>>
>> Could you please provide an update on the 3.6.9 release plans?
>>
>> The GlusterFS release schedule has 3.6 releases happening every month
>> on the week of the 20th. We're less than a week away to the release as
>> per the schedule, so we'd like to know if it is still on track.
>>
>> Thanks.
>>
>> Kaushal
>>
>
>
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] glusterfs-3.6.9 release plans

2016-02-17 Thread Kaushal M
Hey Johnny,

Could you please provide an update on the 3.6.9 release plans?

The GlusterFS release schedule has 3.6 releases happening every month
on the week of the 20th. We're less than a week away to the release as
per the schedule, so we'd like to know if it is still on track.

Thanks.

Kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Requesting separate labels in Gerrit for better testing results

2016-01-18 Thread Kaushal M
On Mon, Jan 18, 2016 at 1:03 PM, Raghavendra Talur <rta...@redhat.com> wrote:
>
>
> On Fri, Jan 15, 2016 at 4:22 PM, Niels de Vos <nde...@redhat.com> wrote:
>>
>> On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
>> > I'd pushed the config to a new branch instead of updating the
>> > `refs/meta/config` branch. I've corrected this now.
>> >
>> > The 3 new labels are,
>> > - Smoke
>> > - CentOS-regression
>> > - NetBSD-regression
>> >
>> > The new labels are active now. Changes cannot be merged without all of
>> > them being +1. Only the bot accounts (Gluster Build System and NetBSD
>> > Build System) can set them.
>
>
> Thanks Kaushal !
>
>>
>>
>> It seems that Verified is also a label that is required. Because this is
>> now the label for manual testing by reviewers/qa, I do not think it
>> should be a requirement anymore.
>>
>> Could the labels that are needed for merging be setup like this?
>>
>>   Code-Review=+2 && (Verified=+1 || (Smoke=+1 && CentOS-regression=+1 &&
>> NetBSD-regression=+1))
>
>
> I would prefer not having Verified=+1 here. A dev should not be allowed to
> override the restrictions.

I've made the Verified flag a `NoBlock` flag. Changes are now
merge-able only with (Code-Review+2 && Smoke+1 && CentOS-regression+1
&& NetBSD-regression+1).
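
For reference, the label definitions in project.config on refs/meta/config
look roughly like this (an illustrative sketch, not a verbatim copy of the
gluster config):

  [label "Verified"]
      function = NoBlock          # recorded, but does not gate submission
      value = -1 Fails
      value =  0 No score
      value = +1 Verified

  [label "CentOS-regression"]
      function = MaxWithBlock     # +1 required to submit, -1 blocks
      value = -1 Fails
      value =  0 No score
      value = +1 Passes

Smoke and NetBSD-regression are defined the same way as CentOS-regression.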

>
>>
>>
>> I managed to get http://review.gluster.org/13208 merged now, please
>> check if the added tags in the commit message are ok, or need to get
>> modified.
>>
>> Thanks,
>> Niels
>>
>>
>> >
>> > On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> > > On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos <nde...@redhat.com>
>> > > wrote:
>> > >> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>> > >>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com>
>> > >>> wrote:
>> > >>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>> > >>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee
>> > >>> >> <atin.mukherje...@gmail.com>
>> > >>> >> wrote:
>> > >>> >>
>> > >>> >> > -Atin
>> > >>> >> > Sent from one plus one
>> > >>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com>
>> > >>> >> > wrote:
>> > >>> >> > >
>> > >>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur
>> > >>> >> > > wrote:
>> > >>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>> > >>> >> > > >
>> > >>> >> > > > 1. Developer works on a new feature/bug fix and tests it
>> > >>> >> > > > locally(run
>> > >>> >> > > > run-tests.sh completely).
>> > >>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>> > >>> >> > > >
>> > >>> >> > > > +++Note that no regression runs have started automatically
>> > >>> >> > > > for this
>> > >>> >> > patch
>> > >>> >> > > > at this point.+++
>> > >>> >> > > >
>> > >>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a
>> > >>> >> > > > promise of
>> > >>> >> > > > having tested the patch completely. For cases where patches
>> > >>> >> > > > don't have
>> > >>> >> > a +1
>> > >>> >> > > > verified from the developer, maintainer has the following
>> > >>> >> > > > options
>> > >>> >> > > > a. just do the code-review and award a +2 code review.
>> > >>> >> > > > b. pull the patch locally and test completely and award a
>> > >>> >> > > > +1 verified.
>> > >>> >> > > > Both the above actions would result in triggering of
>> > >>> >> > > > regression runs
>> > >>>

Re: [Gluster-Maintainers] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Kaushal M
On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com> wrote:
>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
>> >> <atin.mukherje...@gmail.com>
>> >> wrote:
>> >>
>> >> > -Atin
>> >> > Sent from one plus one
>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
>> >> > >
>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>> >> > > >
>> >> > > > 1. Developer works on a new feature/bug fix and tests it locally(run
>> >> > > > run-tests.sh completely).
>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>> >> > > >
>> >> > > > +++Note that no regression runs have started automatically for this
>> >> > patch
>> >> > > > at this point.+++
>> >> > > >
>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise 
>> >> > > > of
>> >> > > > having tested the patch completely. For cases where patches don't 
>> >> > > > have
>> >> > a +1
>> >> > > > verified from the developer, maintainer has the following options
>> >> > > > a. just do the code-review and award a +2 code review.
>> >> > > > b. pull the patch locally and test completely and award a +1 
>> >> > > > verified.
>> >> > > > Both the above actions would result in triggering of regression runs
>> >> > for
>> >> > > > the patch.
>> >> > >
>> >> > > Would it not help if anyone giving +1 code-review starts the 
>> >> > > regression
>> >> > > tests too? When developers ask me to review, I prefer to see reviews
>> >> > > done by others first, and any regression failures should have been 
>> >> > > fixed
>> >> > > by the time I look at the change.
>> >> > When this idea was originated (long back) I was in favour of having
>> >> > regression triggered on a +1, however verified flag set by the developer
>> >> > would still trigger the regression. Being a maintainer I would always
>> >> > prefer to look at a patch when its verified  flag is +1 which means the
>> >> > regression result would also be available.
>> >> >
>> >>
>> >>
>> >> Niels requested in IRC that it is good have a mechanism of getting all
>> >> patches that have already passed all regressions before starting review.
>> >> Here is what I found
>> >> a. You can use the search string
>> >> status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build
>> >> b. You can bookmark this link and it will take you directly to the page
>> >> with list of such patches.
>> >>
>> >> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
>> >
>> > Hmm, copy/pasting this URL does not work for me, I get an error:
>> >
>> > Code Review - Error
>> > line 1:26 no viable alternative at character '%'
>> > [Continue]
>> >
>> >
>> > Kaushal, could you add the following labels to gerrit, so that we can
>> > update the Jenkins jobs and they can start setting their own labels?
>> >
>> > http://review.gluster.org/Documentation/config-labels.html#label_custom
>> >
>> > - Smoke: misc smoke testing, compile, bug check, posix, ..
>> > - NetBSD: NetBSD-7 regression
>> > - Linux: Linux regression on CentOS-6
>>
>> I added these labels to the gluster projects' project.config, but they
>> don't seem to be showing up. I'll check once more when I get back
>> home.
>
> Might need a restart/reload of Gerrit? It seems required for the main
> gerrit.config file too:
>
>   
> http://review.gluster.org/Documentation/config-gerrit.html#_file_code_etc_gerrit_config_code

I was using Chromium and did a restart. Neither helped. I'll try again.
>
> Niels
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] glusterfs-3.6.7 released

2015-11-25 Thread Kaushal M
Did we just have 3 releases of 3.6.7?

On Wed, Nov 25, 2015 at 5:49 PM, Gluster Build System
 wrote:
>
>
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.6.7.tar.gz
>
> This release is made off jenkins-release-147
>
> -- Gluster Build System
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] Automatic triggering broken on Jenkins

2015-11-21 Thread Kaushal M
On Sat, Nov 21, 2015 at 4:45 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Fri, Nov 20, 2015 at 10:13 PM, Vijay Bellur <vbel...@redhat.com> wrote:
>>
>>
>> - Original Message -
>>> From: "Kaushal M" <kshlms...@gmail.com>
>>> To: "Niels de Vos" <nde...@redhat.com>
>>> Cc: maintainers@gluster.org
>>> Sent: Thursday, November 19, 2015 6:14:59 AM
>>> Subject: Re: [Gluster-Maintainers] Automatic triggering broken on Jenkins
>>>
>>> On Thu, Nov 19, 2015 at 4:34 PM, Niels de Vos <nde...@redhat.com> wrote:
>>> > On Thu, Nov 19, 2015 at 03:54:54PM +0530, Kaushal M wrote:
>>> >> The gerrit-trigger plugin which automatically triggered jobs is
>>> >> temporarily not working. We are trying to find out why and will fix it
>>> >> soon.
>>> >>
>>
>> What is the latest on this problem? Has the root cause been established? If 
>> not, I can
>> also start looking into this.
>>
>
> Still no idea why this is broken. The listener which maintains a live
> connection to gerrit for events isn't starting. Except for this,
> everything else with the gerrit-trigger plugin is working. There is
> nothing that I could find in either jenkins' or gerrit's logs.
>
> I want to try restarting Jenkins once, and also want to try updating
> the plugins.

I've updated the plugins and scheduled a restart. Jenkins should
restart automatically after the running jobs finish. We have two
regression jobs running, which have been going for ~2 and ~3 hours.
>
>> Thanks!
>> Vijay
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Automatic triggering broken on Jenkins

2015-11-19 Thread Kaushal M
The gerrit-trigger plugin, which automatically triggers jobs, is
temporarily not working. We are trying to find out why and will fix it
soon.

In the meantime, maintainers who need tests to run immediately can
trigger jobs manually. To manually trigger a job:
1. Get the REFSPEC of the change from Gerrit. This will be available
from the review page for a change, and will be in the format
`refs/changes/42/11342/10`. You can get it easily from the `Download`
menu in the top-right of the page.

2. Go to the job page for the test you want on jenkins and click
`Build with parameters`. You should land on a page with a URL similar
to
`https://build.gluster.org/job/rackspace-regression-2GB-triggered/build?delay=0sec`

3. Enter the REFSPEC for the change into the box and click build.
Jenkins will then schedule the job, run it, and report the status back
to gerrit.
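
If you prefer the command line, the same can be done through Jenkins'
remote API. This assumes the job takes the refspec as a build parameter
named GERRIT_REFSPEC (check the job configuration) and that you have an
API token:

  # GERRIT_REFSPEC is assumed to be the job's refspec parameter name.
  curl -X POST --user "<username>:<api-token>" \
    "https://build.gluster.org/job/rackspace-regression-2GB-triggered/buildWithParameters" \
    --data "GERRIT_REFSPEC=refs/changes/42/11342/10"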

~kaushal
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] I'll be mostly unavailable till beginning of october

2015-09-06 Thread Kaushal M
Congrats Xavi!

On Sun, Sep 6, 2015 at 5:07 PM, Atin Mukherjee  wrote:
> Congrats Xavi!!
>
> ~Atin
>
> On 09/04/2015 06:56 PM, Xavier Hernandez wrote:
>> Hi all,
>>
>> I've recently become a father and the child keeps me quite busy, so I
>> won't be much available until I return to the office (first/second week
>> of october).
>>
>> I'll read emails as often as I can till october, but I cannot promise
>> anything. If there's anything important related to ec, I'll answer as
>> soon as possible, but it would be better if you also CC Pranith.
>>
>> Thanks,
>>
>> Xavi
>>
>> ___
>> maintainers mailing list
>> maintainers@gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
> ___
> maintainers mailing list
> maintainers@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
___
maintainers mailing list
maintainers@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers