On Thu, 8 Mar 2018 at 11:43, Kaushal M <kshlms...@gmail.com> wrote:

> On Thu, Mar 8, 2018 at 10:21 AM, Amar Tumballi <atumb...@redhat.com>
> wrote:
> > Meeting date: 03/07/2018 (March 7th, 2018, 19:30 IST, 14:00 UTC, 09:00 EST)
> >
> > BJ Link
> >
> > Bridge: https://bluejeans.com/205933580
> > Download : https://bluejeans.com/s/mOGb7
> >
> > Attendance
> >
> > [Sorry Note] : Atin (conflicting meeting), Michael Adam, Amye, Niels de
> > Vos,
> > Amar, Nigel, Jeff, Shyam, Kaleb, Kotresh
> >
> > Agenda
> >
> > AI from previous meeting:
> >
> > Email on version numbers: Still pending - Amar/Shyam
> >
> > Planning to do this by Friday (9th March)
> >
> > Can we run the regression suite with GlusterD2?
> >
> > OK with failures, but can we run?
> > Nigel to run tests and give outputs
>
> Apologies for not attending this meeting.
>
> I can help get this up and running.
>
> But I also wanted to set up a smoke job to run GD2 CI against glusterfs
> patches.
> This will help us catch changes that adversely affect GD2, in
> particular changes to the option_t and xlator_api_t structs.
> It will not be a particularly long test to run. On average the current
> GD2 centos-ci jobs finish in under 4 minutes.
> I expect that building glusterfs will add about 5 minutes more.
> This job should be simple enough to get set up, and I'd like it if we
> can set this up first.


+1, this is definitely needed going forward.
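
For reference, here is a rough sketch of what such a smoke script could look
like. Everything in it (the repository layout, build steps, and test targets)
is an assumption for illustration, not the actual centos-ci job definition:

#!/usr/bin/env python
# Rough sketch of a GD2-against-glusterfs smoke job (illustration only).
# All paths, repositories and commands below are assumptions; the real
# job would live in centos-ci.

import subprocess
import sys

def run(cmd, cwd=None):
    """Run a command, echo it, and fail the job on a non-zero exit."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd, cwd=cwd)

def main():
    # 1. Build and install the glusterfs patch under test (assumes the
    #    patched glusterfs tree is already checked out in ./glusterfs).
    run(["./autogen.sh"], cwd="glusterfs")
    run(["./configure"], cwd="glusterfs")
    run(["make", "-j4"], cwd="glusterfs")
    run(["make", "install"], cwd="glusterfs")

    # 2. Fetch GD2 and run its tests against the freshly built glusterfs,
    #    which is where breakage in option_t / xlator_api_t would show up.
    run(["git", "clone", "https://github.com/gluster/glusterd2"])
    run(["make"], cwd="glusterd2")
    run(["make", "test"], cwd="glusterd2")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(err.returncode)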


>
> >
> > Line coverage tests:
> >
> > SIGKILL was sent to processes, so the output was not proper.
> > Patch available, Nigel to test with the patch and give output before
> > merging.
> > [Nigel] what happens with GD2 ?
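
(Side note on why the SIGKILL matters: gcov only writes its coverage data when
a process exits cleanly, so anything killed outright loses its counters. Below
is a small illustration of the idea of stopping the daemons gracefully first
and only falling back to SIGKILL; it is not the actual patch, and the daemon
list and timeout are assumptions.)

# Illustration only, not the actual glusterfs patch: stop the
# coverage-instrumented daemons with SIGTERM so their gcov data can be
# flushed, and use SIGKILL only as a last resort.

import subprocess
import time

# Hypothetical list of daemons started by the regression run.
DAEMONS = ["glusterd", "glusterfsd", "glusterfs"]

def stop_gracefully(name, timeout=10):
    # Ask politely first; a daemon that handles SIGTERM and exits
    # cleanly gets to write out its coverage counters.
    subprocess.call(["pkill", "-TERM", "-x", name])
    for _ in range(timeout):
        if subprocess.call(["pgrep", "-x", name],
                           stdout=subprocess.DEVNULL) != 0:
            return  # all instances exited on their own
        time.sleep(1)
    # Last resort: SIGKILL, accepting that coverage data is lost.
    subprocess.call(["pkill", "-KILL", "-x", name])

for daemon in DAEMONS:
    stop_gracefully(daemon)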
> >
> > [Shyam] https://github.com/gojp/goreportcard
> > [Shyam] (what I know)
> > https://goreportcard.com/report/github.com/gluster/glusterd2
> >
> > Gluster 4.0 is tagged:
> >
> > Retrospective meeting: Can this be a Google form?
> >
> > It usually is; let me find and paste the older ones:
> >
> > 3.10 retro:
> > http://lists.gluster.org/pipermail/gluster-users/2017-February/030127.html
> > 3.11 retro: https://www.gluster.org/3-11-retrospectives/
> >
> > [Nigel] Can we make it less of a form, and keep it more generic?
> > [Shyam] That's mostly what the form tries to do. Prefer meeting & form
> >
> > Gluster Infra team is testing the distributed testing framework
> > contributed by FB
> >
> > [Nigel] Any issues, would like to collaborate
> > [Jeff] Happy to collaborate, let me know.
> >
> > Call out for features on 4-next
> >
> > Should the next release be an LTM release, 4.1, with the version number
> > change proposal picked up later?
> >
> > Bugzilla Automation:
> >
> > Planning to test it out next week.
> > AI: send the email first, and target taking the patches in before the
> > next maintainers meeting.
> >
> > Round Table
> >
> > [Kaleb] space is tight on download.gluster.org
> > * may we delete, e.g., the purpleidea files? the experimental stuff
> > (freebsd stuff from 2014)?
> > * any way to get more space?
> > * [Nigel] Should be possible to do it, file a bug
> > * AI: Kaleb to file a bug
> >
> > Yesterday I noticed that some files (…/3.12/3.12.2/Debian/…) were not
> > owned by root:root. They were owned by rsync_aide:rsync_aide. Was there
> > an aborted rsync job or something that left them like that?
> >
> > Most glusterfs 4.0 packages are on download.g.o now. Starting on the gd2
> > packages now.
> >
> > el7 packages are on the buildroot if someone (shyam?) wants to get a
> > head start on testing them
> >
> > [Nigel] Testing IPv6 (with IPv4 enabled too); only 4 tests are
> > consistently failing. Need to look at them.
> >
> >
> >
> > --
> > Amar Tumballi (amarts)
> >
> > _______________________________________________
> > maintainers mailing list
> > maintainers@gluster.org
> > http://lists.gluster.org/mailman/listinfo/maintainers
> >
> _______________________________________________
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel

-- 
--Atin
_______________________________________________
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers
