Re: [Gluster-Maintainers] Proposal to make Mohit and Sanju as cli/glusterd maintainer

2019-10-22 Thread Atin Mukherjee
On Tue, Oct 22, 2019 at 1:40 PM Niels de Vos  wrote:

> On Tue, Oct 22, 2019 at 01:22:33PM +0530, Atin Mukherjee wrote:
> > I’d like to propose Mohit Agrawal & Sanju Rakonde to be made CLI/Glusterd
> > maintainers. If there’s no objection to this proposal, can we please get
> > this done by the end of this month?
>
> +1 from me.
>
> Please send a patch for the MAINTAINERS file and have the current
> maintainers approve that together with Mohit and Sanju for their
> acceptance.
>

https://review.gluster.org/#/c/glusterfs/+/23601/


> In case they are not aware yet, have them go through
>
> https://docs.gluster.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
> and subscribe to this list (should probably be part of that doc).
>
> Thanks,
> Niels
>
___
maintainers mailing list
maintainers@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers


[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #4767

2019-10-22 Thread jenkins
See 


Changes:

[Kotresh H R] geo-rep: Fix Permission denied traceback on non root setup

[Raghavendra G] rpc: Synchronize slot allocation code


--
[...truncated 3.75 MB...]
./tests/bugs/gfapi/bug-1032894.t  -  10 second
./tests/bugs/fuse/bug-985074.t  -  10 second
./tests/bugs/distribute/bug-1088231.t  -  10 second
./tests/bugs/distribute/bug-1086228.t  -  10 second
./tests/bugs/core/bug-1699025-brick-mux-detach-brick-fd-issue.t  -  10 second
./tests/bugs/cli/bug-1030580.t  -  10 second
./tests/bugs/cli/bug-1022905.t  -  10 second
./tests/bugs/bug-1702299.t  -  10 second
./tests/bugs/bug-1371806_2.t  -  10 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  10 second
./tests/basic/quota-nfs.t  -  10 second
./tests/basic/glusterd/check-cloudsync-ancestry.t  -  10 second
./tests/basic/distribute/file-create.t  -  10 second
./tests/basic/ctime/ctime-rep-heal.t  -  10 second
./tests/basic/afr/stale-file-lookup.t  -  10 second
./tests/basic/afr/root-squash-self-heal.t  -  10 second
./tests/line-coverage/cli-peer-and-volume-operations.t  -  9 second
./tests/bugs/upcall/bug-1422776.t  -  9 second
./tests/bugs/transport/bug-873367.t  -  9 second
./tests/bugs/shard/bug-1259651.t  -  9 second
./tests/bugs/replicate/bug-1139230.t  -  9 second
./tests/bugs/readdir-ahead/bug-1446516.t  -  9 second
./tests/bugs/posix/bug-990028.t  -  9 second
./tests/bugs/posix/bug-1122028.t  -  9 second
./tests/bugs/glusterfs/bug-872923.t  -  9 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  9 second
./tests/bugs/fuse/bug-983477.t  -  9 second
./tests/bugs/distribute/bug-912564.t  -  9 second
./tests/bugs/changelog/bug-1321955.t  -  9 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  9 second
./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t  -  9 second
./tests/bitrot/br-stub.t  -  9 second
./tests/basic/ec/nfs.t  -  9 second
./tests/basic/distribute/file-rename.t  -  9 second
./tests/basic/ctime/ctime-ec-heal.t  -  9 second
./tests/basic/changelog/changelog-rename.t  -  9 second
./tests/basic/afr/ta-read.t  -  9 second
./tests/basic/afr/gfid-heal.t  -  9 second
./tests/gfid2path/block-mount-access.t  -  8 second
./tests/features/readdir-ahead.t  -  8 second
./tests/bugs/snapshot/bug-1064768.t  -  8 second
./tests/bugs/shard/shard-inode-refcount-test.t  -  8 second
./tests/bugs/shard/bug-1342298.t  -  8 second
./tests/bugs/shard/bug-1258334.t  -  8 second
./tests/bugs/shard/bug-1256580.t  -  8 second
./tests/bugs/replicate/bug-986905.t  -  8 second
./tests/bugs/replicate/bug-1686568-send-truncate-on-arbiter-from-shd.t  -  8 second
./tests/bugs/replicate/bug-1626994-info-split-brain.t  -  8 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  8 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  8 second
./tests/bugs/replicate/bug-1132102.t  -  8 second
./tests/bugs/readdir-ahead/bug-1439640.t  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/protocol/bug-1321578.t  -  8 second
./tests/bugs/posix/bug-gfid-path.t  -  8 second
./tests/bugs/io-stats/bug-1598548.t  -  8 second
./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
./tests/bugs/glusterfs/bug-848251.t  -  8 second
./tests/bugs/ec/bug-1227869.t  -  8 second
./tests/bugs/distribute/bug-884597.t  -  8 second
./tests/bugs/distribute/bug-882278.t  -  8 second
./tests/bugs/distribute/bug-1368012.t  -  8 second
./tests/bugs/core/io-stats-1322825.t  -  8 second
./tests/bugs/core/bug-1168803-snapd-option-validation-fix.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bug-1258069.t  -  8 second
./tests/bugs/bitrot/bug-1210684-scrub-pause-resume-error-handling.t  -  8 second
./tests/bugs/bitrot/1209752-volume-status-should-show-bitrot-scrub-info.t  -  8 second
./tests/basic/xlator-pass-through-sanity.t  -  8 second
./tests/basic/md-cache/bug-1317785.t  -  8 second
./tests/basic/gfapi/libgfapi-fini-hang.t  -  8 second
./tests/basic/fencing/fencing-crash-conistency.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/basic/distribute/bug-1265677-use-readdirp.t  -  8 second
./tests/basic/afr/ta-write-on-bad-brick.t  -  8 second
./tests/bugs/upcall/bug-1394131.t  -  7 second
./tests/bugs/shard/bug-1272986.t  -  7 second
./tests/bugs/replicate/bug-1325792.t  -  7 second
./tests/bugs/posix/bug-765380.t  -  7 second
./tests/bugs/posix/bug-1175711.t  -  7 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  7 second
./tests/bugs/md-cache/bug-1726205.t  -  7 second
./tests/bugs/io-cache/bug-858242.t  -  7 second
./tests/bugs/glusterfs/bug-895235.t  -  7 second
./tests/bugs/glusterfs/bug-1482528.t  -  7 second
./tests/bugs/ec/bug-1161621.t  -  7 second
./tests/bugs/core/bug-913544.t  -  7 second
./tests/bugs/cli/bug-1087487.t  -  7 second
./tests/bugs/access-control/bug-1051896.t  -  7 second

[Gluster-Maintainers] Jenkins build is back to normal : regression-test-with-multiplex #1532

2019-10-22 Thread jenkins
See 




Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-10-22 Thread Michael Scherer
On Tuesday, 22 October 2019 at 13:17 +0530, Amar Tumballi wrote:
> Thanks for the email Misc. My reasons inline.
> 
> On Mon, Oct 21, 2019 at 4:44 PM Michael Scherer 
> wrote:
> 
> > On Monday, 14 October 2019 at 16:44 +0530, Michael Scherer wrote, quoting Amar Tumballi:
> > > On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos, 
> > > wrote:
> > > 
> > > > On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> > > > > Any thoughts on this?
> > > > > 
> > > > I tried a basic .travis.yml for the unified glusterfs repo I am
> > > > maintaining, and it is good enough for getting most of the tests.
> > > > Considering we are very close to the glusterfs-7.0 release, it is
> > > > good to time this after the 7.0 release.
> > > > 
> > > > Is there a reason to move to Travis? GitHub does offer
> > > > integration
> > > > with
> > > > Jenkins, so we should be able to keep using our existing CI, I
> > > > think?
> > > > 
> > > 
> > > Yes, that's true. I tried Travis because I don't have a complete
> > > idea of the Jenkins infra, and trying Travis needed just basic
> > > permissions from me on the repo (it was tried on my personal repo).
> > 
> > Travis is limited to 1 builder per project on the free tier. So since
> > the regression tests last about 4h, I am not sure exactly what the
> > plan is there.
> > 
> > 
> 
> We can't regress from our current testing coverage when we migrate.
> So my take is that we should start by using the existing Jenkins from
> GitHub, and eventually see if there are better options, or else at
> least remain with this CI.

Ansible used Travis first, and we had a lot of issues after a while. I
think that if we want the same amount of CI as now, we would need around
5 to 6 VMs at minimum for an average workload, and a bit more around
release time. Assuming we start to have more activity (and/or more
coverage), we would need to scale up to more VMs too.

Plus, given glusterfs's nature, I am not sure we should use a CI where we
do not control the underlying kernel or infra. We still have some issues
with I/O on AWS that break tests, so I can't imagine how it would be with
a random kernel :/


> > Now, on the whole migration stuff, I do have a few questions:
> > 
> > - what will happen to the history of the project (i.e., the old
> > review.gluster.org server)? I would be in favor of dropping it if we
> > move out, but then we would lose all the information there (the
> > review content itself).
> > 
> > 
> 
> I would like to see it hosted somewhere (ideally at the same URL).
> But depending on sponsorship for the hosting charges, if we had to
> decide to shut the service down, my take is that we can make the DB
> content available for public download. Happy to provide a 'how to view
> patches' guide so anyone can set up Gerrit locally and see the details.

Wouldn't it be a problem to dump emails, etc. in a DB like this? GDPR
compliance comes to mind. (I do not think that's a problem, but I would
really prefer to have a lawyer's opinion first, since I would be legally
responsible in my country, based on my memories of law courses at
university.)

Plus, let's be honest with ourselves: nobody is going to go through the
hassle of setting up an old version of Gerrit just to read a review.

We could try to do a html mirror using wget, but I have no idea how
long it would take, nor how complete it would be in practice.

And yes, same URL is doable.

Then what about the other projects using Gerrit; shouldn't they be moved
before glusterfs?

> > - what happens to existing proposed patches, do they need to be
> > migrated one by one (and if so, who is going to script that part)?
> >
> >
> 
> I checked that we have < 50 patches active on the master branch, and
> other than Yaniv, no one has more than 5 patches active in the review
> queue. So I propose people take up their own patches and post them to
> GitHub. For those who are not willing to do that extra work, or no
> longer active in the project, I am happy to help them migrate the
> patch to a PR.

How hard would it be to have GitHub and Gerrit side by side for a time?
I am wary of any huge move and prefer incremental steps.

> 
> 
> > - can we, while we are on it, force 2FA for the whole org on github
> > ?
> > before, I didn't push too hard because this wasn't critical, but if
> > there is a migration, that would be much more important.
> > 
> > 
> 
> Yes. I believe that is totally fine, specifically for those who are
> admins of the org, and those who can merge.

I think we can't force it per team, just for the whole org (I may be
wrong; it happens, but not that often).

> 
> > - what is the plan to enforce the various policies?
> > (like the fact that commits need to be signed, in a DCO-like
> > fashion, or deciding who can merge, who can give +2, and how we
> > trigger builds only when someone has said "this is verified")
> > 
> > 
> 
> About people, two options IMO:
> 1. Provide access to same set 

Re: [Gluster-Maintainers] Proposal to make Mohit and Sanju as cli/glusterd maintainer

2019-10-22 Thread Niels de Vos
On Tue, Oct 22, 2019 at 01:22:33PM +0530, Atin Mukherjee wrote:
> I’d like to propose Mohit Agrawal & Sanju Rakonde to be made CLI/Glusterd
> maintainers. If there’s no objection to this proposal, can we please get
> this done by the end of this month?

+1 from me.

Please send a patch for the MAINTAINERS file and have the current
maintainers approve that together with Mohit and Sanju for their
acceptance.

In case they are not aware yet, have them go through
https://docs.gluster.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
and subscribe to this list (should probably be part of that doc).

Thanks,
Niels



[Gluster-Maintainers] Proposal to make Mohit and Sanju as cli/glusterd maintainer

2019-10-22 Thread Atin Mukherjee
I’d like to propose Mohit Agrawal & Sanju Rakonde to be made CLI/Glusterd
maintainers. If there’s no objection to this proposal, can we please get
this done by the end of this month?
-- 
- Atin (atinm)


Re: [Gluster-Maintainers] [Gluster-devel] Proposal: move glusterfs development to github workflow, completely

2019-10-22 Thread Amar Tumballi
Thanks for the email Misc. My reasons inline.

On Mon, Oct 21, 2019 at 4:44 PM Michael Scherer  wrote:

> On Monday, 14 October 2019 at 20:30 +0530, Amar Tumballi wrote:
> > On Mon, 14 Oct, 2019, 5:37 PM Niels de Vos, 
> > wrote:
> >
> > > On Mon, Oct 14, 2019 at 03:52:30PM +0530, Amar Tumballi wrote:
> > > > Any thoughts on this?
> > > >
> > > > I tried a basic .travis.yml for the unified glusterfs repo I am
> > > > maintaining, and it is good enough for getting most of the tests.
> > > > Considering we are very close to the glusterfs-7.0 release, it is
> > > > good to time this after the 7.0 release.
> > >
> > > Is there a reason to move to Travis? GitHub does offer integration
> > > with
> > > Jenkins, so we should be able to keep using our existing CI, I
> > > think?
> > >
> >
> > Yes, that's true. I tried Travis because I don't have a complete idea
> > of the Jenkins infra, and trying Travis needed just basic permissions
> > from me on the repo (it was tried on my personal repo).
>
> Travis is limited to 1 builder per project on the free tier. So since
> the regression tests last about 4h, I am not sure exactly what the
> plan is there.
>
>
We can't regress from our current testing coverage when we migrate. So my
take is that we should start by using the existing Jenkins from GitHub,
and eventually see if there are better options, or else at least remain
with this CI.


> Now, on the whole migration stuff, I do have a few questions:
>
> - what will happen to the history of the project (i.e., the old
> review.gluster.org server)? I would be in favor of dropping it if we
> move out, but then we would lose all the information there (the review
> content itself).
>
>
I would like to see it hosted somewhere (ideally at the same URL).

But depending on sponsorship for the hosting charges, if we had to decide
to shut the service down, my take is that we can make the DB content
available for public download. Happy to provide a 'how to view patches'
guide so anyone can set up Gerrit locally and see the details.

> - what happens to existing proposed patches, do they need to be migrated
> one by one (and if so, who is going to script that part)?
>
>
I checked that we have < 50 patches active on the master branch, and other
than Yaniv, no one has more than 5 patches active in the review queue. So
I propose people take up their own patches and post them to GitHub. For
those who are not willing to do that extra work, or no longer active in
the project, I am happy to help them migrate the patch to a PR.
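For anyone scripting that migration, a minimal sketch of the first step, listing open changes: Gerrit's REST API (e.g. GET /changes/?q=status:open) prefixes every JSON response with ")]}'" to prevent XSSI, so that prefix must be stripped before parsing. The sample payload below is hand-written for illustration (only the change number 23601 comes from this thread); a real script would fetch from review.gluster.org instead.

```python
import json

def parse_gerrit_changes(raw: str):
    """Strip Gerrit's ")]}'" XSSI prefix and parse the JSON body."""
    prefix = ")]}'"
    if raw.startswith(prefix):
        raw = raw[len(prefix):]
    return json.loads(raw)

# Hand-written sample shaped like GET /changes/?q=status:open output;
# the subject line is illustrative, not a real review title.
sample = """)]}'
[{"_number": 23601, "subject": "MAINTAINERS: add new maintainers", "status": "NEW"}]
"""

for change in parse_gerrit_changes(sample):
    print(f"{change['_number']}: {change['subject']} ({change['status']})")
```

Each change listed this way could then be fetched as a patch and pushed as a GitHub PR branch.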



> - can we, while we are on it, force 2FA for the whole org on github ?
> before, I didn't push too hard because this wasn't critical, but if
> there is a migration, that would be much more important.
>
>
Yes. I believe that is totally fine, especially for those who are admins
of the org, and those who can merge.
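For checking that setting, GitHub's REST API exposes a two_factor_requirement_enabled field on GET /orgs/{org} (visible only to org owners). The sketch below works on a hand-written sample dict rather than a live API response; the org name and values are assumptions for illustration.

```python
def two_factor_enforced(org: dict) -> bool:
    """Report whether 2FA is required org-wide, given the JSON object
    GitHub returns for GET /orgs/{org}."""
    value = org.get("two_factor_requirement_enabled")
    # GitHub returns null for this field when the caller is not an org owner.
    if value is None:
        raise PermissionError("need org-owner credentials to read the 2FA setting")
    return bool(value)

# Illustrative sample, not a real API response.
sample_org = {"login": "gluster", "two_factor_requirement_enabled": False}
print(two_factor_enforced(sample_org))  # False
```

A real check would fetch the org JSON with an authenticated request first; enforcing 2FA is then a settings change (or a PATCH on the same endpoint) by an org owner.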


> - what is the plan to enforce the various policies?
> (like the fact that commits need to be signed, in a DCO-like fashion, or
> deciding who can merge, who can give +2, and how we trigger builds only
> when someone has said "this is verified")
>
>
About people, two options IMO:
1. Provide access to the same set of people who have access in Gerrit.
2. Or look at the activity list over the last year, and grant access to
those who have actually reviewed AND merged patches.

About policies on how to trigger builds and merge, I prefer to use tools
like mergify.io, which is used by many open source projects, including
friends at the Ceph project. That way, no human presses merge; patches
are merged based on policy.

About which strings/commands to use for triggering builds (/run smoke,
/run regression, etc.), I am happy to work with someone to get this done.
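As a sketch of what a DCO-style gate could check (the exact rule any particular bot applies may differ), each commit message must end with a "Signed-off-by: Name <email>" trailer, which `git commit -s` appends automatically. The regex here is a plausible approximation, and the sample messages are invented:

```python
import re

# Matches a "Signed-off-by: Name <email>" trailer line anywhere in the
# message body; an approximation of what DCO bots look for.
SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_signoff(commit_message: str) -> bool:
    """True if the commit message carries a Signed-off-by trailer."""
    return bool(SIGNOFF_RE.search(commit_message))

good = "rpc: Synchronize slot allocation code\n\nSigned-off-by: Jane Dev <jane@example.com>\n"
bad = "rpc: Synchronize slot allocation code\n"
print(has_signoff(good))  # True
print(has_signoff(bad))   # False
```

Wired into a CI check on pull requests, this would block merges of unsigned commits without any human gatekeeping.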


> - can we define also some goals on why to migrate ?
>

Sure, will list below.


> the thread does not really explain why, except "that's what everybody
> is doing". Based on previous migrations in different contexts, that's
> usually not sufficient, and we get the exact same amount of
> contribution no matter what (like static blog vs WordPress vs static
> blog), except that someone (usually me) has to do lots of work.
>
>
I agree, and sorry about causing a lot of work for you :-/ None of this is
intentional. We all strive for and look for better ways as tools (and we)
evolve. It is good to recheck whether we are using the right tools and
processes at least every 2 years.


> So could someone give a measurable estimate of what is going to be
> improved, along with a timeframe for the estimated improvement?
> (like: in 6 months, this will bring new developers, or this will make
> patches merge 10% faster). And just to be clear, if the goals are not
> reached in the timeframe, I am going to hold people accountable and
> complain next time someone proposes a migration.
>
>
This part of the email is critical, for everyone: if we don't measure it,
we can't claim to have achieved anything.