Re: [Gluster-infra] Updating email alias for GerritHub account

2022-03-02 Thread Niels de Vos
On Wed, Mar 02, 2022 at 12:23:34PM +0100, Michael Scherer wrote:
> Hi,
> 
> I redirected it to root:
> https://github.com/gluster/gluster.org_ansible_configuration/commit/b0e583918f16c924c80f8daa997b4a9890a9075e

Great, thanks!

Niels


> 
> On Wed, 23 Feb 2022 at 11:08, Niels de Vos  wrote:
> >
> > Hi!
> >
> > gerrit...@gluster.org is an alias that currently points to me. As I am
> > not (much) involved in Gluster anymore, I am not the right person to
> > receive these emails.
> >
> > GerritHub is still used with NFS-Ganesha, and the FSAL integration with
> > libgfapi.
> >
> > I'd appreciate it if the gerrit...@gluster.org alias could be pointed to
> > some private (password reset mails) mailing list, or to someone who should
> > be able to recover the Gluster Community Jenkins account on GerritHub.
> >
> > Thanks!
> > Niels
> > ___
> > Gluster-infra mailing list
> > Gluster-infra@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-infra
> 
> 
> 
> -- 
> Michael Scherer / He/Il/Er/Él
> Sysadmin, Community Infrastructure
> 



[Gluster-infra] Updating email alias for GerritHub account

2022-02-23 Thread Niels de Vos
Hi!

gerrit...@gluster.org is an alias that currently points to me. As I am
not (much) involved in Gluster anymore, I am not the right person to
receive these emails.

GerritHub is still used with NFS-Ganesha, and the FSAL integration with
libgfapi.

I'd appreciate it if the gerrit...@gluster.org alias could be pointed to
some private (password reset mails) mailing list, or to someone who should
be able to recover the Gluster Community Jenkins account on GerritHub.

Thanks!
Niels



Re: [Gluster-infra] [Gluster-devel] Update on georep failure

2021-02-03 Thread Niels de Vos
On Tue, Feb 02, 2021 at 09:19:23PM +0100, Michael Scherer wrote:
> On Tuesday, 2 February 2021 at 21:06 +0200, Yaniv Kaul wrote:
> > On Tue, Feb 2, 2021 at 8:14 PM Michael Scherer 
> > wrote:
> > 
> > > Hi,
> > > 
> > > so we finally found the cause of the georep failure, after several
> > > days of work by Deepshika and me.
> > > 
> > > Short story:
> > > 
> > > 
> > > side effect of adding libtirpc-devel on EL 7:
> > > https://github.com/gluster/project-infrastructure/issues/115
> > 
> > 
> > Looking at
> > https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/191
> > - we weren't supposed to use it?
> > From
> > https://github.com/gluster/glusterfs/blob/d1d7a6f35c816822fab51c820e25023863c239c1/glusterfs.spec.in#L61
> > :
> > # Do not use libtirpc on EL6, it does not have xdr_uint64_t() and
> > xdr_uint32_t
> > # Do not use libtirpc on EL7, it does not have xdr_sizeof()
> > %if ( 0%{?rhel} && 0%{?rhel} <= 7 )
> > %global _without_libtirpc --without-libtirpc
> > %endif
> > 
> > 
> > CentOS 7 has an ancient version, CentOS 8 has a newer version, so
> > perhaps just use CentOS 8 slaves?
> 
> Fine for me for C8, but if libtirpc on EL7 is missing a function (or
> more), how come the code compiles without trouble, yet fails at run
> time in a rather non-obvious way?

From what I remember of the rpc functions, glibc provides an
implementation too. Symbols might get resolved partially from libtirpc
and the missing symbols from glibc. Mixing these will not work, as the
internal state/structures are different. Memory corruption and possibly
segfaults would most likely be the result.

If there is something linking against libtirpc, the library will (just
like glibc) be in memory, and symbols might get picked up from the wrong
library causing issues.
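To make the diagnosis above concrete, the dynamic symbol tables of the
candidate libraries can be compared. This is only a diagnostic sketch;
the library paths are assumptions and will differ per distribution.

```shell
# List which of the given shared libraries export a symbol, to spot the
# glibc/libtirpc overlap described above. Paths are assumptions; adjust
# for your distribution (e.g. /lib/x86_64-linux-gnu on Debian).
check_symbol() {
    sym="$1"; shift
    for lib in "$@"; do
        [ -e "$lib" ] || continue
        # nm -D prints the dynamic symbol table; " T " marks an
        # exported code symbol.
        if nm -D "$lib" 2>/dev/null | grep -q " T ${sym}"; then
            echo "${sym} is exported by ${lib}"
        fi
    done
}

check_symbol xdr_uint64_t /lib64/libc.so.6 /lib64/libtirpc.so.3
```

If the same xdr_* symbol shows up in more than one library that a
process loads, which copy wins depends on link order, matching the
mixed-state failure mode described above.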

HTH,
Niels



[Gluster-infra] Added new admin to maintainers mailinglist

2020-02-17 Thread Niels de Vos
Hi,

sankars...@kadalu.io has been added to the admins/owners of the
maintainers list and he received login details off-list.

There were no admins listed anymore; I am not sure who deleted those a
few weeks ago. list-ad...@gluster.org has been added as well, as it
seems this address is used for (all?) other lists too.

Niels



[Gluster-infra] New GitHub repository: samba-integration

2019-08-19 Thread Niels de Vos
Hi,

The developers working on Samba want to add some tests for integrating
Samba with Gluster. To get this started, I have created a repository
'samba-integration' under the Gluster project. Members of the team
'Samba Integration' will have maintainer permissions there.

Please let me know if there is anything else I need to take care of
concerning the infrastructure component of the repo/team creation.

Thanks,
Niels


Re: [Gluster-infra] gluster-centos container on Docker Hub still does not get automatically rebuilt

2019-01-03 Thread Niels de Vos
On Wed, Oct 17, 2018 at 10:53:21PM +0200, Niels de Vos wrote:
> Hi Humble,
> 
> It seems that merging changes in the gluster-container repository still
> does not trigger a rebuild in Docker Hub. Two weeks ago you had a look
> at it and things did get rebuilt at least once. Could you have a look at
> it again?
> 
> I'm also happy to check it out, but I'm not in the Gluster team. Maybe
> you or someone else can add me? My username is 'nixpanic'.

It seems that the container images are still not built automatically.
What needs to be done to get some progress here?

This was reported at
https://github.com/gluster/gluster-containers/pull/115#issuecomment-451070886
and it would be good to update the status there as well.

Thanks,
Niels


Re: [Gluster-infra] [Gluster-devel] Centos CI automation Retrospective

2018-11-02 Thread Niels de Vos
On Fri, Nov 02, 2018 at 11:32:12AM +0530, Nigel Babu wrote:
> Hello folks,
> 
> On Monday, I merged in the changes that allowed all the jobs in Centos CI
> to be handled in an automated fashion. In the past, it depended on Infra
> team members to review, merge, and apply the changes on Centos CI. I've now
> changed that so that the individual job owners can do their own merges.
> 
> 1. On sending a pull request, a travis-ci job will ensure the YAML is valid
> JJB.
> 2. On merge, we'll apply the changes to ci.centos.org with travis-ci.

Thanks for getting this done, it is a great improvement!

Niels


[Gluster-infra] gluster-centos container on Docker Hub still does not get automatically rebuilt

2018-10-17 Thread Niels de Vos
Hi Humble,

It seems that merging changes in the gluster-container repository still
does not trigger a rebuild in Docker Hub. Two weeks ago you had a look
at it and things did get rebuilt at least once. Could you have a look at
it again?

I'm also happy to check it out, but I'm not in the Gluster team. Maybe
you or someone else can add me? My username is 'nixpanic'.

Thanks!
Niels


Re: [Gluster-infra] Plan to move out of rackspace

2018-10-17 Thread Niels de Vos
On Wed, Oct 17, 2018 at 12:01:45PM +0200, Michael Scherer wrote:
> Hi,
> 
> so, as the people who have to deal with that: we have to get out of
> Rackspace, since they are stopping sponsoring free software projects in
> 2 months.
> 
> After cleaning up servers, we still have the following systems in
> rackspace:
> 
> - download.gluster.org
> - supercolony.gluster.org
> - bugs.gluster.org
> - a few builders

...

> bugs
> 
> 
> This one is simple, and will likely be the first one I deal with. 
> 
> So my first plan was to just finish doing ansible integration, move it
> to a VM on the internal vlan (so behind our proxy), and switch. 
> 
> However, looking at the work I did, I wonder if it would be simpler to
> generate the code on a jenkins builder (for example, bugzilla.int), and
> just run the jobs here (and copy the data to the internal http server
> like ci-logs, etc)
> 
> Nigel did discuss moving that to OpenShift, which would be a good idea,
> but we do not have that yet. Frankly, given that all my attempts to
> install OpenShift ended with spending lots of time fixing small
> bugs, I think I will wait until the pace of break^W innovation slows
> down before using it, as I do have a 2-month deadline looming.
> 
> Niels, as you are the one who deals with bugs.gluster.org, do you have
> an opinion on that plan?

Go for it! The https://bugs.cloud.gluster.org site is quite static and
can be generated from Jenkins. As long as the generated data file is
correctly linked from the webpage, everything should work just fine. The
page itself does not get updated often; it could be a git clone into
which the generated data is placed. A webserver just needs to host
three(?) files.

See https://github.com/gluster/gluster-bugs-webui for details.
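Since the infra already drives jobs through Jenkins Job Builder, the
daily regeneration could be sketched roughly as below. This is a
hypothetical job definition, not the real one; the job name, node label,
generation script, and rsync destination are all assumptions.

```yaml
# Hypothetical JJB definition for regenerating the static bugs site.
- job:
    name: bugs-webui-refresh
    node: bugzilla.int
    triggers:
      - timed: '@daily'
    builders:
      - shell: |
          rm -rf gluster-bugs-webui
          git clone https://github.com/gluster/gluster-bugs-webui
          cd gluster-bugs-webui
          # generate the data file (script name is a placeholder)
          ./generate-data.sh
          # copy the handful of static files to the internal web server
          rsync -av index.html data.json ci-logs:/srv/bugs/
```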

Thanks,
Niels


Re: [Gluster-infra] Maintaining gluster/centosci repo

2018-10-12 Thread Niels de Vos
On Fri, Oct 12, 2018 at 10:59:08AM +0530, Nigel Babu wrote:
> Hello folks,
> 
> The centosci repo keeps falling behind in terms of reviews and merges +
> delay in applying the merges on ci.centos.org. I'd like to propose the
> following to change that. This change will impact everyone who runs a job
> on Centos CI.
> 
> * As soon as you merge a patch into that repo, we will apply that patch on
> Centos CI using Jenkins/Travis (don't really care which one).
> * Every team that has a job will have at least one committer (preferably
> more than 1). Please feel free to review and merge patches as long as
> they only apply to your job. If you want to add new committers, please
> let us know.
> * If you need to create a new job, you can ask us for initial review, but
> the rest can be handled by your team independently.
> * If you want an old job deleted, please file a bug.
> 
> Does this sound acceptable? I'm going to deploy a CI job to apply master on
> Centos CI on 29th. Please nominate folks from your teams who need explicit
> commit access. The first day might be choppy in case there's a diff between
> what's in ci.centos.org vs what's on the repo.

Thanks! Having a job that automatically applies changes to the jobs in
the CI will be very useful.

Maybe we can have a MAINTAINERS or OWNERS file in the repo that lists
who the contact for certain component tests is?
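For illustration, such a file could stay very simple. The layout below
is a made-up example; the handles and job paths are placeholders, not
actual assignments.

```
# OWNERS -- hypothetical example, one contact per job definition
centos-ci/jobs/build_rpms.yml:          nixpanic
centos-ci/jobs/gd2-nightly.yml:         glusterd2-team
centos-ci/jobs/samba-integration.yml:   samba-integration-team
```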

Niels


Re: [Gluster-infra] [Gluster-devel] Weekly Untriaged Bugs

2018-10-01 Thread Niels de Vos
On Mon, Oct 01, 2018 at 07:37:13AM +0530, Sankarshan Mukhopadhyay wrote:
> On Mon, Oct 1, 2018 at 7:15 AM  wrote:
> >
> 
> 
> > https://bugzilla.redhat.com/1625501 / project-infrastructure: gd2 smoke 
> > tests fail with cannot create directory ‘/var/lib/glusterd’: Permission 
> > denied
> > https://bugzilla.redhat.com/1626453 / project-infrastructure: 
> > glusterd2-containers nightly job failing
> > https://bugzilla.redhat.com/1627624 / project-infrastructure: Run gd2-smoke 
> > only after smoke passes
> > https://bugzilla.redhat.com/1631390 / project-infrastructure: Run smoke and 
> > regression on a patch only after passing clang-format job
> > https://bugzilla.redhat.com/1633497 / project-infrastructure: Unable to 
> > subscribe to upstream mailing list.
> 
> Some of these have on-going commentary but are in NEW, perhaps they
> should be in ASSIGNED.

Or add Triaged to the list of Keywords in the BZs.

Re: [Gluster-infra] [Gluster-devel] Github teams/repo cleanup

2018-07-25 Thread Niels de Vos
On Wed, Jul 25, 2018 at 02:38:57PM +0200, Michael Scherer wrote:
> On Wednesday, 25 July 2018 at 14:08 +0200, Michael Scherer wrote:
> > On Wednesday, 25 July 2018 at 16:06 +0530, Nigel Babu wrote:
> > > I think our team structure on Github has become unruly. I prefer
> > > that we use teams only when we can demonstrate that there is a
> > > strong need. At the moment, the gluster-maintainers and the
> > > glusterd2 projects have teams that have a strong need. If any other
> > > repo has a strong need for teams, please speak up. Otherwise, I
> > > suggest we delete the teams and add the relevant people as
> > > collaborators on the project.
> > > 
> > > It should be safe to delete the gerrit-hooks repo. These are now
> > > Github jobs. I'm not in favor of archiving the old projects if
> > > they're going to be hidden from someone looking for them. If they
> > > just move to the end of the listing, it's fine to archive.
> > 
> > So I did a test and just archived gluster/vagrant, and it can still
> > be found.
> > 
> > So I am going to archive at least the salt stuff and the gerrit-hooks
> > one, and remove the empty one.
> 
> So while cleaning things up, I wonder if we can remove this one:
> https://github.com/gluster/jenkins-ssh-slaves-plugin
> 
> We have just a fork, lagging behind upstream, and I am sure we do not
> use it.

We had someone working on starting/stopping Jenkins slaves in Rackspace
on-demand. He has since left Red Hat, and I do not think the infra team
had a great interest in this either (with the move out of Rackspace).

It can be deleted from my point of view.

Niels



Re: [Gluster-infra] Switching mailman to https only

2018-07-02 Thread Niels de Vos
On Mon, Jul 02, 2018 at 04:55:22PM +0200, Michael Scherer wrote:
> Hi,
> 
> as part of a long overdue cleanup of our playbooks, I moved mailman to
> be https only, and removed some hacks due to supercolony being EL6 (so
> no certbot, etc.). I will continue to clean up that server so we can
> one day hope to switch to a more modern stack and finally get rid of
> all the exceptions we have around EL6 in our playbooks.
> 
> So if you see anything weird wrt the web interface of mailman, please
> open a bug against the infra component so we can take a look.

When I moderate some messages, I get the following warning from Firefox:

The information you have entered on this page will be sent over an
insecure connection and could be read by a third party.

Are you sure you want to send this information?

There might be a mailman config option that makes the forms post to
https?
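If supercolony runs Mailman 2, the knob that controls the URLs embedded
in the generated pages is probably the URL pattern in mm_cfg.py. The
snippet below is a sketch under that assumption; the file location
varies per installation.

```python
# mm_cfg.py (Mailman 2.x) -- make generated web/form URLs use https.
# The exact path of this file is installation-specific.
DEFAULT_URL_PATTERN = 'https://%s/mailman/'
PUBLIC_ARCHIVE_URL = 'https://%(hostname)s/pipermail/%(listname)s'
```

Existing lists keep their stored URL until something like
`withlist -l -a -r fix_url` is run against them.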

Thanks,
Niels



Re: [Gluster-infra] run-tests-in-vagrant

2018-02-16 Thread Niels de Vos
On Fri, Feb 16, 2018 at 10:08:51AM +0530, Nigel Babu wrote:
> So we have a job that's unmaintained and unwatched. If nobody steps up to
> own it in the next 2 weeks, I'll be deleting this job.

I fixed the downloading of the Vagrant box with
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/122 .

Maybe Talur can help with updating the box so that geo-replication tests
function? If there is an Ansible role/repository with the changes that
were done on the Jenkins slaves, that could possibly help.

Is it possible to provide a Vagrant box configured similarly to the
Jenkins slaves (without the Jenkins bits and other internal pieces) from
the same deployment as the slaves? That would make things less manual
and much easier to consume.

Thanks!
Niels


> 
> On Wed, Feb 14, 2018 at 4:49 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Feb 14, 2018 at 11:15:23AM +0530, Nigel Babu wrote:
> > > Hello,
> > >
> > > Centos CI has a run-tests-in-vagrant job. Do we continue to need this
> > > anymore? It still runs master and 3.8. I don't see this job adding much
> > > value at this point given we only look at results that are on
> > > build.gluster.org. I'd like to use the extra capacity for other tests
> > that
> > > will run on centos-ci.
> >
> > The ./run-tests-in-vagrant.sh script is ideally what developers run
> > before submitting their patches. In case it fails, we should fix it.
> > Being able to run tests locally is something many of the new
> > contributors want to do. Having a controlled setup for the testing can
> > really help with getting new contributors onboard.
> >
> > Hmm, and the script/job definitely seems to be broken in at least two
> > places:
> > - the Vagrant version on CentOS uses the old URL to get the box
> > - 00-georep-verify-setup.t fails, but the result is marked as SUCCESS
> >
> > It seems we need to get better at watching the CI, or at least be able
> > to receive and handle notifications...
> >
> > Thanks,
> > Niels
> >
> 
> 
> 
> -- 
> nigelb


Re: [Gluster-infra] [Gluster-Maintainers] glusterfs-3.13.1 released

2017-12-21 Thread Niels de Vos
On Thu, Dec 21, 2017 at 02:07:50AM +, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/35/artifact/glusterfs-3.13.1.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/35/artifact/glusterfs-3.13.1.sha512sum

The tarball is not at the usual place yet. Could it be copied to

 - http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.13.1.tar.gz
   (maybe it's there, but I get a 403 error message)
 - https://download.gluster.org/pub/gluster/glusterfs/3.13/ + subdir

Thanks,
Niels


Re: [Gluster-infra] gluster-zeroconf has been moved to the Gluster GitHub organisation

2017-11-16 Thread Niels de Vos
On Thu, Nov 16, 2017 at 08:25:49PM +0530, Nigel Babu wrote:
> Please create a team and add non-org admins as team maintainers.

Thanks, I've created a @gluster/gluster-zeroconf team with admin
permissions on the gluster-zeroconf repository. Maintainers for the
repository are added to the team and not as individual
contributors.

Niels


> 
> On Thu, Nov 16, 2017 at 7:27 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > Hi all,
> >
> > I have moved the gluster-zeroconf repository from my personal github
> > account to the Gluster organisation one. Dustin Black and I are the
> > two current contributors/maintainers; we'll be adding more if people
> > express interest (Ramky?). There is no "GitHub Team" for the admins of
> > this repository, I do not know if that is something we're doing for all
> > of the projects we're hosting?
> >
> > For this repository we will be using GitHub Pull-Requests for reviewing
> > and merging changes. The number of patches will likely be small and the
> > Gerrit review workflow adds (too much) overhead.
> >
> > Niels
> >
> 
> 
> 
> -- 
> nigelb


[Gluster-infra] gluster-zeroconf has been moved to the Gluster GitHub organisation

2017-11-16 Thread Niels de Vos
Hi all,

I have moved the gluster-zeroconf repository from my personal github
account to the Gluster organisation one. Dustin Black and I are the
two current contributors/maintainers; we'll be adding more if people
express interest (Ramky?). There is no "GitHub Team" for the admins of
this repository, I do not know if that is something we're doing for all
of the projects we're hosting?

For this repository we will be using GitHub Pull-Requests for reviewing
and merging changes. The number of patches will likely be small and the
Gerrit review workflow adds (too much) overhead.

Niels


Re: [Gluster-infra] Jenkins Nodes changes

2017-10-11 Thread Niels de Vos
On Wed, Oct 11, 2017 at 06:29:47PM +0530, Amar Tumballi wrote:
> Thanks for the link, Niels.
> 
> I was thinking of a similar setup; I didn't know it existed. Now, the
> question is: can I have a tar.gz here too? And can we include the
> experimental branch here?

Sure, the tar.gz can be added easily. It is part of the src.rpm that
should be available there too. You can send a pull request for this
script to add the tar.gz if you like:
  
https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/centos-ci/scripts/nightly-builds.sh

The experimental branch can be added too, you'll need to modify these
two Jenkins Job Builder files for that:
  
https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/centos-ci/jobs/build_rpms.yml
  
https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/centos-ci/jobs/nightly-rpm-builds.yml

And because the release-* branches are detected as anything != 'master', it
needs an update to the nightly-builds.sh script linked above as well.
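For illustration, the branch handling change discussed here could look
something like the following. This is a sketch only; function and
variable names are assumptions, not the actual contents of
nightly-builds.sh.

```shell
# Hypothetical helper for nightly-builds.sh: treat the experimental
# branch like master (nightly date-stamped version), and derive the
# version from the branch name for release-* branches.
branch_to_version() {
    case "$1" in
        master|experimental) date +%Y%m%d ;;
        release-*)           echo "${1#release-}" ;;
        *)                   echo "$1" ;;
    esac
}

branch_to_version release-3.12   # prints "3.12"
```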

> Another suggestion: should we have a date-wise organization of these
> folders?

This is kept as simple as possible: the git commit is part of the
release version, so it should be easy to find the date. The file listing
also shows the date of each build. I am not sure what benefit putting
the date somewhere in the path would bring. We can create subdirectories
with the date for each build if needed; the YUM repository can be
generated with RPMs in subdirectories. What is the advantage or
practical use-case you see?

Niels


> -Amar
> 
> On Wed, Oct 11, 2017 at 6:26 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Oct 11, 2017 at 11:53:21AM +0530, Amar Tumballi wrote:
> > > Can we keep nightly builds of different branches in this new server?
> > Would
> > > be good to keep just the last 7 days of builds.
> >
> > Which nightly builds do you refer to?
> >
> > We have builds that can be consumed in the CentOS CI (CentOS 6/7, x86_64
> > only atm) for all active branches. The environment is configured to
> > remove files older than 30 days. There are *many* builds available:
> >   http://artifacts.ci.centos.org/gluster/nightly/
> >
> > Niels
> >
> >
> > >
> > > Regards,
> > > Amar
> > >
> > > On 11-Oct-2017 10:14 AM, "Nigel Babu" <nig...@redhat.com> wrote:
> > >
> > > > Hello folks,
> > > >
> > > > I've just gotten back after a week away. I've made a couple of changes
> > to
> > > > Jenkins nodes:
> > > >
> > > > * All smoke jobs now run on internal nodes.
> > > > * All Rackspace nodes are back in action. We had a few issues with some
> > > > nodes, all of them have been looked into and fixed.
> > > >
> > > > In the near future, we plan to have a ci-logs.gluster.org domain where
> > > > the smoke logs and regression logs will be available instead of having
> > a
> > > > web server on individual nodes. Deepshika is actively working on
> > making the
> > > > changes required to get this done.
> > > >
> > > > --
> > > > nigelb
> > > >
> >
> >
> 
> 
> -- 
> Amar Tumballi (amarts)


Re: [Gluster-infra] Jenkins Nodes changes

2017-10-11 Thread Niels de Vos
On Wed, Oct 11, 2017 at 11:53:21AM +0530, Amar Tumballi wrote:
> Can we keep nightly builds of different branches in this new server? Would
> be good to keep just the last 7 days of builds.

Which nightly builds do you refer to?

We have builds that can be consumed in the CentOS CI (CentOS 6/7, x86_64
only atm) for all active branches. The environment is configured to
remove files older than 30 days. There are *many* builds available:
  http://artifacts.ci.centos.org/gluster/nightly/

Niels


> 
> Regards,
> Amar
> 
> On 11-Oct-2017 10:14 AM, "Nigel Babu"  wrote:
> 
> > Hello folks,
> >
> > I've just gotten back after a week away. I've made a couple of changes to
> > Jenkins nodes:
> >
> > * All smoke jobs now run on internal nodes.
> > * All Rackspace nodes are back in action. We had a few issues with some
> > nodes, all of them have been looked into and fixed.
> >
> > In the near future, we plan to have a ci-logs.gluster.org domain where
> > the smoke logs and regression logs will be available instead of having a
> > web server on individual nodes. Deepshika is actively working on making the
> > changes required to get this done.
> >
> > --
> > nigelb
> >



Re: [Gluster-infra] Static analysis job for Gluster

2017-08-16 Thread Niels de Vos
On Wed, Aug 16, 2017 at 06:05:20PM +0530, Deepshikha Khandelwal wrote:
> Hello everyone,
> 
> We have been testing static analysis for Gluster on the CI system for the
> last few weeks.
> 
> We have implemented cppcheck[1] and clang[2] jobs on build.gluster.org.
> They run nightly against the master branch.
> 
> These jobs are currently active and will provide a report every day. If you
> have any feedback or questions, please feel free to get in touch with us.
> 
> [1] https://build.gluster.org/job/cppcheck/
> [2] https://build.gluster.org/job/clang-scan/

I like graphs, those are great! Hopefully we see the number of errors
drop though :)

It took me a while to find the details and a stable (without jobId)
URL to access them. Would these be the ones I can bookmark?

https://build.gluster.org/job/cppcheck/lastCompletedBuild/cppcheckResult/
 - scroll down to "unchanged"
 - if I click on the links, I get "The current user must have the
   WORKSPACE permission for the job."

https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/

Thanks,
Niels


> Thanks & Regards,
> Deepshikha Khandelwal




Re: [Gluster-infra] Weird branches in the glusterfs repository on GitHub

2017-05-01 Thread Niels de Vos
On Mon, May 01, 2017 at 12:05:51PM +0530, Nigel Babu wrote:
> The first one is a Gerrit configuration branch and it *should* be there.

Well, the configuration seems to be old (last updated 1+ year ago). So
it probably was sync'd occasionally to GitHub, but not with any recent
changes? I've never noticed the branch in GitHub before, but maybe I'm
mistaken and it was always there.

> The other two, well, I recommend asking around to see who created those
> branches. There's no "sync script". We sync all of what's in Gerrit over to
> Github. If it's there, it means someone created it on Gerrit.

No, these do not seem to be in Gerrit. There are (or were) quite a few
people with push privileges to the GitHub glusterfs repository. Maybe
someone pushed directly there without going through Gerrit? I don't know
of any way to check who pushed to a repository, but maybe someone on
this list does and can check?

Niels


> 
> On Sun, Apr 30, 2017 at 3:40 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > There are a few branches in the glusterfs github repository that should
> > not be there:
> >
> >   - meta/config (partial/old Gerrit configuration)
> >   - v3.7.15
> >   - v3.8.2
> >
> > The last two are *branches*, and the matching tags exist as well. I'm
> > not sure since when they have been in the github repository; I don't
> > seem to be able to find details about any of the push operations there.
> >
> > Could it be that a sync-script went wrong at one point?
> >
> > Niels
> >
> >
> 
> 
> 
> -- 
> nigelb



[Gluster-infra] Weird branches in the glusterfs repository on GitHub

2017-04-30 Thread Niels de Vos
There are a few branches in the glusterfs github repository that should
not be there:

  - meta/config (partial/old Gerrit configuration)
  - v3.7.15
  - v3.8.2

The last two are *branches*, and the matching tags exist as well. I'm
not sure since when they have been in the github repository; I don't
seem to be able to find details about any of the push operations there.

Could it be that a sync-script went wrong at one point?

Niels



Re: [Gluster-infra] [Bug 1418369] Need way to revert commits or mark tests bad without testing

2017-04-21 Thread Niels de Vos
On Mon, Apr 17, 2017 at 05:46:53AM +, bugzi...@redhat.com wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1418369
> 
> Nigel Babu  changed:
> 
>What|Removed |Added
> 
>  Status|NEW |CLOSED
>  CC||j...@pl.atyp.us,
>||nig...@redhat.com
>  Resolution|--- |CURRENTRELEASE
> Last Closed||2017-04-17 01:46:53
> 
> 
> 
> --- Comment #3 from Nigel Babu  ---
> The technical capability is now done. There's a team called gluster-plumbers
> who have permission to push to any branch directly. It has Vijay, Jeff, and
> Shyam. As far as infra is concerned this is fixed. If the policy dictates
> changes to this, please file a bug.

Just wondering if we should have a "plumber" role in the Maintainers 2.0
document too?

Thanks,
Niels



Re: [Gluster-infra] Needed permission to post new labels to github repo

2017-03-28 Thread Niels de Vos
On Tue, Mar 28, 2017 at 03:44:14PM +0530, Amar Tumballi wrote:
> Hi,
> 
> Not sure who maintains the permissions / ownership of gluster's github
> repos.
> 
> I need permission to create new labels in github.com/gluster/glusterfs repo
> as we are starting to use github issues to track releases etc.

I think the new "process" is to file a bug in Bugzilla against the
project-infrastructure component and request both Vijay and Amye to
"approve" it.

HTH,
Niels



Re: [Gluster-infra] [Gluster-devel] Dropping nightly build from download.gluster.org ?

2017-03-24 Thread Niels de Vos
On Thu, Mar 23, 2017 at 05:29:05PM -0400, Michael Scherer wrote:
> Hi,
> 
> so I am trying to run aide on a download.gluster.org mirror for more
> security (the rationale being that if d.g.o is compromised, a separate
> server wouldn't be, but that means we sync the data on a regular
> basis), and I did hit a few issues, mostly around space.
> 
> One of the issue was the size of static analysis logs, but we will move
> them out of d.g.o. (28G of logs...)
> 
> Another one is the presence of nightlies snapshots on that server,
> around 52G of data. 
> 
> There is for example 20G of gluster-3.7 snapshot, and I think we could
> remove them now (until we push them to a separate download server as
> well).
> 
> Do we still need them ?

No nightly builds are needed on d.g.o anymore, any dgo-nightly stuff can
be removed. It is not updated anymore and nightly builds were only there
for testing.


> in fact, who here would be volunteer to spend time with me to make a
> decision about cleaning the directories, since there is lots of stuff
> where the hierarchy is slightly suboptimal.
> 
> For example:
> # ls
> dgo-nightly  glusterfs  glusterfs-3.3  glusterfs-3.4  glusterfs-3.5
> glusterfs-3.6  glusterfs-3.7  sources
> 
> glusterfs has no specific version, but the others do. It also show
> fedora 23 as the most recent version. 
> sources contains all the source on a flat directory structure.
> 
> Another example:
> pub/gluster/glusterfs has various directory for versions of glusterfs,
> but also do have libvirt, vagrant and nfs-ganesha, who are not version,
> and might be rather served from a directory upstream (in fact,
> nfs-ganesha and glusterfs-coreutils are also on
> https://download.gluster.org/pub/gluster/ )

A cleanup is much appreciated! Maybe come up with a proposed directory
structure and see from there what makes sense to keep or remove?

Niels



[Gluster-infra] Java bindings are now maintained under the Gluster Organisation in GitHub

2017-02-17 Thread Niels de Vos
This week we moved the two repositories [0][1] from Louis' personal
account in GitHub to the Gluster organisation. Louis has done a great
amount of work on these already, and keeps full admin privileges to
these repositories.

Ramesh is one of our Red Hat engineers that will start to contribute to
these repositories and help with making packages available in
distributions that show interest.

Thanks guys!
Niels


[0] https://github.com/gluster/glusterfs-java-filesystem
[1] https://github.com/gluster/libgfapi-jni




Re: [Gluster-infra] Build machines requires glib2-devel

2016-09-30 Thread Niels de Vos
On Thu, Sep 29, 2016 at 03:22:57PM +0530, Prasanna Kalever wrote:
> On Thu, Sep 29, 2016 at 3:01 PM, Niels de Vos <nde...@redhat.com> wrote:
> > On Thu, Sep 29, 2016 at 02:17:38PM +0530, Prasanna Kalever wrote:
> >> Hi,
> >>
> >> The patch [1] requires glib2-devel, can someone help me in installing
> >> this in the smoke machines?
> >
> > Shouldn't configure.ac detect dependencies and disable parts in case
> > glib2-devel is missing?
> 
> That's a good point, Niels. Yes, ideally it should.
> 
> From: Console Log
> QEMU Block formats   : yes
> 
> Which means the machine has glib2-devel
> 
> Now, I am not sure why this is failing then ...
> 
> 07:08:42 ERROR:
> Exception(glusterfs-3.10dev-0.94.git2389cc0.el6.src.rpm)
> Config(fedora-24-x86_64) 5 minutes 53 seconds
> 07:08:42 INFO: Results and/or logs in:
> /home/jenkins/root/workspace/devrpm-fedora/RPMS/fc24/x86_64/
> 07:08:42 INFO: Cleaning up build root ('cleanup_on_failure=True')
> 07:08:42 Start: clean chroot
> 07:08:52 Finish: clean chroot
> 07:08:52 ERROR: Command failed. See logs for output.
> 07:08:52  # bash --login -c /usr/bin/rpmbuild -bb --target x86_64
> --nodeps /builddir/build/SPECS/glusterfs.spec
> 07:08:52 Build step 'Execute shell' marked build as failure
> 07:08:53 Archiving artifacts
> 
> 07:08:53 Finished: FAILURE
> 
> Any Clues from logs ?

Yes, click "build.log" for the failed tests that you have linked below.
You'll land at
https://build.gluster.org/job/devrpm-fedora/1660/artifact/RPMS/fc24/x86_64/build.log
and similar.

  Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/builddir/build/BUILDROOT/glusterfs-3.10dev-0.94.git2389cc0.fc24.x86_64
  RPM build errors:
  error: Installed (but unpackaged) file(s) found:
 /usr/lib64/glusterfs/3.10dev/xlator/features/qemu-block.so
  Installed (but unpackaged) file(s) found:
 /usr/lib64/glusterfs/3.10dev/xlator/features/qemu-block.so
  Child return code was: 1

All files need to be listed in the %files section for the sub-package
that should include them.
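For illustration, a hedged sketch of the kind of %files entry that was missing; the sub-package placement and the path macros here are assumptions, not the actual glusterfs.spec.in content:

```spec
# Hypothetical fragment: claim the new xlator in the sub-package that
# should ship it, so rpmbuild no longer reports it as unpackaged.
%files
%{_libdir}/glusterfs/%{version}/xlator/features/qemu-block.so
```

If the feature is optional, the entry can additionally be wrapped in a matching %if conditional so the file is only listed when it is actually built.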

Niels


> 
> 
> Apologies for the wrong buzzer!
> 
> --
> Prasanna
> 
> >
> > Niels
> >
> >>
> >> Here is the failure smoke, reporting the rpm build configuration failure 
> >> [2].
> >>
> >> [1] http://review.gluster.org/#/c/15589/1/glusterfs.spec.in
> >>
> >> [2]
> >> http://build.gluster.org/job/devrpm-fedora/1660/ : FAILURE
> >> http://build.gluster.org/job/devrpm-el6/1647/ : FAILURE
> >> http://build.gluster.org/job/devrpm-el7/1652/ : FAILURE
> >> http://build.gluster.org/job/strfmt_errors/588/ : FAILURE
> >>
> >> Thanks,
> >> --
> >> Prasanna



Re: [Gluster-infra] Build machines requires glib2-devel

2016-09-30 Thread Niels de Vos
On Thu, Sep 29, 2016 at 02:17:38PM +0530, Prasanna Kalever wrote:
> Hi,
> 
> The patch [1] requires glib2-devel, can someone help me in installing
> this in the smoke machines?

Shouldn't configure.ac detect dependencies and disable parts in case
glib2-devel is missing?
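For example, a minimal sketch of how a configure.ac check could make the feature conditional on glib2; the variable and conditional names are assumptions, not the actual glusterfs build wiring:

```m4
# Hypothetical configure.ac fragment: probe for glib-2.0 via pkg-config
# and only build the dependent parts when the development files exist.
PKG_CHECK_MODULES([GLIB], [glib-2.0], [have_glib=yes], [have_glib=no])
AM_CONDITIONAL([BUILD_QEMU_BLOCK], [test "x$have_glib" = "xyes"])
```

With such a check in place, a builder without glib2-devel would simply skip the feature instead of failing the build.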

Niels

> 
> Here is the failure smoke, reporting the rpm build configuration failure [2].
> 
> [1] http://review.gluster.org/#/c/15589/1/glusterfs.spec.in
> 
> [2]
> http://build.gluster.org/job/devrpm-fedora/1660/ : FAILURE
> http://build.gluster.org/job/devrpm-el6/1647/ : FAILURE
> http://build.gluster.org/job/devrpm-el7/1652/ : FAILURE
> http://build.gluster.org/job/strfmt_errors/588/ : FAILURE
> 
> Thanks,
> --
> Prasanna



Re: [Gluster-infra] centos-5 build failures on mainline

2016-09-21 Thread Niels de Vos
On Wed, Sep 21, 2016 at 11:53:43AM +0530, Atin Mukherjee wrote:
> 
> 
> As of now we don't check for build sanity on RHEL5/centos-5 distros.
> I believe Gluster still has legacy support for these distros. Here we could
> either add a glusterfs-devrpms script for el5 for every patch submission or
> at worst have a nightly build to check the sanity of el5 build on mainline
> branch to ensure we don't break further?

Currently there is no way to build only the Gluster client part. This is
a limitation in the autoconf/automake scripts. The server-side requires
fancy things that are not available (in the required versions) for
different parts (mainly GlusterD?).

Once we have a "./configure --without-server" or similar, there is no use
in trying to build for RHEL/CentOS-5.

Niels



Re: [Gluster-infra] slave33 is sick

2016-09-14 Thread Niels de Vos
On Thu, Sep 15, 2016 at 08:36:51AM +0530, Nigel Babu wrote:
> File a bug rather than emailing gluster-infra, please.

Isn't that already filed as bug 1375521? And I'm pretty sure I disabled
the system earlier...

Niels

> 
> On Thu, Sep 15, 2016 at 8:33 AM, Kaleb Keithley  wrote:
> 
> > rpmbuild jobs are crashing with java runtime errors.
> >
> > And it seems to be first in the queue every time. I had to disable it so
> > that (one of) the other machines would pick up the job.
> >
> > --
> >
> > Kaleb
> > ___
> > Gluster-infra mailing list
> > Gluster-infra@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-infra
> >
> 
> 
> 
> -- 
> nigelb

> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra



Re: [Gluster-infra] [Gluster-devel] Jenkins Jobs on Gerrit

2016-09-11 Thread Niels de Vos
On Mon, Sep 12, 2016 at 10:47:21AM +0530, Nigel Babu wrote:
> Hello,
> 
> This has been in the works for some time and it's finally ready. Today, I'll 
> be
> moving the Jenkins Jobs for build.gluster.org to Gerrit[1]. This should not
> have any major side effects. The only one I see is a few pull requests will
> become un-mergable and will need to be re-submitted on Gerrit.
> 
> My eventual goal is the /opt/qa scripts to be separated from the jobs that 
> will
> be used to run them. When the Centos CI jobs are moved over to JJB, I'll have
> them in a different folder in the same repo.

Sounds good to me. But with this last paragraph, do you intend to place
the contents of /opt/qa in its own repository? Not that it really
matters to me, just trying to understand :)

Thanks,
Niels


> 
> This seems like a good way to move forward. If anyone has better ideas, let me
> know.
> 
> [1]: http://review.gluster.org/#/admin/projects/
> 
> --
> nigelb
> 
> --
> nigelb
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-infra] Zuul?

2016-09-06 Thread Niels de Vos
On Mon, Sep 05, 2016 at 11:32:19PM -0400, Vijay Bellur wrote:
> On 09/04/2016 01:43 PM, Niels de Vos wrote:
> > On Fri, Sep 02, 2016 at 12:12:00PM -0400, Jeff Darcy wrote:
> > > > We already only merge after NetBSD-regression and CentOS-regression have
> > > > voted
> > > > back. All I'm changing is that you don't need to do the merge manually 
> > > > or do
> > > > Verified +1 for regression to run.. Zuul will run the tests after you 
> > > > get
> > > > Code-Review +2 and merge it for you with patches ordered correctly.
> > > 
> > > The problem is that some reviewers (including myself) might not even look 
> > > at
> > > a patch until it already has CentOS+1 and NetBSD+1.  Reviewing code, 
> > > having
> > > it fail regressions, reviewing a substantially new version, having *that*
> > > fail regressions, etc. tends to be very frustrating for both authors and
> > > reviewers.  Fighting with the regression tests *prior* to review can still
> > > be very frustrating for authors, but at least it doesn't frustrate 
> > > reviewers
> > > as much and doesn't contribute to author/reviewer animosity (apparently a
> > > real problem in this group) as much.
> > 
> > This is the main problem I see with this proposal too. There are quite
> > regular patches posted that break things in related regression tests.
> > Those corner cases can be very difficult to spot in code review, but
> > automated testing catches them. It is one of the reasons I prefer to do
> > code reviews for changes that are known to be (mostly) correct. Other
> > times changes (they should!) add test-cases, and these fail with the
> > initial versions...
> 
> Would it be possible to have a workflow where verified +1 vote from the
> developer indicates that the regression tests have passed in their local
> setup?

I *really* hope that is the case already!!

> If we can establish that protocol, then reviewers will have more
> confidence to look into such patches. Additionally authors will also have
> more motivation to run tests locally and fix obvious problems in our
> regression tests.

I think that local testing is always a must. It is not as if it is
difficult to run the tests in a VM. I admit that I do not always run the
whole suite, but at a minimum I run the tests for the component that is
affected by the change I'm going to post. I wait with Verified=+1 until
I have actually run the tests locally.

> I am in favor of switching to zuul and would welcome not burdening the
> regression infrastructure by running unqualified patches.

Agreed :)



Re: [Gluster-infra] [Gluster-devel] Gerrit Access Control

2016-09-06 Thread Niels de Vos
On Tue, Sep 06, 2016 at 12:32:42PM +0530, Nigel Babu wrote:
> On Thu, Sep 01, 2016 at 12:43:06PM +0530, Nigel Babu wrote:
> > > > Just need a clarification. Does a "commit in the last 90 days" means
> > > > merging a patch sent by someone else by maintainer or maintainer 
> > > > sending a
> > > > patch to be merged?
> > >
> >
> > Your email needs to either be in Reviewed-By or Author in git log. So you
> > either need to send patches or review patches. Ideally, I'm looking for
> > activity on Gerrit and this is the easiest way to figure that out. Yes, I'm
> > checking across all active branches.
> >
> > As an additional bonus, this will also give us a list of people who should 
> > be
> > on the maintainers team, but aren't.
> >
> > > Interesting question. I was wondering about something similar as well.
> > > What about commits/permissions for the different repositories we host on
> > > Gerrit? Does each repository have its own maintainers, or is it one group
> > > of maintainers that has merge permissions for all repos?
> > >
> >
> > Each repo on Gerrit seems to mostly have its own permissions. That's
> > a sensible way to go about it. Some of them are unused; a clean-up is
> > coming along, but that's later.
> 
> I've answered everyone's concerns on this thread. If nobody is opposed to the
> idea, shall I go ahead with this?

If you mean "removing+emailing maintainers that are not active in Gerrit
anymore", I guess that should be fine. However, before you do this, make
sure the requirements to be counted as 'active' are included in our
contributors guide. You can then easily add the link to the page in your
emails.

Niels



Re: [Gluster-infra] Zuul?

2016-09-04 Thread Niels de Vos
On Fri, Sep 02, 2016 at 12:12:00PM -0400, Jeff Darcy wrote:
> > We already only merge after NetBSD-regression and CentOS-regression have
> > voted
> > back. All I'm changing is that you don't need to do the merge manually or do
> > Verified +1 for regression to run.. Zuul will run the tests after you get
> > Code-Review +2 and merge it for you with patches ordered correctly.
> 
> The problem is that some reviewers (including myself) might not even look at
> a patch until it already has CentOS+1 and NetBSD+1.  Reviewing code, having
> it fail regressions, reviewing a substantially new version, having *that*
> fail regressions, etc. tends to be very frustrating for both authors and
> reviewers.  Fighting with the regression tests *prior* to review can still
> be very frustrating for authors, but at least it doesn't frustrate reviewers
> as much and doesn't contribute to author/reviewer animosity (apparently a
> real problem in this group) as much.

This is the main problem I see with this proposal too. There are quite
regular patches posted that break things in related regression tests.
Those corner cases can be very difficult to spot in code review, but
automated testing catches them. It is one of the reasons I prefer to do
code reviews for changes that are known to be (mostly) correct. Other
times changes (they should!) add test-cases, and these fail with the
initial versions...

> That said, it would be nice to have *something* as a gate between +2 and
> merge - certainly a build, and at least a few basic tests (more than smoke
> does IMO).  If Zuul can help us avoid broken builds due to improper merge
> order, which seem to be the most common kind of broken builds, I'm all for
> it.

Our regression test suite allows running tests in a subdirectory, so
changes to the tests should not really be needed. In addition to the
simple smoke test, this might do:

  # ./run-tests.sh tests/basic/

Niels



Re: [Gluster-infra] [Gluster-devel] Gerrit Access Control

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 09:18:05PM +0530, Pranith Kumar Karampuri wrote:
> On Mon, Aug 29, 2016 at 12:25 PM, Nigel Babu  wrote:
> 
> > Hello folks,
> >
> > We have not pruned our Gerrit maintainers list ever as far as I can see.
> > We've
> > only added people. For security reasons, I'd like to propose that we do the
> > following:
> >
> > If you do not have a commit in the last 90 days, your membership from
> > gluster-maintainers team on Gerrit will be revoked. This means you won't
> > have
> > permission to merge patches. This does not mean you're no longer
> > maintainer.
> > This is only a security measure. To gain access again, all you have to do
> > is
> > file a bug against gluster-infra and I'll grant you access immediately.
> >
> 
> Just need a clarification. Does a "commit in the last 90 days" means
> merging a patch sent by someone else by maintainer or maintainer sending a
> patch to be merged?

Interesting question. I was wondering about something similar as well.
What about commits/permissions for the different repositories we host on
Gerrit? Does each repository has its own maintainers, or is it one group
of maintainers that has merge permissions for all repos?

Niels

> 
> 
> >
> > When I remove someone's access, I'll send an invidual email about it.
> > Again,
> > your membership on gluster-maintainers has no say on your maintainer
> > status.
> > This is only for security reasons.
> >
> > Thoughts on implementing this policy?
> >
> > --
> > nigelb
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 
> 
> 
> -- 
> Pranith

> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel




Re: [Gluster-infra] Decommission of the old download server

2016-08-29 Thread Niels de Vos
On Mon, Aug 29, 2016 at 04:13:49PM +0200, Michael Scherer wrote:
> Hi,
> 
> as a follow-up of the previous plan regarding the download server, I
> stopped httpd on the old server. From a quick look at the log, there are
> only 2 people in the world using it, 1 for a debian 3.6 debian package,
> and 1 for 3.3 rpms. As both are deprecated, I guess this shouldn't cause
> too many issues.

3.6 is still (a little) active; it goes EOL when 3.9 is released.
Is the same contents (still) available on download.gluster.org?

Niels



Re: [Gluster-infra] Please welcome Worker Ant

2016-08-23 Thread Niels de Vos
On Mon, Aug 22, 2016 at 05:17:49PM -0700, Amye Scavarda wrote:
> On Mon, Aug 22, 2016 at 2:47 AM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Mon, Aug 22, 2016 at 11:16:15AM +0200, Michael Scherer wrote:
> > > Le lundi 22 août 2016 à 12:41 +0530, Nigel Babu a écrit :
> > > > Hello,
> > > >
> > > > I've just switched the bugzilla authentication on Gerrit today. From
> > today
> > > > onwards, Gerrit will comment on Bugzilla using a new account:
> > > >
> > > > bugzilla-...@gluster.org
> > > >
> > > > If you notice any issues, please file a bug against
> > project-infrastructure.
> > > > This bot will only comment on public bugs, so if you open a review
> > request when
> > > > the bug is private, it will fail. This is intended behavior.
> > >
> > > You got me curious, in which case are private bugs required ?
> > >
> > > Do we need to make sure that the review is private also or something ?
> >
> > I do not expect any private bugs. On occasion it happens that a bug gets
> > cloned from Red Hat Gluster Storage and it can contain customer details.
> > Sometimes those bugs are not cleaned during the cloning (BAD!) and keep
> > the references to details from customers.
> >
> > All bugs that we have in the GlusterFS product in bugzilla must be
> > public, and must have a public description of the reported problem.
> >
> > Niels
> >
> > ___
> > Gluster-infra mailing list
> > Gluster-infra@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-infra
> >
> 
> Is it worth denying all posts from RHGS and causing people to manually
> submit bugs?
> I realize that's using a hammer on an edge-case, but if someone is moving a
> bug to the gluster.org project, I feel like it's on them to scrub it.

I do not think it is possible to configure the 'clone' functionality in
Bugzilla to reject clones from one product (RHGS) to another
(GlusterFS). We could ask the Bugzilla team, I guess. It would be a
clean way of doing things, and should get us better quality bugs in the
Gluster community.

It definitely is the task of the person cloning the bug to

1. clean it from customer references and private data
2. make sure the correct component/sub-component is set
3. have an understandable problem description and other supportive data
4. .. all other standard Bug Triage points

Maybe it is good to repeat this towards RHGS Engineering + Management
again, if it happens regularly.

Niels



Re: [Gluster-infra] Please welcome Worker Ant

2016-08-22 Thread Niels de Vos
On Mon, Aug 22, 2016 at 11:16:15AM +0200, Michael Scherer wrote:
> Le lundi 22 août 2016 à 12:41 +0530, Nigel Babu a écrit :
> > Hello,
> > 
> > I've just switched the bugzilla authentication on Gerrit today. From today
> > onwards, Gerrit will comment on Bugzilla using a new account:
> > 
> > bugzilla-...@gluster.org
> > 
> > If you notice any issues, please file a bug against project-infrastructure.
> > This bot will only comment on public bugs, so if you open a review request 
> > when
> > the bug is private, it will fail. This is intended behavior.
> 
> You got me curious, in which case are private bugs required ?
> 
> Do we need to make sure that the review is private also or something ?

I do not expect any private bugs. On occasion it happens that a bug gets
cloned from Red Hat Gluster Storage and it can contain customer details.
Sometimes those bugs are not cleaned during the cloning (BAD!) and keep
the references to details from customers.

All bugs that we have in the GlusterFS product in bugzilla must be
public, and must have a public description of the reported problem.

Niels



Re: [Gluster-infra] Please welcome Worker Ant

2016-08-22 Thread Niels de Vos
On Mon, Aug 22, 2016 at 12:41:32PM +0530, Nigel Babu wrote:
> Hello,
> 
> I've just switched the bugzilla authentication on Gerrit today. From today
> onwards, Gerrit will comment on Bugzilla using a new account:
> 
> bugzilla-...@gluster.org

Welcome to our colony, Worker Ant!


Re: [Gluster-infra] The old gluster.org/community/documentation is still found by our users...

2016-07-22 Thread Niels de Vos
On Thu, Jul 21, 2016 at 01:56:30PM -0400, Kaleb KEITHLEY wrote:
> 
> I have archived the old community documention (in
> /root/old-community-documenation.tgz) and set up a redirect to our
> documentation on readthedocs.io.
> 

Thanks! It is good to see some progress here.

However we should improve the redirection in such a way that any URL
starting with gluster.org/community/documentation/ gets pointed to the
new site. There was a plan to map all pages to their new
locations, but that seems rather painful to do. A generic redirect would
still allow existing search results, bookmarks, or links on other pages
to reach the new docs.

Maybe Misc could put that in the webserver config? Or someone could
write an index.php (that was in all URLs anyway, right?) that does the
redirection.
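For the webserver option, a hedged sketch of what a catch-all rule could look like in the Apache configuration; the path prefix and target are assumptions based on the URLs mentioned above:

```apache
# Hypothetical fragment: permanently redirect every old wiki URL to the
# new documentation site; a per-page mapping could refine this later.
RedirectMatch 301 ^/community/documentation/.* http://gluster.readthedocs.io/
```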

Thanks,
Niels


> 
> On 07/21/2016 01:22 PM, Niels de Vos wrote:
> > On Thu, Jul 21, 2016 at 09:19:54AM -0700, Amye Scavarda wrote:
> >> Niels,
> >> This may be a problem with Google search showing up the /community site in
> >> search as well.
> >> Is that the issue?
> > 
> > Probably it is one of the reasons that users still read them. A BIG RED
> > header mentioning the new site would go a long way until it is
> > completely merged+removed.
> > 
> > Niels
> > 
> >> - amye
> >>
> >> On Thu, Jul 21, 2016 at 8:59 AM, Niels de Vos <nde...@redhat.com> wrote:
> >>
> >>> Hi,
> >>>
> >>> It seems the old wiki is still available, and there is no header that
> >>> redirects users to the new http://gluster.readthedocs.io/ site. Except
> >>> that it is the previous (graphical) design, it also contains no updated
> >>> information for the current versions and newer features. New users seem
> >>> to land at the old docs, and are not impressed...
> >>>
> >>> Amye was working with Sean(?) for a while to get it more in order. It
> >>> would be nice to see some results from that.
> >>>
> >>> Thanks,
> >>> Niels
> >>>
> >>
> >>
> >>
> >> -- 
> >> Amye Scavarda | a...@redhat.com | Gluster Community Lead
> >>
> >>
> >> ___
> >> Gluster-infra mailing list
> >> Gluster-infra@gluster.org
> >> http://www.gluster.org/mailman/listinfo/gluster-infra
> 
> 






Re: [Gluster-infra] The old gluster.org/community/documentation is still found by our users...

2016-07-21 Thread Niels de Vos
On Thu, Jul 21, 2016 at 09:19:54AM -0700, Amye Scavarda wrote:
> Niels,
> This may be a problem with Google search showing up the /community site in
> search as well.
> Is that the issue?

Probably it is one of the reasons that users still read them. A BIG RED
header mentioning the new site would go a long way until it is
completely merged+removed.

Niels

> - amye
> 
> On Thu, Jul 21, 2016 at 8:59 AM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > Hi,
> >
> > It seems the old wiki is still available, and there is no header that
> > redirects users to the new http://gluster.readthedocs.io/ site. Except
> > that it is the previous (graphical) design, it also contains no updated
> > information for the current versions and newer features. New users seem
> > to land at the old docs, and are not impressed...
> >
> > Amye was working with Sean(?) for a while to get it more in order. It
> > would be nice to see some results from that.
> >
> > Thanks,
> > Niels
> >
> 
> 
> 
> -- 
> Amye Scavarda | a...@redhat.com | Gluster Community Lead



Re: [Gluster-infra] Gerrit issues

2016-07-18 Thread Niels de Vos
On Mon, Jul 18, 2016 at 04:34:18PM +0530, Muthu Vigneshwaran wrote:
> Hi,
> 
> I am a new contributor to Glusterfs. I have been sending some patches.
> Sadly, it neither maps my name in the bugzilla(It says as "Anonymous
> coward")[1] nor suggest my name to be added as a reviewer in Gerrit. I have
> cloned the gluster repository from Gerrit with the same user name I have on
> Github[2]

I think you have to set your "Full Name" and email address in Gerrit as
well. After logging in, go to the settings:
  http://review.gluster.org/#/settings/contact

This might require you to logout/login after changing. Commenting on a
patch should show your name correctly after that.

HTH,
Niels


> 
> Here is the output of 'git config --list'
> 
> user.email=mvign...@redhat.com
> user.name=Muthu-vigneshwaran
> core.repositoryformatversion=0
> core.filemode=true
> core.bare=false
> core.logallrefupdates=true
> remote.origin.url=ssh://muthu-vigneshwa...@git.gluster.org/glusterfs.git
> remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
> branch.master.remote=origin
> branch.master.merge=refs/heads/master
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=789278  (Look on the recent
> comment)
> 
> [2] https://github.com/Muthu-vigneshwaran
> 
> 
> --
> Thanks,
> Muthu Vigneshwaran.



Re: [Gluster-infra] nbslave71.cloud.gluster.org can't clone git repo ? DNS issue

2016-07-17 Thread Niels de Vos
On Sun, Jul 17, 2016 at 06:29:10PM +0530, Nigel Babu wrote:
> Both netbsd0 and nbslave71 are likely to be running into the same problem -
> out of space :(

I've disabled nbslave71 because it causes all regression tests to fail.
Once the problem is fixed, please enable the system in the webui again.
  https://build.gluster.org/computer/nbslave71.cloud.gluster.org/

Thanks,
Niels

> 
> On Sun, Jul 17, 2016 at 5:59 PM, Kaleb Keithley  wrote:
> 
> >
> > And seems to be the only machine running netbsd regressions
> > ___
> > Gluster-infra mailing list
> > Gluster-infra@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-infra
> >
> 
> 
> 
> -- 
> nigelb

> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra




[Gluster-infra] git checkout on netbsd0.cloud.gluster.org hung for 15+ hours

2016-07-17 Thread Niels de Vos
I've aborted https://build.gluster.org/job/netbsd6-smoke/14852/console
because it was hung. We only have one netbsd slave running this smoke
test, so it was blocking the testing status of some patches.

Maybe a watchdog was added for other jobs, and this one was missed out?

Niels



[Gluster-infra] Adding more jobs to test Gluster regularly

2016-06-29 Thread Niels de Vos
[and yes, NOW with gluster-devel on CC, sorry for the duplicate]
[changed subject and added gluster-devel on CC]

On Wed, Jun 29, 2016 at 05:37:17AM +0530, Sankarshan Mukhopadhyay wrote:
> On Tue, Jun 28, 2016 at 9:45 PM, Niels de Vos <nde...@redhat.com> wrote:
> > Coincidentally I've asked Humble about the option to provide a container
> > (and maybe VM) image through the CentOS Storage SIG. Just as with the
> > packages, we should try to utilize the integration with different
> > distributions.
> 
> I agree. Containers (and even VM) images are build-time artifacts
> which we should produce and make available in a regular manner.
> 
> I have a follow-up question on the production of these artifacts -
> when do we check whether the RPMs or, the images produced are sane?
> For example, that the RPMs are packaged well and as per specifications
> ...

Well, the RPMs can have different guidelines depending on the
distribution they are made for. So the official RPMs are packaged in
Fedora [1], CentOS Storage SIG [2] and other distribution specific
repositories. The distributions are responsible for verification of the
packages they ship.

But, we could do something like this for the included .spec. The nightly
builds [3] can be consumed by other tests. It is possible to download
the RPMs and run "rpmlint" and other verification tools on them. It is
relatively straight forward to write a script that we can include as a
job in the CentOS CI [4]. The gluster_libgfapi-python script [5] can be
taken as an example.
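A minimal sketch of such a check, assuming the nightly builds publish a yum .repo file per branch under artifacts.ci.centos.org (the exact URL layout and repo file name are assumptions, not confirmed from the nightly site):

```shell
#!/bin/sh
# Sketch of a CentOS CI style job: fetch the nightly Gluster RPMs and run
# rpmlint over them. Intended for a throw-away CI machine (it writes to
# /etc/yum.repos.d and needs yum-utils and rpmlint installed).
set -e

# Build the (assumed) URL of the nightly .repo file for a given branch.
nightly_repo_url() {
    printf 'http://artifacts.ci.centos.org/gluster/nightly/%s.repo\n' "$1"
}

lint_nightly() {
    branch=$1
    workdir=$(mktemp -d)
    # Enable the nightly repository, then download and lint the packages.
    curl -sfo /etc/yum.repos.d/gluster-nightly.repo \
        "$(nightly_repo_url "$branch")"
    yumdownloader --destdir="$workdir" glusterfs glusterfs-server
    rpmlint "$workdir"/*.rpm
}

# Usage on a CI slave: lint_nightly master
if [ "${1:-}" ]; then lint_nightly "$1"; fi
```

The network-dependent part is kept in a function so the URL construction can be reused by other jobs.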

Anyone is more than welcome to send me scripts that I can wrap in the
needed CentOS CI Jenkins environment. Depending on the time that the
tests need, we can run them every night, week, or whatever. If needed we
can run them upon each patch submission, but we'll probably need to
request a higher resource limit in that case.

I guess you could request an RPM-verification test [6], but please add
some details on what tools you would like to see used. At the moment
there are very few contributors helping out with running the automated
tests, so do not have too high expectations for when someone gets
around to writing and testing a script from scratch.

Cheers,
Niels


1. http://pkgs.fedoraproject.org/cgit/glusterfs.git/
2. https://github.com/CentOS-Storage-SIG/glusterfs
3. http://artifacts.ci.centos.org/gluster/nightly/
4. https://ci.centos.org/view/Gluster/
5. https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/centos-ci/libgfapi-python/run-test.sh
6. https://github.com/gluster/glusterfs-patch-acceptance-tests/issues/new


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] jenkins-cli on build.gluster.org

2016-06-23 Thread Niels de Vos
On Wed, Jun 22, 2016 at 01:37:50PM +0530, Nigel Babu wrote:
> Hello,
> 
> If anyone has successfully got Jenkins CLI working on build.gluster.org,
> please let me know how to get it working? My attempts seem to have failed
> so far with this:

I tend to run it on build.gluster.org, something like this:

  $ java -jar jenkins-cli.jar -s http://localhost:8080/ help

HTH,
Niels
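A small wrapper around the command above, for repeated use on the master itself (which sidesteps the remote CLI-port timeout below); the JENKINS_URL default and jar location are assumptions:

```shell
#!/bin/sh
# Sketch: run the Jenkins CLI locally on the master, as suggested above.
# Both the URL and the jar path are assumed defaults, override as needed.
JENKINS_URL=${JENKINS_URL:-http://localhost:8080/}
CLI_JAR=${CLI_JAR:-jenkins-cli.jar}

jenkins_cli() {
    # Forward all arguments to the CLI, e.g. "jenkins_cli help".
    java -jar "$CLI_JAR" -s "$JENKINS_URL" "$@"
}
```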


> 
> 
> java.net.SocketTimeoutException: connect timed out
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at hudson.cli.CLI.connectViaCliPort(CLI.java:210)
> at hudson.cli.CLI.<init>(CLI.java:128)
> at
> hudson.cli.CLIConnectionFactory.connect(CLIConnectionFactory.java:72)
> at hudson.cli.CLI._main(CLI.java:479)
> at hudson.cli.CLI.main(CLI.java:390)
> Suppressed: java.io.EOFException: unexpected stream termination
> at
> hudson.remoting.ChannelBuilder.negotiate(ChannelBuilder.java:365)
> at hudson.remoting.Channel.<init>(Channel.java:437)
> at hudson.remoting.Channel.<init>(Channel.java:415)
> at hudson.remoting.Channel.<init>(Channel.java:411)
> at hudson.remoting.Channel.<init>(Channel.java:399)
> at hudson.remoting.Channel.<init>(Channel.java:390)
> at hudson.remoting.Channel.<init>(Channel.java:363)
> at hudson.cli.CLI.connectViaHttp(CLI.java:159)
> at hudson.cli.CLI.<init>(CLI.java:132)
> ... 3 more
> 
> 
> -- 
> nigelb

> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra



signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Access to Jenkins

2016-06-17 Thread Niels de Vos
On Thu, Jun 16, 2016 at 04:58:52PM +0530, Nigel Babu wrote:
> On Thu, Jun 16, 2016 at 12:29:24PM +0200, Niels de Vos wrote:
> > On Wed, Jun 15, 2016 at 09:14:16PM +0530, Nigel Babu wrote:
> > > Hello folks,
> > >
> > > We have 45 people who have access to Jenkins UI. Pretty much everyone 
> > > will be
> > > losing this access in the next couple of weeks.
> > >
> > > At the moment, I understand that access to the UI is essential for 
> > > configuring
> > > jobs. I’m going to change this in the near future. Jenkins Job Builder[1] 
> > > will
> > > talk to Jenkins to create/update jobs. Job information will be managed as 
> > > a
> > > collection of yaml files. If you want a new job, you can give us a pull 
> > > request
> > > with the correct format. The jobs will then be updated (probably via 
> > > Jenkins).
> > > You will then no longer need access to Jenkins to create or manage jobs. 
> > > In
> > > fact, editing the jobs in the UI will make the YAML files out of sync.
> > >
> > > Before we start this process, I’d love to know if you use your Jenkins 
> > > access.
> >
> > Initially it was only possible to start regression tests by logging into
> > Jenkins and running the job manually. There were fewer patches and the
> > duration of the job was much shorter too.
> >
> > Later on, we got more slaves and longer running regression tests.
> > Someone configured the Jenkins jobs to get triggered on each change that
> > was posted (now modified to only run after Verified=+1). This caused a
> > lot of weird side-effects, including tests failing more regularly for
> > unknown reasons. Retriggering a job was only possible by clicking the
> > "retrigger" link in the Jenkins webui (now we can do so with "recheck
> > ..." in Gerrit comments).
> >
> > Sometimes regression tests cause a fatal problem on a slave. The only
> > way to get the slave back, could be through a hard reboot. The reboot-vm
> > job in Jenkins made that possible. The job prevented people from
> > requiring a Rackspace account.
> >
> > Release tarballs are created through Jenkins. Anyone doing releases
> > should have their own Jenkins account to run the job.
> >
> > I think many of the maintainers should be able to run at least the
> > release job. Configuration of the jobs can be restricted, especially if
> > Jenkins Job Builder is used. Somewhere in the Contributors Guide there
> > should be a description on how to make changes to the Jenkins jobs. I've
> > never used JJB and definitely need some assistance with it.
> >   http://gluster.readthedocs.io/en/latest/Contributors-Guide/Index/
> >
> > HTH,
> > Niels
> >
> >
> > > If you do use it, please let me know off-list what you use it for.
> > >
> > > [1]: http://docs.openstack.org/infra/system-config/jjb.html
> > >
> > > --
> > > nigelb
> > > ___
> > > Gluster-infra mailing list
> > > Gluster-infra@gluster.org
> > > http://www.gluster.org/mailman/listinfo/gluster-infra
> 
> Hi Niels,
> 
> This is more complex than I thought, I won't be doing *anything* immediately.
> I'll take some time to get configuration right so we don't accidentally end up
> in a place where nobody can get work done.
> 
> I'm going to slightly change what I originally intended:
> 1. We will still go ahead with Jenkins Job Builder. It has reasonably good
>documentation, but I'll be available to help in case anyone is stuck. I 
> will
>convert the existing jobs into yaml format for jjb, so they will serve as 
> an
>example. I'll make sure our documentation has a section for how to write
>jobs. Context: JJB was written by Openstack infra team who also use Jenkins
>and Gerrit, so it shouldn't give us too much trouble.
> 2. The VM restart is an interesting problem that I didn't have insight into
>before. I'm happy to grant current developers access to have access to the
>restart job (We have to define "current developers" at some point).
>Eventually, I hope to make it less of a problem by restarting after every
>X number of jobs, restarting after a failure, or some such (we can talk
>about this later).
> 3. For release jobs, I'll similarly create a group who will have full access
>to that job. This way I won't be blocking work while we look at better
>options.
> 
> Does this solve our immediate concerns?

Sure! Those are not so much concerns, just an explanation of how
Jenkins is currently used :)

I think there are other Gluster-related projects (like libgfapi-python)
that run on build.gluster.org. I am not sure if their workflow matches
what I described above.

Cheers,
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Rackspace Account Removal

2016-06-16 Thread Niels de Vos
On Thu, Jun 16, 2016 at 09:45:34AM -0700, Amye Scavarda wrote:
> On Thu, Jun 16, 2016 at 8:40 AM, Niels de Vos <nde...@redhat.com> wrote:
> > On Thu, Jun 16, 2016 at 08:32:26PM +0530, Nigel Babu wrote:
> >> On Thu, Jun 16, 2016 at 07:42:29AM -0700, Amye Scavarda wrote:
> >> > On Thu, Jun 16, 2016 at 3:11 AM, Niels de Vos <nde...@redhat.com> wrote:
> >> > > On Wed, Jun 15, 2016 at 08:45:02PM -0700, Amye Scavarda wrote:
> >> > >> Hi all,
> >> > >>
> >> > >> In the spirit of cleaning out access to various things, we're also
> >> > >> taking a look at Rackspace. We've got a great many accounts on there
> >> > >> that probably aren't needed, so we're removing the bulk of them.
> >> > >>
> >> > >> If you're currently needing access to Rackspace and it's been removed,
> >> > >> please file a bug and assign it to me. We'll work through the access
> >> > >> level required.
> >> > >
> >> > > I do not need access, but by removing my account the associated API key
> >> > > probably has been revoked as well. That means /etc/rax-reboot.conf on
> >> > > build.gluster.org is not correct anymore. This file is used by the
> >> > > reboot-vm job we have in Jenkins.
> >> > >   https://build.gluster.org/job/reboot-vm/
> >> > >   
> >> > > https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/rax-reboot
> >> > >
> >> > > The VMs and tests seem to have been more stable recently, but it can be
> >> > > useful to reboot a Jenkins slave in case it has issues. This job 
> >> > > allowed
> >> > > anyone with a Jenkins account to take care of that.
> >> > >
> >> > > You probably want to replace the /etc/rax-reboot.conf symlink with
> >> > > username and API key from someone else.
> >> > >
> >> > > Thanks!
> >> > > Niels
> >> >
> >> > This is a really good point. We'll resolve this with working on not
> >> > having this be tied to a named (personal) account, because that's more
> >> > stable long term and makes our infra documentation easier to follow.
> >> > The issue is that we had a lot of accounts that are no longer active
> >> > with gluster.org, so I expect that we're going to run into more of
> >> > these small things.
> >> > Please respond if you know that there was an API key being used!
> >> >
> >> > - amye
> >> >
> >> >
> >> > --
> >> > Amye Scavarda | a...@redhat.com | Gluster Community Lead
> >> > ___
> >> > Gluster-infra mailing list
> >> > Gluster-infra@gluster.org
> >> > http://www.gluster.org/mailman/listinfo/gluster-infra
> >>
> >>
> >> Hi Niels,
> >>
> >> I've set up credentials for our bot user on the build server. I've
> >> tested it and it works.
> >
> > Great, thanks!
> > Niels
> 
> Anything else that we know was tied to personal Rackspace accounts?

Not that I know...

Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Rackspace Account Removal

2016-06-16 Thread Niels de Vos
On Thu, Jun 16, 2016 at 08:32:26PM +0530, Nigel Babu wrote:
> On Thu, Jun 16, 2016 at 07:42:29AM -0700, Amye Scavarda wrote:
> > On Thu, Jun 16, 2016 at 3:11 AM, Niels de Vos <nde...@redhat.com> wrote:
> > > On Wed, Jun 15, 2016 at 08:45:02PM -0700, Amye Scavarda wrote:
> > >> Hi all,
> > >>
> > >> In the spirit of cleaning out access to various things, we're also
> > >> taking a look at Rackspace. We've got a great many accounts on there
> > >> that probably aren't needed, so we're removing the bulk of them.
> > >>
> > >> If you're currently needing access to Rackspace and it's been removed,
> > >> please file a bug and assign it to me. We'll work through the access
> > >> level required.
> > >
> > > I do not need access, but by removing my account the associated API key
> > > probably has been revoked as well. That means /etc/rax-reboot.conf on
> > > build.gluster.org is not correct anymore. This file is used by the
> > > reboot-vm job we have in Jenkins.
> > >   https://build.gluster.org/job/reboot-vm/
> > >   
> > > https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/rax-reboot
> > >
> > > The VMs and tests seem to have been more stable recently, but it can be
> > > useful to reboot a Jenkins slave in case it has issues. This job allowed
> > > anyone with a Jenkins account to take care of that.
> > >
> > > You probably want to replace the /etc/rax-reboot.conf symlink with
> > > username and API key from someone else.
> > >
> > > Thanks!
> > > Niels
> >
> > This is a really good point. We'll resolve this with working on not
> > having this be tied to a named (personal) account, because that's more
> > stable long term and makes our infra documentation easier to follow.
> > The issue is that we had a lot of accounts that are no longer active
> > with gluster.org, so I expect that we're going to run into more of
> > these small things.
> > Please respond if you know that there was an API key being used!
> >
> > - amye
> >
> >
> > --
> > Amye Scavarda | a...@redhat.com | Gluster Community Lead
> > ___
> > Gluster-infra mailing list
> > Gluster-infra@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-infra
> 
> 
> Hi Niels,
> 
> I've set up credentials for our bot user on the build server. I've
> tested it and it works.

Great, thanks!
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Access to Jenkins

2016-06-16 Thread Niels de Vos
On Wed, Jun 15, 2016 at 09:14:16PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> We have 45 people who have access to Jenkins UI. Pretty much everyone will be
> losing this access in the next couple of weeks.
> 
> At the moment, I understand that access to the UI is essential for configuring
> jobs. I’m going to change this in the near future. Jenkins Job Builder[1] will
> talk to Jenkins to create/update jobs. Job information will be managed as a
> collection of yaml files. If you want a new job, you can give us a pull 
> request
> with the correct format. The jobs will then be updated (probably via Jenkins).
> You will then no longer need access to Jenkins to create or manage jobs. In
> fact, editing the jobs in the UI will make the YAML files out of sync.
> 
> Before we start this process, I’d love to know if you use your Jenkins access.

Initially it was only possible to start regression tests by logging into
Jenkins and running the job manually. There were fewer patches and the
duration of the job was much shorter too.

Later on, we got more slaves and longer running regression tests.
Someone configured the Jenkins jobs to get triggered on each change that
was posted (now modified to only run after Verified=+1). This caused a
lot of weird side-effects, including tests failing more regularly for
unknown reasons. Retriggering a job was only possible by clicking the
"retrigger" link in the Jenkins webui (now we can do so with "recheck
..." in Gerrit comments).

Sometimes regression tests cause a fatal problem on a slave. The only
way to get the slave back could be through a hard reboot. The reboot-vm
job in Jenkins made that possible. The job prevented people from
requiring a Rackspace account.

Release tarballs are created through Jenkins. Anyone doing releases
should have their own Jenkins account to run the job.

I think many of the maintainers should be able to run at least the
release job. Configuration of the jobs can be restricted, especially if
Jenkins Job Builder is used. Somewhere in the Contributors Guide there
should be a description on how to make changes to the Jenkins jobs. I've
never used JJB and definitely need some assistance with it.
  http://gluster.readthedocs.io/en/latest/Contributors-Guide/Index/

HTH,
Niels


> If you do use it, please let me know off-list what you use it for.
> 
> [1]: http://docs.openstack.org/infra/system-config/jjb.html
> 
> --
> nigelb
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Rackspace Account Removal

2016-06-16 Thread Niels de Vos
On Wed, Jun 15, 2016 at 08:45:02PM -0700, Amye Scavarda wrote:
> Hi all,
> 
> In the spirit of cleaning out access to various things, we're also
> taking a look at Rackspace. We've got a great many accounts on there
> that probably aren't needed, so we're removing the bulk of them.
> 
> If you're currently needing access to Rackspace and it's been removed,
> please file a bug and assign it to me. We'll work through the access
> level required.

I do not need access, but by removing my account the associated API key
probably has been revoked as well. That means /etc/rax-reboot.conf on
build.gluster.org is not correct anymore. This file is used by the
reboot-vm job we have in Jenkins.
  https://build.gluster.org/job/reboot-vm/
  
https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/rax-reboot

The VMs and tests seem to have been more stable recently, but it can be
useful to reboot a Jenkins slave in case it has issues. This job allowed
anyone with a Jenkins account to take care of that.

You probably want to replace the /etc/rax-reboot.conf symlink with
username and API key from someone else.

Thanks!
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] gstatus upstream has been moved to https://github.com/gluster/gstatus

2016-06-16 Thread Niels de Vos
Hi,

gstatus was located under the account of its creator, Paul Cuzner. It
has now been moved to https://github.com/gluster/gstatus so that the
Gluster Community can find the tool/project more quickly.

Both Paul and Sac are admins for this repository.

Cheers,
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Updating the email address of our Jenkins test user?

2016-06-14 Thread Niels de Vos
On Tue, Jun 14, 2016 at 02:47:47PM +0530, Nigel Babu wrote:
> On Mon, Jun 13, 2016 at 03:36:06PM +0200, Niels de Vos wrote:
> > Hi,
> >
> > when patches get tested by Jenkins jobs, the following tags are added to
> > the git commit message:
> >
> > Smoke: Gluster Build System <jenk...@build.gluster.com>
> > CentOS-regression: Gluster Build System <jenk...@build.gluster.com>
> > NetBSD-regression: NetBSD Build System <jenk...@build.gluster.org>
> >
> > I guess the email address should get updated to
> > jenk...@build.gluster.org just like the NetBSD regression uses. The
> > (Gerrit) user that has the wrong address is "build". A correction to
> > replace the gluster.com domain would be nice.
> >
> > Thanks,
> > Niels
> 
> I've added jenk...@build.gluster.org to the build account and made it the
> preferred email. The old one will be associated with this account in case
> anyone is still running a script with that account.

Great, thanks!


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] permission denied error in smoke test

2016-06-13 Thread Niels de Vos
On Sat, Jun 11, 2016 at 02:10:09PM +0530, Saravanakumar Arumugam wrote:
> Hi Niels,
> 
> On 06/10/2016 07:15 PM, Niels de Vos wrote:
> > On Fri, Jun 10, 2016 at 06:48:38PM +0530, Nigel Babu wrote:
> > >  From looking at IRC, this seems like legit bustage. There was a fix for
> > > this, which seems to be not backported for 3.7.
> > Indeed, these are the changes that we need merged for this issue to be
> > resolved:
> > 
> >
> > http://review.gluster.org/#/q/project:glusterfs+branch:release-3.7+topic:bug-1336137
> > 
> > http://review.gluster.org/14340 is the top level patch, and if you base
> > your 3.7 changes on that one the tests should pass. You can do this in
> > different ways, but I would do it like:
> > 
> >$ git checkout my-branch-for-3.7-backport
> >$ git fetch origin refs/changes/40/14340/2
> >$ git rebase FETCH_HEAD
> >$ git review --no-rebase --remote origin -t bug-0123456
> >(bug-0123456 should be the bug- for the 3.7 bug)
> Thanks!
> 
> I have tried the steps, but the last command failed saying "remote
> rejected".
> 
> ===
> sarumuga@gant glusterfs$ git fetch origin refs/changes/40/14340/2
> remote: Counting objects: 53, done
> remote: Finding sources: 100% (33/33)
> remote: Total 33 (delta 21), reused 29 (delta 21)
> Unpacking objects: 100% (33/33), done.
> From ssh://git.gluster.org/glusterfs
>  * branch            refs/changes/40/14340/2 -> FETCH_HEAD
> 
> sarumuga@gant glusterfs$ git status
> On branch release3.7_upgrade
> 
> sarumuga@gant glusterfs$ git branch
>   master
> * release3.7_upgrade
>   release3.8_upgrade
> 
> sarumuga@gant glusterfs$ git rebase FETCH_HEAD
> First, rewinding head to replay your work on top of it...
> Applying: glusterd/geo-rep: upgrade path when slave vol uuid involved
> 
> sarumuga@gant glusterfs$ git log
> commit 5653ffc8785661a655112c84d0fcad042c6bcfe2
> Author: Saravanakumar Arumugam <sarum...@redhat.com>
> Date:   Thu May 19 21:13:04 2016 +0530
> 
> glusterd/geo-rep: upgrade path when slave vol uuid involved
> 
> slave volume uuid is involved in identifying a geo-replication
> session.
> 
> This patch addresses upgrade path, where existing geo-rep session
> is gracefully upgraded to involve slave volume uuid.
> 
> Change-Id: Ib7ff5109b161592f24fc86fc7e93a407655fab86
> BUG: 1342453
> Reviewed-on: http://review.gluster.org/#/c/14425
> Signed-off-by: Saravanakumar Arumugam <sarum...@redhat.com>
> 
> commit 3d946a44df559ce1b9089983edb3465c61ba1ea0
> Author: Niels de Vos <nde...@redhat.com>
> Date:   Sat May 14 19:23:20 2016 +0200
> 
> configure: Prevent glupy installation outside $prefix
> 
> glupy was installed in the global path outside the prefix path,
> even if --prefix is passed.
> 
> ./configure --prefix=/usr/local
> make install
> 
> Expected:
> ${DESTDIR}${prefix}/lib64/python/site-packages/gluster
> Actual: ${DESTDIR}/usr/lib64/python/site-packages/gluster
> 
> 
> sarumuga@gant glusterfs$ git review --no-rebase --remote origin -t
> bug-1342453
> You are about to submit multiple commits. This is expected if you are
> submitting a commit that is dependent on one or more in-review
> commits. Otherwise you should consider squashing your changes into one
> commit before submitting.
> 
> The outstanding commits are:
> 
> 5653ffc (HEAD -> release3.7_upgrade) glusterd/geo-rep: upgrade path when
> slave vol uuid involved
> 3d946a4 configure: Prevent glupy installation outside $prefix
> 01cf67a build: Filter -D_FORTIFY_SOURCE from CFLAGS
> 235966b build: place glupy under $prefix while installing
> 37ac79f cluster/ec: Restrict the launch of replace brick heal
> 3a82ea3 libglusterfs: Even anonymous fds must have fd->flags set
> 
> Do you really want to submit the above commits?
> Type 'yes' to confirm, other to cancel: yes
> remote: Processing changes: refs: 1, done
> To ssh://saravanastoragenetw...@git.gluster.org/glusterfs
>  ! [remote rejected] HEAD -> refs/publish/master/bug-1342453 (change
> http://review.gluster.org/14425 closed)
> error: failed to push some refs to
> 'ssh://saravanastoragenetw...@git.gluster.org/glusterfs'
> sarumuga@gant glusterfs$

Hmm, it tried to push the change for the master branch. And that one is
already merged, so you can not update it.

Oh, even the version for r
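The "remote rejected ... closed" error above shows the push targeted refs/publish/master rather than the release-3.7 branch. A hedged sketch of pushing explicitly to a branch via Gerrit's refs/for/<branch>/<topic> convention, demonstrated against a local bare repository standing in for the Gerrit server (against the real server only the remote URL would change):

```shell
#!/bin/sh
# Demonstrates pushing a change for a specific branch using Gerrit's
# refs/for/<branch>/<topic> ref naming. A local bare repository stands in
# for git.gluster.org so the sketch is runnable.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/gerrit.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "geo-rep backport"
git remote add origin "$tmp/gerrit.git"
# Target release-3.7 explicitly instead of the default master:
git push -q origin HEAD:refs/for/release-3.7/bug-1342453
git ls-remote origin refs/for/release-3.7/bug-1342453
```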

Re: [Gluster-infra] Bash scripts for Jenkins jobs

2016-06-13 Thread Niels de Vos
On Mon, Jun 13, 2016 at 02:01:42PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> We define shell scripts inside the Jenkins job[1], which Jenkins allows you 
> to do
> for flexibility. The ideal workflow is that the xml for the job is updated
> every time the job is changed, but this does not always happen.
> 
> I'd like to propose that we split this script into a bash script. This means
> when changes are made, we're all aware of it and there's a code review for
> these changes.
> 
> What this means is that there will be the following files:
> 
> jenkins/job/rackspace-regression-2GB-triggered.xml
> jenkins/script/rackspace-regression-2GB-triggered.sh
> 
> The job will call its corresponding bash script.
> 
> I'm happy to do the initial conversion. What does everyone think about this
> change of process?

That works for me, it is similar to how we do the Jenkins jobs in the
CentOS CI. It makes it a little easier for others to contribute to the
jobs.

Thanks,
Niels
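The proposed split could leave only a thin wrapper inside the Jenkins job XML; a sketch, using the jenkins/script/ layout from Nigel's proposal (the job_script helper and REPO_DIR variable are hypothetical, not the actual glusterfs-patch-acceptance-tests layout):

```shell
#!/bin/bash
# Hypothetical wrapper kept inside the Jenkins job definition: it maps the
# job name to its reviewed script in the checked-out repo and executes it,
# so job logic changes go through code review instead of the Jenkins UI.
set -eu

REPO_DIR=${REPO_DIR:-glusterfs-patch-acceptance-tests}

# Map a Jenkins job name to its shell script path in the repository.
job_script() {
    printf 'jenkins/script/%s.sh' "$1"
}

run_job() {
    bash "$REPO_DIR/$(job_script "$1")"
}

# In the job XML, the whole <command> could then be just:
#   run_job "$JOB_NAME"
```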


> 
> 
> [1]: 
> https://github.com/gluster/glusterfs-patch-acceptance-tests/blob/master/jenkins/jobs/rackspace-regression-2GB-triggered.xml#L107
> 
> --
> nigelb
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Regression fails due to infra issue

2016-06-07 Thread Niels de Vos
On Tue, Jun 07, 2016 at 10:29:34AM +0200, Michael Scherer wrote:
> Le mardi 07 juin 2016 à 10:00 +0200, Michael Scherer a écrit :
> > Le mardi 07 juin 2016 à 09:54 +0200, Michael Scherer a écrit :
> > > Le lundi 06 juin 2016 à 21:18 +0200, Niels de Vos a écrit :
> > > > On Mon, Jun 06, 2016 at 09:59:02PM +0530, Nigel Babu wrote:
> > > > > On Mon, Jun 6, 2016 at 12:56 PM, Poornima Gurusiddaiah 
> > > > > <pguru...@redhat.com>
> > > > > wrote:
> > > > > 
> > > > > > Hi,
> > > > > >
> > > > > > There are multiple issues that we saw with regressions lately:
> > > > > >
> > > > > > 1. On certain slaves the regression fails during build and i see 
> > > > > > those on
> > > > > > slave26.cloud.gluster.org, slave25.cloud.gluster.org and may be 
> > > > > > others
> > > > > > also.
> > > > > > Eg:
> > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21422/console
> > > > > >
> > > > > 
> > > > > Are you sure this isn't a code breakage?
> > > > 
> > > > No, it really does not look like that.
> > > > 
> > > > This is an other one, it seems the testcase got killed for some reason:
> > > > 
> > > >   
> > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21459/console
> > > > 
> > > > It was running on slave25.cloud.gluster.org too... Is it possible that
> > > > there is some watchdog or other configuration checking for resources and
> > > > killing testcases on occasion? The number of slaves where this happens
> > > > seems limited, were these more recently installed/configured?
> > > 
> > > So dmesg speak of segfault in yum
> > > 
> > > yum[2711] trap invalid opcode ip:7f2efac38d60 sp:7ffd77322658 error:0 in
> > > libfreeblpriv3.so[7f2efabe6000+72000]
> > > 
> > > and
> > > https://access.redhat.com/solutions/2313911
> > > 
> > > That's exactly the problem.
> > > [root@slave25 ~]# /usr/bin/curl https://google.com
> > > Illegal instruction
> > > 
> > > I propose to remove the builder from rotation while we investigate.
> > 
> > Or we can:
> > 
> > export NSS_DISABLE_HW_AES=1
> > 
> > to work around, cf the bug listed on the article.
> > 
> > Not sure the best way to deploy that.
> 
> So we are testing the fix on slave25, and if that is what fixes the error,
> I will deploy it to all the Gluster builders and investigate the
> non-builder servers. That's only for RHEL 6/CentOS 6 on Rackspace.

If this does not work, configuring mock to use http (without the 's')
might be an option too. The export variable would probably need to get
set inside the mock chroot. It can possibly be done in
/etc/mock/site-defaults.cfg.

For the normal test cases, placing the environment variable (and maybe
NSS_DISABLE_HW_GCM=1 too?) in the global bashrc might be sufficient.
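A minimal sketch of the global bashrc-style workaround, assuming the variable names from the linked knowledge-base article (NSS_DISABLE_HW_GCM is the additional assumption raised above):

```shell
#!/bin/sh
# Sketch: disable the NSS hardware-accelerated AES/GCM code paths
# globally, e.g. from /etc/profile.d/nss-workaround.sh, so every login
# shell and test run inherits the workaround.
export NSS_DISABLE_HW_AES=1
export NSS_DISABLE_HW_GCM=1
```

For mock, something like `config_opts['environment']['NSS_DISABLE_HW_AES'] = '1'` in /etc/mock/site-defaults.cfg may achieve the same inside the chroot, but that is an untested assumption here.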

Good luck!
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Please install netstat on the Jenkins slaves for regression testing

2016-04-27 Thread Niels de Vos
On Wed, Apr 27, 2016 at 11:29:53AM +0200, Michael Scherer wrote:
> Le mercredi 27 avril 2016 à 11:04 +0200, Niels de Vos a écrit :
> > We have one test-case that uses netstat (tests/bugs/fuse/bug-924726.t).
> > When netstat is not installed, this testcase will not be run correctly.
> > 
> > Please merge/squash this change and apply it to the slaves:
> >   https://github.com/gluster/gluster.org_ansible_configuration/pull/1
> 
> So I did merged, then I figured that I should have tested and/or not
> pushed to the repo first.
> 
> But there is no netstat package and netstat is already in net-tools.

Hmm, well, I assumed the playbook was run on slave27... The patch that
explicitly tests for netstat failed here:

  https://build.gluster.org/job/rackspace-regression-2GB-triggered/20020/console

> So I need to investigate a bit more the problem.

Maybe it is a failure in how the existence of netstat is tested? I
remember something about NetBSD not supporting the --version option:

  http://review.gluster.org/#/c/13547/3/run-tests.sh

But, it seems that regression succeeded for NetBSD, so maybe I'm
remembering things incorrectly. I'll try to check it out another time;
it is not urgent.
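Since NetBSD's netstat may not support --version, a portable existence check avoids running the tool at all; a sketch (the require_tool helper is hypothetical, not the actual run-tests.sh code):

```shell
#!/bin/sh
# Portable tool-availability check for run-tests.sh style scripts:
# "command -v" only consults PATH, so it works even where the tool
# (like NetBSD's netstat) has no --version option to probe.
require_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        return 0
    fi
    echo "FATAL: required tool '$1' is missing" >&2
    return 1
}

# Example: require_tool netstat || exit 1
```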

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] [ovirt-users] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Niels de Vos
On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> @gluster infra  - FYI.
> 
> On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> >Hi,
> >The GlusterFS repository became unavailable this morning, as a result all
> >Jenkins jobs that use the repository will fail, the common error would be:
> >
> >
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> >[Errno 14] HTTP Error 403 - Forbidden
> >
> >
> >Also, installations of oVirt will fail.

I thought oVirt moved to using the packages from the CentOS Storage SIG?
In any case, automated tests should probably use those instead of the
packages on download.gluster.org. We're trying to minimize the work
packagers need to do, and get the glusterfs and other components in the
repositories that are provided by different distributions.

For more details, see the quickstart for the Storage SIG here:
  https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

HTH,
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] Please install netstat on the Jenkins slaves for regression testing

2016-04-27 Thread Niels de Vos
We have one test-case that uses netstat (tests/bugs/fuse/bug-924726.t).
When netstat is not installed, this testcase will not be run correctly.

Please merge/squash this change and apply it to the slaves:
  https://github.com/gluster/gluster.org_ansible_configuration/pull/1

Once done, we can re-run the regression tests for
http://review.gluster.org/13547 .

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] Returned slave34.cloud.gluster.org back to Jenkins

2016-04-26 Thread Niels de Vos
Hi,

I've just returned slave34.cloud.gluster.org back to Jenkins so that it
can run smoke/regression tests again. http://review.gluster.org/13331
has been rebased on top of my rpc/xdr generation cleanups and should
pass the tests later today.

Thanks,
Niels



Re: [Gluster-infra] regression machines reporting slowly ? here is the reason ...

2016-04-25 Thread Niels de Vos
On Mon, Apr 25, 2016 at 10:43:13AM +0200, Michael Scherer wrote:
> Le dimanche 24 avril 2016 à 15:59 +0200, Niels de Vos a écrit :
> > On Sun, Apr 24, 2016 at 04:22:55PM +0530, Prasanna Kalever wrote:
> > > On Sun, Apr 24, 2016 at 7:11 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> > > > On Sat, Apr 23, 2016 at 9:30 AM, Prasanna Kalever <pkale...@redhat.com> 
> > > > wrote:
> > > >> Hi all,
> > > >>
> > > >> Noticed our regression machines are reporting back really slow,
> > > >> especially CentOs and Smoke
> > > >>
> > > >> I found that most of the slaves are marked offline, this could be the
> > > >> biggest reasons ?
> > > >>
> > > >>
> > > >
> > > > Regression machines are scheduled to be offline if there are no active
> > > > jobs. I wonder if the slowness is related to LVM or related factors as
> > > > detailed in a recent thread?
> > > >
> > > 
> > > Sorry, the previous mail was sent incomplete (blame some Gmail shortcut)
> > > 
> > > Hi Vijay,
> > > 
> > > Honestly I was not aware of this case where the machines move to
> > > offline state by them self, I was only aware that they just go to idle
> > > state,
> > > Thanks for sharing that information. But we still need to reclaim most
> > > of machines, Here are the reasons why each of them are offline.
> > 
> > Well, slaves go into offline, and should be woken up when needed.
> > However it seems that Jenkins fails to connect to many slaves :-/
> > 
> > I've rebooted:
> > 
> >  - slave46
> >  - slave28
> >  - slave26
> >  - slave25
> >  - slave24
> >  - slave23
> >  - slave21
> > 
> > These all seem to have come up correctly after clicking the 'Launch
> > slave agent' button on the slave's status page.
> > 
> > Remember that anyone with a Jenkins account can reboot VMs. This most
> > often is sufficient to get them working again. Just go to
> > https://build.gluster.org/job/reboot-vm/ , log in and press some buttons.
> > 
> > One slave is in a weird status, maybe one of the tests overwrote the ssh
> > key?
> > 
> > [04/24/16 06:48:02] [SSH] Opening SSH connection to 
> > slave29.cloud.gluster.org:22.
> > ERROR: Failed to authenticate as jenkins. Wrong password. 
> > (credentialId:c31bff89-36c0-4f41-aed8-7c87ba53621e/method:password)
> > [04/24/16 06:48:04] [SSH] Authentication failed.
> > hudson.AbortException: Authentication failed.
> > at 
> > hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
> > at 
> > hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
> > at 
> > hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > at 
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> > at 
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:745)
> > [04/24/16 06:48:04] Launch failed - cleaning up connection
> > [04/24/16 06:48:05] [SSH] Connection closed.
> > 
> > Leaving slave29 as is, maybe one of our admins can have a look and see
> > if it needs reprovisioning.
> 
> Seems slave29 was reinstalled and/or slightly damaged, it was no longer
> in salt configuration, but I could connect as root. 
> 
> It should work better now, but please tell me if anything is incorrect
> with it.

Hmm, not really. Launching the Jenkins slave agent on it through the
web UI still fails in the same way:

  https://build.gluster.org/computer/slave29.cloud.gluster.org/log

Maybe the "jenkins" user on the slave has the wrong password?

Thanks,
Niels



Re: [Gluster-infra] https access with gerrit

2016-04-24 Thread Niels de Vos
On Sat, Apr 23, 2016 at 08:05:50PM -0400, Vijay Bellur wrote:
> On Sat, Apr 23, 2016 at 12:48 AM, Kaushal M  wrote:
> > You need to use the ssh:// URIs to push. The https:// URIs just allow git 
> > clone.
> >
> > The ssh:// URIs should be in the format
> > `ssh://@review.gluster.org/glusterfs.git`
> >
> 
> 
> That used to be the case a while back. git supports pushing over http
> for a while now and so does gerrit [1]. We need to determine why it is
> failing so with r.g.o.

I tried it, and pulling over https works just fine. When pushing
(./rfc.sh) over https, I get asked for a password (username in URL).
Pasting the generated password from
http://review.gluster.org/#/settings/http-password did not work, I got
this error:

remote: Unauthorized
fatal: Authentication failed for 
'https://nde...@review.gluster.org/glusterfs/'

It would be nice to have this working. Most of us should be able to use
ssh, but in some corporate environments firewalls can be pretty
restrictive.
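For reference, a minimal sketch of the URL shape Gerrit expects for
HTTPS pushes. Here "example" is a placeholder username; the password to
paste is the generated one from the #/settings/http-password page, not
an account password.

```shell
#!/bin/sh
# Sketch: build the HTTPS remote URL for pushing to Gerrit.
# "example" below is a placeholder username, not a real account.
gerrit_https_url() {
    user=$1; project=$2
    printf 'https://%s@review.gluster.org/%s' "$user" "$project"
}

gerrit_https_url example glusterfs; echo
```

With such a URL set via `git remote set-url`, a push of
`HEAD:refs/for/master` should then prompt for the generated HTTP
password.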

HTH,
Niels

> 
> -Vijay
> 
> [1] https://groups.google.com/forum/#!topic/repo-discuss/7BXB_t7cHs8
> 
> 
> >
> >
> > On Fri, Apr 22, 2016 at 11:47 PM, Vijay Bellur  wrote:
> >> Hey All,
> >>
> >> Whenever I try to push changes (via rfc.sh or git push) from a https clone
> >> of glusterfs repository, it fails as:
> >>
> >> remote: Unauthorized
> >> fatal: Authentication failed for 
> >> 'https://review.gluster.org/glusterfs.git/'
> >>
> >> Does this work successfully for anybody?
> >>
> >> Thanks,
> >> Vijay



Re: [Gluster-infra] regression machines reporting slowly ? here is the reason ...

2016-04-24 Thread Niels de Vos
On Sun, Apr 24, 2016 at 04:22:55PM +0530, Prasanna Kalever wrote:
> On Sun, Apr 24, 2016 at 7:11 AM, Vijay Bellur  wrote:
> > On Sat, Apr 23, 2016 at 9:30 AM, Prasanna Kalever  
> > wrote:
> >> Hi all,
> >>
> >> Noticed our regression machines are reporting back really slow,
> >> especially CentOs and Smoke
> >>
> >> I found that most of the slaves are marked offline, this could be the
> >> biggest reasons ?
> >>
> >>
> >
> > Regression machines are scheduled to be offline if there are no active
> > jobs. I wonder if the slowness is related to LVM or related factors as
> > detailed in a recent thread?
> >
> 
> Sorry, the previous mail was sent incomplete (blame some Gmail shortcut)
> 
> Hi Vijay,
> 
> Honestly I was not aware of this case where the machines move to
> offline state by them self, I was only aware that they just go to idle
> state,
> Thanks for sharing that information. But we still need to reclaim most
> of machines, Here are the reasons why each of them are offline.

Well, slaves go offline, and should be woken up when needed.
However, it seems that Jenkins fails to connect to many slaves :-/

I've rebooted:

 - slave46
 - slave28
 - slave26
 - slave25
 - slave24
 - slave23
 - slave21

These all seem to have come up correctly after clicking the 'Launch
slave agent' button on the slave's status page.

Remember that anyone with a Jenkins account can reboot VMs. This most
often is sufficient to get them working again. Just go to
https://build.gluster.org/job/reboot-vm/ , log in and press some buttons.

One slave is in a weird status, maybe one of the tests overwrote the ssh
key?

[04/24/16 06:48:02] [SSH] Opening SSH connection to 
slave29.cloud.gluster.org:22.
ERROR: Failed to authenticate as jenkins. Wrong password. 
(credentialId:c31bff89-36c0-4f41-aed8-7c87ba53621e/method:password)
[04/24/16 06:48:04] [SSH] Authentication failed.
hudson.AbortException: Authentication failed.
at 
hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[04/24/16 06:48:04] Launch failed - cleaning up connection
[04/24/16 06:48:05] [SSH] Connection closed.

Leaving slave29 as is, maybe one of our admins can have a look and see
if it needs reprovisioning.

Cheers,
Niels

> 
> 
> CentOS slaves: only 2 of 14 slaves are online [1]
> 
> slave20.cloud.gluster.org (online)
> slave21.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave22.cloud.gluster.org (online)
> slave23.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave24.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave25.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave26.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
> rastar taking this down for pranith. Needed for debugging with tar
> issue.  Apr 20, 2016 3:44:14 AM]
> slave28.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave29.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> 
> slave32.cloud.gluster.org [Offline Reason: idle]
> slave33.cloud.gluster.org [Offline Reason: idle]
> slave34.cloud.gluster.org [Offline Reason: idle]
> 
> slave46.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> 
> 
> 
> 
> Smoke slaves: only 2 of 15 slaves are online [2]
> 
> slave20.cloud.gluster.org (online)
> slave21.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave22.cloud.gluster.org (online)
> slave23.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave24.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave25.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> slave26.cloud.gluster.org [Offline Reason: This node is offline
> because Jenkins failed to launch the slave agent on it.]
> 

Re: [Gluster-infra] [Gluster-devel] freebsd-smoke failures

2016-04-04 Thread Niels de Vos
On Sat, Apr 02, 2016 at 11:04:48AM -0400, Jeff Darcy wrote:
> > Please make sure that this change also gets included in the repository:
> > 
> >   https://github.com/gluster/glusterfs-patch-acceptance-tests
> 
> Looks like we're getting a bit of a queue there.  Who can merge some of
> these?

I normally leave that for the people that maintain the test
infrastructure. Raghavendra Talur, Kaushal and MS are the ones that
should merge most of these changes.

I can help with the ones that are related to tests in the CentOS CI :)

Niels



Re: [Gluster-infra] [Gluster-devel] freebsd-smoke failures

2016-04-02 Thread Niels de Vos
On Sat, Apr 02, 2016 at 07:53:32AM -0400, Jeff Darcy wrote:
> > IIRC, this happens because in the build job use "--enable-bd-xlator"
> > option while configure
> 
> I came to the same conclusion, and set --enable-bd-xlator=no on the
> slave.  I also had to remove -Werror because that was also causing
> failures.  FreeBSD smoke is now succeeding.

Please make sure that this change also gets included in the repository:

  https://github.com/gluster/glusterfs-patch-acceptance-tests

I always thought the bd-xlator was Linux only because it depends on LVM.
Does anyone have an idea why this suddenly(?) got enabled on FreeBSD?

Thanks,
Niels



Re: [Gluster-infra] Creation of Clang Job in Upstream Jenkins

2016-03-29 Thread Niels de Vos
On Tue, Mar 29, 2016 at 03:47:16PM +0530, Prasanna Kalever wrote:
> On Tue, Mar 29, 2016 at 3:28 PM, Niels de Vos <nde...@redhat.com> wrote:
> > On Tue, Mar 29, 2016 at 12:01:59PM +0530, Prasanna Kalever wrote:
> >> On Tue, Mar 22, 2016 at 6:12 PM, Michael Scherer <msche...@redhat.com> 
> >> wrote:
> >> > Le mardi 22 mars 2016 à 15:48 +0530, Prasanna Kalever a écrit :
> >> >> Hello,
> >> >>
> >> >> We have integrated a clang checker job in the local Jenkins
> >> >> 10.70.41.41, thanks to Raghavendra Talur for sitting with me.
> >> >>
> >> >> Can someone grant me access to the upstream Jenkins, so that I can
> >> >> replicate the clang job there?
> >> >> As part of this, I request installing the clang-analyzer.noarch
> >> >> package on all the slaves.
> >> >
> >> > I assume you mean "all the Linux slave", since I am not sure there is
> >> > such package for netbsd and freebsd ?
> >> >
> >> > I pushed the change on salt and ansible.
> >>
> >> Thanks misc,
> >> I am planning to use Centos slaves, so your assumption is right.
> >>
> >>
> >> Can some one grant me credentials of upstream Jenkins please?
> >> I am waiting for this...
> >
> > I might be late to the conversation...
> >
> > Should we run this on our Gluster slaves in our already
> > difficult-to-maintain Jenkins infra, or should this be a (scheduled)
> > job that runs on the machines in the CentOS CI?
> >
> > Examples of jobs that we currently have in the CentOS CI:
> >
> >  - https://ci.centos.org/view/Gluster/
> >  - 
> > https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/centos-ci
> >
> > We can also trigger on patch submissions, but if we do that, we need to
> > decide how the reporting back to Gerrit should be done. Would it require
> > a new label?
> 
> Hi Kaushal, Vishwanath Bhat,
> 
> I would like to do this as per Niels suggestions i.e. on CentOS CI,
> 
> As part of this I need your help in doing the following:
> 
> 1. Creating a new label "Clang-Check" in gerrit
> 2. Adding ssh keys in CentOS CI side slaves
> 3. Triggering part of the clang JOB

You can already write a script that checks out the change from Gerrit
and runs the tests. This script runs on a cleanly installed CentOS
machine, so it will need to install any dependencies for building with
clang too. See the libgfapi test-case in the link above for an example.

Send the script as a pull-request to the GitHub repository, and we'll be
able to put that in a Jenkins job in the CentOS CI. After a few
test-runs, we can then enable the reporting back to Gerrit (assuming the
label has been created).
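A rough sketch of what such a CentOS CI job script could look like. This
is a job-definition fragment only: every package name, path and the
scan-build invocation are assumptions modelled loosely on the libgfapi
job mentioned above, not an existing job.

```shell
#!/bin/sh -e
# Sketch of a CentOS CI clang job (all names and paths are assumptions).
# The machine is freshly installed, so build dependencies come first.
yum -y install clang clang-analyzer git autoconf automake libtool \
    flex bison openssl-devel make

git clone https://review.gluster.org/glusterfs
cd glusterfs
# For per-patch triggering, the change under review would be fetched
# here, e.g.:
# git fetch origin "$GERRIT_REFSPEC" && git checkout FETCH_HEAD

./autogen.sh
scan-build ./configure --enable-debug
scan-build -o clang-scan-results make -j2
```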

Niels



Re: [Gluster-infra] Creation of Clang Job in Upstream Jenkins

2016-03-29 Thread Niels de Vos
On Tue, Mar 29, 2016 at 12:01:59PM +0530, Prasanna Kalever wrote:
> On Tue, Mar 22, 2016 at 6:12 PM, Michael Scherer  wrote:
> > Le mardi 22 mars 2016 à 15:48 +0530, Prasanna Kalever a écrit :
> >> Hello,
> >>
> >> We have integrated a clang checker job in the local Jenkins
> >> 10.70.41.41, thanks to Raghavendra Talur for sitting with me.
> >>
> >> Can someone grant me access to the upstream Jenkins, so that I can
> >> replicate the clang job there?
> >> As part of this, I request installing the clang-analyzer.noarch
> >> package on all the slaves.
> >
> > I assume you mean "all the Linux slave", since I am not sure there is
> > such package for netbsd and freebsd ?
> >
> > I pushed the change on salt and ansible.
> 
> Thanks misc,
> I am planning to use Centos slaves, so your assumption is right.
> 
> 
> Can some one grant me credentials of upstream Jenkins please?
> I am waiting for this...

I might be late to the conversation...

Should we run this on our Gluster slaves in our already
difficult-to-maintain Jenkins infra, or should this be a (scheduled)
job that runs on the machines in the CentOS CI?

Examples of jobs that we currently have in the CentOS CI:

 - https://ci.centos.org/view/Gluster/
 - 
https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/centos-ci

We can also trigger on patch submissions, but if we do that, we need to
decide how the reporting back to Gerrit should be done. Would it require
a new label?

Niels



[Gluster-infra] Signing in on review.g.o returns a "Server Error"

2016-03-27 Thread Niels de Vos
Hi,

I'm trying to sign in on review.gluster.org, but when I click the
"Sign-in with GitHub" link, I get an almost empty page with

Server Error

on it.

It would be nice if someone can fix that :)

Thanks!
Niels



Re: [Gluster-infra] Core pattern change in centos regression machines

2016-03-15 Thread Niels de Vos
On Tue, Mar 15, 2016 at 02:59:35AM -0400, Prasanna Kumar Kalever wrote:
> On Tuesday, March 15, 2016 10:16:06 AM, Niels de Vos wrote:
> > On Mon, Mar 14, 2016 at 09:04:04AM -0400, Prasanna Kumar Kalever wrote:
> > > Hi,
> > > 
> > > As part of dumping back-trace of core files in console log of jenkins
> > > we need executable name, finding executable name from the corefile is
> > > not common across platfoms, hence lets make the executable name as
> > > part of corefile name in all Centos jenkin slaves similar to netbsd
> > > slaves.
> > > 
> > > Currently Core pattern in Netbsd slaves is
> > > # /sbin/sysctl -n kern.defcorename
> > > /%n-%p.core
> > > 
> > > Lets set same in Centos slaves as well
> > > # /sbin/sysctl -w kernel.core_pattern="/%e-%p.core"
> > > 
> > > Once the above changes are done in CentOs machines, I shall push a
> > > patch to glusterfs-patch-acceptance-tests making changes needed in
> > > regression.sh
> > 
> > This most likely needs to be run by the regression test itself. Or at
> > least have the check in the main script starting the regressions. On
> > CentOS and other Fedora/RHEL based systems, we should probably integrate
> > with abrt instead of writing our own solution? abrt generates everything
> > we need, and it matches much more closely to what users have on their
> > systems. It has a huge advantage to use existing tools so that
> > developers and users can apply standard knowledge of the OS.
> > 
> > Not sure how NetBSD and FreeBSD do this, I do not think they have abrt.
> 
> Just verified whether this package is installed in one of our Netbsd slaves,
> it was not installed, also I don't see it even in
> ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/7.0_2015Q4/All/
> 
> For now lets dump the core with our solution as stated above.

How do you address the running of this on systems that developers use
for their testing? I normally have my systems configured to capture
cores with abrt, and that is the default for Fedora, RHEL and CentOS
installations too. To me it is important that we do not break this
behaviour when we run the regression tests.

Thanks,
Niels



Re: [Gluster-infra] Core pattern change in centos regression machines

2016-03-14 Thread Niels de Vos
On Mon, Mar 14, 2016 at 09:04:04AM -0400, Prasanna Kumar Kalever wrote:
> Hi,
> 
> As part of dumping back-trace of core files in console log of jenkins
> we need executable name, finding executable name from the corefile is
> not common across platfoms, hence lets make the executable name as
> part of corefile name in all Centos jenkin slaves similar to netbsd
> slaves.
> 
> Currently Core pattern in Netbsd slaves is
> # /sbin/sysctl -n kern.defcorename
> /%n-%p.core
> 
> Lets set same in Centos slaves as well
> # /sbin/sysctl -w kernel.core_pattern="/%e-%p.core"
> 
> Once the above changes are done in CentOs machines, I shall push a
> patch to glusterfs-patch-acceptance-tests making changes needed in
> regression.sh

This most likely needs to be run by the regression test itself. Or at
least have the check in the main script starting the regressions. On
CentOS and other Fedora/RHEL based systems, we should probably integrate
with abrt instead of writing our own solution? abrt generates everything
we need, and it matches much more closely to what users have on their
systems. It has a huge advantage to use existing tools so that
developers and users can apply standard knowledge of the OS.

Not sure how NetBSD and FreeBSD do this, I do not think they have abrt.
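To illustrate what the proposed pattern produces, here is a small sketch
(the helper name is made up) that expands %e/%p the way the kernel does
when it writes a core file:

```shell
#!/bin/sh
# Sketch: expand a core_pattern template as the kernel would.
# %e = executable name, %p = pid. expand_core_name is a made-up helper.
expand_core_name() {
    pattern=$1; exe=$2; pid=$3
    printf '%s\n' "$pattern" | sed -e "s/%e/$exe/g" -e "s/%p/$pid/g"
}

expand_core_name '/%e-%p.core' glusterfsd 1234
# -> /glusterfsd-1234.core
```

With the pattern set, a crashing glusterfsd process would leave a core
file whose name already carries the executable name, which is what the
back-trace dumping in regression.sh needs.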

Niels



Re: [Gluster-infra] [Gluster-devel] Adding Apache mod_proxy_gluster to github.com/gluster ?

2016-03-12 Thread Niels de Vos
On Sat, Mar 12, 2016 at 06:24:41AM +0100, Niels de Vos wrote:
> Hi David!
> 
> On Freenode IRC in #gluster someone posted a link to your
> mod_proxy_gluster repository on GitHub. I was not aware that someone
> took the effort to improve my stalled weekend project. Many thanks for
> doing that!
> 
> If you do not object, we can make mod_proxy_gluster a project under the
> Gluster organisation. It makes it a little easier for people to find it
> there, and we may get some additional contributions.
> 
> What distribution are you using? I would like to add packages with the
> module and default configuration file to Fedora and the CentOS Storage
> SIG. Would you be interested in that too?
> 
> Cheers,
> Niels

After a few email exchanges with David off-list, we moved his updated
repository under the Gluster organisation in GitHub:
  https://github.com/gluster/mod_proxy_gluster

It would be great if we can get this added to some of the distributions.
I'll probably get it packaged for the CentOS Storage SIG at some point
too, unless someone beats me to it.

Niels



[Gluster-infra] Added Raghavendra Talur as committer to glusterfs-patch-acceptance-tests on GitHub

2016-03-11 Thread Niels de Vos
Hi,

Raghavendra Talur is one of the maintainers of the test framework in our
glusterfs repository. I have now added him as a contributor to the
glusterfs-patch-acceptance-tests repository on GitHub. This should make
it easier for people to send improvements to the CentOS CI jobs that
will get added over the next few weeks.

Cheers,
Niels



Re: [Gluster-infra] qcow2 images / jenkins machines

2016-03-10 Thread Niels de Vos
On Thu, Mar 10, 2016 at 07:09:29PM -0500, Dan Lambright wrote:
> 
> 
> - Original Message -
> > From: "Michael Scherer" 
> > To: "Dan Lambright" 
> > Cc: "gluster-infra" 
> > Sent: Thursday, March 10, 2016 2:10:10 PM
> > Subject: Re: [Gluster-infra] qcow2 images / jenkins machines
> > 
> > Le mercredi 09 mars 2016 à 11:49 -0500, Dan Lambright a écrit :
> > > I would like to load onto openstack qcow2 images and profiles most closely
> > > resembling what we run in jenkins, to recreate problems seen in
> > > regression.
> > > 
> > > What are the OS levels do we use on Jenkins for testing? I see:
> > > 
> > > netbsd7
> > > 
> > > centos6
> > > 
> > > fedora22
> > > 
> > > and "nbslave", RHEL7 ?
> > > 
> > > This is a very cursory look. How much memory and cores are these images
> > > given?
> > > 
> > > If anyone knows the answer to these questions, response is appreciated.
> > 
> > so we have mostly centos 6 and netbsd 7, running on rackspace
> > 
> > The centos 6 are
> > 2g of ram, 2 core
> > model name  : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
> > 
> > Fedora 22 is just for smoke tests.
> > There is another CentOS machine, which was here to validate the automated
> > deployment, and a FreeBSD server for smoke tests.
> > 
> > We do not use image based deployment, but mostly salt, using this
> > states :
> > https://github.com/gluster/gluster.org_salt_states/blob/master/jenkins/slave.sls
> > 
> > We will be converting this to ansible in the coming month, and
> > Raghavendra Talur was working on a vagrant/ansible setup for testing too
> > (hope we can unify both at some time in the future too)
> > 
> > No RHEL/Centos 7 yet, but we should.
> 
> I think RHEL/Centos7 would be a good thing from the point of view of testing.

Indeed. And coincidentally I spoke with Kaushal and MS about it
yesterday. We plan to setup a shadow job in the CentOS CI that runs
regression tests on CentOS 7. It will not start voting in Gerrit
immediately, we'll need to see if it runs stable enough for that first.
This makes it easy to try out the amazing infrastructure that the CentOS
team provides for us, and see how our tests function on CentOS 7.

Hopefully more details about this follow soon.

Niels



Re: [Gluster-infra] Please install the net-tools package on our Jenkins slaves

2016-03-09 Thread Niels de Vos
On Wed, Mar 09, 2016 at 02:25:38PM +0100, Michael Scherer wrote:
> Le mercredi 09 mars 2016 à 12:19 +0100, Niels de Vos a écrit :
> > On Tue, Mar 01, 2016 at 06:40:16PM +0100, Michael Scherer wrote:
> > > Le mardi 01 mars 2016 à 14:20 +0100, Niels de Vos a écrit :
> > > > On Tue, Mar 01, 2016 at 01:41:15PM +0100, Michael Scherer wrote:
> > > > > Le mardi 01 mars 2016 à 12:12 +0100, Niels de Vos a écrit :
> > > > > > Hi,
> > > > > > 
> > > > > > on at least some of the slaves there is no netstat executable
> > > > > > available.
> > > > > > This causes some tests to fail. 'netstat' on Fedora/RHEL/CentOS 
> > > > > > comes
> > > > > > from the net-tools RPM. Could you please add that to the list of
> > > > > > packages that are installed on the slaves?
> > > > > 
> > > > > Yup.
> > > > > Only on the Linux ones, or also on freebsd/netbsd ?
> > > > 
> > > > We only run all regression tests on Linux (for now). NetBSD does not
> > > > seem to complain that netstat is missing, so it is either there, or the
> > > > check is skipped somehow.
> > > 
> > > So it was pushed on salt, but then I found out that the jenkins queue
> > > was starting to fill up, and then meeting marathon started.
> > > 
> > > So ping me if it doesn't work fine :)
> > > (the change were not pushed to the salt github repo for a reason I do
> > > not understand...)
> > 
> > Hmm, this does not seem to have been applied to the slaves yet? Todays
> > re-run of the tests failed because netstat is not available:
> > 
> >   
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/18966/console
> 
> $ ssh r...@slave33.cloud.gluster.org rpm -q net-tools
> net-tools-1.60-110.el6_2.x86_64
> 
> And it was installed since a long time:
> $ ssh r...@slave33.cloud.gluster.org rpm -qi net-tools
> Name         : net-tools
> Relocations  : (not relocatable)
> Version      : 1.60
> Vendor       : CentOS
> Release      : 110.el6_2
> Build Date   : Thu 10 May 2012 08:17:33 AM UTC
> Install Date : Fri 20 Mar 2015 07:34:49 PM UTC
> Build Host   : c6b5.bsys.dev.centos.org
> Group        : System Environment/Base
> Source RPM   : net-tools-1.60-110.el6_2.src.rpm
> Size         : 778085
> License      : GPL+
> Signature    : RSA/SHA1, Thu 10 May 2012 10:02:47 AM UTC, Key ID 0946fca2c105b9de
> Packager     : CentOS BuildSystem <http://bugs.centos.org>
> URL          : http://net-tools.berlios.de/
> Summary      : Basic networking tools
> Description :
> The net-tools package contains basic networking tools,
> including ifconfig, netstat, route, and others.
> Most of them are obsolete. For replacement check iproute package.
> 
> And netstat can be run. 

Thanks for checking!

It seems that "netstat --version" on CentOS-6 returns 5, whereas it
returns 0 (success) on Fedora and NetBSD. Who would have expected that!?
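A small sketch of the difference. Since the exit status depends on the
platform, it is printed rather than assumed:

```shell
#!/bin/sh
# Sketch: "netstat --version" can exit non-zero (5 on CentOS 6
# net-tools) even when netstat works fine, so probing for existence
# with command -v is more reliable than checking that exit status.
netstat --version >/dev/null 2>&1
echo "version probe exit status: $?"

command -v netstat >/dev/null 2>&1 && echo "netstat found" || echo "netstat not found"
```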

Niels



Re: [Gluster-infra] Need merge access to Gluster repo

2016-03-09 Thread Niels de Vos
On Wed, Mar 09, 2016 at 04:53:24PM +0530, Vijaikumar Mallikarjuna wrote:
> On Wed, Mar 9, 2016 at 4:47 PM, Niels de Vos <nde...@redhat.com> wrote:
> 
> > On Wed, Mar 09, 2016 at 11:15:06AM +0530, Vijaikumar Mallikarjuna wrote:
> > > Hi,
> > >
> > > I will be the maintainer for quota and marker component and the same is
> > > updated in the Maintainer's List.
> > > Could you please provide merge access to the Gluster repo?
> >
> > Please make sure you are very familiar with these guidelines:
> >
> >
> > http://gluster.readthedocs.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
> >
> > http://gluster.readthedocs.org/en/latest/Developer-guide/Backport-Guidelines/
> >
> > Maintainers are also expected to keep track of the bugs that users file.
> > Different ways to get notifications about new bugs are listed here:
> >
> >
> Thank you for sharing these documents :)

Oh, *and* subscribe to the maintainers list by sending an email to
maintainers-requ...@gluster.org with subject "subscribe". It would be
great if you can send a pull request for the guidelines for maintainers
too.

Thanks,
Niels (who's approving the posts from non-members to the list)



Re: [Gluster-infra] Please install the net-tools package on our Jenkins slaves

2016-03-09 Thread Niels de Vos
On Tue, Mar 01, 2016 at 06:40:16PM +0100, Michael Scherer wrote:
> Le mardi 01 mars 2016 à 14:20 +0100, Niels de Vos a écrit :
> > On Tue, Mar 01, 2016 at 01:41:15PM +0100, Michael Scherer wrote:
> > > Le mardi 01 mars 2016 à 12:12 +0100, Niels de Vos a écrit :
> > > > Hi,
> > > > 
> > > > on at least some of the slaves there is no netstat executable available.
> > > > This causes some tests to fail. 'netstat' on Fedora/RHEL/CentOS comes
> > > > from the net-tools RPM. Could you please add that to the list of
> > > > packages that are installed on the slaves?
> > > 
> > > Yup.
> > > Only on the Linux ones, or also on freebsd/netbsd ?
> > 
> > We only run all regression tests on Linux (for now). NetBSD does not
> > seem to complain that netstat is missing, so it is either there, or the
> > check is skipped somehow.
> 
> So it was pushed on salt, but then I found out that the jenkins queue
> was starting to fill up, and then meeting marathon started.
> 
> So ping me if it doesn't work fine :)
> (the change were not pushed to the salt github repo for a reason I do
> not understand...)

Hmm, this does not seem to have been applied to the slaves yet? Todays
re-run of the tests failed because netstat is not available:

  https://build.gluster.org/job/rackspace-regression-2GB-triggered/18966/console

Thanks,
Niels



Re: [Gluster-infra] Need merge access to Gluster repo

2016-03-09 Thread Niels de Vos
On Wed, Mar 09, 2016 at 11:15:06AM +0530, Vijaikumar Mallikarjuna wrote:
> Hi,
> 
> I will be the maintainer for the quota and marker components, and the same is
> updated in the Maintainers' List.
> Could you please provide merge access to the Gluster repo?

Please make sure you are very familiar with these guidelines:

  http://gluster.readthedocs.org/en/latest/Contributors-Guide/Guidelines-For-Maintainers/
  http://gluster.readthedocs.org/en/latest/Developer-guide/Backport-Guidelines/

Maintainers are also expected to keep track of the bugs that users file.
Different ways to get notifications about new bugs are listed here:

  https://github.com/gluster/glusterdocs/blob/master/Developer-guide/Bugzilla%20Notifications.md
  (I can't find it on gluster.readthedocs.org, could someone send a
   patch for that?)

Cheers,
Niels



Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping core on Linux

2016-03-08 Thread Niels de Vos
On Tue, Mar 08, 2016 at 07:00:07PM +0530, Krutika Dhananjay wrote:
> I did talk to the author and he is going to look into the issue.
> 3.7.9 is round the corner and we certainly don't want bad tests to block
> patches that need to go in.

3.7.9 has been delayed already. There is hardly a good reason for
changes to delay the release even more. 3.7.10 will be done in a few
weeks at the end of March; anything that is not a regression fix should
probably not be included at this point anymore.

Vijay is the release manager for 3.7.9 and .10; you'll need to come up
with extremely strong arguments for getting more patches included.

Niels


> 
> -Krutika
> 
> On Tue, Mar 8, 2016 at 6:50 PM, Dan Lambright  wrote:
> 
> >
> >
> > - Original Message -
> > > From: "Krutika Dhananjay" 
> > > To: "Pranith Karampuri" 
> > > Cc: "gluster-infra" , "Gluster Devel" <
> > gluster-de...@gluster.org>, "RHGS tiering mailing
> > > list" , "Dan Lambright" 
> > > Sent: Tuesday, March 8, 2016 12:15:20 AM
> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t dumping
> > core on Linux
> > >
> > > It has been failing rather frequently.
> > > Have reported a bug at
> > https://bugzilla.redhat.com/show_bug.cgi?id=1315560
> > > For now, have moved it to bad tests here:
> > > http://review.gluster.org/#/c/13632/1
> > >
> >
> >
> > Masking tests is a bad habit. It would be better to fix the problem, and
> > it looks like a real bug.
> > The author of the test should help chase this down.
> >
> > > -Krutika
> > >
> > > On Mon, Mar 7, 2016 at 4:17 PM, Krutika Dhananjay 
> > > wrote:
> > >
> > > > +Pranith
> > > >
> > > > -Krutika
> > > >
> > > >
> > > > On Sat, Mar 5, 2016 at 11:34 PM, Dan Lambright 
> > > > wrote:
> > > >
> > > >>
> > > >>
> > > >> - Original Message -
> > > >> > From: "Dan Lambright" 
> > > >> > To: "Shyam" 
> > > >> > Cc: "Krutika Dhananjay" , "Gluster Devel" <
> > > >> gluster-de...@gluster.org>, "Rafi Kavungal Chundattu
> > > >> > Parambil" , "Nithya Balachandran" <
> > > >> nbala...@redhat.com>, "Joseph Fernandes"
> > > >> > , "gluster-infra" 
> > > >> > Sent: Friday, March 4, 2016 9:51:18 AM
> > > >> > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > > >> dumping core on Linux
> > > >> >
> > > >> >
> > > >> >
> > > >> > - Original Message -
> > > >> > > From: "Shyam" 
> > > >> > > To: "Krutika Dhananjay" , "Gluster Devel"
> > > >> > > , "Rafi Kavungal Chundattu
> > > >> > > Parambil" , "Nithya Balachandran"
> > > >> > > , "Joseph Fernandes"
> > > >> > > , "Dan Lambright" 
> > > >> > > Cc: "gluster-infra" 
> > > >> > > Sent: Friday, March 4, 2016 9:45:17 AM
> > > >> > > Subject: Re: [Gluster-infra] tests/basic/tier/tier-file-create.t
> > > >> dumping
> > > >> > > core on Linux
> > > >> > >
> > > >> > > Facing the same problem in the following runs as well,
> > > >> > >
> > > >> > > 1)
> > > >> > >
> > > >>
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/18767/console
> > > >> > > 2)
> > https://build.gluster.org/job/regression-test-burn-in/546/console
> > > >> > > 3)
> > https://build.gluster.org/job/regression-test-burn-in/547/console
> > > >> > > 4)
> > https://build.gluster.org/job/regression-test-burn-in/549/console
> > > >> > >
> > > >> > > Last successful burn-in was: 545 (but do not see the test having
> > been
> > > >> > > run here, so this is inconclusive)
> > > >> > >
> > > >> > > burn-in test 544 is hung on the same test here,
> > > >> > > https://build.gluster.org/job/regression-test-burn-in/544/console
> > > >> > >
> > > >> > > (and at this point I am stopping the hunt for when this last
> > > >> succeeded :) )
> > > >> > >
> > > >> > > Let's know if anyone is taking a peek at the cores.
> > > >> >
> > > >> > hm. Not familiar with this test. Written by Pranith? I'll look.
> > > >>
> > > >> We are doing lookup everywhere, and building up a dict of the extended
> > > >> attributes of a file as we traverse each sub volume across the hot and
> > > >> cold
> > > >> tiers. The length field of one of the EC keys is corrupted.
> > > >>
> > > >> Not clear why this is happening. I see no tiering relationship as of
> > > >> yet; it's possible the file is being demoted in parallel to the
> > foreground
> > > >> script operation.
> > > >>
> > > >> The test runs fine on my machines.  Does this reproduce consistently
> > on
> > > >> one of the Jenkins machines? If so, getting onto it would be the next
> > > >> step.
> > > >> I think that would be preferable to masking this test 

Re: [Gluster-infra] How to use jenkins cli for editing job

2016-03-04 Thread Niels de Vos
On Fri, Mar 04, 2016 at 06:07:41PM +0530, Raghavendra Talur wrote:
> Hi,
> 
> I changed the worker thread count for the gerrit-trigger plugin's send
> worker from 1 to 3.
> This was because there was a warning on the Jenkins home page saying the
> queue had 2400 events and suggesting an increase in worker threads.
> 
> As Niels had suggested before, I tried performing the same update using the
> cli.
> 
> I was not able to use it though; I kept getting a "resource has moved" error.
> Could someone else try too and also write a simple doc on how to use the
> cli tool?

I have downloaded the cli from http://build.gluster.org/cli on
build.gluster.org itself and run it from there. Something like this
used to work for me, I expect it to work for you too:

  $ wget https://build.gluster.org/jnlpJars/jenkins-cli.jar
  $ java \
  -jar jenkins-cli.jar \
  -s http://localhost:8080/ \
  login --username ndevos
  $ java \
  -jar jenkins-cli.jar \
  -s http://localhost:8080/ \
  who-am-i
  $ java \
  -jar jenkins-cli.jar \
  -s http://localhost:8080/ \
  help

Cheers,
Niels



Re: [Gluster-infra] Please install the net-tools package on our Jenkins slaves

2016-03-01 Thread Niels de Vos
On Tue, Mar 01, 2016 at 01:41:15PM +0100, Michael Scherer wrote:
> Le mardi 01 mars 2016 à 12:12 +0100, Niels de Vos a écrit :
> > Hi,
> > 
> > on at least some of the slaves there is no netstat executable available.
> > This causes some tests to fail. 'netstat' on Fedora/RHEL/CentOS comes
> > from the net-tools RPM. Could you please add that to the list of
> > packages that are installed on the slaves?
> 
> Yup.
> Only on the Linux ones, or also on freebsd/netbsd ?

We only run all regression tests on Linux (for now). NetBSD does not
seem to complain that netstat is missing, so it is either there, or the
check is skipped somehow.

Thanks,
Niels



Re: [Gluster-infra] review.gluster.org not working

2016-02-26 Thread Niels de Vos
On Fri, Feb 26, 2016 at 07:59:14AM -0500, Kaleb Keithley wrote:
> 
> 
> - Original Message -
> > From: "Raghavendra Talur" 
> > 
> > Most likely it looks like the disk is full. Someone please have a look at it.
> > 
> 
> Someone who? Misc is on holiday IIRC. Who else has access?
> 
> Why do we think running our own gerrit and jenkins is a good thing?
> (When we could be using the CentOS gerrit and jenkins?)

s/CentOS gerrit/gerrithub.io/ :)

And yeah, that was the first thing that I felt like responding about too,
but refrained from doing so because I'm sure others would bring it up.

Niels



Re: [Gluster-infra] Giving Sahina temporary owner rights to github.com/gluster

2016-02-23 Thread Niels de Vos
On Tue, Feb 23, 2016 at 12:29:10PM +0530, Sahina Bose wrote:
> Thanks, Kaushal! Repositories have been transferred.

Thanks for doing this, Sahina!

Do you have any ideas/plans to provide RPM packages for the Nagios
plugins? Could they become part of Fedora and Fedora EPEL?

Niels


> 
> On 02/23/2016 12:09 PM, Kaushal M wrote:
> >Hi all,
> >
> >Sahina would like to transfer the gluster nagios repositories to the
> >Gluster organization in Github. I'm temporarily giving Sahina owner
> >rights so she can perform the transfer.
> >
> >Regards,
> >Kaushal
> 



Re: [Gluster-infra] Read the docs account ?

2016-02-11 Thread Niels de Vos
On Thu, Feb 11, 2016 at 10:25:48AM +0100, Michael Scherer wrote:
> Hi,
> 
> so after a rather "interesting" incident yesterday involving people
> using their personal work email, I started to look at where we depend on
> people's personal accounts.
> 
> Did we use a specific gluster.org account for gluster, or did someone
> register using their personal email?

Humble, Prasanth and Amye would know.

Niels



Re: [Gluster-infra] Restricted trigger server for centos and netbsd regression to review.gluster.org

2016-02-09 Thread Niels de Vos
On Mon, Feb 08, 2016 at 12:34:17PM +0530, Raghavendra Talur wrote:
> Hi,
> 
> After misc fixed ssl and cert issues on gerrit, jenkins now connects to
> review.gluster.org using both the configurations.
> 1. old one which has name review.gluster.org
> 2. new one which has review.gluster.org_for_smoke_builds
> 
> 2 was configured by ndevos for CI work I guess.

Yes, thanks! I could not get it to connect. We need different Gerrit
servers in Jenkins as explained here:

http://thread.gmane.org/gmane.comp.file-systems.gluster.infra/801/focus=816

> CentOS and NetBSD regression jobs were configured to take events from any
> server and this resulted in duplicate/concurrent runs for each event from
> gerrit.
> 
> I have changed the event trigger on these jobs to use only
> review.gluster.org. Terminated some of the duplicate builds on jenkins for
> the same reason.

Did you change the smoke jobs to use the newly added Gerrit server?

Thanks,
Niels



Re: [Gluster-infra] Code-Review+2 and Verified+1 cause multiple retriggers on Jenkins

2016-02-04 Thread Niels de Vos
On Thu, Feb 04, 2016 at 03:34:05PM +0530, Raghavendra Talur wrote:
> Hi,
> 
> We recently changed the jenkins builds to be triggered on the following
> triggers.
> 
> 1. Verified+1
> 2. Code-review+2
> 3. recheck (netbsd|centos|smoke)
> 
> There is a bug in 1 and 2.
> 
> Multiple triggers of 1 or 2 would result in re-runs even when not intended.
> 
> I would like to replace 1 and 2 with a comment "run-all-regression" or
> something like that.
> Thoughts?

Maybe starting regressions on Code-Review +1 (or +2) only?

Niels



Re: [Gluster-infra] [Gluster-devel] Different version of run-tests.sh in jenkin slaves?

2016-01-28 Thread Niels de Vos
On Thu, Jan 28, 2016 at 12:00:41PM +0530, Raghavendra Talur wrote:
> Ok, RCA:
> 
> In NetBSD cores are being generated in /d/backends/*/*.core
> run-tests.sh looks only for "/core*" when looking for cores.
> 
> So, at the end of test run when regression.sh looks for core everywhere, it
> finds one and errors out.
> 
> Should think of a solution which is generic. Will update.

regression.sh is maintained on GitHub. All slaves should have this
repository checked out as /opt/qa. Please make sure any changes to this
script are pushed into the repo too:
https://github.com/gluster/glusterfs-patch-acceptance-tests/
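The generic core search Raghavendra mentions could be sketched roughly as
follows (a hedged sketch, not the actual run-tests.sh code; the /d/backends
path is taken from the report above, and find_cores is a hypothetical
helper):

```shell
#!/bin/sh
# Hypothetical sketch: search the usual core locations instead of only the
# top-level "/core*" pattern, so NetBSD cores under the brick backends are
# found as well.
find_cores() {
    base="$1"
    # old behaviour: cores dropped in the top-level directory
    find "$base" -maxdepth 1 -type f -name 'core*'
    # NetBSD apparently drops them under the brick backends instead
    find "$base"/d/backends -type f -name '*.core' 2>/dev/null
}

# demo against a throwaway directory
tmp=$(mktemp -d)
mkdir -p "$tmp/d/backends/patchy"
: > "$tmp/core.1234"
: > "$tmp/d/backends/patchy/glusterfsd.core"
find_cores "$tmp"
rm -rf "$tmp"
```

The demo prints the paths of both fake cores; in a regression wrapper the
output would be used to fail the run and archive the files.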

Niels

> 
> 
> On Thu, Jan 28, 2016 at 11:37 AM, Raghavendra Talur 
> wrote:
> 
> >
> >
> > On Thu, Jan 28, 2016 at 11:17 AM, Atin Mukherjee 
> > wrote:
> >
> >> Are we running a different version of run-tests.sh in jenkin slaves. The
> >> reason of suspection is beacuse in last couple of runs [1] & [2] in
> >> NetBSD I am seeing no failures apart from bad tests but the regression
> >> voted failure and I can not make out any valid reason out of it.
> >>
> >> [1]
> >>
> >> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13756/consoleFull
> >> [2]
> >>
> >> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13755/consoleFull
> >
> >
> >
> > I checked the slave machine now.
> > The regression.sh file is different, but the run-tests.sh script is the same.
> >
> > A wild guess here, is it possible that the core generation takes time and
> > when we check for a core right after a test is run it is not present yet?
> > Does anyone know how to work around that?
> >
> >
> >>
> >> ~Atin
> >>
> >
> >





Re: [Gluster-infra] [Gluster-Maintainers] Removed "clean workspace" step from NetBSD regression job

2016-01-25 Thread Niels de Vos
On Tue, Jan 26, 2016 at 02:36:52AM +0530, Raghavendra Talur wrote:
> Hi,
> 
> I have removed "clean workspace" from NetBSD regression job too. Vijay had
> earlier removed it from CentOS regression.
> Cleaning the workspace resulted in deletion of the complete git repo, and a
> re-clone takes a lot of time. The Gerrit Trigger/Git plugin intelligently
> resets to HEAD~n when moving to the next job and hence saves time.

Ok, just update the xml file too:

  https://github.com/gluster/glusterfs-patch-acceptance-tests/tree/master/jenkins/jobs

You can use https://build.gluster.org/cli/ with the "get-job" command.
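For example, exporting the current NetBSD regression job definition so it
can be committed could look like this (a command transcript, assuming a
logged-in jenkins-cli as shown elsewhere in this archive; paths are
illustrative):

```shell
$ java -jar jenkins-cli.jar -s https://build.gluster.org/ \
    get-job rackspace-netbsd7-regression-triggered \
    > jenkins/jobs/rackspace-netbsd7-regression-triggered.xml
$ git -C glusterfs-patch-acceptance-tests diff jenkins/jobs/
```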

Thanks,
Niels



Re: [Gluster-infra] Enabling comment based trigger on jenkins

2016-01-25 Thread Niels de Vos
On Tue, Jan 26, 2016 at 02:13:14AM +0530, Raghavendra Talur wrote:
> Hi,
> 
> Prashanth told me about a simple way, which OpenStack uses, to let every
> contributor retrigger the regression runs.
> A contributor simply has to post "recheck no bug" as a comment on the patch set.
> Read more here:
> http://zqfan.github.io/openstack/2014/01/03/what-should-i-do-when-jenkins-fails/
> 
> I tweaked it for gluster, enabled and tested it. Trigger comments are:
> 
> recheck netbsd
> recheck smoke
> recheck centos
> 
> If no one has any objections to this method (security implications
> included), then I would like to announce it on gluster-devel.

Sounds good to me! This should reduce the need for handing out Jenkins
accounts.
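For reference, in the Gerrit Trigger plugin's job XML such a comment trigger
looks roughly like the snippet below; the element/class names are an
assumption based on the plugin's configuration format, not copied from our
jobs:

```xml
<triggerOnEvents>
  <!-- fire when a Gerrit comment containing the keyword is added -->
  <com.sonyericsson.hudson.plugins.gerrit.trigger.hudsontrigger.events.PluginCommentAddedContainsEvent>
    <commentAddedCommentContains>recheck netbsd</commentAddedCommentContains>
  </com.sonyericsson.hudson.plugins.gerrit.trigger.hudsontrigger.events.PluginCommentAddedContainsEvent>
</triggerOnEvents>
```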

Thanks,
Niels



Re: [Gluster-infra] [Gluster-devel] Smoke tests run on the builder in RH DC (at least)

2016-01-25 Thread Niels de Vos
On Mon, Jan 25, 2016 at 10:24:33PM +0100, Niels de Vos wrote:
> On Mon, Jan 25, 2016 at 06:59:33PM +0100, Michael Scherer wrote:
> > Hi,
> > 
> > so today, after fixing one last config item, the smoke test jobs run
> > fine on the Centos 6 builder in the RH DC, which build things as non
> > root, then start the tests, then reboot the server.
> 
> Nice, sounds like great progress!
> 
> Did you need to change anything in the build or test scripts under
> /opt/qa? If so, please make sure that the changes land in the
> repository:
> 
>   https://github.com/gluster/glusterfs-patch-acceptance-tests/
> 
> > Now, I am looking at the fedora one, but once this one is good, I will
> > likely reinstall a few builders as a test, and go on Centos 7 builder.
> 
> I'm not sure yet if I made an error, or what is going on. But for some
> reason smoke tests for my patch series fails... This is the smoke result
> of the 1st patch in the serie, it only updates the fuse-header to a
> newer version. Of course local testing works just fine... The output and
> (not available) logs of the smoke test do not really help me :-/
> 
>   https://build.gluster.org/job/smoke/24395/console
> 
> Could this be related to the changes that were made? If not, I'd
> appreciate a pointer to my mistake.

Well, I guess that this is a limitation in the FUSE kernel module that
is part of EL6 and EL7. One of the structures sent is probably too big
and the kernel refuses to accept it. I guess I'll need to go back to the
drawing board and add a real check for the FUSE version, or something
like that.

Niels



Re: [Gluster-infra] [Gluster-devel] Jenkins accounts for all devs.

2016-01-22 Thread Niels de Vos
On Fri, Jan 22, 2016 at 02:44:05PM +0530, Raghavendra Talur wrote:
> On Fri, Jan 22, 2016 at 2:41 PM, Michael Scherer 
> wrote:
> 
> > Le vendredi 22 janvier 2016 à 11:31 +0530, Ravishankar N a écrit :
> > > On 01/14/2016 12:16 PM, Kaushal M wrote:
> > > > On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur 
> > wrote:
> > > >>
> > > >> On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N <
> > ravishan...@redhat.com>
> > > >> wrote:
> > > >>> On 01/08/2016 12:03 PM, Raghavendra Talur wrote:
> > >  P.S: Stop using the "universal" jenkins account to trigger jenkins
> > build
> > >  if you are not a maintainer.
> > >  If you are a maintainer and don't have your own jenkins account
> > then get
> > >  one soon!
> > > 
> > > >>> I would request for a jenkins account for non-maintainers too, at
> > least
> > > >>> for the devs who are actively contributing code (as opposed to random
> > > >>> one-off commits from persons). That way, if the regression failure is
> > > >>> *definitely* not in my patch (or) is a spurious failure (or) is
> > something
> > > >>> that I need to take a netbsd slave offline to debug etc.,  I don't
> > have to
> > > >>> be blocked on the Maintainer. Since the accounts are anyway tied to
> > an
> > > >>> individual, it should be easy to spot if someone habitually
> > re-trigger
> > > >>> regressions without any initial debugging.
> > > >>>
> > > >> +1
> > > > We'd like to give everyone accounts. But the way we're providing
> > > > accounts now gives admin accounts to all. This is not very secure.
> > > >
> > > > This was one of the reasons misc setup freeipa.gluster.org, to provide
> > > > controlled accounts for all. But it hasn't been used yet. We would
> > > > need to integrate jenkins and the slaves with freeipa, which would
> > > > give everyone easy access.
> > >
> > > Hi Michael,
> > > Do you think it is possible to have this integration soon so that all
> > > contributors can re-trigger/initiate builds by themselves?
> >
> > The thing that is missing is still the same, how do we consider that
> > someone is a contributor. IE, do we want people just say "add me" and
> > get root access to all our jenkins builders (because that's also what goes
> > with the jenkins way of restarting a build for now)?

Contributors would need to get root permissions on the Jenkins slaves
(the machines that do the actual building/testing).  There is no need
for root access on the Jenkins master (build.gluster.org). Because
Jenkins accounts are connected to the PAM configuration on
build.gluster.org, contributors would get an account there (does not
need to have a shell?).

> > I did the technical stuff, but so far, no one did the organisational
> > part of giving a criteria for who has access to what. Without clear
> > process, I can't do much.
> >
> 
> 
> +ndevos +vijay
> 
> Something like "should have contributed 10 patches to Gluster and be
> supported by at least 1 maintainer" would do?

Works for me. Please send a new page describing what requirements a (new)
contributor needs to fulfill, what privileges are granted, and a little on
when/how to use them.

  http://gluster.readthedocs.org/en/latest/Contributors-Guide/Index/

Thanks!
Niels



Re: [Gluster-infra] [Gluster-devel] Jenkins accounts for all devs.

2016-01-22 Thread Niels de Vos
On Fri, Jan 22, 2016 at 12:49:28PM +0100, Michael Scherer wrote:
> Le vendredi 22 janvier 2016 à 11:31 +0100, Niels de Vos a écrit :
> > On Fri, Jan 22, 2016 at 02:44:05PM +0530, Raghavendra Talur wrote:
> > > On Fri, Jan 22, 2016 at 2:41 PM, Michael Scherer <msche...@redhat.com>
> > > wrote:
> > > 
> > > > Le vendredi 22 janvier 2016 à 11:31 +0530, Ravishankar N a écrit :
> > > > > On 01/14/2016 12:16 PM, Kaushal M wrote:
> > > > > > On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur 
> > > > > > <rta...@redhat.com>
> > > > wrote:
> > > > > >>
> > > > > >> On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N <
> > > > ravishan...@redhat.com>
> > > > > >> wrote:
> > > > > >>> On 01/08/2016 12:03 PM, Raghavendra Talur wrote:
> > > > > >>>> P.S: Stop using the "universal" jenkins account to trigger 
> > > > > >>>> jenkins
> > > > build
> > > > > >>>> if you are not a maintainer.
> > > > > >>>> If you are a maintainer and don't have your own jenkins account
> > > > then get
> > > > > >>>> one soon!
> > > > > >>>>
> > > > > >>> I would request for a jenkins account for non-maintainers too, at
> > > > least
> > > > > >>> for the devs who are actively contributing code (as opposed to 
> > > > > >>> random
> > > > > >>> one-off commits from persons). That way, if the regression 
> > > > > >>> failure is
> > > > > >>> *definitely* not in my patch (or) is a spurious failure (or) is
> > > > something
> > > > > >>> that I need to take a netbsd slave offline to debug etc.,  I don't
> > > > have to
> > > > > >>> be blocked on the Maintainer. Since the accounts are anyway tied 
> > > > > >>> to
> > > > an
> > > > > >>> individual, it should be easy to spot if someone habitually
> > > > re-trigger
> > > > > >>> regressions without any initial debugging.
> > > > > >>>
> > > > > >> +1
> > > > > > We'd like to give everyone accounts. But the way we're providing
> > > > > > accounts now gives admin accounts to all. This is not very secure.
> > > > > >
> > > > > > This was one of the reasons misc setup freeipa.gluster.org, to 
> > > > > > provide
> > > > > > controlled accounts for all. But it hasn't been used yet. We would
> > > > > > need to integrate jenkins and the slaves with freeipa, which would
> > > > > > give everyone easy access.
> > > > >
> > > > > Hi Michael,
> > > > > Do you think it is possible to have this integration soon so that all
> > > > > contributors can re-trigger/initiate builds by themselves?
> > > >
> > > > The thing that is missing is still the same, how do we consider that
> > > > someone is a contributor. IE, do we want people just say "add me" and
> > > > get root access to all our jenkins builders (because that's also what goes
> > > > with the jenkins way of restarting a build for now)?
> > 
> > Contributors would need to get root permissions on the Jenkins slaves
> > (the machines that do the actual building/testing). 
> 
> I rather prefer to not have people have root access on the builder.
> 
> 1) they are used to build the rpms we distribute
> 2) root access also mean that some people might just do a quick fix to
> make some tests pass instead of making a proper long term fix where it
> is needed.

I mainly added this for the contributors that need to debug hanging test
cases. They need to login on the Jenkins slaves and without root
privileges they will not be able to attach gdb, strace or other tools to
the Gluster processes running as root. (We use the shared jenkins
account for that now, which is rather ugly.)

Ideally there would not be any need to retrigger tests. So there would
be little need for contributors to get an account to login on the
Jenkins webui. But unfortunately the regression tests are not stable
enough for that yet. In the future, only contributors that write jobs
for Jenkins would need accounts there.

RPMs that we distribute are built in Fedora Koji and the CentOS CBS. The
RPMs built on the

Re: [Gluster-infra] Vagrant box upload: Was: [Change in glusterfs[master]: vagrant-test: Use pre-baked box for better perf]

2016-01-19 Thread Niels de Vos
On Mon, Jan 18, 2016 at 10:23:00PM +0530, Raghavendra Talur wrote:
> Michael has a good point that we should host this (or an updated and better)
> vagrant box for Gluster development under a Gluster entity. It could be a
> Gluster account in https://atlas.hashicorp.com or could be hosted somewhere
> on gluster.org.
> 
> Thoughts?

Maybe start with placing it on download.gluster.org?

It would surely need some documentation on gluster.readthedocs.org too?

Thanks,
Niels

> 
> -- Forwarded message --
> From: Michael Adam (Code Review) <rev...@dev.gluster.org>
> Date: Mon, Jan 18, 2016 at 4:15 PM
> Subject: Change in glusterfs[master]: vagrant-test: Use pre-baked box for
> better perf
> To: Raghavendra Talur <rta...@redhat.com>
> Cc: Gluster Build System <jenk...@build.gluster.com>, Kaushal M <
> kaus...@redhat.com>, Niels de Vos <nde...@redhat.com>, Michael Adam <
> ob...@samba.org>
> 
> 
> Michael Adam has posted comments on this change.
> 
> Change subject: vagrant-test: Use pre-baked box for better perf
> ..
> 
> 
> Patch Set 1:
> 
> (2 comments)
> 
> some comments inline
> 
> http://review.gluster.org/#/c/13251/1/tests/vagrant/vagrant-template/Vagrantfile
> File tests/vagrant/vagrant-template/Vagrantfile:
> 
> Ideally, at some point, there should be a gluster entity under atlas.
> Alternatively we could directly specify a download url for the box under
> gluster.org





[Gluster-infra] Trouble adding a new Gerrit configuration to Jenkins for Smoke tests

2016-01-16 Thread Niels de Vos
Hi,

I'm trying to add a copy of the existing Gerrit server to Jenkins, but
seem to be unable to get it connected. I've started with [add server]
and [copy from existing server] to create
"review.gluster.org_for-smoke-jobs". The [Advanced] "Gerrit Verified
Commands" have been modified to set the Smoke label instead of the
default Verified.

Clicking the [test connection] button returns success (after removing the
***-out password). But after saving and clicking the red [o] button to
connect the server, it just seems to time out.

Some assistance with getting this server connected is most welcome.
Please see these links for checking:

  https://build.gluster.org/gerrit-trigger/
  
https://build.gluster.org/gerrit-trigger/server/review.gluster.org_for-smoke-jobs/

Thanks,
Niels



Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-15 Thread Niels de Vos
On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
> I'd pushed the config to a new branch instead of updating the
> `refs/meta/config` branch. I've corrected this now.
> 
> The 3 new labels are,
> - Smoke
> - CentOS-regression
> - NetBSD-regression
> 
> The new labels are active now. Changes cannot be merged without all of
> them being +1. Only the bot accounts (Gluster Build System and NetBSD
> Build System) can set them.

It seems that Verified is also a label that is required. Because this is
now the label for manual testing by reviewers/qa, I do not think it
should be a requirement anymore.

Could the labels that are needed for merging be set up like this?

  Code-Review=+2 && (Verified=+1 || (Smoke=+1 && CentOS-regression=+1 && NetBSD-regression=+1))
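As far as I understand it, the plain label definitions in project.config (on
refs/meta/config) can only AND such requirements together; expressing the OR
above would need a custom Prolog submit rule in rules.pl. A hedged sketch of
the label side only (the exact values are assumptions):

```
[label "Smoke"]
    function = MaxWithBlock
    value = -1 Failed
    value =  0 No score
    value = +1 Passed

[label "CentOS-regression"]
    function = MaxWithBlock
    value = -1 Failed
    value =  0 No score
    value = +1 Passed

# NetBSD-regression would be defined analogously
```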

I managed to get http://review.gluster.org/13208 merged now, please
check if the added tags in the commit message are ok, or need to get
modified.

Thanks,
Niels


> 
> On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-15 Thread Niels de Vos
On Thu, Jan 14, 2016 at 06:43:33PM +0100, Niels de Vos wrote:
> On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
> > I'd pushed the config to a new branch instead of updating the
> > `refs/meta/config` branch. I've corrected this now.
> > 
> > The 3 new labels are,
> > - Smoke
> > - CentOS-regression
> > - NetBSD-regression
> > 
> > The new labels are active now. Changes cannot be merged without all of
> > them being +1. Only the bot accounts (Gluster Build System and NetBSD
> > Build System) can set them.
> 
> The smoke and NetBSD jobs have been updated to vote on their own label
> now. All jobs have been added to the glusterfs-patch-acceptance-tests
> repository too:
> 
>   
> https://github.com/gluster/glusterfs-patch-acceptance-tests/commits/master/jenkins/jobs

I've scheduled a Jenkins "safe restart", so that the whole configuration
gets reloaded. At the moment it is not possible to select "Smoke" as the
label for the different smoke tests. Hopefully it can be used without
scripted ssh-commands once the restart has taken place.
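
Until the label can be picked in the job configuration, the vote a job
casts over ssh would look roughly like this. Host, port, and the "build"
bot account are taken from this thread; the flag syntax is assumed from
the Gerrit ssh command documentation and may differ between versions:

```shell
#!/bin/sh
# Dry-run sketch of the vote a Jenkins job could cast on its own label.
# GERRIT_PATCHSET_REVISION is what the Gerrit trigger plugin exports;
# the fallback value here is only for illustration.
REV=${GERRIT_PATCHSET_REVISION:-deadbeef}
CMD="ssh -p 29418 build@review.gluster.org gerrit review --label Smoke=+1 $REV"
echo "$CMD"   # a real job would execute the command instead of printing it
```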

Niels

> 
> I do not seem to be able to modify the standard regression test though.
> When opening the configuration, I get connection timeouts from Jenkins.
> Will try later through the Jenkins commandline, but I will need to
> modify the XML file by hand, and that needs a little more care than
> updating through a clickable interface.
> 
> Niels
> 
> > 
> > On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> > > On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos <nde...@redhat.com> wrote:
> > >> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
> > >>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com> wrote:
> > >>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
> > >>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
> > >>> >> <atin.mukherje...@gmail.com>
> > >>> >> wrote:
> > >>> >>
> > >>> >> > -Atin
> > >>> >> > Sent from one plus one
> > >>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
> > >>> >> > >
> > >>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur 
> > >>> >> > > wrote:
> > >>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
> > >>> >> > > >
> > >>> >> > > > 1. Developer works on a new feature/bug fix and tests it 
> > >>> >> > > > locally(run
> > >>> >> > > > run-tests.sh completely).
> > >>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
> > >>> >> > > >
> > >>> >> > > > +++Note that no regression runs have started automatically for 
> > >>> >> > > > this
> > >>> >> > patch
> > >>> >> > > > at this point.+++
> > >>> >> > > >
> > >>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a 
> > >>> >> > > > promise of
> > >>> >> > > > having tested the patch completely. For cases where patches 
> > >>> >> > > > don't have
> > >>> >> > a +1
> > >>> >> > > > verified from the developer, maintainer has the following 
> > >>> >> > > > options
> > >>> >> > > > a. just do the code-review and award a +2 code review.
> > >>> >> > > > b. pull the patch locally and test completely and award a +1 
> > >>> >> > > > verified.
> > >>> >> > > > Both the above actions would result in triggering of 
> > >>> >> > > > regression runs
> > >>> >> > for
> > >>> >> > > > the patch.
> > >>> >> > >
> > >>> >> > > Would it not help if anyone giving +1 code-review starts the 
> > >>> >> > > regression
> > >>> >> > > tests too? When developers ask me to review, I prefer to see 
> > >>> >> > > reviews
> > >>> >> > > done by others first, and any regression failures should have 
> > >>> >> > > been fixed by the time I look at the change.

Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Niels de Vos
On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
> I'd pushed the config to a new branch instead of updating the
> `refs/meta/config` branch. I've corrected this now.
> 
> The 3 new labels are,
> - Smoke
> - CentOS-regression
> - NetBSD-regression
> 
> The new labels are active now. Changes cannot be merged without all of
> them being +1. Only the bot accounts (Gluster Build System and NetBSD
> Build System) can set them.

The smoke and NetBSD jobs have been updated to vote on their own label
now. All jobs have been added to the glusterfs-patch-acceptance-tests
repository too:

  
https://github.com/gluster/glusterfs-patch-acceptance-tests/commits/master/jenkins/jobs

I do not seem to be able to modify the standard regression test though.
When opening the configuration, I get connection timeouts from Jenkins.
Will try later through the Jenkins commandline, but I will need to
modify the XML file by hand, and that needs a little more care than
updating through a clickable interface.

Niels

> 
> On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> > On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos <nde...@redhat.com> wrote:
> >> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
> >>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com> wrote:
> >>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
> >>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
> >>> >> <atin.mukherje...@gmail.com>
> >>> >> wrote:
> >>> >>
> >>> >> > -Atin
> >>> >> > Sent from one plus one
> >>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
> >>> >> > >
> >>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
> >>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
> >>> >> > > >
> >>> >> > > > 1. Developer works on a new feature/bug fix and tests it 
> >>> >> > > > locally(run
> >>> >> > > > run-tests.sh completely).
> >>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
> >>> >> > > >
> >>> >> > > > +++Note that no regression runs have started automatically for 
> >>> >> > > > this
> >>> >> > patch
> >>> >> > > > at this point.+++
> >>> >> > > >
> >>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a 
> >>> >> > > > promise of
> >>> >> > > > having tested the patch completely. For cases where patches 
> >>> >> > > > don't have
> >>> >> > a +1
> >>> >> > > > verified from the developer, maintainer has the following options
> >>> >> > > > a. just do the code-review and award a +2 code review.
> >>> >> > > > b. pull the patch locally and test completely and award a +1 
> >>> >> > > > verified.
> >>> >> > > > Both the above actions would result in triggering of regression 
> >>> >> > > > runs
> >>> >> > for
> >>> >> > > > the patch.
> >>> >> > >
> >>> >> > > Would it not help if anyone giving +1 code-review starts the 
> >>> >> > > regression
> >>> >> > > tests too? When developers ask me to review, I prefer to see 
> >>> >> > > reviews
> >>> >> > > done by others first, and any regression failures should have been 
> >>> >> > > fixed
> >>> >> > > by the time I look at the change.
> >>> >> > When this idea was originated (long back) I was in favour of having
> >>> >> > regression triggered on a +1, however verified flag set by the 
> >>> >> > developer
> >>> >> > would still trigger the regression. Being a maintainer I would always
> >>> >> > prefer to look at a patch when its verified  flag is +1 which means 
> >>> >> > the
> >>> >> > regression result would also be available.
> >>> >> >
> >>> >>
> >>> >>
> >>> >

[Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Niels de Vos
On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee <atin.mukherje...@gmail.com>
> wrote:
> 
> > -Atin
> > Sent from one plus one
> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
> > >
> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
> > > > We have now changed the gerrit-jenkins workflow as follows:
> > > >
> > > > 1. Developer works on a new feature/bug fix and tests it locally(run
> > > > run-tests.sh completely).
> > > > 2. Developer sends the patch to gerrit using rfc.sh.
> > > >
> > > > +++Note that no regression runs have started automatically for this
> > patch
> > > > at this point.+++
> > > >
> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise of
> > > > having tested the patch completely. For cases where patches don't have
> > a +1
> > > > verified from the developer, maintainer has the following options
> > > > a. just do the code-review and award a +2 code review.
> > > > b. pull the patch locally and test completely and award a +1 verified.
> > > > Both the above actions would result in triggering of regression runs
> > for
> > > > the patch.
> > >
> > > Would it not help if anyone giving +1 code-review starts the regression
> > > tests too? When developers ask me to review, I prefer to see reviews
> > > done by others first, and any regression failures should have been fixed
> > > by the time I look at the change.
> > When this idea was originated (long back) I was in favour of having
> > regression triggered on a +1, however verified flag set by the developer
> > would still trigger the regression. Being a maintainer I would always
> > prefer to look at a patch when its verified  flag is +1 which means the
> > regression result would also be available.
> >
> 
> 
> Niels requested in IRC that it would be good to have a mechanism for getting all
> patches that have already passed all regressions before starting review.
> Here is what I found
> a. You can use the search string
> status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build
> b. You can bookmark this link and it will take you directly to the page
> with list of such patches.
> 
> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build

Hmm, copy/pasting this URL does not work for me, I get an error:

Code Review - Error
line 1:26 no viable alternative at character '%'
[Continue]
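
The %252B sequences in that link look double-encoded: '+' became %2B,
and then the '%' itself became %25. Encoding the query exactly once
should give Gerrit something it can parse; a minimal sketch with
Python's urllib, using the query and host from the mail above:

```python
from urllib.parse import quote

# Raghavendra's search string, verbatim from the mail above.
query = "status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build"

# Encode each term once ('+' -> %2B, ',' -> %2C, '=' -> %3D) and join the
# terms with '+', which Gerrit's query URLs use in place of spaces.
encoded = "+".join(quote(term, safe=":") for term in query.split(" "))
url = "http://review.gluster.org/#/q/" + encoded
print(url)
```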


Kaushal, could you add the following labels to gerrit, so that we can
update the Jenkins jobs and they can start setting their own labels?

http://review.gluster.org/Documentation/config-labels.html#label_custom

- Smoke: misc smoke testing, compile, bug check, posix, ..
- NetBSD: NetBSD-7 regression
- Linux: Linux regression on CentOS-6

Users/developers should not be able to set these labels, only the
Jenkins accounts are allowed to.

The standard Verified label can then be used for manual verification by
developers, qa and reviewers.

Thanks,
Niels
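
For reference, the config-labels page linked above defines such labels in
project.config on the refs/meta/config branch; a Smoke entry along those
lines might look like the following (the function and value descriptions
are illustrative, modeled on the documented Verified example):

```ini
[label "Smoke"]
    function = MaxWithBlock
    value = -1 Failed
    value =  0 No score
    value = +1 Passed
```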


signature.asc
Description: PGP signature
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Niels de Vos
On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com> wrote:
> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
> >> <atin.mukherje...@gmail.com>
> >> wrote:
> >>
> >> > -Atin
> >> > Sent from one plus one
> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
> >> > >
> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
> >> > > > We have now changed the gerrit-jenkins workflow as follows:
> >> > > >
> >> > > > 1. Developer works on a new feature/bug fix and tests it locally(run
> >> > > > run-tests.sh completely).
> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
> >> > > >
> >> > > > +++Note that no regression runs have started automatically for this
> >> > patch
> >> > > > at this point.+++
> >> > > >
> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise of
> >> > > > having tested the patch completely. For cases where patches don't 
> >> > > > have
> >> > a +1
> >> > > > verified from the developer, maintainer has the following options
> >> > > > a. just do the code-review and award a +2 code review.
> >> > > > b. pull the patch locally and test completely and award a +1 
> >> > > > verified.
> >> > > > Both the above actions would result in triggering of regression runs
> >> > for
> >> > > > the patch.
> >> > >
> >> > > Would it not help if anyone giving +1 code-review starts the regression
> >> > > tests too? When developers ask me to review, I prefer to see reviews
> >> > > done by others first, and any regression failures should have been 
> >> > > fixed
> >> > > by the time I look at the change.
> >> > When this idea was originated (long back) I was in favour of having
> >> > regression triggered on a +1, however verified flag set by the developer
> >> > would still trigger the regression. Being a maintainer I would always
> >> > prefer to look at a patch when its verified  flag is +1 which means the
> >> > regression result would also be available.
> >> >
> >>
> >>
> >> Niels requested in IRC that it would be good to have a mechanism for getting 
> >> all
> >> patches that have already passed all regressions before starting review.
> >> Here is what I found
> >> a. You can use the search string
> >> status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build
> >> b. You can bookmark this link and it will take you directly to the page
> >> with list of such patches.
> >>
> >> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
> >
> > Hmm, copy/pasting this URL does not work for me, I get an error:
> >
> > Code Review - Error
> > line 1:26 no viable alternative at character '%'
> > [Continue]
> >
> >
> > Kaushal, could you add the following labels to gerrit, so that we can
> > update the Jenkins jobs and they can start setting their own labels?
> >
> > http://review.gluster.org/Documentation/config-labels.html#label_custom
> >
> > - Smoke: misc smoke testing, compile, bug check, posix, ..
> > - NetBSD: NetBSD-7 regression
> > - Linux: Linux regression on CentOS-6
> 
> I added these labels to the gluster projects' project.config, but they
> don't seem to be showing up. I'll check once more when I get back
> home.

Might need a restart/reload of Gerrit? It seems required for the main
gerrit.config file too:

  
http://review.gluster.org/Documentation/config-gerrit.html#_file_code_etc_gerrit_config_code

Niels



Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Niels de Vos
On Thu, Jan 14, 2016 at 06:43:33PM +0100, Niels de Vos wrote:
> On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
> > I'd pushed the config to a new branch instead of updating the
> > `refs/meta/config` branch. I've corrected this now.
> > 
> > The 3 new labels are,
> > - Smoke
> > - CentOS-regression
> > - NetBSD-regression
> > 
> > The new labels are active now. Changes cannot be merged without all of
> > them being +1. Only the bot accounts (Gluster Build System and NetBSD
> > Build System) can set them.
> 
> The smoke and NetBSD jobs have been updated to vote on their own label
> now. All jobs have been added to the glusterfs-patch-acceptance-tests
> repository too:
> 
>   
> https://github.com/gluster/glusterfs-patch-acceptance-tests/commits/master/jenkins/jobs
> 
> I do not seem to be able to modify the standard regression test though.
> When opening the configuration, I get connection timeouts from Jenkins.
> Will try later through the Jenkins commandline, but I will need to
> modify the XML file by hand, and that needs a little more care than
> updating through a clickable interface.

The web UI has started working again, and the CentOS regression test has
been updated as well, with the .xml modification pushed to the git repo.

Let's see how it goes, and whether the next regression runs start to use
the correct labels.

Niels
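
With the per-test labels active, the "all regressions passed" bookmark
from earlier in the thread can be expressed directly against them; using
Gerrit's label search operator, the query would presumably become:

```
status:open label:Smoke+1 label:CentOS-regression+1 label:NetBSD-regression+1
```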

> 
> Niels
> 
> > 
> > On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M <kshlms...@gmail.com> wrote:
> > > On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos <nde...@redhat.com> wrote:
> > >> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
> > >>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos <nde...@redhat.com> wrote:
> > >>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
> > >>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
> > >>> >> <atin.mukherje...@gmail.com>
> > >>> >> wrote:
> > >>> >>
> > >>> >> > -Atin
> > >>> >> > Sent from one plus one
> > >>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" <nde...@redhat.com> wrote:
> > >>> >> > >
> > >>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur 
> > >>> >> > > wrote:
> > >>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
> > >>> >> > > >
> > >>> >> > > > 1. Developer works on a new feature/bug fix and tests it 
> > >>> >> > > > locally(run
> > >>> >> > > > run-tests.sh completely).
> > >>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
> > >>> >> > > >
> > >>> >> > > > +++Note that no regression runs have started automatically for 
> > >>> >> > > > this
> > >>> >> > patch
> > >>> >> > > > at this point.+++
> > >>> >> > > >
> > >>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a 
> > >>> >> > > > promise of
> > >>> >> > > > having tested the patch completely. For cases where patches 
> > >>> >> > > > don't have
> > >>> >> > a +1
> > >>> >> > > > verified from the developer, maintainer has the following 
> > >>> >> > > > options
> > >>> >> > > > a. just do the code-review and award a +2 code review.
> > >>> >> > > > b. pull the patch locally and test completely and award a +1 
> > >>> >> > > > verified.
> > >>> >> > > > Both the above actions would result in triggering of 
> > >>> >> > > > regression runs
> > >>> >> > for
> > >>> >> > > > the patch.
> > >>> >> > >
> > >>> >> > > Would it not help if anyone giving +1 code-review starts the 
> > >>> >> > > regression
> > >>> >> > > tests too? When developers ask me to review, I prefer to see 
> > >>> >> > > reviews
> > >>> >> > > done by others first, and any regression failures should have 
> > >>> >> > > been fixed by the time I look at the change.

Re: [Gluster-infra] Jenkins slave32 seems broken?

2016-01-13 Thread Niels de Vos
On Wed, Jan 13, 2016 at 10:35:42AM +0100, Xavier Hernandez wrote:
> The same has happened to slave34.cloud.gluster.org. I've disabled it to
> allow regressions to be run on other slaves.
> 
> There are two files owned by root inside
> /home/jenkins/root/workspace/rackspace-regression-2GB-triggered:
> 
> -rwxr-xr-x  1 rootroot 10124 Jan  7 17:54 file_lock
> drwxr-xr-x  3 rootroot  4096 Jan  7 18:31 slave34.cloud.gluster.org:

Thanks!

I've looked into this a little more now, and might have identified the
problem.

This one failed with an unrelated error:

  https://build.gluster.org/job/rackspace-regression-2GB-triggered/17413/console

  ...
  Building remotely on slave34.cloud.gluster.org (rackspace_regression_2gb) in 
workspace /home/jenkins/root/workspace/rackspace-regression-2GB-triggered
   > git rev-parse --is-inside-work-tree # timeout=10
  Fetching changes from the remote Git repository
   > git config remote.origin.url git://review.gluster.org/glusterfs.git # 
timeout=10
  Fetching upstream changes from git://review.gluster.org/glusterfs.git
  ...

The next run on slave34 failed because of the weird directory:

  https://build.gluster.org/job/rackspace-regression-2GB-triggered/17440/console
  
  ...
  Building remotely on slave34.cloud.gluster.org (rackspace_regression_2gb) in 
workspace /home/jenkins/root/workspace/rackspace-regression-2GB-triggered
  Wiping out workspace first.
  java.io.IOException: remote file operation failed: 
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered at 
hudson.remoting.Channel@62ecdacb:slave34.cloud.gluster.org: 
  ...

Note the "Wiping out workspace first." line. This comes from an option
in the regression job. This seems to be a recently added "Additional
Behaviour" in the Jenkins job. Did anyone add this on purpose, or was
that automatically done with a Jenkins update or something?

Niels

> 
> Xavi
> 
> On 12/01/16 12:06, Niels de Vos wrote:
> >Hi,
> >
> >I've disabled slave32.cloud.gluster.org because it failed multiple
> >regression tests with a weird error. After disabling slave32 and
> >retriggering the failed run, the same job executed fine on a different
> >slave.
> >
> >The affected directory is owned by root, so the jenkins user is not
> >allowed to wipe it. Does anyone know how this could happen? The dirname
> >is rather awkward too...
> >
> >   
> > /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/slave32.cloud.gluster.org:/d
> >
> >I think we can just remove that dir and the slave can be enabled again.
> >Leaving the status as is for further investigation.
> >
> >Thanks,
> >Niels
> >
> >
> >Full error:
> >
> > Wiping out workspace first.
> > java.io.IOException: remote file operation failed: 
> > /home/jenkins/root/workspace/rackspace-regression-2GB-triggered at 
> > hudson.remoting.Channel@7bc1e07d:slave32.cloud.gluster.org: 
> > java.nio.file.AccessDeniedException: 
> > /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/slave32.cloud.gluster.org:/d
> > at hudson.FilePath.act(FilePath.java:986)
> > at hudson.FilePath.act(FilePath.java:968)
> > at hudson.FilePath.deleteContents(FilePath.java:1183)
> > at 
> > hudson.plugins.git.extensions.impl.WipeWorkspace.beforeCheckout(WipeWorkspace.java:28)
> > at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1040)
> > at hudson.scm.SCM.checkout(SCM.java:485)
> > at 
> > hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
> > at 
> > hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
> > at 
> > jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
> > at 
> > hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
> > at hudson.model.Run.execute(Run.java:1738)
> > at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
> > at 
> > hudson.model.ResourceController.execute(ResourceController.java:98)
> > at hudson.model.Executor.run(Executor.java:410)
> > Caused by: java.nio.file.AccessDeniedException: 
> > /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/slave32.cloud.gluster.org:/d
> > at 
> > sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
> > at 
> > sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
> > at 
> > sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
> >
