Re: [Gluster-infra] Lot of 'centos7-regression' failures
Here is one of the console outputs which Amar is pointing to:
https://build.gluster.org/job/centos7-regression/5051/console

It showed up after we did the reboot yesterday, only on builder207.

On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan wrote:
> > And it is happening with 'failed to determine' the job... anything
> > different in Jenkins?
>
> No, we didn't touch Jenkins as far as I know, besides removing Nigel
> from a group on GitHub this morning.
>
> > Also happening with regression-full-run
> >
> > Would be good to resolve sooner, so we can get in many patches which
> > are blocking releases.
>
> Can you give a bit more information, like which execution exactly?
>
> For example, is
> https://build.gluster.org/job/regression-on-demand-full-run/255/
> what you are speaking of?
>
> (As I do not see the exact string you pointed to, I am not sure that's
> the issue.)
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS

___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra
Re: [Gluster-infra] Lot of 'centos7-regression' failures
All recent failures (4-5 of them) of https://build.gluster.org/job/regression-on-demand-full-run/, and centos7-regression runs like
https://build.gluster.org/job/centos7-regression/5048/console

On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan wrote:
> > And it is happening with 'failed to determine' the job... anything
> > different in Jenkins?
>
> No, we didn't touch Jenkins as far as I know, besides removing Nigel
> from a group on GitHub this morning.
>
> > Also happening with regression-full-run
> >
> > Would be good to resolve sooner, so we can get in many patches which
> > are blocking releases.
>
> Can you give a bit more information, like which execution exactly?
>
> For example, is
> https://build.gluster.org/job/regression-on-demand-full-run/255/
> what you are speaking of?
>
> (As I do not see the exact string you pointed to, I am not sure that's
> the issue.)
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS

--
Amar Tumballi (amarts)
[Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
https://bugzilla.redhat.com/show_bug.cgi?id=1686371

--- Comment #2 from M. Scherer ---
Also removed from jenkins-admins on GitHub.

--
You are receiving this mail because:
You are on the CC list for the bug.
[Gluster-infra] Lot of 'centos7-regression' failures
And it is happening with 'failed to determine' the job... anything different in Jenkins?

Also happening with regression-full-run.

It would be good to resolve this sooner, so we can get in many patches which are blocking releases.

-Amar
Re: [Gluster-infra] Lot of 'centos7-regression' failures
On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan wrote:
> And it is happening with 'failed to determine' the job... anything
> different in Jenkins?

No, we didn't touch Jenkins as far as I know, besides removing Nigel from a group on GitHub this morning.

> Also happening with regression-full-run
>
> Would be good to resolve sooner, so we can get in many patches which
> are blocking releases.

Can you give a bit more information, like which execution exactly?

For example, is https://build.gluster.org/job/regression-on-demand-full-run/255/ what you are speaking of?

(As I do not see the exact string you pointed to, I am not sure that's the issue.)

--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
Re: [Gluster-infra] Lot of 'centos7-regression' failures
On Thursday, 7 March 2019 at 20:12 +0530, Deepshikha Khandelwal wrote:
> Here is one of the console outputs which Amar is pointing to:
> https://build.gluster.org/job/centos7-regression/5051/console
>
> It showed up after we did the reboot yesterday, only on builder207.

Seems builder202 also has an issue.

> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> > On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan wrote:
> > > And it is happening with 'failed to determine' the job... anything
> > > different in Jenkins?
> >
> > No, we didn't touch Jenkins as far as I know, besides removing Nigel
> > from a group on GitHub this morning.
> >
> > > Also happening with regression-full-run
> > >
> > > Would be good to resolve sooner, so we can get in many patches which
> > > are blocking releases.
> >
> > Can you give a bit more information, like which execution exactly?
> >
> > For example, is
> > https://build.gluster.org/job/regression-on-demand-full-run/255/
> > what you are speaking of?
> >
> > (As I do not see the exact string you pointed to, I am not sure that's
> > the issue.)

--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
Re: [Gluster-infra] Lot of 'centos7-regression' failures
On Thursday, 7 March 2019 at 20:12 +0530, Deepshikha Khandelwal wrote:
> Here is one of the console outputs which Amar is pointing to:
> https://build.gluster.org/job/centos7-regression/5051/console
>
> It showed up after we did the reboot yesterday, only on builder207.

OK, so let's put the node offline for now; the others should pick up the work.

> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer wrote:
> > On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan wrote:
> > > And it is happening with 'failed to determine' the job... anything
> > > different in Jenkins?
> >
> > No, we didn't touch Jenkins as far as I know, besides removing Nigel
> > from a group on GitHub this morning.
> >
> > > Also happening with regression-full-run
> > >
> > > Would be good to resolve sooner, so we can get in many patches which
> > > are blocking releases.
> >
> > Can you give a bit more information, like which execution exactly?
> >
> > For example, is
> > https://build.gluster.org/job/regression-on-demand-full-run/255/
> > what you are speaking of?
> >
> > (As I do not see the exact string you pointed to, I am not sure that's
> > the issue.)

--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
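[Editorial note: taking a builder offline can be done from the Jenkins UI, or scripted against the `toggleOffline` endpoint Jenkins exposes for each node. A minimal sketch, assuming token-based basic auth is enabled on the instance; the builder name, message, and credentials are placeholders, and a CSRF crumb may additionally be required depending on the Jenkins configuration:]

```python
import base64
import urllib.parse
import urllib.request


def offline_url(jenkins_base, node, message):
    """Build the URL for Jenkins' per-node toggleOffline endpoint."""
    return "%s/computer/%s/toggleOffline?%s" % (
        jenkins_base.rstrip("/"),
        urllib.parse.quote(node),
        urllib.parse.urlencode({"offlineMessage": message}),
    )


def take_offline(jenkins_base, node, message, user, api_token):
    """POST to toggleOffline (flips the node's online/offline state)."""
    req = urllib.request.Request(
        offline_url(jenkins_base, node, message), method="POST")
    # Jenkins accepts HTTP basic auth with a user API token.
    cred = base64.b64encode(("%s:%s" % (user, api_token)).encode()).decode()
    req.add_header("Authorization", "Basic " + cred)
    return urllib.request.urlopen(req)


if __name__ == "__main__":
    # Placeholder values; prints the URL that would be POSTed to:
    # https://build.gluster.org/computer/builder207/toggleOffline?offlineMessage=failing+after+reboot
    print(offline_url("https://build.gluster.org", "builder207",
                      "failing after reboot"))
```

Note that the endpoint toggles rather than sets the state, so a script like this should first check the node's current status (e.g. via `/computer/<name>/api/json`) before flipping it.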
[Gluster-infra] [Bug 1686371] New: Cleanup nigel access and document it
https://bugzilla.redhat.com/show_bug.cgi?id=1686371

            Bug ID: 1686371
           Summary: Cleanup nigel access and document it
           Product: GlusterFS
           Version: 4.1
            Status: NEW
         Component: project-infrastructure
          Assignee: b...@gluster.org
          Reporter: msche...@redhat.com
                CC: b...@gluster.org, gluster-infra@gluster.org
  Target Milestone: ---
    Classification: Community

Description of problem:

Nigel Babu left the admin team as well as Red Hat. We should clean up, remove his access, and document that. So far, here is what we have to do:

Access to remove:
- remove from github (group Github-organization-Admins)
- remove ssh keys in ansible => done
- remove alias from root on private repo => done
- remove alias from group_vars/nagios/admins.yml => done
- remove entry from jenkins (on https://build.gluster.org/configureSecurity/) => done
- remove from gerrit permission => TODO
- remove from gluster repo => edit ./MAINTAINERS
- remove from ec2 => TODO

While on it, there are a few passwords and keys to rotate:
- rotate the ansible ssh keys => done, but we need to write down the process (ideally, an ansible playbook)
- change nagios password => TODO
- rotate the jenkins ssh keys => TODO, write a process

Maybe more needs to be done.
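[Editorial note: the playbook the bug asks for was not written down here. As a starting point only, a hypothetical Ansible sketch of the ssh key rotation step; the host group, user, and key paths are placeholders, and it assumes the `community.crypto` and `ansible.posix` collections are installed:]

```yaml
# Hypothetical rotation playbook: generate a fresh deploy key on the
# control host, authorize it everywhere, then revoke the old public key.
- hosts: all
  vars:
    deploy_user: ansible                               # placeholder
    new_key: /etc/ansible/keys/id_ed25519_new          # placeholder
    old_pub: /etc/ansible/keys/id_ed25519_old.pub      # placeholder
  tasks:
    - name: Generate a new ed25519 keypair on the control host
      community.crypto.openssh_keypair:
        path: "{{ new_key }}"
        type: ed25519
      delegate_to: localhost
      run_once: true

    - name: Authorize the new public key on managed hosts
      ansible.posix.authorized_key:
        user: "{{ deploy_user }}"
        key: "{{ lookup('file', new_key + '.pub') }}"
        state: present

    - name: Remove the old public key from managed hosts
      ansible.posix.authorized_key:
        user: "{{ deploy_user }}"
        key: "{{ lookup('file', old_pub) }}"
        state: absent
```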
[Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
https://bugzilla.redhat.com/show_bug.cgi?id=1686371

Worker Ant changed:

           What    |Removed |Added
           -------------------------
           Status  |NEW     |POST

--- Comment #1 from Worker Ant ---
REVIEW: https://review.gluster.org/22320 (Remove Nigel, as he left the company) posted (#1) for review on master by Michael Scherer
[Gluster-infra] [Bug 1686371] Cleanup nigel access and document it
https://bugzilla.redhat.com/show_bug.cgi?id=1686371

Worker Ant changed:

           What             |Removed |Added
           ---------------------------------
           External Bug ID  |        |Gluster.org Gerrit 22320
[Gluster-infra] Upgrading build.gluster.org
Hello,

I've planned to upgrade build.gluster.org tomorrow morning, so as to install and pull in the latest security upgrades of the Jenkins plugins. I'll stop all the running jobs and re-trigger them once I'm done with the upgrade.

The downtime window will be:
UTC: 0330 to 0400
IST: 0900 to 0930

The outage is for 30 minutes. Please bear with us as we continue to ensure the latest plugins and fixes for build.gluster.org.

Thanks,
Deepshikha
[Gluster-infra] [Bug 1686034] Request access to docker hub gluster organisation.
https://bugzilla.redhat.com/show_bug.cgi?id=1686034

Sridhar Seshasayee changed:

           What        |Removed |Added
           ----------------------------
           Status      |NEW     |CLOSED
           Resolution  |---     |NOTABUG
           Last Closed |        |2019-03-08 04:34:51