[Gluster-infra] [Bug 1375521] slave33.cloud.gluster.org is out of space

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375521

Nigel Babu  changed:

           What|Removed         |Added
----------------------------------------------------------
         Status|ASSIGNED        |CLOSED
     Resolution|---             |CURRENTRELEASE
    Last Closed|                |2016-09-13 06:41:26



--- Comment #4 from Nigel Babu  ---
Back online.


[Gluster-infra] [Bug 1375521] slave33.cloud.gluster.org is out of space

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375521



--- Comment #3 from Nigel Babu  ---
Filed bug 1375526 for the test harness issue. I'll clean up the
/var/log/messages file and bring the node back online.
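
A minimal sketch of what that cleanup can look like, assuming the file in
question is the default syslog target /var/log/messages (the path is only
illustrative):

  ls -lh /var/log/messages   # check how large the log has grown
  : > /var/log/messages      # truncate it in place; syslog keeps writing to the same inode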


[Gluster-infra] [Bug 1375521] slave33.cloud.gluster.org is out of space

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375521



--- Comment #2 from Nigel Babu  ---
Also relevant:
[root@slave33 log]# ps ax | grep rpc
 2154 ?Ss 0:00 /sbin/rpc.statd
 6606 ?S  0:25 [rpciod/0]
 6607 ?S  0:27 [rpciod/1]
 6786 ?Ss   500:19 /sbin/rpc.statd
 8108 ?Ss 0:00 /sbin/rpc.statd
16957 ?Ss 3:02 rpcbind -w
18389 pts/0S+ 0:00 grep rpc
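
A hedged sketch of how the duplicate statd daemons could be cleared, assuming a
CentOS 6 host where rpc.statd is managed by the nfslock init script:

  ps ax | grep [r]pc.statd   # confirm how many copies are running
  service nfslock stop       # stop the managed instance
  pkill rpc.statd            # clean up any leftover copies
  service nfslock start      # bring back a single, clean instance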


[Gluster-infra] [Bug 1375521] slave33.cloud.gluster.org is out of space

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375521

Nigel Babu  changed:

           What|Removed         |Added
----------------------------------------------------------
         Status|NEW             |ASSIGNED
       Assignee|b...@gluster.org|nig...@redhat.com



--- Comment #1 from Nigel Babu  ---
Seeing lots of these:

Sep 11 04:20:42 slave33 sm-notify[16681]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16684]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16684]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16689]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16689]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16692]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16692]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16695]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16695]: Already notifying clients; Exiting!
Sep 11 04:20:42 slave33 sm-notify[16698]: Version 1.2.3 starting
Sep 11 04:20:42 slave33 sm-notify[16698]: Already notifying clients; Exiting!
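
A quick way to gauge how much of the disk this flood is consuming, assuming the
messages land in the default syslog file:

  ls -lh /var/log/messages
  grep -c 'Already notifying clients' /var/log/messages   # count the repeated sm-notify lines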


[Gluster-infra] [Bug 1375521] New: slave33.cloud.gluster.org is out of space

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375521

            Bug ID: 1375521
           Summary: slave33.cloud.gluster.org is out of space
           Product: GlusterFS
           Version: mainline
         Component: project-infrastructure
          Keywords: Triaged
          Assignee: b...@gluster.org
          Reporter: nde...@redhat.com
                CC: b...@gluster.org, gluster-infra@gluster.org,
                    nig...@redhat.com



Description of problem:
slave33.cloud.gluster.org has been marked offline in Jenkins.

https://build.gluster.org/job/centos6-regression/737/console failed with a
weird Jenkins error:

ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from git://review.gluster.org/glusterfs.git
    at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
    at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
    at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
    at hudson.scm.SCM.checkout(SCM.java:485)
    at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
    at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
    at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
    at hudson.model.Run.execute(Run.java:1738)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:98)
    at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url git://review.gluster.org/glusterfs.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock


Another job on the same system failed with ENOSPC:
https://build.gluster.org/job/devrpm-el7/1408/console

ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init /home/jenkins/root/workspace/devrpm-el7
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:656)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:463)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
    at hudson.remoting.UserRequest.perform(UserRequest.java:120)
    at hudson.remoting.UserRequest.perform(UserRequest.java:48)
    at hudson.remoting.Request$2.run(Request.java:332)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
    at ..remote call to slave33.cloud.gluster.org(Native Method)
    at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
    at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
    at hudson.remoting.Channel.call(Channel.java:781)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
    at sun.reflect.GeneratedMethodAccessor129.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
    at com.sun.proxy.$Proxy48.execute(Unknown Source)
    at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1057)
    at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
    at hudson.scm.SCM.checkout(SCM.java:485)
    at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
    at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
    at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
    at hudson.model.Run.execute(Run.java:1738)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:98)
    at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git init /home/jenkins/root/workspace/devrpm-el7" returned status code 1:
stdout: 
stderr: /home/jenkins/root/workspace/devrpm-el7/.git: No space left on device
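
A hedged sketch of how the out-of-space condition can be confirmed on the slave
(standard coreutils; the exact paths are only examples):

  df -h /home/jenkins                                   # how full is the workspace filesystem
  du -sh /home/jenkins/root/workspace/* 2>/dev/null | sort -h | tail   # largest workspaces
  du -sh /var/log/* 2>/dev/null | sort -h | tail                       # largest log files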


[Gluster-infra] Gluster Infra Updates - August

2016-09-13 Thread Nigel Babu
Hello folks,

Here's a delayed summary of infra updates from August.

# Gerrit
* We now use Worker Ant to update bugs on bugzilla.

# Jenkins
* The conversion of all jobs on build.gluster.org to JJB is complete. We have
  our first new job written entirely in JJB (strfmt_errors).
* We've built a prototype to visualize regression failures. This should go live
  in September.
* Fixed a large share of the infra-related NetBSD hangs.

# General Infrastructure
* We continue to migrate code out of Salt and into Ansible. Only a single role
  remains in Salt now. Work is underway with other projects to share a common
  Postfix role.
* We've physically moved the ci.gluster.org server and reinstalled it properly.
  Now preparing for VM migration.

--
nigelb


[Gluster-infra] [Bug 1375440] cleanup script unexpectedly called during smoke tests

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375440

Nigel Babu  changed:

           What|Removed         |Added
----------------------------------------------------------
         Status|NEW             |CLOSED
             CC|                |nig...@redhat.com
     Resolution|---             |CURRENTRELEASE
    Last Closed|                |2016-09-13 03:11:46



--- Comment #1 from Nigel Babu  ---
This happened because the bash script runs with `set -e` and the cleanup step
is called before the test starts. Since the log file does not exist yet at that
point, the `rm` fails and `set -e` aborts the whole job. The fix was to use
`rm -rf`, which succeeds even when the file is missing. See:
https://github.com/gluster/glusterfs-patch-acceptance-tests/commit/fc1fdb39c37b2a19c1c6f201a49646c82facb919

Fix committed and deployed to all servers.
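
A minimal sketch of the failure mode, with a hypothetical log path and a
placeholder for the test invocation:

  #!/bin/bash
  set -e                       # any command that exits non-zero aborts the script
  rm /build/regression.log     # exits 1 on a fresh node where the log does not
                               # exist yet, so the script dies right here
  run_regression_test          # placeholder; never reached on a fresh node

  # With -f (as in the fix), rm exits 0 even when the file is missing:
  rm -rf /build/regression.log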


[Gluster-infra] [Bug 1375440] New: cleanup script unexpectedly called during smoke tests

2016-09-13 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1375440

            Bug ID: 1375440
           Summary: cleanup script unexpectedly called during smoke tests
           Product: GlusterFS
           Version: mainline
         Component: project-infrastructure
          Assignee: b...@gluster.org
          Reporter: mchan...@redhat.com
                CC: b...@gluster.org, gluster-infra@gluster.org



Job URL for reference: https://build.gluster.org/job/smoke/30578/console
