[Gluster-devel] Regression testing status report

2014-06-13 Thread Justin Clift
Small update. The new "rackspace-regression-2GB" queue on build.gluster.org has been running all day doing regression tests: http://build.gluster.org/job/rackspace-regression-2GB/ It just finished running its 100th one a few seconds ago. 100 regression runs in less than 14 hours is pretty go

Re: [Gluster-devel] Debugging regression-test failures

2014-06-13 Thread Justin Clift
On 13/06/2014, at 1:29 PM, Jeff Darcy wrote: > Lately it still seems that the "glusterfs" command has been returning an exit > status of zero even when mounts fail. This is particularly irksome in > regression tests, which then execute the next step and report failure on the > wrong line. This us

[Gluster-devel] New Rackspace regression tests can now be used

2014-06-13 Thread Justin Clift
Hi all, The new _2GB_ Rackspace regression testing slaves seem to be working fairly well. (the 1GB ones weren't) So, feel encouraged to submit jobs to this new queue: http://build.gluster.org/job/rackspace-regression-2GB/ (note the "2GB" on the end there. Make sure you pick this one ;>) Th

[Gluster-devel] Want more spurious regression failure alerts... ?

2014-06-13 Thread Justin Clift
Hi Pranith, Do you want me to keep sending you spurious regression failure notifications? There's a fair few of them, isn't there? Maybe we should make 1 BZ for the lot, and attach the logs to that BZ for later analysis? + Justin -- GlusterFS - http://www.gluster.org An open source, distributed

[Gluster-devel] spurious failure on bug-918437-sh-mtime.t

2014-06-13 Thread Justin Clift
csys = 752.54 CPU) Result: FAIL Same CR passed perfectly when run again. Logs for the above failure: http://slave22.cloud.gluster.org/logs/glusterfs-logs-20140613:10:30:16.tgz + Justin

[Gluster-devel] Debugging regression-test failures

2014-06-13 Thread Jeff Darcy
Lately it still seems that the "glusterfs" command has been returning an exit status of zero even when mounts fail. This is particularly irksome in regression tests, which then execute the next step and report failure on the wrong line. This used to work correctly. When did it break? Is anyone
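The failure mode described above can be worked around in a test script by not trusting the mount command's exit status at all, and instead checking the mount table directly. A minimal sketch of that idea follows; `is_mounted` is a hypothetical helper, not part of the GlusterFS test harness, and the second argument exists only so the check can be exercised against a fake mount table.

```shell
#!/bin/sh
# is_mounted MOUNTPOINT [TABLE]
# Returns success only if MOUNTPOINT appears as the second field of an
# entry in the mount table (defaults to /proc/mounts on Linux).
# This sidesteps a mount command that exits 0 even when the mount failed.
is_mounted () {
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' \
        "${2:-/proc/mounts}"
}

# Sketch of use in a .t-style regression test (paths are illustrative):
#   glusterfs --volfile-server=server --volfile-id=vol /mnt/gluster
#   is_mounted /mnt/gluster || exit 1   # fail here, on the right line
```

With a guard like this, a silently failed mount stops the test at the mount step rather than letting the next command fail on an unrelated line.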

Re: [Gluster-devel] epel-7 mock broken in rpm.t (due to ftp.redhat.com change?)

2014-06-13 Thread Justin Clift
On 13/06/2014, at 11:25 AM, Niels de Vos wrote: > I did not check any specifics, but maybe an updated mock package is > needed to get working configuration files. (If EPEL-7 is already fully > available.): > - > https://admin.fedoraproject.org/updates/FEDORA-EPEL-2014-1506/mock-1.1.39-1.el6 > >

Re: [Gluster-devel] epel-7 mock broken in rpm.t (due to ftp.redhat.com change?)

2014-06-13 Thread Niels de Vos
On Fri, Jun 13, 2014 at 06:48:51AM +0100, Justin Clift wrote: > Hi Kaleb, > > This just started showing up in rpm.t test output: > > ERROR: > Exception(/home/jenkins/root/workspace/rackspace-regression-2GB/rpmbuild-mock.d/glusterfs-3.5qa2-0.621.gita22a2f0.el6.src.rpm) > Config(epel-7-x86_64)

Re: [Gluster-devel] Merge this?

2014-06-13 Thread Vijay Bellur
On 06/13/2014 01:34 PM, Justin Clift wrote: Would you have a sec to merge this? http://review.gluster.org/#/c/8057/ It's just passed regression testing, and is a fix for the spurious failure of bug-830665.t, which is happening a lot. :) Done, thanks! -Vijay _

[Gluster-devel] Merge this?

2014-06-13 Thread Justin Clift
Would you have a sec to merge this? http://review.gluster.org/#/c/8057/ It's just passed regression testing, and is a fix for the spurious failure of bug-830665.t, which is happening a lot. :) + Justin

Re: [Gluster-devel] epel-7 mock broken in rpm.t (due to ftp.redhat.com change?)

2014-06-13 Thread Harshavardhana
Thought so! - On Thu, Jun 12, 2014 at 11:28 PM, Justin Clift wrote: > At a guess, it'll be somewhere on: > > https://git.centos.org > > No idea of specifics though. > > + Justin > > On 13/06/2014, at 7:21 AM, Harshavardhana wrote: >> Interesting - looks like all the sources have been moved? do