On 30/04/2014, at 3:52 PM, Justin Clift wrote:
Reminder!!!
The weekly Gluster Community meeting is in 8 mins, in #gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged to attend and be
a part of it. :)
To add Agenda items
***
Just add
ideas?
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
On 03/05/2014, at 8:50 AM, Dennis Schafroth wrote:
Going back to the OS X commit (a3cb38e3edf005bef73da4c9cfd958474a14d50f), it
works better. Something has regressed the build system on OS X, but I cannot
pinpoint what.
On a side note: In the last week of my last job, I finally saw my boss
On 04/05/2014, at 12:25 AM, Harshavardhana wrote:
Been able to start a distribute service - but after applying this -
http://review.gluster.com/#/c/7655/
k. I just tried it here, but still getting that sig 11.
$ sudo gluster volume info
Volume Name: dht
Type: Distribute
Volume ID:
On 04/05/2014, at 6:07 AM, Vijay Bellur wrote:
snip
Slightly OT, what would it take to setup a jenkins integration for Mac? We
seem to be getting close to a NetBSD integration and having one for Mac would
be cool too.
Good question. I haven't looked at the Jenkins integration part
yet in
Hi Vijay,
Looks like we've started using PKG_CHECK_MODULES again in git master:
$ grep -ri PKG_CHECK_MODULES *
api/examples/configure.ac:PKG_CHECK_MODULES([GLFS], [glusterfs-api = 3])
configure.ac: PKG_CHECK_MODULES([GLIB], [glib-2.0],
configure.ac:PKG_CHECK_MODULES([ZLIB], [zlib =
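For context, PKG_CHECK_MODULES is the pkg-config autoconf macro, and a configure run on a box without pkg-config (common on a stock OS X install) will typically fail unless a fallback is provided. A minimal sketch of such a fallback in configure.ac — the zlib module/library names here are illustrative, not taken from the actual GlusterFS configure.ac:

```
dnl Sketch only: check for zlib via pkg-config, but fall back to a plain
dnl library check when pkg-config (or the zlib .pc file) is unavailable,
dnl e.g. on a stock OS X install.
PKG_CHECK_MODULES([ZLIB], [zlib],
                  [AC_DEFINE([HAVE_ZLIB], [1], [zlib found via pkg-config])],
                  [AC_CHECK_LIB([z], [deflate],
                                [ZLIB_LIBS="-lz"],
                                [AC_MSG_ERROR([zlib not found])])])
```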
On 04/05/2014, at 8:44 AM, Harshavardhana wrote:
snip
Don't know what to make of this - perhaps I should just force -O0 for
OSX for now.

Yep, your patch for that worked. :)
Now I get out Eclipse and start tracing through the cli code, to learn
how it works. (not today though, need to get away
On 06/05/2014, at 10:10 PM, James wrote:
On Tue, May 6, 2014 at 4:15 PM, Justin Clift jus...@gluster.org wrote:
Hi all,
The old gluster-devel mailing list archives have been migrated
to the new host server. If you notice any weirdness, please
let us know. :)
http
On 13/05/2014, at 12:27 AM, Anand Avati wrote:
snip
http://build.gluster.org/job/regression/build - key in the gerrit patch
number for the CHANGE_ID field, and click 'Build'.
Doesn't that just apply the given change to HEAD of its
associated branch? eg it won't apply dependent commits
first?
On 13/05/2014, at 11:43 AM, Vijay Bellur wrote:
Hi All,
Kaushal and I have effected the following changes on regression.sh in
build.gluster.org:
1. If a regression run results in a core and all tests pass, that particular
run will be flagged as a failure. Previously a core that would
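The core-detection rule in point 1 can be sketched in shell. Everything below (paths, variable names) is illustrative and simulated in a temp directory, not the actual regression.sh:

```shell
# Sketch: flag a regression run as failed when core files are present,
# even if every test passed. $workdir stands in for the build workspace.
workdir=$(mktemp -d)
touch "$workdir/core.1234"            # simulate a crash leaving a core behind
if [ -n "$(find "$workdir" -name 'core*')" ]; then
    result="FAIL: core found"
else
    result="PASS"
fi
echo "$result"
rm -rf "$workdir"
```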
On 13/05/2014, at 12:47 PM, Kaushal M wrote:
On Tue, May 13, 2014 at 4:20 PM, Justin Clift jus...@gluster.org wrote:
On 13/05/2014, at 11:43 AM, Vijay Bellur wrote:
snip
Do let us know if you have any comments on these changes.
Would you be ok to push these changes into the scripts
Reminder!!!
The weekly Gluster Community meeting is in 1 hour, in #gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged to attend and be a
part of it. :)
To add Agenda items
***
Just add them to the main text of the Google Doc, and **be at the
On 14/05/2014, at 3:02 PM, Justin Clift wrote:
Reminder!!!
The weekly Gluster Community meeting is in 1 hour, in #gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged to attend and be
a part of it. :)
To add Agenda items
***
Just add
Ignore this, just testing mailing list archiving...
+ Justin
. :)
http://titanpad.com/gluster-community-meetings
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
On 21/05/2014, at 3:57 PM, Justin Clift wrote:
Reminder!!!
The weekly Gluster Community meeting is in 3 mins, in
#gluster-meeting on IRC.
This is a completely public meeting, everyone is encouraged
to attend and be a part of it. :)
Thanks to everyone for attending this meeting. :)
(1hr
, they're
still online and untouched. Just let me know. If no-one pings me, I'll
nuke them tomorrow night.
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
somepath/glusterd-backend%N.log maybe?
On 22/05/2014, at 8:03 AM, Kaushal M wrote:
The glusterds spawned using cluster.rc store their logs at
/d/backends/N/glusterd.log . But the cleanup() function cleans
/d/backends/, so those logs are lost before we can archive.
cluster.rc should be
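One way the harness might archive those per-backend logs before cleanup() wipes /d/backends — a sketch using stand-in temp paths rather than the real test tree, with the glusterd-backendN.log naming from the suggestion above:

```shell
# Sketch: copy each backend's glusterd.log out before the tree is wiped.
# $base stands in for /d/backends; $save is an illustrative archive dir.
base=$(mktemp -d)
mkdir -p "$base/1" "$base/2"
echo "glusterd one" > "$base/1/glusterd.log"
echo "glusterd two" > "$base/2/glusterd.log"
save=$(mktemp -d)
for d in "$base"/*/; do
    n=$(basename "$d")
    cp "$d/glusterd.log" "$save/glusterd-backend$n.log"
done
count=$(ls "$save" | wc -l | tr -d ' ')
echo "$count"
```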
Hi guys,
Do you reckon we should get that Mac Mini in the Westford
lab set up to automatically test Gluster builds each
night or something?
If so, we should probably take/claim ownership of it,
upgrade the memory in it, and (possibly) see if it can be
put in the DMZ.
Thoughts?
+ Justin
--
Hi Pranith,
You don't have an account on build.gluster.org yet do you?
It's where the current (not my stuff) regression tests are run.
This is the script presently used to build the regression
tests:
$ more /opt/qa/build.sh
#!/bin/bash
set -e
SRC=$(pwd);
rpm -qa | grep glusterfs |
On 23/05/2014, at 10:17 AM, Pranith Kumar Karampuri wrote:
snip
2) That would need more bricks, more processes, more ports.
Meh to more ports. We should be moving to a model (maybe in 4.x?)
where we use fewer ports. Preferably just one or two in total, if it's
feasible from a network layer.
, and be at
the meeting. :)
https://public.pad.fsfe.org/p/gluster-community-meetings
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Hi all,
There's a new rackspace-regression testing project on Jenkins:
http://build.gluster.org/job/rackspace-regression/
This one runs regression testing jobs on specially set up Rackspace
VMs. Two of them atm, but I'll set up more over the next few days as
weird kinks get worked out.
On 29/05/2014, at 8:04 PM, Ben Turner wrote:
From: James purplei...@gmail.com
Sent: Wednesday, May 28, 2014 5:21:21 PM
On Wed, May 28, 2014 at 5:02 PM, Justin Clift jus...@gluster.org wrote:
Hi all,
Are there any Community members around who can test the GlusterFS 3.4.4
beta (rpms
On 29/05/2014, at 5:58 AM, Justin Clift wrote:
snip
Any idea what causes the job history for a project
(eg rackspace-regression) to disappear?
Asked on #jenkins IRC, and it seems to be a bug in Jenkins. :(
They said possibly this one:
https://issues.jenkins-ci.org/browse/JENKINS-16845
It _may_ be related to the new regression test cleanup function
I'm working on:
http://review.gluster.org/#/c/7937/
(possibly cleaned up / removed too much stuff)
It also might not be, as it's only supposed to have run on the
slaves.
+ Justin
On 01/06/2014, at 9:18 AM, Pranith Kumar
On 02/06/2014, at 7:04 AM, Kaleb KEITHLEY wrote:
snip
someone cleaned the loopback devices. I deleted 500 unix domain sockets in
/d/install/var/run and requeued the regressions.
Interesting. The extra sockets problem is what prompted me
to rewrite the cleanup function. The sockets are being
On 04/06/2014, at 6:33 AM, Pranith Kumar Karampuri wrote:
On 06/04/2014 01:35 AM, Ben Turner wrote:
Sent: Thursday, May 29, 2014 6:12:40 PM
snip
FSSANITY_TEST_LIST: arequal bonnie glusterfs_build compile_kernel dbench dd
ffsb fileop fsx fs_mark iozone locks ltp multiple_files posix_compliance
On 03/06/2014, at 9:05 PM, Ben Turner wrote:
snip
So far so good on 3.4.4, sorry for the delay here. I had to fix my
downstream test suites to run outside of RHS / downstream gluster. I did
basic sanity testing on glusterfs mounts including:
FSSANITY_TEST_LIST: arequal bonnie
Good news. After reloading the Jenkins configuration from disk the other
day, the complete job history isn't disappearing any more.
+ Justin
On 30/05/2014, at 8:44 PM, Justin Clift wrote:
As a FYI, there weren't any jobs running on build.gluster.org, so
I hit the reload configuration from
On 04/06/2014, at 3:14 PM, Ben Turner wrote:
- Original Message -
From: Justin Clift jus...@gluster.org
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: Ben Turner btur...@redhat.com, gluster-us...@gluster.org, Gluster
Devel gluster-devel@gluster.org
Sent: Wednesday, June 4, 2014
, and be at
the meeting. :)
https://public.pad.fsfe.org/p/gluster-community-meetings
Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Hi all,
We need some people to review:
http://review.gluster.org/#/c/7963/
and:
http://review.gluster.org/#/c/7978/
If that gets done, we can release 3.5.1 beta 2 this week.
eg if anyone has the time, that would be directly helpful :)
+ Justin
--
Open Source and Standards @ Red Hat
, person to contact, whatever)
Any thoughts? Suggestions on places/people to include?
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter
On 11/06/2014, at 11:24 PM, Joe Julian wrote:
snip
This should be farmed out to some other company and linked-to. Isn't there
something like Angie's List for contractors (though preferably not Angie's
List) that does that?
Hadn't heard of Angie's List until you mentioned it. Something
like
On 11/06/2014, at 9:01 AM, Vijay Bellur wrote:
snip
I have seen dd being in uninterruptible sleep on b.g.o. There are also
instances [1] where anon-fd-nfs has run for close to 6000+ seconds. This
definitely points to the nfs deadlock.
A few of the rackspace regression runs seem to be getting
On 12/06/2014, at 12:12 AM, Justin Clift wrote:
On 11/06/2014, at 9:01 AM, Vijay Bellur wrote:
snip
I have seen dd being in uninterruptible sleep on b.g.o. There are also
instances [1] where anon-fd-nfs has run for close to 6000+ seconds. This
definitely points to the nfs deadlock.
A few
Hi Niels,
4 out of 5 Rackspace slave VMs hung overnight. Rebooted one of
them and it didn't come back. Checking out its console (they
have an in-browser Java applet for it) showed a kernel traceback.
Scrollback for the console (took some effort ;) is attached.
It's showing a bunch of XFS
like xfs bug. It is likely caused by a bad disk /
array or a really busy host.
On Wed, Jun 11, 2014 at 10:28 PM, Justin Clift jus...@gluster.org wrote:
Hi Niels,
4 out of 5 Rackspace slave VMs hung overnight. Rebooted one of
them and it didn't come back. Checking out its console
On 12/06/2014, at 6:58 AM, Niels de Vos wrote:
snip
If you capture a vmcore (needs kdump installed and configured), we may
be able to see the cause more clearly.
That does help, and so will Harsha's suggestion too probably. :)
I'll look into it properly later on today.
For the moment, I've
On 12/06/2014, at 10:22 AM, Pranith Kumar Karampuri wrote:
hi Guys,
Rackspace slaves are in action now, thanks to Justin. Please use the URL
in Subject to run the regressions. I already shifted some jobs to rackspace.
Good thinking, but please hold off on this for now.
The slaves are
At a guess, it'll be somewhere on:
https://git.centos.org
No idea of specifics though.
+ Justin
On 13/06/2014, at 7:21 AM, Harshavardhana wrote:
Interesting - looks like all the sources have been moved? Do we know where?
On Thu, Jun 12, 2014 at 10:48 PM, Justin Clift jus...@gluster.org
Would you have a sec to merge this?
http://review.gluster.org/#/c/8057/
It's just passed regression testing, and is a fix for
the spurious failure of bug-830665.t, which is
happening a lot. :)
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to
On 13/06/2014, at 11:25 AM, Niels de Vos wrote:
snip
I did not check any specifics, but maybe an updated mock package is
needed to get working configuration files. (If EPEL-7 is already fully
available.):
-
https://admin.fedoraproject.org/updates/FEDORA-EPEL-2014-1506/mock-1.1.39-1.el6
Hi Pranith,
Came across another spurious failure (no urgency):
Test Summary Report
---
./tests/bugs/bug-918437-sh-mtime.t (Wstat: 0 Tests: 31 Failed: 1)
Failed test: 23
Files=236, Tests=4604, 4022 wallclock secs ( 1.88 usr 1.39 sys + 344.26 cusr
405.01
Hi Pranith,
Do you want me to keep sending you spurious regression failure
notification?
There are a fair few of them, aren't there?
Maybe we should make 1 BZ for the lot, and attach the logs
to that BZ for later analysis?
+ Justin
--
GlusterFS - http://www.gluster.org
An open source,
;)
This queue has several slaves running simultaneously, so
you'll get a faster result using this than the old one.
If anything weird happens, let me know. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
helpful. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
On 14/06/2014, at 12:02 AM, Justin Clift wrote:
Small update. The new rackspace-regression-2GB queue on build.gluster.org
has been running all day doing regression tests:
http://build.gluster.org/job/rackspace-regression-2GB/
This is going well. It's doing its 200th regression test
On 16/06/2014, at 4:50 PM, Jeff Darcy wrote:
Can't thank you enough for this :-)
+100
Justin has done a lot of hard, tedious work whipping this infrastructure into
better shape, and has significantly improved the project as a result. Such
efforts deserve to be recognized. Justin, I
and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
On 19/06/2014, at 11:07 AM, Justin Clift wrote:
On 19/06/2014, at 10:52 AM, Krishnan Parthasarathi wrote:
Pranith,
The core's backtrace [1] is not 'analysable'. It doesn't show function names
and displays ()? for all the frames across all threads. It would be helpful
if we had the glusterd
On 19/06/2014, at 6:55 PM, Benjamin Turner wrote:
snip
I went through these a while back and removed anything that wasn't valid for
GlusterFS. This test was passing on 3.4.59 when it was released, I am
thinking it may have something to do with a symlink to the same directory bz
I found
. But wondering it's something
Gluster should take care of itself. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
On 20/06/2014, at 3:43 PM, Raghavendra Bhat wrote:
snip
I am seeing the glupy.t test failing in some test cases. It is failing in my
local machine as well (with latest master). Is it a genuine failure or a
spurious one?
/tests/features/glupy.t(Wstat: 0 Tests: 6
On 20/06/2014, at 3:49 PM, Vijay Bellur wrote:
snip
Side-effect of merging this patch [1]. Have reverted the change to let
regression tests pass.
That seems to have fixed it.
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
instead)
Gerrit has been restarted ($ bin/gerrit.sh restart), and
things seem to be working ok. I've signed in using the new
OpenID end point, and no problems so far.
Hope that helps. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file
On 23/06/2014, at 1:14 AM, Harshavardhana wrote:
On Jun 22, 2014 4:21 PM, Justin Clift jus...@gluster.org wrote:
Hmmm, does anyone know the answer to these questions being asked on the
upstream FreeBSD forum?
1. What about file AIO support by glusterfs under FreeBSD?
AIO support
Hi all,
Does anyone have time to do the 2nd review of this CR with
spurious regression test failure fixes?
http://review.gluster.org/#/c/8117/
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes
://meetbot.fedoraproject.org/gluster-meeting/2014-06-25/gluster-meeting.2014-06-25-15.10.log.html
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter
and integrated in the debian
package.
Btw, the backport of this was merged into the release-3.5 branch
yesterday. It'll be in GlusterFS 3.5.2.
Hope that helps. :)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
On 25/06/2014, at 11:19 PM, Justin Clift wrote:
There's a new rackspace-regression-2GB-triggered in Jenkins on
build.gluster.org.
Please ignore it for now. I'm just experimenting with having Gerrit
automatically trigger regression tests.
This seems to be working ok, so I've enabled
On 26/06/2014, at 1:40 AM, Pranith Kumar Karampuri wrote:
snip
While I agree with everything you said, complaining about tabs/spaces should
be done by a script. Something like http://review.gluster.com/#/c/5404
+1
And we can use a git trigger to reject future patches that have tabs in
them.
On 26/06/2014, at 2:12 AM, Pranith Kumar Karampuri wrote:
On 06/26/2014 06:19 AM, Justin Clift wrote:
On 26/06/2014, at 1:40 AM, Pranith Kumar Karampuri wrote:
snip
While I agree with everything you said. Complaining about tabs/spaces
should be done by a script. Something like
http
On 26/06/2014, at 4:54 PM, Dan Lambright wrote:
Implementing brick splitting using LVM would allow you to treat each logical
volume (split) as an independent brick. Each split would have its own
.glusterfs subdirectory. I think this would help with taking snapshots as
well.
Would brick
On 26/06/2014, at 9:15 PM, James Shubin wrote:
Does someone have an operating version table for GlusterFS?
These are the operating-version= values in glusterd.info
I'm looking to know which gluster versions correspond to which
operating versions, eg:
'3.3' = '1', # eg: blank...
On 27/06/2014, at 2:51 AM, Kaushal M wrote:
3.3 - 1
3.4.x - 2
3.5.0 - 3
3.5.1 - 30501
3.6.0 - 30600
Thanks Kaushal, added it here:
http://www.gluster.org/community/documentation/index.php/OperatingVersions
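The mapping above comes from the operating-version field in glusterd.info. A sketch of reading it — the file contents and path here are made up for illustration (the real file normally lives under /var/lib/glusterd/):

```shell
# Sketch: extract operating-version from a (fabricated) glusterd.info.
info=$(mktemp)
cat > "$info" <<'EOF'
UUID=00000000-0000-0000-0000-000000000000
operating-version=30501
EOF
opver=$(awk -F= '/^operating-version=/{print $2}' "$info")
echo "$opver"    # per the table above, 30501 corresponds to 3.5.1
rm -f "$info"
```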
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
On 27/06/2014, at 5:20 AM, Ravishankar N wrote:
On 06/27/2014 09:10 AM, Justin Clift wrote:
On 27/06/2014, at 2:51 AM, Kaushal M wrote:
3.3 - 1
3.4.x - 2
3.5.0 - 3
3.5.1 - 30501
3.6.0 - 30600
Thanks Kaushal, added it here:
http://www.gluster.org/community/documentation/index.php
On 01/07/2014, at 11:30 AM, Kaushal M wrote:
snip
Varun (CCd) and I have been working on
this since last week, and are hoping to get at least the base
framework ready and merged into 3.6.
Cool. Personally, I reckon this is extremely important, as a lot
of future changes will rely on it being
On 26/06/2014, at 7:27 PM, Harshavardhana wrote:
http://review.gluster.org/#/c/8181/ - posted a new change. Wouldn't it
be worth adding this to the smoke tests rather than at ./rfc.sh? - we
can provide a detailed summary - since we do not have 'commit/push'
style patch submission.
We can
On 04/07/2014, at 7:30 AM, Santosh Pradhan wrote:
Thanks guys for looking into this. I am just wondering how this passed the
regression before Niels could merge this in?
It was due to stupidity on my part. ;)
Was adjusting the bash script in jenkins the other day, attempting
to get the
On 04/07/2014, at 7:34 AM, Harshavardhana wrote:
On Thu, Jul 3, 2014 at 11:30 PM, Santosh Pradhan sprad...@redhat.com wrote:
Thanks guys for looking into this. I am just wondering how this passed the
regression before Niels could merge this in? Good part is the test case needs
modification, not
On 04/07/2014, at 12:30 PM, Vijay Bellur wrote:
Hi All,
Given the holiday weekend in the US, I feel that it would be appropriate to move
the 3.6 feature freeze date to mid next week so that we can have more reviews
done and address review comments too. We can still continue to track other
On 04/07/2014, at 1:14 PM, Vijay Bellur wrote:
Hi All,
I have pulled together some statistics from gerrit for fun. The statistics
that I have generated are limited by my understanding of gerrit's gsql
interface. I don't claim that these stats are 100% accurate - if you notice
any
On 05/07/2014, at 3:52 AM, Vijay Bellur wrote:
On 07/05/2014 01:59 AM, Justin Clift wrote:
snip
Wonder if it's picking up the "Change has been successfully
cherry-picked as xxx" messages and stuff as reviews?
Only if the CR accompanying that message has a +1 or +2 vote. If a patch gets
though. ;)
Personally, I reckon we should have a discussion
on gluster-devel about this. There might be really
good + / - for each, so a clear decision can be made.
And there may be other better ideas too.
What're your thoughts on this stuff?
Regards and best wishes,
Justin Clift
On 06/07/2014, at 8:03 AM, Justin Clift wrote:
snip
There are a few ways we could address this:
* Adjust the smoke test job so it runs on the Rackspace
slaves
Hopefully not hard. But not sure. We can try it out.
* Change the triggered regression test, so it doesn't
start
On 06/07/2014, at 8:24 AM, Pranith Kumar Karampuri wrote:
On 07/06/2014 12:36 PM, Justin Clift wrote:
snip
* We could also disable the old regression job, so it
doesn't run on build.gluster.org.
This only reduces the race window. Doesn't fix the problem ;-). But I see
your point
On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote:
On 07/06/2014 11:05 PM, Vijay Bellur wrote:
On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote:
hi Justin/Vijay,
I always felt '-1' saying 'I prefer you didn't submit this' is a
bit harsh. Most of the time all it means is
On 08/07/2014, at 9:53 AM, Anders Blomdell wrote:
snip
2. What are the rules for marking a bug as blocking the trackers (can't find
one for
3.6.0 yet, so currently a moot point).
We should probably create an alias thing for 3.6.0, like we
have for 3.5.2:
. :)
https://public.pad.fsfe.org/p/gluster-community-meetings
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
That v3.5qa2 tag name on master is annoying, due to the RPM
naming it causes when building on master.
Did we figure out a solution?
Maybe we should do a v3.6something tag at feature freeze
time or something?
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file
Looks like Rackspace had some issues during one of their upgrades
or something, and two of the slave VMs permanently died:
* slave20
* slave22
Those VMs have been nuked and new VMs built to take their place
(same names).
Any logs that were on the old VMs are no longer available,
having also
On 10/07/2014, at 1:41 PM, Niels de Vos wrote:
On Thu, Jul 10, 2014 at 05:14:08PM +0530, Vijay Bellur wrote:
snip
A lot of regression runs are failing because of this test unit.
Given feature freeze is around the corner, shall we provide a +1
verified manually for those patchsets that fail
On 10/07/2014, at 2:27 PM, Joseph Fernandes wrote:
Hi All,
1) Tried reproducing the issue in a local setup by running the regression test
multiple times in a for loop, but the issue never hit!
2) As Avra pointed out the logs suggests that the port(49159) assigned by the
glusterd(host1) to
On 10/07/2014, at 12:44 PM, Vijay Bellur wrote:
snip
A lot of regression runs are failing because of this test unit. Given feature
freeze is around the corner, shall we provide a +1 verified manually for
those patchsets that fail this test?
Went through and did this manually, as Gluster
On 11/07/2014, at 2:30 PM, Jeff Darcy wrote:
On Fri, Jul 11, 2014 at 11:48:18AM +0100, Justin Clift wrote:
It turns out that with the 'dd' command, the block size parameter
(bs=) needs a bit of special treatment to work cross-platform.
Instead of using the 'M' suffix (eg 1M for 1MB), it's
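One portable spelling (an assumption on my part, since the quote cuts off) is the 'k' suffix — 1024k rather than 1M — which both GNU and BSD dd accept. A sketch writing a 1 MB test file:

```shell
# BSD/OS X dd doesn't accept the 'M' size suffix that GNU dd allows,
# so spell one megabyte as 1024k, which both implementations accept.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1024k count=1 2>/dev/null
size=$(wc -c < "$tmp" | tr -d ' ')
echo "$size"    # 1048576 bytes, i.e. 1 MB
rm -f "$tmp"
```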
On 11/07/2014, at 11:36 AM, Anders Blomdell wrote:
In
http://build.gluster.org/job/rackspace-regression-2GB-triggered/297/consoleFull,
I have
one failure:
No volumes present
read failed: No data available
read returning junk
fd based file operation 1 failed
read failed: No data
On 14/07/2014, at 6:38 AM, Pranith Kumar Karampuri wrote:
snip
Kick off a regression test manually here, and see if the same
failure occurs:
http://build.gluster.org/job/rackspace-regression-2GB/
If it happens again, it's not a spurious one.
I believe this is a spurious one. I didn't
investigate at your leisure. (useful?)
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift
Hi Eco,
Humble just published a blog post about the new GlusterFS 3.4.5
beta2 RPMs:
http://blog.gluster.org/2014/07/glusterfs-3-4-5beta2-rpms-are-available-now/
How do we get the www.gluster.org website updated, so new blog
posts appear automagically?
Regards and best wishes,
Justin Clift
On 14/07/2014, at 4:20 PM, Anders Blomdell wrote:
On 2014-07-14 16:03, Vijay Bellur wrote:
Hi All,
I intend creating the 3.6 branch tomorrow. After that, the branch
will be restricted to bug fixes only. If you have any major patches
to be reviewed and merged for release-3.6, please update
On 15/07/2014, at 1:45 PM, Jeff Darcy wrote:
Please respond if you guys volunteer to add documentation for any
of the following things that are not already taken.
I think the most important thing to describe for each of these is the
life cycle rules. When I've tried to teach people
On 17/07/2014, at 7:38 PM, Vijay Bellur wrote:
snip
Had a discussion with Pranith and we felt that 3.5.2 beta is of
more importance than 3.6 community test days.
Definitely agree personally. People have 3.5.x in production, so
to my thinking issues with that should receive priority. 3.6 will
On 21/07/2014, at 11:03 AM, Anders Blomdell wrote:
Is it possible to run a single regression test on Jenkins (preferably
from a suitably crafted rfc, in order not to clutter BZ with random
noise [i.e. my feeble test-cases])?
Hmmm, with a bit of mucking around, that can be done. You've got
enthusiasts to data geeks
For further information
***
Website: fifthelephant.in
Schedule: funnel.hasgeek.com
Tickets: fifthel.doattend.com
Send your queries to i...@hasgeek.com or call +91 80 6768 4422
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
On 23/07/2014, at 2:09 PM, Emmanuel Dreyfus wrote:
snip
I am a bit frustrated by the status of NetBSD autobuilds: failures are
ignored for now, which makes me wonder why I spent time setting it up :-)
Sorry about that Manu. :(
The NetBSD autobuild has been configured to not vote so far, so
On Thursday 24 July 2014 07:04:11 Justin Clift wrote:
snip
Surely there's some way we can make this work, such that the optimised
assembler code is only used for CPUs that support it, with non-optimised
C or something used for the others.
I'm just working on it. The use of Intel's SSE2
with a compiler other than gcc, a non-x86 host...
If someone has a box or VM online that would suit (and we can get
to via the internet), we should be able to hook it up.
Volunteers anyone? :D
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
An open source
On 28/07/2014, at 3:02 AM, Emmanuel Dreyfus wrote:
On Mon, Jul 28, 2014 at 01:52:53AM +0100, Justin Clift wrote:
If someone has a box or VM online that would suit (and we can get
to via the internet), we should be able to hook it up.
Or we can run an emulator on a VM at Rackspace. Here