At a guess, it'll be somewhere on:
https://git.centos.org
No idea of specifics though.
+ Justin
On 13/06/2014, at 7:21 AM, Harshavardhana wrote:
> Interesting - looks like all the sources have been moved? Do we know where?
>
> On Thu, Jun 12, 2014 at 10:48 PM, Justin Clift wrote:
>> Hi Kale
Interesting - looks like all the sources have been moved? Do we know where?
On Thu, Jun 12, 2014 at 10:48 PM, Justin Clift wrote:
> Hi Kaleb,
>
> This just started showing up in rpm.t test output:
>
> ERROR:
> Exception(/home/jenkins/root/workspace/rackspace-regression-2GB/rpmbuild-mock.d/glus
I've got no logs so I can't confirm it. But it is most likely the same
issue we found.
~kaushal
On Thu, Jun 12, 2014 at 10:49 PM, Pranith Kumar Karampuri
wrote:
> Kaushal,
> Could you check if this is the same rebalance failure we
> discovered?
>
> Pranith
>
> On 06/12/2014 10:35 PM, Ju
Hi Kaleb,
This just started showing up in rpm.t test output:
ERROR:
Exception(/home/jenkins/root/workspace/rackspace-regression-2GB/rpmbuild-mock.d/glusterfs-3.5qa2-0.621.gita22a2f0.el6.src.rpm)
Config(epel-7-x86_64) 0 minutes 2 seconds
INFO: Results and/or logs in:
/home/jenkins/root/work
hi,
Could you let us know what exact problem you are running into?
Pranith
On 06/13/2014 09:27 AM, Krishnan Parthasarathi wrote:
Hi,
Pranith, who is the AFR maintainer, would be the best person to answer this
question. CC'ing Pranith and gluster-devel.
Krish
- Original Message -
Hi,
Pranith, who is the AFR maintainer, would be the best person to answer this
question. CC'ing Pranith and gluster-devel.
Krish
- Original Message -
> hi Krishnan Parthasarathi
>
> Could you tell me which glusterfs version has significant improvements for the
> glusterfs split-brain problem?
> Can
r this. For now reverted it
> >
> > Pranith
> > On 06/13/2014 02:16 AM, Justin Clift wrote:
> >> This one seems to be happening a lot now. The last 3 failures (across
> >> different nodes) were from this test.
> >>
> >> Log files here:
> >&
ems to be happening a lot now. The last 3 failures (across
>> different nodes) were from this test.
>>
>> Log files here:
>>
>>
>> http://slave2.cloud.gluster.org/logs/glusterfs-logs-20140612%3a18%3a53%3a06.tgz
>>
>> (am installing Nginx on
:
http://slave2.cloud.gluster.org/logs/glusterfs-logs-20140612%3a18%3a53%3a06.tgz
(am installing Nginx on the slaves now, for easy log retrieval as recommended
by Kaushal M)
+ Justin
On 12/06/2014, at 1:23 PM, Pranith Kumar Karampuri wrote:
Thanks for reporting. Will take a look.
Pranith
On 12/06/2014, at 6:47 PM, Pranith Kumar Karampuri wrote:
> On 06/12/2014 11:16 PM, Anand Avati wrote:
>> The client can actually be fixed to be compatible with both old and new
>> servers. We can change the errno from ESTALE to ENOENT before doing the GFID
>> mismatch check in client_lookup_cbk
This one seems to be happening a lot now. The last 3 failures (across
different nodes) were from this test.
Log files here:
http://slave2.cloud.gluster.org/logs/glusterfs-logs-20140612%3a18%3a53%3a06.tgz
(am installing Nginx on the slaves now, for easy log retrieval as recommended
by
Hi all,
Please don't start rackspace-regression testing jobs in Jenkins
yet.
I'm very manually changing settings on them, running a job, then
rebooting each node (not fun) while I try to make them more
reliable.
If you want a Gerrit CR run on one of the nodes, let me know
which one(s) and I'll g
On Thu, Jun 12, 2014 at 10:47 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
>
> On 06/12/2014 11:16 PM, Anand Avati wrote:
>
>
>
>
> On Thu, Jun 12, 2014 at 10:39 AM, Ravishankar N
> wrote:
>
>> On 06/12/2014 08:19 PM, Justin Clift wrote:
>>
>>> On 12/06/2014, at 2:22 PM, Ravishankar
On 06/12/2014 11:16 PM, Anand Avati wrote:
On Thu, Jun 12, 2014 at 10:39 AM, Ravishankar N wrote:
On 06/12/2014 08:19 PM, Justin Clift wrote:
On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
But we will still hit the problem
On 12/06/2014, at 6:39 PM, Ravishankar N wrote:
> On 06/12/2014 08:19 PM, Justin Clift wrote:
>> On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
>>
>>> But we will still hit the problem when rolling upgrade is performed
>>> from 3.4 to 3.5, unless the clients are also upgraded to 3.5
>>
>> Could
On Thu, Jun 12, 2014 at 10:39 AM, Ravishankar N
wrote:
> On 06/12/2014 08:19 PM, Justin Clift wrote:
>
>> On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
>>
>>
>>> But we will still hit the problem when rolling upgrade is performed
>>> from 3.4 to 3.5, unless the clients are also upgraded to 3.
On Thu, Jun 12, 2014 at 10:33 AM, Vijay Bellur wrote:
> On 06/12/2014 06:52 PM, Ravishankar N wrote:
>
>> Hi Vijay,
>>
>> Since glusterfs 3.5, posix_lookup() sends ESTALE instead of ENOENT [1]
>> when a parent gfid (entry) is not present on the brick. In a
>> replicate set up, this causes a
On 06/12/2014 11:09 PM, Ravishankar N wrote:
On 06/12/2014 08:19 PM, Justin Clift wrote:
On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
But we will still hit the problem when rolling upgrade is performed
from 3.4 to 3.5, unless the clients are also upgraded to 3.5
Could we introduce a cli
On 06/12/2014 08:19 PM, Justin Clift wrote:
On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
But we will still hit the problem when rolling upgrade is performed
from 3.4 to 3.5, unless the clients are also upgraded to 3.5
Could we introduce a client side patch into (say) 3.4.5 that helps
with
On 06/12/2014 06:52 PM, Ravishankar N wrote:
Hi Vijay,
Since glusterfs 3.5, posix_lookup() sends ESTALE instead of ENOENT [1]
when a parent gfid (entry) is not present on the brick. In a
replicate set up, this causes a problem because AFR gives more priority
to ESTALE than ENOENT, causing
Kaushal,
Could you check if this is the same rebalance failure we
discovered?
Pranith
On 06/12/2014 10:35 PM, Justin Clift wrote:
This one seems like a "proper" failure. Is it on your radar?
Test Summary Report
---
./tests/bugs/bug-857330/normal.t
This one seems like a "proper" failure. Is it on your radar?
Test Summary Report
---
./tests/bugs/bug-857330/normal.t (Wstat: 0 Tests: 24 Failed: 1)
Failed test: 13
http://build.gluster.org/job/rackspace-regression/123/console
I've disconnected that s
/consoleFull
Download-log-at ==>
http://build.gluster.org:443/logs/regression/glusterfs-logs-20140612:08:37:44.tgz
Test written by ==> Author: Avra Sengupta
./tests/basic/mgmt_v3-locks.t [11, 12, 13]
#!/bin/bash

. $(dirname $0)/../include.rc
On 12/06/2014, at 2:22 PM, Ravishankar N wrote:
> But we will still hit the problem when rolling upgrade is performed
> from 3.4 to 3.5, unless the clients are also upgraded to 3.5
Could we introduce a client side patch into (say) 3.4.5 that helps
with this?
Then mandate that 3.4 -> 3.5 rollin
On 12/06/2014, at 10:22 AM, Pranith Kumar Karampuri wrote:
> hi Guys,
> Rackspace slaves are in action now, thanks to Justin. Please use the URL
> in Subject to run the regressions. I already shifted some jobs to rackspace.
Good thinking, but please hold off on this for now.
The slaves are
Hi Vijay,
Since glusterfs 3.5, posix_lookup() sends ESTALE instead of ENOENT [1]
when a parent gfid (entry) is not present on the brick. In a
replicate set up, this causes a problem because AFR gives more priority
to ESTALE than ENOENT, causing IO to fail [2]. The fix is in progress at
Thanks for reporting. Will take a look.
Pranith
On 06/12/2014 05:52 PM, Raghavendra Talur wrote:
Hi Pranith,
This test failed for my patch set today and seems to be a spurious
failure.
Here is the console output for the run.
http://build.gluster.org/job/rackspace-regression/107/consoleFull
Hi Pranith,
This test failed for my patch set today and seems to be a spurious failure.
Here is the console output for the run.
http://build.gluster.org/job/rackspace-regression/107/consoleFull
Could you please have a look at it?
--
Thanks!
Raghavendra Talur | Red Hat Storage Developer |
Thanks a lot for quick resolution Sachin
Pranith
On 06/12/2014 04:38 PM, Sachin Pandit wrote:
http://review.gluster.org/#/c/8041/ is merged upstream.
~ Sachin.
- Original Message -
From: "Sachin Pandit"
To: "Raghavendra Talur"
Cc: "Pranith Kumar Karampuri" , "Gluster Devel"
Sent: Th
http://review.gluster.org/#/c/8041/ is merged upstream.
~ Sachin.
- Original Message -
From: "Sachin Pandit"
To: "Raghavendra Talur"
Cc: "Pranith Kumar Karampuri" , "Gluster Devel"
Sent: Thursday, June 12, 2014 12:58:44 PM
Subject: Re: [Gluster-devel] spurious regression failure in
On 06/12/2014 01:35 PM, Pranith Kumar Karampuri wrote:
Vijay,
Could you merge this patch please.
http://review.gluster.org/7928
Done, thanks.
-Vijay
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
hi Guys,
Rackspace slaves are in action now, thanks to Justin. Please use
the URL in Subject to run the regressions. I already shifted some jobs
to rackspace.
Pranith
Vijay,
Could you merge this patch please.
http://review.gluster.org/7928
Pranith
Patch link http://review.gluster.org/#/c/8041/.
~ Sachin.
- Original Message -
From: "Raghavendra Talur"
To: "Pranith Kumar Karampuri"
Cc: "Sachin Pandit" , "Gluster Devel"
Sent: Thursday, June 12, 2014 10:46:14 AM
Subject: Re: [Gluster-devel] spurious regression failure in
tests
On Thu, Jun 12, 2014 at 07:26:25AM +0100, Justin Clift wrote:
> On 12/06/2014, at 6:58 AM, Niels de Vos wrote:
>
> > If you capture a vmcore (needs kdump installed and configured), we may
> > be able to see the cause more clearly.
Oh, these seem to be Xen hosts. I don't think kdump (mainly kexec