Hello folks,
This change has gone through, but I wanted to let folks here know as well. I'm
removing myself as maintainer from everything to reflect that I will no longer
be the primary point of contact for any of the components I used to own.
However, I will still be around and contributing as
inted in the logs.
On Fri, Feb 8, 2019 at 7:49 AM Nigel Babu wrote:
> Hello,
>
> We've reached the halfway mark in the migration and half our builders
> are now running on AWS. I've turned off the RAX builders and configured
> them to come online only if the AWS builde
Hello,
We've reached the halfway mark in the migration and half our builders
are now running on AWS. I've turned off the RAX builders and configured
them to come online only if the AWS builders cannot handle the number of
jobs running at any given point.
The new builders are named builder2xx.a
Hello folks,
In the last week, if you have had a regression job that failed, you will
not find a log for it. This is due to a mistake I made while deleting code.
Along with the code that pushed logs to an internal HTTP server, I
also deleted a line which handled the log creation. Apologies f
Hello,
Deepshikha and I have been working on understanding and using the k8s
framework for testing the GCS stack. With the help of the folks from
sig-storage, we've managed to write a sample test that needs to be run
against an already set-up k8s cluster with GCS installed on top[1]. This is
a temp
Hello folks,
The infra team has not been sending regular updates recently because we’ve
been caught up in several different pieces of work that were running longer
than our 2-week sprint cycles. This is a summary of what we’ve done so
far since the last update.
* The bugzilla updates are done wi
Hello folks,
We've had a bug filed asking us to automate adding a comment on Github when
there is a new spec patch. I'm going to deny this request.
* The glusterfs-specs repo does not have an issue tracker and does not seem
to ever need an issue tracker. We currently limit pre-merge commenting on
Github to the r
Hello folks,
Going to restart Gerrit on review.gluster.org for a quick config change in
the next 15 mins. Estimated outage of 5 mins. I'll update this thread when
we're back online.
--
nigelb
Oops, missed finishing a line.
Please avoid making any changes directly via the Jenkins UI going forward.
Any configuration changes need to be made in the repo so that the repo's
config drives Jenkins.
On Fri, Nov 2, 2018 at 11:32 AM Nigel Babu wrote:
> Hello folks,
>
> On Monday, I merg
Hello folks,
On Monday, I merged in the changes that allowed all the jobs in Centos CI
to be handled in an automated fashion. In the past, it depended on Infra
team members to review, merge, and apply the changes on Centos CI. I've now
changed that so that the individual job owners can do their ow
Hello folks,
Here's the Infra team's update for the last 2 weeks.
* Created an architecture document for Automated Upgrade Testing. This is
now done and is undergoing reviews. It is scheduled to be published on the
devel list as soon as we have a decent PoC.
* Finished part of the migrati
Hello folks,
I meant to send this out on Monday, but it's been a busy few days.
* The infra pieces of distributed regression are now complete. A big shout
out to Deepshikha for driving this and Ramky for his help in getting this to
completion.
* The GD2 containers and CSI container builds work now. We
On Sun, Sep 30, 2018 at 6:45 PM Sachidananda URS wrote:
>
>
> On Sun, Sep 30, 2018 at 12:56 PM, Yaniv Kaul wrote:
>
>>
>>
>> On Fri, Sep 28, 2018 at 2:33 PM Sachidananda URS wrote:
>>
>>> Hi,
>>>
>>> gluster-ansible project is aimed at automating the deployment and
>>> maintenance of GlusterFS
Hello folks,
I did a quick unplanned Jenkins maintenance today to upgrade 3 plugins with
security issues in them. This is now complete. There was a brief period
where we did not start new jobs until Jenkins restarted. No existing jobs
should have been interrupted or canceled. Pl
On Tue, Sep 11, 2018 at 7:06 PM Michael Scherer wrote:
> And... rescue mode is not working. So the server is down until
> Rackspace fix it.
>
> Can someone disable the freebsd smoke test, as I think our 2nd builder
> is not yet building fine ?
>
Disabled. Please do not merge any JJB review requ
On Mon, Sep 10, 2018 at 7:08 PM Shyam Ranganathan
wrote:
> My assumption here is that for each patch that mentions a BZ, an
> additional tracker would be added to the tracker list, right?
>
Correct.
>
> Further assumption (as I have not used trackers before) is that this
> would reduce noise a
Hello,
In an effort to make the devel and maintainer lists less noisy,
I'm going to move all the Jenkins-related alerts to a new list. This does
not apply to the alerts sent out for new releases. This is part of a
longer-term plan to monitor build failures in Centos CI and the nightly
reg
Hello folks,
We now have review.gluster.org as an external tracker on Bugzilla. Our
current automation when there is a Bugzilla bug attached to a patch is as
follows:
1. When a new patchset has "Fixes: bz#1234" or "Updates: bz#1234", we will
post a comment to the bug with a link to the patch and chan
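As an illustration of the matching involved (a rough Python sketch I'm adding here, not the actual automation code; the names are made up):

import re

# Hypothetical sketch: find "Fixes: bz#1234" / "Updates: bz#1234" footers in a
# commit message. The real Gerrit/Bugzilla hook lives elsewhere.
BZ_FOOTER = re.compile(r"^(Fixes|Updates):\s*bz#(\d+)\s*$", re.MULTILINE)

def find_bug_references(commit_message):
    # Returns (action, bug_id) pairs, e.g. [('Fixes', '1234')].
    return BZ_FOOTER.findall(commit_message)

print(find_bug_references("glusterd: fix a crash\n\nFixes: bz#1234"))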
Hello folks,
Just an FYI, since so many people have run into this failure this week. We
now switch over to python3 if it's installed on your machine. If you're on
F27 or above, you will most likely have python3 installed. However, you
will not have python3-devel installed. After you install python3
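If it helps while debugging, here's a small Python check (purely illustrative, not part of our build scripts) for whether the python3 development headers are present:

import os
import sysconfig

# Assumption: the failures people hit come from the missing Python.h that
# python3-devel provides; this only reports whether that header exists.
include_dir = sysconfig.get_paths()["include"]
print("Python.h present:", os.path.exists(os.path.join(include_dir, "Python.h")))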
Oops, big note: Centos Regression jobs may have ended up canceled. Please
retry them.
On Fri, Aug 24, 2018 at 9:31 PM Nigel Babu wrote:
> Hello,
>
> We've had to do an unplanned Jenkins restart. Jenkins was overloaded and
> not responding to any requests. There was a backlog of
Hello,
We've had to do an unplanned Jenkins restart. Jenkins was overloaded and
not responding to any requests. There was a backlog of over 100 jobs as
well. The restart seems to have fixed things up.
More details in bug: https://bugzilla.redhat.com/show_bug.cgi?id=1622173
--
nigelb
On Fri, Aug 10, 2018 at 5:59 PM Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:
> On Fri, Aug 10, 2018 at 5:47 PM, Nigel Babu wrote:
> > Hello folks,
> >
> > We're currently in a transition to python3. Right now, there's a bug in
> one
Hello folks,
We're going to do an urgent reboot of the Gerrit server in the next 1h or
so. For some reason, hot-adding RAM to this machine isn't working, so we're
going to reboot to bring the additional RAM online. This is needed to prevent the OOM
Kill problems we've been running into since last night.
--
Heads up: Centos CI will be undergoing maintenance tomorrow.
-- Forwarded message -
From: Brian Stinson
Date: Tue, Aug 21, 2018 at 1:58 AM
Subject: [Ci-users] Maintenance Window 22-Aug-2018 12:00PM UTC
To:
Hi All,
Due to some pending OS updates we will be rebooting machines i
On Tue, Aug 14, 2018 at 5:52 PM Humble Chirammal
wrote:
>
>
> On Tue, Aug 14, 2018 at 2:09 PM, Nigel Babu wrote:
>
>> Hello folks,
>>
>> Do we know who's the admin of the Gluster organization on Docker hub? I'd
>> like to be added to the org so
Hello folks,
Do we know who's the admin of the Gluster organization on Docker hub? I'd
like to be added to the org so I can set up nightly builds for all the
GCS-related containers.
--
nigelb
Oops, I apparently forgot to send out a note. Master has been open since ~7 am
IST.
On Mon, Aug 13, 2018 at 4:25 PM Atin Mukherjee wrote:
> Nigel,
>
> Now that master branch is reopened, can you please revoke the commit access
> restrictions?
>
> On Mon, 6 Aug 2018 at 09:12,
Hello folks,
A while ago, Deepshikha did the work to make it faster to loan a machine for
running your regressions. I've tested it a few times today
to confirm it works as expected. In the past, Softserve[1] machines would
be a clean Centos 7 image. Now, we have an image with all the dependen
Hello folks,
Thanks to Niels, we now have ASAN builds compiling and a flag for getting
it to work locally. The patch[1] is not merged yet, but I can trigger runs
off the patch for now. The first run is off[2]
[1]: https://review.gluster.org/c/glusterfs/+/20589/2
[2]: https://build.gluster.org/job
Hello folks,
We're currently in a transition to python3. Right now, there's a bug in one
piece of this transition code. I saw Nithya run into this yesterday. The
challenge here is that none of our testing for the python2/python3 transition
catches this bug. Both Pylint and the ast-based testing that Kaleb
Hello folks,
This is a quick retrospective we (the Infra team) did for the Gerrit
upgrade from 2 days ago.
## Went Well
* We had a full backup to fall back to. We had to fall back on it.
* We had a good 4h window so we had time to make mistakes and recover from
them.
* We had a good number of
Hello folks,
Based on Yaniv's feedback, I've removed the deadcode.DeadStores checker. We are
left with 161 failures. I'm going to set 140 as the target for now.
The job will continue to be yellow and we need to fix at least 21 failures
by 31 Aug. That's about 7 issues per week to fix.
If anyon
Infra issue. Please file a bug.
On Thu, Aug 9, 2018 at 3:57 PM Pranith Kumar Karampuri
wrote:
> https://build.gluster.org/job/devrpm-el7/10441/console
>
> *10:12:42* Wrote:
> /home/jenkins/root/workspace/devrpm-el7/extras/LinuxRPM/rpmbuild/SRPMS/glusterfs-4.2dev-0.240.git4657137.el7.src.rpm*10:
Hello folks,
We have two post-upgrade issues:
1. Jenkins jobs are failing because git clones fail. This is now fixed.
2. git.gluster.org shows no repos at the moment. I'm currently debugging
this.
--
nigelb
On Wed, Aug 8, 2018 at 4:59 PM Yaniv Kaul wrote:
>
> Nice, thanks!
> I'm trying out the new UI. Needs getting used to, I guess.
> Have we upgraded to NotesDB?
>
Yep! Account information is now completely in NoteDB and not in
ReviewDB (which is backed by PostgreSQL for us) anymore.
On Wed, Aug 8, 2018 at 2:00 PM Ravishankar N wrote:
>
> On 08/08/2018 05:07 AM, Shyam Ranganathan wrote:
> > 5) Current test failures
> > We still have the following tests failing and some without any RCA or
> > attention, (If something is incorrect, write back).
> >
> > ./tests/basic/afr/add-bri
Reminder, this upgrade is tomorrow.
-- Forwarded message -
From: Nigel Babu
Date: Fri, Jul 27, 2018 at 5:28 PM
Subject: Gerrit downtime on Aug 8, 2018
To: gluster-devel
Cc: gluster-infra , <
automated-test...@gluster.org>
Hello,
It's been a while since we upgraded
Hello folks,
We've done a new Coverity run that was entirely automated. Current split of
Coverity issues:
High: 132
Medium: 241
Low: 83
Total: 456
We will be pushing a nightly build into scan.coverity.com via Jenkins. So,
you should be able to see updates to these numbers as you merge in fixes.
-
Hello folks,
Master branch is now closed. Only a few people have commit access now and
it's to be exclusively used to merge fixes to make master stable again.
--
nigelb
On Thu, Aug 2, 2018 at 5:12 PM Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:
> Don't know, something to do with perf xlators I suppose. It's not
> reproduced on my local system with brick-mux enabled as well. But it's
> happening on Xavi's system.
>
> Xavi,
> Could you try with the p
> That is fine with me. It is prepared for GlusterFS 5, so nothing needs
> to be done for that. Only for 4.1 and 3.12 FreeBSD needs to be disabled
> from the smoke job(s).
>
> I could not find the repo that contains the smoke job, otherwise I would
> have tried to send a PR.
>
> Niels
>
For future
Hi Shyam,
Amar and I sat down to debug this failure[1] this morning. There was a bit
of fun looking at the logs. It looked like the test restarted itself. The
first log entry is at 16:20:03. This test has a timeout of 400 seconds
which puts the cutoff around 16:26:43.
However, if you account for the fact that
>
> The outcome is to get existing maintained release branches building and
> working on FreeBSD, would that be correct?
>
> If so I think we can use the cherry-picked version, the changes seem
> mostly straight forward, and it is possibly easier to maintain.
>
> Although, I have to ask, what is th
ay. My fix will get
overwritten by Ansible tonight :)
On Fri, Jul 27, 2018 at 5:28 PM Nigel Babu wrote:
> Hello,
>
> It's been a while since we upgraded Gerrit. We plan to do a full upgrade
> and move to 2.15.3. Among other changes, this brings in the new PolyGerrit
> interfac
e:
>
>> The staging URL seems to be missing from the note
>>
>> On Fri, Jul 27, 2018 at 5:28 PM, Nigel Babu wrote:
>> > Hello,
>> >
>> > It's been a while since we upgraded Gerrit. We plan to do a full
>> upgrade and
>> > move to 2.1
Hello,
It's been a while since we upgraded Gerrit. We plan to do a full upgrade
and move to 2.15.3. Among other changes, this brings in the new PolyGerrit
interface, which introduces significant frontend changes. You can take a look at
how this would look on the staging site[1].
## Outage Window
0330
Replies inline
On Thu, Jul 26, 2018 at 1:48 AM Shyam Ranganathan
wrote:
> On 07/24/2018 03:28 PM, Shyam Ranganathan wrote:
> > On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
> >> 1) master branch health checks (weekly, till branching)
> >> - Expect every Monday a status update on various tes
On Wed, Jul 25, 2018 at 6:51 PM Niels de Vos wrote:
> We had someone working on starting/stopping Jenkins slaves in Rackspace
> on-demand. He since has left Red Hat and I do not think the infra team
> had a great interest in this either (with the move out of Rackspace).
>
> It can be deleted from
> So while cleaning thing up, I wonder if we can remove this one:
> https://github.com/gluster/jenkins-ssh-slaves-plugin
>
> We have just a fork, lagging from upstream and I am sure we do not use
> it.
>
Safe to delete. We're not using it for sure.
>
> The same goes for:
> https://github.com/glu
I think our team structure on Github has become unruly. I prefer that we
use teams only when we can demonstrate that there is a strong need. At the
moment, the gluster-maintainers and the glusterd2 projects have teams that
have a strong need. If any other repo has a strong need for teams, please
sp
Hello folks,
I had to take down Jenkins for some time today. The server ran out of space
and was silently ignoring Gerrit requests for new jobs. If you think one of
your jobs needed a smoke or regression run and it wasn't triggered, this is
the root cause. Please retrigger your jobs.
## Summary o
Hello folks,
Our infra also runs in the same network, so if you notice issues, they're
most likely related to the same network issues.
-- Forwarded message -
From: Fabian Arrotin
Date: Fri, Jul 20, 2018 at 12:49 PM
Subject: [Ci-users] [FYI] GitHub connectivity issue
To: ci-us...@
Hello folks,
Deepshikha is working on getting the distributed-regression testing into
production. This is a good time to discuss how we log our regression runs. We
tend to go with the approach of "get as many logs as possible" and then we
try to make sense of them when something fails.
In a setup whe
Hello folks,
A while ago we talked about using clang-format for our codebase[1]. We
started doing several pieces of this work asynchronously. Here's an update
on the current state of affairs:
* Team agrees on a style and a config file representing the style.
This has been happening asynchronously
On Mon, Jun 25, 2018 at 7:28 PM Amar Tumballi wrote:
>
>
> There are currently a few known issues:
>> * Not collecting the entire logs (/var/log/glusterfs) from servers.
>>
>
> If I look at the activities involved with regression failures, this can
> wait.
>
Well, we can't debug the current fail
Hello,
We ran into a problem where packages for F28 and above will not build in
CentOS 7 chroots. We caught this when F28 was rawhide but deemed it not yet
important enough to fix; however, recent developments have forced us to
make the switch. Our Fedora builds will also switch to using F28.
We hav
Hello,
We're nearly at the 4.1 release. I think now is the time to decide when to
flip the switch and default to the GD2 server for all regressions, or to add
a nightly GD2 run alongside the current regression.
Can someone help with what tasks need to be done for this to be
accomplished and how the CI team can help?
Hello folks,
I'd like to propose that we clean up artifacts.ci.centos.org/gluster.
Here's my proposal:
1. The nightly folder will only have RPMs from pre-release versions. That is,
I'll be deleting everything that's not 4.1 or 4.2.
2. Releases that are no longer actively supported will be deleted.
T
Hello,
This is a reminder that we have an outage today during the community cage
outage window. The switches and routers will be getting updated and
rebooted. This will cause an outage for a short period of time.
--
nigelb
Merging this in and deploying on builders based on Kotresh's +1 to unblock
builds and merges.
On Mon, May 14, 2018 at 9:49 AM, Nigel Babu wrote:
> This is because of a new warning by liblvm2app. I have a hacky fix to the
> compilation process to get rid of the warning. Please revi
This is because of a new warning from liblvm2app. I have a hacky fix to the
compilation process to get rid of the warning. Please review:
https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/130
However, this will soon become more than just a warning. We should either
fix this or complet
I hope I've made the changes that Jeff recommended in the first comment
correctly[1]. Xavi, I've not pulled in any of your suggestions yet, because
I figured you'd want to see the output and send suggestions.
Please send pull requests to the .clang-format file (and only that file)
for anything I
I've reverted the original patch entirely. Our policy is to either mark the
test as bad or revert the entire patch. The patch seems to have caused multiple
failures in the test system, so I've reverted it entirely. Please
re-land the patch with any fixes as a fresh review.
On Wed, Apr 18, 2018 at
Hello folks,
In the past, if you had a patch that was fixing a brick multiplex failure,
you couldn't easily test whether it actually fixed the failure.
You had two options:
* Create a new review where you turn on brick multiplex via the code and
also apply your patch. Mark a -1 for th
Hello folks,
I've just restarted Jenkins for a security update to a plugin. There was
one running centos-regression job that I had to cancel.
--
nigelb
Hello,
We have a job that tries to turn on stripe cache and run EC tests. It looks
like we recently made the decision to turn on stripe cache by default. Is
this job needed anymore? It fails at the moment due to a merge conflict.
--
nigelb
Hello folks,
There's a Jenkins security fix scheduled to be released today. This will
most likely happen in the morning EDT. The Jenkins team has not specified a
time. When we're ready for an upgrade, I'll cancel all running jobs and
re-trigger them at the end of the upgrade. The downtime should be
to configure the machine and all
> other stuff from the beginning.
>
> Thanks,
> Sanju
>
> On Tue, Mar 13, 2018 at 12:37 PM, Nigel Babu wrote:
>
>>
>> We’ve enabled certain limits for this application:
As is the practice for any infra problems, please file a bug:
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=project-infrastructure
On Mon, Mar 19, 2018 at 5:58 PM, Raghavendra Gowdappa
wrote:
> Hi Nigel,
>
> I am not able to download the archive of core from:
> http://bu
Hello folks,
Our docs need a significant facelift. Nithya has suggested that we split
the current docs into a branch called version-3 (or some such, please
let's not bikeshed about the name) and have the master branch track the 4.x
series. We will significantly change the documentation for master
Hello,
If there's a repo that's synced from Gerrit to Github, gluster-ant is now
admin on those repos. This is so that when issues are closed via commit
message, they are closed by the right user (the bot) rather than by the Infra
person who set that repo up.
As always, please file a bug if you notice
When the test works it takes less than 60 seconds. If it needs more than
200 seconds, that means there's an actual issue.
On Wed, Mar 14, 2018 at 10:16 AM, Raghavendra Gowdappa
wrote:
> All,
>
> I was trying to debug a regression failure [1]. When I ran test locally on
> my laptop, I see some wa
> We’ve enabled certain limits for this application:
>>
>> 1. Maximum allowance of 5 VMs at a time across all users. Users have to
>>    wait until a slot is available once 5 machines are allocated.
>> 2. Users will get the requested machines for a maximum of 4 hours.
>
Hello,
It's that time again. We need to move to a newer Gerrit release. Staging has now
been upgraded to the latest version. Please help test it and give us
feedback on any issues you notice: https://gerrit-stage.rht.gluster.org/
--
nigelb
This is now fixed. Shyam found the root cause. After a mock upgrade, mock
would wait for user confirmation when DNF wasn't installed on the system.
Given this was a CentOS machine, DNF wasn't readily available. I set the
config option dnf_warning=False and that fixed the failures. All previously
fai
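For reference, mock configuration files are Python snippets, so the fix boils down to a line like the one below (the exact file, e.g. /etc/mock/site-defaults.cfg, is an assumption on my part):

# Disable mock's interactive DNF warning so builds don't hang waiting for
# confirmation on yum-only (CentOS) builders.
config_opts['dnf_warning'] = False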
Aha. Thanks Mohit. That was an infra issue. Sorry about that. The first line in
/etc/hosts said
::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
Once I removed it, the tests started running faster. I'll update my patch
to remove this particular test from the timeout fix
On W
The immediate cause of this failure is that we merged the timeout patch
which gives each test 200 seconds to finish. This test and another one
take over 200 seconds on regression nodes.
I have a patch up to change the timeout:
https://review.gluster.org/#/c/19605/1
However, tests/bugs/rpc/bug-921
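Conceptually, the timeout behaves like this rough Python sketch (illustrative only; the real implementation is in run-tests.sh, and the prove invocation here is an assumption):

import subprocess

TIMEOUT = 200  # seconds, the per-test budget mentioned above

def run_test(test_path, timeout=TIMEOUT):
    # Abort the test if it runs past its budget and report it as a failure.
    try:
        result = subprocess.run(["prove", "-v", test_path], timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        print(f"{test_path} exceeded {timeout}s and was aborted")
        return False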
Hello folks,
We're all out of Centos 6 nodes from today. I've just deleted the last of
them. We now run exclusively on Centos 7 nodes.
We've not received any negative feedback about plans to move NetBSD, so
I've disabled and removed all the NetBSD jobs and nodes as well.
--
nigelb
On Mon, Feb 19, 2018 at 5:58 PM, Nithya Balachandran
wrote:
>
>
> On 19 February 2018 at 13:12, Atin Mukherjee wrote:
>
>>
>>
>> On Mon, Feb 19, 2018 at 8:53 AM, Nigel Babu wrote:
>>
>>> Hello,
>>>
>>> As you all most likely know,
Hello,
As you all most likely know, we store a tarball of the binaries and the core
if there's a core dump during a regression run. Occasionally, we've introduced a bug
in Gluster and this tar can take up a lot of space. This has happened
recently with brick multiplex tests. The build-install tar takes up 25G,
So we have a job that's unmaintained and unwatched. If nobody steps up to
own it in the next 2 weeks, I'll be deleting this job.
On Wed, Feb 14, 2018 at 4:49 PM, Niels de Vos wrote:
> On Wed, Feb 14, 2018 at 11:15:23AM +0530, Nigel Babu wrote:
> > Hello,
> >
> &g
This upgrade is complete and we're now running the latest version of
Jenkins.
On Thu, Feb 15, 2018 at 9:53 AM, Nigel Babu wrote:
> Hello,
>
> I've just placed Jenkins in shutdown mode. No new jobs will be started for
> about an hour from now. I intend to upgrade
Hello,
I've just placed Jenkins in shutdown mode. No new jobs will be started for
about an hour from now. I intend to upgrade Jenkins to pull in the latest
security fixes.
--
nigelb
Hello,
Centos CI has a run-tests-in-vagrant job. Do we still need this?
It still runs master and 3.8. I don't see this job adding much
value at this point given we only look at results that are on
build.gluster.org. I'd like to use the extra capacity for other tests that
will run on
Hello folks,
I'm trying to make the glusterfs-patch-acceptance-tests repo lighter by
really only having code that's needed to run regressions and build for
gluster. The Centos CI jobs, therefore, need to move to their own repo. As a
first step, I've created a new repo[1] for centos ci jobs.
The job
er down in some specific test cases and
the SSD disks. I'm going to add one more Centos 7 machine to the pool today.
On Thu, Feb 1, 2018 at 9:26 AM, Nigel Babu wrote:
> Hello folks,
>
> Today, I'm putting the first Centos 7 node in our regression pool.
>
> slave28.c
Hello folks,
Today, I'm putting the first Centos 7 node in our regression pool.
slave28.cloud.gluster.org -> Shut down and removed
builder100.cloud.gluster.org -> New Centos7 node (we'll be starting from
100 upwards)
If this run goes well, we'll be replacing the nodes one by one with Centos
7. If
Hello folks,
We're going to be resizing supercolony.gluster.org on our cloud
provider. This will definitely lead to a small outage for 5 mins. In the
event that something goes wrong in this process, we're taking a 2-hour
window for this outage.
Date: Feb 21
Server: supercolony.gluster.org
Tim
More details: https://build.gluster.org/job/rpm-rawhide/1182/
On Wed, Jan 24, 2018 at 2:03 PM, Niels de Vos wrote:
> On Wed, Jan 24, 2018 at 09:14:51AM +0530, Nigel Babu wrote:
> > Hello folks,
> >
> > Our rawhide rpm builds seem to be failing with what looks like a spe
Hello folks,
Our rawhide rpm builds seem to be failing with what looks like a specfile
issue. It's worth looking into this now before F28 is released in May.
--
nigelb
Update: All the nodes that had problems with geo-rep are now fixed. Waiting
on the patch to be merged before we switch over to Centos 7. If things go
well, we'll replace nodes one by one as soon as we have one green on Centos
7.
On Mon, Jan 22, 2018 at 12:21 PM, Nigel Babu wrote:
> Hel
Hello folks,
As you may have noticed, we've had a lot of centos6-regression failures
lately. The geo-replication failures are the new ones which particularly
concern me. These failures have nothing to do with the test. The tests are
exposing a problem in our infrastructure that we've carried aroun
Hello folks,
If you take a machine offline, please file a bug so that the machine can be
debugged and return to the pool.
--
nigelb
Hello folks,
We may have been a little too quick to blame Meltdown for the Jenkins
failures yesterday. In any case, we've opened a ticket with our provider and
they're looking into the failures. I've looked at the last 90 failures to
get a comprehensive number on the failures.
Total Jobs: 90
Failure
Hello folks,
Can the respective maintainers look at the following tests for any oddity?
They seem to be taking too much time.
./tests/bugs/nfs/bug-1053579.t - 2261
./tests/bugs/fuse/many-groups-for-acl.t - 1196
./tests/bugs/core/bug-1432542-mpx-restart-crash.t - 684
./tests/bugs/rpc/bug-884452.t
Hello folks,
We've been using Centos 6 for our regressions for a long time. I believe
it's time that we moved to Centos 7. Centos 6 is causing us minor issues. For
example, tests run fine on the regression boxes but don't work on local
machines or vice-versa. Moving up gives us the ability to use newer
v
Hello folks,
I'm going to be restarting Jenkins for an important security update. Any
running jobs will be canceled and retriggered.
--
nigelb
Hello folks,
We talked about this last week at the maintainers' meeting. We're going to
restrict +2 votes to people who can also submit the patch. This makes sure
that patches have actual maintainers giving +2. Everyone else will be able
to give a +1.
If this affects your project/component's deve
Pranith,
Our logging has changed slightly. Please read my email titled "Changes in
handling logs from (centos) regressions and smoke" to gluster-devel and
gluster-infra.
On Tue, Nov 28, 2017 at 8:06 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:
> One of my patches(https://review.glus
Hello folks,
I have an update on chunking. There's good news and bad. The first bit is
that we have a chunked regression job now. It splits the tests into 10 chunks that
are run in parallel. This chunking is quite simple at the moment and
doesn't try to be very smart. The intelligence steps will come in o
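For the curious, "simple" chunking here means something like this rough Python sketch (illustrative only, not the job's actual code): tests are dealt round-robin into chunks, with no attempt to balance by historical runtime.

def chunk_tests(tests, n_chunks=10):
    # Deal the sorted test list round-robin into n_chunks buckets.
    chunks = [[] for _ in range(n_chunks)]
    for i, test in enumerate(sorted(tests)):
        chunks[i % n_chunks].append(test)
    return chunks

# Each chunk is then handed to a separate builder and run in parallel.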